We report first results on elliptic flow of identified particles at midrapidity in Au+Au collisions at √s_NN = 130 GeV using the STAR TPC at RHIC. The elliptic flow as a function of transverse momentum and centrality differs significantly for particles of different masses. This dependence can be accounted for in hydrodynamic models, indicating that the created system behaves consistently with collective hydrodynamic flow. A fit to the data with a simple model gives information on the temperature and flow velocities at freeze-out.
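For readers outside heavy-ion physics: "elliptic flow" here refers, in the standard convention (not spelled out in the abstract itself), to the second Fourier coefficient of the particle azimuthal distribution with respect to the reaction plane:

```latex
v_2 = \left\langle \cos 2\left(\phi - \Psi_{RP}\right) \right\rangle
```

where φ is the azimuthal angle of an emitted particle and Ψ_RP is the reaction-plane angle; the mass dependence reported above is the variation of v_2(p_T) across particle species.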
Background: The German Consortium for Hereditary Breast and Ovarian Cancer (GC-HBOC) has established a multigene panel (TruRisk®) for the analysis of risk genes for familial breast and ovarian cancer. Summary: An interdisciplinary team of experts from the GC-HBOC has evaluated the available data on risk modification in the presence of pathogenic mutations in these genes based on a structured literature search and through a formal consensus process. Key Messages: The goal of this work is to better assess individual disease risk and, on this basis, to derive clinical recommendations for patient counseling and care at the centers of the GC-HBOC from the initial consultation prior to genetic testing to the use of individual risk-adapted preventive/therapeutic measures.
Study Design: Cross-sectional survey
Objective: To determine the influence of surgeons' level of experience and subspecialty training on the reliability, reproducibility, and accuracy of sacral fracture classification using the AO Spine Sacral Injury Classification System.
Summary of Background Data: An ideal classification system is easily comprehensible and reliable across a diverse group of surgeons. A surgeon's level of experience may have a significant effect on the reliability and accuracy of a classification system. Moreover, surgeons of different subspecialties may have varying levels of comfort with the imaging assessment of sacral injuries required for accurate diagnosis and classification.
Methods: High-resolution computed tomography (CT) images from 26 cases were assessed by 172 investigators from a diverse array of surgical subspecialties (general orthopaedics, neurosurgery, orthopaedic spine, orthopaedic trauma) and levels of experience (<5, 5-10, 11-20, >20 years). Validation assessments were performed via web conference using high-resolution images as well as axial/sagittal/coronal CT scan sequences. Each investigator independently performed two assessments three weeks apart, in randomized order. Reliability and reproducibility were calculated with Cohen's kappa coefficient (k), and agreement with the gold-standard classification was determined for each fracture morphology and subtype and stratified by experience and subspecialty.
Results: Respondents achieved an overall k = 0.87 for morphology and k = 0.77 for subtype classification, representing excellent and substantial intraobserver reproducibility, respectively. Respondents from all four practice-experience groups demonstrated excellent interobserver reliability when classifying overall morphology (k=0.842/0.850, Assessment 1/Assessment 2) and substantial interobserver reliability for overall subtype (k=0.719/0.751) in both assessments. General orthopaedists, neurosurgeons, and orthopaedic spine surgeons exhibited excellent interobserver reliability in overall morphology classification and substantial interobserver reliability in overall subtype classification. Surgeons in each experience category and subspecialty correctly classified fracture morphology in over 90% of cases and fracture subtype in over 80% of cases according to the gold standard. Correct overall classification of fracture morphology (Assessment 1: p=0.024; Assessment 2: p=0.006) and subtype (p<0.001) differed significantly by experience, with surgeons with >20 years of experience having more difficulty correctly classifying fracture subtypes than the other experience groups. Correct overall classification did not differ significantly by subspecialty.
Conclusions: Overall, the AO Spine Sacral Injury Classification System appears to be universally applicable among surgeons of various subspecialties and levels of experience with acceptable reliability, reproducibility, and accuracy.
Disclosures: author 1: none; author 2: consultant=Medtronic, Nuvasive, ISD, Asutra, Stryker, Bioventus, Zimmer, teledocs, Clinical Spine Surgery, AOSpine; author 3: none; author 4: grants/research support=AOSpine, consultant=DPS, icotec; author 5: none; author 6: none; author 7: grants/research support=DPS; author 8: none; author 9: grants/research support=NIH, RTI, CSRS, royalties=Inion; author 10: stock/shareholder=Advanced Spinal Intellectual Properties; Atlas Spine; Avaz Surgical; Bonovo Orthopaedics; Computational Biodynamics; Cytonics; Deep Health; Dimension Orthotics LLC; Electrocore; Flagship Surgical; FlowPharma; Globus; Innovative Surgical Design; Insight Therapeutics; Jushi; Nuvasive; Orthobullets; Paradigm Spine; Parvizi Surgical Innovation; Progressive Spinal Technologies; Replication Medica; Spine Medica; Spineology; Stout Medical; Vertiflex; ViewFi Health, royalties=Aesculap; Atlas Spine; Globus; Medtronics; SpineWave; Stryker Spine, other financial report=AO Spine
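The reliability statistic used throughout the study above is Cohen's kappa, which corrects raw agreement between two ratings for the agreement expected by chance. A minimal sketch (the labels below are invented, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two sets of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rating's label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical morphology labels from the same investigator's two assessments:
first  = ["A", "A", "B", "B"]
second = ["A", "B", "B", "B"]
print(cohens_kappa(first, second))  # 0.5: p_o = 0.75, p_e = 0.5
```

Values near 1 indicate near-perfect agreement; the study's benchmarks (k > 0.8 "excellent", 0.6-0.8 "substantial") follow the usual interpretation scale.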
Non-standard errors
(2021)
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants.
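The distinction the abstract draws can be illustrated with a small simulation (entirely invented for illustration, not from the paper): all "teams" receive the same sample, but each applies a slightly different analysis choice, here a hypothetical outlier-trimming rule, so the dispersion of their estimates is an EGP effect, separate from the sampling-based standard error.

```python
import random
import statistics

random.seed(0)

# One shared sample (the DGP output every team receives), skewed so that
# analysis choices about outliers actually matter.
sample = [random.expovariate(1.0) for _ in range(500)]

# Standard error of the mean from this single sample.
se = statistics.stdev(sample) / len(sample) ** 0.5

def team_estimate(data, trim_frac):
    """Hypothetical team-specific EGP: trim a fraction of extreme values, then average."""
    k = int(len(data) * trim_frac)
    trimmed = sorted(data)[k : len(data) - k] if k else data
    return statistics.mean(trimmed)

# Fifteen hypothetical teams, each trimming a different fraction (0% to 28%).
trims = [i / 100 for i in range(0, 30, 2)]
estimates = [team_estimate(sample, t) for t in trims]

# "Non-standard error": dispersion of estimates across teams on identical data.
nse = statistics.stdev(estimates)
print(f"standard error: {se:.4f}, non-standard error: {nse:.4f}")
```

In the paper's large-scale exercise the analogue of `nse` came from 164 real research teams and was comparable in magnitude to the standard error; this sketch only shows that the two quantities measure different sources of uncertainty.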