Background: Misconceptions about ADHD stigmatize affected people, reduce the credibility of providers, and prevent or delay treatment. To challenge misconceptions, we curated findings with a strong evidence base. Methods: We reviewed studies with more than 2000 participants, or meta-analyses drawing on five or more studies or 2000 or more participants. We excluded meta-analyses that did not assess publication bias, except for meta-analyses of prevalence. For network meta-analyses we required comparison-adjusted funnel plots. We excluded treatment studies with waiting-list or treatment-as-usual controls. From this literature, we extracted evidence-based assertions about the disorder. Results: We generated 208 empirically supported statements about ADHD. The status of the included statements as empirically supported is approved by 80 authors from 27 countries and 6 continents. The contents of the manuscript are endorsed by 366 people who have read this document and agree with its contents. Conclusions: Many findings in ADHD are supported by meta-analysis. These support firm statements about the nature, course, outcome, causes, and treatments of the disorder that are useful for reducing misconceptions and stigma.
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and the representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world which measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models which adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across auditory verbal learning tests (AVLTs) agreed with held-out data of dually-administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
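The empirical Bayes harmonization step described above resembles ComBat-style location/scale adjustment: per-site shifts and scalings of covariate-adjusted residuals are estimated and shrunk toward pooled values, so site effects are removed while covariate effects are preserved. A minimal sketch of that idea follows; the function name and the fixed `shrink` weight are illustrative assumptions, not the authors' implementation (full ComBat derives the shrinkage weights empirically per site):

```python
import numpy as np

def harmonize_scores(y, site, X, shrink=0.2):
    """Remove additive/multiplicative site effects from scores `y` while
    preserving effects of the covariates in design matrix `X`.

    Simplified ComBat-style sketch: per-site mean and SD of the
    covariate-adjusted residuals are shrunk toward the pooled values;
    `shrink` stands in for the empirical-Bayes shrinkage weight.
    """
    Z = np.column_stack([np.ones(len(y)), X])     # intercept + covariates
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)  # pooled covariate fit
    resid = y - Z @ beta
    pooled_sd = resid.std()
    out = resid.copy()
    for s in np.unique(site):
        m = site == s
        gamma, delta = resid[m].mean(), resid[m].std()  # raw site effects
        gamma_star = (1 - shrink) * gamma               # shrink toward 0
        delta_star = (1 - shrink) * delta + shrink * pooled_sd
        out[m] = (resid[m] - gamma_star) / max(delta_star, 1e-8) * pooled_sd
    return out + Z @ beta                         # restore covariate effects
```

On simulated multi-site data this reduces between-site variance in mean scores while leaving the age effect intact, mirroring the abstract's reported pattern of reduced inter-site variance with preserved covariate effects.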