This paper reports on Monte Carlo simulation results for future measurements of the moduli of the time-like proton electromagnetic form factors, |GE| and |GM|, using the p̄p → μ+μ− reaction at PANDA (FAIR). The electromagnetic form factors are fundamental quantities parameterizing the electric and magnetic structure of hadrons. This work estimates the statistical and total accuracy with which the form factors can be measured at PANDA, using an analysis of simulated data within the PandaRoot software framework. The most crucial background channel is p̄p → π+π−, due to the very similar behavior of muons and pions in the detector. The suppression factors are evaluated for this and all other relevant background channels at different values of the antiproton beam momentum. The signal/background separation is based on a multivariate analysis using the Boosted Decision Trees method. An expected background subtraction is included in this study, based on realistic angular distributions of the background contribution. Systematic uncertainties are considered and the relative total uncertainties of the form factor measurements are presented.
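The abstract states that muon/pion separation rests on a Boosted Decision Tree classifier trained on detector observables. Purely as an illustration of that working principle, and not of the actual PandaRoot analysis, the sketch below trains scikit-learn's gradient-boosted trees on a hypothetical per-event feature table and evaluates the signal efficiency and background suppression factor at an assumed score threshold; the file names, features, and threshold are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one row per simulated event, columns = detector
# observables; labels: 1 = muon (signal), 0 = pion (background).
X = np.load("event_features.npy")   # placeholder file name
y = np.load("event_labels.npy")     # placeholder file name

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Gradient-boosted decision trees as a stand-in for the BDT used in the analysis.
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)

# A score threshold defines the working point: higher thresholds suppress
# more pions at the cost of signal efficiency.
scores = bdt.decision_function(X_test)
threshold = 0.0  # assumed working point
signal_eff = (scores[y_test == 1] > threshold).mean()
bkg_pass_rate = max((scores[y_test == 0] > threshold).mean(), 1e-9)
print(f"signal efficiency: {signal_eff:.3f}")
print(f"background suppression factor: {1.0 / bkg_pass_rate:.1f}")
```

In the real study the suppression factors are quoted per background channel and per beam momentum; the single number printed here only illustrates how a cut on the classifier score translates into an efficiency/suppression pair.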
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and the representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world that measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models adjusting for common covariates. After these corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were highly consistent across memory tests. IRT methods for equating scores across AVLTs agreed with held-out data from dually administered tests, and these tools are made freely available online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
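The abstract describes empirical Bayes (ComBat-style) harmonization of site effects followed by linear models for common covariates. As a rough, simplified illustration under assumed column names (score, site, age, sex, education_years), the sketch below regresses out the covariates, rescales the residuals per site to a pooled location and scale, and adds the covariate-explained part back; full ComBat additionally shrinks the per-site estimates toward a pooled prior (the empirical Bayes step), which is omitted here, and the IRT equating stage is not shown.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format table: one row per participant.
df = pd.read_csv("verbal_learning_scores.csv")  # placeholder file and columns

# Step 1: regress out shared covariates (age, sex, education) so site
# effects are estimated on covariate-adjusted residuals.
X = np.column_stack([
    np.ones(len(df)),
    df["age"].to_numpy(dtype=float),
    (df["sex"] == "F").to_numpy(dtype=float),
    df["education_years"].to_numpy(dtype=float),
])
y = df["score"].to_numpy(dtype=float)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: per-site location/scale adjustment of the residuals
# (a simplified stand-in for ComBat without empirical Bayes shrinkage).
harmonized = resid.copy()
pooled_sd = resid.std(ddof=1)
for site, idx in df.groupby("site").indices.items():
    site_resid = resid[idx]
    harmonized[idx] = (site_resid - site_resid.mean()) / site_resid.std(ddof=1) * pooled_sd

# Step 3: restore the covariate-explained component so that age, sex, and
# education effects are preserved in the harmonized scores.
df["score_harmonized"] = harmonized + X @ beta
```

The reduction in inter-site variance reported in the abstract can be checked on such output by comparing the between-site variance of the raw and harmonized scores after covariate adjustment.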