The production of the Λ(1520) baryonic resonance has been measured at midrapidity in inelastic pp collisions at √s = 7 TeV and in p–Pb collisions at √s_NN = 5.02 TeV for non-single diffractive events and in multiplicity classes. The resonance is reconstructed through its hadronic decay channel Λ(1520) → pK− and the charge conjugate with the ALICE detector. The integrated yields and mean transverse momenta are calculated from the measured transverse momentum distributions in pp and p–Pb collisions. The mean transverse momenta follow mass ordering, as previously observed for other hyperons in the same collision systems. A Blast-Wave function constrained by other light hadrons (π, K, K0S, p, Λ) describes the shape of the Λ(1520) transverse momentum distribution up to 3.5 GeV/c in p–Pb collisions. In the framework of this model, this observation suggests that the Λ(1520) resonance participates in the same collective radial flow as other light hadrons. The ratio of the yield of Λ(1520) to the yield of the ground-state particle Λ remains constant as a function of charged-particle multiplicity, suggesting that there is no net effect of the hadronic phase in p–Pb collisions on the Λ(1520) yield.
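As a rough illustration of how an integrated yield and a mean transverse momentum follow from a measured pT spectrum, the Python sketch below numerically integrates a binned dN/dpT distribution; the bin edges and contents are invented placeholders, not ALICE data, and no low-pT extrapolation is attempted.

```python
import numpy as np

# Hypothetical binned pT spectrum dN/(dpT dy) in (GeV/c)^-1 per event.
# Bin edges and contents are placeholders, not ALICE measurements.
pt_edges = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.5, 6.0])  # GeV/c
dn_dpt   = np.array([0.012, 0.018, 0.015, 0.010, 0.006, 0.0035, 0.0015, 0.0004])

widths  = np.diff(pt_edges)                      # bin widths, GeV/c
centers = 0.5 * (pt_edges[:-1] + pt_edges[1:])   # bin centres, GeV/c

# Integrated yield dN/dy over the measured range: sum of (dN/dpT) * delta-pT.
yield_dn_dy = np.sum(dn_dpt * widths)

# Mean pT: yield-weighted average of the bin centres.
mean_pt = np.sum(dn_dpt * widths * centers) / yield_dn_dy

print(f"dN/dy ~ {yield_dn_dy:.4f}, <pT> ~ {mean_pt:.2f} GeV/c")
```

In a real measurement the unmeasured low-pT region is extrapolated with a fitted shape (for example a Blast-Wave or Lévy–Tsallis function) before integrating; the sketch covers only the measured range.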
Non-standard errors
(2021)
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants.
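As a rough, hypothetical illustration of the distinction the abstract draws, the Python sketch below computes a conventional standard error for one estimate of a mean and then the dispersion of several equally defensible estimates of the same quantity from the same sample; the analysis choices and data are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared sample drawn from a data-generating process (DGP).
sample = rng.normal(loc=1.0, scale=2.0, size=500)

# Standard error: sampling uncertainty of a single team's estimate of the mean.
standard_error = sample.std(ddof=1) / np.sqrt(len(sample))

# Non-standard error (illustrative): dispersion of point estimates across
# teams analysing the *same* sample with different, defensible choices in the
# evidence-generating process (EGP) -- here, toy outlier-handling rules.
team_estimates = [
    sample.mean(),                                                     # use everything
    np.mean(np.clip(sample, *np.quantile(sample, [0.01, 0.99]))),      # winsorize at 1%
    np.mean(sample[np.abs(sample - sample.mean()) < 3 * sample.std()]),# drop >3 sigma
    np.median(sample),                                                 # robust location
]
non_standard_error = np.std(team_estimates, ddof=1)

print(f"standard error     ~ {standard_error:.3f}")
print(f"non-standard error ~ {non_standard_error:.3f}")
```

The point of the comparison is that the second number reflects researcher variation on a fixed sample, not sampling noise, which is why the paper treats it as a separate source of uncertainty.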