Knowledge about the biogeographic affinities of the world’s tropical forests helps to better understand regional differences in forest structure, diversity, composition, and dynamics. Such understanding will enable anticipation of region-specific responses to global environmental change. Modern phylogenies, in combination with broad coverage of species inventory data, now allow for global biogeographic analyses that take species evolutionary distance into account. Here we present a classification of the world’s tropical forests based on their phylogenetic similarity. We identify five principal floristic regions and their floristic relationships: (i) Indo-Pacific, (ii) Subtropical, (iii) African, (iv) American, and (v) Dry forests. Our results do not support the traditional neo- versus paleotropical forest division but instead separate the combined American and African forests from their Indo-Pacific counterparts. We also find indications for the existence of a global dry forest region, with representatives in America, Africa, Madagascar, and India. Additionally, a northern-hemisphere Subtropical forest region was identified with representatives in Asia and America, providing support for a link between Asian and American northern-hemisphere forests.
Non-standard errors
(2021)
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
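The distinction between the two error types can be sketched numerically: a standard error reflects within-sample uncertainty reported by each team, while a non-standard error is the dispersion of point estimates across teams analyzing the same data. The values below are invented for illustration and are not from the study:

```python
import statistics

# Hypothetical point estimates for one hypothesis, each from a different
# team analyzing the SAME sample (made-up numbers, not the study's data).
team_estimates = [0.12, 0.35, -0.05, 0.20, 0.28, 0.10]

# Each team's own reported standard error (also made-up).
team_std_errors = [0.15, 0.18, 0.14, 0.16, 0.17, 0.15]

# Standard error: within-team sampling uncertainty (summarized here
# as the average across teams).
avg_standard_error = statistics.mean(team_std_errors)

# Non-standard error: dispersion of estimates ACROSS teams, i.e. the
# extra uncertainty contributed by the evidence-generating process.
non_standard_error = statistics.stdev(team_estimates)

print(f"average standard error: {avg_standard_error:.3f}")
print(f"non-standard error:     {non_standard_error:.3f}")
```

With these illustrative numbers the two quantities come out comparable in magnitude, which mirrors the paper's finding that non-standard errors are on par with standard errors.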