We report here on the taxonomic and molecular diversity of 10 929 terrestrial arthropod specimens collected on four islands of the Society Archipelago, French Polynesia. The survey was part of the ‘SymbioCode Project’, which aims to establish the Society Islands as a natural laboratory in which to investigate the flux of bacterial symbionts (e.g., Wolbachia) and other genetic material among branches of the arthropod tree. The sample includes an estimated 1127 species, of which 1098 are represented by at least one DNA-barcoded specimen and 29 were identified to species level using morphological traits only. Species counts based on molecular data emphasize that some groups have been understudied in this region and deserve more focused taxonomic effort, notably Diptera, Lepidoptera and Hymenoptera. For taxa that were also subjected to morphological scrutiny, DNA- and morphology-based species boundaries matched in 90% of cases, with larger than expected genetic diversity in the remaining 10%. Many species in this sample are new to the region or are undescribed. Some are under description, but many await inspection by motivated experts, who can use the online images or request access to ethanol-stored specimens.
Non-standard errors
(2021)
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
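The distinction between the two error types can be sketched in a small simulation. This is an illustrative toy, not the paper's methodology: the "teams" here differ only in how aggressively they trim outliers before averaging, standing in for the many analytic choices a real EGP involves. All names and the trimming choice are hypothetical.

```python
import random
import statistics

random.seed(0)

# One shared sample drawn from a hypothetical population (the DGP).
population_mean = 5.0
sample = [random.gauss(population_mean, 2.0) for _ in range(400)]

# Standard error: sampling uncertainty of a single team's estimate
# of the population mean.
standard_error = statistics.stdev(sample) / len(sample) ** 0.5

# Each "team" makes a different but defensible analytic choice -- here,
# trimming a different fraction of extreme values (toy EGP variation).
def trimmed_mean(data, trim_frac):
    data = sorted(data)
    k = int(len(data) * trim_frac)
    kept = data[k:len(data) - k] if k else data
    return statistics.fmean(kept)

team_choices = [0.0, 0.01, 0.02, 0.05, 0.10]
team_estimates = [trimmed_mean(sample, f) for f in team_choices]

# Non-standard error: dispersion of estimates across teams with the
# sample held fixed -- uncertainty from the EGP, not the DGP.
non_standard_error = statistics.stdev(team_estimates)

print(f"standard error:     {standard_error:.3f}")
print(f"non-standard error: {non_standard_error:.3f}")
```

With the sample held fixed, any spread in `team_estimates` comes purely from the teams' choices, which is the quantity the abstract calls a non-standard error.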