University Publications
Objectives: To compare efficacy and safety of ixekizumab (IXE) to adalimumab (ADA) in biological disease-modifying antirheumatic drug-naïve patients with both active psoriatic arthritis (PsA) and skin disease and inadequate response to conventional synthetic disease-modifying antirheumatic drugs (csDMARDs).
Methods: Patients with active PsA were randomised (1:1) to approved dosing of IXE or ADA in an open-label, head-to-head, blinded assessor clinical trial. The primary objective was to evaluate whether IXE was superior to ADA at week 24 for simultaneous achievement of a ≥50% improvement from baseline in the American College of Rheumatology criteria (ACR50) and a 100% improvement from baseline in the Psoriasis Area and Severity Index (PASI100). Major secondary objectives, also at week 24, were to evaluate whether IXE was: (1) non-inferior to ADA for achievement of ACR50 and (2) superior to ADA for PASI100 response. Additional PsA, skin, treat-to-target and quality-of-life outcome measures were assessed at week 24.
Results: The primary efficacy endpoint was met (IXE: 36%, ADA: 28%; p=0.036). IXE was non-inferior for ACR50 response (IXE: 51%, ADA: 47%; treatment difference: 3.9%) and superior for PASI100 response (IXE: 60%, ADA: 47%; p=0.001). IXE had greater response versus ADA in additional PsA, skin, nail, treat-to-target and quality-of-life outcomes. Serious adverse events were reported in 8.5% (ADA) and 3.5% (IXE) of patients.
Conclusions: IXE was superior to ADA in achievement of simultaneous improvement of joint and skin disease (ACR50 and PASI100) in patients with PsA and inadequate response to csDMARDs. Safety and tolerability for both biologicals were aligned with established safety profiles.
A handling study to assess use of the Respimat® Soft Mist™ inhaler in children under 5 years old
(2015)
Background: Respimat® Soft Mist™ Inhaler (SMI) is a hand-held device that generates an aerosol with a high, fine-particle fraction, enabling efficient lung deposition. The study objective was to assess inhalation success among children using Respimat SMI, and the requirement for assistance by the parent/caregiver and/or a valved holding chamber (VHC).
Methods: This open-label study enrolled patients aged <5 years with respiratory disease and history of coughing and/or recurrent wheezing. Patients inhaled from the Respimat SMI (air only; no aerosol) using a stepwise configuration: “1” (dose released by child); “2” (dose released by parent/caregiver), and “3” (Respimat SMI with VHC, facemask, and parent/caregiver help). Co-primary endpoints included the ability to perform successful inhalation as assessed by the investigators using a standardized handling questionnaire and evaluation of the reasons for success. Inhalation profile in the successful handling configuration was verified with a pneumotachograph. Patient satisfaction and preferences were investigated in a questionnaire.
Results: Of the children aged 4 to <5 years (n=27) and 3 to <4 years (n=30), 55.6% and 30.0%, respectively, achieved success without a VHC or help; with assistance, another 29.6% and 10.0%, respectively, achieved success, and the remaining children were successful with VHC. All children aged 2 to <3 years (n=20) achieved success with the Respimat SMI and VHC. Of those aged <2 years (n=22), 95.5% had successful handling of the Respimat SMI with VHC and parent/caregiver help. Inhalation flow profiles generally confirmed the outcome of the handling assessment by the investigators. Most parent/caregiver and/or child respondents were satisfied with operation, instructions for use, handling, and ease of holding the Respimat SMI with or without a VHC.
Conclusions: The Respimat SMI is suitable for children aged <5 years; however, in this age group its use should be complemented with a VHC.
The haloarchaeon Haloferax volcanii contains nearly 2800 small non-coding RNAs (sRNAs). One intergenic sRNA, sRNA132, was chosen for a detailed characterization. A deletion mutant had a growth defect and thus underscored the importance of sRNA132. A microarray analysis identified the transcript of an operon for a phosphate-specific ABC transporter as a putative target of sRNA132. Both sRNA132 and the operon transcript accumulated under low phosphate concentrations, indicating a positive regulatory role of sRNA132. A kinetic analysis revealed that sRNA132 is essential shortly after the onset of phosphate starvation, while other regulatory processes take over after several hours. Comparison of the transcriptomes of wild-type and the sRNA132 gene deletion mutant 30 min after the onset of phosphate starvation revealed that sRNA132 controls a regulon of about 40 genes. Remarkably, the regulon included a second operon for a phosphate-specific ABC transporter, which also depended on sRNA132 for rapid induction in the absence of phosphate. Competitive growth experiments of the wild-type and ABC transporter operon deletion mutants underscored the importance of both transporters for growth at low phosphate concentrations. Northern blot analyses of four additional members of the sRNA132 regulon verified that all four transcripts depended on sRNA132 for rapid regulation after the onset of phosphate starvation. Importantly, this is the first example of the transient importance of an sRNA for any archaeal or bacterial species. In addition, this study unraveled the first sRNA regulon for haloarchaea.
The caddisfly subfamily Drusinae BANKS comprises roughly 100 species inhabiting mountain ranges in Europe, Asia Minor and the Caucasus. A 3-gene phylogeny of the subfamily previously identified three major clades that were corroborated by larval morphology and feeding ecologies: scraping grazers, omnivorous shredders and filtering carnivores. Larvae of filtering carnivores exhibit unique head capsule complexities, unknown from other caddisfly larvae. Here we assess the species-level relationships within filtering carnivores, hypothesizing that head capsule complexity is derived from simple shapes observed in the other feeding groups. We summarize the current systematics and taxonomy of the group, clarify the systematic position of Cryptothrix nebulicola, and present a larval key to filtering carnivorous Drusinae. We infer relationships of all known filtering carnivorous Drusinae and 34 additional Drusinae species using Bayesian species tree analysis and concatenated Bayesian phylogenetic analysis of 3805 bp of sequence data from six gene regions (mtCOI5-P, mtCOI3-P, 16S mrDNA, CADH, WG, 28S nrDNA), morphological cladistics from 308 characters, and a total evidence analysis. All analyses support monophyly of the three feeding ecology groups but fail to fully resolve internal relationships. Within filtering carnivores, variation in head setation and frontoclypeus structure may be associated with progressive niche adaptation, with less complex species recovered at a basal position. We propose that diversification of complex setation and frontoclypeus shape represents a recent evolutionary development, hypothetically enforcing speciation and niche specificity within filtering carnivorous Drusinae.
Autophagy is a highly conserved catabolic process cells use to maintain their homeostasis by degrading misfolded, damaged and excessive proteins, nonfunctional organelles, foreign pathogens and other cellular components. Hence, autophagy can be nonselective, where bulky portions of the cytoplasm are degraded upon stress, or a highly selective process, where preselected cellular components are degraded. To distinguish between different cellular components, autophagy employs selective autophagy receptors, which will link the cargo to the autophagy machinery, thereby sequestering it in the autophagosome for its subsequent degradation in the lysosome. Autophagy receptors undergo post-translational and structural modifications to fulfil their role in autophagy, or upon executing their role, for their own degradation. We highlight the four most prominent protein modifications – phosphorylation, ubiquitination, acetylation and oligomerisation – that are essential for autophagy receptor recruitment, function and turnover. Understanding the regulation of selective autophagy receptors will provide deeper insights into the pathway and open up potential therapeutic avenues.
The use of phylogenies in ecology is increasingly common and has broadened our understanding of biological diversity. Ecological sub-disciplines, particularly conservation, community ecology and macroecology, all recognize the value of evolutionary relationships but the resulting development of phylogenetic approaches has led to a proliferation of phylogenetic diversity metrics. The use of many metrics across the sub-disciplines hampers potential meta-analyses, syntheses, and generalizations of existing results. Further, there is no guide for selecting the appropriate metric for a given question, and different metrics are frequently used to address similar questions. To improve the choice, application, and interpretation of phylo-diversity metrics, we organize existing metrics by expanding on a unifying framework for phylogenetic information.
Generally, questions about phylogenetic relationships within or between assemblages tend to ask three types of question: how much; how different; or how regular? We show that these questions reflect three dimensions of a phylogenetic tree: richness, divergence, and regularity. We classify 70 existing phylo-diversity metrics based on their mathematical form within these three dimensions and identify ‘anchor’ representatives: for α-diversity metrics these are PD (Faith's phylogenetic diversity), MPD (mean pairwise distance), and VPD (variation of pairwise distances). By analysing mathematical formulae and using simulations, we use this framework to identify metrics that mix dimensions, and we provide a guide to choosing and using the most appropriate metrics. We show that metric choice requires connecting the research question with the correct dimension of the framework and that there are logical approaches to selecting and interpreting metrics. The guide outlined herein will help researchers navigate the current jungle of indices.
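The three anchor metrics can be made concrete with a small worked example. The sketch below computes Faith's PD, MPD, and VPD for assemblages of tips on a toy tree; the tree topology, taxon names, and branch lengths are invented purely for illustration and are not taken from the review.

```python
import itertools
import statistics

# Hypothetical 4-tip tree, encoded as child -> (parent, branch length).
# Each edge is identified by its child node.
tree = {
    "A": ("n1", 1.0), "B": ("n1", 1.0),
    "C": ("n2", 2.0), "D": ("n2", 2.0),
    "n1": ("n3", 1.5), "n2": ("n3", 0.5),
    "n3": (None, 0.0),  # root
}

def path_to_root(tip):
    """Set of edges (named by their child node) from a tip up to the root."""
    edges, node = set(), tip
    while tree[node][0] is not None:
        edges.add(node)
        node = tree[node][0]
    return edges

def faith_pd(assemblage):
    """Faith's PD: total branch length of the subtree spanning the tips."""
    edges = set().union(*(path_to_root(t) for t in assemblage))
    return sum(tree[e][1] for e in edges)

def tip_distance(a, b):
    """Patristic distance: branch length along the path between two tips."""
    return sum(tree[e][1] for e in path_to_root(a) ^ path_to_root(b))

def mpd_vpd(assemblage):
    """MPD and VPD: mean and variance of all pairwise tip distances."""
    d = [tip_distance(a, b) for a, b in itertools.combinations(assemblage, 2)]
    return statistics.mean(d), statistics.pvariance(d)
```

On this toy tree, `faith_pd({"A", "B"})` sums the three edges A, B and n1 (richness dimension), while `mpd_vpd` captures the divergence and regularity dimensions of the same assemblage.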
Justification: In Mexico, the number of unidentified bodies has been steadily rising for years. By now, more than 50,000 bodies are considered unidentified. Forensic laboratories that could perform comparative molecular genetic investigations are often overburdened, and examinations can take months. Therefore, pragmatic approaches that can help to identify more unknown bodies must be sought. The increased use of distinctive physical features might be one, and the high rate of tattooed people in Mexico points towards a great potential of tattoos as a tool for identification. The prerequisite for a comparison of antemortem (missing persons) and postmortem (unknown bodies) data is an objective description of the particularities, e.g., of the tattoos. The aim of this study was to establish an objective classification for tattoo motifs, taking into consideration local preferences.
Methods: In the database of the medicolegal services of the Instituto Jaliscience de Ciencias Forenses (IJCF) in Guadalajara, postmortem data of 1000 tattooed bodies from 2019 were evaluated. Tattooed body localizations and tattoo motifs were categorized according to sex and age.
Results: The 1000 tattooed deceased showed tattoos on 2342 body localizations. The motifs were grouped and linked to the following 11 keywords (in decreasing frequency): letters/numbers, human, symbol (other), plant, symbol (religious), animal, object, fantasy/demon/comic, tribal/ornament/geometry, other, unrecognizable.
Conclusion: Using the proposed classification, tattoo motifs can be described objectively and classified in a practical way. If used for antemortem (missing persons) and postmortem (unknown bodies) documentation, motifs can be searched and compared efficiently, helping to identify unknown bodies.
Animal tracking and biologging devices record large amounts of data on individual movement behaviors in natural environments. In these data, movement ecologists often view unexplained variation around the mean as “noise” when studying patterns at the population level. In the field of behavioral ecology, however, focus has shifted from population means to the biological underpinnings of variation around means. Specifically, behavioral ecologists use repeated measures of individual behavior to partition behavioral variability into intrinsic among-individual variation and reversible behavioral plasticity and to quantify: a) individual variation in behavioral types (i.e. different average behavioral expression), b) individual variation in behavioral plasticity (i.e. different responsiveness of individuals to environmental gradients), c) individual variation in behavioral predictability (i.e. different residual within-individual variability of behavior around the mean), and d) correlations among these components and correlations in suites of behaviors, called ‘behavioral syndromes’. We here suggest that partitioning behavioral variability in animal movements will further the integration of movement ecology with other fields of behavioral ecology. We provide a literature review illustrating that individual differences in movement behaviors are insightful for wildlife and conservation studies and give recommendations regarding the data required for addressing such questions. In the accompanying R tutorial we provide a guide to the statistical approaches quantifying the different aspects of among-individual variation. We use movement data from 35 African elephants and show that elephants differ in a) their average behavior for three common movement behaviors, b) the rate at which they adjusted movement over a temporal gradient, and c) their behavioral predictability (ranging from more to less predictable individuals). 
Finally, two of the three movement behaviors were correlated into a behavioral syndrome (d), with farther moving individuals having shorter mean residence times. Though not explicitly tested here, individual differences in movement and predictability can affect an individual's risk of being hunted or poached and could therefore open new avenues for conservation biologists to assess population viability. We hope that this review, tutorial, and worked example will encourage movement ecologists to examine the biology of individual variation in animal movements hidden behind the population mean.
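As a minimal illustration of component (a) above, the following Python sketch simulates repeated movement measures for hypothetical individuals and estimates repeatability, i.e. the share of total behavioral variance attributable to among-individual differences. All parameters are invented for the simulation; the paper's own tutorial uses R and mixed-effects models rather than this simple ANOVA-style partitioning.

```python
import random
import statistics

random.seed(1)

# Simulate repeated measures (e.g. daily displacement, standardized)
# for hypothetical individuals; variances are illustrative only.
n_ind, n_obs = 30, 20
data = {}
for i in range(n_ind):
    behavioral_type = random.gauss(0, 1.0)   # individual mean (sd = 1.0)
    data[i] = [behavioral_type + random.gauss(0, 0.5)  # residual sd = 0.5
               for _ in range(n_obs)]

# Variance partitioning: within-individual (residual) variance and
# among-individual variance, the latter corrected for sampling error
# in the individual means.
within = statistics.mean(statistics.pvariance(obs) for obs in data.values())
means = [statistics.mean(obs) for obs in data.values()]
among = statistics.pvariance(means) - within / n_obs

repeatability = among / (among + within)
print(round(repeatability, 2))  # true value here is 1.0 / (1.0 + 0.25) = 0.8
```

With these simulated variances the estimate should recover a repeatability near 0.8; real movement data would additionally require modeling plasticity (random slopes) and predictability (residual variance differences), as described above.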
Ferroptosis is an iron-dependent form of cell death, which is triggered by disturbed membrane integrity due to an overproduction of lipid peroxides. Induction of ferroptosis comprises several alterations, i.e. altered iron metabolism, response to oxidative stress, or lipid peroxide production. At the physiological level transcription, translation, and microRNAs add to the appearance and/or activity of building blocks that negatively or positively balance ferroptosis. Ferroptosis contributes to tissue damage in the case of, e.g., brain and heart injury but may be desirable to overcome chemotherapy resistance. For a more complete picture, it is crucial to also consider the cellular microenvironment, which during inflammation and in the tumor context is dominated by hypoxia. This graphical review visualizes basic mechanisms of ferroptosis, categorizes general inducers and inhibitors of ferroptosis, and puts a focus on microRNAs, iron homeostasis, and hypoxia as regulatory components.
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, neglecting the multivariate nature of interactions: investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios.
Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible.
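The delay-based tagging idea can be sketched in a few lines: a directed edge whose reconstructed delay matches the summed delays along a two-edge indirect path carries a cascade-like timing signature and is tagged as potentially spurious. The network, delay values, and tolerance below are invented for illustration; this is not the reference MATLAB implementation, which works on statistically reconstructed interaction delays.

```python
# Hypothetical directed network: edge -> reconstructed interaction delay (ms).
delays = {
    ("A", "B"): 10.0,
    ("B", "C"): 15.0,
    ("A", "C"): 25.0,   # matches A->B->C, a cascade timing signature
    ("C", "D"): 8.0,
}

def tag_spurious(delays, tol=1.0):
    """Tag edges whose delay matches (within tol) the summed delay of a
    two-edge indirect path through the surrounding network."""
    tagged = set()
    for (src, tgt), d_direct in delays.items():
        for (a, mid), d1 in delays.items():
            if a != src or mid == tgt:   # need a path src -> mid -> tgt
                continue
            d2 = delays.get((mid, tgt))
            if d2 is not None and abs(d1 + d2 - d_direct) <= tol:
                tagged.add((src, tgt))   # candidate spurious edge
    return tagged

print(tag_spurious(delays))
```

Pruning the tagged edges yields the conservative approximation described above: edges whose timing is consistent with an indirect route are removed, at the cost of possibly discarding some true direct interactions with coincidentally matching delays.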
The membrane proximal external region (MPER) of the HIV-1 glycoprotein gp41 is targeted by the broadly neutralizing antibodies 2F5 and 4E10. To date, no immunization regimen in animals or humans has produced HIV-1 neutralizing MPER-specific antibodies. We immunized llamas with gp41-MPER proteoliposomes and selected a MPER-specific single chain antibody (VHH), 2H10, whose epitope overlaps with that of mAb 2F5. Bi-2H10, a bivalent form of 2H10, which displayed an approximately 20-fold increased affinity compared to the monovalent 2H10, neutralized various sensitive and resistant HIV-1 strains, as well as SHIV strains in TZM-bl cells. X-ray and NMR analyses combined with mutagenesis and modeling revealed that 2H10 recognizes its gp41 epitope in a helical conformation. Notably, tryptophan 100 at the tip of the long CDR3 is not required for gp41 interaction but essential for neutralization. Thus bi-2H10 is an anti-MPER antibody generated by immunization that requires hydrophobic CDR3 determinants in addition to epitope recognition for neutralization similar to the mode of neutralization employed by mAbs 2F5 and 4E10.
Wetlands such as bogs, swamps, or freshwater marshes are hotspots of biodiversity. For 5.1 million km² of inland wetlands, the dynamics of area and water storage, which strongly impact biodiversity and ecosystem services, were simulated using the global hydrological model WaterGAP. For the first time, the impacts of both human water use and man‐made reservoirs (WUR) and future climate change (CC) on wetlands around the globe were quantified. WUR impacts are concentrated in arid/semiarid regions, where WUR decreased mean wetland water storage by more than 5% on 8.2% of the mean wetland area during 1986–2005 (Am), with highest decreases in groundwater depletion area. Using output of three climate models, CC impacts on wetlands were quantified, distinguishing unavoidable impacts [i.e., at 2 °C global warming (GW)] from avoidable impacts (difference between 3 °C and 2 °C impacts). Even unavoidable CC impacts are projected to be much larger than WUR impacts, also in arid/semiarid regions. On most wetland area with reliable estimates, avoidable CC impacts are more than twice as large as unavoidable impacts. In case of 2 °C GW, half of Am is estimated to be unaffected by mean storage changes of more than 5%, but only one third in case of 3 °C GW. Temporal variability of water storage will increase for most wetlands. Wetlands in dry regions will be affected the most, particularly by water storage decreases in the dry season. Different from wealthier countries, low‐income countries will dominantly suffer from a decrease in wetland water storage due to CC.
Irrigation intensifies land use by increasing crop yield but also impacts water resources. It affects water and energy balances and consequently the microclimate in irrigated regions. Therefore, knowledge of the extent of irrigated land is important for hydrological and crop modelling, global change research, and assessments of resource use and management. Information on the historical evolution of irrigated lands is limited. The new global Historical Irrigation Dataset (HID) provides estimates of the temporal development of the area equipped for irrigation (AEI) between 1900 and 2005 at 5 arc-minute resolution. We collected subnational irrigation statistics from various sources and found that the global extent of AEI increased from 63 million ha (Mha) in 1900 to 112 Mha in 1950 and 306 Mha in 2005. We developed eight gridded versions of time series of AEI by combining subnational irrigation statistics with different data sets on the historical extent of cropland and pasture. Different rules were applied to maximize consistency of the gridded products to subnational irrigation statistics or to historical cropland and pasture data sets. The HID reflects very well the spatial patterns of irrigated land in the western United States as shown on historical maps. Mean aridity on irrigated land increased and river discharge decreased from 1900 to 1950 whereas aridity decreased from 1950 to 2005. The dataset and its documentation are made available in an open data repository at https://mygeohub.org/publications/8 (doi:10.13019/M2MW2G).
Irrigation intensifies land use by increasing crop yield but also impacts water resources. It affects water and energy balances and consequently the microclimate in irrigated regions. Therefore, knowledge of the extent of irrigated land is important for hydrological and crop modelling, global change research, and assessments of resource use and management. Information on the historical evolution of irrigated lands is limited. The new global historical irrigation data set (HID) provides estimates of the temporal development of the area equipped for irrigation (AEI) between 1900 and 2005 at 5 arcmin resolution. We collected sub-national irrigation statistics from various sources and found that the global extent of AEI increased from 63 million ha (Mha) in 1900 to 111 Mha in 1950 and 306 Mha in 2005. We developed eight gridded versions of time series of AEI by combining sub-national irrigation statistics with different data sets on the historical extent of cropland and pasture. Different rules were applied to maximize consistency of the gridded products to sub-national irrigation statistics or to historical cropland and pasture data sets. The HID reflects very well the spatial patterns of irrigated land as shown on historical maps for the western United States (around year 1900) and on a global map (around year 1960). Mean aridity on irrigated land increased and mean natural river discharge on irrigated land decreased from 1900 to 1950 whereas aridity decreased and river discharge remained approximately constant from 1950 to 2005. The data set and its documentation are made available in an open-data repository at https://mygeohub.org/publications/8 (doi:10.13019/M20599).
Global investment in biomedical research has grown significantly over the last decades, reaching approximately a quarter of a trillion US dollars in 2010. However, not all of this investment is distributed evenly by gender. It follows, arguably, that scarce research resources may not be optimally invested (by either not supporting the best science or by failing to investigate topics that benefit women and men equitably). Women across the world tend to be significantly underrepresented in research both as researchers and research participants, receive less research funding, and appear less frequently than men as authors on research publications. There is also some evidence that women are relatively disadvantaged as the beneficiaries of research, in terms of its health, societal and economic impacts. Historical gender biases may have created a path dependency that means that the research system and the impacts of research are biased towards male researchers and male beneficiaries, making it inherently difficult (though not impossible) to eliminate gender bias. In this commentary, we – a group of scholars and practitioners from Africa, America, Asia and Europe – argue that gender-sensitive research impact assessment could become a force for good in moving science policy and practice towards gender equity. Research impact assessment is the multidisciplinary field of scientific inquiry that examines the research process to maximise scientific, societal and economic returns on investment in research. It encompasses many theoretical and methodological approaches that can be used to investigate gender bias and recommend actions for change to maximise research impact. We offer a set of recommendations to research funders, research institutions and research evaluators who conduct impact assessment on how to include and strengthen analysis of gender equity in research impact assessment and issue a global call for action.
Recent phylogenomic studies have failed to conclusively resolve certain branches of the placental mammalian tree, despite the evolutionary analysis of genomic data from 32 species. Previous analyses of single genes and retroposon insertion data yielded support for different phylogenetic scenarios for the most basal divergences. The results indicated that some mammalian divergences were best interpreted not as a single bifurcating tree, but as an evolutionary network. In these studies the relationships among some orders of the super-clade Laurasiatheria were poorly supported, albeit not studied in detail. Therefore, 4775 protein-coding genes (6,196,263 nucleotides) were collected and aligned in order to analyze the evolution of this clade. Additionally, over 200,000 introns were screened in silico, resulting in 32 phylogenetically informative long interspersed nuclear elements (LINE) insertion events.
The present study shows that the genome evolution of Laurasiatheria may best be understood as an evolutionary network. Thus, contrary to the common expectation of resolving major evolutionary events as a bifurcating tree, genome analyses unveil complex speciation processes even in deep mammalian divergences. We exemplify this with a subset of 1159 suitable genes that have individual histories, most likely due to incomplete lineage sorting or introgression, processes that can make the genealogy of mammalian genomes complex.
These unexpected results have major implications for the understanding of evolution in general, because the evolution of even some higher-level taxa, such as mammalian orders, cannot always be interpreted as a simple bifurcating pattern.
Although autism spectrum disorders (ASDs) have a substantial genetic basis, most of the known genetic risk has been traced to rare variants, principally copy number variants (CNVs). To identify common risk variation, the Autism Genome Project (AGP) Consortium genotyped 1558 rigorously defined ASD families for 1 million single-nucleotide polymorphisms (SNPs) and analyzed these SNP genotypes for association with ASD. In one of four primary association analyses, the association signal for marker rs4141463, located within MACROD2, crossed the genome-wide association significance threshold of P < 5 × 10⁻⁸. When a smaller replication sample was analyzed, the risk allele at rs4141463 was again over-transmitted; yet, consistent with the winner's curse, its effect size in the replication sample was much smaller; and, for the combined samples, the association signal barely fell below the P < 5 × 10⁻⁸ threshold. Exploratory analyses of phenotypic subtypes yielded no significant associations after correction for multiple testing. They did, however, yield strong signals within several genes, KIAA0564, PLD5, POU6F2, ST8SIA2 and TAF1C.
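The genome-wide threshold quoted above has a standard arithmetic reading, sketched below as an illustration: a Bonferroni correction of a 5% family-wise error rate over roughly one million independent common SNPs. This is the conventional folklore justification, not a derivation given in the abstract itself.

```python
# Bonferroni reading of the genome-wide significance cutoff:
# control the family-wise error rate alpha over n_tests independent tests.
alpha = 0.05          # target family-wise error rate
n_tests = 1_000_000   # approx. number of independent common SNPs
threshold = alpha / n_tests   # = 5 x 10^-8, the quoted genome-wide cutoff
```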
Aging of biological systems is controlled by various processes which have a potential impact on gene expression. Here we report a genome-wide transcriptome analysis of the fungal aging model Podospora anserina. Total RNA of three individuals of defined age was pooled and analyzed by SuperSAGE (serial analysis of gene expression). A bioinformatics analysis identified different molecular pathways affected during aging. While the abundance of transcripts linked to ribosomes and to the proteasome quality-control system was found to decrease during aging, that of transcripts associated with autophagy increased, suggesting that autophagy may act as a compensatory quality-control pathway. Transcript profiles associated with energy metabolism, including mitochondrial functions, were found to fluctuate during aging. Comparison of wild-type transcripts, which are continuously down-regulated during aging, with those down-regulated in the long-lived, copper-uptake mutant grisea validated the relevance of age-related changes in cellular copper metabolism. Overall, we (i) present a unique age-related data set from a longitudinal study of the experimental aging model P. anserina, which represents a reference resource for future investigations in a variety of organisms, (ii) suggest autophagy to be a key quality-control pathway that becomes active once other pathways fail, and (iii) present testable predictions for subsequent experimental investigations.
BACKGROUND: Acetogenic bacteria are able to use CO2 as terminal electron acceptor of an anaerobic respiration, thereby producing acetate with electrons coming from H2. Due to this feature, acetogens came into focus as platforms to produce biocommodities from waste gases such as H2+CO2 and/or CO. A prerequisite for metabolic engineering is a detailed understanding of the mechanisms of ATP synthesis and electron-transfer reactions to ensure redox homeostasis. Acetogenesis involves the reduction of CO2 to acetate via soluble enzymes and is coupled to energy conservation by a chemiosmotic mechanism. The membrane-bound module, acting as an ion pump, was of special interest for decades and recently, an Rnf complex was shown to couple electron flow from reduced ferredoxin to NAD+ with the export of Na+ in Acetobacterium woodii. However, not all acetogens have rnf genes in their genome. In order to gain further insights into energy conservation of non-Rnf-containing, thermophilic acetogens, we sequenced the genome of Thermoanaerobacter kivui.
RESULTS: The genome of Thermoanaerobacter kivui comprises 2.9 Mbp with a G+C content of 35% and 2,378 protein-encoding ORFs. Neither autotrophic growth nor acetate formation from H2+CO2 was dependent on Na+, and acetate formation was inhibited by a protonophore, indicating that H+ is used as the coupling ion for primary bioenergetics. This is consistent with the finding that the c subunit of the F1FO ATP synthase does not have the conserved Na+ binding motif. A search for potential H+-translocating, membrane-bound protein complexes revealed genes potentially encoding two different proton-reducing, energy-conserving hydrogenases (Ech).
CONCLUSIONS: The thermophilic acetogen T. kivui does not use Na+ but H+ for chemiosmotic ATP synthesis. It does not contain cytochromes, and the electrochemical proton gradient is most likely established by an energy-conserving hydrogenase (Ech). Its thermophilic nature and the efficient conversion of H2+CO2 make T. kivui an interesting acetogen for the production of biocommodities in industrial microbiology. Furthermore, our experimental data as well as the increasing number of sequenced genomes of acetogenic bacteria support the new classification of acetogens into two groups: Rnf- and Ech-containing acetogens.
Background: To perform a comprehensive study on the relationship between vitamin D metabolism and the response to interferon-α-based therapy of chronic hepatitis C.
Methodology/Principal Findings: Associations between a functionally relevant polymorphism in the gene encoding the vitamin D 1α-hydroxylase (CYP27B1-1260 rs10877012) and the response to treatment with pegylated interferon-α (PEG-IFN-α) and ribavirin were determined in 701 patients with chronic hepatitis C. In addition, associations between serum concentrations of 25-hydroxyvitamin D3 (25[OH]D3) and treatment outcome were analysed. CYP27B1-1260 rs10877012 was found to be an independent predictor of sustained virologic response (SVR) in patients with poor-response IL28B genotypes (15% difference in SVR for rs10877012 genotype AA vs. CC, p = 0.02, OR = 1.52, 95% CI = 1.061–2.188), but not in patients with favourable IL28B genotype. Patients with chronic hepatitis C showed a high prevalence of vitamin D insufficiency (25[OH]D3<20 ng/mL) during all seasons, but 25(OH)D3 serum levels were not associated with treatment outcome.
Conclusions/Significance: Our study suggests a role of bioactive vitamin D (1,25[OH]2D3, calcitriol) in the response to treatment of chronic hepatitis C. However, serum concentration of the calcitriol precursor 25(OH)D3 is not a suitable predictor of treatment outcome.
Model frameworks, based on Floquet theory, have been shown to produce effective tools for accurately predicting phase-noise response of single (free-running) oscillator systems. This method of approach, referred to herein as macro-modeling, has been discussed in several highly influential papers and now constitutes an established branch of modern circuit theory. The increased application of, for example, injection-locked oscillators and oscillator arrays in modern communication systems has subsequently exposed the demand for similar rigorous analysis tools aimed at coupled oscillating systems. This paper presents a novel solution in terms of a macro-model characterizing the phase-response of synchronized coupled oscillator circuits and systems perturbed by weak noise sources. The framework is generalized and hence applicable to all circuit configurations and coupling topologies generating a synchronized steady-state. It advances and replaces the phenomenological descriptions currently found in the published literature pertaining to this topic and, as such, represents a significant breakthrough w.r.t. coupled oscillator noise modeling. The proposed model is readily implemented numerically using standard routines.
Using more than a million randomly generated equations of state that satisfy theoretical and observational constraints, we construct a novel, scale-independent description of the sound speed in neutron stars, where the latter is expressed in a unit cube spanning the normalized radius, r/R, and the mass normalized to the maximum one, M/MTOV. From this generic representation, a number of interesting and surprising results can be deduced. In particular, we find that light (heavy) stars have stiff (soft) cores and soft (stiff) outer layers, or that the maximum of the sound speed is located at the center of light stars but moves to the outer layers for stars with M/MTOV ≳ 0.7, reaching a constant value of cs = 1/√2 as M → MTOV. We also show that the sound speed decreases below the conformal limit cs = 1/√3 at the center of stars with M = MTOV. Finally, we construct an analytic expression that accurately describes the radial dependence of the sound speed as a function of the neutron-star mass, thus providing an estimate of the maximum sound speed expected in a neutron star.
LIN-2/7 (L27) domains are protein interaction modules that preferentially hetero-oligomerize, a property critical for their function in directing specific assembly of supramolecular signaling complexes at synapses and other polarized cell-cell junctions. We have solved the solution structure of the heterodimer composed of the L27 domains from LIN-2 and LIN-7. Comparison of this structure with other L27 domain structures has allowed us to formulate a general model for why most L27 domains form an obligate heterodimer complex. L27 domains can be divided into two types (A and B), with each heterodimer comprising an A/B pair. We have identified two keystone positions that play a central role in discrimination. The residues at these positions are energetically acceptable in the context of an A/B heterodimer, but would lead to packing defects or electrostatic repulsion in the context of A/A and B/B homodimers. As predicted by the model, mutations of keystone residues stabilize normally strongly disfavored homodimers. Thus, L27 domains are specifically optimized to avoid homodimeric interactions.
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells, we show that denervation-induced dendritic atrophy also subserves homeostatic functions: by shortening their dendritic tree, granule cells compensate for the loss of inputs through a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c·n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected and measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties.
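The selection scheme described above can be sketched in Python. This is an illustrative implementation of FIND (quickselect) with a median-of-subsample pivot of size m ≈ c·n^α, as in the abstract; all other details (odd sample size, three-way partition) are assumptions of this sketch, not the authors' exact algorithm.

```python
import random

def find(data, k, alpha=0.5, c=1.0):
    """Return the k-th smallest element (0-indexed) of `data` using
    quickselect with a pivot chosen as the median of a random subsample
    of size roughly c * n**alpha."""
    data = list(data)
    while True:
        n = len(data)
        if n == 1:
            return data[0]
        m = max(1, min(n, int(c * n ** alpha)))
        if m % 2 == 0:          # force an odd sample size so the median is unique
            m -= 1
        sample = random.sample(data, m)
        pivot = sorted(sample)[m // 2]
        # three-way partition around the pivot
        lesser = [x for x in data if x < pivot]
        equal = [x for x in data if x == pivot]
        greater = [x for x in data if x > pivot]
        if k < len(lesser):
            data = lesser
        elif k < len(lesser) + len(equal):
            return pivot
        else:
            k -= len(lesser) + len(equal)
            data = greater
```

The comparison count analysed in the article corresponds to the comparisons performed in the partitioning lists above.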
Schopenhauer holds that a non-dogmatic ethics requires demonstrable laws derived from experience. In this sense, the foundation of an ethics must be an immanent metaphysics, one that grounds its claims in possible experience and is thereby capable of providing, once and for all, a legitimate foundation for morality. Schopenhauer's grounding of morality thus follows an argument very close to a scientific methodology. For Schopenhauer, philosophy should come closer to a cosmology than to theology. In "Schopenhauer's Thought in Relation to Science and Religion", Max Horkheimer highlights the fruitfulness of this philosophical position and renews Schopenhauer's importance both for his own formation and for a legitimate interpretation of modernity. In this article, we follow both the fundamental aspects of Schopenhauer's grounding of morality and aspects of Horkheimer's interpretation of the philosopher's undertaking.
Yeast cells can be killed upon expression of pro-apoptotic mammalian proteins. We have established a functional yeast survival screen that was used to isolate novel human anti-apoptotic genes overexpressed in treatment-resistant tumors. The screening of three different cDNA libraries prepared from metastatic melanoma, glioblastomas and leukemic blasts allowed for the identification of many yeast cell death-repressing cDNAs, including 28% of genes that are already known to inhibit apoptosis, 35% of genes upregulated in at least one tumor entity and 16% of genes described as both anti-apoptotic in function and upregulated in tumors. These results confirm the great potential of this screening tool to identify novel anti-apoptotic and tumor-relevant molecules. Three of the isolated candidate genes were further analyzed regarding their anti-apoptotic function in cell culture and their potential as a therapeutic target for molecular therapy. PAICS, an enzyme required for de novo purine biosynthesis, the long non-coding RNA MALAT1 and the MAST2 kinase are overexpressed in certain tumor entities and capable of suppressing apoptosis in human cells. Using a subcutaneous xenograft mouse model, we also demonstrated that glioblastoma tumor growth requires MAST2 expression. An additional advantage of the yeast survival screen is its universal applicability. By using various inducible pro-apoptotic killer proteins and screening the appropriate cDNA library prepared from normal or pathologic tissue of interest, the survival screen can be used to identify apoptosis inhibitors in many different systems.
Species of the genus Blautia are typical inhabitants of the human gut and considered beneficial gut microbes. However, their role in the gut microbiome and their metabolic features are poorly understood. Blautia schinkii was described as an acetogenic bacterium, characterized by a functional Wood–Ljungdahl pathway (WLP) of acetogenesis from H2 + CO2. Here we report that two relatives, Blautia luti and Blautia wexlerae, do not grow on H2 + CO2. Inspection of the genome sequence revealed all genes of the WLP except genes encoding a formate dehydrogenase and an electron-bifurcating hydrogenase. Enzyme assays confirmed this prediction. Accordingly, resting cells converted neither H2 + CO2 nor H2 + HCOOH + CO2 to acetate. Carbon monoxide is an intermediate of the WLP and a substrate for many acetogens. Blautia luti and B. wexlerae had an active CO dehydrogenase, and resting cells performed acetogenesis from HCOOH + CO2 + CO, demonstrating a functional WLP. Bioinformatic analyses revealed that many Blautia strains as well as other gut acetogens lack formate dehydrogenases and hydrogenases. Thus, the use of formate instead of H2 + CO2 as an interspecies hydrogen and electron carrier seems to be more common in the gut microbiome.
Increased sympathetic noradrenergic signaling is crucially involved in fear and anxiety as defensive states. MicroRNAs regulate dynamic gene expression during synaptic plasticity, and genetic variation of microRNAs modulating noradrenaline transporter gene (SLC6A2) expression may thus lead to altered central and peripheral processing of fear and anxiety. In silico prediction of microRNA regulation of SLC6A2 was confirmed by luciferase reporter assays and identified hsa-miR-579-3p as a regulating microRNA. The minor (T)-allele of rs2910931 (MAF_cases = 0.431, MAF_controls = 0.368) upstream of MIR579 was associated with panic disorder in patients (p_allelic = 0.004, n_cases = 506, n_controls = 506) and with higher trait anxiety in healthy individuals (p_ASI = 0.029, p_ACQ = 0.047, n = 3112). Compared to the major (A)-allele, increased promoter activity was observed in luciferase reporter assays in vitro, suggesting more effective MIR579 expression and SLC6A2 repression in vivo (p = 0.041). Healthy individuals carrying at least one (T)-allele showed a brain activation pattern suggesting increased defensive responding and sympathetic noradrenergic activation in midbrain and limbic areas during the extinction of conditioned fear. Panic disorder patients carrying two (T)-alleles showed elevated heart rates in an anxiety-provoking behavioral avoidance test (F(2, 270) = 5.47, p = 0.005). Fine-tuning of noradrenaline homeostasis by a MIR579 genetic variation modulated central and peripheral sympathetic noradrenergic activation during fear processing and anxiety. This study opens new perspectives on the role of microRNAs in the etiopathogenesis of anxiety disorders, particularly their cardiovascular symptoms and comorbidities.
Background: The human ATP-binding cassette, subfamily B, member 11 (ABCB11) gene encodes the bile salt export pump, which is exclusively expressed at the canalicular membrane of hepatocytes. A frequent variant in the coding region, c.1331 T > C, leading to the amino acid exchange p.V444A, has been associated with altered serum bile salt levels in healthy individuals and predisposes homozygous carriers of the [C] allele for obstetric cholestasis. Recently, elevated bile salt levels were shown to be significantly associated with rates and risk of cirrhosis in patients with chronic hepatitis C virus (HCV) infection treated with pegylated interferon-alpha2 and ribavirin, suggesting a potential role for bile salt levels in HCV treatment outcomes and in the fibrogenic evolution of HCV-related liver disease. The aim of this study was to investigate a possible association of ABCB11 c.1331 T > C with hepatitis C virus (HCV) infection and fibrosis stages as assessed by non-invasive transient elastography in a German cohort of patients.
Methods: ABCB11 c.1331 T > C genotype was determined by allelic discrimination assay in 649 HCV infected cases and 413 controls. Overall, 444 cases were staged for fibrotic progression by measurement of liver stiffness.
Results: Homo- or heterozygous presence of the frequent [C] allele was associated with HCV positivity (OR = 1.41, CI = 1.02 - 1.95, p = 0.037). No association was detectable between the ABCB11 c.1331 T > C genotype and increased liver stiffness.
Conclusions: Our data confirm that homozygous presence of the major [C] allele of ABCB11 c.1331 T > C is a genetic susceptibility factor for HCV infection, but not for liver fibrosis.
Video and image data are regularly used in the field of benthic ecology to document biodiversity. However, their use is subject to a number of challenges, principally the identification of taxa within the images without associated physical specimens. The challenge of applying traditional taxonomic keys to the identification of fauna from images has led to the development of personal, group, or institution level reference image catalogues of operational taxonomic units (OTUs) or morphospecies. Lack of standardisation among these reference catalogues has led to problems with observer bias and the inability to combine datasets across studies. In addition, lack of a common reference standard is stifling efforts in the application of artificial intelligence to taxon identification. Using the North Atlantic deep sea as a case study, we propose a database structure to facilitate standardisation of morphospecies image catalogues between research groups and support future use in multiple front-end applications. We also propose a framework for coordination of international efforts to develop reference guides for the identification of marine species from images. The proposed structure maps to the Darwin Core standard to allow integration with existing databases. We suggest a management framework where high-level taxonomic groups are curated by a regional team, consisting of both end users and taxonomic experts. We identify a mechanism by which overall quality of data within a common reference guide could be raised over the next decade. Finally, we discuss the role of a common reference standard in advancing marine ecology and supporting sustainable use of this ecosystem.
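As an illustration of the Darwin Core mapping mentioned above, a single morphospecies catalogue entry might be represented as follows. The field names on the right are real Darwin Core terms; the record layout, values, and the `catalogue_key` helper are illustrative assumptions of this sketch, not the schema the article proposes.

```python
# One morphospecies ("OTU") reference-catalogue entry, with each field
# mapped onto a Darwin Core term (named in the trailing comments).
otu_record = {
    "scientificName": "Porifera",              # dwc:scientificName (lowest certain rank)
    "taxonRank": "phylum",                     # dwc:taxonRank
    "identificationQualifier": "msp-1",        # dwc:identificationQualifier (morphospecies label)
    "identifiedBy": "Regional curation team",  # dwc:identifiedBy
    "associatedMedia": "https://example.org/images/porifera-msp-1.jpg",  # dwc:associatedMedia (hypothetical URL)
    "identificationRemarks": "White encrusting form; no physical voucher",  # dwc:identificationRemarks
}

def catalogue_key(record):
    """Combine name, rank and morphospecies qualifier into a single
    open-nomenclature-style label, e.g. 'Porifera (phylum) msp-1'."""
    return (f"{record['scientificName']} "
            f"({record['taxonRank']}) "
            f"{record['identificationQualifier']}")
```

Because every field maps to a Darwin Core term, such records can be exchanged between research groups and ingested by existing biodiversity databases without bespoke conversion.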
Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making.
Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation increases as one possible measure of agricultural intensification that could limit the expansion of cropland required in response to climate change and growing food demand. This example shows that current impact model uncertainties pose an important challenge to long-term mitigation planning and must not be ignored in long-term strategic decision making.
We derive a shape derivative formula for the family of principal Dirichlet eigenvalues λs(Ω) of the fractional Laplacian (−Δ)^s associated with bounded open sets Ω ⊂ R^N of class C^{1,1}. This extends, with the help of a new approach, a result in Dalibard and Gérard-Varet (Calc. Var. 19(4):976–1013, 2013) which was restricted to the case s = 1/2. As an application, we consider the maximization problem for λs(Ω) among annular-shaped domains of fixed volume of the type B ∖ B̄′, where B is a fixed ball and B′ is a ball whose position is varied within B. We prove that λs(B ∖ B̄′) is maximal when the two balls are concentric. Our approach also allows us to derive similar results for the fractional torsional rigidity. More generally, we characterize one-sided shape derivatives for best constants of a family of subcritical fractional Sobolev embeddings.
The Internet of Things has revolutionized the way knowledge is produced and reproduced. It is a kind of communicational interface between humans, machines and objects which, by fusing the material and informational worlds, raises the following questions: (1) would the possibility of immediately obtaining any information imply the production of critical thought, in a kind of cause-effect relation?; (2) if information can be accessed at any time and in any place, what transformations would follow in the formative process of students and teachers? Precisely these questions led the authors of this article to formulate the following objective: to reflect critically on the revitalization of the concept of formation (Bildung) in the temporality and locality of the Internet of Things.
The extraordinary desiccation resistance of the opportunistic human pathogen Acinetobacter baumannii is a key to its survival and spread in medical care units. The accumulation of compatible solutes such as glutamate, mannitol and trehalose contributes to the desiccation resistance. Here, we have used osmolarity as a tool to study the response of cells to low water activities and studied the role of a potential inorganic osmolyte, K+, in the osmostress response. Growth of A. baumannii was K+-dependent, and the K+-dependence increased with the osmolarity of the medium. After an osmotic upshock, cells accumulated K+, and K+ accumulation increased with the salinity of the medium. K+ uptake was reduced in the presence of glycine betaine. The intracellular pools of compatible solutes were dependent on the K+ concentration: mannitol and glutamate concentrations increased with increasing K+ concentrations, whereas trehalose was highest at low K+. After osmotic upshock, cells first accumulated K+, followed by synthesis of glutamate; later, mannitol and trehalose synthesis started, accompanied by a decrease of intracellular K+ and glutamate. These experiments demonstrate K+ uptake as a first response to osmostress in A. baumannii and reveal a hierarchy in the time-dependent accumulation of K+ and different organic solutes.
The HADES experiment at GSI has recently provided data on the flow coefficients v1, ..., v4 for protons in Au+Au reactions at Elab = 1.23 AGeV (or √sNN = 2.4 GeV). These data allow us to estimate the shear viscosity over entropy density ratio, η/s, at low energies via a coarse-graining analysis of UrQMD transport simulations of the flow harmonics in comparison to the experimental data. By this we can provide, for the first time, an estimate of η/s ≈ 0.65 ± 0.15 (or (8 ± 2)/(4π)) at such low energies.
The problem of universalization in moral philosophy was by no means thematized by Adorno. Nevertheless, there are elements in his works that allow us to reflect on this topic. My thesis is that, on the basis of Adorno's writings, one can study how the ambivalence of normative guidelines is integrated into a critical theory of morality without renouncing a claim to critical-normative validity.
This essay sets out to trace, across the course of his life and writings, the following assertion by Adorno: "I studied philosophy and music. Instead of deciding for one of them, I always had the impression that I was pursuing the same thing in both", and to show how the continuous and dialectical relation between music and philosophy was fruitful in his educational and scientific formation, as well as in the constitution of his original philosophical thought.
A new method of event characterization based on Deep Learning is presented. The PointNet models can be used for fast, online event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV, which are then used to train and evaluate PointNet-based architectures. The models can be trained on features like the hit positions of particles in the CBM detector planes, tracks reconstructed from the hits, or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event characterization is shown to be more accurate and less model-dependent than conventional methods and can exploit the performance boost of modern GPUs.
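The accuracy figures quoted above (mean error, relative precision) can be computed from predicted and true impact parameters. A minimal sketch follows, assuming mean error is the mean residual and relative precision is the residual spread divided by the mean true value; the article may define these quantities differently.

```python
import numpy as np

def impact_parameter_metrics(b_true, b_pred):
    """Mean error (fm) and relative precision (dimensionless) of
    reconstructed impact parameters. Definitions here are plausible
    assumptions, not necessarily those used in the paper."""
    b_true = np.asarray(b_true, dtype=float)
    b_pred = np.asarray(b_pred, dtype=float)
    residual = b_pred - b_true
    mean_error = residual.mean()                    # systematic bias in fm
    rel_precision = residual.std() / b_true.mean()  # spread relative to scale
    return mean_error, rel_precision
```

Evaluating such metrics per impact-parameter bin, as done in the study, reveals where a model is biased (mean error) versus merely noisy (precision).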
Amide proton transfer-chemical exchange saturation transfer (APT-CEST) imaging provides important information for the diagnosis and monitoring of tumors. For such analysis, complete coverage of the brain is advantageous, especially when registration is performed with other magnetic resonance (MR) modalities, such as MR spectroscopy (MRS). However, the acquisition of Z-spectra across several slices via multislice imaging may be time-consuming. Therefore, in this paper, we present a new approach for fast multislice imaging, allowing us to acquire 16 slices per frequency offset within 8 s. The proposed fast CEST-EPI sequence employs a presaturation module, which drives the magnetization into the steady-state equilibrium for the first frequency offset. A second module, consisting of a single CEST pulse (for maintaining the steady-state) followed by an EPI acquisition, passes through a loop to acquire multiple slices and adjacent frequency offsets. Thus, the whole Z-spectrum can be recorded much faster than the conventional saturation scheme, which employs a presaturation for each single frequency offset. The validation of the CEST sequence parameters was performed by using the conventional saturation scheme. Subsequently, the proposed and a modified version of the conventional CEST sequence were compared in vitro on a phantom with different T1 times and in vivo on a brain tumor patient. No significant differences between both sequences could be found in vitro. The in vivo data yielded almost identical MTRasym contrasts for the white and gray matter as well as for tumor tissue. Our results show that the proposed fast CEST-EPI sequence allows for rapid data acquisition and provides similar CEST contrasts as the modified conventional scheme while reducing the scanning time by approximately 50%.
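The MTRasym contrast compared above is conventionally computed from the acquired Z-spectrum. Below is a minimal sketch of the textbook definition at the 3.5 ppm amide-proton offset, MTRasym = (S(−Δω) − S(+Δω))/S0; the paper's own post-processing pipeline may differ.

```python
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, s0, delta_ppm=3.5):
    """MTR asymmetry at +/- delta_ppm (3.5 ppm is the usual amide offset
    for APT-CEST), using linear interpolation of the sampled Z-spectrum."""
    offsets = np.asarray(offsets_ppm, dtype=float)
    z = np.asarray(z_spectrum, dtype=float)
    order = np.argsort(offsets)          # np.interp requires ascending offsets
    s_neg = np.interp(-delta_ppm, offsets[order], z[order])
    s_pos = np.interp(+delta_ppm, offsets[order], z[order])
    return (s_neg - s_pos) / s0
```

Applied voxel-wise, this yields the MTRasym maps whose white-matter, gray-matter, and tumor contrasts are compared between the fast and conventional sequences.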
A new artificial regulatory system for essential genes in yeast is described. It prevents translation of target mRNAs upon tetracycline (tc) binding to aptamers introduced into their 5'UTRs. Exploiting direct RNA–ligand interaction renders auxiliary protein factors unnecessary. Therefore, our approach is strain independent and not susceptible to interference by heterologously expressed regulatory proteins. We use a simple PCR-based strategy, which allows easy tagging of any target gene, and the level of gene expression can be adjusted through various tc-aptamer-regulated promoters. As proof of concept, five differently expressed genes were targeted, two of which could not be regulated previously. In all cases, adding tc completely prevented growth and, as shown for Nop14p, rapidly abolished de novo protein synthesis, providing a powerful tool for conditional regulation of yeast gene expression.
The Education Against Tobacco (EAT) network delivers smoking prevention advice in secondary schools, typically using the mirroring approach (i.e., a "selfie" altered with a face-aging app and shared with a class). In November 2017, however, the German assembly of EAT opted to expand its remit to include nursing students. To assess the transferability of the existing approach, we implemented it with the self-developed face-aging app "Smokerface" (mixed-methods approach) in six nursing schools. Anonymous questionnaires were used to assess the perceptions of 197 students (age 18–40 years; 83.8% female; 26.4% smokers; 23.3% daily smokers), collecting qualitative and quantitative data for our cross-sectional study. Most students perceived the intervention to be fun (73.3%), but a minority disagreed that their own animated selfie (25.9%) or the reaction of their peers (29.5%) had motivated them to stop smoking. The impact on motivation not to smoke was considerably lower than experienced with seventh graders (63.2% vs. 42.0%); notably, more smokers disagreed (45.1%) than agreed (23.5%) with this statement. Agreement rates on the motivation not to smoke item were higher in females than in males and in year 2–3 than in year 1 students. Potential improvements included a greater focus on pathology (29%) and discussion of external factors (26%). Overall, the intervention seemed to be appealing to nursing students.
This article argues that Adorno's diagnosis of the loss of the qualitative experience of time in Schoenberg's expressionism can be unfolded through two concepts from musicology: thematic work and tonal harmony. Both concepts are associated with compositional procedures by virtue of which music prior to expressionism achieved a temporal continuity that obeyed a necessity, that is, one that was neither contingent nor fortuitous. The dissolution of temporal continuity in Schoenberg's expressionist work is therefore the result of the abandonment of these same procedures.
The article proposes a rapprochement between two distinct fields of research with notable elective affinities: archaeology and philosophical aesthetics. It asks how, within Adorno's dialectical thought, the concepts of prehistory and protohistory are articulated, taking as its guiding thread the theme of cave art and its modern counterpart, namely technical reproducibility. The starting point of this rapprochement is a thought-provoking paragraph in Adorno's posthumous work “Ästhetische Theorie”, in the subsection labelled by Rolf Tiedemann as "Moderne Kunst und Industrielle Produktion", in which Adorno asserts a convergence between cave art and the photographic camera, located in objectivation (Objektivation), that is, in the act of separating the subjective act from the object that is seen. On this basis, the article's main contribution lies in identifying a kind of protohistory of technical reproducibility in the prehistoric world. From a radically dialectical perspective, then, the virtual and technological progress felt in recent decades does not represent something qualitatively new in human history, but is merely the unfolding of a tendency already contained in prehistory, which leads us to believe that we have not yet overcome the state of mythical immanence widely denounced by Adorno and Horkheimer in the “Dialektik der Aufklärung”. To demonstrate this, the article first reconstructs the main lines of force of the "Dialectic of Enlightenment", centring on the category of myth (Section 1). It then presents the relation between prehistory and protohistory in the context of Adorno's thought, especially in the works and essays of the middle period of his bibliography, such as “Minima Moralia” and “Prismen” (Section 2). Finally, it presents some of Adorno's reflections on cave art and technical reproducibility found in the Aesthetic Theory (Section 3).
The text addresses the relationship between the public sphere and the mass media in Jürgen Habermas's bibliographic corpus over the 50 years since Strukturwandel der Öffentlichkeit (1962). Its aim is to show that, contrary to some critical studies, this is not an investigative gap (an absence, abandonment, or failure to explore the theme) but a secondary, implicit treatment; that this secondary treatment of the theme is related to Habermas's original pessimistic position on the negative influence of the mass media in the depoliticization of the public sphere; and that Habermas's pessimism about the negative effects of the mass media maintains an internal connection with the original orientation of Adorno's critique of mass culture. This means that, despite reformulations and new diagnoses, Habermas's skeptical position on the democratic potential of the mass media to repoliticize the public sphere appears not to have changed in its fundamentals over these 50 years.
Society as a whole is experiencing an ethical crisis, above all a crisis of collective ethics. "Small" barbarities settle silently into society, fostered by an economic system that is, at its root, exclusionary and implacable toward those who do not somehow fit into it. The small banalizations of injustice create warning signs that Auschwitz could repeat itself, a fear present and made explicit by Adorno in his most important works. This paper aims to answer the following questions: 1) how can the deconstruction of the culture of violence and prejudice advancing throughout the world be translated into everyday education? 2) How can a pedagogy of imagination be developed that is, first and foremost, the condition of imagining the other, of imagining oneself in the other's place? 3) What epistemological bases should guide such a goal? To this end, it will be shown that a pedagogy of anti-prejudice, grounded above all in the thought of Adorno, Horkheimer, and Marcuse, makes it possible to promote an interdisciplinarity that brings reason closer to affect and that denounces the denial of prejudice, clarifying further that building a more humanized society necessarily requires the deconstruction, comprehension, and reconstruction of education. Understanding and clarifying how the average person enables, in everyday education, the exclusionary and prejudiced attitudes imposed both by the human condition and by contemporary capitalism is one of its goals. Finally, the paper proposes to affirm the possibility of an ethics for a technological society marked by semi-formation, grounded in the pedagogy of anti-prejudice, whose purpose is the formation of subjects aware of the limitations of science and of the use of technology to prevent barbarism, whose banner is the fight against intolerance and violence, and whose principal instrument is imagination.
By means of the analysis of two texts by Theodor Adorno temporally very distant from each other (one written at the beginning of his career, the other in his maturity), this article shows that the essay was for him not merely a theme of reflection, but also, and above all, a kind of matrix for his thought. Within this matrix, by drawing on a tradition begun in modernity with Montaigne and consolidated by Leibniz and the English empiricists, Adorno seeks to build, in the last phase of his philosophy, his conception of an "Anti-system", in which the indispensable coherence of thought can be kept safe from instrumentalization by the system of domination.
This article discusses current transformations in educational systems worldwide. Focusing on the European Union (EU) and the Organisation for Economic Co-operation and Development (OECD) as policy actors, it argues that these transformations imply a threefold "economization" of education policy that can be observed at every level of the educational field. The growing importance of these organizations in educational matters marks a transition to a "post-national constellation" in education as well, insofar as national educational sovereignty is, at the very least, being readjusted. However, the "economization" of education policies is not limited to bringing education closer to the needs of the economy and turning its services into tradable commodities. It also affects the operational level of education. A production logic is being implemented in the self-description of the institutions of the educational system, which are no longer bureaucratically administered establishments but are conceived as a managerially controlled commercial activity, one in which entrepreneurial action becomes necessary. This new type of administration raises the problem of the democratic legitimation of political decisions, which ideally combines three elements: the democratic, the "expertocratic", and the ethical-professional. The article discusses the consequences of a shift in the balance of these three elements in the case of Germany.
This text presents a reflection on the education of the senses based on a study of Theodor W. Adorno's “The Culture Industry” and Walter Benjamin's “The Work of Art in the Age of Its Technological Reproducibility”. We first offer a brief historical contextualization of the period in which these texts were written. We then seek to show, through Walter Benjamin's thought, how art came to educate the senses of the working classes through its technical reproducibility. In the concluding remarks, we point out the importance of reading both works for possible reflections on the formation of discourse in social and cultural decision-making today, and we also seek to understand the political function of art in the critical formation of the subject.
Vegetation responds to drought through a complex interplay of plant hydraulic mechanisms, posing challenges for model development and parameterization. We present a mathematical model that describes the dynamics of leaf water potential over time while considering the different strategies by which plant species regulate their water potentials. The model has two parameters: the parameter λ, describing the adjustment of the leaf water potential to changes in soil water potential, and the parameter Δψww, describing the typical 'well-watered' leaf water potential at non-stressed (near-zero) levels of soil water potential. Our model was tested and calibrated on 110 time-series datasets containing the leaf and soil water potentials of 66 species under drought and non-drought conditions. Our model successfully reproduces the measured leaf water potentials over time based on three different regulation strategies under drought. We found that three parameter sets derived from the measurement data reproduced the dynamics of 53% of a drought dataset and 52% of a control dataset [root mean square error (RMSE) < 0.5 MPa]. We conclude that, instead of quantifying the water potential regulation of different plant species with complex modeling approaches, a small set of parameters may be sufficient to describe the water potential regulation behavior for large-scale modeling. Thus, our approach paves the way for a parsimonious representation of the full spectrum of plant hydraulic responses to drought in dynamic vegetation models.
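The two-parameter regulation described above can be sketched numerically. The linear form used below (ψ_leaf = Δψww + λ·ψ_soil) is an assumption based on the stated roles of the two parameters, not the paper's full dynamic model; the function names and the example soil-drying values are hypothetical.

```python
import math

def leaf_water_potential(psi_soil, lam, delta_psi_ww):
    """Assumed steady-state leaf water potential (MPa) for a given soil
    water potential: psi_leaf = delta_psi_ww + lam * psi_soil.
    lam near 0 -> isohydric regulation (leaf WP insensitive to soil WP);
    lam near 1 -> anisohydric behaviour (leaf WP tracks soil WP)."""
    return delta_psi_ww + lam * psi_soil

def rmse(observed, predicted):
    """Root mean square error, as in the abstract's 0.5 MPa criterion."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# Hypothetical drying sequence: soil water potential falls from 0 to -3 MPa.
soil = [0.0, -0.5, -1.0, -2.0, -3.0]
iso = [leaf_water_potential(s, lam=0.2, delta_psi_ww=-0.5) for s in soil]
aniso = [leaf_water_potential(s, lam=1.0, delta_psi_ww=-0.5) for s in soil]
```

Comparing `iso` and `aniso` against measured leaf water potentials with `rmse` mirrors the kind of fit assessment the abstract describes, at a toy scale.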
The upcoming commissioning of the first-of-series cryomodule of the superconducting (SC) continuous-wave Helmholtz linear accelerator will demand precise alignment of the four internal SC cavities and the two SC solenoids. For optimal results, a beam-based alignment method is used to reduce the misalignment of the whole cryomodule, as well as of its individual components. This method requires a symmetric beam of low transverse emittance, which is to be formed by a collimation system. It consists of two separate plates with milled slits, aligned in the horizontal and vertical directions. The collimation system and the alignment measurements are proposed, investigated, and realized. The complete setup of this system and its integration into the existing environment at the GSI High Charge State Injector are presented, as well as the results of the recent reference measurements.
As a centerpiece of antigen processing, the ATP-binding cassette transporter associated with antigen processing (TAP) became a main target for viral immune evasion. The herpesviral ICP47 inhibits TAP function, thereby suppressing an adaptive immune response. Here, we report on a thermostable ICP47-TAP complex, generated by fusion of different ICP47 fragments. These fusion complexes allowed us to determine the direction and positioning of ICP47 in the central cavity of TAP. ICP47-TAP fusion complexes are arrested in a stable conformation, as demonstrated by MHC I surface expression, melting temperature, and the mutual exclusion of herpesviral TAP inhibitors. We unveiled a conserved region next to the active domain of ICP47 as essential for the complete stabilization of the TAP complex. Binding of the active domain of ICP47 arrests TAP in an open, inward-facing conformation, rendering the complex inaccessible to other viral factors. Based on our findings, we propose a dual interaction mechanism for ICP47. A per se destabilizing active domain inhibits the function of TAP, whereas a conserved C-terminal region additionally stabilizes the transporter. These new insights into the ICP47 inhibition mechanism can be applied in future structural analyses of the TAP complex.
Derived from a biophysical model for the motion of a crawling cell, the evolution system (⋆) u_t = Δu − ∇·(u∇v), 0 = Δv − kv + u, is investigated in a finite domain Ω ⊂ ℝ^n, n ≥ 2, with k ≥ 0. Whereas a comprehensive literature is available for cases in which (⋆) describes chemotaxis-driven population dynamics and hence is accompanied by homogeneous Neumann-type boundary conditions for both components, the presently considered modeling context, besides requiring the flux ∂_ν u − u ∂_ν v to vanish on ∂Ω, inherently involves homogeneous Dirichlet boundary conditions for the attractant v, which in the current setting corresponds to the cell's cytoskeleton being free of pressure at the boundary. This modification in the boundary setting is shown to go along with a substantial change with respect to the potential to support the emergence of singular structures: It is, inter alia, revealed that in contexts of radial solutions in balls there exist two critical mass levels, distinct from each other whenever k > 0 or n ≥ 3, that separate ranges within which (i) all solutions are global in time and remain bounded, (ii) both globally bounded and exploding solutions exist, or (iii) all nontrivial solutions blow up. While critical mass phenomena distinguishing between regimes of type (i) and (ii) belong to the well-understood characteristics of (⋆) when posed under classical no-flux boundary conditions in planar domains, the discovery of a distinct secondary critical mass level related to the occurrence of (iii) seems to have no nearby precedent. In the planar case with the domain being a disk, the analytical results are supplemented with some numerical illustrations, and it is discussed how the findings can be interpreted biophysically for the situation of a cell on a flat substrate.
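For readability, the inline system and boundary conditions above can be transcribed into display form (a plain restatement of the abstract's notation):

```latex
% The evolution system (*) from the abstract:
\begin{align*}
  u_t &= \Delta u - \nabla \cdot (u \nabla v) && \text{in } \Omega, \\
    0 &= \Delta v - k v + u                   && \text{in } \Omega,
\end{align*}
% posed in a finite domain $\Omega \subset \mathbb{R}^n$, $n \ge 2$, $k \ge 0$,
% with the boundary conditions described in the text:
\[
  \partial_\nu u - u\,\partial_\nu v = 0
  \quad \text{and} \quad
  v = 0
  \qquad \text{on } \partial\Omega .
\]
```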
Purpose: A study of real-time adaptive radiotherapy systems was performed to test the hypothesis that, across delivery systems and institutions, the dosimetric accuracy is improved with adaptive treatments over non-adaptive radiotherapy in the presence of patient-measured tumor motion.
Methods and materials: Ten institutions with robotic (2), gimbaled (2), MLC (4), or couch-tracking (2) systems used common materials, including CT and structure sets, motion traces, and planning protocols, to create a lung and a prostate plan. For each motion trace, the plan was delivered twice to a moving dosimeter, once with and once without real-time adaptation. Each measurement was compared to a static measurement, and the percentage of failed points for γ-tests was recorded.
Results: For all lung traces, all measurement sets showed improved dose accuracy, with a mean 2%/2 mm γ-fail rate of 1.6% with adaptation and 15.2% without adaptation (p < 0.001). For all prostate traces, the mean 2%/2 mm γ-fail rate was 1.4% with adaptation and 17.3% without adaptation (p < 0.001). The differences between the four system types were small, with an average 2%/2 mm γ-fail rate of <3% for all systems with adaptation for both lung and prostate.
Conclusions: The investigated systems all accounted for realistic tumor motion accurately and performed to a similar high standard, with real-time adaptation significantly outperforming non-adaptive delivery methods.
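The 2%/2 mm γ-test used as the pass/fail metric above can be illustrated in one dimension. The function below is a generic, simplified global-γ sketch on hypothetical dose profiles, not the evaluation software used in the study.

```python
import math

def gamma_fail_rate(reference, measured, positions,
                    dose_tol=0.02, dist_tol=2.0):
    """Simplified 1-D global gamma analysis (2%/2 mm by default).

    reference, measured: dose values sampled at the same positions (mm).
    For each measured point, gamma is the minimum over all reference
    points of sqrt((dose diff / dose criterion)^2 + (distance / dist_tol)^2);
    a point fails when gamma > 1. Dose differences are normalised to the
    reference maximum (global gamma). Returns the percentage of failed points.
    """
    d_max = max(reference)
    fails = 0
    for xm, dm in zip(positions, measured):
        gamma_sq = min(
            ((dm - dr) / (dose_tol * d_max)) ** 2
            + ((xm - xr) / dist_tol) ** 2
            for xr, dr in zip(positions, reference)
        )
        if math.sqrt(gamma_sq) > 1.0:
            fails += 1
    return 100.0 * fails / len(measured)
```

With identical profiles the fail rate is 0%; a single grossly wrong point in a five-point profile yields a 20% fail rate, the kind of per-measurement figure averaged in the results above.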
The prediction of protein–ligand interactions and their corresponding binding free energy is a challenging task in structure-based drug design and related applications. Docking and scoring is broadly used to propose the binding mode and underlying interactions as well as to provide a measure for ligand affinity or differentiate between active and inactive ligands. Various studies have revealed that most docking software packages reliably predict the binding mode, although scoring remains a challenge. Here, a diverse benchmark data set of 99 matched molecular pairs (3D-MMPs) with experimentally determined X-ray structures and corresponding binding affinities is introduced. This data set was used to study the predictive power of 13 commonly used scoring functions to demonstrate the applicability of the 3D-MMP data set as a valuable tool for benchmarking scoring functions.
Correlation functions provide information on the properties of mesons in vacuum and of hot nuclear matter. In this work, we present a new method to derive a well-defined spectral representation for correlation functions. Combining this method with the quark gap equation and the inhomogeneous Bethe–Salpeter equation in the rainbow-ladder approximation, we calculate in-vacuum masses of light mesons and the electrical conductivity of the quark–gluon plasma. The analysis can be extended to other observables of strong-interaction systems.
RcsF, a proposed auxiliary regulator of the regulation of capsule synthesis (rcs) phosphorelay system, is a key element for understanding the RcsC-D-A/B signaling cascade, which is responsible for the regulation of more than 100 genes and is involved in cell division, motility, biofilm formation, and virulence. The RcsC-D-A/B system is one of the most complex bacterial signal transduction pathways, consisting of several membrane-bound and soluble proteins. RcsF is a lipoprotein attached to the outer membrane and plays an important role in activating the RcsC-D-A/B pathway. The exact mechanism of activation of the rcs phosphorelay by RcsF, however, remains unknown. We have analyzed the sequence of RcsF and identified three structural elements: 1) an N-terminal membrane-anchored helix (residues 3-13), 2) a loop (residues 14-48), and 3) a C-terminal folded domain (residues 49-134). We have determined the structure of this C-terminal domain and started to investigate its interaction with potential partners. Important features of its structure are two disulfide bridges, between Cys-74 and Cys-118 and between Cys-109 and Cys-124. To evaluate the importance of this disulfide bridge network in RcsF in vivo, we have examined the ability of the full-length protein and of specific Cys mutants to initiate the rcs signaling cascade. The results indicate that the Cys-74/Cys-118 and the Cys-109/Cys-124 residues correlate pairwise with the activity of RcsF. Interaction studies showed a weak interaction with an RNA hairpin. However, no interaction could be detected with reagents that are believed to activate the rcs phosphorelay, such as lysozyme, glucose, or Zn(2+) ions.
In this article, originally an inaugural address at the Otto Suhr Institute of the Free University of Berlin, Axel Honneth outlines the program of an intersubjective theory of recognition, using this category as the conceptual core of a Critical Theory of society in which the pre-scientific experience of disrespect for social expectations is connected to the formation of emancipatory demands.
Polo-like kinase 1 (PLK1) is a crucial regulator of cell cycle progression. It is established that the activation of PLK1 depends on the coordinated action of Aurora-A and Bora. Nevertheless, very little is known about the spatiotemporal regulation of PLK1 during G2, specifically, the mechanisms that keep cytoplasmic PLK1 inactive until shortly before mitosis onset. Here, we describe PLK1 dimerization as a new mechanism that controls PLK1 activation. During the early G2 phase, Bora supports transient PLK1 dimerization, thus fine-tuning the timely regulated activation of PLK1 and modulating its nuclear entry. At late G2, the phosphorylation of T210 by Aurora-A triggers dimer dissociation and generates active PLK1 monomers that support entry into mitosis. Interfering with this critical PLK1 dimer/monomer switch prevents the association of PLK1 with importins, limiting its nuclear shuttling, and causes nuclear PLK1 mislocalization during the G2-M transition. Our results suggest a novel conformational space for the design of a new generation of PLK1 inhibitors.
This study investigates the idea of legal recognition in Axel Honneth's theory through an analysis of The Struggle for Recognition. Anchored in the theories of Hegel and Mead, Honneth establishes the role of law as a sphere of individual recognition and its potential to secure self-respect. The article reconstructs Honneth's theory with respect to the roles played by law in the theory of the Struggle for Recognition and then, drawing on Honneth's reading of Thomas Marshall's theory, analyzes the role played by fundamental subjective rights as a medium for the sedimentation and expansion of new forms of recognition and citizenship.
The regulation of cellular copper homeostasis is crucial in biology. Impairments lead to severe dysfunctions and are known to affect aging and development. Previously, a loss-of-function mutation in the gene encoding the copper-sensing and copper-regulated transcription factor GRISEA of the filamentous fungus Podospora anserina was reported to lead to cellular copper depletion and a pleiotropic phenotype with hypopigmentation of the mycelium and the ascospores, impaired fertility, and an approximately 60% increase in lifespan compared with the wild type. This phenotype is linked to a switch from copper-dependent standard respiration to an alternative respiration, leading to reduced generation of both reactive oxygen species (ROS) and adenosine triphosphate (ATP). We performed a genome-wide comparative transcriptome analysis of a wild-type strain and the copper-depleted grisea mutant. We unambiguously assigned 9,700 sequences of the transcriptome in both strains to the more than 10,600 predicted and annotated open reading frames of the P. anserina genome, indicating 90% coverage of the transcriptome. 4,752 of the transcripts differed significantly in abundance, with 1,156 transcripts differing at least 3-fold. Selected genes were investigated by qRT-PCR analyses. Apart from this general characterization, we analyzed the data with special emphasis on molecular pathways related to the grisea mutation, taking advantage of the available complete genomic sequence of P. anserina. This analysis verified but also corrected conclusions from earlier data obtained by single-gene analysis, identified new candidate factors of the cellular copper homeostasis system, including target genes of the transcription factor GRISEA, and provides a rich reference source of quantitative data for further in-depth investigations.
Overall, the present study demonstrates the importance of systems biology approaches even in cases where mutations in single genes are analyzed to explain the underlying mechanisms controlling complex biological processes such as aging and development.
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent, in the first place, to survival. Assuming that the resting-state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to a non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of working in the vicinity of a second-order phase transition, such as signal enhancements, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term 'self-regulated' in this context, instead of 'self-organized'.
We present a deterministic workflow for genotyping single and double transgenic individuals directly at birth that prevents overproduction and reduces the number of wasted animals by two-thirds. In our vector concept, transgenes are accompanied by two of four clearly distinguishable transformation markers that are embedded in interwoven but incompatible Lox site pairs. Following Cre-mediated recombination, the genotypes of single and double transgenic individuals were successfully identified by specific marker combinations in 461 scorings.
Here we present a formal description of Biremis panamae Barka, Witkowski et Weisenborn sp. nov., which was isolated from the marine littoral environment of the Pacific Ocean coast of Panama. The description is based on morphology (light and electron microscopy) and the rbcL, psbC and SSU sequences of one clone of this species. The new species is included in Biremis due to its morphological features; i.e. two marginal rows of foramina, chambered striae, and girdle composed of numerous punctate copulae. The new species also possesses a striated valve face which is not seen in most known representatives of marine littoral Biremis species. In this study we also present the relationship of Biremis to other taxa using morphology, DNA sequence data and observations of auxosporulation. Our results based on these three sources point to an evolutionary relationship between Biremis, Neidium and Scoliopleura. The unusual silicified incunabular caps present in them are known otherwise only in Muelleria, which is probably related to the Neidiaceae and Scoliotropidaceae. We also discuss the relationship between Biremis and the recently described Labellicula and Olifantiella.
Reactive oxygen species (ROS) are constant by-products of aerobic life. In excess, ROS lead to cytotoxic protein aggregates, which are a hallmark of ageing in animals and linked to age-related pathologies in humans. Acylamino acid-releasing enzymes (AARE) are bifunctional serine proteases, acting on oxidized proteins. AARE are found in all domains of life, albeit under different names, such as acylpeptide hydrolase (APEH/ACPH), acylaminoacyl peptidase (AAP), or oxidized protein hydrolase (OPH). In humans, AARE malfunction is associated with age-related pathologies, while their function in plants is less clear. Here, we provide a detailed analysis of AARE genes in the plant lineage and an in-depth analysis of AARE localization and function in the moss Physcomitrella and the angiosperm Arabidopsis. AARE loss-of-function mutants have not been described for any organism so far. We generated and analysed such mutants and describe a connection between AARE function, aggregation of oxidized proteins and plant ageing, including accelerated developmental progression and reduced life span. Our findings complement similar findings in animals and humans, and suggest a unified concept of ageing may exist in different life forms.
Organ-on-a-chip technology has the potential to accelerate pharmaceutical drug development, improve the clinical translation of basic research, and provide personalized intervention strategies. In the last decade, big pharma has engaged in many academic research cooperations to develop organ-on-a-chip systems for future drug discoveries. Although most organ-on-a-chip systems present proof-of-concept studies, miniaturized organ systems still need to demonstrate translational relevance and predictive power in clinical and pharmaceutical settings. This review explores whether microfluidic technology succeeded in paving the way for developing physiologically relevant human in vitro models for pharmacology and toxicology in biomedical research within the last decade. Individual organ-on-a-chip systems are discussed, focusing on relevant applications and highlighting their ability to tackle current challenges in pharmacological research.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
Background: Modulation of cortical excitability by transcranial magnetic stimulation (TMS) is used for investigating human brain functions. A common observation is the high variability of long-term depression (LTD)-like changes in human (motor) cortex excitability. This study aimed at analyzing the response subgroup distribution after paired continuous theta burst stimulation (cTBS) as a basis for subject selection.
Methods: The effects of paired cTBS using 80% active motor threshold (AMT) in 31 healthy volunteers were assessed at the primary motor cortex (M1) corresponding to the representation of the first dorsal interosseous (FDI) muscle of the left hand, before and up to 50 min after plasticity induction. The changes in motor evoked potentials (MEPs) were analyzed using machine-learning derived methods implemented as Gaussian mixture modeling (GMM) and computed ABC analysis.
Results: The probability density distribution of the MEP changes from baseline was trimodal, showing a clear separation at 80.9%. The n = 6 subjects displaying at least this degree of LTD-like changes were classified as responders. By contrast, n = 7 subjects displayed a paradoxical response with an increase in MEP amplitude. Reassessment using ABC analysis as an alternative approach identified the same n = 6 subjects as a distinct category.
Conclusion: Depressive effects of paired cTBS using 80% AMT endure for at least 50 min, but only in a small subgroup of healthy subjects. Hence, plasticity induction by paired cTBS might not reflect a general mechanism of human motor cortex excitability. A mathematically supported criterion is proposed to select responders for enrolment in assessments of human brain functional networks using virtual brain lesions.
Based on accumulating evidence of a role of lipid signaling in many physiological and pathophysiological processes, including psychiatric diseases, the present data-driven analysis was designed to gather the information needed to develop a prospective biomarker, using a targeted lipidomics approach covering different lipid mediators. Using unsupervised methods of data structure detection, implemented as hierarchical clustering, emergent self-organizing maps, and principal component analysis, a cluster structure was found in the input data space comprising plasma concentrations of d = 35 different lipid markers of various classes acquired in n = 94 subjects with the clinical diagnoses of depression, bipolar disorder, ADHD, or dementia, or in healthy controls. The structure separated patients with dementia from the other clinical groups, indicating that dementia is associated with a distinct pattern of lipid mediator plasma concentrations, possibly providing a basis for a future biomarker. This hypothesis was subsequently assessed using supervised machine-learning methods, implemented as random forests or principal component analysis followed by computed ABC analysis for feature selection, and as random forests, k-nearest neighbors, support vector machines, multilayer perceptrons, and naïve Bayesian classifiers to estimate whether the selected lipid mediators provide sufficient information that the diagnosis of dementia can be established at a higher accuracy than by guessing. This succeeded with a set of d = 7 markers, comprising GluCerC16:0, Cer24:0, Cer20:0, Cer16:0, Cer24:1, C16 sphinganine, and LacCerC16:0, at an accuracy of 77%.
By contrast, using random lipid markers reduced the diagnostic accuracy to values of 65% or less, whereas training the algorithms with randomly permuted data led to complete failure to diagnose dementia, emphasizing that the selected lipid mediators display a particular pattern in this disease, possibly qualifying them as biomarkers.
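The computed ABC analysis used above for feature selection categorizes items by their cumulative contribution. The sketch below is a rough Pareto-style stand-in with fixed 80%/95% cut points, whereas the published method derives the set limits from the data itself; the function name and the example importance values are hypothetical.

```python
def abc_sets(importances, a_cut=0.8, b_cut=0.95):
    """Rough ABC categorisation of feature importances (a simplified
    stand-in for computed ABC analysis, which determines the cut points
    from the data rather than using fixed thresholds).

    importances: dict mapping feature name -> non-negative importance.
    Returns (A, B, C) lists of names, in order of decreasing importance:
    set A carries most of the total importance, set C the remainder.
    """
    items = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(v for _, v in items)
    a, b, c = [], [], []
    cum = 0.0
    for name, value in items:
        cum += value / total
        if cum <= a_cut or not a:  # the top item always lands in A
            a.append(name)
        elif cum <= b_cut:
            b.append(name)
        else:
            c.append(name)
    return a, b, c
```

In a workflow like the one above, only the set-A markers would be passed on to the classifiers; the ceramide marker names below merely echo those listed in the abstract, with made-up importances.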
The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity owing to its lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of the incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33%, corresponding to the Gini index of the uniform distribution, and secondly, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution, as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be converted back into the classical index to preserve comparability with previous research.
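The reference point used above, a Gini index of 1/3 (33.33%) for the uniform distribution, and the log-normal connection can be checked with a short sketch. The empirical Gini formula and the closed-form log-normal Gini, G = erf(σ/2), are standard results; the function names are ours, and the UGini itself is specific to the paper and not reproduced here.

```python
import math

def gini(values):
    """Empirical Gini index (helper name ours): closed-form on sorted data."""
    x = sorted(values)
    n = len(x)
    weighted = sum(i * xi for i, xi in enumerate(x, start=1))
    return 2.0 * weighted / (n * sum(x)) - (n + 1.0) / n

def gini_lognormal(sigma):
    """Closed-form Gini index of a log-normal distribution: erf(sigma / 2)."""
    return math.erf(sigma / 2.0)

# A uniform income distribution has a Gini index of 1/3 (33.33 %), which is
# the reference value on which the UGini standardization is centered.
uniform_sample = [i / 1000.0 for i in range(1, 1001)]
print(round(gini(uniform_sample), 3))  # close to 1/3
```

The log-normal form also makes the monotone link between the dispersion parameter σ and inequality explicit, which is what allows a dominance-preserving standardization.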
Persistent and, in particular, neuropathic pain is a major healthcare problem with still insufficient pharmacological treatment options. This has triggered research activities aimed at finding analgesics with a novel mechanism of action. Results of these efforts will need to pass through the phases of drug development, in which experimental human pain models, e.g., chemical hyperalgesia induced by capsaicin, are established components. We aimed at ranking the various readouts of a human capsaicin‐based pain model with respect to the most relevant information about the effects of a potential reference analgesic. In a placebo‐controlled, randomized cross‐over study, seven different pain‐related readouts were acquired in 16 healthy individuals before and after oral administration of 300 mg pregabalin. The sizes of the effect on pain induced by intradermal injection of capsaicin were quantified by calculating Cohen's d. While in four of the seven pain‐related parameters pregabalin provided a small effect, judged by values of Cohen's d exceeding 0.2, an item categorization technique implemented as computed ABC analysis identified the pain intensities in the area of secondary hyperalgesia and of allodynia as the most suitable parameters to quantify the analgesic effects of pregabalin. Results of this study provide further support for the ability of the intradermal capsaicin pain model to show analgesic effects of pregabalin. The results can serve as a basis for the design of studies in which the inclusion of this particular pain model and pregabalin is planned.
An easy-to-use model to evaluate conductivities at high and middle latitudes in the height range 70–100 km is presented. It is based on electron density profiles obtained with the EISCAT VHF radar during 11 years and on the neutral atmospheric model MSIS95. The model uses solar zenith angle, geomagnetic activity and season as input parameters. It was mainly constructed to study the properties of Schumann resonances that depend on such conductivity profiles.
Dual-task paradigms encompass a broad range of approaches to measure cognitive load in instructional settings. As a common characteristic, an additional task is implemented alongside a learning task to capture the individual's unengaged cognitive capacities during the learning process. Measures to determine these capacities are, for instance, reaction times and interval errors on the additional task, while performance on the learning task is to be maintained. In contrast to retrospectively applied subjective ratings, the continuous assessment within a dual-task paradigm allows simultaneous monitoring of changes in performance related to previously defined tasks. Following Cognitive Load Theory, these changes in performance correspond to cognitive changes related to the establishment of permanent knowledge structures. Yet the current state of research indicates a clear lack of standardization of dual-task paradigms across study settings and task procedures. Typically, dual-task designs are adapted uniquely for each study, albeit with some similarities across different settings and task procedures. These similarities range from the type of modality to the frequency used for the additional task. This results in a lack of validity and comparability between studies due to arbitrarily chosen frequency patterns without a sound scientific basis, potentially confounding variables, or unresolved adaptation potentials for future studies. In this paper, the lack of validity and comparability between dual-task settings is presented, current taxonomies are compared, and future steps towards better standardization and implementation are discussed.
This article aims to comparatively analyze the similarities in the critiques of liberal democracy found in selected works of Carl Schmitt (1888-1985) and Robert Kurz (1943-2012). Despite the former's close association with the Nazi regime after 1933, and despite the latter usually being characterized as a Marxist thinker (albeit one highly critical of "orthodox" Marxism), numerous similarities can be identified between the two when they set out to analyze the characteristics of the parliamentary liberalism of twentieth-century democracies. One hypothesis that may explain these similarities is the influence exerted by Schmitt on several theorists of the Frankfurt School, with whom Kurz frequently engages in his writings and who inspired some of his reflections, in particular Walter Benjamin, Theodor Adorno and Max Horkheimer, although Schmitt also influenced Franz Neumann, Otto Kirchheimer, Karl Korsch and Herbert Marcuse. Another line of interpretation pursued here concerns the possibility that Schmitt, in his theories of the state and of law, encountered the epistemological limits of modern liberalism, which constitute the main object of Kurz's research and were a recurring theme in the writings of the Frankfurt theorists.
This article analyzes Adorno's critique of Heidegger's ontology. To this end, it takes the Heideggerian interpretation of Kant as its leitmotiv. We seek to show that, for Adorno, the construction of fundamental ontology on the basis of Kant's philosophy is an illegitimate interpretation of the latter. Finally, the article points to a possible way out, within Adorno's philosophy, of the problem of the need to ground philosophical discourse. This way out rests on recognizing the importance of art for the construction of universality in philosophy.
A critical role for VEGF and VEGFR2 in NMDA receptor synaptic function and fear-related behavior
(2016)
Vascular endothelial growth factor (VEGF) is known to be required for the action of antidepressant therapies but its impact on brain synaptic function is poorly characterized. Using a combination of electrophysiological, single-molecule imaging and conditional transgenic approaches, we identified the molecular basis of the VEGF effect on synaptic transmission and plasticity. VEGF increases the postsynaptic responses mediated by the N-methyl-d-aspartate type of glutamate receptors (GluNRs) in hippocampal neurons. This is concurrent with the formation of new synapses and with the synaptic recruitment of GluNR expressing the GluN2B subunit (GluNR-2B). VEGF induces a rapid redistribution of GluNR-2B at synaptic sites by increasing the surface dynamics of these receptors within the membrane. Consistently, silencing the expression of the VEGF receptor 2 (VEGFR2) in neural cells impairs hippocampal-dependent synaptic plasticity and consolidation of emotional memory. These findings demonstrate the direct involvement of VEGF signaling in neurons, via VEGFR2, in proper synaptic function. They highlight the potential of VEGF as a key regulator of GluNR synaptic function and suggest a role for VEGF in new therapeutic approaches targeting GluNR in depression.
Review of: Psychology of Retention: Theory, Research and Practice / Melinde Coetzee, Ingrid L. Potgieter and Nadia Ferreira (Eds.), ISBN: 978-3-319-98919-8, Publisher: Springer Nature, 2018, R1600 (price in South Africa)
Background: Invasive off- or on-pump cardiac surgery (elective and emergency procedures, excluding transplants) is routinely performed to treat complications of ischaemic heart disease. Randomised controlled trials (RCTs) evaluate the effectiveness of treatments in the setting of cardiac surgery. However, the impact of RCTs is weakened by heterogeneity in outcome measurement and reporting, which hinders comparison across trials. Core outcome sets (COS; a set of outcomes that should be measured and reported, as a minimum, in clinical trials for a specific clinical field) help reduce this problem. In light of the above, we developed a COS for cardiac surgery effectiveness trials.
Methods: Potential core outcomes were identified a priori by analysing data on 371 RCTs of 58,253 patients. We reached consensus on core outcomes in an international three-round eDelphi exercise. Outcomes for which at least 60% of the participants chose the response option "no" and less than 20% chose the response option "yes" were excluded.
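The exclusion rule stated above can be written down directly as a predicate; the outcome names and vote shares below are invented for illustration.

```python
def excluded(pct_no, pct_yes):
    """eDelphi exclusion rule from the abstract: an outcome is dropped when at
    least 60% of participants answered 'no' and fewer than 20% answered 'yes'."""
    return pct_no >= 60.0 and pct_yes < 20.0

# Hypothetical vote shares (outcome names invented for illustration):
votes = {"renal failure": (65.0, 12.0),
         "mortality": (5.0, 92.0),
         "length of stay": (62.0, 25.0)}
kept = [name for name, (no, yes) in votes.items() if not excluded(no, yes)]
```

Note that both conditions must hold for exclusion: a 62% "no" share alone does not drop an outcome if 20% or more still voted "yes".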
Results: Eighty-six participants from 23 different countries, including adult cardiac patients, cardiac surgeons, anaesthesiologists, nursing staff and researchers, contributed to this eDelphi. The panel reached consensus on four core outcomes to be included in adult cardiac surgery trials: 1) a measure of mortality, 2) a measure of quality of life, 3) a measure of hospitalisation and 4) a measure of cerebrovascular complication.
Conclusion: This study used robust research methodology to develop a minimum core outcome set for clinical trials evaluating the effectiveness of treatments in the setting of cardiac surgery. As a next step, appropriate outcome measurement instruments have to be selected.
Commercialization of consumers’ personal data in the digital economy poses serious conceptual and practical challenges to the traditional approach of European Union (EU) Consumer Law. This article argues that mass-scale, automated, algorithmic decision-making casts doubt on the foundational paradigm of EU consumer law: consent and autonomy. Moreover, it poses threats of discrimination and undermining of consumer privacy. It is argued that the recent legislative reaction by the EU Commission, in the form of the ‘New Deal for Consumers’, was a step in the right direction, but fell short due to its continued reliance on consent and autonomy and its failure to adequately protect consumers from indirect discrimination. It is posited that a focus on creating a contracting landscape where the consumer may be properly informed in material respects is required, which in turn necessitates blending the approaches of competition, consumer protection and data protection laws.
A consistent muscle activation strategy underlies crawling and swimming in Caenorhabditis elegans
(2014)
Although undulatory swimming is observed in many organisms, the neuromuscular basis for undulatory movement patterns is not well understood. To better understand the basis for the generation of these movement patterns, we studied muscle activity in the nematode Caenorhabditis elegans. Caenorhabditis elegans exhibits a range of locomotion patterns: in low viscosity fluids the undulation has a wavelength longer than the body and propagates rapidly, while in high viscosity fluids or on agar media the undulatory waves are shorter and slower. Theoretical treatment of observed behaviour has suggested a large change in force–posture relationships at different viscosities, but analysis of bend propagation suggests that short-range proprioceptive feedback is used to control and generate body bends. How muscles could be activated in a way consistent with both these results is unclear. We therefore combined automated worm tracking with calcium imaging to determine muscle activation strategy in a variety of external substrates. Remarkably, we observed that across locomotion patterns spanning a threefold change in wavelength, peak muscle activation occurs approximately 45° (1/8th of a cycle) ahead of peak midline curvature. Although the location of peak force is predicted to vary widely, the activation pattern is consistent with required force in a model incorporating putative length- and velocity-dependence of muscle strength. Furthermore, a linear combination of local curvature and velocity can match the pattern of activation. This suggests that proprioception can enable the worm to swim effectively while working within the limitations of muscle biomechanics and neural control.
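The reported 45° (1/8-cycle) lead is exactly what a linear combination of local curvature and its time derivative produces when the two terms contribute equally; a minimal sketch, assuming sinusoidal bending (the coefficients are illustrative, not fitted worm data):

```python
import math

# For sinusoidal bending kappa(t) = sin(t), an activation built as the linear
# combination a*kappa + b*dkappa/dt equals a*sin(t) + b*cos(t), which peaks
# ahead of the curvature by a phase of atan2(b, a); with equal weights the
# lead is 45 degrees, i.e. 1/8 of a cycle, as reported in the abstract.
def phase_lead_deg(a, b):
    return math.degrees(math.atan2(b, a))

# Numerical check with a = b = 1: locate the peak of the combined signal.
dt = 1e-4
ts = [i * dt for i in range(int(2 * math.pi / dt))]
act = [math.sin(t) + math.cos(t) for t in ts]
t_peak = ts[max(range(len(ts)), key=lambda i: act[i])]
lead_deg = math.degrees(math.pi / 2 - t_peak)  # sin(t) itself peaks at pi/2
```

This is only the trigonometric identity behind the "linear combination of local curvature and velocity" statement, not a biomechanical model of the worm.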
Introduction: Encouraged by the change in licensing regulations, practical professional skills in Germany have received higher priority and are therefore increasingly taught in medical schools. This created a growing need for standardization. On the initiative of the German skills labs, the German Medical Association Committee for practical skills was established and developed a competency-based catalogue of learning objectives, whose origin and structure are described here.
The goal of the catalogue is to define the practical skills to be acquired in undergraduate medical education and to give medical schools a rational basis for planning the resources necessary to teach them.
Methods: Building on already existing German catalogues of learning objectives, a multi-iterative condensation process corresponding to the development of S1 guidelines was performed in order to obtain broad professional and political support.
Results: 289 different practical learning objectives were identified and assigned to twelve different organ systems, with three areas overlapping with other fields of expertise and one area of cross-organ-system skills. Each objective was assigned one of three levels of depth and one of three chronological dimensions, and the objectives were matched with their Swiss and Austrian equivalents.
Discussion: This consensus statement may provide the German faculties with a basis for planning the teaching of practical skills and is an important step towards a national standard of medical learning objectives.
Looking ahead: The consensus statement may have a formative effect on medical schools, encouraging them to teach practical skills and to plan the necessary resources accordingly.
Publicly available compound and bioactivity databases provide an essential basis for data-driven applications in life-science research and drug design. By analyzing several bioactivity repositories, we discovered differences in compound and target coverage advocating the combined use of data from multiple sources. Using data from ChEMBL, PubChem, IUPHAR/BPS, BindingDB, and Probes & Drugs, we assembled a consensus dataset focusing on small molecules with bioactivity on human macromolecular targets. This allowed an improved coverage of compound space and targets, and an automated comparison and curation of structural and bioactivity data to reveal potentially erroneous entries and increase confidence. The consensus dataset comprised more than 1.1 million compounds with over 10.9 million bioactivity data points with annotations on assay type and bioactivity confidence, providing a useful ensemble for computational applications in drug design and chemogenomics.
Ubiquitin fold modifier 1 (UFM1) is a member of the ubiquitin-like protein family. UFM1 undergoes a cascade of enzymatic reactions including activation by UBA5 (E1), transfer to UFC1 (E2) and selective conjugation to a number of target proteins via UFL1 (E3) enzymes. Despite the importance of ufmylation in a variety of cellular processes and its role in the pathogenicity of many human diseases, the molecular mechanisms of the ufmylation cascade remain unclear. In this study, we focused on the biophysical and biochemical characterization of the interaction between UBA5 and UFC1. We explored the hypothesis that the unstructured C-terminal region of UBA5 serves as a regulatory region, controlling cellular localization of the elements of the ufmylation cascade and effective interaction between them. We found that the last 20 residues in UBA5 are pivotal for binding to UFC1 and can accelerate the transfer of UFM1 to UFC1. We solved the structure of a complex of UFC1 and a peptide spanning the last 20 residues of UBA5 by NMR spectroscopy. This structure, in combination with additional NMR titration and isothermal titration calorimetry experiments, revealed the mechanism of interaction and confirmed the importance of the C-terminal unstructured region in UBA5 for the ufmylation cascade.
Background: The differentiation between Gaucher disease type 3 (GD3) and type 1 is challenging because pathognomonic neurologic symptoms may be subtle and develop at late stages. The ophthalmologist plays a crucial role in identifying the typical impairment of horizontal saccadic eye movements, followed by vertical ones. Little is known about further ocular involvement. The aim of this monocentric cohort study is to comprehensively describe the ophthalmological features of Gaucher disease type 3. We suggest recommendations for a set of useful ophthalmologic investigations for diagnosis and follow-up, and for saccadometry parameters enabling a correlation to disease severity.
Methods: Sixteen patients with biochemically and genetically diagnosed GD3 completed ophthalmologic examination including optical coherence tomography (OCT), clinical oculomotor assessment and saccadometry by infrared based video-oculography. Saccadic peak velocity, gain and latency were compared to 100 healthy controls, using parametric tests. Correlations between saccadic assessment and clinical parameters were calculated.
Results: Peripapillary subretinal drusen-like deposits with retinal atrophy (2/16), preretinal opacities of the vitreous (4/16) and increased retinal vessel tortuosity (3/16) were found. Oculomotor pathology with clinically slowed saccades was more frequent horizontally (15/16) than vertically (12/16). Saccadometry revealed slowed peak velocity compared to 100 controls (most evident horizontally and downwards). Saccades were delayed and hypometric. Peak velocity (both up- and downwards) correlated best with SARA (scale for the assessment and rating of ataxia), disease duration, mSST (modified Severity Scoring Tool) and reduced IQ. Motility restriction occurred in 8/16 patients, affecting horizontal eye movements, while vertical motility restriction was seen less frequently. Impaired abduction presented with esophoria or esotropia, the latter in combination with reduced stereopsis.
Conclusions: Vitreoretinal lesions may occur in 25% of Gaucher type 3 patients, while we additionally observed subretinal lesions with retinal atrophy in advanced disease stages. Vertical saccadic peak velocity seems the most promising "biomarker" for neuropathic manifestation for future longitudinal studies, as it correlates best with other neurologic symptoms. Apart from the well documented abduction deficit in Gaucher type 3 we were able to demonstrate motility impairment in all directions of gaze.
Background: Alterations in the DNA methylation pattern are a hallmark of leukemias and lymphomas. However, most epigenetic studies in hematologic neoplasms (HNs) have focused either on the analysis of few candidate genes or on many genes and few HN entities, and comprehensive studies are required. Methodology/Principal Findings: Here, we report for the first time a microarray-based DNA methylation study of 767 genes in 367 HNs diagnosed with 16 of the most representative B-cell (n = 203), T-cell (n = 30), and myeloid (n = 134) neoplasias, as well as 37 samples from different cell types of the hematopoietic system. Using appropriate controls of B-, T-, or myeloid cellular origin, we identified a total of 220 genes hypermethylated in at least one HN entity. In general, promoter hypermethylation was more frequent in lymphoid than in myeloid malignancies, with germinal center mature B-cell lymphomas as well as B- and T-precursor lymphoid neoplasias being the entities with the highest frequency of gene-associated DNA hypermethylation. We also observed a significant correlation between the number of hypermethylated and hypomethylated genes in several mature B-cell neoplasias, but not in precursor B- and T-cell leukemias. Most of the genes becoming hypermethylated contained promoters with high CpG content, and a significant fraction of them are targets of the polycomb repressor complex. Interestingly, T-cell prolymphocytic leukemias show low levels of DNA hypermethylation and a comparatively large number of hypomethylated genes, many of them showing an increased gene expression. Conclusions/Significance: We have characterized the DNA methylation profile of a wide range of different HN entities. As well as identifying genes showing aberrant DNA methylation in certain HN subtypes, we also detected six genes (DBC1, DIO3, FZD9, HS3ST2, MOS, and MYOD1) that were significantly hypermethylated in B-cell, T-cell, and myeloid malignancies.
These might therefore play an important role in the development of different HNs.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice nucleating particles (INPs). However, an inter-comparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nucleation research UnIT), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. Seventeen measurement methods were involved in the data inter-comparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while ten other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing dataset was evaluated using the ice nucleation active surface-site density (ns) to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers nine orders of magnitude in ns.
Our inter-comparison results revealed a discrepancy between suspension and dry-dispersed particle measurements for this mineral dust. While the agreement was good below ~ −26 °C, the ice nucleation activity, expressed in ns, was smaller for the wet suspended samples and higher for the dry-dispersed aerosol samples between about −26 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −26 and −18 °C is discussed. In general, the seventeen immersion freezing measurement techniques deviate, within the range of about 7 °C in terms of temperature, by three orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency (i.e., ns) of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra, and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. A multiple exponential distribution fit is expressed as ns(T) = exp(23.82 × exp(−exp(0.16 × (T + 17.49))) + 1.39) based on the specific surface area and ns(T) = exp(25.75 × exp(−exp(0.13 × (T + 17.17))) + 3.34) based on the geometric area (ns and T in m−2 and °C, respectively). These new fits, constrained by using an identical reference sample, will help to compare IN measurement methods that are not included in the present study and, thereby, IN data from future IN instruments.
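The two fitted parameterizations quoted above can be evaluated directly; a minimal sketch (the function names are ours, the coefficients are taken verbatim from the fits):

```python
import math

def ns_bet(T):
    """Fit based on the specific (BET) surface area; ns in m^-2, T in deg C."""
    return math.exp(23.82 * math.exp(-math.exp(0.16 * (T + 17.49))) + 1.39)

def ns_geo(T):
    """Fit based on the geometric surface area; ns in m^-2, T in deg C."""
    return math.exp(25.75 * math.exp(-math.exp(0.13 * (T + 17.17))) + 3.34)

# ns rises steeply as temperature drops through the -20 to -27 deg C region,
# spanning many orders of magnitude over the measured range.
for T in (-15, -20, -27, -35):
    print(T, f"{ns_bet(T):.3g}", f"{ns_geo(T):.3g}")
```

Both curves reproduce the described shape: a steep increase between −20 and −27 °C followed by a gentler slope at colder temperatures.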
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
Analysis of whole cell lipid extracts of bacteria by means of ultra-performance (UP)LC-MS allows a comprehensive determination of the lipid molecular species present in the respective organism. The data allow conclusions on its metabolic potential as well as the creation of lipid profiles, which visualize the organism's response to changes in internal and external conditions. Herein, we describe: i) a fast reversed phase UPLC-ESI-MS method suitable for detection and determination of individual lipids from whole cell lipid extracts of all polarities ranging from monoacylglycerophosphoethanolamines to TGs; ii) the first overview of a wide range of lipid molecular species in vegetative Myxococcus xanthus DK1622 cells; iii) changes in their relative composition in selected mutants impaired in the biosynthesis of α-hydroxylated FAs, sphingolipids, and ether lipids; and iv) the first report of ceramide phosphoinositols in M. xanthus, a lipid species previously found only in eukaryotes.
Covalent inhibition has become more accepted in the past two decades, as illustrated by the clinical approval of several irreversible inhibitors designed to covalently modify their target. Elucidation of the structure-activity relationship and potency of such inhibitors requires a detailed kinetic evaluation. Here, we elucidate the relationship between the experimental read-out and the underlying inhibitor binding kinetics. Interactive kinetic simulation scripts are employed to highlight the effects of in vitro enzyme activity assay conditions and inhibitor binding mode, thereby showcasing which assumptions and corrections are crucial. Four stepwise protocols to assess the biochemical potency of (ir)reversible covalent enzyme inhibitors targeting a nucleophilic active site residue are included, with accompanying data analysis tailored to the covalent binding mode. Together, this will serve as a guide to make an educated decision regarding the most suitable method to assess covalent inhibition potency. © 2022 The Authors. Current Protocols published by Wiley Periodicals LLC.
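The kinetic quantities that such protocols extract follow the standard two-step scheme for irreversible covalent inhibition, with kobs = kinact·[I]/(KI + [I]) and a product progress curve [P](t) = (v0/kobs)·(1 − exp(−kobs·t)). A minimal sketch of these textbook relationships, with illustrative parameter values, not taken from the protocols themselves:

```python
import math

def k_obs(I, k_inact, K_I):
    """Pseudo-first-order inactivation rate of a two-step irreversible inhibitor."""
    return k_inact * I / (K_I + I)

def product_progress(t, v0, kobs):
    """Product formed under irreversible inhibition: (v0/kobs)*(1 - exp(-kobs*t))."""
    if kobs == 0.0:
        return v0 * t  # uninhibited control: linear progress curve
    return (v0 / kobs) * (1.0 - math.exp(-kobs * t))

# Illustrative values: at [I] = K_I the observed rate is half of k_inact; at
# saturating [I] it approaches k_inact, and the progress curve plateaus at
# v0/kobs, which is the curvature that covalent-specific fitting exploits.
k_inact, K_I = 0.01, 1e-6  # s^-1, M
half = k_obs(K_I, k_inact, K_I)
plateau = product_progress(1e9, 2.0, k_obs(1e-3, k_inact, K_I))
```

Fitting kobs at several inhibitor concentrations and regressing against [I] separates the affinity term KI from the chemical reactivity term kinact, which is why single-point IC50 values are insufficient for covalent inhibitors.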
Apigenin (4′,5,7-trihydroxyflavone) (Api) is an important component of the human diet, being distributed in a wide number of fruits, vegetables and herbs, with the most important sources being represented by chamomile, celery, celeriac and parsley. This study was designed for a comprehensive evaluation of Api as an antiproliferative, proapoptotic, antiangiogenic and immunomodulatory phytocompound. In the set experimental conditions, Api presents antiproliferative activity against the A375 human melanoma cell line, a G2/M arrest of the cell cycle and cytotoxic events as revealed by the lactate dehydrogenase release. Caspase 3 activity was inversely proportional to the tested Api doses, namely 30 μM and 60 μM. Phenomena of early apoptosis, late apoptosis and necrosis following incubation with Api were detected by Annexin V-PI double staining. The flavone interfered with the mitochondrial respiration by modulating both glycolytic and mitochondrial pathways for ATP production. The metabolic activity of human dendritic cells (DCs) under LPS-activation was clearly attenuated by stimulation with high concentrations of Api. IL-6 and IL-10 secretion was almost completely blocked while TNF alpha secretion was reduced by about 60%. Api elicited antiangiogenic properties in a dose-dependent manner. Both concentrations of Api influenced tumour cell growth and migration, inducing a limited tumour area inside the application ring, associated with a low number of capillaries.
Translation is an important step in gene expression. Translation initiation is phylogenetically diverse: currently, five different initiation mechanisms are known. For bacteria, the three initiation factors IF1–IF3 are described, in contrast to archaea and eukaryotes, which contain a considerably higher number of initiation factor genes. As eukaryotes and archaea use non-overlapping sets of initiation mechanisms, orthologous proteins of the two domains do not necessarily fulfill the same function. The genome of Haloferax volcanii contains 14 annotated genes that encode (subunits of) initiation factors. To gain a comprehensive overview of the importance of these genes, an attempt was made to construct single-gene deletion mutants of all of them. In nine cases, single deletion mutants were successfully constructed, showing that the respective genes are not essential. In contrast, the genes encoding the initiation factors aIF1, aIF2γ, aIF5A, aIF5B, and aIF6 were found to be essential. The factors aIF1A and aIF2β are each encoded by two orthologous genes in H. volcanii. Attempts to generate double mutants failed in both cases, indicating that these factors are also essential. A translatome analysis of one of the single aIF2β deletion mutants revealed that the translational efficiency of the second ortholog was enhanced tenfold; thus, the two proteins can replace one another. The phenotypes of the single deletion mutants also revealed that the two aIF1As and the two aIF2βs have redundant but not identical functions. Remarkably, the gene encoding aIF2α, a subunit of aIF2 involved in initiator tRNA binding, could be deleted. However, the mutant had a severe growth defect under all tested conditions. Conditional depletion mutants were generated for the five essential genes. The phenotypes of the deletion and conditional depletion mutants were compared to those of the wild-type under various conditions, and growth characteristics are discussed.
In this work we present, for the first time, the non-perturbative renormalization for the unpolarized, helicity and transversity quasi-PDFs, in an RI′ scheme. The proposed prescription addresses simultaneously all aspects of renormalization: logarithmic divergences, finite renormalization as well as the linear divergence which is present in the matrix elements of fermion operators with Wilson lines. Furthermore, for the case of the unpolarized quasi-PDF, we describe how to eliminate the unwanted mixing with the twist-3 scalar operator.
We employ one-loop perturbation theory to compute the conversion factor that brings the renormalization functions to the MS̄ scheme at a scale of 2 GeV. We also explain how to improve the estimates of the renormalization functions by eliminating lattice artifacts; the latter can be computed in one-loop perturbation theory to all orders in the lattice spacing.
We apply the methodology for the renormalization to an ensemble of twisted mass fermions with Nf = 2 + 1 + 1 dynamical quarks, and a pion mass of around 375 MeV.
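For orientation, an RI′-type prescription for such Wilson-line operators typically imposes, at each Wilson-line length z, a condition of the following form at a renormalization scale μ₀. This is a sketch of the standard construction, with 𝒱(p, z) the amputated vertex function of the operator, S(p) the quark propagator, and "Born" denoting tree level; the paper's exact conventions may differ:

```latex
Z_q^{-1}\, Z_{\mathcal{O}}(z)\, \frac{1}{12}\,
\mathrm{Tr}\!\left[\,\mathcal{V}(p,z)\,\bigl(\mathcal{V}^{\mathrm{Born}}(p,z)\bigr)^{-1}\right]
\Big|_{p^2=\mu_0^2} = 1,
\qquad
Z_q = \frac{1}{12}\,
\mathrm{Tr}\!\left[\,\bigl(S(p)\bigr)^{-1} S^{\mathrm{Born}}(p)\right]
\Big|_{p^2=\mu_0^2}.
```

The z-dependence of Z_𝒪(z) is what absorbs the linear divergence generated by the Wilson line.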
Plants, fungi and algae are important components of global biodiversity and are fundamental to all ecosystems. They are the basis for human well-being, providing food, materials and medicines. Specimens of all three groups of organisms are accommodated in herbaria, where they are commonly referred to as botanical specimens. The large number of specimens in herbaria provides an ample, permanent and continuously improving knowledge base on these organisms and an indispensable source for the analysis of the distribution of species in space and time, critical for current and future research relating to global biodiversity. In order to make full use of this resource, a research infrastructure has to be built that grants comprehensive and free access to the information in herbaria and botanical collections in general. This can be achieved through digitization of the botanical objects and associated data. The botanical research community can count on a long-standing tradition of collaboration among institutions and individuals. It agreed on data standards and standard services even before the advent of computerization and information networking, an example being the Index Herbariorum as a global registry of herbaria helping towards the unique identification of specimens cited in the literature. In the spirit of this collaborative history, 51 representatives from 30 institutions advocate starting the digitization of botanical collections with the wall-to-wall digitization of the flat objects stored in German herbaria. Germany has 70 herbaria holding almost 23 million specimens, according to a national survey carried out in 2019; 87% of these specimens are not yet digitized.
Experiences from other countries such as France, the Netherlands, Finland, the US and Australia show that herbaria can be comprehensively and cost-efficiently digitized in a relatively short time due to established workflows and protocols for the high-throughput digitization of flat objects. Most of the herbaria are part of a university (34), fewer belong to municipal museums (10) or state museums (8), six herbaria belong to institutions also supported by federal funds such as Leibniz institutes, and four belong to non-governmental organizations. A common data infrastructure must therefore integrate different kinds of institutions. Making full use of the data gained by digitization requires the set-up of a digital infrastructure for storage, archiving, content indexing and networking, as well as standardized access for the scientific use of digital objects. A standards-based portfolio of technical components has already been developed and successfully tested by the Biodiversity Informatics Community over the last two decades, comprising among others access protocols, collection databases, portals, tools for semantic enrichment and annotation, international networking, and storage and archiving in accordance with international standards. This was achieved through funding by national and international programs and initiatives, which also paved the road for the German contribution to the Global Biodiversity Information Facility (GBIF). Herbaria constitute a large part of the German botanical collections, which also comprise living collections in botanical gardens and seed banks, DNA and tissue samples, specimens preserved in fluids or on microscope slides, and more. Once the herbaria are digitized, these resources can be integrated, adding to the value of the overall research infrastructure.
The community has agreed on tasks that are shared between the herbaria, as the German GBIF model already successfully demonstrates. We have compiled nine scientific use cases of immediate societal relevance for an integrated infrastructure of botanical collections. They address accelerated biodiversity discovery and research, biomonitoring and conservation planning, biodiversity modelling, the generation of trait information, automated image recognition by artificial intelligence, automated pathogen detection, contextualization by interlinking objects, enabling provenance research, as well as education, outreach and citizen science. We propose to start this initiative now in order to valorize German botanical collections as a vital part of a worldwide biodiversity data pool.
Similar to chloroplast loci, mitochondrial markers are frequently used for genotyping, phylogenetic studies, and population genetics, as they are easily amplified due to their multiple copies per cell. A recent study revealed that the chloroplast offers little variation for this purpose in central European populations of beech. Thus, the aim of this study was to elucidate whether mitochondrial sequences might offer an alternative, or whether they are similarly conserved in central Europe. For this purpose, a circular mitochondrial genome sequence from the more than 300-year-old beech reference individual Bhaga from the German National Park Kellerwald-Edersee was assembled using long and short reads and compared to an individual from the Jamy Nature Reserve in Poland and a recently published mitochondrial genome from eastern Germany. The mitochondrial genome of Bhaga was 504,730 bp, while the mitochondrial genomes of the other two individuals were 15 bases shorter, due to seven indel locations: at four of these, Bhaga had additional bases, while at three, Bhaga had one base less. In addition, 19 SNP locations were found, none of which were inside genes. At 17 of these SNP locations, Bhaga differed from the other two genomes, while at 2 locations Bhaga and the Polish individual shared the same base. While these figures are slightly higher than for the chloroplast genome, the comparison confirms the low degree of genetic divergence in organelle DNA of beech in central Europe, suggesting colonisation from a common gene pool after the Weichsel Glaciation. The mitochondrial genome might therefore be of limited use for population studies in central Europe, but once mitochondrial genomes from glacial refugia become available, it might be suitable for pinpointing the origin of migration of the re-colonising beech population.
Uncalibrated, semi-invasive continuous monitoring of cardiac index (CI) has recently gained increasing interest. The aim of the present study was to compare the accuracy of CI determination based on arterial waveform analysis with transpulmonary thermodilution. Fifty patients scheduled for elective coronary surgery were studied after induction of anaesthesia, before and after cardiopulmonary bypass (CPB). Each patient was monitored with a central venous line, the PiCCO system, and the FloTrac/Vigileo system. Measurements included CI derived by transpulmonary thermodilution and by uncalibrated semi-invasive pulse contour analysis. Percentage changes of CI were calculated. There was a moderate but significant correlation between pulse contour CI and thermodilution CI both before (r² = 0.72, P < 0.0001) and after (r² = 0.62, P < 0.0001) CPB, with a percentage error of 31% and 25%, respectively. Changes in pulse contour CI showed a significant correlation with changes in thermodilution CI both before (r² = 0.52, P < 0.0001) and after (r² = 0.67, P < 0.0001) CPB. Our findings demonstrate that the uncalibrated semi-invasive monitoring system was able to measure CI reliably compared with transpulmonary thermodilution in patients undergoing elective coronary surgery. Furthermore, the semi-invasive device was able to track haemodynamic changes and trends.
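The percentage error used in such method-comparison studies is conventionally derived from a Bland-Altman analysis as 1.96 times the standard deviation of the bias divided by the mean CI, with values below about 30% often taken as clinically acceptable. A minimal sketch of this calculation, using made-up paired CI values rather than the study's data, might be:

```python
import numpy as np

def bland_altman_percentage_error(ref, test):
    """Bias, 95% limits of agreement, and percentage error
    (1.96 * SD of the bias / mean CI of both methods)."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    bias = test - ref
    mean_bias = bias.mean()
    sd_bias = bias.std(ddof=1)                      # sample SD of the differences
    loa = (mean_bias - 1.96 * sd_bias,              # lower limit of agreement
           mean_bias + 1.96 * sd_bias)              # upper limit of agreement
    pe = 100.0 * 1.96 * sd_bias / np.mean((ref + test) / 2.0)
    return mean_bias, loa, pe

# Hypothetical paired CI measurements (L/min/m^2)
thermo = [2.8, 3.1, 3.5, 2.9, 3.3]                  # transpulmonary thermodilution
pulse  = [3.0, 3.0, 3.6, 2.7, 3.5]                  # pulse contour analysis
bias, loa, pe = bland_altman_percentage_error(thermo, pulse)
```

Note that correlation alone (r²) does not establish agreement, which is why the percentage error is reported alongside it.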
Voting advice applications (VAAs) are online tools providing voting advice to their users. This advice is based on the match between the answers of the user and those of several political parties to a common questionnaire on political attitudes. To visualize this match, VAAs use a wide array of visualisations, the most popular of which are two-dimensional political maps. These maps show the position of both the political parties and the user in the political landscape, allowing users to understand both their own position and their relation to the parties. To construct these maps, VAAs require scales that represent the main underlying dimensions of the political space. This makes the correct construction of these scales important if the VAA aims to provide accurate and helpful voting advice. This paper presents three criteria that assess whether a VAA achieves this aim. To illustrate their usefulness, these three criteria (unidimensionality, reliability and quality) are used to assess the scales in the cross-national EUVox VAA, a VAA designed for the European Parliament elections of 2014. Using techniques from Mokken scaling analysis and categorical principal component analysis to capture these metrics, I find that most scales show low unidimensionality and reliability. Moreover, even though designers can, and sometimes do, use certain techniques to improve their scales, these improvements are rarely enough to overcome all of the problems regarding unidimensionality, reliability and quality. This leaves open problems for the designers of VAAs and of similar online surveys.
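Mokken scaling assesses unidimensionality through Loevinger's scalability coefficient H, the ratio of the summed inter-item covariances to their maximum attainable values given the item marginals; by convention, H below roughly 0.3 indicates items that do not form a scale. The sketch below illustrates the coefficient for dichotomous items only, a simplification relative to the polytomous Likert items used in VAAs, and is not the paper's actual procedure:

```python
from itertools import combinations
import numpy as np

def loevinger_h(responses):
    """Loevinger's H for dichotomous items.
    responses: 2-D array, rows = respondents, columns = items (0/1).
    H = sum of pairwise covariances / sum of their maxima given the
    item popularities; H = 1 for a perfect Guttman scale."""
    data = np.asarray(responses, dtype=float)
    p = data.mean(axis=0)                    # item popularities
    num = den = 0.0
    for i, j in combinations(range(data.shape[1]), 2):
        cov = np.cov(data[:, i], data[:, j], ddof=0)[0, 1]
        lo, hi = sorted((p[i], p[j]))
        num += cov
        den += lo * (1.0 - hi)               # max covariance given marginals
    return num / den

# A perfect Guttman pattern: anyone endorsing a "hard" item also
# endorses every "easier" one, so H should equal 1.
guttman = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
h = loevinger_h(guttman)
```

In practice such analyses are usually run with dedicated implementations (e.g. the R `mokken` package), which also handle polytomous items and standard errors.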