Dual-task paradigms encompass a broad range of approaches to measuring cognitive load in instructional settings. Their common characteristic is that an additional task is performed alongside a learning task to capture the cognitive capacities an individual leaves unengaged during the learning process. These capacities are measured, for instance, via reaction times and interval errors on the additional task, while performance on the learning task is to be maintained. In contrast to retrospectively applied subjective ratings, the continuous assessment within a dual-task paradigm makes it possible to monitor changes in performance on previously defined tasks as they occur. According to Cognitive Load Theory, these changes in performance correspond to cognitive changes related to the establishment of permanent knowledge structures. Yet the current state of research indicates a clear lack of standardization of dual-task paradigms across study settings and task procedures. Typically, dual-task designs are adapted uniquely for each study, albeit with some similarities across settings and procedures, ranging from the modality to the frequency used for the additional task. This results in a lack of validity and comparability between studies, owing to arbitrarily chosen frequency patterns without a sound scientific basis, potentially confounding variables, and unresolved questions about how designs can be adapted in future studies. In this paper, the lack of validity and comparability between dual-task settings is presented, current taxonomies are compared, and future steps towards better standardization and implementation are discussed.
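As a minimal sketch of how such secondary-task measures might be summarized, the toy index below combines reaction-time slowing and error rate on the additional task (the function, weights, and baseline value are invented for illustration, not taken from the paper):

```python
# Illustrative only: summarizing secondary-task performance as a crude
# cognitive-load index from reaction times and response accuracy.
from statistics import mean

def load_index(reaction_times_ms, responses, baseline_rt_ms=350.0):
    """Combine mean reaction-time slowing and error rate on the secondary task.

    reaction_times_ms: RTs on secondary-task probes during learning
    responses: True for correct, False for missed/incorrect probes
    baseline_rt_ms: hypothetical single-task baseline RT
    """
    rt_cost = mean(reaction_times_ms) - baseline_rt_ms   # slowing vs. baseline
    error_rate = 1 - sum(responses) / len(responses)     # fraction of errors
    # Simple unweighted combination; the weighting is arbitrary here.
    return rt_cost / baseline_rt_ms + error_rate

print(round(load_index([420, 480, 510, 450], [True, True, False, True]), 3))  # 0.579
```

A higher index would indicate less spare capacity; any real operationalization would need the standardization and validation the paper calls for.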
The merchant language of the Georgian Jews deserves scholarly attention for several reasons. The political and social developments of the last fifty years have caused the extinction of this very interesting form of communication, as most Georgian Jews have emigrated to Israel. In natural interaction, the type of language described in this article is now found rarely, if at all. Records of this communication have been preserved in various contexts and have received different levels of scholarly attention. Our interest concerns the linguistic aspects as well as the classification.
In the following paper we argue that the specific merchant language of Georgian Jews belongs to the pragmatic phenomenon of “very indirect language.” The use of mostly Hebrew lexemes in Georgian conversation leads to an unfounded assumption that the speakers are equally competent in Hebrew and Georgian. It is reported that a high level of linguistic competence in Hebrew does not guarantee understanding of the Jewish merchant language. In the Georgian context, the decisive factors are membership in the professional interest group of merchants and residential membership in the Jewish community. These factors seem to be equivalent, because Jewish members of other professional groups (and those from outside the particular urban residential area) have difficulties in following the language that are similar to those of the Georgian majority. We describe the pragmatic structure of interactions conducted with the help of the merchant language and take into account the purpose of the language’s use or the intention of the speakers. Relevant linguistic examples are analysed and their sociocultural contexts explained.
A critical role for VEGF and VEGFR2 in NMDA receptor synaptic function and fear-related behavior
(2016)
Vascular endothelial growth factor (VEGF) is known to be required for the action of antidepressant therapies but its impact on brain synaptic function is poorly characterized. Using a combination of electrophysiological, single-molecule imaging and conditional transgenic approaches, we identified the molecular basis of the VEGF effect on synaptic transmission and plasticity. VEGF increases the postsynaptic responses mediated by the N-methyl-d-aspartate type of glutamate receptors (GluNRs) in hippocampal neurons. This is concurrent with the formation of new synapses and with the synaptic recruitment of GluNR expressing the GluN2B subunit (GluNR-2B). VEGF induces a rapid redistribution of GluNR-2B at synaptic sites by increasing the surface dynamics of these receptors within the membrane. Consistently, silencing the expression of the VEGF receptor 2 (VEGFR2) in neural cells impairs hippocampal-dependent synaptic plasticity and consolidation of emotional memory. These findings demonstrate the direct involvement of VEGF signaling in neurons, via VEGFR2, in proper synaptic function. They highlight the potential of VEGF as a key regulator of GluNR synaptic function and suggest a role for VEGF in new therapeutic approaches targeting GluNR in depression.
Review of: Psychology of Retention: Theory, Research and Practice / Melinde Coetzee, Ingrid L. Potgieter and Nadia Ferreira (Eds.), ISBN: 978-3-319-98919-8, Publisher: Springer Nature, 2018, R1600 (price in South Africa)
The Frankfurt Neutron Source at the Stern-Gerlach-Zentrum is driven by a 2 MeV proton linac consisting of a 4-rod radio-frequency quadrupole (RFQ) and an 8-gap IH-DTL structure. The RFQ and the IH cavity will be powered by a single radio-frequency (RF) amplifier to reduce costs. The RF power for the RFQ-IH combination is coupled into the RFQ. Internal inductive coupling along the axis connects the RFQ with the IH cavity, ensuring the required power transfer as well as a fixed phase relation between the two structures. The main acceleration, from 120 keV up to 2.03 MeV, will be achieved by the RFQ-IH combination operating at 175 MHz with a total length of 2.3 m. The losses in the RFQ-IH combination are about 200 kW.
This paper examines optimal environmental policy when external financing is costly for firms. We introduce emission externalities and industry equilibrium in the Holmström and Tirole (1997) model of corporate finance. While a cap-and-trade system optimally governs both firms' abatement activities (internal emission margin) and industry size (external emission margin) when firms have sufficient internal funds, external financing constraints introduce a wedge between these two objectives. When a sector is financially constrained in the aggregate, the optimal cap is strictly above the Pigouvian benchmark and emission allowances should be allocated below market prices. When a sector is not financially constrained in the aggregate, a cap that is below the Pigouvian benchmark optimally shifts market share to less polluting firms and, moreover, there should be no "grandfathering" of emission allowances. With financial constraints and heterogeneity across firms or sectors, a uniform policy, such as a single cap-and-trade system, is typically not optimal.
Background: Invasive off- or on-pump cardiac surgery (elective and emergency procedures, excluding transplants) is routinely performed to treat complications of ischaemic heart disease. Randomised controlled trials (RCTs) evaluate the effectiveness of treatments in the setting of cardiac surgery. However, the impact of RCTs is weakened by heterogeneity in outcome measurement and reporting, which hinders comparison across trials. Core outcome sets (COS; a set of outcomes that should be measured and reported, as a minimum, in clinical trials for a specific clinical field) help reduce this problem. In light of the above, we developed a COS for cardiac surgery effectiveness trials.
Methods: Potential core outcomes were identified a priori by analysing data on 371 RCTs of 58,253 patients. We reached consensus on core outcomes in an international three-round eDelphi exercise. Outcomes for which at least 60% of the participants chose the response option "no" and less than 20% chose the response option "yes" were excluded.
Results: Eighty-six participants from 23 different countries, including adult cardiac patients, cardiac surgeons, anaesthesiologists, nursing staff and researchers, contributed to this eDelphi. The panel reached consensus on four core outcomes to be included in adult cardiac surgery trials: 1) measure of mortality, 2) measure of quality of life, 3) measure of hospitalisation and 4) measure of cerebrovascular complication.
Conclusion: This study used robust research methodology to develop a minimum core outcome set for clinical trials evaluating the effectiveness of treatments in the setting of cardiac surgery. As a next step, appropriate outcome measurement instruments have to be selected.
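The exclusion rule stated in the Methods can be expressed as a short check; the vote counts below are made up for illustration:

```python
# Sketch of the stated eDelphi exclusion rule: an outcome is dropped when
# at least 60% of panellists answered "no" AND fewer than 20% answered "yes".
def excluded(votes):
    """votes: list of 'yes'/'no'/'unsure' responses for one candidate outcome."""
    n = len(votes)
    no_share = votes.count("no") / n
    yes_share = votes.count("yes") / n
    return no_share >= 0.60 and yes_share < 0.20

votes = ["no"] * 7 + ["unsure"] * 2 + ["yes"]   # 70% no, 10% yes
print(excluded(votes))  # True: this outcome would be excluded
```

Note that both conditions must hold: an outcome with 60% "no" but 40% "yes" votes would be retained for further rounds.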
Undoubtedly, every competent speaker has at some point been in doubt about which form is correct or appropriate and should be used (in the standard language) when faced with two or more almost identical competing variants of words, word forms or sentence and phrase structures (e.g. German "Pizzas/Pizzen/Pizze" 'pizzas', Dutch "de drie mooiste/mooiste drie stranden" 'the three most beautiful/most beautiful three beaches', Swedish "större än jag/mig" 'taller than I/me'). Such linguistic uncertainties or "cases of doubt" (cf. i.a. Klein 2003, 2009, 2018; Müller & Szczepaniak 2017; Schmitt, Szczepaniak & Vieregge 2019; Stark 2019 as well as the useful data collections of Duden vol. 9, Taaladvies.net, Språkriktighetsboken etc.) occur systematically even among native speakers, and they do not necessarily coincide with the difficulties of second language learners. In present-day German, most grammatical uncertainties occur in the domains of inflection (nominal plural formation, genitive singular allomorphy of strong masc./neut. nouns, inflectional variation of weak masc. nouns, strong/weak adjectival inflection and comparison forms, strong/weak verb forms, perfect auxiliary selection) and word formation (linking elements in compounds, separability of complex verbs). As for syntax, doubts often arise in connection with case choice (pseudo-partitive constructions, prepositional case government) and agreement (especially due to coordination or appositional structures). This contribution presents a contrastive approach to morphological and syntactic uncertainties in contemporary Germanic languages (mostly German, Dutch, and Swedish) in order to obtain a broader and more fine-grained typology of grammatical instabilities and their causes.
As will be discussed, most doubts of competent speakers - a problem also for general linguistic theory - can be attributed to processes of ongoing language change, to language or variety contact, to gaps and rule conflicts in the grammar of every language, or to psycholinguistic conditions of language processing. Our main concerns are which critical areas are shared or differ across Germanic (and, conversely, in which areas no doubts arise), which of the established (cross-linguistically valid) explanatory approaches apply to which phenomena and, ultimately, whether the new data reveal further lines of explanation for the empirically observable (standard) variation.
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
Commercialization of consumers’ personal data in the digital economy poses serious conceptual and practical challenges to the traditional approach of European Union (EU) consumer law. This article argues that mass-spread, automated, algorithmic decision-making casts doubt on the foundational paradigm of EU consumer law: consent and autonomy. Moreover, it poses threats of discrimination and undermining of consumer privacy. It is argued that the recent legislative reaction by the EU Commission, in the form of the ‘New Deal for Consumers’, was a step in the right direction, but fell short due to its continued reliance on consent and autonomy and its failure to adequately protect consumers from indirect discrimination. It is posited that a focus on creating a contracting landscape where the consumer may be properly informed in material respects is required, which in turn necessitates blending the approaches of competition, consumer protection and data protection laws.
A consistent muscle activation strategy underlies crawling and swimming in Caenorhabditis elegans
(2014)
Although undulatory swimming is observed in many organisms, the neuromuscular basis for undulatory movement patterns is not well understood. To better understand the basis for the generation of these movement patterns, we studied muscle activity in the nematode Caenorhabditis elegans. Caenorhabditis elegans exhibits a range of locomotion patterns: in low viscosity fluids the undulation has a wavelength longer than the body and propagates rapidly, while in high viscosity fluids or on agar media the undulatory waves are shorter and slower. Theoretical treatment of observed behaviour has suggested a large change in force–posture relationships at different viscosities, but analysis of bend propagation suggests that short-range proprioceptive feedback is used to control and generate body bends. How muscles could be activated in a way consistent with both these results is unclear. We therefore combined automated worm tracking with calcium imaging to determine muscle activation strategy in a variety of external substrates. Remarkably, we observed that across locomotion patterns spanning a threefold change in wavelength, peak muscle activation occurs approximately 45° (1/8th of a cycle) ahead of peak midline curvature. Although the location of peak force is predicted to vary widely, the activation pattern is consistent with required force in a model incorporating putative length- and velocity-dependence of muscle strength. Furthermore, a linear combination of local curvature and velocity can match the pattern of activation. This suggests that proprioception can enable the worm to swim effectively while working within the limitations of muscle biomechanics and neural control.
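The reported phase relationship can be checked analytically for a sinusoidal bending wave: if activation is a linear combination of local curvature and its time derivative, it leads curvature by atan(b·ω/a), which equals 45° (1/8th of a cycle) when b·ω = a. The frequency and weights below are illustrative, not fitted values from the study:

```python
# Toy check: for sinusoidal curvature kappa(t) = sin(omega * t), the combination
#   act(t) = a * kappa(t) + b * dkappa/dt
# is itself sinusoidal and leads kappa by atan2(b * omega, a).
import math

omega = 2 * math.pi          # 1 Hz undulation, arbitrary for illustration
a, b = 1.0, 1.0 / omega      # weights chosen so that b * omega == a

phase_lead = math.atan2(b * omega, a)   # radians by which activation leads curvature
print(math.degrees(phase_lead))          # 45.0 (up to float rounding)
```

Equal effective weighting of curvature and curvature rate thus reproduces the observed 1/8-cycle advance, consistent with the paper's claim that a linear combination of local curvature and velocity can match the activation pattern.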
Introduction: Encouraged by the change in licensing regulations, practical professional skills have received a higher priority in Germany and are therefore increasingly taught in medical schools. This created the need for more standardization of the process. On the initiative of the German skills labs, the German Medical Association Committee for practical skills was established and developed a competency-based catalogue of learning objectives, whose origin and structure are described here.
The goal of the catalogue is to define the practical skills in undergraduate medical education and to give medical schools a rational basis for planning the resources necessary to teach them.
Methods: Building on already existing German catalogues of learning objectives, a multi-iterative condensation process was performed, corresponding to the development of S1 guidelines, in order to obtain broad professional and political support.
Results: 289 different practical learning objectives were identified and assigned to twelve different organ systems, with three areas overlapping other fields of expertise and one area of skills spanning organ systems. Each objective was assigned one of three levels of depth and one of three chronological dimensions, and the objectives were matched with their Swiss and Austrian equivalents.
Discussion: This consensus statement may provide the German faculties with a basis for planning the teaching of practical skills and is an important step towards a national standard of medical learning objectives.
Looking ahead: The consensus statement may have a formative effect on how medical schools teach practical skills and plan the resources accordingly.
Publicly available compound and bioactivity databases provide an essential basis for data-driven applications in life-science research and drug design. By analyzing several bioactivity repositories, we discovered differences in compound and target coverage advocating the combined use of data from multiple sources. Using data from ChEMBL, PubChem, IUPHAR/BPS, BindingDB, and Probes & Drugs, we assembled a consensus dataset focusing on small molecules with bioactivity on human macromolecular targets. This allowed an improved coverage of compound space and targets, and an automated comparison and curation of structural and bioactivity data to reveal potentially erroneous entries and increase confidence. The consensus dataset comprised more than 1.1 million compounds with over 10.9 million bioactivity data points, with annotations on assay type and bioactivity confidence, providing a useful ensemble for computational applications in drug design and chemogenomics.
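A hypothetical sketch of the pooling-and-flagging idea behind such a consensus dataset (the records, field names, and the one-log-unit agreement threshold are invented for illustration and are not the study's actual schema):

```python
# Pool activity values reported for the same compound-target pair by several
# sources and flag pairs whose potencies disagree by more than one log unit.
from collections import defaultdict
from statistics import mean

records = [  # (source, compound_id, target_id, pActivity) - made-up entries
    ("ChEMBL",    "C1", "T1", 7.2),
    ("PubChem",   "C1", "T1", 7.0),
    ("BindingDB", "C1", "T1", 5.8),   # outlier -> low-confidence pair
    ("ChEMBL",    "C2", "T1", 6.5),
    ("IUPHAR",    "C2", "T1", 6.4),
]

pooled = defaultdict(list)
for source, cmpd, target, pact in records:
    pooled[(cmpd, target)].append(pact)

consensus = {
    key: {"mean_pActivity": round(mean(vals), 2),
          "n_sources": len(vals),
          "high_confidence": max(vals) - min(vals) <= 1.0}
    for key, vals in pooled.items()
}
print(consensus[("C1", "T1")]["high_confidence"])  # False: spread > 1 log unit
print(consensus[("C2", "T1")]["high_confidence"])  # True: sources agree
```

The real curation additionally compares structures and assay-type annotations across sources; this fragment only illustrates the bioactivity-agreement step.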
Ubiquitin fold modifier 1 (UFM1) is a member of the ubiquitin-like protein family. UFM1 undergoes a cascade of enzymatic reactions including activation by UBA5 (E1), transfer to UFC1 (E2) and selective conjugation to a number of target proteins via UFL1 (E3) enzymes. Despite the importance of ufmylation in a variety of cellular processes and its role in the pathogenicity of many human diseases, the molecular mechanisms of the ufmylation cascade remain unclear. In this study we focused on the biophysical and biochemical characterization of the interaction between UBA5 and UFC1. We explored the hypothesis that the unstructured C-terminal region of UBA5 serves as a regulatory region, controlling cellular localization of the elements of the ufmylation cascade and effective interaction between them. We found that the last 20 residues in UBA5 are pivotal for binding to UFC1 and can accelerate the transfer of UFM1 to UFC1. We solved the structure of a complex of UFC1 and a peptide spanning the last 20 residues of UBA5 by NMR spectroscopy. This structure, in combination with additional NMR titration and isothermal titration calorimetry experiments, revealed the mechanism of interaction and confirmed the importance of the C-terminal unstructured region in UBA5 for the ufmylation cascade.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
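A minimal rate model, not the paper's spiking network, can illustrate the proposed mechanism: mutual inhibition plus slow adaptation yields alternating dominance, and shifting the balance toward the "deprived" unit (here modeled as a gain increase, one plausible stand-in for homeostatic inhibitory plasticity) lengthens its dominance periods. All parameters are arbitrary choices for this sketch:

```python
# Two threshold-linear units with mutual inhibition and slow adaptation.
def simulate(I0=1.0, I1=1.0, beta=2.0, g=1.5, tau_r=0.01, tau_a=0.4,
             dt=0.001, T=30.0):
    """Return lists of dominance durations (in s) for units 0 and 1."""
    r = [0.1, 0.0]            # firing rates; unit 0 gets a small head start
    a = [0.0, 0.0]            # slow adaptation variables
    I = [I0, I1]
    dominant, t_switch = None, 0.0
    durations = {0: [], 1: []}
    for n in range(int(T / dt)):
        drive0 = max(0.0, I[0] - beta * r[1] - a[0])   # threshold-linear gain
        drive1 = max(0.0, I[1] - beta * r[0] - a[1])
        r[0] += dt / tau_r * (-r[0] + drive0)
        r[1] += dt / tau_r * (-r[1] + drive1)
        a[0] += dt / tau_a * (-a[0] + g * r[0])        # adaptation tracks rate
        a[1] += dt / tau_a * (-a[1] + g * r[1])
        winner = 0 if r[0] > r[1] else 1
        if winner != dominant:                          # record a switch
            if dominant is not None:
                durations[dominant].append(n * dt - t_switch)
            dominant, t_switch = winner, n * dt
    return durations

base = simulate()               # symmetric drive: sustained alternation
biased = simulate(I0=1.1)       # extra gain for the "deprived" eye's unit
mean0 = sum(biased[0]) / len(biased[0])
mean1 = sum(biased[1]) / len(biased[1])
print(len(base[0]) > 2 and len(base[1]) > 2)   # both units win episodes
print(mean0 > mean1)                            # deprived unit dominates longer
```

Whether the balance shift is best captured by input gain or by changed inhibitory weights is exactly the kind of question the spiking model in the paper addresses; this fragment only reproduces the qualitative alternation and bias.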
Background: The differentiation between Gaucher disease type 3 (GD3) and type 1 is challenging because pathognomonic neurologic symptoms may be subtle and develop at late stages. The ophthalmologist plays a crucial role in identifying the typical impairment of horizontal saccadic eye movements, which is followed by impairment of vertical ones. Little is known about further ocular involvement. The aim of this monocentric cohort study is to comprehensively describe the ophthalmological features of Gaucher disease type 3. We recommend a set of useful ophthalmologic investigations for diagnosis and follow-up, as well as saccadometry parameters that can be correlated with disease severity.
Methods: Sixteen patients with biochemically and genetically diagnosed GD3 completed ophthalmologic examination including optical coherence tomography (OCT), clinical oculomotor assessment and saccadometry by infrared based video-oculography. Saccadic peak velocity, gain and latency were compared to 100 healthy controls, using parametric tests. Correlations between saccadic assessment and clinical parameters were calculated.
Results: Peripapillary subretinal drusen-like deposits with retinal atrophy (2/16), preretinal opacities of the vitreous (4/16) and increased retinal vessel tortuosity (3/16) were found. Oculomotor pathology with clinically slowed saccades was more frequent horizontally (15/16) than vertically (12/16). Saccadometry revealed slowed peak velocity compared to 100 controls (most evident horizontally and downwards). Saccades were delayed and hypometric. Peak velocity (both upwards and downwards) correlated best with SARA (scale for the assessment and rating of ataxia), disease duration, mSST (modified Severity Scoring Tool) and reduced IQ. Motility restriction occurred in 8/16 patients, affecting horizontal eye movements, while vertical motility restriction was seen less frequently. Impaired abduction presented with esophoria or esotropia, the latter in combination with reduced stereopsis.
Conclusions: Vitreoretinal lesions may occur in 25% of Gaucher type 3 patients, while we additionally observed subretinal lesions with retinal atrophy in advanced disease stages. Vertical saccadic peak velocity seems the most promising "biomarker" for neuropathic manifestation for future longitudinal studies, as it correlates best with other neurologic symptoms. Apart from the well documented abduction deficit in Gaucher type 3 we were able to demonstrate motility impairment in all directions of gaze.
Background: Alterations in the DNA methylation pattern are a hallmark of leukemias and lymphomas. However, most epigenetic studies in hematologic neoplasms (HNs) have focused either on the analysis of few candidate genes or many genes and few HN entities, and comprehensive studies are required. Methodology/Principal Findings: Here, we report for the first time a microarray-based DNA methylation study of 767 genes in 367 HNs diagnosed with 16 of the most representative B-cell (n = 203), T-cell (n = 30), and myeloid (n = 134) neoplasias, as well as 37 samples from different cell types of the hematopoietic system. Using appropriate controls of B-, T-, or myeloid cellular origin, we identified a total of 220 genes hypermethylated in at least one HN entity. In general, promoter hypermethylation was more frequent in lymphoid than in myeloid malignancies, with germinal center mature B-cell lymphomas as well as precursor B- and T-cell lymphoid neoplasias being the entities with the highest frequency of gene-associated DNA hypermethylation. We also observed a significant correlation between the number of hypermethylated and hypomethylated genes in several mature B-cell neoplasias, but not in precursor B- and T-cell leukemias. Most of the genes becoming hypermethylated contained promoters with high CpG content, and a significant fraction of them are targets of the polycomb repressor complex. Interestingly, T-cell prolymphocytic leukemias show low levels of DNA hypermethylation and a comparatively large number of hypomethylated genes, many of them showing an increased gene expression. Conclusions/Significance: We have characterized the DNA methylation profile of a wide range of different HN entities. As well as identifying genes showing aberrant DNA methylation in certain HN subtypes, we also detected six genes—DBC1, DIO3, FZD9, HS3ST2, MOS, and MYOD1—that were significantly hypermethylated in B-cell, T-cell, and myeloid malignancies.
These might therefore play an important role in the development of different HNs.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
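The fitted ns(T) parameterizations reported for this illite NX dataset can be evaluated directly; T is in °C and ns in m⁻², with one fit based on the specific (BET) surface area and one on the geometric surface area:

```python
# Reported Gumbel-type fits for immersion freezing on illite NX:
#   ns(T) = exp(23.82 * exp(-exp(0.16 * (T + 17.49))) + 1.39)   # specific area
#   ns(T) = exp(25.75 * exp(-exp(0.13 * (T + 17.17))) + 3.34)   # geometric area
import math

def ns_specific(T_celsius):
    return math.exp(23.82 * math.exp(-math.exp(0.16 * (T_celsius + 17.49))) + 1.39)

def ns_geometric(T_celsius):
    return math.exp(25.75 * math.exp(-math.exp(0.13 * (T_celsius + 17.17))) + 3.34)

for T in (-15, -20, -25, -30):
    print(T, f"{ns_specific(T):.3g}", f"{ns_geometric(T):.3g}")
```

As the abstract describes, the active-site density rises steeply with decreasing temperature between about −20 and −27 °C and flattens below −27 °C.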
Analysis of whole cell lipid extracts of bacteria by means of ultra-performance (UP)LC-MS allows a comprehensive determination of the lipid molecular species present in the respective organism. The data allow conclusions on its metabolic potential as well as the creation of lipid profiles, which visualize the organism's response to changes in internal and external conditions. Herein, we describe: i) a fast reversed phase UPLC-ESI-MS method suitable for detection and determination of individual lipids from whole cell lipid extracts of all polarities ranging from monoacylglycerophosphoethanolamines to TGs; ii) the first overview of a wide range of lipid molecular species in vegetative Myxococcus xanthus DK1622 cells; iii) changes in their relative composition in selected mutants impaired in the biosynthesis of α-hydroxylated FAs, sphingolipids, and ether lipids; and iv) the first report of ceramide phosphoinositols in M. xanthus, a lipid species previously found only in eukaryotes.
Covalent inhibition has become more accepted in the past two decades, as illustrated by the clinical approval of several irreversible inhibitors designed to covalently modify their target. Elucidation of the structure-activity relationship and potency of such inhibitors requires a detailed kinetic evaluation. Here, we elucidate the relationship between the experimental read-out and the underlying inhibitor binding kinetics. Interactive kinetic simulation scripts are employed to highlight the effects of in vitro enzyme activity assay conditions and inhibitor binding mode, thereby showcasing which assumptions and corrections are crucial. Four stepwise protocols to assess the biochemical potency of (ir)reversible covalent enzyme inhibitors targeting a nucleophilic active site residue are included, with accompanying data analysis tailored to the covalent binding mode. Together, this will serve as a guide to make an educated decision regarding the most suitable method to assess covalent inhibition potency. © 2022 The Authors. Current Protocols published by Wiley Periodicals LLC.
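The kinetics referred to above are usually described by the two-step scheme E + I ⇌ E·I → E–I, with an observed inactivation rate kobs = kinact·[I]/(KI + [I]). A minimal simulation sketch under the simplifying assumption of negligible substrate competition (all parameter values are illustrative only):

```python
import numpy as np

def k_obs(inhibitor, k_inact, K_I):
    """Observed inactivation rate for E + I <=> E.I -> E-I (concentrations in M, rates in 1/s)."""
    return k_inact * inhibitor / (K_I + inhibitor)

def product_progress(t, v_i, kobs):
    """Product formed in an activity assay with an irreversible inhibitor:
    [P](t) = (v_i / kobs) * (1 - exp(-kobs * t)); the curve plateaus
    at v_i / kobs as the enzyme is progressively inactivated."""
    return (v_i / kobs) * (1.0 - np.exp(-kobs * t))

t = np.linspace(0.0, 3600.0, 200)      # assay time in seconds
kobs = k_obs(1e-6, 1e-3, 5e-7)         # [I] = 1 uM, kinact = 1e-3 /s, KI = 0.5 uM
P = product_progress(t, v_i=1e-9, kobs=kobs)
```

Fitting such curves for several inhibitor concentrations and plotting kobs against [I] recovers kinact and KI, the potency parameters tailored to the covalent binding mode.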
Apigenin (4′,5,7-trihydroxyflavone) (Api) is an important component of the human diet, occurring in a wide range of fruits, vegetables and herbs, with the most important sources being chamomile, celery, celeriac and parsley. This study was designed as a comprehensive evaluation of Api as an antiproliferative, proapoptotic, antiangiogenic and immunomodulatory phytocompound. Under the set experimental conditions, Api showed antiproliferative activity against the A375 human melanoma cell line, a G2/M arrest of the cell cycle and cytotoxic events as revealed by lactate dehydrogenase release. Caspase 3 activity was inversely related to the tested Api doses of 30 μM and 60 μM. Phenomena of early apoptosis, late apoptosis and necrosis following incubation with Api were detected by Annexin V-PI double staining. The flavone interfered with mitochondrial respiration by modulating both the glycolytic and the mitochondrial pathways of ATP production. The metabolic activity of human dendritic cells (DCs) under LPS activation was clearly attenuated by stimulation with high concentrations of Api. IL-6 and IL-10 secretion was almost completely blocked, while TNF-α secretion was reduced by about 60%. Api elicited antiangiogenic properties in a dose-dependent manner. Both concentrations of Api influenced tumour cell growth and migration, inducing a limited tumour area inside the application ring, associated with a low number of capillaries.
Translation is an important step in gene expression. The initiation of translation is phylogenetically diverse, since currently five different initiation mechanisms are known. For bacteria, the three initiation factors IF1 – IF3 are described, in contrast to archaea and eukaryotes, which contain a considerably higher number of initiation factor genes. As eukaryotes and archaea use a non-overlapping set of initiation mechanisms, orthologous proteins of both domains do not necessarily fulfill the same function. The genome of Haloferax volcanii contains 14 annotated genes that encode (subunits of) initiation factors. To gain a comprehensive overview of the importance of these genes, we attempted to construct single gene deletion mutants of all of them. In 9 cases single deletion mutants were successfully constructed, showing that the respective genes are not essential. In contrast, the genes encoding initiation factors aIF1, aIF2γ, aIF5A, aIF5B, and aIF6 were found to be essential. Factors aIF1A and aIF2β are encoded by two orthologous genes in H. volcanii. Attempts to generate double mutants failed in both cases, indicating that these factors are also essential. A translatome analysis of one of the single aIF2β deletion mutants revealed that the translational efficiency of the second ortholog was enhanced tenfold and thus the two proteins can replace one another. The phenotypes of the single deletion mutants also revealed that the two aIF1As and aIF2βs have redundant but not identical functions. Remarkably, the gene encoding aIF2α, a subunit of aIF2 involved in initiator tRNA binding, could be deleted. However, the mutant had a severe growth defect under all tested conditions. Conditional depletion mutants were generated for the five essential genes. The phenotypes of deletion mutants and conditional depletion mutants were compared to those of the wild-type under various conditions, and growth characteristics are discussed.
In this work, the flexibility requirements of a highly renewable European electricity network, which has to cover fluctuations of wind and solar power generation on different temporal and spatial scales, are studied. Cost-optimal ways of providing this flexibility are analysed, including the optimal distribution of infrastructure: large-scale transmission, storage, and dispatchable generators. To examine these issues, a model of increasing sophistication is built, first considering different flexibility classes of conventional generation, then adding storage, and finally transmission, to see the effect of each.
To conclude, in this work it was shown that base load generators with low flexibility can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a Europe-wide energy network. These results are independent of the complexity of the cost assumptions used for the models. The use of storage technologies allows the required conventional backup capacity to be reduced further. This highlights the importance of including additional flexibility-providing technologies in the energy system to balance the fluctuations caused by renewable energy sources, for example advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; a compromise grid with four times today's line volume, however, already locks in most of the cost benefits. It is therefore clear that increasing pan-European network connectivity enables a cost-efficient integration of renewable energies, which is strongly needed to reach current climate change mitigation goals.
It was also shown that a similarly cost efficient, highly renewable European electricity system can be achieved that considers a wide range of additional policy constraints and plausible changes of economic parameters.
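The interplay of residual load, backup capacity and storage described above can be illustrated with a toy dispatch heuristic. This is a deliberately simplified sketch, not the cost-optimization model used in the work itself, and all numbers are made up:

```python
import numpy as np

def backup_capacity(load, renewables):
    """Peak residual load that dispatchable backup generators must cover."""
    residual = np.asarray(load) - np.asarray(renewables)
    return max(residual.max(), 0.0)

def backup_capacity_with_storage(load, renewables, e_max, p_max):
    """Greedy storage dispatch: charge on renewable surplus, discharge on
    deficit; returns the remaining peak residual load for backup plants."""
    soc, peak = 0.0, 0.0                 # state of charge, peak backup need
    for l, r in zip(load, renewables):
        residual = l - r
        if residual < 0:                 # surplus: charge (power/energy limited)
            soc += min(-residual, p_max, e_max - soc)
        else:                            # deficit: discharge first
            discharge = min(residual, p_max, soc)
            soc -= discharge
            peak = max(peak, residual - discharge)
    return peak

load = np.array([1.0, 1.0, 1.0, 1.0])    # constant demand (arbitrary units)
wind = np.array([2.0, 0.0, 2.0, 0.0])    # fluctuating renewable feed-in
no_storage = backup_capacity(load, wind)                           # 1.0
with_storage = backup_capacity_with_storage(load, wind, 1.0, 1.0)  # 0.0
```

Even this toy example reproduces the qualitative result: a small amount of storage (or, analogously, transmission to a region with anticorrelated feed-in) can eliminate the need for backup capacity entirely.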
In this work we present, for the first time, the non-perturbative renormalization for the unpolarized, helicity and transversity quasi-PDFs, in an RI′ scheme. The proposed prescription addresses simultaneously all aspects of renormalization: logarithmic divergences, finite renormalization as well as the linear divergence which is present in the matrix elements of fermion operators with Wilson lines. Furthermore, for the case of the unpolarized quasi-PDF, we describe how to eliminate the unwanted mixing with the twist-3 scalar operator.
We utilize perturbation theory for the one-loop conversion factor that brings the renormalization functions to the MS-scheme at a scale of 2 GeV. We also explain how to improve the estimates on the renormalization functions by eliminating lattice artifacts. The latter can be computed in one-loop perturbation theory and to all orders in the lattice spacing.
We apply the methodology for the renormalization to an ensemble of twisted mass fermions with Nf = 2 + 1 + 1 dynamical quarks, and a pion mass of around 375 MeV.
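Schematically, RI′-type prescriptions of this kind fix the renormalization functions non-perturbatively through momentum-space conditions of the following form (a sketch in standard notation; the precise projectors and the treatment of the linear divergence follow the procedure described above):

```latex
% Quark field renormalization (RI' scheme, at scale \bar\mu_0):
Z_q = \frac{1}{12}\,\mathrm{Tr}\!\left[(S(p))^{-1}\,S^{\mathrm{tree}}(p)\right]\Big|_{p^2=\bar\mu_0^2}
% Operator renormalization from the amputated vertex function \mathcal{V}(p):
Z_{\mathcal O}^{-1}\,Z_q
  = \frac{1}{12}\,\mathrm{Tr}\!\left[\mathcal{V}(p)\,
    \big(\mathcal{V}^{\mathrm{tree}}(p)\big)^{-1}\right]\Big|_{p^2=\bar\mu_0^2}
```

The one-loop conversion factor mentioned above then translates the resulting Z-factors to the MS-bar scheme at 2 GeV.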
Plants, fungi and algae are important components of global biodiversity and are fundamental to all ecosystems. They are the basis for human well-being, providing food, materials and medicines. Specimens of all three groups of organisms are accommodated in herbaria, where they are commonly referred to as botanical specimens. The large number of specimens in herbaria provides an ample, permanent and continuously improving knowledge base on these organisms and an indispensable source for the analysis of the distribution of species in space and time, critical for current and future research relating to global biodiversity. In order to make full use of this resource, a research infrastructure has to be built that grants comprehensive and free access to the information in herbaria and botanical collections in general. This can be achieved through digitization of the botanical objects and associated data. The botanical research community can count on a long-standing tradition of collaboration among institutions and individuals. It agreed on data standards and standard services even before the advent of computerization and information networking, an example being the Index Herbariorum as a global registry of herbaria helping towards the unique identification of specimens cited in the literature. In the spirit of this collaborative history, 51 representatives from 30 institutions advocate to start the digitization of botanical collections with the overall wall-to-wall digitization of the flat objects stored in German herbaria. Germany has 70 herbaria holding almost 23 million specimens according to a national survey carried out in 2019. 87% of these specimens are not yet digitized.
Experiences from other countries like France, the Netherlands, Finland, the US and Australia show that herbaria can be comprehensively and cost-efficiently digitized in a relatively short time due to established workflows and protocols for the high-throughput digitization of flat objects. Most of the herbaria are part of a university (34), fewer belong to municipal museums (10) or state museums (8), six herbaria belong to institutions also supported by federal funds such as Leibniz institutes, and four belong to non-governmental organizations. A common data infrastructure must therefore integrate different kinds of institutions. Making full use of the data gained by digitization requires the set-up of a digital infrastructure for storage, archiving, content indexing and networking as well as standardized access for the scientific use of digital objects. A standards-based portfolio of technical components has already been developed and successfully tested by the Biodiversity Informatics Community over the last two decades, comprising among others access protocols, collection databases, portals, tools for semantic enrichment and annotation, international networking, storage and archiving in accordance with international standards. This was achieved through the funding by national and international programs and initiatives, which also paved the road for the German contribution to the Global Biodiversity Information Facility (GBIF). Herbaria constitute a large part of the German botanical collections that also comprise living collections in botanical gardens and seed banks, DNA- and tissue samples, specimens preserved in fluids or on microscope slides and more. Once the herbaria are digitized, these resources can be integrated, adding to the value of the overall research infrastructure.
The community has agreed on tasks that are shared between the herbaria, as the German GBIF model already successfully demonstrates. We have compiled nine scientific use cases of immediate societal relevance for an integrated infrastructure of botanical collections. They address accelerated biodiversity discovery and research, biomonitoring and conservation planning, biodiversity modelling, the generation of trait information, automated image recognition by artificial intelligence, automated pathogen detection, contextualization by interlinking objects, enabling provenance research, as well as education, outreach and citizen science. We propose to start this initiative now in order to valorize German botanical collections as a vital part of a worldwide biodiversity data pool.
Similar to chloroplast loci, mitochondrial markers are frequently used for genotyping, phylogenetic studies, and population genetics, as they are easily amplified due to their multiple copies per cell. In a recent study, it was revealed that the chloroplast offers little variation for this purpose in central European populations of beech. Thus, the aim of this study was to elucidate whether mitochondrial sequences might offer an alternative, or whether they are similarly conserved in central Europe. For this purpose, a circular mitochondrial genome sequence from the more than 300-year-old beech reference individual Bhaga from the German National Park Kellerwald-Edersee was assembled using long and short reads and compared to an individual from the Jamy Nature Reserve in Poland and a recently published mitochondrial genome from eastern Germany. The mitochondrial genome of Bhaga was 504,730 bp, while the mitochondrial genomes of the other two individuals were 15 bases shorter, due to seven indel locations, four with additional bases in Bhaga and three with one base fewer in Bhaga. In addition, 19 SNP locations were found, none of which were inside genes. In these SNP locations, 17 bases were different in Bhaga, as compared to the other two genomes, while 2 SNP locations had the same base in Bhaga and the Polish individual. While these figures are slightly higher than for the chloroplast genome, the comparison confirms the low degree of genetic divergence in organelle DNA of beech in central Europe, suggesting colonisation from a common gene pool after the Weichsel Glaciation. The mitochondrial genome might have limited use for population studies in central Europe, but once mitochondrial genomes from glacial refugia become available, it might be suitable to pinpoint the origin of migration for the re-colonising beech population.
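The SNP and indel counts reported above come from whole-genome comparisons; counting such differences between two aligned sequences can be sketched as follows (a toy illustration, not the assembly and alignment pipeline actually used in the study):

```python
def compare_aligned(seq_a, seq_b):
    """Count SNP and indel positions between two aligned sequences,
    where '-' marks an alignment gap."""
    snps = indels = 0
    for a, b in zip(seq_a, seq_b):
        if a == b:
            continue
        if a == '-' or b == '-':
            indels += 1   # gap position in either sequence
        else:
            snps += 1     # substitution
    return snps, indels

# Toy alignment: one substitution, one single-base gap, one two-base gap
snps, indels = compare_aligned("ACGT-TACCA", "ACGAATAC--")
```

For real genomes, the alignment itself (and the distinction between adjacent gap positions and separate indel events) is the hard part; tools such as whole-genome aligners handle that step.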
Uncalibrated semi-invasive continuous monitoring of cardiac index (CI) has recently gained increasing interest. The aim of the present study was to compare the accuracy of CI determination based on arterial waveform analysis with transpulmonary thermodilution. Fifty patients scheduled for elective coronary surgery were studied after induction of anaesthesia and before and after cardiopulmonary bypass (CPB), respectively. Each patient was monitored with a central venous line, the PiCCO system, and the FloTrac/Vigileo-system. Measurements included CI derived by transpulmonary thermodilution and uncalibrated semi-invasive pulse contour analysis. Percentage changes of CI were calculated. There was a moderate, but significant correlation between pulse contour CI and thermodilution CI both before (r(2) = 0.72, P < 0.0001) and after (r(2) = 0.62, P < 0.0001) CPB, with a percentage error of 31% and 25%, respectively. Changes in pulse contour CI showed a significant correlation with changes in thermodilution CI both before (r(2) = 0.52, P < 0.0001) and after (r(2) = 0.67, P < 0.0001) CPB. Our findings demonstrated that the uncalibrated semi-invasive monitoring system was able to reliably measure CI compared with transpulmonary thermodilution in patients undergoing elective coronary surgery. Furthermore, the semi-invasive monitoring device was able to track haemodynamic changes and trends.
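The percentage error cited above is conventionally computed from Bland-Altman statistics (Critchley criterion: agreement between CI methods is usually deemed acceptable below about 30%). A minimal sketch with made-up numbers, not study data:

```python
import numpy as np

def percentage_error(reference, test):
    """Bias, precision (1.96 * SD of the bias) and percentage error
    (precision divided by the mean reference value, in %)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    bias = test - reference
    precision = 1.96 * bias.std(ddof=1)
    pe = 100.0 * precision / reference.mean()
    return bias.mean(), precision, pe

# Illustrative paired CI measurements (L/min/m^2), invented for the example
mean_bias, precision, pe = percentage_error([4.0, 5.0, 6.0], [4.5, 5.0, 5.5])
```

A percentage error of 31% before CPB, as reported above, thus sits right at the conventional acceptability threshold.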
Voting advice applications (VAAs) are online tools providing voting advice to their users. This voting advice is based on the match between the answers of the user and the answers of several political parties to a common questionnaire on political attitudes. To visualize this match, VAAs use a wide array of visualisations, the most popular of which are two-dimensional political maps. These maps show the position of both the political parties and the user in the political landscape, allowing the user to understand both their own position and their relation to the political parties. To construct these maps, VAAs require scales that represent the main underlying dimensions of the political space. This makes the correct construction of these scales important if the VAA aims to provide accurate and helpful voting advice. This paper presents three criteria that assess whether a VAA achieves this aim. To illustrate their usefulness, these three criteria—unidimensionality, reliability and quality—are used to assess the scales in the cross-national EUVox VAA, a VAA designed for the European Parliament elections of 2014. Using techniques from Mokken scaling analysis and categorical principal component analysis to capture the metrics, I find that most scales show low unidimensionality and reliability. Moreover, even while designers can—and sometimes do—use certain techniques to improve their scales, these improvements are rarely enough to overcome all of the problems regarding unidimensionality, reliability and quality. This leaves certain problems for the designers of VAAs and designers of similar types of online surveys.
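Reliability, one of the three criteria, is typically quantified with an internal-consistency coefficient. The paper works with Mokken-based statistics, but the widely used Cronbach's alpha illustrates the idea (the response data below are made up):

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Perfectly parallel items (each respondent answers both items identically)
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])  # -> 1.0
```

Low alpha on a VAA scale signals that its questionnaire items do not hang together well enough to place users reliably on the corresponding map dimension.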
Aim: Predicting future changes in species richness in response to climate change is one of the key challenges in biogeography and conservation ecology. Stacked species distribution models (S‐SDMs) are a commonly used tool to predict current and future species richness. Macroecological models (MEMs), regression models with species richness as the response variable, are a less computationally intensive alternative to S‐SDMs. Here, we aim to compare the results of the two model types (S‐SDMs and MEMs), for the first time for more than 14,000 species across multiple taxa globally, and to trace the uncertainty in future predictions back to the input data and modelling approach used.
Location: Global land, excluding Antarctica.
Taxon: Amphibians, birds and mammals.
Methods: We fitted S‐SDMs and MEMs using a consistent set of bioclimatic variables and model algorithms and conducted species richness predictions under current and future conditions. For the latter, we used four general circulation models (GCMs) under two representative concentration pathways (RCP2.6 and RCP6.0). Predicted species richness was compared between S‐SDMs and MEMs and for current conditions also to extent‐of‐occurrence (EOO) species richness patterns. For future predictions, we quantified the variance in predicted species richness patterns explained by the choice of model type, model algorithm and GCM using hierarchical cluster analysis and variance partitioning.
Results: Under current conditions, species richness predictions from MEMs and S‐SDMs were strongly correlated with EOO‐based species richness. However, both model types over‐predicted areas with low and under‐predicted areas with high species richness. Outputs from MEMs and S‐SDMs were also highly correlated among each other under current and future conditions. The variance between future predictions was mostly explained by model type.
Main conclusions: Both model types were able to reproduce EOO‐based patterns in global terrestrial vertebrate richness, but produce less collinear predictions of future species richness. Model type contributes by far the most to the variation in the different future species richness predictions, indicating that the two model types should not be used interchangeably. Nevertheless, both model types have their justification, as MEMs can also include species with a restricted range, whereas S‐SDMs are useful for looking at potential species‐specific responses.
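The stacking step behind an S‐SDM can be sketched as follows: per-species occurrence predictions are summed into a richness map, either as raw probabilities or after thresholding. This is an illustrative toy, not the authors' modelling pipeline, and all numbers are invented:

```python
import numpy as np

def stack_richness(prob_maps, threshold=None):
    """Stack per-species SDM outputs (n_species, n_cells) into a
    richness value per cell: probability stacking sums occurrence
    probabilities; threshold stacking sums binarized maps."""
    p = np.asarray(prob_maps, dtype=float)
    if threshold is None:
        return p.sum(axis=0)
    return (p >= threshold).sum(axis=0)

# Three species, two grid cells (made-up occurrence probabilities)
probs = np.array([[0.9, 0.2],
                  [0.6, 0.1],
                  [0.4, 0.8]])
rich_prob = stack_richness(probs)        # [1.9, 1.1]
rich_thr = stack_richness(probs, 0.5)    # [2, 1]
```

A MEM, by contrast, regresses observed richness directly on environmental predictors, which is why it is so much cheaper for 14,000+ species.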
We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outperform baselines such as edit distance, weighted edit distance, and the noisy-channel model of Brill and Moore for spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.
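The 'k-best decoding plus dictionary lookup' idea can be illustrated with a Norvig-style sketch, where simple edit-distance candidate generation stands in for the learned string-to-string models and a frequency list stands in for the language model (the vocabulary and counts below are invented):

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings one edit away (deletion, transposition, substitution, insertion)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    substitutions = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + transposes + substitutions + inserts)

def correct(word, vocabulary, freq):
    """Generate candidates, keep those passing dictionary lookup,
    rank by corpus frequency; fall back to the input itself."""
    if word in vocabulary:
        return word
    candidates = [w for w in edits1(word) if w in vocabulary]
    return max(candidates, key=lambda w: freq.get(w, 0)) if candidates else word

vocab = {"hello", "help", "hell"}
freq = {"hello": 300, "help": 120, "hell": 40}
fixed = correct("helo", vocab, freq)  # -> "hello"
```

In the paper's setting, the transformation model supplies a ranked k-best list directly, and the dictionary lookup then filters out candidates that are not valid words.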
A comparison of different APTT-reagents, heparin-sensitivity and detection of mild coagulopathies
(1992)
The activated partial thromboplastin time (aPTT) is widely used to detect coagulation abnormalities or to monitor heparin treatment.
Many commercial aPTT-reagents are available which contain different phospholipid reagents and activators. In the present study, 3 aPTT-reagents (aPTT-D, Instrumentation Laboratory; Neothromtin, Behring; PTTa, Boehringer) were compared using a computerized centrifugal analyzer. One aPTT-reagent (Pathromtin, Behring) was tested on a semiautomated coagulometer. Instrument precision was evaluated using aPTT-D as reagent.
Comparative tests were performed on plasma samples of 40 healthy donors, 3 patients with mild von Willebrand's disease (vWd), W patients with haemophilia or subhaemophilia A, 1 patient with subhaemophilia A and vWd, 8 patients treated with subcutaneous injection of unfractionated heparin (UFH) and 14 patients treated with subcutaneous injection of a low molecular weight heparin (LMWH).
aPTT-D was the most sensitive reagent for detecting mild vWd, while Pathromtin detected none of these defects. In patients with haemophilia A and subhaemophilia A, aPTT-D, Neothromtin and PTTa detected the abnormality in nearly all tested samples, while Pathromtin was less sensitive.
Patients treated with subcutaneously applied UFH or LMWH often had a prolonged aPTT, especially when aPTT-D and Neothromtin were used as reagents.
The single nucleotide polymorphism 118A>G of the human μ-opioid receptor gene OPRM1, which leads to an exchange of the amino acid asparagine (N) for aspartic acid (D) at position 40 of the extracellular receptor region, alters the in vivo effects of opioids to different degrees in pain-processing brain regions. The most pronounced N40D effects were found in brain regions involved in the sensory processing of pain intensity. Using the μ-opioid receptor-specific agonist DAMGO, we analyzed the μ-opioid receptor signaling, expression, and binding affinity in human brain tissue sampled postmortem from the secondary somatosensory area (SII) and from the ventral posterior part of the lateral thalamus, two regions involved in the sensory processing and transmission of nociceptive information. We show that the main effect of the N40D μ-opioid receptor variant is a reduction of the agonist-induced receptor signaling efficacy. In the SII region of homo- and heterozygous carriers of the variant 118G allele (n=18), DAMGO was only 62% as efficient (p=0.002) as in homozygous carriers of the wild-type 118A allele (n=15). In contrast, the number of [3H]DAMGO binding sites was unaffected. Hence, the μ-opioid receptor G-protein coupling efficacy in SII of carriers of the 118G variant was only 58% as efficient as in homozygous carriers of the 118A allele (p<0.001). The thalamus was unaffected by the OPRM1 118A>G SNP. In conclusion, we provide a molecular basis for the reduced clinical effects of opioid analgesics in carriers of the μ-opioid receptor variant N40D.
Background and Aims: Chronic infection with the hepatitis B virus (HBV) is a major health issue worldwide. Recently, single nucleotide polymorphisms (SNPs) within the human leukocyte antigen (HLA)-DP locus were identified to be associated with HBV infection in Asian populations. Most significant associations were observed for the A alleles of HLA-DPA1 rs3077 and HLA-DPB1 rs9277535, which conferred a decreased risk for HBV infection. We assessed the implications of these variants for HBV infection in Caucasians.
Methods: Two HLA-DP gene variants (rs3077 and rs9277535) were analyzed for associations with persistent HBV infection and with different clinical outcomes, i.e., inactive HBsAg carrier status versus progressive chronic HBV (CHB) infection in Caucasian patients (n = 201) and HBsAg negative controls (n = 235).
Results: The HLA-DPA1 rs3077 C allele was significantly associated with HBV infection (odds ratio, OR = 5.1, 95% confidence interval, CI: 1.9–13.7; p = 0.00093). However, no significant association was seen for rs3077 with progressive CHB infection versus inactive HBsAg carrier status (OR = 2.7, 95% CI: 0.6–11.1; p = 0.31). In contrast, HLA-DPB1 rs9277535 was not associated with HBV infection in Caucasians (OR = 0.8, 95% CI: 0.4–1.9; p = 1).
Conclusions: A highly significant association of HLA-DPA1 rs3077 with HBV infection was observed in Caucasians. However, as a differentiation between different clinical courses of HBV infection was not possible, knowledge of the HLA-DPA1 genotype cannot be translated into personalized anti-HBV therapy approaches.
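The effect sizes reported above follow the standard 2×2-table calculation; a minimal sketch of the odds ratio with a Wald 95% confidence interval (the counts below are invented, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald CI for a 2x2 table:
    a, b = allele carriers among cases/controls;
    c, d = non-carriers among cases/controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(20, 10, 15, 30)  # OR = 4.0
```

A confidence interval that includes 1, as for rs9277535 above, indicates no significant association.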
The cell-cell signaling gene CDH13 is associated with a wide spectrum of neuropsychiatric disorders, including attention-deficit/hyperactivity disorder (ADHD), autism, and major depression. CDH13 regulates axonal outgrowth and synapse formation, substantiating its relevance for neurodevelopmental processes. Several studies support the influence of CDH13 on personality traits, behavior, and executive functions. However, evidence for functional effects of common gene variation in the CDH13 gene in humans is sparse. Therefore, we tested for association of a functional intronic CDH13 SNP rs2199430 with ADHD in a sample of 998 adult patients and 884 healthy controls. The Big Five personality traits were assessed by the NEO-PI-R questionnaire. Assuming that altered neural correlates of working memory and cognitive response inhibition show genotype-dependent alterations, task performance and electroencephalographic event-related potentials were measured by n-back and continuous performance (Go/NoGo) tasks. The rs2199430 genotype was not associated with adult ADHD on the categorical diagnosis level. However, rs2199430 was significantly associated with agreeableness, with minor G allele homozygotes scoring lower than A allele carriers. Whereas task performance was not affected by genotype, a significant heterosis effect limited to the ADHD group was identified for the n-back task. Heterozygotes (AG) exhibited significantly higher N200 amplitudes during both the 1-back and 2-back condition in the central electrode position Cz. Consequently, the common genetic variation of CDH13 is associated with personality traits and impacts neural processing during working memory tasks. Thus, CDH13 might contribute to symptomatic core dysfunctions of social and cognitive impairment in ADHD.
We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic processes with and without correlations, chaotic processes) complemented by extensive numerical simulations for a range of processes which indicate that the methodology correctly distinguishes different complex dynamics and outperforms state-of-the-art metrics in several cases. Subsequently, we apply this methodology to real-world problems emerging across several disciplines including cases in neurobiology, finance and climate science. We conclude that differences between the statistics of local maxima and local minima in time series are highly informative of the complex underlying dynamics, and a graph-theoretic extraction procedure allows these features to be used for statistical learning purposes.
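As a naive point of comparison for the framework above (which is considerably more sophisticated), asymmetries between local maxima and minima can already be probed directly from the extrema themselves; everything below, including the simple asymmetry score, is an invented illustration:

```python
import numpy as np

def local_extrema(x):
    """Values of strict local maxima and minima of a 1-D series."""
    x = np.asarray(x, dtype=float)
    mid, left, right = x[1:-1], x[:-2], x[2:]
    return mid[(mid > left) & (mid > right)], mid[(mid < left) & (mid < right)]

def extrema_asymmetry(x):
    """Difference between the mean absolute deviation (from the series
    mean) of local maxima and of local minima; 0 for symmetric series."""
    maxima, minima = local_extrema(x)
    m = np.mean(x)
    return np.abs(maxima - m).mean() - np.abs(minima - m).mean()

saw = np.tile([0.0, 1.0], 50)              # symmetric sawtooth: score 0
spiky = np.tile([0.0, 9.0, 0.0, 1.0], 25)  # maxima deviate more than minima
```

Scores like this ignore the temporal ordering of the extrema, which is precisely the information the graph-theoretic extraction procedure in the paper exploits.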
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and the real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model we find that the nominal-to-real transformation is not rejected for the money relation whereas the wealth relation cannot be expressed in real terms.
There is an increasing interest in incorporating significant citizen participation into the law-making process by developing the use of the internet in the public sphere. However, no well-accepted e-participation model has prevailed. This article points out that, to be successful, we need critical reflection of legal theory and we also need further institutional construction based on the theoretical reflection.
Contemporary dominant legal theories demonstrate too strong an internal legal point of view to empower the informal, social normative development on the internet. Regardless of whether we see the law as a body of rules or principles, the social aspect is always part of people’s background and attracts little attention. In this article, it is advocated that the procedural legal paradigm advanced by Jürgen Habermas represents an important breakthrough in this regard.
Further, Habermas’s co-originality thesis reveals a neglected internal relationship between public autonomy and private autonomy. I believe the co-originality theory provides the essential basis on which a connecting infrastructure between the legal and the social could be developed. In terms of the development of the internet to include the public sphere, co-originality can also help us direct the emphasis on the formation of public opinion away from the national legislative level towards the local level; that is, the network of governance.
This article is divided into two sections. The focus of Part One is to reconstruct the co-originality thesis (sections 2 and 3). As an example, the paper uses the application of discourse in Habermas's theory of adjudication. It argues that Habermas would be more coherent, in terms of his insistence on real communication in his discourse theory, if he allowed his judges to initiate improved interaction with society. This change is essential if the internal connection between public autonomy and private autonomy in the sense of court adjudication is to be truly enabled.
In order to demonstrate such improved co-original relationships, the empowering character of state-made law is instrumental in initiating the mobilization of legal intermediaries, both individual and institutional. A mutually enhanced relationship is thus formed between the formal, official organization and its governance counterpart, aided by its associated ‘local’ public sphere. Referring to Susan Sturm, the Harris v Forklift Systems Inc. (1993) decision of the Supreme Court of the United States in the field of sexual harassment is used as an example.
Using only one institutional example to illustrate how the co-originality thesis can be improved is not sufficient to rebuild the thesis, but this is as much as can be achieved in this article.
In Part Two, the paper examines, still at the institutional level, how Sturm develops an overlooked sense of impartiality, especially in the derivation of social norms; i.e. multi-partiality instead of neutral detachment (section 4). These two ideas should be combined as the criterion for impartiality to evaluate the legitimacy of the joint decision-making processes of both the formal official organization and ‘local’ public sphere.
Sturm’s emphasis on the deployment of intermediaries, both institutional and individual, can also enlighten discourse theory. Intermediaries are essential for connecting disassociated social networks, especially when communication breaks down because critical data, information, or knowledge is lacking, or because of misunderstandings rooted in disparities of value orientation. Where intermediaries are deployed, such gaps need not block further communication.
The institutional impact of the newly constructed co-originality thesis is also discussed in Part Two. Landwehr’s work on institutional design and assessment for deliberative interaction is discussed first. The article concludes with an indication of how the ‘local’ public sphere, through e-rulemaking or online dispute resolution, for example, can be constructed in light of the preceding discussion.
Autism spectrum disorders (ASD) have been associated with sensory hypersensitivity. A recent study reported visual acuity (VA) in ASD in the region reported for birds of prey. The validity of the results was subsequently doubted. This study examined VA in 34 individuals with ASD, 16 with schizophrenia (SCH), and 26 typically developing (TYP). Participants with ASD did not show higher VA than those with SCH and TYP. There were no substantial correlations of VA with clinical severity in ASD or SCH. This study could not confirm the eagle-eyed acuity hypothesis of ASD, or find evidence for a connection of VA and clinical phenotypes. Research needs to further address the origins and circumstances associated with altered sensory or perceptual processing in ASD.
Background: Researchers who wish to study stress-related disorders need to use valid, reliable, and sensitive instruments, and the Clinician-Administered PTSD Scale (CAPS) constitutes the gold standard in the assessment of posttraumatic stress disorder (PTSD). While the CAPS corresponds with PTSD criteria according to the DSM-5, researchers face a challenge with the forthcoming ICD-11: ICD-11 introduces the new diagnosis Complex PTSD (CPTSD), which does not exist in DSM-5.
Objective: Researchers as well as clinicians will need to assess the incidence and prevalence of CPTSD and will want to evaluate treatment effects according to both criteria sets. However, using two clinician-rated interviews is often not feasible and a burden to patients, particularly in psychotherapy research.
Method & Results: We have therefore developed the Complex PTSD Item Set additional to the CAPS (COPISAC). This clinician rating is an easy-to-use and economical addition to the CAPS that permits diagnosing CPTSD and evaluating its symptom severity. COPISAC consists of three items that assess disturbances in self-regulation, including prompts for symptom description and frequency, and two additional items assessing impairment. Diagnostic status and severity ratings for CPTSD are possible. Items that account for the specific forms of trauma which the ICD-11 describes as precursors of CPTSD (e.g. torture, being enslaved) are further suggested as additions to the Life Events Checklist. Conclusion: By introducing COPISAC at this point, we aim to enable an easy transition into diagnosing CPTSD and evaluating its course over treatment.
Introduction: Current prognostic gene expression profiles for breast cancer mainly reflect proliferation status and are most useful in ER-positive cancers. Triple negative breast cancers (TNBC) are clinically heterogeneous and prognostic markers and biology-based therapies are needed to better treat this disease.
Methods: We assembled Affymetrix gene expression data for 579 TNBC and performed unsupervised analysis to define metagenes that distinguish molecular subsets within TNBC. We used n = 394 cases for discovery and n = 185 cases for validation. Sixteen metagenes emerged that identified basal-like, apocrine and claudin-low molecular subtypes, or reflected various non-neoplastic cell populations, including immune cells, blood, adipocytes, stroma, angiogenesis and inflammation within the cancer. The expressions of these metagenes were correlated with survival and multivariate analysis was performed, including routine clinical and pathological variables.
Results: Seventy-three percent of TNBC displayed a basal-like molecular subtype that correlated with high histological grade and younger age. Survival of basal-like TNBC was not different from non-basal-like TNBC. High expression of immune cell metagenes was associated with good prognosis, whereas high expression of inflammation- and angiogenesis-related metagenes was associated with poor prognosis. A ratio of high B-cell and low IL-8 metagenes identified 32% of TNBC with good prognosis (hazard ratio (HR) 0.37, 95% CI 0.22 to 0.61; P < 0.001) and was the only significant predictor in multivariate analysis including routine clinicopathological variables.
Conclusions: We describe a ratio of high B-cell presence and low IL-8 activity as a powerful new prognostic marker for TNBC. Inhibition of the IL-8 pathway also represents an attractive novel therapeutic target for this disease.
Chloroplast genomes are difficult to assemble because of the presence of large inverted repeats. At the same time, correct assemblies are important, as chloroplast loci are frequently used for biogeography and population genetics studies. In an attempt to elucidate the orientation of the single-copy regions and to find suitable loci for chloroplast single nucleotide polymorphism (SNP)-based studies, circular chloroplast sequences for the ultra-centenary reference individual of European Beech (Fagus sylvatica), Bhaga, and an additional Polish individual (named Jamy) were obtained based on hybrid assemblies. The chloroplast genome of Bhaga was 158,458 bp, and that of Jamy was 158,462 bp long. Using long-read mapping on the configuration inferred in this study and the one suggested in a previous study, we found an inverted orientation of the small single-copy region. Compared to the previously published genome, the chloroplast genomes of Bhaga and of the individual from Poland have only two mismatches and three and two indels, respectively. The low divergence suggests low seed dispersal but high pollen dispersal. However, once chloroplast genomes become available from Pleistocene refugia, where a high degree of variation has been reported, they might prove useful for tracing the migration history of Fagus sylvatica in the Holocene.
The European Beech is the dominant climax tree in most regions of Central Europe and valued for its ecological versatility and hardwood timber. Even though a draft genome has been published recently, higher resolution is required for studying aspects of genome architecture and recombination. Here we present a chromosome-level assembly of the more than 300-year-old reference individual, Bhaga, from the Kellerwald-Edersee National Park (Germany). Its nuclear genome of 541 Mb was resolved into 12 chromosomes varying in length between 28 Mb and 73 Mb. Multiple nuclear insertions of parts of the chloroplast genome were observed, with one region on chromosome 11 spanning more than 2 Mb of the genome in which fragments up to 54,784 bp long and covering the whole chloroplast genome were inserted randomly. Unlike in Arabidopsis thaliana, ribosomal cistrons are present in Fagus sylvatica only in four major regions, in line with FISH studies. On most assembled chromosomes, telomeric repeats were found at both ends, while centromeric repeats were found to be scattered throughout the genome apart from their main occurrence per chromosome. The genome-wide distribution of SNPs was evaluated using a second individual from Jamy Nature Reserve (Poland). SNPs, repeat elements and duplicated genes were unevenly distributed in the genomes, with one major anomaly on chromosome 4. The genome presented here adds to the available highly resolved plant genomes and we hope it will serve as a valuable basis for future research on genome architecture and for understanding the past and future of European Beech populations in a changing climate.
We performed an experiment under long-term microgravity conditions aboard the International Space Station (ISS) to obtain information on the energetics and experimental constraints required for the formation of chondrules in the solar nebula by ‘nebular lightning’. As a simplified model system, we exposed porous forsterite (Mg2SiO4) dust particles to high-energy arc discharges. The characterization of the samples after their return by synchrotron microtomography and scanning electron microscopy revealed that aggregates had formed, consisting of several fused Mg2SiO4 particles. The partial melting and fusing of Mg2SiO4 dust particles under microgravity conditions leads to a strong reduction of their porosity. The experimental outcomes vary strongly in their appearance, from small spherical melt droplets (∅ ≈ 90 µm) to bigger, irregularly shaped aggregates (∅ ≈ 350 µm). Our results provide new constraints on energetic aspects of chondrule formation and a roadmap for future, more complex experiments on Earth and under microgravity conditions.
A chiral analog of the bicyclic guanidine TBD : synthesis, structure and Brønsted base catalysis
(2016)
Starting from (S)-β-phenylalanine, easily accessible by lipase-catalyzed kinetic resolution, a chiral triamine was assembled by a reductive amination and finally cyclized to form the title compound 10. In the crystals of the guanidinium benzoate salt, the six-membered rings of 10 adopt conformations close to an envelope, with the phenyl substituents in pseudo-axial positions. The unprotonated guanidine 10 catalyzes Diels–Alder reactions of anthrones and maleimides (25–30% ee). As a strong Brønsted base, it also promotes the retro-aldol reaction of some cycloadducts with kinetic resolution of the enantiomers. In three cases, the retro-aldol products (48–83% ee) could be recrystallized to high enantiopurity (≥95% ee). The absolute configuration of several compounds is supported by anomalous X-ray diffraction and by chemical correlation.
Two subvalent, redox-active diborane(4) anions, [3]4− and [3]2−, carrying exceptionally high negative charge densities are reported: Reduction of 9-methoxy-9-borafluorene with Li granules without stirring leads to the crystallization of the B(sp3)−B(sp2) diborane(5) anion salt Li[5]. [5]− contains a 2,2′-biphenyldiyl-bridged B−B core, a chelating 2,2′-biphenyldiyl moiety, and a MeO substituent. Reduction of Li[5] with Na metal gives the Na+ salt of the tetraanion [3]4− in which two doubly reduced 9-borafluorenyl fragments are linked via a B−B single bond. Comproportionation of Li[5] and Na4[3] quantitatively furnishes the diborane(4) dianion salt Na2[3], the doubly boron-doped congener of 9,9′-bis(fluorenylidene). Under acid catalysis, Na2[3] undergoes a formal Stone–Wales rearrangement to yield a dibenzo[g,p]chrysene derivative with B=B core. Na2[3] shows boron-centered nucleophilicity toward n-butyl chloride. Na4[3] produces bright blue chemiluminescence when exposed to air.
Bromodomains (BRDs) are conserved protein interaction modules that recognize (read) acetyl-lysine modifications; however, their roles in regulating cellular states and their potential as targets for the development of targeted treatment strategies are poorly understood. Here we present a set of 25 chemical probes, selective small-molecule inhibitors, covering 29 human bromodomain targets. We comprehensively evaluate the selectivity of this probe-set using BROMOscan and demonstrate the utility of the set in identifying roles of BRDs in cellular processes and potential translational applications. For instance, we discovered crosstalk between histone acetylation and the glycolytic pathway, resulting in a vulnerability of breast cancer cell lines under conditions of glucose deprivation or GLUT1 inhibition to inhibition of BRPF2/3 BRDs. This chemical probe-set will serve as a resource for future applications in the discovery of new physiological roles of bromodomain proteins in normal and disease states, and as a toolset for bromodomain target validation.
The discovery of clustered regularly interspaced short palindromic repeats and their associated proteins (Cas) has revolutionized the field of genome and epigenome editing. A number of new methods have been developed to precisely control the function and activity of Cas proteins, including fusion proteins and small-molecule modulators. Proteolysis-targeting chimeras (PROTACs) represent a new concept using the ubiquitin-proteasome system to degrade a protein of interest, highlighting the significance of chemically induced protein-E3 ligase interaction in drug discovery. Here, we engineered Cas proteins (Cas9, dCas9, Cas12, and Cas13) by inserting a Phe-Cys-Pro-Phe (FCPF) amino acid sequence (known as the π-clamp system) and demonstrate that the modified CasFCPF proteins can be (1) labeled in live cells by perfluoroaromatics carrying fluorescein or (2) degraded by a perfluoroaromatics-functionalized PROTAC (PROTAC-FCPF). A proteome-wide analysis of PROTAC-FCPF-mediated Cas9FCPF protein degradation revealed a high target specificity, suggesting a wide range of applications of perfluoroaromatics-induced proximity in the regulation of stability, activity, and functionality of any FCPF-tagged protein.
Background: One of the most popular and versatile models of murine melanoma is generated by inoculating B16 cells into the syngeneic C57BL6J mouse strain. A characterization of different modified B16 cell sub-lines is therefore of real practical interest. To this end, modern analytical tools such as surface-enhanced Raman spectroscopy/scattering (SERS) and the MTT assay were employed to characterize both the chemical composition and the proliferation behavior of the selected cells.
Methods: High-quality SERS signals were recorded from each of the four B16 cell sub-lines (B164A5, B16GMCSF, B16FLT3, B16F10) in order to observe the differences between the parent cell line (B164A5) and the derived B16 cell sub-lines. Cells were incubated with silver nanoparticles of 50–100 nm diameter, and nanoparticle uptake into the cell cytoplasm was verified by transmission electron microscopy (TEM). To characterize proliferation, growth curves of the four B16 cell lines, using different cell numbers and FCS concentrations, were obtained with the MTT proliferation assay. For correlation, doubling times were calculated.
Results: SERS bands allowed the identification of the main bio-molecular components inside the cells, such as proteins, nucleic acids, and lipids. An "on and off" SERS effect was constantly present, which may be explained by the employed laser power as well as by the different possible orientations of the adsorbed species in the cells with respect to the Ag nanoparticles. MTT results showed that, among the four tested cell sub-lines, B16F10 is the most proliferative and B164A5 has the lowest growth capacity. B16FLT3 and B16GMCSF cells show intermediate proliferation ability, with slightly lower potency for B16GMCSF cells.
Conclusion: Molecular fingerprint and proliferation behavior of four B16 melanoma cell sub-lines were elucidated by associating SERS investigations with MTT proliferation assay.
Serial quantification of BCR–ABL1 mRNA is an important therapeutic indicator in chronic myeloid leukaemia, but there is a substantial variation in results reported by different laboratories. To improve comparability, an internationally accepted plasmid certified reference material (CRM) was developed according to ISO Guide 34:2009. Fragments of BCR–ABL1 (e14a2 mRNA fusion), BCR and GUSB transcripts were amplified and cloned into pUC18 to yield plasmid pIRMM0099. Six different linearised plasmid solutions were produced with the following copy number concentrations, assigned by digital PCR, and expanded uncertainties: 1.08±0.13 × 106, 1.08±0.11 × 105, 1.03±0.10 × 104, 1.02±0.09 × 103, 1.04±0.10 × 102 and 10.0±1.5 copies/μl. The certification of the material for the number of specific DNA fragments per plasmid, copy number concentration of the plasmid solutions and the assessment of inter-unit heterogeneity and stability were performed according to ISO Guide 35:2006. Two suitability studies performed by 63 BCR–ABL1 testing laboratories demonstrated that this set of 6 plasmid CRMs can help to standardise a number of measured transcripts of e14a2 BCR–ABL1 and three control genes (ABL1, BCR and GUSB). The set of six plasmid CRMs is distributed worldwide by the Institute for Reference Materials and Measurements (Belgium) and its authorised distributors (https://ec.europa.eu/jrc/en/reference-materials/catalogue/; CRM code ERM-AD623a-f).
We have analyzed a series of eleven mutations in the 49-kDa protein of mitochondrial complex I (NADH:ubiquinone oxidoreductase) from Yarrowia lipolytica to identify functionally important domains in this central subunit. The mutations were selected based on sequence homology with the large subunit of [NiFe] hydrogenases. None of the mutations affected assembly of complex I, all decreased or abolished ubiquinone reductase activity. Several mutants exhibited decreased sensitivities toward ubiquinone-analogous inhibitors. Unexpectedly, seven mutations affected the properties of iron-sulfur cluster N2, a prosthetic group not located in the 49-kDa subunit. In three of these mutants cluster N2 was not detectable by electron-paramagnetic resonance spectroscopy. The fact that the small subunit of hydrogenase is homologous to the PSST subunit of complex I proposed to host cluster N2 offers a straightforward explanation for the observed, unforeseen effects on this iron-sulfur cluster. We propose that the fold around the hydrogen reactive site of [NiFe] hydrogenase is conserved in the 49-kDa subunit of complex I and has become part of the inhibitor and ubiquinone binding region. We discuss that the fourth ligand of iron-sulfur cluster N2 missing in the PSST subunit may be provided by the 49-kDa subunit.
During early G1 phase, Rb is exclusively mono-phosphorylated by cyclin D:Cdk4/6, generating 14 different isoforms with specific binding patterns to E2Fs and other cellular protein targets. While mono-phosphorylated Rb is dispensable for early G1 phase progression, interfering with cyclin D:Cdk4/6 kinase activity prevents G1 phase progression, questioning the role of cyclin D:Cdk4/6 in Rb inactivation. To dissect the molecular functions of cyclin D:Cdk4/6 during cell cycle entry, we generated a single-cell reporter for Cdk2 activation, Rb inactivation and cell cycle entry by CRISPR/Cas9 tagging of endogenous p27 with mCherry. Through single-cell tracing of Cdk4i cells, we identified a time-sensitive early G1 phase specific Cdk4/6-dependent phosphorylation gradient that regulates cell cycle entry timing and resides between serum-sensing and cyclin E:Cdk2 activation. To reveal the substrate identity of the Cdk4/6 phosphorylation gradient, we performed whole proteomic and phospho-proteomic mass spectrometry, and identified 147 proteins and 82 phospho-peptides that significantly changed due to Cdk4 inhibition in early G1 phase. In summary, we identified novel (non-Rb) cyclin D:Cdk4/6 substrates that connect early G1 phase functions with cyclin E:Cdk2 activation and Rb inactivation by hyper-phosphorylation.
The pathophysiology of Takotsubo Syndrome (TTS) is not completely understood, and the trigger of sudden cardiac death (SCD) in TTS is not clear either. We therefore sought to find an association between TTS and primary electrical diseases. A total of 148 TTS patients were analyzed between 2003 and 2017 in a bi-centric manner. Additionally, a literature review was performed. The patients were included in an ongoing retrospective cohort database. The coexistence of TTS and primary electrical diseases was confirmed in five cases, as follows: catecholaminergic polymorphic ventricular tachycardia (CPVT, 18-year-old female) (n = 1), LQTS 1 (72-year-old female and 65-year-old female) (n = 2), LQTS 2 (17-year-old female) (n = 1), and LQTS in the absence of mutations (22-year-old female). Four patients suffered from malignant tachyarrhythmia and recurrent syncope after TTS. Except for the CPVT patient and one LQTS 1 patient, all other cases underwent subcutaneous ICD implantation. An event recorder in the CPVT patient did not detect arrhythmias after the start of beta-blocker therapy. The diagnosis of primary electrical disease was unmasked by a TTS event in 80% of cases. This diagnosis triggered a clinical and genetic family screening confirming the diagnosis of primary electrical disease. A subsequent literature review identified five cases, as follows: a congenital atrioventricular block (n = 1), a Jervell and Lange-Nielsen Syndrome (n = 1), a family LQTS in the absence of a mutation (n = 2), and LQTS 2 (n = 1). A primary electrical disease should be suspected in young and old TTS patients with a family history of sudden cardiac death. In suspected cases, e.g., with ongoing QT interval prolongation despite recovery of left ventricular ejection fraction, a family screening is recommended.
Vaccination represents one of the fundamentals in the fight against SARS-CoV-2. Myocarditis has been reported as a rare but possible adverse consequence of different vaccines, and its clinical presentation can range from mild symptoms to acute heart failure. We report a case of a 29-year-old man who presented with fever and retrosternal pain after receiving SARS-CoV-2 vaccine. Cardiac magnetic resonance imaging and laboratory data revealed typical findings of acute myocarditis.
A case of Lymphangioleiomyomatosis (LAM) of the lung in a patient with a history of breast cancer
(2019)
Background: Lymphangioleiomyomatosis (LAM) is a rare progressive cystic and nodular disease of the lung characterized by smooth muscle cell proliferation. LAM predominantly affects young premenopausal women. This report is of a case of LAM presenting in a 47-year-old woman with a past history of breast cancer and discusses the possibility of an association between the two conditions.
Case report: A 47-year-old woman presented as an emergency with an exacerbation of a four-month history of shortness of breath and dry cough. Her symptoms began following the start of anti-hormonal treatment with letrozole and goserelin acetate for a moderately differentiated (grade 2) invasive ductal carcinoma of the breast (pT2, pN0, M0) which was positive for expression of estrogen receptor (ER+), progesterone receptor (PR+), and human epidermal growth factor receptor 2 (HER2+). Until four months previously, she had received breast-conserving treatment with radiotherapy and tamoxifen therapy. Following hospital admission, she was found to be in type I respiratory failure. Chest X-ray, lung computed tomography (CT), and positron-emission tomography (PET) showed diffuse cystic and nodular lung lesions, consistent with a diagnosis of LAM, and antihormonal therapy was discontinued. She developed pericarditis that was treated with the anti-inflammatory agent colchicine. Treatment with letrozole and sirolimus improved her respiratory symptoms.
Conclusions: A rare case of LAM is presented in a woman with a recent history of breast cancer. Because both tumors were hormone-dependent, this may support common underlying gene associations and signaling pathways between the two types of tumor.
A candidate gene cluster for the bioactive natural product gyrophoric acid in lichen-forming fungi
(2022)
Natural products of lichen-forming fungi are structurally diverse and have a variety of medicinal properties. Despite this, they have limited implementation in industry, because the corresponding genes remain unknown for most of the natural products. Here we implement a long-read sequencing and bioinformatic approach to identify the biosynthetic gene cluster of the bioactive natural product gyrophoric acid (GA). Using 15 high-quality genomes representing nine GA-producing species of the lichen-forming fungal genus Umbilicaria, we identify the most likely GA cluster and investigate cluster gene organization and composition across the nine species. Our results show that GA clusters are promiscuous within Umbilicaria, with only three genes that are conserved across species, including the PKS gene. In addition, our results suggest that the same cluster codes for different but structurally similar NPs, i.e., GA, umbilicaric acid and hiascic acid, bringing new evidence that lichen metabolite diversity is also generated through regulatory mechanisms at the molecular level. Ours is the first study to identify the most likely GA cluster, and thus provides essential information to open new avenues for biotechnological approaches to producing and modifying GA and similar lichen-derived compounds. We show that bioinformatics approaches are useful in linking genes and potentially associated natural products. Genome analyses help unlock the pharmaceutical potential of organisms such as lichens, which are biosynthetically diverse but slow growing and difficult to cultivate due to their symbiotic nature.
At a site in the Bolivian Chiquitano region composed of a mosaic of pastureland and primary Chiquitano Dry Forest (CDF), we conducted a camera-trapping study to (1) survey the mammals and (2) compare individual Jaguar numbers with other Chiquitano sites. To this end, we installed 13 camera stations (450 ha polygon) over a period of six months. On 1,762 camera-days and in 1,654 independent capture events, we recorded 24 mammalian species that represent the native fauna of large and medium-sized mammals, including apex predators (Puma, Jaguar), meso-carnivores (Ocelot, Jaguarundi, Margay), and large herbivores (Tapir, Collared and White-lipped Peccary). We identified six adult Jaguars and found indications of successful reproductive activity. Captures of Jaguars were higher in CDF than in altered habitats. In summary, we believe that (1) the mammal species richness, (2) the high capture numbers of indicator species, and (3) the high capture numbers of Jaguar indicate that our study area has a good conservation status. Future efforts should be undertaken to maintain this status, and monitoring programs in this region are necessary to further evaluate the potential importance of the Chiquitano region as a possible key region for mammals, especially Jaguars, in South America.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (nondeterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering <=c is also analysed: Ω is not a least element, and <=c already implies contextual equivalence w.r.t. may-convergence.
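The locally bottom-avoiding behaviour of amb described in this abstract can be illustrated with a minimal sketch. This is not the calculus itself but an operational analogy in Python; the thread-based race and the helper names (`amb`, `diverging`) are illustrative assumptions, with a long sleep standing in for a non-terminating (bottom) computation.

```python
import queue
import threading
import time

def amb(f, g):
    # Bottom-avoiding choice: evaluate both computations concurrently
    # and return the result of whichever terminates first. If one
    # branch diverges, the other branch's result is still delivered.
    results = queue.Queue()
    for fn in (f, g):
        threading.Thread(target=lambda fn=fn: results.put(fn()), daemon=True).start()
    return results.get()

def diverging():
    # Stands in for a non-terminating (bottom) computation.
    time.sleep(3600)

print(amb(diverging, lambda: 42))  # the terminating branch wins: 42
```

Note that a strict binary choice operator without this race behaviour would be stuck on the diverging argument; bottom-avoidance is exactly what makes amb non-deterministic yet useful in a may/must-convergence setting.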
I present a new business cycle model in which decision making follows a simple mental process motivated by neuroeconomics. Decision makers first compute the value of two different options and then choose the option that offers the highest value, but with errors. The resulting model is highly tractable and intuitive. A demand function in levels replaces the traditional Euler equation. As a result, even liquid consumers can have a large marginal propensity to consume. The interest rate affects consumption through the cost of borrowing and not through intertemporal substitution. I discuss the implications for stimulus policies.
The most frequently used boundary-layer turbulence parameterizations in numerical weather prediction (NWP) models are turbulence kinetic energy (TKE) based schemes. However, these parameterizations suffer from a potential weakness, namely their strong dependence on an ad-hoc quantity, the so-called turbulence length scale. The physical interpretation of the turbulence length scale is difficult, and hence it cannot be directly related to measurements or large eddy simulation (LES) data. Consequently, formulations for the turbulence length scale in basically all TKE schemes are based on simplified assumptions and are model-dependent. A good reference for the independent evaluation of turbulence length scale expressions for NWP modeling is missing. Here we propose a new turbulence length scale diagnostic that can be used in the gray zone of turbulence without modifying the underlying TKE turbulence scheme. The new diagnostic is based on the TKE budget: the core idea is to encapsulate the sum of the molecular dissipation and the cross-scale TKE transfer into an effective dissipation and associate it with the new turbulence length scale. This effective dissipation can then be calculated as a residuum in the TKE budget equation (for horizontal sub-domains of different sizes) using LES data. An estimation of the scale dependence of the diagnosed turbulence length scale using this novel method is presented for several idealized cases.
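The residuum idea can be sketched in a few lines. This is only an illustrative reading of the abstract, not the authors' implementation: the effective dissipation is taken as the residuum of the remaining TKE budget terms, and a length scale is then recovered via the common closure ε = c_ε · e^(3/2)/L, where the closure constant c_ε and the toy numbers are assumptions of this sketch.

```python
import numpy as np

def diagnosed_length_scale(tke, production, transport, tendency, c_eps=1.0):
    """Diagnose a turbulence length scale from the TKE budget.

    The effective dissipation (molecular dissipation plus cross-scale
    TKE transfer) is computed as the budget residuum and then inverted
    via the standard closure  eps = c_eps * e**1.5 / L.  The constant
    c_eps = 1 is an assumption of this sketch, not a value from the paper.
    """
    eps_eff = production + transport - tendency   # budget residuum
    eps_eff = np.maximum(eps_eff, 1e-12)          # guard against division by ~0
    return c_eps * tke ** 1.5 / eps_eff

# Toy values (TKE in m^2 s^-2, budget terms in m^2 s^-3), purely illustrative.
L = diagnosed_length_scale(tke=0.5, production=2e-3, transport=1e-4, tendency=1e-4)
```

Applied to LES data, the same residuum would be evaluated per horizontal sub-domain, which is what makes the diagnostic scale-dependent.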
In May 2008, Cyclone Nargis devastated Myanmar/Burma, killing 140,000 people. The autocratically governed country, however, rejected disaster relief as interference in its internal affairs and refused the import of medicines and food. In view of this situation, the French foreign minister Kouchner urged the UN to act on the basis of the Responsibility to Protect (R2P).
This act of securitization, however, stands in contrast to the media coverage, as Gabi Schlag examines in this paper. The visual material from the disaster area in particular tells a different story. The photos accompanying BBC.com's reporting on the topic form a visual narrative that suggests not helplessness but a controlled, level-headed response by local forces. This contrast points to the proverbial power of images, which pre-structure the respective conditions of possible action.
Consumers purchase energy in many forms. Sometimes energy goods are consumed directly, for instance, in the form of gasoline used to operate a vehicle, electricity to light a home, or natural gas to heat a home. At other times, the cost of energy is embodied in the prices of goods and services that consumers buy, say when purchasing an airline ticket or when buying online garden furniture made from plastic to be delivered by mail. Previous research has focused on quantifying the pass-through of the price of crude oil or the price of motor gasoline to U.S. inflation. Neither approach accounts for the fact that percent changes in refined product prices need not be proportionate to the percent change in the price of oil, that not all energy is derived from oil, and that the correlation of price shocks across energy markets is far from one. This paper develops a vector autoregressive model that quantifies the joint impact of shocks to several energy prices on headline and core CPI inflation. Our analysis confirms that focusing on gasoline price shocks alone will underestimate the inflationary pressures emanating from the energy sector, but not enough to overturn the conclusion that much of the observed increase in headline inflation in 2021 and 2022 reflected non-energy price shocks.
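A vector autoregression of the kind the abstract describes is estimated equation by equation via OLS. The following numpy-only sketch fits a VAR(1) on two synthetic series standing in for an energy price and inflation; it is an illustration of the estimator only, not the authors' specification (which involves several energy prices plus headline and core CPI), and the data-generating numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a random-walk "energy price" and an "inflation"
# series that loads on lagged energy (NOT the authors' data set).
T = 200
energy = rng.standard_normal(T).cumsum() * 0.1
inflation = 0.3 * np.roll(energy, 1) + rng.standard_normal(T) * 0.2
y = np.column_stack([energy, inflation])

# VAR(1) by equation-wise OLS:  y_t = c + A y_{t-1} + u_t
Y = y[1:]                                        # regressand, shape (T-1, 2)
X = np.column_stack([np.ones(T - 1), y[:-1]])    # constant + one lag
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)     # shape (3, 2)
A = coef[1:].T                                   # 2x2 lag-1 coefficient matrix
```

With the estimated coefficients in hand, impulse responses of inflation to an energy-price shock follow by iterating the companion form; the paper's point is that a joint system like this captures cross-energy-market correlations that a single gasoline pass-through regression misses.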
We tested 6–7-year-olds, 18–22-year-olds, and 67–74-year-olds on an associative memory task that consisted of knowledge-congruent and knowledge-incongruent object–scene pairs that were highly familiar to all age groups. We compared the three age groups on their memory congruency effect (i.e., better memory for knowledge-congruent associations) and on a schema bias score, which measures the participants’ tendency to commit knowledge-congruent memory errors. We found that prior knowledge similarly benefited memory for items encoded in a congruent context in all age groups. However, for associative memory, older adults and, to a lesser extent, children overrelied on their prior knowledge, as indicated by both an enhanced congruency effect and schema bias. Functional Magnetic Resonance Imaging (fMRI) performed during memory encoding revealed an age-independent memory × congruency interaction in the ventromedial prefrontal cortex (vmPFC). Furthermore, the magnitude of vmPFC recruitment correlated positively with the schema bias. These findings suggest that older adults are most prone to rely on their prior knowledge for episodic memory decisions, but that children can also rely heavily on prior knowledge that they are well acquainted with. Furthermore, the fMRI results suggest that the vmPFC plays a key role in the assimilation of new information into existing knowledge structures across the entire lifespan. vmPFC recruitment leads to better memory for knowledge-congruent information but also to a heightened susceptibility to commit knowledge-congruent memory errors, in particular in children and older adults.
Knowledge discovery in biomedical data using supervised methods assumes that the data contain structure relevant to the class structure if a classifier can be trained to assign a case to the correct class better than by guessing. In this setting, acceptance or rejection of a scientific hypothesis may depend critically on the ability to classify cases better than randomly, without high classification performance being the primary goal. Random forests are often chosen for knowledge-discovery tasks because they are considered a powerful classifier that does not require sophisticated data transformation or hyperparameter tuning and can be regarded as a reference classifier for tabular numerical data. Here, we report a case where the failure of random forests using the default hyperparameter settings in the standard implementations of R and Python would have led to the rejection of the hypothesis that the data contained structure relevant to the class structure. After tuning the hyperparameters, classification performance increased from 56% to 65% balanced accuracy in R, and from 55% to 67% balanced accuracy in Python. More importantly, the 95% confidence intervals in the tuned versions were to the right of the 50% value that characterizes guessing-level classification. Thus, tuning provided the desired evidence that the data structure supported the class structure of the data set. In this case, tuning did more than produce a small quantitative gain in classification accuracy; it changed the interpretation of the data set. This is especially consequential when classification performance is low and a small improvement lifts the balanced accuracy above the 50% guessing level.
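The default-versus-tuned comparison can be reproduced in outline with scikit-learn. This is a minimal sketch on synthetic weak-signal data, not the authors' biomedical data set; the grid values are illustrative choices, not the ones used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Weak-signal synthetic data, loosely mimicking a hard knowledge-discovery
# setting where performance hovers near the 50% guessing level.
X, y = make_classification(n_samples=600, n_features=30, n_informative=3,
                           n_redundant=0, flip_y=0.3, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# 1) Default hyperparameters, as in the standard R/Python implementations.
default_rf = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
ba_default = balanced_accuracy_score(y_te, default_rf.predict(X_te))

# 2) Tuned hyperparameters via cross-validated grid search on balanced accuracy.
grid = GridSearchCV(RandomForestClassifier(random_state=1),
                    param_grid={"min_samples_leaf": [1, 5, 20],
                                "max_features": ["sqrt", 0.5, None]},
                    scoring="balanced_accuracy", cv=5).fit(X_tr, y_tr)
ba_tuned = balanced_accuracy_score(y_te, grid.best_estimator_.predict(X_te))
```

On real data, the decisive question is then whether a confidence interval for the tuned balanced accuracy (e.g. from repeated cross-validation or a bootstrap) lies entirely above 50%.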
This policy letter provides an overview of the strengths, weaknesses, risks and opportunities of the upcoming comprehensive risk assessment, a euro area-wide evaluation of bank balance sheets and business models. If carried out properly, the 2014 comprehensive assessment will lead the euro area into a new era of banking supervision. Policy makers in euro area countries are now under severe pressure to define a credible backstop framework for banks. This framework, as the author argues, needs to be a broad, quasi-European system of mutually reinforcing backstops.
Child maltreatment remains a major health threat globally that requires the understanding of socioeconomic and cultural contexts to craft effective interventions. However, little is known about research agendas globally and the development of knowledge-producing networks in this field of study. This study aims to provide a bibliometric overview of child maltreatment publications and to trace their growth from 1916 to 2018. Data from the Web of Science Core Collection were collected in May 2018. Only research articles and reviews written in the English language were included, with no restrictions by publication date. We analyzed publication years, number of papers, journals, authors, keywords and countries, and presented a country-collaboration and keyword co-occurrence analysis. From 1916 to 2018, 47,090 papers (53.0% in 2010–2018) were published in 9442 journals. Child Abuse & Neglect (2576 papers; 5.5%), Children and Youth Services Review (1130 papers; 2.4%) and Pediatrics (793 papers, 1.7%) published the most papers. The most common research areas were Psychology (16,049 papers, 34.1%), Family Studies (8225 papers, 17.5%), and Social Work (7367 papers, 15.6%). Among 192 countries with research publications, the most prolific countries were the United States (26,367 papers), England (4676 papers), Canada (3282 papers) and Australia (2664 papers). We identified 17 authors who had more than 60 scientific items. The most cited papers (with at least 600 citations) were published in 29 journals, headed by the Journal of the American Medical Association (JAMA) (7 papers) and the Lancet (5 papers). This overview of global research on child maltreatment indicated an increasing trend in this topic, with the world’s leading centers located in Western countries, led by the United States.
We call for interdisciplinary research approaches to evaluating and intervening in child maltreatment, with a focus on low- and middle-income country (LMIC) settings and specific contexts.
A Bayesian framework to estimate diversification rates and their variation through time and space
(2011)
Background: Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification.
Results: We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinids) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification.
Conclusions: Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling.
The Wood-Ljungdahl pathway of anaerobic CO₂ fixation with hydrogen as reductant is considered a candidate for the first life-sustaining pathway on earth because it combines carbon dioxide fixation with the synthesis of ATP via a chemiosmotic mechanism. The acetogenic bacterium Acetobacterium woodii uses an ancient version of the pathway that has only one site to generate the electrochemical ion potential used to drive ATP synthesis, the ferredoxin-fueled, sodium-motive Rnf complex. However, hydrogen-based ferredoxin reduction is endergonic, and how the steep energy barrier is overcome has long been an enigma. We have purified a multimeric [FeFe]-hydrogenase from A. woodii containing four subunits (HydABCD), which is predicted to have one [H]-cluster, three [2Fe2S] and six [4Fe4S] clusters, consistent with the experimental determination of 32 mol of Fe and 30 mol of acid-labile sulfur. The enzyme indeed catalyzed hydrogen-based ferredoxin reduction, but required NAD⁺ for this reaction. NAD⁺ was also reduced, but only in the presence of ferredoxin. NAD⁺ and ferredoxin reduction both required flavin. Spectroscopic analyses revealed that NAD⁺ and ferredoxin reduction are strictly coupled and occur in a 1:1 stoichiometry. Apparently, the multimeric hydrogenase of A. woodii is a soluble energy-converting hydrogenase that uses electron bifurcation to drive the endergonic ferredoxin reduction by coupling it to the exergonic NAD⁺ reduction.
A B-factor for NOEs?
(2022)
Nuclear Overhauser effects (NOEs) are influenced by motion. Here, we derive exact, analytical results for a model of isotropic, harmonic fluctuations of atom positions that corresponds to the one underlying crystallographic B-factors. The model includes steric repulsion and yields closed-form expressions for the expected value of general invertible functions of the distance between two atoms, with the special case r⁻⁶ for NOEs. We discuss the implications for the definition of an NOE-based B-factor in solution NMR.
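The paper derives exact closed-form expressions; as a purely illustrative sketch of the underlying averaging problem (not the authors' derivation, and omitting their steric-repulsion term), the motional average of r⁻⁶ under isotropic harmonic (Gaussian) position fluctuations can be estimated by Monte Carlo. All numbers here are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two atoms at mean separation r0, each fluctuating isotropically and
# harmonically -- the Gaussian model behind crystallographic B-factors.
r0 = np.array([4.0, 0.0, 0.0])       # mean inter-atomic vector (Angstrom)
sigma = 0.5                          # rms displacement per coordinate, per atom

# The relative vector of two independently fluctuating atoms has
# per-coordinate standard deviation sigma * sqrt(2).
d = r0 + rng.normal(scale=sigma * np.sqrt(2), size=(200_000, 3))
r = np.linalg.norm(d, axis=1)

mean_r6 = np.mean(r ** -6.0)         # NOE-relevant <r^-6> average
naive = np.linalg.norm(r0) ** -6.0   # value for a static pair at r0
# Because r^-6 is convex, fluctuations bias <r^-6> upward relative to r0^-6.
```

The bias of ⟨r⁻⁶⟩ relative to the static value is exactly the motional effect that an NOE-based B-factor would have to encode.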
Multimodal therapy of glioblastoma (GBM) reveals inter-individual variability in terms of treatment outcome. Here, we examined whether a miRNA signature can be defined for the a priori identification of patients with particularly poor prognosis.
FFPE sections from 36 GBM patients along with overall survival follow-up were collected retrospectively and subjected to miRNA signature identification from microarray data. A risk score based on the expression of the signature miRNAs and Cox proportional-hazards coefficients was calculated for each patient, followed by validation in a matched GBM subset of TCGA. Genes potentially regulated by the signature miRNAs were identified by a correlation approach followed by pathway analysis.
A prognostic 4-miRNA signature, independent of MGMT promoter methylation, age, and sex, was identified and a risk score was assigned to each patient that allowed defining two groups significantly differing in prognosis (p-value: 0.0001, median survival: 10.6 months and 15.1 months, hazard ratio = 3.8). The signature was technically validated by qRT-PCR and independently validated in an age- and sex-matched subset of standard-of-care treated patients of the TCGA GBM cohort (n=58). Pathway analysis suggested tumorigenesis-associated processes such as immune response, extracellular matrix organization, axon guidance, and signalling by NGF, GPCR and Wnt. Here, we describe the identification and independent validation of a 4-miRNA signature that allows stratification of GBM patients into different prognostic groups in combination with a defined threshold and set of coefficients, and that could be utilized as a diagnostic tool to identify GBM patients for improved and/or alternative treatment approaches.
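Such a risk score is typically the Cox linear predictor: a weighted sum of the signature-miRNA expression values, dichotomized at a fixed threshold. The sketch below illustrates only this generic construction; the coefficients, expression values and threshold are invented, not the paper's.

```python
import numpy as np

# Hypothetical Cox coefficients for a 4-miRNA signature (illustrative
# values -- the paper's actual coefficients are not reproduced here).
coefficients = np.array([0.8, -0.5, 1.1, -0.3])

def risk_score(expression):
    """Cox linear predictor: signature-miRNA expression values weighted
    by their proportional-hazards coefficients and summed per patient."""
    return expression @ coefficients

# Expression matrix: patients x 4 signature miRNAs (toy data).
expr = np.array([[2.1, 0.5, 1.8, 0.2],
                 [0.4, 1.9, 0.3, 2.2]])
scores = risk_score(expr)

threshold = 1.0                      # a fixed, pre-defined cut-off (assumed)
high_risk = scores > threshold       # stratify into two prognostic groups
```

Because the threshold and coefficients are fixed in advance, the same rule can be applied to a single new patient, which is what makes the signature usable as a diagnostic tool.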
In November 2016, magnetotelluric (MT) data were collected at the Ceboruco Volcano in cooperation with the Centro de Sismología y Volcanología de Occidente (SisVoc, Universidad de Guadalajara, Mexico). The Ceboruco is a 2280 m high stratovolcano located in Nayarit State, Mexico. It lies in the central part of the Tepic-Zacoalco Rift (TZR), which constitutes the north-western end of the Trans-Mexican Volcanic Belt. Together with the Chapala and Colima rifts (in the Jalisco Block), it forms the triple rift system that developed as a consequence of the ongoing subduction of the Rivera and Cocos oceanic plates beneath the North American continental crust. Although its last eruption occurred in 1870, it is the most active volcano in the area, showing volcanic-earthquake activity together with ongoing vapor emissions. The survey was part of a geothermal project (CeMIEGeo-P24) and focused on the determination of electrical conductivity properties to characterize the deep structure and the geothermal potential of the volcano. Frequency dependent magnetotelluric response functions were calculated from 25 broadband MT stations, which covered an area of 10 × 10 km² including its crater, calderas and foreland. The results were interpreted using anisotropic 3-D forward modelling and isotropic 3-D inversion approaches, considering strong topographical effects. The final resistivity model implies a highly conductive layer, reaching from near-surface to approximately 2 km depth, which might be related to a hydrothermal system. Here, mineralized fluids and clay minerals can cause high conductivities around 1 S/m. For longer periods, the principal axes of the MT response tensors (phase tensor, apparent resistivity tensor) are in good agreement with the strike direction of the underlying rift system. However, they are not rendered by the isotropic inversion.
Thus the data suggest an anisotropic electrical conductivity at greater depth with its principal axis determined by the response tensors.
Purpose: Classification and treatment of WHO grade II/III gliomas have dramatically changed. Implementing molecular markers into the WHO classification raised discussions about the significance of grading and clinical trials showed overall survival (OS) benefits for combined radiochemotherapy. As molecularly stratified treatment data outside clinical trials are scarce, we conducted this retrospective study.
Methods: We identified 343 patients (1995–2015) with newly diagnosed WHO grade II/III gliomas and analyzed molecular markers, patient characteristics, symptoms, histology, treatment, time to treatment failure (TTF) and OS.
Results: IDH-status was available for all patients (259 mutant, 84 IDH1-R132H-non-mutant). Molecular subclassification was possible in 173 tumors, resulting in diagnosis of 80 astrocytomas and 93 oligodendrogliomas. WHO grading remained significant for OS in astrocytomas/IDH1-R132H-non-mutant gliomas (p < 0.01) but not for oligodendroglioma (p = 0.27). Chemotherapy (and temozolomide in particular) showed inferior OS compared to radiotherapy in astrocytomas (median 6.1/12.1 years; p = 0.03) and oligodendrogliomas (median 13.2/not reached (n.r.) years; p = 0.03). While radiochemotherapy improved TTF in oligodendroglioma (median radiochemotherapy n.r./chemotherapy 3.8/radiotherapy 7.3 years; p < 0.001/ = 0.06; OS data immature) the effect, mainly in combination with temozolomide, was weaker in astrocytomas (median radiochemotherapy 6.7/chemotherapy 2.3/radiotherapy 2.0 years; p < 0.001/ = 0.11) and did not translate to improved OS (median 8.4 years).
Conclusion: This is one of the largest retrospective, real-life datasets reporting treatment and outcome in low-grade gliomas incorporating molecular markers. Current histologic grading features remain prognostic in astrocytomas while being insignificant in oligodendroglioma with interfering treatment effects. Chemotherapy (temozolomide) was less effective than radiotherapy in both astrocytomas and oligodendrogliomas while radiochemotherapy showed the highest TTF in oligodendrogliomas.
Anaerobic ammonium oxidation (anammox) is a major process in the biogeochemical nitrogen cycle in which nitrite and ammonium are converted to dinitrogen gas and water through the highly reactive intermediate hydrazine. So far, it is unknown how anammox organisms convert the toxic hydrazine into nitrogen and harvest the extremely low potential electrons (−750 mV) released in this process. We report the crystal structure and cryo-electron microscopy structures of the responsible enzyme, hydrazine dehydrogenase, which is a 1.7 MDa multiprotein complex containing an extended electron transfer network of 192 heme groups spanning the entire complex. This unique molecular arrangement suggests a way in which the protein stores and releases the electrons obtained from hydrazine conversion, the final step in the globally important anammox process.
So far, personal feedback in lectures with hundreds of students still seems utopian – even after the digitalization boom in times of the coronavirus. Tools from the research field of »learning analytics« could in the future give students feedback and at the same time provide their supervisors with clues about where help is still needed.
The focus of this contribution is on the mode of capitalism within the industrialized sectors of "emerging markets". Particularly in the context of the rise of the BRIC countries (Brazil, Russia, India and China), this question has gained considerable importance, also for the development of the world economy as a whole. The core question is whether the type of capitalism within these economies is similar to the capitalist varieties of the triad, or diverges in more or less permanent ways. The article gives a preliminary answer to this question by developing a rough sketch of a "BRIC" model of capitalism and illustrating this model with the case of Brazil. In terms of theory, the article extends the Comparative Capitalism (CC) perspective to the BRICs. The focus is, on the one hand, on the classical questions of CC, i.e. the determinants of economic development and the differences from other types of capitalism, and, on the other hand, on the relationship between these varieties and social inequality. It argues that the "state-permeated market economies" of the BRICs rely on clans as a mode of social coordination. As demonstrated by the case of Brazil, this type of capitalism can be quite successful, but is based on a highly unequal distribution of economic and political resources.
The title compound, [FeZr2(C5H5)4Cl2(C13H18B2)], is a heteronuclear complex that consists of a [3]ferrocenophane moiety substituted at each cyclopentadienyl (Cp) ring by a BH3 group; the BH3 group is bonded via two H atoms to the Zr atom of the zirconocene chloride moiety in a bidentate fashion. The two Cp rings of the [3]ferrocenophane moiety are aligned at a dihedral angle of 8.9 (4)° arising from the strain of the propane-1,3-diyl bridge linking the two Cp rings. [One methylene group is disordered over two positions with a site-occupation factor of 0.552 (18) for the major occupied site.] The dihedral angles between the Cp rings at the two Zr atoms are 50.0 (3) and 51.7 (3)°. The bonding Zr⋯H distances are in the range 1.89 (7)–2.14 (7) Å. As the two Cp rings of the ferrocene unit are connected by an ansa bridge, the two Zr atoms are brought within 6.485 (1) Å of each other. The crystal packing features C—H⋯Cl interactions.
[Conference report] Making finance sustainable: Ten years Equator Principles – success or letdown?
(2013)
In 2003, a number of banks adopted the Equator Principles (EPs), a voluntary code of conduct based on the International Finance Corporation’s (IFC) performance standards, to ensure the ecological and social sustainability of project finance. These so-called Equator Principles Financial Institutions (EPFIs) commit to requiring their borrowers to adopt sustainable management plans for the environmental and social risks associated with their projects. The Principles apply to the project finance business segment of the banks and cover projects with a total cost of US $10 million or more. While developing countries long relied on the World Bank and other public assistance to finance infrastructure projects, a shift to private funding has occurred in recent years. NGOs were frustrated by this shift in project finance, as they had spent their resources exercising pressure on public financial institutions to incorporate environmental and social standards into their project finance activities. Once NGO pressure shifted to private financial institutions, however, the latter adopted the EPs for fear of reputational risks. NGOs had laid down their own, more ambitious ideas about sustainable finance in the Collevecchio Declaration on Financial Institutions and Sustainability. Legally speaking, the EPs are a self-regulatory soft law instrument. However, they have a hard law dimension, as the Equator Banks require their borrowers to comply with the EPs through covenants in the loan contracts that may trigger a default in case of violation. ...
It is a rare and wonderful thing when a book of 383 pages leaves a reader wanting to read more, much more in fact. That is certainly the case with this intriguing collection of thirteen assorted essays on the Rhine economy from 1815 to the present, organized in six broad topical sections: origins, enterprises, sectors and clusters, infrastructures, transport, and environment. ...
The volume under review is the result of a conference on historical graffiti held at the Ludwig-Maximilians-University of Munich in 2017. The aim of this book is to analyse — for the first time — graffiti from the ancient, medieval and modern periods in their historical and geographical contexts from an interdisciplinary point of view. Following this comparative approach the authors show the tremendous potential of this nascent area of research by investigating epigraphic material that has been neglected and underestimated by scholars for a long time. ...
Since the study of Late Antiquity evolved in the last few decades into an important research topic, several publications have been dedicated to the late antique city, resulting in lively discussions on "decline" and "transition". In line with this evolution Late Antiquity has recently been the central theme of several conferences and workshops, dealing with specific study themes of Late Antiquity as a whole, focussing on a particular time period and/or dedicated to well-defined geographical areas. ...
After this contribution dealing with the capital of Asia, the paper of Axel Filges discusses the late antique and Byzantine situation in the smaller town of Blaundos in Phrygia (Zum Aussagepotential ruinöser Mauern. Bevölkerung und Bebauung im spätantiken und byzantinischen Blaundos [Phrygia]). ...
In 1875, the Liebig Extract of Meat Company began to distribute a series of pictures printed on small (11 x 7 cm), colorful, collectible cardboard cards along with its main product, Fleischextrakt. While not the first to adopt this advertising technique, Liebig quickly became the best-known purveyor of Sammelbilder. ...
The volume under review contains the published proceedings of a conference held in 2009 with the challenging title "Merowingische Monetarmünzen und der Beginn des Mittelalters". These Merovingian "Monetarmünzen" are a distinctive group of coins of which fewer than 10,000 are currently known. Quite suddenly, in the late sixth century, this type of gold coinage appears, with the name of a moneyer (monetarius) on the obverse and a place name on the reverse (presumably, but not necessarily in all instances, the mint). Thus, over a thousand moneyers and 722 place names are recorded, many attested only once or twice. In the late seventh century these coins slowly give way to a system based on the silver penny/denier, which no longer shows the names of moneyers. Who were these moneyers? What was their relationship with the court and the kings? To what ends were these coins produced, and how were they used in daily commerce? Why are so many different mints attested? These questions have occupied scholars for several generations now. However, Jarnut and Strothmann have added a new perspective: to what extent are these coinages and the associated monetary policy a continuation of late Roman practices, or do they represent something altogether different that can therefore be understood as an expression of a fundamentally altered society that could be termed medieval? ...
On 26 November 2010, around 3000 psychiatrists rose for a minute's silence in the great hall of the International Congress Centrum in Berlin. What they had heard beforehand was deeply impressive and memorable to the audience. Professor Frank Schneider, president of the German Society for Psychiatry, Psychotherapy and Neurology (DGPPN), asked the psychiatric victims of the Nazi era and their relatives for forgiveness, to an extent only few German doctors had done before. ...
In this review, I argue that this textbook edited by BENNETT and CHECKEL is exceptionally valuable in at least four aspects. First, with regards to form, the editors provide a paragon of how an edited volume should look: well-connected articles "speak to" and build on each other. The contributors refer to and grapple with the theoretical framework of the editors who, in turn, give heed to the conclusions of the contributors. Second, the book is packed with examples from research practice. These are not only named but thoroughly discussed and evaluated for their methodological potential in all chapters. Third, the book aims at improving and popularizing process tracing, but does not shy away from systematically considering the potential weaknesses of the approach. Fourth, the book combines and bridges various approaches to (mostly) qualitative methods and still manages to provide abstract and easily accessible standards for making "good" process tracing. As such, it is a must-read for scholars working with qualitative methods. However, BENNETT and CHECKEL struggle with fulfilling their promise of bridging positivist and interpretive approaches, for while they do indeed take the latter into account, their general research framework remains largely unchanged by these considerations. On these grounds, I argue that, especially for scholars in the positivist camp, the book can function as a "how-to" guide for designing and implementing research. Although this may not apply equally to interpretive researchers, the book is still a treasure chest for them, providing countless conceptual clarifications and potential pitfalls of process tracing practice.
The partial faunal reserve of Pama is situated in the province of Kompienga, in the South-East of Burkina Faso, with typical Sudanian savanna vegetation. Adjacent to the Arli National Park and the Pendjari National Park, it is part of the so-called WAP complex, one of the largest wildlife areas in West Africa. Up to now, little has been known about its flora. The present study aimed at reducing this gap in knowledge and represents an important tool for conservation and research. The list of species was compiled from surveys carried out from 2001 to 2004, additional relevé data, and herbarium specimens. We found 450 species, belonging to 244 genera and 73 families. The most species-rich family is Poaceae (83 species), followed by Fabaceae (64), Cyperaceae (24), Rubiaceae (22), Euphorbiaceae (20), Combretaceae (15), Asteraceae (14), Caesalpiniaceae (14), Mimosaceae (12), and Convolvulaceae (11).
The significance of data and Artificial Intelligence (AI) has a profound impact on all industries, presenting both challenges and opportunities. Given its power and relevance, AI has not gone unnoticed in the public affairs sector. The upcoming German federal election in 2025 brings discussions about AI to the forefront, raising questions about the extent to which data will drive the public affairs field and how it will be handled.
In many European countries poverty migration and its impact on the European continent are currently widely discussed topics. Many seem to forget about grave migration problems taking place in Asia, where in Hong Kong, for example, the working and living conditions for approximately 320,000 foreign domestic workers (mostly women) are often intolerable.
Since hyperactivity of the protein kinase DYRK1A is linked to several neurodegenerative disorders, DYRK1A inhibitors have been suggested as potential therapeutics for Down syndrome and Alzheimer’s disease. Most published inhibitors to date suffer from low selectivity against related kinases or from unfavorable physicochemical properties. In order to identify DYRK1A inhibitors with improved properties, a series of new chemicals based on [b]-annulated halogenated indoles were designed, synthesized, and evaluated for biological activity. Analysis of crystal structures revealed a typical type-I binding mode of the new inhibitor 4-chlorocyclohepta[b]indol-10(5H)-one in DYRK1A, exploiting mainly shape complementarity for tight binding. Conversion of the DYRK1A inhibitor 8-chloro-1,2,3,9-tetrahydro-4H-carbazol-4-one into a corresponding Mannich base hydrochloride improved the aqueous solubility but abrogated kinase inhibitory activity.