Dual-task paradigms encompass a broad range of approaches to measure cognitive load in instructional settings. As a common characteristic, an additional task is implemented alongside a learning task to capture the individual’s unengaged cognitive capacities during the learning process. These capacities are determined, for instance, via reaction times and interval errors on the additional task, while performance on the learning task is to be maintained. In contrast to retrospectively applied subjective ratings, the continuous assessment within a dual-task paradigm allows simultaneous monitoring of changes in performance on previously defined tasks. According to Cognitive Load Theory, these changes in performance correspond to cognitive changes related to the establishment of permanent knowledge structures. Yet the current state of research indicates a clear lack of standardization of dual-task paradigms across study settings and task procedures. Typically, dual-task designs are adapted uniquely for each study, albeit with some similarities across different settings and task procedures. These similarities range from the type of modality to the frequency used for the additional task. This results in a lack of validity and comparability between studies due to arbitrarily chosen frequency patterns without a sound scientific base, potentially confounding variables, and unclear adaptation potential for future studies. In this paper, we present the lack of validity and comparability between dual-task settings, compare current taxonomies, and discuss future steps toward better standardization and implementation.
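The core measurement idea of the paradigm described above can be sketched as a simple dual-task cost on secondary-task reaction times. The task names and numbers below are invented for illustration and are not from any particular study.

```python
# Hypothetical sketch: quantifying cognitive load from secondary-task
# reaction times in a dual-task paradigm. All numbers are illustrative.

def dual_task_cost(rt_single_ms, rt_dual_ms):
    """Relative slowing of the additional task under dual-task conditions.

    A larger cost indicates fewer unengaged cognitive capacities,
    i.e. higher load imposed by the concurrent learning task.
    """
    mean_single = sum(rt_single_ms) / len(rt_single_ms)
    mean_dual = sum(rt_dual_ms) / len(rt_dual_ms)
    return (mean_dual - mean_single) / mean_single

# Illustrative reaction times (ms) on the additional task alone,
# and while a learning task is performed concurrently.
baseline = [310, 295, 320, 305]
concurrent = [420, 445, 430, 425]
print(f"dual-task cost: {dual_task_cost(baseline, concurrent):.2f}")
```

A continuous version of this measure, computed over a sliding window, is what allows load changes to be monitored during learning rather than only retrospectively.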
The merchant language of the Georgian Jews deserves scholarly attention for several reasons. The political and social developments of the last fifty years have caused the extinction of this very interesting form of communication, as most Georgian Jews have emigrated to Israel. In natural interaction, the type of language described in this article can now be found very rarely, if at all. Records of this communication have been preserved in various contexts and have received different levels of scholarly attention. Our interest concerns the linguistic aspects as well as the classification of this language.
In the following paper we argue that the specific merchant language of Georgian Jews belongs to the pragmatic phenomenon of “very indirect language.” The use of mostly Hebrew lexemes in Georgian conversation leads to an unfounded assumption that the speakers are equally competent in Hebrew and Georgian. It is reported that a high level of linguistic competence in Hebrew does not guarantee understanding of the Jewish merchant language. In the Georgian context, the decisive factors are membership in the professional interest group of merchants and residential membership in the Jewish community. These factors seem to be equivalent, because Jewish members of other professional groups (and those from outside the particular urban residential area) have difficulties in following the language that are similar to those of the Georgian majority. We describe the pragmatic structure of interactions conducted with the help of the merchant language and take into account the purpose of the language’s use or the intention of the speakers. Relevant linguistic examples are analysed and their sociocultural contexts explained.
A critical role for VEGF and VEGFR2 in NMDA receptor synaptic function and fear-related behavior
(2016)
Vascular endothelial growth factor (VEGF) is known to be required for the action of antidepressant therapies but its impact on brain synaptic function is poorly characterized. Using a combination of electrophysiological, single-molecule imaging and conditional transgenic approaches, we identified the molecular basis of the VEGF effect on synaptic transmission and plasticity. VEGF increases the postsynaptic responses mediated by the N-methyl-d-aspartate type of glutamate receptors (GluNRs) in hippocampal neurons. This is concurrent with the formation of new synapses and with the synaptic recruitment of GluNR expressing the GluN2B subunit (GluNR-2B). VEGF induces a rapid redistribution of GluNR-2B at synaptic sites by increasing the surface dynamics of these receptors within the membrane. Consistently, silencing the expression of the VEGF receptor 2 (VEGFR2) in neural cells impairs hippocampal-dependent synaptic plasticity and consolidation of emotional memory. These findings demonstrate the direct involvement of neuronal VEGF signaling via VEGFR2 in proper synaptic function. They highlight the potential of VEGF as a key regulator of GluNR synaptic function and suggest a role for VEGF in new therapeutic approaches targeting GluNR in depression.
Review of: Melinde Coetzee, Ingrid L. Potgieter and Nadia Ferreira (Eds.): Psychology of Retention: Theory, Research and Practice. Springer Nature, 2018. ISBN 978-3-319-98919-8, R1600 (price in South Africa)
The Frankfurt Neutron Source at the Stern-Gerlach-Zentrum is driven by a 2 MeV proton linac consisting of a 4-rod radio-frequency quadrupole (RFQ) and an 8-gap IH-DTL structure. The RFQ and the IH cavity will be powered by a single radio-frequency (RF) amplifier to reduce costs. The RF power for the RFQ-IH combination is coupled into the RFQ. Internal inductive coupling along the axis connects the RFQ with the IH cavity, ensuring the required power transfer as well as a fixed phase relation between the two structures. The RFQ-IH combination accelerates the beam from 120 keV up to 2.03 MeV at 175 MHz over a total length of 2.3 m. The losses in the RFQ-IH combination are about 200 kW.
This paper examines optimal environmental policy when external financing is costly for firms. We introduce emission externalities and industry equilibrium in the Holmström and Tirole (1997) model of corporate finance. While a cap-and-trade system optimally governs both firms' abatement activities (internal emission margin) and industry size (external emission margin) when firms have sufficient internal funds, external financing constraints introduce a wedge between these two objectives. When a sector is financially constrained in the aggregate, the optimal cap is strictly above the Pigouvian benchmark and emission allowances should be allocated below market prices. When a sector is not financially constrained in the aggregate, a cap that is below the Pigouvian benchmark optimally shifts market share to less polluting firms and, moreover, there should be no "grandfathering" of emission allowances. With financial constraints and heterogeneity across firms or sectors, a uniform policy, such as a single cap-and-trade system, is typically not optimal.
Background: Invasive off- or on-pump cardiac surgery (elective and emergency procedures, excluding transplants) is routinely performed to treat complications of ischaemic heart disease. Randomised controlled trials (RCTs) evaluate the effectiveness of treatments in the setting of cardiac surgery. However, the impact of RCTs is weakened by heterogeneity in outcome measurement and reporting, which hinders comparison across trials. Core outcome sets (COS; a set of outcomes that should be measured and reported, as a minimum, in clinical trials for a specific clinical field) help reduce this problem. In light of the above, we developed a COS for cardiac surgery effectiveness trials.
Methods: Potential core outcomes were identified a priori by analysing data on 371 RCTs of 58,253 patients. We reached consensus on core outcomes in an international three-round eDelphi exercise. Outcomes for which at least 60% of the participants chose the response option "no" and less than 20% chose the response option "yes" were excluded.
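The exclusion rule stated in the Methods can be sketched as a small filter. The vote counts below are invented for illustration; only the thresholds (at least 60% "no" and less than 20% "yes") come from the abstract.

```python
# Sketch of the eDelphi exclusion rule: an outcome is dropped when at
# least 60% of participants answered "no" AND fewer than 20% answered
# "yes". The outcome names and vote counts are hypothetical.

def is_excluded(votes):
    """votes: dict with counts for 'yes', 'no' and other response options."""
    total = sum(votes.values())
    no_share = votes.get("no", 0) / total
    yes_share = votes.get("yes", 0) / total
    return no_share >= 0.60 and yes_share < 0.20

outcomes = {
    "mortality":          {"yes": 70, "no": 10, "unsure": 6},
    "length of ICU stay": {"yes": 12, "no": 60, "unsure": 14},
}
retained = [name for name, v in outcomes.items() if not is_excluded(v)]
print(retained)
```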
Results: Eighty-six participants from 23 different countries involving adult cardiac patients, cardiac surgeons, anaesthesiologists, nursing staff and researchers contributed to this eDelphi. The panel reached consensus on four core outcomes: 1) Measure of mortality, 2) Measure of quality of life, 3) Measure of hospitalisation and 4) Measure of cerebrovascular complication to be included in adult cardiac surgery trials.
Conclusion: This study used robust research methodology to develop a minimum core outcome set for clinical trials evaluating the effectiveness of treatments in the setting of cardiac surgery. As a next step, appropriate outcome measurement instruments have to be selected.
Undoubtedly, every competent speaker has at some point been in doubt about which form is correct or appropriate and should be used (in the standard language) when faced with two or more nearly identical competing variants of words, word forms or sentence and phrase structure (e.g. German "Pizzas/Pizzen/Pizze" 'pizzas', Dutch "de drie mooiste/mooiste drie stranden" 'the three most beautiful/most beautiful three beaches', Swedish "större än jag/mig" 'taller than I/me'). Such linguistic uncertainties or "cases of doubt" (cf. i.a. Klein 2003, 2009, 2018; Müller & Szczepaniak 2017; Schmitt, Szczepaniak & Vieregge 2019; Stark 2019 as well as the useful collections of data of Duden vol. 9, Taaladvies.net, Språkriktighetsboken etc.) also occur systematically in native speakers and do not necessarily coincide with the difficulties of second language learners. In present-day German, most grammatical uncertainties occur in the domains of inflection (nominal plural formation, genitive singular allomorphy of strong masc./neut. nouns, inflectional variation of weak masc. nouns, strong/weak adjectival inflection and comparison forms, strong/weak verb forms, perfect auxiliary selection) and word-formation (linking elements in compounds, separability of complex verbs). As for syntax, there are often doubts in connection with case choice (pseudo-partitive constructions, prepositional case government) and agreement (especially due to coordination or appositional structures). This contribution aims to present a contrastive approach to morphological and syntactic uncertainties in contemporary Germanic languages (mostly German, Dutch, and Swedish) in order to obtain a broader and more fine-grained typology of grammatical instabilities and their causes.
As will be discussed, most doubts of competent speakers - a problem also for general linguistic theory - can be attributed to processes of language change in progress, to language or variety contact, to gaps and rule conflicts in the grammar of every language or to psycholinguistic conditions of language processing. Our main concerns will be the issues of which (kinds of) common or different critical areas there are within Germanic (and, on the other hand, in which areas there are no doubts), which of the established (cross-linguistically valid) explanatory approaches apply to which phenomena and, ultimately, the question whether the new data reveals further lines of explanation for the empirically observable (standard) variation.
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
Commercialization of consumers’ personal data in the digital economy poses serious conceptual and practical challenges to the traditional approach of European Union (EU) consumer law. This article argues that mass-spread, automated, algorithmic decision-making casts doubt on the foundational paradigm of EU consumer law: consent and autonomy. Moreover, it poses threats of discrimination and undermining of consumer privacy. It is argued that the recent legislative reaction by the EU Commission, in the form of the ‘New Deal for Consumers’, was a step in the right direction, but fell short due to its continued reliance on consent and autonomy and its failure to adequately protect consumers from indirect discrimination. It is posited that a focus on creating a contracting landscape where the consumer may be properly informed in material respects is required, which in turn necessitates blending the approaches of competition, consumer protection and data protection laws.
A consistent muscle activation strategy underlies crawling and swimming in Caenorhabditis elegans
(2014)
Although undulatory swimming is observed in many organisms, the neuromuscular basis for undulatory movement patterns is not well understood. To better understand the basis for the generation of these movement patterns, we studied muscle activity in the nematode Caenorhabditis elegans. Caenorhabditis elegans exhibits a range of locomotion patterns: in low viscosity fluids the undulation has a wavelength longer than the body and propagates rapidly, while in high viscosity fluids or on agar media the undulatory waves are shorter and slower. Theoretical treatment of observed behaviour has suggested a large change in force–posture relationships at different viscosities, but analysis of bend propagation suggests that short-range proprioceptive feedback is used to control and generate body bends. How muscles could be activated in a way consistent with both these results is unclear. We therefore combined automated worm tracking with calcium imaging to determine muscle activation strategy in a variety of external substrates. Remarkably, we observed that across locomotion patterns spanning a threefold change in wavelength, peak muscle activation occurs approximately 45° (1/8th of a cycle) ahead of peak midline curvature. Although the location of peak force is predicted to vary widely, the activation pattern is consistent with required force in a model incorporating putative length- and velocity-dependence of muscle strength. Furthermore, a linear combination of local curvature and velocity can match the pattern of activation. This suggests that proprioception can enable the worm to swim effectively while working within the limitations of muscle biomechanics and neural control.
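The reported 45° (1/8-cycle) phase lead is exactly what a linear combination of curvature and its time derivative produces for sinusoidal bending: for k(t) = sin(wt), the combination c1·k + c2·dk/dt is a phase-advanced sinusoid with lead atan2(c2·w, c1). The sketch below checks this numerically; the coefficients are illustrative and not fitted to the worm data.

```python
import numpy as np

# Toy check of the linear-combination idea: with c1 = 1 and c2 = 1/w,
# activation = sin(wt) + cos(wt) = sqrt(2)*sin(wt + pi/4), i.e. it
# leads curvature by 45 degrees (1/8 of a cycle), matching the lead
# reported for muscle activation relative to midline curvature.
w = 2 * np.pi                                   # 1 Hz undulation
t = np.linspace(0, 1, 10000, endpoint=False)    # one full cycle
curvature = np.sin(w * t)
dcurvature = w * np.cos(w * t)                  # dk/dt
activation = 1.0 * curvature + (1.0 / w) * dcurvature

lead_cycles = t[np.argmax(curvature)] - t[np.argmax(activation)]
print(f"activation leads curvature by {lead_cycles:.3f} cycles")
```

With other weightings of curvature and velocity the lead phase shifts smoothly between 0° and 90°, so a single linear rule can in principle accommodate the range of observed activation patterns.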
Introduction: Encouraged by the change in licensing regulations, practical professional skills have received higher priority in Germany and are therefore increasingly taught in medical schools. This created a growing need for standardization. On the initiative of the German skills labs, the German Medical Association Committee for practical skills was established and developed a competency-based catalogue of learning objectives, whose origin and structure are described here.
The goal of the catalogue is to define the practical skills in undergraduate medical education and to give medical schools a rational planning basis for the resources necessary to teach them.
Methods: Building on existing German catalogues of learning objectives, a multi-iterative process of condensation was performed, corresponding to the development of S1 guidelines, in order to obtain broad professional and political support.
Results: 289 different practical learning objectives were identified and assigned to twelve different organ systems, with three areas overlapping other fields of expertise and one area of skills spanning organ systems. The objectives were further classified by three depth levels and three chronological dimensions, and matched with their Swiss and Austrian equivalents.
Discussion: This consensus statement may provide the German faculties with a basis for planning the teaching of practical skills and is an important step towards a national standard of medical learning objectives.
Looking ahead: The consensus statement may have a formative effect on medical schools in teaching practical skills and planning the corresponding resources.
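The catalogue's structure (objectives assigned to organ systems, depth levels and chronological dimensions) could be represented as a small data model. The field names, level semantics and example entries below are invented for illustration; the abstract only states that three depth levels and three chronological dimensions exist.

```python
from dataclasses import dataclass

# Hypothetical representation of a catalogue entry. The depth and
# dimension encodings are assumptions for the sake of the sketch.

@dataclass
class LearningObjective:
    name: str
    organ_system: str
    depth: int      # 1..3, increasing required mastery (assumed scale)
    dimension: int  # 1..3, chronological stage in the curriculum (assumed)

catalogue = [
    LearningObjective("venipuncture", "across organ systems", 3, 1),
    LearningObjective("auscultation of the heart", "cardiovascular", 2, 1),
    LearningObjective("lumbar puncture", "nervous system", 1, 3),
]

# Planning query: which skills must be mastered at the highest depth?
advanced = [o.name for o in catalogue if o.depth == 3]
print(advanced)
```

Queries of this kind are what give faculties a "rational planning basis": resources can be totalled per organ system, depth level, or curriculum stage.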
Publicly available compound and bioactivity databases provide an essential basis for data-driven applications in life-science research and drug design. By analyzing several bioactivity repositories, we discovered differences in compound and target coverage advocating the combined use of data from multiple sources. Using data from ChEMBL, PubChem, IUPHAR/BPS, BindingDB, and Probes & Drugs, we assembled a consensus dataset focusing on small molecules with bioactivity on human macromolecular targets. This allowed an improved coverage of compound space and targets, and an automated comparison and curation of structural and bioactivity data to reveal potentially erroneous entries and increase confidence. The consensus dataset comprises more than 1.1 million compounds with over 10.9 million bioactivity data points, with annotations on assay type and bioactivity confidence, providing a useful ensemble for computational applications in drug design and chemogenomics.
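The cross-database comparison idea can be sketched generically: collect the activity values reported by several sources for one compound-target pair, take a consensus value, and flag disagreeing pairs. The one-log-unit threshold and the record layout below are assumptions for illustration, not the paper's exact curation procedure.

```python
from statistics import median

# Generic sketch of consensus bioactivity curation across databases.
# Activities are given as pActivity (-log10 of molar concentration).

def consensus(records):
    """records: list of (source, pactivity) for one compound-target pair."""
    values = [v for _, v in records]
    spread = max(values) - min(values)
    return {
        "consensus_pactivity": median(values),
        # flag pairs whose sources disagree by more than 1 log unit
        "confidence": "high" if spread <= 1.0 else "low",
        "n_sources": len(values),
    }

pair = [("ChEMBL", 7.2), ("PubChem", 7.5), ("BindingDB", 7.4)]
print(consensus(pair))
```

A median is used rather than a mean so that a single erroneous entry from one source does not drag the consensus value.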
Ubiquitin fold modifier 1 (UFM1) is a member of the ubiquitin-like protein family. UFM1 undergoes a cascade of enzymatic reactions including activation by UBA5 (E1), transfer to UFC1 (E2) and selective conjugation to a number of target proteins via UFL1 (E3) enzymes. Despite the importance of ufmylation in a variety of cellular processes and its role in the pathogenicity of many human diseases, the molecular mechanisms of the ufmylation cascade remain unclear. In this study, we focused on the biophysical and biochemical characterization of the interaction between UBA5 and UFC1. We explored the hypothesis that the unstructured C-terminal region of UBA5 serves as a regulatory region, controlling cellular localization of the elements of the ufmylation cascade and effective interaction between them. We found that the last 20 residues in UBA5 are pivotal for binding to UFC1 and can accelerate the transfer of UFM1 to UFC1. We solved the structure of a complex of UFC1 and a peptide spanning the last 20 residues of UBA5 by NMR spectroscopy. This structure, in combination with additional NMR titration and isothermal titration calorimetry experiments, revealed the mechanism of interaction and confirmed the importance of the C-terminal unstructured region in UBA5 for the ufmylation cascade.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
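A much-simplified toy model (not the paper's spiking network) illustrates the mechanism described above: mutual inhibition provides hysteresis, slow adaptation drives alternations, and biasing one eye's drive, a stand-in for the inhibitory-plasticity shift caused by monocular deprivation, lengthens that eye's dominance periods. All parameters are invented.

```python
import numpy as np

# Toy rivalry model: the eye with the larger effective drive (input
# minus slow adaptation) dominates; a hysteresis term h stands in for
# mutual inhibition. The dominant eye adapts, the suppressed eye
# recovers, so dominance alternates between the eyes.

def dominance_durations(drive, tau=50.0, h=0.3, steps=5000):
    a = np.zeros(2)        # adaptation level of each eye
    dom = 0                # index of the currently dominant eye
    durations = ([], [])   # dominance durations per eye
    run = 0
    for _ in range(steps):
        sup = 1 - dom
        # switch once the suppressed eye's effective drive wins by h
        if (drive[sup] - a[sup]) - (drive[dom] - a[dom]) > h:
            durations[dom].append(run)
            dom, run = sup, 0
        sup = 1 - dom
        a[dom] += (1.0 - a[dom]) / tau   # dominant eye adapts
        a[sup] += (0.0 - a[sup]) / tau   # suppressed eye recovers
        run += 1
    return [float(np.mean(d)) if d else 0.0 for d in durations]

equal = dominance_durations([1.0, 1.0])
biased = dominance_durations([1.2, 1.0])  # eye 0 boosted after deprivation
print(equal, biased)
```

With equal drives the mean dominance durations are approximately equal; boosting one eye's drive makes its dominance periods markedly longer, qualitatively reproducing the effect of short-term monocular deprivation.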
Background: The differentiation between Gaucher disease type 3 (GD3) and type 1 is challenging because pathognomonic neurologic symptoms may be subtle and develop at late stages. The ophthalmologist plays a crucial role in identifying the typical impairment of horizontal saccadic eye movements, followed by vertical ones. Little is known about further ocular involvement. The aim of this monocentric cohort study is to comprehensively describe the ophthalmological features of Gaucher disease type 3. We suggest recommendations for a set of useful ophthalmologic investigations for diagnosis and follow up and for saccadometry parameters enabling a correlation to disease severity.
Methods: Sixteen patients with biochemically and genetically diagnosed GD3 completed ophthalmologic examination including optical coherence tomography (OCT), clinical oculomotor assessment and saccadometry by infrared based video-oculography. Saccadic peak velocity, gain and latency were compared to 100 healthy controls, using parametric tests. Correlations between saccadic assessment and clinical parameters were calculated.
Results: Peripapillary subretinal drusen-like deposits with retinal atrophy (2/16), preretinal opacities of the vitreous (4/16) and increased retinal vessel tortuosity (3/16) were found. Oculomotor pathology with clinically slowed saccades was more frequent horizontally (15/16) than vertically (12/16). Saccadometry revealed slowed peak velocity compared to 100 controls (most evident horizontally and downwards). Saccades were delayed and hypometric. Best correlating with SARA (scale for the assessment and rating of ataxia), disease duration, mSST (modified Severity Scoring Tool) and reduced IQ was peak velocity (both up- and downwards). Motility restriction occurred in 8/16 patients affecting horizontal eye movements, while vertical motility restriction was seen less frequently. Impaired abduction presented with esophoria or esotropia, the latter in combination with reduced stereopsis.
Conclusions: Vitreoretinal lesions may occur in 25% of Gaucher type 3 patients, while we additionally observed subretinal lesions with retinal atrophy in advanced disease stages. Vertical saccadic peak velocity seems the most promising "biomarker" for neuropathic manifestation for future longitudinal studies, as it correlates best with other neurologic symptoms. Apart from the well documented abduction deficit in Gaucher type 3 we were able to demonstrate motility impairment in all directions of gaze.
Background: Alterations in the DNA methylation pattern are a hallmark of leukemias and lymphomas. However, most epigenetic studies in hematologic neoplasms (HNs) have focused either on the analysis of few candidate genes or many genes and few HN entities, and comprehensive studies are required. Methodology/Principal Findings: Here, we report for the first time a microarray-based DNA methylation study of 767 genes in 367 HNs diagnosed with 16 of the most representative B-cell (n = 203), T-cell (n = 30), and myeloid (n = 134) neoplasias, as well as 37 samples from different cell types of the hematopoietic system. Using appropriate controls of B-, T-, or myeloid cellular origin, we identified a total of 220 genes hypermethylated in at least one HN entity. In general, promoter hypermethylation was more frequent in lymphoid malignancies than in myeloid malignancies, with germinal center mature B-cell lymphomas as well as B- and T-precursor lymphoid neoplasias being the entities with the highest frequency of gene-associated DNA hypermethylation. We also observed a significant correlation between the number of hypermethylated and hypomethylated genes in several mature B-cell neoplasias, but not in precursor B- and T-cell leukemias. Most of the genes becoming hypermethylated contained promoters with high CpG content, and a significant fraction of them are targets of the polycomb repressor complex. Interestingly, T-cell prolymphocytic leukemias show low levels of DNA hypermethylation and a comparatively large number of hypomethylated genes, many of them showing an increased gene expression. Conclusions/Significance: We have characterized the DNA methylation profile of a wide range of different HN entities. As well as identifying genes showing aberrant DNA methylation in certain HN subtypes, we also detected six genes (DBC1, DIO3, FZD9, HS3ST2, MOS, and MYOD1) that were significantly hypermethylated in B-cell, T-cell, and myeloid malignancies.
These might therefore play an important role in the development of different HNs.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice nucleating particles (INPs). However, an inter-comparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nucleation research UnIT), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. Seventeen measurement methods were involved in the data inter-comparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while ten other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing dataset was evaluated using the ice nucleation active surface-site density (ns) to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers nine orders of magnitude in ns.
Our inter-comparison results revealed a discrepancy between suspension and dry-dispersed particle measurements for this mineral dust. While the agreement was good below ~ −26 °C, the ice nucleation activity, expressed in ns, was smaller for the wet suspended samples and higher for the dry-dispersed aerosol samples between about −26 and −18 °C. Only measurement techniques using wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −26 and −18 °C is discussed. In general, the seventeen immersion freezing measurement techniques deviate, within the range of about 7 °C in terms of temperature, by three orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency (i.e., ns) of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. A multiple exponential distribution fit is expressed as ns(T) = exp(23.82 × exp(−exp(0.16 × (T + 17.49))) + 1.39) based on the specific surface area and ns(T) = exp(25.75 × exp(−exp(0.13 × (T + 17.17))) + 3.34) based on the geometric area (ns and T in m−2 and °C, respectively). These new fits, constrained by using an identical reference sample, will help to compare IN measurement methods that are not included in the present study and, thereby, IN data from future IN instruments.
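The two fits quoted in the abstract can be evaluated directly; the sketch below is a transcription of those formulas (T in °C, ns in m⁻², valid for roughly −37 °C < T < −11 °C) and is only meant to illustrate the steep temperature dependence.

```python
import math

# Transcription of the multiple exponential distribution fits for
# illite NX given in the abstract:
#   ns(T) = exp(23.82 * exp(-exp(0.16*(T + 17.49))) + 1.39)  (specific surface area)
#   ns(T) = exp(25.75 * exp(-exp(0.13*(T + 17.17))) + 3.34)  (geometric area)

def ns_specific(t_celsius):
    return math.exp(23.82 * math.exp(-math.exp(0.16 * (t_celsius + 17.49))) + 1.39)

def ns_geometric(t_celsius):
    return math.exp(25.75 * math.exp(-math.exp(0.13 * (t_celsius + 17.17))) + 3.34)

for t in (-15, -20, -25, -30, -35):
    print(f"T = {t:>4} C  ns_specific = {ns_specific(t):9.3e} m^-2  "
          f"ns_geometric = {ns_geometric(t):9.3e} m^-2")
```

Cooling from −15 to −30 °C increases ns by several orders of magnitude, which is the "steep slope" region identified in the spectra.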
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
Analysis of whole cell lipid extracts of bacteria by means of ultra-performance (UP)LC-MS allows a comprehensive determination of the lipid molecular species present in the respective organism. The data allow conclusions on its metabolic potential as well as the creation of lipid profiles, which visualize the organism's response to changes in internal and external conditions. Herein, we describe: i) a fast reversed phase UPLC-ESI-MS method suitable for detection and determination of individual lipids from whole cell lipid extracts of all polarities ranging from monoacylglycerophosphoethanolamines to TGs; ii) the first overview of a wide range of lipid molecular species in vegetative Myxococcus xanthus DK1622 cells; iii) changes in their relative composition in selected mutants impaired in the biosynthesis of α-hydroxylated FAs, sphingolipids, and ether lipids; and iv) the first report of ceramide phosphoinositols in M. xanthus, a lipid species previously found only in eukaryotes.