University Publications
CMOS sensors are the most promising candidates for the Micro-Vertex-Detector (MVD) of the CBM experiment at GSI, as they provide an unprecedented compromise between spatial resolution, low material budget, adequate radiation tolerance and readout speed. To study the integration of these sensors into a detector module, a so-called MVD-demonstrator has been developed. The demonstrator and its in-beam performance will be presented and discussed in this work.
In the EU there are longstanding and ongoing pressures towards a tax that is levied at the EU level to substitute for national contributions. We discuss conditions under which such a transition can make sense, starting from what we call a "decentralization theorem of taxation" that is analogous to Oates' (1972) famous result that, in the absence of spill-over effects and economies of scale, decentralized public good provision weakly dominates central provision. We then drop assumptions that turn out to be unnecessary for this result. While spill-over effects of taxation may call for central rules for taxation, as long as spill-over effects do not depend on the intra-regional distribution of the tax burden, decentralized taxation plus tax coordination is found to be superior to a union-wide tax.
Organ-on-a-chip technology has the potential to accelerate pharmaceutical drug development, improve the clinical translation of basic research, and provide personalized intervention strategies. In the last decade, big pharma has engaged in many academic research cooperations to develop organ-on-a-chip systems for future drug discoveries. Although most organ-on-a-chip systems present proof-of-concept studies, miniaturized organ systems still need to demonstrate translational relevance and predictive power in clinical and pharmaceutical settings. This review explores whether microfluidic technology succeeded in paving the way for developing physiologically relevant human in vitro models for pharmacology and toxicology in biomedical research within the last decade. Individual organ-on-a-chip systems are discussed, focusing on relevant applications and highlighting their ability to tackle current challenges in pharmacological research.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
Background: Modulation of cortical excitability by transcranial magnetic stimulation (TMS) is used for investigating human brain functions. A common observation is the high variability of long-term depression (LTD)-like changes in human (motor) cortex excitability. This study aimed at analyzing the response subgroup distribution after paired continuous theta burst stimulation (cTBS) as a basis for subject selection.
Methods: The effects of paired cTBS using 80% active motor threshold (AMT) in 31 healthy volunteers were assessed at the primary motor cortex (M1) corresponding to the representation of the first dorsal interosseous (FDI) muscle of the left hand, before and up to 50 min after plasticity induction. The changes in motor evoked potentials (MEPs) were analyzed using machine-learning derived methods implemented as Gaussian mixture modeling (GMM) and computed ABC analysis.
Results: The probability density distribution of the MEP changes from baseline was tri-modal, showing a clear separation at 80.9%. The n = 6 subjects displaying at least this degree of LTD-like changes were classified as responders. By contrast, n = 7 subjects displayed a paradoxical response with an increase in MEP. Reassessment using ABC analysis as an alternative approach identified the same n = 6 subjects as a distinct category.
Conclusion: Depressive effects of paired cTBS using 80% AMT endure for at least 50 min, but only in a small subgroup of healthy subjects. Hence, plasticity induction by paired cTBS might not reflect a general mechanism in human motor cortex excitability. A mathematically supported criterion is proposed to select responders for enrolment in assessments of human brain functional networks using virtual brain lesions.
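A minimal sketch of the subgroup-detection step described above, assuming hypothetical placeholder data and scikit-learn's GaussianMixture in place of the authors' implementation; the 80.9% separation value is taken from the abstract, everything else is illustrative.

```python
# Sketch (not the authors' code): fit a three-component Gaussian mixture to
# per-subject MEP changes from baseline (in % of baseline) and read off the
# component structure. The data vector is a random placeholder, not study data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
mep_change = rng.normal(loc=100, scale=30, size=31).reshape(-1, 1)  # placeholder, n = 31

gmm = GaussianMixture(n_components=3, random_state=0).fit(mep_change)
print("component means (% of baseline):", np.sort(gmm.means_.ravel()))

# Responders would be the subjects falling below the separation value
# reported in the abstract (~80.9% of baseline).
responders = mep_change.ravel() < 80.9
print("n responders:", responders.sum())
```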
Based on accumulating evidence of a role of lipid signaling in many physiological and pathophysiological processes including psychiatric diseases, the present data-driven analysis was designed to gather information needed to develop a prospective biomarker, using a targeted lipidomics approach covering different lipid mediators. Using unsupervised methods of data structure detection, implemented as hierarchical clustering, emergent self-organizing maps of neuronal networks, and principal component analysis, a cluster structure was found in the input data space comprising plasma concentrations of d = 35 different lipid markers of various classes acquired in n = 94 subjects with the clinical diagnoses depression, bipolar disorder, ADHD, dementia, or in healthy controls. The structure separated patients with dementia from the other clinical groups, indicating that dementia is associated with a distinct lipid mediator plasma concentration pattern possibly providing a basis for a future biomarker. This hypothesis was subsequently assessed using supervised machine-learning methods, implemented as random forests or principal component analysis followed by computed ABC analysis for feature selection, and as random forests, k-nearest neighbors, support vector machines, multilayer perceptrons, and naïve Bayesian classifiers to estimate whether the selected lipid mediators provide sufficient information to establish the diagnosis of dementia at a higher accuracy than by guessing. This succeeded using a set of d = 7 markers comprising GluCerC16:0, Cer24:0, Cer20:0, Cer16:0, Cer24:1, C16 sphinganine, and LacCerC16:0, at an accuracy of 77%. By contrast, using random lipid markers reduced the diagnostic accuracy to values of 65% or less, whereas training the algorithms with randomly permuted data was followed by complete failure to diagnose dementia, emphasizing that the selected lipid mediators display a particular pattern in this disease, possibly qualifying them as biomarkers.
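A minimal sketch of the supervised step described above, using random-forest importance ranking as a stand-in for the computed ABC analysis and random placeholder data in place of the study's lipidomics measurements.

```python
# Sketch (not the authors' pipeline): rank lipid markers by random-forest
# importance, keep a small subset, and estimate diagnostic accuracy
# (dementia vs. other diagnoses) by cross-validation.
# X and y below are hypothetical placeholders with the study's dimensions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.lognormal(size=(94, 35))     # placeholder for d = 35 lipid markers, n = 94 subjects
y = rng.integers(0, 2, size=94)      # placeholder labels (1 = dementia)

# Feature selection: importance ranking, keeping the top d = 7 markers
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top7 = np.argsort(forest.feature_importances_)[::-1][:7]

# Classification accuracy on the reduced marker set
acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                      X[:, top7], y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```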
The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity via lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33% corresponding to the Gini index of the uniform distribution and, secondly, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.
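For reference, the standard relations that underlie this approach (not the authors' UGini definition itself) are the Gini index as a functional of the Lorenz curve and its closed form for a log-normal distribution.

```latex
% Gini index from the Lorenz curve L(p), and its closed form for a log-normal
% distribution with shape parameter \sigma (\Phi = standard normal CDF).
\[
  G \;=\; 1 - 2\int_{0}^{1} L(p)\,\mathrm{d}p,
  \qquad
  G_{\mathrm{lognormal}} \;=\; 2\,\Phi\!\left(\tfrac{\sigma}{\sqrt{2}}\right) - 1 .
\]
% For the uniform distribution on [0, b], G = 1/3, matching the 33.33%
% centering noted in the abstract.
```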
Persistent and, in particular, neuropathic pain is a major healthcare problem with still insufficient pharmacological treatment options. This triggered research activities aimed at finding analgesics with a novel mechanism of action. Results of these efforts will need to pass through the phases of drug development, in which experimental human pain models are established components e.g. implemented as chemical hyperalgesia induced by capsaicin. We aimed at ranking the various readouts of a human capsaicin–based pain model with respect to the most relevant information about the effects of a potential reference analgesic. In a placebo‐controlled, randomized cross‐over study, seven different pain‐related readouts were acquired in 16 healthy individuals before and after oral administration of 300 mg pregabalin. The sizes of the effect on pain induced by intradermal injection of capsaicin were quantified by calculating Cohen's d. While in four of the seven pain‐related parameters, pregabalin provided a small effect judged by values of Cohen's d exceeding 0.2, an item categorization technique implemented as computed ABC analysis identified the pain intensities in the area of secondary hyperalgesia and of allodynia as the most suitable parameters to quantify the analgesic effects of pregabalin. Results of this study provide further support for the ability of the intradermal capsaicin pain model to show analgesic effects of pregabalin. Results can serve as a basis for the designs of studies where the inclusion of this particular pain model and pregabalin is planned.
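The effect sizes referred to above were quantified as Cohen's d; for reference, the standard pooled-standard-deviation form is given below (the abstract does not state which variant was computed).

```latex
% Standard pooled-SD form of Cohen's d for two conditions with sample sizes
% n_1, n_2, means \bar{x}_1, \bar{x}_2 and variances s_1^2, s_2^2.
\[
  d \;=\; \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{\mathrm{pooled}}},
  \qquad
  s_{\mathrm{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}} .
\]
```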
An easy-to-use model to evaluate conductivities at high and middle latitudes in the height range 70–100 km is presented. It is based on electron density profiles obtained with the EISCAT VHF radar during 11 years and on the neutral atmospheric model MSIS95. The model uses solar zenith angle, geomagnetic activity and season as input parameters. It was mainly constructed to study the properties of Schumann resonances that depend on such conductivity profiles.
A CW RFQ prototype
(2011)
A short RFQ prototype was built for RF tests of high power RFQ structures. We will study thermal effects and determine critical points of the design. RF simulations with CST Microwave Studio and measurements were performed. The cw tests with 20 kW/m RF power and simulations of thermal effects with ALGOR were finished successfully. The optimization of some details of the RF design is now in focus. First results and the status of the project will be presented.
Dual-task paradigms encompass a broad range of approaches to measure cognitive load in instructional settings. As a common characteristic, an additional task is implemented alongside a learning task to capture the individual's unengaged cognitive capacities during the learning process. Measures to determine these capacities are, for instance, reaction times and interval errors on the additional task, while the performance on the learning task is to be maintained. In contrast to retrospectively applied subjective ratings, the continuous assessment within a dual-task paradigm makes it possible to simultaneously monitor changes in the performance related to previously defined tasks. Following Cognitive Load Theory, these changes in performance correspond to cognitive changes related to the establishment of permanently existing knowledge structures. Yet the current state of research indicates a clear lack of standardization of dual-task paradigms across study settings and task procedures. Typically, dual-task designs are adapted uniquely for each study, albeit with some similarities across different settings and task procedures. These similarities range from the type of modality to the frequency used for the additional task. This results in a lack of validity and comparability between studies due to arbitrarily chosen patterns of frequency without a sound scientific base, potentially confounding variables, or undecided adaptation potentials for future studies. In this paper, the lack of validity and comparability between dual-task settings will be presented, the current taxonomies compared, and future steps for better standardization and implementation discussed.
The merchant language of the Georgian Jews deserves scholarly attention for several reasons. The political and social developments of the last fifty years have caused the extinction of this very interesting form of communication, as most Georgian Jews have emigrated to Israel. In a natural interaction, the type of language described in this article can be found very rarely, if at all. Records of this communication have been preserved in various contexts and received different levels of scholarly attention. Our interest concerns the linguistic aspects as well as the classification.
In the following paper we argue that the specific merchant language of Georgian Jews belongs to the pragmatic phenomenon of “very indirect language.” The use of mostly Hebrew lexemes in Georgian conversation leads to an unfounded assumption that the speakers are equally competent in Hebrew and Georgian. It is reported that a high level of linguistic competence in Hebrew does not guarantee understanding of the Jewish merchant language. In the Georgian context, the decisive factors are membership in the professional interest group of merchants and residential membership in the Jewish community. These factors seem to be equivalent, because Jewish members of other professional groups (and those from outside the particular urban residential area) have difficulties in following the language that are similar to those of the Georgian majority. We describe the pragmatic structure of interactions conducted with the help of the merchant language and take into account the purpose of the language’s use or the intention of the speakers. Relevant linguistic examples are analysed and their sociocultural contexts explained.
A critical role for VEGF and VEGFR2 in NMDA receptor synaptic function and fear-related behavior
(2016)
Vascular endothelial growth factor (VEGF) is known to be required for the action of antidepressant therapies, but its impact on brain synaptic function is poorly characterized. Using a combination of electrophysiological, single-molecule imaging and conditional transgenic approaches, we identified the molecular basis of the VEGF effect on synaptic transmission and plasticity. VEGF increases the postsynaptic responses mediated by the N-methyl-d-aspartate type of glutamate receptors (GluNRs) in hippocampal neurons. This is concurrent with the formation of new synapses and with the synaptic recruitment of GluNR expressing the GluN2B subunit (GluNR-2B). VEGF induces a rapid redistribution of GluNR-2B at synaptic sites by increasing the surface dynamics of these receptors within the membrane. Consistently, silencing the expression of the VEGF receptor 2 (VEGFR2) in neural cells impairs hippocampal-dependent synaptic plasticity and consolidation of emotional memory. These findings demonstrate the direct involvement of VEGF signaling in neurons, via VEGFR2, in proper synaptic function. They highlight the potential of VEGF as a key regulator of GluNR synaptic function and suggest a role for VEGF in new therapeutic approaches targeting GluNR in depression.
Review of: Psychology of Retention: Theory, Research and Practice / Melinde Coetzee, Ingrid L. Potgieter and Nadia Ferreira (Eds.), ISBN: 978-3-319-98919-8, Publisher: Springer Nature, 2018, R1600 (South African price)
The Frankfurt Neutron Source at the Stern-Gerlach-Zentrum is driven by a 2 MeV proton linac consisting of a 4-rod radio-frequency quadrupole (RFQ) and an 8-gap IH-DTL structure. The RFQ and IH cavity will be powered by only one radio frequency (RF) amplifier to reduce costs. The RF amplifier of the RFQ-IH combination is coupled into the RFQ. Internal inductive coupling along the axis connects the RFQ with the IH cavity, ensuring the required power transition as well as a fixed phase relation between the two structures. The main acceleration from 120 keV up to 2.03 MeV will be achieved by the RFQ-IH combination at 175 MHz and a total length of 2.3 m. The losses in the RFQ-IH combination are about 200 kW.
This paper examines optimal environmental policy when external financing is costly for firms. We introduce emission externalities and industry equilibrium in the Holmström and Tirole (1997) model of corporate finance. While a cap-and-trade system optimally governs both firms' abatement activities (internal emission margin) and industry size (external emission margin) when firms have sufficient internal funds, external financing constraints introduce a wedge between these two objectives. When a sector is financially constrained in the aggregate, the optimal cap is strictly above the Pigouvian benchmark and emission allowances should be allocated below market prices. When a sector is not financially constrained in the aggregate, a cap that is below the Pigouvian benchmark optimally shifts market share to less polluting firms and, moreover, there should be no "grandfathering" of emission allowances. With financial constraints and heterogeneity across firms or sectors, a uniform policy, such as a single cap-and-trade system, is typically not optimal.
Background: Invasive off- or on-pump cardiac surgery (elective and emergency procedures, excluding transplants) is routinely performed to treat complications of ischaemic heart disease. Randomised controlled trials (RCTs) evaluate the effectiveness of treatments in the setting of cardiac surgery. However, the impact of RCTs is weakened by heterogeneity in outcome measuring and reporting, which hinders comparison across trials. Core outcome sets (COS; a set of outcomes that should be measured and reported, as a minimum, in clinical trials for a specific clinical field) help reduce this problem. In light of the above, we developed a COS for cardiac surgery effectiveness trials.
Methods: Potential core outcomes were identified a priori by analysing data on 371 RCTs of 58,253 patients. We reached consensus on core outcomes in an international three-round eDelphi exercise. Outcomes for which at least 60% of the participants chose the response option "no" and less than 20% chose the response option "yes" were excluded.
Results: Eighty-six participants from 23 different countries, including adult cardiac patients, cardiac surgeons, anaesthesiologists, nursing staff and researchers, contributed to this eDelphi. The panel reached consensus on four core outcomes: 1) Measure of mortality, 2) Measure of quality of life, 3) Measure of hospitalisation and 4) Measure of cerebrovascular complication, to be included in adult cardiac surgery trials.
Conclusion: This study used robust research methodology to develop a minimum core outcome set for clinical trials evaluating the effectiveness of treatments in the setting of cardiac surgery. As a next step, appropriate outcome measurement instruments have to be selected.
Unquestionably, every competent speaker has at some point been in doubt about which form is correct or appropriate and should be used (in the standard language) when faced with two or more almost identical competing variants of words, word forms or sentence and phrase structure (e.g. German "Pizzas/Pizzen/Pizze" 'pizzas', Dutch "de drie mooiste/mooiste drie stranden" 'the three most beautiful/most beautiful three beaches', Swedish "större än jag/mig" 'taller than I/me'). Such linguistic uncertainties or "cases of doubt" (cf. i.a. Klein 2003, 2009, 2018; Müller & Szczepaniak 2017; Schmitt, Szczepaniak & Vieregge 2019; Stark 2019 as well as the useful collections of data of Duden vol. 9, Taaladvies.net, Språkriktighetsboken etc.) systematically occur also in native speakers and do not necessarily coincide with the difficulties of second language learners. In present-day German, most grammatical uncertainties occur in the domains of inflection (nominal plural formation, genitive singular allomorphy of strong masc./neut. nouns, inflectional variation of weak masc. nouns, strong/weak adjectival inflection and comparison forms, strong/weak verb forms, perfect auxiliary selection) and word-formation (linking elements in compounds, separability of complex verbs). As for syntax, there are often doubts in connection with case choice (pseudo-partitive constructions, prepositional case government) and agreement (especially due to coordination or appositional structures). This contribution aims to present a contrastive approach to morphological and syntactic uncertainties in contemporary Germanic languages (mostly German, Dutch, and Swedish) in order to obtain a broader and more fine-grained typology of grammatical instabilities and their causes. As will be discussed, most doubts of competent speakers - a problem also for general linguistic theory - can be attributed to processes of language change in progress, to language or variety contact, to gaps and rule conflicts in the grammar of every language, or to psycholinguistic conditions of language processing. Our main concerns will be the issues of which (kinds of) common or different critical areas there are within Germanic (and, on the other hand, in which areas there are no doubts), which of the established (cross-linguistically valid) explanatory approaches apply to which phenomena and, ultimately, the question of whether the new data reveal further lines of explanation for the empirically observable (standard) variation.
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
Commercialization of consumers’ personal data in the digital economy poses serious, both conceptual and practical, challenges to the traditional approach of European Union (EU) Consumer Law. This article argues that mass-spread, automated, algorithmic decision-making casts doubt on the foundational paradigm of EU consumer law: consent and autonomy. Moreover, it poses threats of discrimination and undermining of consumer privacy. It is argued that the recent legislative reaction by the EU Commission, in the form of the ‘New Deal for Consumers’, was a step in the right direction, but fell short due to its continued reliance on consent, autonomy and failure to adequately protect consumers from indirect discrimination. It is posited that a focus on creating a contracting landscape where the consumer may be properly informed in material respects is required, which in turn necessitates blending the approaches of competition, consumer protection and data protection laws.
A consistent muscle activation strategy underlies crawling and swimming in Caenorhabditis elegans
(2014)
Although undulatory swimming is observed in many organisms, the neuromuscular basis for undulatory movement patterns is not well understood. To better understand the basis for the generation of these movement patterns, we studied muscle activity in the nematode Caenorhabditis elegans. Caenorhabditis elegans exhibits a range of locomotion patterns: in low viscosity fluids the undulation has a wavelength longer than the body and propagates rapidly, while in high viscosity fluids or on agar media the undulatory waves are shorter and slower. Theoretical treatment of observed behaviour has suggested a large change in force–posture relationships at different viscosities, but analysis of bend propagation suggests that short-range proprioceptive feedback is used to control and generate body bends. How muscles could be activated in a way consistent with both these results is unclear. We therefore combined automated worm tracking with calcium imaging to determine muscle activation strategy in a variety of external substrates. Remarkably, we observed that across locomotion patterns spanning a threefold change in wavelength, peak muscle activation occurs approximately 45° (1/8th of a cycle) ahead of peak midline curvature. Although the location of peak force is predicted to vary widely, the activation pattern is consistent with required force in a model incorporating putative length- and velocity-dependence of muscle strength. Furthermore, a linear combination of local curvature and velocity can match the pattern of activation. This suggests that proprioception can enable the worm to swim effectively while working within the limitations of muscle biomechanics and neural control.
Introduction: Encouraged by the change in licensing regulations, practical professional skills have received a higher priority in Germany and are therefore increasingly taught in medical schools. This created the need to standardize the process more and more. On the initiative of the German skills labs, the German Medical Association Committee for practical skills was established and developed a competency-based catalogue of learning objectives, whose origin and structure are described here.
The goal of the catalogue is to define the practical skills in undergraduate medical education and to give the medical schools a rational planning basis for the resources necessary to teach them.
Methods: Building on already existing German catalogues of learning objectives, a multi-iterative condensation process was performed, corresponding to the development of S1 guidelines, in order to obtain broad professional and political support.
Results: 289 different practical learning goals were identified and assigned to twelve different organ systems, with three areas overlapping with other fields of expertise and one area of cross-organ-system skills. Three levels of depth and three different chronological dimensions were assigned, and the objectives were matched with their Swiss and Austrian equivalents.
Discussion: This consensus statement may provide the German faculties with a basis for planning the teaching of practical skills and is an important step towards a national standard of medical learning objectives.
Looking ahead: The consensus statement may have a formative effect on how medical schools teach practical skills and plan the resources accordingly.
Publicly available compound and bioactivity databases provide an essential basis for data-driven applications in life-science research and drug design. By analyzing several bioactivity repositories, we discovered differences in compound and target coverage advocating the combined use of data from multiple sources. Using data from ChEMBL, PubChem, IUPHAR/BPS, BindingDB, and Probes & Drugs, we assembled a consensus dataset focusing on small molecules with bioactivity on human macromolecular targets. This allowed an improved coverage of compound space and targets, and an automated comparison and curation of structural and bioactivity data to reveal potentially erroneous entries and increase confidence. The consensus dataset comprises more than 1.1 million compounds with over 10.9 million bioactivity data points, with annotations on assay type and bioactivity confidence, providing a useful ensemble for computational applications in drug design and chemogenomics.
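A minimal sketch of the kind of cross-source consolidation described above; the column names, the structure key, and the one-log-unit discordance threshold are illustrative assumptions, not the authors' curation rules.

```python
# Sketch (not the authors' pipeline): merge per-source bioactivity tables on a
# structure/target key and flag entries whose reported activities disagree.
import pandas as pd

chembl = pd.DataFrame({"inchikey": ["AAA", "BBB"], "target": ["P1", "P2"],
                       "pchembl": [7.1, 5.0]})
pubchem = pd.DataFrame({"inchikey": ["AAA", "CCC"], "target": ["P1", "P3"],
                        "pchembl": [6.2, 8.4]})

merged = chembl.merge(pubchem, on=["inchikey", "target"],
                      how="outer", suffixes=("_chembl", "_pubchem"))

# Flag potentially erroneous entries: same compound/target pair but activity
# values differing by more than one log unit between sources (NaN -> False).
merged["discordant"] = (merged["pchembl_chembl"] - merged["pchembl_pubchem"]).abs() > 1.0
print(merged)
```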
Ubiquitin fold modifier 1 (UFM1) is a member of the ubiquitin-like protein family. UFM1 undergoes a cascade of enzymatic reactions including activation by UBA5 (E1), transfer to UFC1 (E2) and selective conjugation to a number of target proteins via UFL1 (E3) enzymes. Despite the importance of ufmylation in a variety of cellular processes and its role in the pathogenicity of many human diseases, the molecular mechanisms of the ufmylation cascade remain unclear. In this study we focused on the biophysical and biochemical characterization of the interaction between UBA5 and UFC1. We explored the hypothesis that the unstructured C-terminal region of UBA5 serves as a regulatory region, controlling cellular localization of the elements of the ufmylation cascade and effective interaction between them. We found that the last 20 residues in UBA5 are pivotal for binding to UFC1 and can accelerate the transfer of UFM1 to UFC1. We solved the structure of a complex of UFC1 and a peptide spanning the last 20 residues of UBA5 by NMR spectroscopy. This structure in combination with additional NMR titration and isothermal titration calorimetry experiments revealed the mechanism of interaction and confirmed the importance of the C-terminal unstructured region in UBA5 for the ufmylation cascade.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
Background: The differentiation between Gaucher disease type 3 (GD3) and type 1 is challenging because pathognomonic neurologic symptoms may be subtle and develop at late stages. The ophthalmologist plays a crucial role in identifying the typical impairment of horizontal saccadic eye movements, followed by vertical ones. Little is known about further ocular involvement. The aim of this monocentric cohort study is to comprehensively describe the ophthalmological features of Gaucher disease type 3. We suggest recommendations for a set of useful ophthalmologic investigations for diagnosis and follow up and for saccadometry parameters enabling a correlation to disease severity.
Methods: Sixteen patients with biochemically and genetically diagnosed GD3 completed ophthalmologic examination including optical coherence tomography (OCT), clinical oculomotor assessment and saccadometry by infrared based video-oculography. Saccadic peak velocity, gain and latency were compared to 100 healthy controls, using parametric tests. Correlations between saccadic assessment and clinical parameters were calculated.
Results: Peripapillary subretinal drusen-like deposits with retinal atrophy (2/16), preretinal opacities of the vitreous (4/16) and increased retinal vessel tortuosity (3/16) were found. Oculomotor pathology with clinically slowed saccades was more frequent horizontally (15/16) than vertically (12/16). Saccadometry revealed slowed peak velocity compared to 100 controls (most evident horizontally and downwards). Saccades were delayed and hypometric. Peak velocity (both up- and downwards) correlated best with SARA (scale for the assessment and rating of ataxia), disease duration, mSST (modified Severity Scoring Tool) and reduced IQ. Motility restriction occurred in 8/16 patients affecting horizontal eye movements, while vertical motility restriction was seen less frequently. Impaired abduction presented with esophoria or esotropia, the latter in combination with reduced stereopsis.
Conclusions: Vitreoretinal lesions may occur in 25% of Gaucher type 3 patients, while we additionally observed subretinal lesions with retinal atrophy in advanced disease stages. Vertical saccadic peak velocity seems the most promising "biomarker" for neuropathic manifestation for future longitudinal studies, as it correlates best with other neurologic symptoms. Apart from the well documented abduction deficit in Gaucher type 3 we were able to demonstrate motility impairment in all directions of gaze.
Background: Alterations in the DNA methylation pattern are a hallmark of leukemias and lymphomas. However, most epigenetic studies in hematologic neoplasms (HNs) have focused either on the analysis of few candidate genes or many genes and few HN entities, and comprehensive studies are required. Methodology/Principal Findings: Here, we report for the first time a microarray-based DNA methylation study of 767 genes in 367 HNs diagnosed with 16 of the most representative B-cell (n = 203), T-cell (n = 30), and myeloid (n = 134) neoplasias, as well as 37 samples from different cell types of the hematopoietic system. Using appropriate controls of B-, T-, or myeloid cellular origin, we identified a total of 220 genes hypermethylated in at least one HN entity. In general, promoter hypermethylation was more frequent in lymphoid than in myeloid malignancies, with germinal center mature B-cell lymphomas as well as B- and T-precursor lymphoid neoplasias being the entities with the highest frequency of gene-associated DNA hypermethylation. We also observed a significant correlation between the number of hypermethylated and hypomethylated genes in several mature B-cell neoplasias, but not in precursor B- and T-cell leukemias. Most of the genes becoming hypermethylated contained promoters with high CpG content, and a significant fraction of them are targets of the polycomb repressor complex. Interestingly, T-cell prolymphocytic leukemias show low levels of DNA hypermethylation and a comparatively large number of hypomethylated genes, many of them showing an increased gene expression. Conclusions/Significance: We have characterized the DNA methylation profile of a wide range of different HN entities. As well as identifying genes showing aberrant DNA methylation in certain HN subtypes, we also detected six genes—DBC1, DIO3, FZD9, HS3ST2, MOS, and MYOD1—that were significantly hypermethylated in B-cell, T-cell, and myeloid malignancies. These might therefore play an important role in the development of different HNs.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice nucleating particles (INPs). However, an inter-comparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nucleation research UnIT), we distributed an illite rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. Seventeen measurement methods were involved in the data inter-comparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while ten other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing dataset was evaluated using the ice nucleation active surface-site density (ns) to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers nine orders of magnitude in ns.
Our inter-comparison results revealed a discrepancy between suspension and dry-dispersed particle measurements for this mineral dust. While the agreement was good below ~ −26 °C, the ice nucleation activity, expressed in ns, was smaller for the wet suspended samples and higher for the dry-dispersed aerosol samples between about −26 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −26 and −18 °C is discussed. In general, the seventeen immersion freezing measurement techniques deviate, within the range of about 7 °C in terms of temperature, by three orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency (i.e., ns) of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature-dependence and weak time- and size-dependence of immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra, and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. A multiple exponential distribution fit is expressed as ns(T) = exp(23.82 × exp(−exp(0.16 × (T + 17.49))) + 1.39) based on the specific surface area and ns(T) = exp(25.75 × exp(−exp(0.13 × (T + 17.17))) + 3.34) based on the geometric area (ns and T in m−2 and °C, respectively). These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and, thereby, IN data from future IN instruments.
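The two fits quoted above can be evaluated directly; a minimal Python sketch follows, where the temperature grid is only an illustrative choice within the stated validity range.

```python
# Evaluation of the ns(T) parameterizations quoted above (ns in m^-2, T in °C).
import numpy as np

def ns_ssa(T):
    """Specific-surface-area-based fit: exp(23.82*exp(-exp(0.16*(T+17.49))) + 1.39)."""
    return np.exp(23.82 * np.exp(-np.exp(0.16 * (T + 17.49))) + 1.39)

def ns_geo(T):
    """Geometric-area-based fit: exp(25.75*exp(-exp(0.13*(T+17.17))) + 3.34)."""
    return np.exp(25.75 * np.exp(-np.exp(0.13 * (T + 17.17))) + 3.34)

T = np.arange(-35.0, -11.0, 1.0)   # illustrative grid within roughly -37 °C < T < -11 °C
print(ns_ssa(T))
print(ns_geo(T))
```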
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
Analysis of whole cell lipid extracts of bacteria by means of ultra-performance (UP)LC-MS allows a comprehensive determination of the lipid molecular species present in the respective organism. The data allow conclusions on its metabolic potential as well as the creation of lipid profiles, which visualize the organism's response to changes in internal and external conditions. Herein, we describe: i) a fast reversed phase UPLC-ESI-MS method suitable for detection and determination of individual lipids from whole cell lipid extracts of all polarities ranging from monoacylglycerophosphoethanolamines to TGs; ii) the first overview of a wide range of lipid molecular species in vegetative Myxococcus xanthus DK1622 cells; iii) changes in their relative composition in selected mutants impaired in the biosynthesis of α-hydroxylated FAs, sphingolipids, and ether lipids; and iv) the first report of ceramide phosphoinositols in M. xanthus, a lipid species previously found only in eukaryotes.
Covalent inhibition has become more accepted in the past two decades, as illustrated by the clinical approval of several irreversible inhibitors designed to covalently modify their target. Elucidation of the structure-activity relationship and potency of such inhibitors requires a detailed kinetic evaluation. Here, we elucidate the relationship between the experimental read-out and the underlying inhibitor binding kinetics. Interactive kinetic simulation scripts are employed to highlight the effects of in vitro enzyme activity assay conditions and inhibitor binding mode, thereby showcasing which assumptions and corrections are crucial. Four stepwise protocols to assess the biochemical potency of (ir)reversible covalent enzyme inhibitors targeting a nucleophilic active site residue are included, with accompanying data analysis tailored to the covalent binding mode. Together, this will serve as a guide to make an educated decision regarding the most suitable method to assess covalent inhibition potency.
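For orientation, the kinetic quantities such protocols typically report follow the standard two-step scheme for covalent inhibition; the relations below are the textbook forms, not equations quoted from the protocols themselves.

```latex
% Two-step covalent inhibition: reversible binding (K_I) followed by
% irreversible bond formation (k_inact), giving the observed
% pseudo-first-order inactivation rate k_obs.
\[
  E + I \;\underset{k_{\mathrm{off}}}{\overset{k_{\mathrm{on}}}{\rightleftharpoons}}\; E{\cdot}I
  \;\xrightarrow{\;k_{\mathrm{inact}}\;}\; E{-}I,
  \qquad
  k_{\mathrm{obs}} = \frac{k_{\mathrm{inact}}\,[I]}{K_I + [I]},
\]
% with potency commonly reported as the second-order constant k_inact / K_I.
```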
Apigenin (4′,5,7-trihydroxyflavone) (Api) is an important component of the human diet, being distributed in a wide range of fruits, vegetables and herbs, with the most important sources being chamomile, celery, celeriac and parsley. This study was designed for a comprehensive evaluation of Api as an antiproliferative, proapoptotic, antiangiogenic and immunomodulatory phytocompound. Under the set experimental conditions, Api presents antiproliferative activity against the A375 human melanoma cell line, a G2/M arrest of the cell cycle and cytotoxic events as revealed by the lactate dehydrogenase release. Caspase 3 activity was inversely proportional to the tested Api doses, namely 30 μM and 60 μM. Phenomena of early apoptosis, late apoptosis and necrosis following incubation with Api were detected by Annexin V-PI double staining. The flavone interfered with the mitochondrial respiration by modulating both glycolytic and mitochondrial pathways for ATP production. The metabolic activity of human dendritic cells (DCs) under LPS-activation was clearly attenuated by stimulation with high concentrations of Api. IL-6 and IL-10 secretion was almost completely blocked, while TNF alpha secretion was reduced by about 60%. Api elicited antiangiogenic properties in a dose-dependent manner. Both concentrations of Api influenced tumour cell growth and migration, inducing a limited tumour area inside the application ring, associated with a low number of capillaries.
Translation is an important step in gene expression. The initiation of translation is phylogenetically diverse, since currently five different initiation mechanisms are known. For bacteria the three initiation factors IF1 – IF3 are described in contrast to archaea and eukaryotes, which contain a considerably higher number of initiation factor genes. As eukaryotes and archaea use a non-overlapping set of initiation mechanisms, orthologous proteins of both domains do not necessarily fulfill the same function. The genome of Haloferax volcanii contains 14 annotated genes that encode (subunits of) initiation factors. To gain a comprehensive overview of the importance of these genes, it was attempted to construct single gene deletion mutants of all genes. In 9 cases single deletion mutants were successfully constructed, showing that the respective genes are not essential. In contrast, the genes encoding initiation factors aIF1, aIF2γ, aIF5A, aIF5B, and aIF6 were found to be essential. Factors aIF1A and aIF2β are encoded by two orthologous genes in H. volcanii. Attempts to generate double mutants failed in both cases, indicating that also these factors are essential. A translatome analysis of one of the single aIF2β deletion mutants revealed that the translational efficiency of the second ortholog was enhanced tenfold and thus the two proteins can replace one another. The phenotypes of the single deletion mutants also revealed that the two aIF1As and aIF2βs have redundant but not identical functions. Remarkably, the gene encoding aIF2α, a subunit of aIF2 involved in initiator tRNA binding, could be deleted. However, the mutant had a severe growth defect under all tested conditions. Conditional depletion mutants were generated for the five essential genes. The phenotypes of deletion mutants and conditional depletion mutants were compared to that of the wild-type under various conditions, and growth characteristics are discussed.
In this work the flexibility requirements of a highly renewable European electricity network that has to cover fluctuations of wind and solar power generation on different temporal and spatial scales are studied. Cost optimal ways to do so are analysed that include optimal distribution of the infrastructure, large scale transmission, storage, and dispatchable generators. In order to examine these issues, a model of increasing sophistication is built, first considering different flexibility classes of conventional generation, then adding storage, before finally considering transmission to see the effects of each.
To conclude, in this work it was shown that slowly flexible base load generators can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a European-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies allows the required conventional backup capacity to be reduced further. This highlights the importance of including additional technologies into the energy system that provide flexibility to balance fluctuations caused by the renewable energy sources. These technologies could, for example, be advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
It was demonstrated that a cost optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's system cost. However, this requires a very large transmission grid expansion to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. Therefore, it is very clear that by increasing the pan-European network connectivity, a cost efficient inclusion of renewable energies can be achieved, which is strongly needed to reach current climate change prevention goals.
It was also shown that a similarly cost efficient, highly renewable European electricity system can be achieved that considers a wide range of additional policy constraints and plausible changes of economic parameters.
In this work we present, for the first time, the non-perturbative renormalization for the unpolarized, helicity and transversity quasi-PDFs, in an RI′ scheme. The proposed prescription addresses simultaneously all aspects of renormalization: logarithmic divergences, finite renormalization as well as the linear divergence which is present in the matrix elements of fermion operators with Wilson lines. Furthermore, for the case of the unpolarized quasi-PDF, we describe how to eliminate the unwanted mixing with the twist-3 scalar operator.
We utilize perturbation theory for the one-loop conversion factor that brings the renormalization functions to the MS-scheme at a scale of 2 GeV. We also explain how to improve the estimates on the renormalization functions by eliminating lattice artifacts. The latter can be computed in one-loop perturbation theory and to all orders in the lattice spacing.
We apply the methodology for the renormalization to an ensemble of twisted mass fermions with Nf = 2 + 1 + 1 dynamical quarks, and a pion mass of around 375 MeV.
Plants, fungi and algae are important components of global biodiversity and are fundamental to all ecosystems. They are the basis for human well-being, providing food, materials and medicines. Specimens of all three groups of organisms are accommodated in herbaria, where they are commonly referred to as botanical specimens. The large number of specimens in herbaria provides an ample, permanent and continuously improving knowledge base on these organisms and an indispensable source for the analysis of the distribution of species in space and time critical for current and future research relating to global biodiversity. In order to make full use of this resource, a research infrastructure has to be built that grants comprehensive and free access to the information in herbaria and botanical collections in general. This can be achieved through digitization of the botanical objects and associated data. The botanical research community can count on a long-standing tradition of collaboration among institutions and individuals. It agreed on data standards and standard services even before the advent of computerization and information networking, an example being the Index Herbariorum as a global registry of herbaria helping towards the unique identification of specimens cited in the literature. In the spirit of this collaborative history, 51 representatives from 30 institutions advocate to start the digitization of botanical collections with the overall wall-to-wall digitization of the flat objects stored in German herbaria. Germany has 70 herbaria holding almost 23 million specimens according to a national survey carried out in 2019. 87% of these specimens are not yet digitized. Experiences from other countries like France, the Netherlands, Finland, the US and Australia show that herbaria can be comprehensively and cost-efficiently digitized in a relatively short time due to established workflows and protocols for the high-throughput digitization of flat objects. Most of the herbaria are part of a university (34), fewer belong to municipal museums (10) or state museums (8), six herbaria belong to institutions also supported by federal funds such as Leibniz institutes, and four belong to non-governmental organizations. A common data infrastructure must therefore integrate different kinds of institutions. Making full use of the data gained by digitization requires the set-up of a digital infrastructure for storage, archiving, content indexing and networking as well as standardized access for the scientific use of digital objects. A standards-based portfolio of technical components has already been developed and successfully tested by the Biodiversity Informatics Community over the last two decades, comprising among others access protocols, collection databases, portals, tools for semantic enrichment and annotation, international networking, storage and archiving in accordance with international standards. This was achieved through the funding by national and international programs and initiatives, which also paved the road for the German contribution to the Global Biodiversity Information Facility (GBIF). Herbaria constitute a large part of the German botanical collections that also comprise living collections in botanical gardens and seed banks, DNA- and tissue samples, specimens preserved in fluids or on microscope slides and more. Once the herbaria are digitized, these resources can be integrated, adding to the value of the overall research infrastructure.
The community has agreed on tasks that are shared between the herbaria, as the German GBIF model already successfully demonstrates. We have compiled nine scientific use cases of immediate societal relevance for an integrated infrastructure of botanical collections. They address accelerated biodiversity discovery and research, biomonitoring and conservation planning, biodiversity modelling, the generation of trait information, automated image recognition by artificial intelligence, automated pathogen detection, contextualization by interlinking objects, enabling provenance research, as well as education, outreach and citizen science. We propose to start this initiative now in order to valorize German botanical collections as a vital part of a worldwide biodiversity data pool.
Similar to chloroplast loci, mitochondrial markers are frequently used for genotyping, phylogenetic studies, and population genetics, as they are easily amplified due to their multiple copies per cell. In a recent study, it was revealed that the chloroplast offers little variation for this purpose in central European populations of beech. Thus, it was the aim of this study to elucidate whether mitochondrial sequences might offer an alternative, or whether they are similarly conserved in central Europe. For this purpose, a circular mitochondrial genome sequence from the more than 300-year-old beech reference individual Bhaga from the German National Park Kellerwald-Edersee was assembled using long and short reads and compared to an individual from the Jamy Nature Reserve in Poland and a recently published mitochondrial genome from eastern Germany. The mitochondrial genome of Bhaga was 504,730 bp, while the mitochondrial genomes of the other two individuals were 15 bases shorter, due to seven indel locations, with four having more bases in Bhaga and three locations having one base less in Bhaga. In addition, 19 SNP locations were found, none of which were inside genes. In these SNP locations, 17 bases were different in Bhaga, as compared to the other two genomes, while 2 SNP locations had the same base in Bhaga and the Polish individual. While these figures are slightly higher than for the chloroplast genome, the comparison confirms the low degree of genetic divergence in organelle DNA of beech in central Europe, suggesting the colonisation from a common gene pool after the Weichsel Glaciation. The mitochondrial genome might have limited use for population studies in central Europe, but once mitochondrial genomes from glacial refugia become available, it might be suitable to pinpoint the origin of migration for the re-colonising beech population.
Uncalibrated, semi-invasive continuous monitoring of cardiac index (CI) has recently gained increasing interest. The aim of the present study was to compare the accuracy of CI determination based on arterial waveform analysis with transpulmonary thermodilution. Fifty patients scheduled for elective coronary surgery were studied after induction of anaesthesia and before and after cardiopulmonary bypass (CPB), respectively. Each patient was monitored with a central venous line, the PiCCO system, and the FloTrac/Vigileo system. Measurements included CI derived by transpulmonary thermodilution and by uncalibrated semi-invasive pulse contour analysis. Percentage changes of CI were calculated. There was a moderate but significant correlation between pulse contour CI and thermodilution CI both before (r² = 0.72, P < 0.0001) and after (r² = 0.62, P < 0.0001) CPB, with a percentage error of 31% and 25%, respectively. Changes in pulse contour CI showed a significant correlation with changes in thermodilution CI both before (r² = 0.52, P < 0.0001) and after (r² = 0.67, P < 0.0001) CPB. Our findings demonstrate that the uncalibrated semi-invasive monitoring system reliably measured CI compared with transpulmonary thermodilution in patients undergoing elective coronary surgery. Furthermore, the semi-invasive monitoring device was able to track haemodynamic changes and trends.
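In cardiac output method-comparison studies, the "percentage error" quoted above is conventionally computed as twice the standard deviation of the bias divided by the mean of the reference method (the Critchley and Critchley criterion). Assuming that convention applies here, a minimal Python sketch with invented example values looks as follows.

# Sketch of the Bland-Altman-derived "percentage error" used in cardiac output
# method-comparison studies: 2 * SD(bias) / mean(reference) * 100.
# The CI values below are invented and do not come from the study.
import statistics

def percentage_error(reference, test):
    bias = [t - r for r, t in zip(reference, test)]
    return 2 * statistics.stdev(bias) / statistics.mean(reference) * 100

thermodilution_ci = [2.4, 3.1, 2.8, 3.5, 2.9]   # L/min/m2, hypothetical
pulse_contour_ci  = [2.6, 2.9, 3.0, 3.3, 3.1]
print(f"{percentage_error(thermodilution_ci, pulse_contour_ci):.1f} %")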
Voting advice applications (VAAs) are online tools providing voting advice to their users. This advice is based on the match between the answers of the user and the answers of several political parties to a common questionnaire on political attitudes. To visualize this match, VAAs use a wide array of visualisations, the most popular of which are two-dimensional political maps. These maps show the position of both the political parties and the user in the political landscape, allowing users to understand both their own position and their relation to the political parties. To construct these maps, VAAs require scales that represent the main underlying dimensions of the political space. The correct construction of these scales is therefore important if the VAA aims to provide accurate and helpful voting advice. This paper presents three criteria to assess whether a VAA achieves this aim. To illustrate their usefulness, these three criteria (unidimensionality, reliability and quality) are used to assess the scales in the cross-national EUVox VAA, a VAA designed for the European Parliament elections of 2014. Using techniques from Mokken scaling analysis and categorical principal component analysis to capture these metrics, I find that most scales show low unidimensionality and reliability. Moreover, even though designers can, and sometimes do, use certain techniques to improve their scales, these improvements are rarely enough to overcome all of the problems regarding unidimensionality, reliability and quality. This leaves certain problems for the designers of VAAs and of similar online surveys.
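The paper assesses its scales with Mokken scaling analysis and categorical principal component analysis. As a much simpler stand-in, the sketch below computes Cronbach's alpha, a standard reliability coefficient for attitude scales, on invented Likert-type answers; it illustrates what a scale-reliability statistic operates on, not the method actually used in the paper.

# Sketch of a basic scale-reliability statistic (Cronbach's alpha) for a set of
# questionnaire items that are supposed to form one scale. Data are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of answers belonging to one scale."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-point answers of six users to a four-item scale
answers = np.array([[1, 2, 1, 2], [4, 5, 4, 4], [3, 3, 2, 3],
                    [5, 4, 5, 5], [2, 2, 3, 2], [4, 4, 4, 5]])
print(round(cronbach_alpha(answers), 2))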
Aim: Predicting future changes in species richness in response to climate change is one of the key challenges in biogeography and conservation ecology. Stacked species distribution models (S‐SDMs) are a commonly used tool to predict current and future species richness. Macroecological models (MEMs), regression models with species richness as the response variable, are a less computationally intensive alternative to S‐SDMs. Here, we aim to compare the results of the two model types (S‐SDMs and MEMs), for the first time for more than 14,000 species across multiple taxa globally, and to trace the uncertainty in future predictions back to the input data and modelling approach used.
Location: Global land, excluding Antarctica.
Taxon: Amphibians, birds and mammals.
Methods: We fitted S‐SDMs and MEMs using a consistent set of bioclimatic variables and model algorithms and conducted species richness predictions under current and future conditions. For the latter, we used four general circulation models (GCMs) under two representative concentration pathways (RCP2.6 and RCP6.0). Predicted species richness was compared between S‐SDMs and MEMs and for current conditions also to extent‐of‐occurrence (EOO) species richness patterns. For future predictions, we quantified the variance in predicted species richness patterns explained by the choice of model type, model algorithm and GCM using hierarchical cluster analysis and variance partitioning.
Results: Under current conditions, species richness predictions from MEMs and S‐SDMs were strongly correlated with EOO‐based species richness. However, both model types over‐predicted areas with low and under‐predicted areas with high species richness. Outputs from MEMs and S‐SDMs were also highly correlated among each other under current and future conditions. The variance between future predictions was mostly explained by model type.
Main conclusions: Both model types were able to reproduce EOO‐based patterns in global terrestrial vertebrate richness, but produced less collinear predictions of future species richness. Model type contributed by far the most to the variation among the future species richness predictions, indicating that the two model types should not be used interchangeably. Nevertheless, both model types have their justification: MEMs can also include species with restricted ranges, whereas S‐SDMs are useful for examining potential species‐specific responses.
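The variance-partitioning step described in the Methods can be approximated in a few lines: for each factor (model type, algorithm, GCM), compute the share of total variance in predicted richness explained by that factor's group means. The sketch below uses invented predictions and a simple one-way decomposition per factor; the study itself relies on hierarchical clustering and formal variance partitioning.

# Simplified sketch of attributing variance in future richness predictions to the
# choice of model type and GCM: per factor, variance of the group means divided
# by the total variance (a one-way decomposition). Values are invented.
import pandas as pd

preds = pd.DataFrame({
    "model_type": ["S-SDM", "S-SDM", "MEM", "MEM", "S-SDM", "MEM"],
    "gcm":        ["GCM-A", "GCM-B", "GCM-A", "GCM-B", "GCM-A", "GCM-B"],
    "richness":   [120, 118, 95, 97, 123, 94],   # predicted richness in one grid cell
})

def variance_share(df, factor, value="richness"):
    group_means = df.groupby(factor)[value].transform("mean")
    return group_means.var(ddof=1) / df[value].var(ddof=1)

for factor in ["model_type", "gcm"]:
    print(factor, round(variance_share(preds, factor), 2))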
We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple 'k-best decoding plus dictionary lookup' strategy performs in this context and find that such an approach can significantly outperform baselines such as edit distance, weighted edit distance, and the noisy channel model of Brill and Moore for spelling error correction. We also consider elementary combination techniques for our models, such as language-model-weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.
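The 'k-best candidates plus dictionary lookup' idea can be illustrated with the simplest possible scorer, plain Levenshtein edit distance: rank the words of a dictionary by their distance to the misspelling and keep the k best. The learned string-to-string transformation models evaluated in the paper replace this scorer; the toy lexicon below is invented.

# Minimal sketch of candidate generation for isolated spelling error correction:
# score every dictionary word by edit distance to the misspelling, keep the k best.

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def k_best(word: str, dictionary, k=3):
    return sorted(dictionary, key=lambda w: edit_distance(word, w))[:k]

lexicon = ["dominus", "domus", "donum", "dolus", "bonus"]  # toy word list
print(k_best("domsu", lexicon, k=3))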
A comparison of different APTT-reagents, heparin-sensitivity and detection of mild coagulopathies
(1992)
The activated partial thromboplastin time (aPTT) is widely used to detect coagulation abnormalities or to monitor heparin treatment.
Many commercial aPTT reagents are available, which contain different phospholipid preparations and activators. In the present study, three aPTT reagents (aPTT-D, Instrumentation Laboratory; Neothromtin, Behring; PTTa, Boehringer) were compared using a computerized centrifugal analyzer. One aPTT reagent (Pathromtin, Behring) was tested on a semiautomated coagulometer. Instrument precision was evaluated using aPTT-D as reagent.
Comparative tests were performed on plasma samples of 40 healthy donors, 3 patients with mild von Willebrand's disease (vWd), W patients with haemophilia or subhaemophilia A, 1 patient with subhaemophilia A and vWd, 8 patients treated with subcutaneous injections of unfractionated heparin (UFH) and 14 patients treated with subcutaneous injections of a low molecular weight heparin (LMWH).
aPTT-D was the most sensitive reagent for detecting mild vWd, while Pathromtin detected none of these defects. In patients with haemophilia A and subhaemophilia A, aPTT-D, Neothromtin and PTTa detected the abnormality in nearly all tested samples, while Pathromtin was less sensitive.
Patients treated with subcutaneously applied UFH or LMWH often had a prolonged aPTT, especially when aPTT-D and Neothromtin were used as reagents.
The single nucleotide polymorphism 118A>G of the human mu-opioid receptor gene OPRM1, which leads to an exchange of the amino acid asparagine (N) for aspartic acid (D) at position 40 of the extracellular receptor region, alters the in vivo effects of opioids to different degrees in pain-processing brain regions. The most pronounced N40D effects were found in brain regions involved in the sensory processing of pain intensity. Using the mu-opioid receptor-specific agonist DAMGO, we analyzed mu-opioid receptor signaling, expression, and binding affinity in human brain tissue sampled postmortem from the secondary somatosensory area (SII) and from the ventral posterior part of the lateral thalamus, two regions involved in the sensory processing and transmission of nociceptive information. We show that the main effect of the N40D mu-opioid receptor variant is a reduction of agonist-induced receptor signaling efficacy. In the SII region of homo- and heterozygous carriers of the variant 118G allele (n = 18), DAMGO was only 62% as efficient (p = 0.002) as in homozygous carriers of the wild-type 118A allele (n = 15). In contrast, the number of [3H]DAMGO binding sites was unaffected. Hence, mu-opioid receptor G-protein coupling in SII of carriers of the 118G variant was only 58% as efficient as in homozygous carriers of the 118A allele (p < 0.001). The thalamus was unaffected by the OPRM1 118A>G SNP. In conclusion, we provide a molecular basis for the reduced clinical effects of opioid analgesics in carriers of the mu-opioid receptor variant N40D.
Background and Aims: Chronic infection with the hepatitis B virus (HBV) is a major health issue worldwide. Recently, single nucleotide polymorphisms (SNPs) within the human leukocyte antigen (HLA)-DP locus were identified to be associated with HBV infection in Asian populations. Most significant associations were observed for the A alleles of HLA-DPA1 rs3077 and HLA-DPB1 rs9277535, which conferred a decreased risk for HBV infection. We assessed the implications of these variants for HBV infection in Caucasians.
Methods: Two HLA-DP gene variants (rs3077 and rs9277535) were analyzed for associations with persistent HBV infection and with different clinical outcomes, i.e., inactive HBsAg carrier status versus progressive chronic HBV (CHB) infection in Caucasian patients (n = 201) and HBsAg negative controls (n = 235).
Results: The HLA-DPA1 rs3077 C allele was significantly associated with HBV infection (odds ratio, OR = 5.1, 95% confidence interval, CI: 1.9–13.7; p = 0.00093). However, no significant association was seen for rs3077 with progressive CHB infection versus inactive HBsAg carrier status (OR = 2.7, 95% CI: 0.6–11.1; p = 0.31). In contrast, HLA-DPB1 rs9277535 was not associated with HBV infection in Caucasians (OR = 0.8, 95% CI: 0.4–1.9; p = 1).
Conclusions: A highly significant association of HLA-DPA1 rs3077 with HBV infection was observed in Caucasians. However, as no differentiation between the clinical courses of HBV infection was possible, knowledge of the HLA-DPA1 genotype cannot be translated into personalized anti-HBV therapy approaches.
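The association statistics reported above are odds ratios with 95% confidence intervals, as conventionally derived from a 2x2 table of allele carriage versus infection status using the log-odds (Woolf) method. The Python sketch below shows that calculation with invented counts; it does not reproduce the study's data.

# Sketch of an odds ratio with a Wald/Woolf 95% confidence interval from a 2x2
# table. a, b: carriers among cases/controls; c, d: non-carriers among cases/controls.
# The counts are invented.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

print(odds_ratio_ci(a=150, b=120, c=51, d=115))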
The cell-cell signaling gene CDH13 is associated with a wide spectrum of neuropsychiatric disorders, including attention-deficit/hyperactivity disorder (ADHD), autism, and major depression. CDH13 regulates axonal outgrowth and synapse formation, substantiating its relevance for neurodevelopmental processes. Several studies support the influence of CDH13 on personality traits, behavior, and executive functions. However, evidence for functional effects of common variation in the CDH13 gene in humans is sparse. Therefore, we tested for association of the functional intronic CDH13 SNP rs2199430 with ADHD in a sample of 998 adult patients and 884 healthy controls. The Big Five personality traits were assessed with the NEO-PI-R questionnaire. Assuming that neural correlates of working memory and cognitive response inhibition show genotype-dependent alterations, task performance and electroencephalographic event-related potentials were measured during n-back and continuous performance (Go/NoGo) tasks. The rs2199430 genotype was not associated with adult ADHD at the categorical diagnosis level. However, rs2199430 was significantly associated with agreeableness, with minor G allele homozygotes scoring lower than A allele carriers. Whereas task performance was not affected by genotype, a significant heterosis effect limited to the ADHD group was identified for the n-back task: heterozygotes (AG) exhibited significantly higher N200 amplitudes during both the 1-back and 2-back conditions at the central electrode position Cz. Consequently, common genetic variation in CDH13 is associated with personality traits and affects neural processing during working memory tasks. Thus, CDH13 might contribute to the symptomatic core dysfunctions of social and cognitive impairment in ADHD.
We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in the local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results for some canonical dynamical models (stochastic processes with and without correlations, chaotic processes), complemented by extensive numerical simulations for a range of processes, which indicate that the methodology correctly distinguishes different complex dynamics and outperforms state-of-the-art metrics in several cases. Subsequently, we apply this methodology to real-world problems emerging across several disciplines, including cases in neurobiology, finance and climate science. We conclude that differences between the statistics of local maxima and local minima in time series are highly informative of the complex underlying dynamics, and that a graph-theoretic extraction procedure allows these features to be used for statistical learning purposes.
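The basic objects behind the framework described above are the local maxima and local minima of a time series. The Python sketch below merely extracts these extrema from a toy random walk and compares a crude statistic of the two sets; the combinatorial and graph-theoretic measures developed in the paper are considerably richer.

# Minimal sketch: locate local maxima and minima of a series and compare the two
# sets. This only illustrates the extraction of local extrema, not the paper's method.
import numpy as np

def local_extrema(x: np.ndarray):
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    return np.asarray(maxima), np.asarray(minima)

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=1000))               # toy random-walk series
max_idx, min_idx = local_extrema(series)
print(len(max_idx), len(min_idx))                       # counts of local maxima/minima
print(series[max_idx].mean() - series[min_idx].mean())  # crude asymmetry measure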
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model, we find that the nominal-to-real transformation is not rejected for the money relation, whereas the wealth relation cannot be expressed in real terms.
There is increasing interest in incorporating significant citizen participation into the law-making process by developing the use of the internet in the public sphere. However, no well-accepted e-participation model has prevailed. This article argues that, to be successful, such efforts require critical reflection on legal theory as well as further institutional construction based on that theoretical reflection.
Contemporary dominant legal theories adopt too strong an internal legal point of view to empower informal social normative development on the internet. Regardless of whether we see the law as a body of rules or of principles, the social aspect remains part of people's background and attracts little attention. This article advocates the procedural legal paradigm advanced by Jürgen Habermas as an important breakthrough in this regard.
Further, Habermas's co-originality thesis reveals a neglected internal relationship between public autonomy and private autonomy. I believe the co-originality theory provides the essential basis on which a connecting infrastructure between the legal and the social could be developed. In terms of developing the internet to include the public sphere, co-originality can also help us direct the emphasis on the formation of public opinion away from the national legislative level towards the local level, that is, the network of governance.
This article is divided into two parts. The focus of Part One is to reconstruct the co-originality thesis (sections 2 and 3). The paper uses the application of discourse in Habermas's theory of adjudication as an example. It argues that Habermas would be more coherent, in terms of his insistence on real communication in his discourse theory, if he allowed his judges to initiate improved interaction with society. This change is essential if the internal connection between public autonomy and private autonomy in the context of court adjudication is to be truly enabled.
In order to demonstrate such improved co-original relationships, the empowering character of state-made law is instrumental in initiating the mobilization of legal intermediaries, both individual and institutional. A mutually enhancing relationship is thus formed between the formal, official organization and its governance counterpart, aided by its associated 'local' public sphere. Referring to Susan Sturm, the Harris v Forklift Systems Inc. (1993) decision of the Supreme Court of the United States in the field of sexual harassment is used as an example.
Using only one institutional example to illustrate how the co-originality thesis can be improved is not sufficient to rebuild the thesis, but this is as much as can be achieved in this article.
In Part Two, the paper examines, still at the institutional level, how Sturm develops an overlooked sense of impartiality, especially in the derivation of social norms: multi-partiality instead of neutral detachment (section 4). These two ideas should be combined as the criterion of impartiality used to evaluate the legitimacy of the joint decision-making processes of the formal, official organization and the 'local' public sphere.
Sturm's emphasis on the deployment of intermediaries, both institutional and individual, can also enrich discourse theory. Intermediaries are essential for connecting disassociated social networks, especially when communication breaks down because of a lack of data, information or knowledge, or because of disparities in value orientation. Where intermediaries are deployed, further communication is not blocked by such gaps or misunderstandings.
The institutional impact of the newly constructed co-originality thesis is also discussed in Part Two. Landwehr's work on institutional design and assessment for deliberative interaction is discussed first. The article concludes by indicating how the 'local' public sphere can be constructed, for example through e-rulemaking or online dispute resolution, in light of the preceding discussion.
Autism spectrum disorders (ASD) have been associated with sensory hypersensitivity. A recent study reported visual acuity (VA) in ASD in the range reported for birds of prey. The validity of these results was subsequently doubted. This study examined VA in 34 individuals with ASD, 16 with schizophrenia (SCH), and 26 typically developing (TYP) individuals. Participants with ASD did not show higher VA than those with SCH or TYP. There were no substantial correlations of VA with clinical severity in ASD or SCH. This study could not confirm the 'eagle-eyed' acuity hypothesis of ASD, nor find evidence for a connection between VA and clinical phenotypes. Research needs to further address the origins and circumstances associated with altered sensory or perceptual processing in ASD.