The putative effects of dark matter are most easily explained by a collisionless fluid on cosmological scales and by Modified Newtonian Dynamics (MOND) on galactic scales. Hybrid MOND dark matter models combine the successes of dark matter on cosmological scales and those of MOND on galactic scales. An example of such a model is superfluid dark matter (SFDM) which postulates that this differing behavior with scale is caused by a single underlying substance with two phases. In this thesis, I highlight successful observational tests of SFDM regarding strong lensing and the Milky Way rotation curve. I also discuss three problems due to the double role of the aforementioned single underlying substance and show how these may be avoided. Finally, I introduce a novel Cherenkov radiation constraint for hybrid MOND dark matter models. This constraint is different from standard modified gravity Cherenkov radiation constraints because such hybrid models allow even non-relativistic objects like stars to emit Cherenkov radiation.
For the academic audience, this paper presents the outcome of a well-identified, large change in the monetary policy rule through the lens of a standard New Keynesian model and asks whether the model properly captures the effects. For policymakers, it presents a cautionary tale of the dismal effects of ignoring basic macroeconomics. The Turkish monetary policy experiment of the past decade, stemming from a belief of the government that higher interest rates cause higher inflation, provides an unfortunately clean exogenous variance in the policy rule. The mandate to keep rates low, and the frequent policymaker turnover orchestrated by the government to enforce this, led to the Taylor principle not being satisfied and eventually a negative coefficient on inflation in the policy rule. In such an environment, was the exchange rate still a random walk? Was inflation anchored? Does the “standard model” suffice to explain the broad contours of macroeconomic outcomes in an emerging economy with large identifying variance in the policy rule? There are no surprises for students of open-economy macroeconomics; the answers are no, no, and yes.
Reproducible annotations
(2022)
This bachelor thesis presents a software solution that implements reproducible annotations in the context of the UIMA framework. This is achieved by automatically containerizing arbitrary analysis engines and annotating every analysis engine configuration in the processed CAS document. Any CAS document created by this solution is self-sufficient and able to reproduce the exact environment under which it was created.
A review of the state-of-the-art software in the field of UIMA reveals that there are many implementations trying to increase reproducibility for a given application relying on UIMA, but no publication trying to increase the reproducibility of UIMA itself. This thesis closes that technological gap and concludes with a thorough analysis, which shows a negligible overhead in memory consumption but a significant performance regression depending on the complexity of the analysis engine examined.
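The provenance idea of recording every analysis engine configuration inside the processed document can be sketched outside UIMA as fingerprinting a configuration and attaching the record to the document's metadata. This is a minimal illustration of the principle; all names and structures below are hypothetical, not the thesis's actual implementation.

```python
import hashlib
import json

def fingerprint_engine(name, version, params):
    """Serialize a (hypothetical) analysis-engine configuration
    deterministically and hash it, so the exact setup can later be
    reconstructed and verified from the stored record."""
    record = {"engine": name, "version": version, "params": params}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**record, "sha256": digest}

# Attach the record to a simplified stand-in for a CAS document,
# mimicking how every engine configuration is annotated in it.
doc = {"text": "Some input text.", "annotations": [], "provenance": []}
doc["provenance"].append(
    fingerprint_engine("TokenizerEngine", "1.2.0", {"language": "en"})
)
```

Because the serialization is deterministic (`sort_keys=True`), two runs with the same configuration produce the same fingerprint, which is what makes the stored document self-describing.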
This paper analyses disclosure duties in insurance contract law in Germany on the basis of questions developed in preparation of the World Congress of the International Insurance Law Association (AIDA) 2018. As risk factors are within the policyholder’s sphere of knowledge, the insurer naturally depends on gaining such knowledge from its policyholder in order to calculate and evaluate premium and risk. Legal approaches as to how the insurer may obtain relevant information and the legal consequences differ in national insurance contract laws around the globe. Taking part in this legal comparison, the paper describes the key elements of such a mechanism from a German perspective and comprises both duties of the policyholder and duties of the insurer.
As for the policyholder, these issues include the difference between a duty to (spontaneously) disclose and a duty not to misrepresent in response to the insurer’s questions, the prerequisites of and remedies for such a duty, the subjective standard of the disclosure duty, and a duty to notify material changes during the contract term. On the other hand, the paper also addresses an insurer’s duty to investigate, a duty to ascertain the policyholder’s understanding of the policy, and a duty to inform during the contract term or after the occurrence of an insured event. In doing so, the paper offers a comprehensive and critical overview of the transfer of knowledge in the insurance (pre-)contractual relationship.
The recognition of pharmacological substances, compounds and proteins is an essential preliminary work for the recognition of relations between chemicals and other biomedically relevant units. In this paper, we describe an approach to Task 1 of the PharmaCoNER Challenge, which involves the recognition of mentions of chemicals and drugs in Spanish medical texts. We train a state-of-the-art BiLSTM-CRF sequence tagger with stacked Pooled Contextualized Embeddings and word and sub-word embeddings using the open-source framework FLAIR. We present a new corpus composed of articles and papers from Spanish health science journals, termed the Spanish Health Corpus, and use it to train domain-specific embeddings, which we incorporate into our model training. We achieve an F1-score of 89.76% using pre-trained embeddings and improve this to an F1-score of 90.52% using the specialized embeddings.
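For reference, the F1-scores reported above combine precision and recall over recognized mentions. A minimal sketch of the metric, with hypothetical counts rather than the authors' evaluation code:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive and false-negative mention counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 900 correct mentions, 60 spurious, 80 missed.
print(round(f1_score(900, 60, 80) * 100, 2))  # → 92.78
```

Equivalently, F1 = 2·TP / (2·TP + FP + FN), which is why a system can trade a little precision for recall without moving the score much.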
Despite the great importance of the Latin language in the past, there are relatively few resources available today to develop modern NLP tools for this language. Therefore, the EvaLatin Shared Task for Lemmatization and Part-of-Speech (POS) tagging was published in the LT4HALA workshop. In our work, we dealt with the second EvaLatin task, that is, POS tagging. Since most of the available Latin word embeddings were trained on either scarce or inaccurate data, we first trained several embeddings on better data. Based on these embeddings, we trained several state-of-the-art taggers and used them as input for an ensemble classifier called LSTMVoter. We achieved the best results for both the cross-genre and the cross-time task (90.64% and 87.00%) without using additional annotated data (closed modality). In the meantime, we have further improved the system and achieved even better results (96.91% on classical, 90.87% on cross-genre, and 87.35% on cross-time).
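The final combination step can be illustrated with a plain majority vote over per-token predictions. Note that LSTMVoter itself is a learned, LSTM-based combiner, so this is a deliberately simplified stand-in for the idea of merging several taggers' outputs:

```python
from collections import Counter

def majority_vote(tag_sequences):
    """Combine per-token predictions of several taggers by majority
    vote. tag_sequences: equal-length tag lists, one per tagger."""
    combined = []
    for token_tags in zip(*tag_sequences):
        # Ties break toward the first tagger's prediction, since
        # Counter preserves insertion order among equal counts.
        tag, _count = Counter(token_tags).most_common(1)[0]
        combined.append(tag)
    return combined

# Three hypothetical taggers disagreeing on individual tokens:
print(majority_vote([
    ["NOUN", "VERB", "ADJ"],
    ["NOUN", "NOUN", "ADJ"],
    ["NOUN", "VERB", "ADV"],
]))  # → ['NOUN', 'VERB', 'ADJ']
```

A learned voter generalizes this by weighting each tagger's reliability per context instead of counting votes uniformly.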
We present new results on nonlocal Dirichlet problems established by means of suitable spectral-theoretic and variational methods, taking care of the nonlocal feature of the operators. We mainly address four problems. First, we estimate the Morse index of radially symmetric sign-changing bounded weak solutions to a semilinear Dirichlet problem involving the fractional Laplacian. In particular, we settle a conjecture due to Bañuelos and Kulczycki on the geometric structure of the second Dirichlet eigenfunctions. Secondly, we study the small-order asymptotics with respect to the parameter s of the Dirichlet eigenvalue problem for the fractional Laplacian. Thirdly, we deal with the logarithmic Schrödinger operator. In particular, we provide an alternative derivation of the singular integral representation corresponding to the associated Fourier symbol and introduce tools and a functional-analytic framework for variational studies. Finally, we study nonlocal operators of order strictly below one. In particular, we investigate interior regularity properties of weak solutions to the associated Poisson problem depending on the regularity of the right-hand side.
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve the access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and have developed a portal to access the semantics of German language biodiversity texts, mainly from the 19th and 20th century. However, to make such a portal work, a couple of methods had to be developed or adapted first. In particular, text-technological information extraction methods were needed, which extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained on learning data. To this end, among others, we gathered the BIOfid text corpus, a cooperatively built resource developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple annotation approach, which takes into account both general and biology-specific classifications, and by this means goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons are drawn, in particular regarding multiple-annotation projects.
Are nearby places (e.g., cities) described by related words? In this article, we transfer this research question in the field of lexical encoding of geographic information onto the level of intertextuality. To this end, we explore Volunteered Geographic Information (VGI) to model texts addressing places at the level of cities or regions with the help of so-called topic networks. This is done to examine how language encodes and networks geographic information on the aboutness level of texts. Our hypothesis is that the networked thematizations of places are similar, regardless of their distances and the underlying communities of authors. To investigate this, we introduce Multiplex Topic Networks (MTN), which we automatically derive from Linguistic Multilayer Networks (LMN) as a novel model, especially of thematic networking in text corpora. Our study shows a Zipfian organization of the thematic universe in which geographical places (especially cities) are located in online communication. We interpret this finding in the context of cognitive maps, a notion which we extend by so-called thematic maps. According to our interpretation of this finding, the organization of thematic maps as part of cognitive maps results from a tendency of authors to generate shareable content that ensures the continued existence of the underlying media. We test our hypothesis by example of special wikis and extracts of Wikipedia. In this way, we come to the conclusion that geographical places, whether close to each other or not, are located in neighboring semantic places that span similar subnetworks in the topic universe.
In the model of randomly perturbed graphs we consider the union of a deterministic graph G with minimum degree αn and the binomial random graph G(n, p). This model was introduced by Bohman, Frieze, and Martin, and for Hamilton cycles their result bridges the gap between Dirac’s theorem and the results by Pósa and Korshunov on the threshold in G(n, p). In this note we extend this result in G ∪ G(n, p) to sparser graphs with α = o(1). More precisely, for any ε > 0 and α: ℕ → (0, 1) we show that a.a.s. G ∪ G(n, β/n) is Hamiltonian, where β = −(6 + ε) log(α). If α > 0 is a fixed constant, this gives the aforementioned result by Bohman, Frieze, and Martin, and if α = O(1/n), the random part G(n, p) alone is sufficient for a Hamilton cycle. We also discuss embeddings of bounded-degree trees and other spanning structures in this model, which lead to interesting questions on almost-spanning embeddings into G(n, p).
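The two boundary cases mentioned can be read off from the form of β; the following is a consistency check of the stated result in its own notation, not an additional claim:

```latex
\beta = -(6+\varepsilon)\log\alpha
\quad\Longrightarrow\quad
\begin{cases}
\alpha = \Theta(1): & \beta = \Theta(1),\ \text{so } G(n,\beta/n) \text{ contributes only } \Theta(n) \text{ random edges,}\\
& \text{matching the Bohman--Frieze--Martin regime},\\[4pt]
\alpha = O(1/n): & \beta = \Theta(\log n),\ \text{so } p = \beta/n = \Theta\!\bigl(\tfrac{\log n}{n}\bigr) \text{ already lies above}\\
& \text{the Hamiltonicity threshold of } G(n,p) \text{ alone}.
\end{cases}
```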
The annotation of texts and other material in the field of digital humanities and Natural Language Processing (NLP) is a common task of research projects. At the same time, the annotation of corpora is certainly the most time- and cost-intensive component in research projects and often requires a high level of expertise depending on the research interest. However, for the annotation of texts, a wide range of tools is available, both for automatic and manual annotation. Since automatic pre-processing methods are not error-free and there is an increasing demand for the generation of training data, particularly with regard to machine learning, suitable annotation tools are required. This paper defines criteria of flexibility and efficiency of complex annotations for the assessment of existing annotation tools. To extend this list of tools, the paper describes TextAnnotator, a browser-based multi-annotation system, which has been developed to perform platform-independent multimodal annotations and annotate complex textual structures. The paper illustrates the current state of development of TextAnnotator and demonstrates its ability to evaluate annotation quality (inter-annotator agreement) at runtime. In addition, it is shown how annotations by different users can be performed simultaneously and collaboratively on the same document from different platforms, using UIMA as the basis for annotation.
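Agreement evaluation of the kind TextAnnotator performs at runtime is commonly based on chance-corrected measures. A minimal two-annotator sketch using Cohen's kappa, as an illustration of the measure rather than TextAnnotator's actual code:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators who labeled
    the same items: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement and p_e the agreement expected by chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1.0:       # both annotators used a single label each
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from two annotators over four items:
print(cohens_kappa(["P", "P", "N", "N"], ["P", "N", "N", "N"]))  # → 0.5
```

Values near 1 indicate agreement well beyond chance; values near 0 mean the annotators agree no more often than random labeling would predict.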
We present a deterministic workflow for genotyping single and double transgenic individuals directly upon nascence that prevents overproduction and reduces wasted animals by two-thirds. In our vector concepts, transgenes are accompanied by two of four clearly distinguishable transformation markers that are embedded in interweaved, but incompatible Lox site pairs. Following Cre-mediated recombination, the genotypes of single and double transgenic individuals were successfully identified by specific marker combinations in 461 scorings.
Drawing on insights from both philosophy and psychology, this paper offers an analysis of hate and distinguishes between its main types. I argue that hate is a sentiment, i.e., a way of regarding the other as evil, which on certain occasions can be acutely felt. On the basis of this definition, I develop a typology which, unlike the main typologies in philosophy and psychology, does not explain hate in terms of patterns of other affective states. By examining the developmental history and intentional structure of hate, I obtain two variables: the replaceability/irreplaceability of the target and the determinacy/indeterminacy of the focus of concern. The combination of these variables generates the four-types model of hate, according to which hate comes in the following kinds: normative, ideological, retributive, and malicious.
We consider algorithms for strategic communication with commitment power between two rational parties, each with its own interests. A party with commitment power commits to a strategy, announces it publicly, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) is able to observe it directly. The second party (R), however, makes a decision by choosing one of n actions whose types are unknown to it. This type determines the possibly differing, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S has commitment power. S commits to a signaling scheme φ and announces it to R; the scheme specifies which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowing φ allows R to update its beliefs about the state of the world based on the signals received. S must take this into account when designing φ, since R will not follow recommendations that benefit S at R's expense. We consider the problem from the perspective of S and describe signaling schemes that guarantee S the largest possible utility.
First, we consider the offline case. Here S learns the complete state of the world and then sends a signal to R. We consider a scenario with a bounded number k ≤ n of signals. With only k signals, S can recommend at most k different actions. For several symmetric classes of instances, we give a polynomial-time algorithm that computes an optimal signaling scheme with k signals.
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We give polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signaling scheme with k signals.
In the online case, the action types are revealed one by one in rounds. After observing the current action, S sends a signal, and R must react immediately by accepting or rejecting the action. The process ends when an action is accepted; otherwise the next action type is revealed, and previous actions can no longer be chosen. As the benchmark for our online signaling schemes we use the best offline signaling scheme.
First, we consider a scenario with independent distributions. We show how an optimal signaling scheme can be computed in polynomial time. However, there are examples in which S, unlike in the offline case, cannot secure any positive value online. We then consider a subset of instances for which a simple signaling scheme guarantees a constant approximation factor, and we show that this factor is optimal.
In addition, we consider 16 scenarios with different levels of information for S and R and different objective functions for S and R, under the assumption that the action types are unknown a priori but are revealed in uniformly random order. For 14 of these cases we describe signaling schemes with constant approximation factors; for the remaining two, no such schemes exist. Moreover, for most cases we show that the stated approximation guarantees are optimal.
In the second part, we consider an online variant of delegated search. Here it is R that has commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For each type, φ specifies the probability with which R accepts it. S therefore tries to find an action whose type is good for S itself and is accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it; only proposed actions can be chosen by R.
For the offline case with identically distributed action types, constant approximation factors relative to an action of optimal value for R are known. We show that in the online case R can in general achieve only a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since this lower bound requires an exponential spread in the values the types have for S, we consider parameterized instances, where the parameters bound the values for S or the ratio between the values for R and S. We show (nearly) optimal logarithmic approximation factors with respect to these parameters, achieved by efficiently computable schemes.
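The value of commitment for S in Bayesian persuasion can be seen in a classic binary toy instance, given here as an illustration of the general setup rather than an instance analyzed in the thesis: the state is "good" with prior 3/10, R accepts only if the posterior probability of "good" is at least 1/2, and S always prefers acceptance.

```python
from fractions import Fraction

# Illustrative binary instance (hypothetical numbers): the state is
# "good" with prior mu; R accepts only if the posterior probability
# of "good" is at least 1/2; S gets utility 1 iff R accepts.
mu = Fraction(3, 10)

# With commitment, S recommends "accept" always in the good state and
# with probability q in the bad state, where q is chosen so that the
# posterior after an "accept" recommendation is exactly 1/2, which
# keeps R willing to follow the recommendation.
q = mu / (1 - mu)              # solves mu / (mu + (1 - mu) * q) == 1/2
p_accept = mu + (1 - mu) * q   # S's expected utility

# For comparison: with no signal R always rejects (prior 3/10 < 1/2),
# and with full revelation R accepts only with probability 3/10.
print(q, p_accept)  # → 3/7 3/5
```

Commitment thus doubles S's acceptance probability from 3/10 to 3/5 in this instance, which is exactly the leverage the signaling schemes in the thesis aim to maximize.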
The field of high-energy heavy-ion research is dedicated to the study of the quark-gluon plasma (QGP). A QGP is a very hot and dense state of matter that filled the universe for a few microseconds shortly after the Big Bang. Under these extreme conditions, the fundamental building blocks of matter, the quarks and gluons, are quasi-free, i.e., not confined in hadrons as they are under normal conditions. Hadrons are particles composed of quarks and gluons. The best-known hadrons are protons and neutrons, the constituents of atomic nuclei, from which, together with electrons, all known matter is built.
To create a QGP in the laboratory, ultrarelativistic heavy ions, such as Pb-208 nuclei, are brought to collision. This is done at CERN, the largest nuclear research center in the world. The particle accelerator that accelerates and collides protons and Pb nuclei is called the Large Hadron Collider (LHC) and, with a circumference of 27 km, is the largest in the world. A single Pb-Pb collision at the LHC produces several thousand particles and antiparticles. The dedicated experiment for studying heavy-ion collisions at the LHC is ALICE. ALICE is equipped with several particle detectors that make it possible to measure and identify thousands of particles simultaneously.
Among the produced particles are also light atomic nuclei, although these are produced only very rarely. The number of produced particles per species depends on their mass: in Pb-Pb collisions at the LHC, the number of produced (anti)nuclei drops exponentially by a factor of 1/330 for each additional nucleon. The yield per species provides information about the production mechanism in the transition from the QGP to the hadron gas. Light (anti)nuclei are of particular interest here because they are comparatively large and their binding energies are up to two orders of magnitude smaller than the temperatures prevailing when the hadrons are created. To this day, it is not understood how light (anti)nuclei can be produced and survive under these conditions.
For this thesis, about 270 million Pb-Pb collisions at a center-of-mass energy of 5.02 TeV, recorded by ALICE in November 2018, were analyzed. The production of (anti)tritons and (anti)alphas was studied. Because of their large mass, both nuclei are produced very rarely, by far not in every collision; the antialpha is the heaviest antinucleus ever measured. Due to this rarity, the size of the available data set is crucial. It was possible to extract the first antialpha transverse-momentum spectrum ever measured. Transverse-momentum spectra were also determined for (anti)tritons and alphas.
The results were compared with theoretical models and other ALICE measurements.
Finally, an outlook is given on the recently completed upgrade of the ALICE Time Projection Chamber (TPC). In the upcoming data-taking period, the LHC will considerably increase its collision rate, making it possible to record more than 100 times as much data as before. The (anti)triton and (anti)alpha analyses described in this thesis will benefit considerably from this. To cope with the much higher collision rates, several detectors, including the TPC, had to be substantially upgraded. During the first two data-taking periods, the TPC was operated with multi-wire proportional chambers, which are far too slow for the planned collision rates. They were therefore replaced in 2019, during a long shutdown of the LHC, by readout chambers based on quadruple GEM (Gas Electron Multiplier) foils, which allow continuous readout of the TPC. Since this is the first large-scale GEM TPC ever built, an extensive research and development (R&D) program was necessary to characterize and test the GEM readout chambers. As part of this R&D program, systematic measurements were performed at the beginning of this doctoral project on a small test TPC with quadruple-GEM readout, built specifically for this purpose. The backflow of the ions produced during gas amplification into the drift volume of the TPC and the energy resolution were measured for different GEM foil types and arrangements. The goal was to achieve the smallest possible ion backflow at the best possible energy resolution. A compromise had to be found, since the two quantities behave in opposite ways. It was nevertheless possible to identify, for several GEM configurations, voltage settings at which both quantities met the desired requirements.
Objectives: To compare dual-energy CT (DECT) and MRI for assessing presence and extent of traumatic bone marrow edema (BME) and fracture line depiction in acute vertebral fractures. Methods: Eighty-eight consecutive patients who underwent dual-source DECT and 3-T MRI of the spine were retrospectively analyzed. Five radiologists assessed all vertebrae for presence and extent of BME and for identification of acute fracture lines on MRI and, after 12 weeks, on DECT series. Additionally, image quality, image noise, and diagnostic confidence for overall diagnosis of acute vertebral fracture were assessed. Quantitative analysis of CT numbers was performed by a sixth radiologist. Two radiologists analyzed MRI and grayscale DECT series to define the reference standard. Results: For assessing BME presence and extent, DECT showed high sensitivity (89% and 84%, respectively) and specificity (98% in both), and similarly high diagnostic confidence compared to MRI (2.30 vs. 2.32; range 0–3) for the detection of BME (p = .72). For evaluating acute fracture lines, MRI achieved high specificity (95%), moderate sensitivity (76%), and a significantly lower diagnostic confidence compared to DECT (2.42 vs. 2.62, range 0–3) (p < .001). A cutoff value of − 0.43 HU provided a sensitivity of 89% and a specificity of 90% for diagnosing BME, with an overall AUC of 0.96. Conclusions: DECT and MRI provide high diagnostic confidence and image quality for assessing acute vertebral fractures. While DECT achieved high overall diagnostic accuracy in the analysis of BME presence and extent, MRI provided moderate sensitivity and lower confidence for evaluating fracture lines.
Evaluation of stability and inactivation methods of SARS-CoV-2 in context of laboratory settings
(2021)
The novel coronavirus SARS-CoV-2 is the causative agent of the acute respiratory disease COVID-19, which has become a global concern due to its rapid spread. Laboratory work with SARS-CoV-2 is assigned to biosafety level 3 (BSL-3) containment. However, certain research applications, in particular in molecular biology, require incomplete denaturation of the proteins, which might cause safety issues when handling contaminated samples. In this study, we evaluated lysis buffers that are commonly used in molecular biology laboratories for their ability to inactivate SARS-CoV-2. In addition, viral stability in cell culture media at 4 °C and on glass and plastic surfaces used in the laboratory environment was analyzed. Furthermore, we evaluated chemical and non-chemical inactivation methods, including heat inactivation, UV-C light, and the addition of ethanol, acetone-methanol, or PFA, which might be used as a subsequent inactivation step in the case of insufficient inactivation. We infected susceptible Caco-2 and Vero cells with pre-treated SARS-CoV-2 and determined the 50% tissue culture infectious dose (TCID50) using crystal violet staining and microscopy. In addition, lysates of infected cells and virus-containing supernatant were subjected to RT-qPCR analysis. We found that guanidine thiocyanate and most of the tested detergent-containing lysis buffers effectively inactivated SARS-CoV-2; however, the M-PER lysis buffer, which contains a proprietary detergent, failed to inactivate the virus. In conclusion, careful evaluation of the inactivation methods used is required, especially for non-denaturing buffers. Additional inactivation steps might be necessary before lysed viral samples are removed from BSL-3.
Background: Myelosuppression is a potential dose-limiting factor in radioligand therapy (RLT). This study aims to investigate occurrence, severity and reversibility of hematotoxic adverse events in patients undergoing RLT with 177Lu-PSMA-617 for metastatic castration-resistant prostate cancer (mCRPC). The contribution of pretreatment risk factors and cumulative treatment activity is taken into account specifically. Methods: RLT was performed in 140 patients receiving a total of 497 cycles. A mean activity of 6.9 ± 1.3 GBq 177Lu-PSMA-617 per cycle was administered, and mean cumulative activity was 24.6 ± 15.9 GBq. Hematological parameters were measured at baseline, prior to each treatment course, 2 to 4 weeks thereafter and throughout follow-up. Toxicity was graded based on Common Terminology Criteria for Adverse Events v5.0. Results: Significant (grade ≥ 3) hematologic adverse events occurred in 13 (9.3%) patients, with anemia in 10 (7.1%), leukopenia in 5 (3.6%) and thrombocytopenia in 6 (4.3%). Hematotoxicity was reversible to grade ≤ 2 through a median follow-up of 8 (IQR 9) months in all but two patients who died from disease progression within less than 3 months after RLT. Myelosuppression was significantly more frequent in patients with pre-existing grade 2 cytopenia (OR: 3.50, 95%CI 1.08–11.32, p = 0.04) or high bone tumor burden (disseminated or diffuse based on PROMISE miTNM, OR: 5.08, 95%CI 1.08–23.86, p = 0.04). Previous taxane-based chemotherapy was associated with an increased incidence of significant hematotoxicity (OR: 4.62, 95%CI 1.23–17.28, p = 0.02), while treatment with 223Ra-dichloride, cumulative RLT treatment activity and activity per cycle were not significantly correlated (p = 0.93, 0.33, 0.29). Conclusion: Hematologic adverse events after RLT have an acceptable overall incidence and are frequently reversible. 
High bone tumor burden, previous taxane-based chemotherapy and pretreatment grade 2 cytopenia may be considered as risk factors for developing clinically relevant myelosuppression, whereas cumulative RLT activity and previous 223Ra-dichloride treatment show no significant contribution to incidence rates.
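The odds ratios reported above follow from standard 2×2 contingency analysis of a risk factor against the outcome. A minimal sketch with hypothetical counts, not the study's data:

```python
def odds_ratio(exposed_event, exposed_no_event,
               unexposed_event, unexposed_no_event):
    """OR = (a/b) / (c/d) for a 2x2 table: rows = risk factor
    present/absent, columns = outcome occurred/did not occur."""
    return (exposed_event / exposed_no_event) / \
           (unexposed_event / unexposed_no_event)

# Hypothetical example: 8 of 20 patients with a risk factor develop
# significant hematotoxicity, versus 5 of 120 patients without it.
print(round(odds_ratio(8, 12, 5, 115), 2))  # → 15.33
```

An OR above 1 indicates higher odds of the adverse event in the exposed group; the confidence intervals quoted in the abstract quantify the uncertainty around such point estimates.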
Purpose: To analyze refractive and topographic changes secondary to Descemet membrane endothelial keratoplasty (DMEK) in pseudophakic eyes with Fuchs’ endothelial dystrophy (FED). Methods: Eighty-seven pseudophakic eyes of 74 patients who underwent subsequent DMEK surgery for corneal endothelial decompensation and associated visual impairment were included. Median post-operative follow-up time was 12 months (range: 3–26 months). Main outcome measures were pre- and post-operative manifest refraction, anterior and posterior corneal astigmatism, simulated keratometry (CASimK) and Q value obtained by Scheimpflug imaging. Secondary outcome measures included corrected distance visual acuity (CDVA), central corneal densitometry, central corneal thickness, corneal volume (CV), anterior chamber volume (ACV) and anterior chamber depth (ACD). Results: After DMEK surgery, mean pre-operative spherical equivalent (± SD) changed from + 0.04 ± 1.73 D to + 0.37 ± 1.30 D post-operatively (p = 0.06). CDVA, proportion of emmetropic eyes, ACV and ACD increased significantly during follow-up. There was also a significant decrease in posterior corneal astigmatism, central corneal densitometry, central corneal thickness and corneal volume over time (p = 0.001). Only anterior corneal astigmatism and simulated keratometry (CASimK) remained fairly stable after DMEK. Conclusion: Despite tendencies toward a hyperopic shift, changes in SE were not significant and refraction remained overall stable in pseudophakic patients undergoing DMEK for FED. Analysis of corneal parameters by Scheimpflug imaging mainly revealed changes in posterior corneal astigmatism pointing out the relevance of posterior corneal profile changes during edema resolution after DMEK.
The integrated stress response (ISR) is a central cellular adaptive program that is activated by diverse stressors, including ER stress, hypoxia and nutrient deprivation, to orchestrate responses via activating transcription factor 4 (ATF4). We hypothesized that ATF4 is essential for the adaptation of human glioblastoma (GB) cells to the conditions of the tumor microenvironment and contributes to resistance against chemotherapy. ATF4 induction in GB cells was modulated pharmacologically and genetically and investigated in the context of temozolomide treatment as well as glucose and oxygen deprivation. The relevance of the ISR was analyzed by cell death and metabolic measurements under conditions approximating aspects of the GB microenvironment. ATF4 protein levels were induced by temozolomide treatment. In line with this, GB cells with suppressed ATF4 expression (ATF4sh) displayed increased cell death and decreased survival after temozolomide treatment. Similar results were observed after treatment with the ISR inhibitor ISRIB. ATF4sh and ISRIB-treated GB cells were sensitized to hypoxia-induced cell death. Our experimental study provides evidence for an important role of ATF4 in the adaptation of human GB cells to the conditions of the tumor microenvironment, characterized by low oxygen and nutrient availability, and in the development of temozolomide resistance. Inhibiting the ISR in GB cells could therefore be a promising therapeutic approach.
The ingestion of microplastics (MPs) is well documented for various animals, though mostly for spherical MPs (beads). The retention time and egestion of MPs have been examined less, especially for irregular MPs (fragments), which predominate in the environment. Furthermore, the accumulation of such particles in the gastrointestinal tract is likely to determine whether adverse effects are induced. To address this, we investigated whether the ingestion and egestion of beads differ from those of fragments in the freshwater shrimp Neocaridina palmata. To this end, organisms were exposed to 20–20,000 particles L−1 of either polyethylene (PE) beads (41 μm and 87 μm) or polyvinyl chloride (PVC) fragments (<63 μm). Moreover, shrimps were exposed to 20,000 particles L−1 of either 41 μm PE beads, 11 μm polystyrene (PS) beads or the PVC fragments for 24 h, followed by a post-exposure period of 4 h to analyze the excretion of particles. To simulate natural conditions, an additional fragment ingestion study was performed in the presence of food. After each treatment, the shrimps were analyzed for retained or excreted particles. Our results demonstrate that the ingestion of beads and fragments was concentration-dependent. Shrimps egested 59% of beads and 18% of fragments within 4 h. Particle shape did not significantly affect MP ingestion or egestion, but size was a relevant factor: medium- and small-sized beads were frequently ingested. Furthermore, fragment uptake decreased slightly under co-exposure to food, but was not significantly different from the treatments without food. Finally, the investigations highlight that the assessment of ingestion and egestion rates can help to clarify whether MPs remain in specific organisms and thereby become a potential health threat.
In our work, we establish the existence of standing waves to a nonlinear Schrödinger equation with inverse-square potential on the half-line. We apply a profile decomposition argument to overcome the difficulty arising from the non-compactness of the setting. We obtain convergent minimizing sequences by comparing the problem to the problem at “infinity” (i.e., the equation without the inverse-square potential). Finally, we establish orbital stability and instability of the standing-wave solutions for mass-subcritical and mass-supercritical nonlinearities, respectively.
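For orientation, the class of problem described above can be written schematically as follows (our notation, assuming the standard half-line model with a power nonlinearity; the abstract does not specify the exact form of the equation or boundary condition):

```latex
i\,\partial_t u + \partial_x^2 u - \frac{c}{x^2}\,u + |u|^{p-1}u = 0,
\qquad x \in (0,\infty),
```

where standing waves are solutions of the form $u(t,x) = e^{i\omega t}\varphi(x)$, and the mass-subcritical/supercritical distinction refers to the size of the power $p$ relative to the $L^2$-critical exponent.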
The QCD phase diagram is studied at finite magnetic field. Our calculations are based on an effective QCD model, the SU(3) Polyakov linear-sigma model (PLSM), in which chiral symmetry is incorporated in the hadron phase while, in the parton phase, the up-, down- and strange-quark degrees of freedom are included, together with Polyakov-loop potentials in the pure gauge limit motivated by various underlying QCD symmetries. Landau quantization and magnetic catalysis are implemented. The response of QCD matter to an external magnetic field, namely the magnetization, magnetic susceptibility and permeability, has been estimated. We conclude that the parton phase has higher values of magnetization, magnetic susceptibility, and permeability relative to the hadron phase. Depending on the contributions to the Landau levels, we conclude that the magnetic field enhances the chiral quark condensates and hence shifts the chiral QCD phase diagram, i.e. the hadron-parton phase transition likely takes place at lower critical temperatures and chemical potentials.
The aim of this study was to quantify and compare the wear rates of premolar (PM) and molar (M) restorations of lithium disilicate ceramic (LS2) and an experimental CAD/CAM polymer (COMP) in cases of complex rehabilitations with changes in the vertical dimension of occlusion (VDO). Twelve patients with severe tooth wear underwent prosthetic rehabilitation, restoring the VDO with antagonistic occlusal coverage restorations made either of LS2 (n = 6 patients, n = 16 posterior restorations/patient; N = 96 restorations/year) or COMP (n = 6 patients; n = 16 posterior restorations/patient; N = 96 restorations/year). Data were obtained by digitalization of plaster casts with a laboratory scanner at annual recalls (350 ± 86 days; 755 ± 92 days; 1102 ± 97 days). Each annual recall dataset of premolar and molar restorations (N = 192) was overlaid individually with the corresponding baseline dataset using an iterative best-fit method. Mean vertical loss of the occlusal contact areas (OCAs) was calculated for each restoration and recall time. For LS2 restorations, the mean wear rate per month was 7.5 ± 3.4 μm (PM) and 7.8 ± 2.0 μm (M) over 1 year, 3.8 ± 1.6 µm (PM) and 4.4 ± 1.5 µm (M) over 2 years, and 2.8 ± 1.3 µm (PM) and 3.4 ± 1.7 µm (M) over 3 years. For COMP restorations, the mean wear rate per month was 15.5 ± 8.9 μm (PM) and 28.5 ± 20.2 μm (M) over 1 year, 9.2 ± 5.9 µm (PM) and 16.7 ± 14.9 µm (M) over 2 years, and 8.6 ± 5.3 µm (PM) and 9.5 ± 8.0 µm (M) over 3 years. Three COMP restorations fractured after two years and were therefore not considered in the 3-year results. The wear rates in the LS2 group showed significant differences between premolar and molar restorations (p = 0.041; p = 0.023; p = 0.045). The wear rates in the COMP group differed significantly between premolars and molars only in the first two years (p < 0.0001; p = 0.007). COMP restorations showed much higher wear rates compared to LS2.
The presented results suggest that the monthly wear rates of both materials decreased with increasing time in situ. On the basis of this limited dataset, both LS2 and COMP restorations show reasonable clinical wear rates after 3 years of follow-up. Wear of COMP restorations was higher; however, the prosthodontic treatment was less invasive. LS2 showed less wear, yet tooth preparation was necessary. Clinicians should weigh the necessary invasiveness of preparation against long-term occlusal stability in patients with worn dentitions.
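The monthly wear rates above follow from dividing the cumulative vertical loss of the occlusal contact areas by the time the restoration has been in situ. A rough arithmetic sketch (the function name, month-length conversion, and the illustrative loss values are ours, not the study's):

```python
def monthly_wear_rate(cumulative_loss_um, days_in_situ):
    """Mean wear rate in µm per month from cumulative vertical loss.

    Uses an average month length of 30.44 days. Because the rate is
    cumulative loss divided by elapsed time, a material that wears
    mostly in its first year shows a falling monthly rate at later
    recalls, as reported in the study.
    """
    return cumulative_loss_um / (days_in_situ / 30.44)

# Illustrative numbers only: ~86 µm of loss at the ~350-day recall ...
year1 = monthly_wear_rate(86.0, 350)
# ... and ~100 µm total at the ~1102-day recall gives a far lower rate
year3 = monthly_wear_rate(100.0, 1102)
```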
The mobile games business is an ever-increasing sub-sector of the entertainment industry. Due to its high profitability but also high risk and competitive atmosphere, game publishers need to develop strategies that allow them to release new products at a high rate, but without compromising the already short lifespan of the firms' existing games. Successful game publishers must enlarge their user base by continually releasing new and entertaining games, while simultaneously motivating the current user base of existing games to remain active for more extended periods. Since the core-component reuse strategy has proven successful in other software products, this study investigates the advantages and drawbacks of this strategy in mobile games. Drawing on the widely accepted Product Life Cycle concept, the study investigates whether the introduction of a new mobile game built with core-components of an existing mobile game curtails the incumbent's product life cycle. Based on real and granular data on the gaming activity of a popular mobile game, the authors find that by promoting multi-homing (i.e., by smartly interlinking the incumbent and new product with each other so that users start consuming both games in parallel), the core-component reuse strategy can prolong the lifespan of the incumbent game.
Human observers can quickly and accurately categorize scenes. This remarkable ability is related to the usage of information at different spatial frequencies (SFs) following a coarse-to-fine pattern: Low SFs, conveying coarse layout information, are thought to be used earlier than high SFs, representing more fine-grained information. Alternatives to this pattern have rarely been considered. Here, we probed all possible SF usage strategies randomly with high resolution in both the SF and time dimensions at two categorization levels. We show that correct basic-level categorizations of indoor scenes are linked to the sampling of relatively high SFs, whereas correct outdoor scene categorizations are predicted by an early use of high SFs and a later use of low SFs (fine-to-coarse pattern of SF usage). Superordinate-level categorizations (indoor vs. outdoor scenes) rely on lower SFs early on, followed by a shift to higher SFs and a subsequent shift back to lower SFs in late stages. In summary, our results show no consistent pattern of SF usage across tasks and only partially replicate the diagnostic SFs found in previous studies. We therefore propose that SF sampling strategies of observers differ with varying stimulus and task characteristics, thus favouring the notion of flexible SF usage.
Machine learning (ML) techniques have evolved rapidly in recent years and have shown impressive capabilities in feature extraction, pattern recognition, and causal inference. There has been increasing attention to applying ML to medical applications, such as medical diagnosis, drug discovery, personalized medicine, and numerous other medical problems. ML-based methods have the advantage of processing vast amounts of data.
With an ever-increasing amount of medical data being collected, and large inter-subject variability in these data, automated data processing pipelines are highly desirable, since relying solely on human processing is laborious, expensive, and error-prone. ML methods have the potential to uncover interesting patterns, unravel correlations between complex features, learn patient-specific representations, and make accurate predictions. Motivated by these promising aspects, in this thesis I present studies in which I implemented deep neural networks for the early diagnosis of epilepsy based on electroencephalography (EEG) data and for brain tumor detection based on magnetic resonance spectroscopy (MRS) data.
The project on the early diagnosis of epilepsy deals with one of the most common neurological disorders, characterized by recurrent unprovoked seizures. Epilepsy can be triggered by a variety of initial brain injuries and manifests itself after a time window called the latent period. During this period, a cascade of structural and functional brain alterations takes place, leading to an increased seizure susceptibility.
The development and extension of brain tissue capable of generating spontaneous seizures is defined as epileptogenesis (EPG).
Detecting the presence of EPG provides a precious opportunity for targeted early medical interventions and can thus slow down or even halt disease progression. To study brain signals in this latent window, animal epilepsy models are used, since it is extremely difficult to obtain such data from human patients. The aim of this study is to discover biomarkers of EPG using animal models and then to find their counterparts in human patients' data. However, the EEG features of EPG are not well understood, and there is no sufficiently large amount of annotated data for ML-based algorithms. To approach this problem, I first utilized the timestamp information of the EEG recorded from an animal epilepsy model in which epilepsy is induced by electrical stimulation. The timestamp serves as a form of weak supervision, i.e., before and after the stimulation. Second, I implemented a deep residual neural network and trained it on a binary classification task to distinguish the EEG signals from these two phases. After obtaining a high discriminative ability on the binary classification task, I proposed to further divide the time span after the stimulation for a three-class classification, aiming to detect possible stages of the progression of the latent EPG phase. I have shown that the model can distinguish EEG signals at different stages of EPG with high accuracy and generalization ability. I have also demonstrated that some of the learned features of the network are clinically relevant.
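The weak-supervision step described above, labeling EEG segments by their position relative to the stimulation timestamp and later subdividing the post-stimulation span into stages, can be sketched roughly as follows (a minimal illustration; the function name, time units, and stage boundaries are hypothetical, not taken from the thesis):

```python
def weak_label(segment_start, stim_time, stage_bounds=None):
    """Assign a weak label to an EEG segment based on its start time.

    Binary task: 0 = pre-stimulation, 1 = post-stimulation.
    Three-class task (if stage_bounds is given): the post-stimulation
    span is further divided into progression stages 1, 2, ...
    All times are in arbitrary units (e.g. hours since recording
    start); the boundaries used below are illustrative only.
    """
    if segment_start < stim_time:
        return 0  # baseline EEG, before epilepsy induction
    if stage_bounds is None:
        return 1  # latent (epileptogenic) phase, binary setting
    # three-class setting: find which post-stimulation stage we are in
    label = 1
    for bound in stage_bounds:
        if segment_start >= bound:
            label += 1
    return label

# Label a day's worth of hourly segments around a stimulation at t = 6
labels = [weak_label(t, stim_time=6, stage_bounds=[12, 18]) for t in range(24)]
```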
In the task of detecting brain tumors based on MRS data, I first proposed applying a deep neural network to the MRS data collected from over 400 patients for a binary classification task. To combat the challenge of noisy labeling, I developed a distillation step to filter out relatively "cleanly" labeled samples. A mixing-based data augmentation method was also implemented to expand the size of the training set. All experiments were conducted with a leave-patient-out scheme to ensure the generalization ability of the model. Averaged across all leave-patient-out cross-validation sets, the proposed method performed on par with human neuroradiologists, while outperforming other baseline methods. I have demonstrated the distillation effect on the MNIST data set with manually introduced label noise, and provided visualizations of the input's influence on the final classification through a class activation map method.
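The leave-patient-out scheme mentioned above holds out every sample of one patient per fold, so that no patient contributes to both training and test data. It might look like this (a generic sketch; the function name and data layout are assumptions, not the thesis's code):

```python
def leave_patient_out_splits(samples):
    """Yield (held_out_patient, train_idx, test_idx), one fold per patient.

    `samples` is a list of (patient_id, features, label) tuples. Each
    fold holds out all samples of exactly one patient, which guards
    against the model memorizing patient-specific signatures instead
    of learning tumor-related features.
    """
    patients = sorted({pid for pid, _, _ in samples})
    for held_out in patients:
        train = [i for i, (pid, _, _) in enumerate(samples) if pid != held_out]
        test = [i for i, (pid, _, _) in enumerate(samples) if pid == held_out]
        yield held_out, train, test

# Toy data: three patients, four spectra in total
data = [("p1", [0.1], 0), ("p1", [0.2], 0), ("p2", [0.9], 1), ("p3", [0.5], 1)]
folds = list(leave_patient_out_splits(data))
```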
Moreover, I have proposed to aggregate information at the subject level, which can provide additional information and insights. This is inspired by the concept of multiple instance learning, where instance-level labels are not required and which is more tolerant to noisy labeling. I have proposed generating data bags consisting of instances from each patient, along with two modules to ensure permutation invariance: an attention module and a pooling module. I have compared the performance of the network in different settings, i.e., with and without the permutation-invariant modules, with and without data augmentation, and single-instance versus multiple-instance learning, and have shown that neural networks equipped with the proposed attention or pooling modules can outperform human experts.
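The permutation-invariant attention pooling described above weights each instance in a patient's bag by a softmax-normalized score and sums the weighted features, so the bag representation does not depend on instance order. A toy pure-Python version with a fixed scoring vector (the real module learns its weights; names here are ours):

```python
import math

def attention_pool(bag, w):
    """Pool a bag of instance feature vectors into one bag vector.

    Attention weights are a softmax over per-instance scores w . h_i,
    so permuting the instances leaves the pooled vector unchanged,
    the property the thesis's attention/pooling modules guarantee.
    """
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in bag]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    alphas = [e / total for e in exps]
    dim = len(bag[0])
    return [sum(a * h[d] for a, h in zip(alphas, bag)) for d in range(dim)]

# A bag of three 2-dimensional instance features for one patient
bag = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
w = [0.5, -0.5]
pooled = attention_pool(bag, w)
shuffled = attention_pool(bag[::-1], w)  # same bag, different order
```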
This paper considers ways in which rulers can respond to, generate, or exploit fear of COVID-19 infection for various ends, and in particular distinguishes between ‘fear-invoking’ and ‘fear-minimising’ strategies. It examines historical precedent for executive overreach in crises and then moves on to look in more detail at some specific areas where fear is being mobilised or generated: in ways that lead to the suspension of civil liberties; that foster discrimination against minorities; and that boost the personality cult of leaders and limit criticism or competition. Finally, in the Appendix, we present empirical work, based on the results of an original survey in Brazil, that provides support for the conjectures in the previous sections. While it is too early to tell what the longer-term outcomes of the changes we note will be, our purpose here is simply to identify some warning signs that threaten the key institutions and values of democracy.
The COVID-19 pandemic has both highlighted and exacerbated global health inequities, leading to calls for responses to COVID that promote social justice and ensure that no one is left behind. One key lesson to be learnt from the pandemic is the critical importance of decolonizing global health and global health research, so that African countries are better placed to address pandemic challenges in contextually relevant ways. This paper argues that, to be successful, programmes of decolonization in complex global health landscapes require a complex three-dimensional approach. Drawing on the broader discourse of political decolonization that has been going on in the African context for over a century, we present a model for unpacking the complex task of decolonization. The resulting three-dimensional approach encompasses hegemonic, epistemic, and commitmental elements.
We live in tragic times. Millions are sheltering in place to avoid exacerbating the Coronavirus (COVID-19) pandemic. How should we respond to such tragedies? This paper argues that the human right to health can help us do so because it inspires human rights advocates, claimants, and those with responsibility for fulfilling the right to try hard to satisfy its claims. That is, the right should, and often does, give rise to what I call the virtue of creative resolve. This resolve embodies a fundamental commitment to finding creative solutions to what appear to be tragic dilemmas. Contra critics, we should not reject the right even if it cannot tell us how to ration scarce health resources. Rather, the right gives us a response to apparent tragedy in motivating us to search for ways of fulfilling everyone’s basic health needs.
The COVID-19 pandemic is affecting countries across the globe. Only a globally coordinated response, however, will enable the containment of the virus. Responding to a request from policy makers for ethics input for a global resource pledging event as a starting point, this paper outlines normative and procedural principles to inform a coordinated global coronavirus response. Highlighting global connections and specific vulnerabilities from the pandemic, and proposing standards for reasonable and accountable decision-making, the ambition of the paper is two-fold: to raise awareness for the justice dimensions in the global response, and to argue for moving health from the periphery to the centre of philosophical debates about social and global justice.
The first case of COVID-19 infection in Africa was recorded in Egypt on 14 February 2020. Following this, several projections of the possible devastating effect that the virus could have on the populations of African countries were made in the Western media. This paper presents evidence for Africa’s successful responses to the COVID-19 pandemic and for the under-reporting or misrepresentation of these successes in Western media. It proceeds to argue for accounting for these successes in terms of Africa’s communitarian way of life and conceptions of self, duty, and rights, and contends that a particular orientation in theorizing on global justice can highlight the injustices inherent in the misrepresentation of these successes and contribute shared perspectives to formulating a framework of values and concepts that would facilitate the implementation of global policy goals for justice. The paper is thus grounded in a rejection of the insular tenets of theorizing prevalent in the global justice debate and of the persistent inclination in Western scholarship to think that theorizing in the African context that draws inspiration from the cultural past has little to contribute to the quest for justice globally. On the contrary, it argues that reflexive critique of cultural history is a necessary source of normative ideals that can foster tolerant coexistence and a cooperative endeavour toward shared conceptions of justice in the contemporary world.
Introduction
(2022)
Child sexual abuse has been discussed thoroughly; however, marginalized groups of victims, such as victims of sexual abuse in early childhood and victims of maternal sexual abuse, have rarely been considered. This essay combines these two relevant perspectives in child protection and aims to point out future directions in the field of child abuse, specifically maternal sexual abuse and its early prevention. In the course of the 7th Haruv International PhD Workshop on Child Maltreatment at the Hebrew University, Jerusalem, in 2019, the topics of maternal sexual abuse and early prevention of child maltreatment in Germany were discussed and intertwined. Problems specific to research on maternal sexual abuse in early childhood and its prevention were identified. Both maternal sexual abuse and sexual abuse in early childhood, i.e. before the age of three, are underreported topics. Society still follows a “friendly mother illusion”, while recent cases in German media as well as research findings indicate that the mother can be a perpetrator of child sexual abuse. Similarly, sexual abuse in early childhood does occur, although it is difficult to recognize, and young children are, given their age and developmental stage, especially vulnerable. They need protective adults in their environment who are aware of sexual abuse in the first years of life. Raising awareness of marginalized or tabooed topics can be a form of prevention. An open dialog in research and practice about the so far marginalized topics of maternal sexual abuse and sexual abuse in early childhood is crucial.
Niemann-Pick type C (NPC) disease, a lysosomal storage disorder caused by defective NPC1/NPC2 function, results in the accumulation of cholesterol and glycosphingolipids in lysosomes of affected organs, such as liver and brain. Moreover, increased mitochondrial cholesterol (mchol) content, impaired mitochondrial function and GSH depletion contribute to NPC disease. However, the underlying mechanism of mchol accumulation in NPC disease remains unknown. As STARD1 is crucial in intramitochondrial cholesterol trafficking and acid ceramidase (ACDase) has been shown to regulate STARD1, we explored the functional relationship between ACDase and STARD1 in NPC disease. Liver and brain of Npc1−/− mice presented a significant increase in mchol levels and STARD1 expression. U18666A, an amphiphilic sterol that inhibits lysosomal cholesterol efflux, increased mchol levels in hepatocytes from Stard1f/f mice but not Stard1ΔHep mice. We dissociate the induction of STARD1 expression from endoplasmic reticulum stress, and establish an inverse relationship between ACDase and STARD1 expression and LRH-1 levels. Hepatocytes from Npc1+/+ mice treated with U18666A exhibited increased mchol accumulation, STARD1 upregulation and decreased ACDase expression, effects that were reversed by cholesterol extraction with 2-hydroxypropyl-β-cyclodextrin. Moreover, transfection of fibroblasts from NPC patients with ACDase decreased STARD1 expression and mchol accumulation, resulting in increased mitochondrial GSH levels, improved mitochondrial functional performance, decreased oxidative stress and protection of NPC fibroblasts against oxidative stress-mediated cell death. Our results demonstrate a cholesterol-dependent inverse relationship between ACDase and STARD1 and provide a novel approach to target the accumulation of cholesterol in mitochondria in NPC disease.
The stress-dependent dynamics of Saccharomyces cerevisiae tRNA and rRNA modification profiles
(2021)
RNAs are key players in the cell, and to fulfil their functions, they are enzymatically modified. These modifications have been found to be dynamic and dependent on internal and external factors, such as stress. In this study we used nucleic acid isotope labeling coupled mass spectrometry (NAIL-MS) to address the question of which mechanisms allow the dynamic adaptation of RNA modifications during stress in the model organism S. cerevisiae. We found that both tRNA and rRNA transcription is stalled in yeast exposed to stressors such as H2O2, NaAsO2 or methyl methanesulfonate (MMS). From the absence of new transcripts, we concluded that most RNA modification profile changes observed to date are linked to changes happening on the pre-existing RNAs. We confirmed these changes, and we followed the fate of the pre-existing tRNAs and rRNAs during stress recovery. For MMS, we found previously described damage products in tRNA, and in addition, we found evidence for direct base methylation damage of 2′O-ribose methylated nucleosides in rRNA. While we found no evidence for increased RNA degradation after MMS exposure, we observed rapid loss of all methylation damages in all studied RNAs. With NAIL-MS we further established the modification speed in new tRNA and 18S and 25S rRNA from unstressed S. cerevisiae. During stress exposure, the placement of modifications was delayed overall. Only the tRNA modifications 1-methyladenosine and pseudouridine were incorporated as fast in stressed cells as in control cells. Similarly, 2′-O-methyladenosine in both 18S and 25S rRNA was unaffected by the stressor, but all other rRNA modifications were incorporated after a delay. In summary, we present mechanistic insights into stress-dependent RNA modification profiling in S. cerevisiae tRNA and rRNA.
The cell-cell signaling gene CDH13 is associated with a wide spectrum of neuropsychiatric disorders, including attention-deficit/hyperactivity disorder (ADHD), autism, and major depression. CDH13 regulates axonal outgrowth and synapse formation, substantiating its relevance for neurodevelopmental processes. Several studies support the influence of CDH13 on personality traits, behavior, and executive functions. However, evidence for functional effects of common gene variation in the CDH13 gene in humans is sparse. Therefore, we tested for association of a functional intronic CDH13 SNP rs2199430 with ADHD in a sample of 998 adult patients and 884 healthy controls. The Big Five personality traits were assessed by the NEO-PI-R questionnaire. Assuming that altered neural correlates of working memory and cognitive response inhibition show genotype-dependent alterations, task performance and electroencephalographic event-related potentials were measured by n-back and continuous performance (Go/NoGo) tasks. The rs2199430 genotype was not associated with adult ADHD on the categorical diagnosis level. However, rs2199430 was significantly associated with agreeableness, with minor G allele homozygotes scoring lower than A allele carriers. Whereas task performance was not affected by genotype, a significant heterosis effect limited to the ADHD group was identified for the n-back task. Heterozygotes (AG) exhibited significantly higher N200 amplitudes during both the 1-back and 2-back condition in the central electrode position Cz. Consequently, the common genetic variation of CDH13 is associated with personality traits and impacts neural processing during working memory tasks. Thus, CDH13 might contribute to symptomatic core dysfunctions of social and cognitive impairment in ADHD.
Growing amounts of genomic data and more efficient assembly tools advance organelle genomics at an unprecedented scale. Genomic resources are increasingly used for phylogenetic analyses of many plant species, but less frequently to investigate within-species variability and phylogeography. In this study, we investigated the genetic diversity of Fagus sylvatica, an important broadleaved tree species of European forests, based on complete chloroplast genomes of 18 individuals sampled widely across the species distribution. Our results confirm the hypothesis of low cpDNA diversity in European beech. The chloroplast genome size was remarkably stable (158,428 ± 37 bp). The polymorphic markers, 12 microsatellites (SSRs), four SNPs and one indel, were found only in the single copy regions, while the inverted repeat regions were monomorphic in terms of both length and sequence, suggesting highly efficient suppression of mutation. The within-individual analysis of polymorphisms revealed more than 9,000 markers, which were proportionally present in gene and non-gene areas. However, an investigation of the frequency of alternate alleles revealed that this diversity likely originated from nuclear-encoded plastome remnants (NUPTs). Phylogeographic and Mantel correlation analyses based on the complete chloroplast genomes exhibited clustering of individuals according to geographic distance in the first distance class, suggesting that the novel markers, and in particular the cpSSRs, could provide a more detailed picture of beech population structure in Central Europe.
Nucleoredoxin (NXN) is a thioredoxin-like redoxin that has been recognized as a redox modulator of WNT signaling. Using a yeast two-hybrid screen, we identified calcium calmodulin kinase 2a, Camk2a, as a prominent prey in a brain library. Camk2a is crucial for nitric oxide dependent processes of neuronal plasticity underlying learning and memory. Therefore, the present study assessed functions of NXN in neuronal NXN-deficient (Nestin-NXN-/-) mice. The NXN-Camk2a interaction was confirmed by coimmunoprecipitation and by colocalization in neuropil and dendritic spines. Functionally, Camk2a activity was reduced in NXN-deficient neurons and restored with recombinant NXN. Proteomics revealed reduced oxidation in the hippocampus of Nestin-NXN-/- mice, including of Camk2a and further synaptic and mitochondrial proteins, which was associated with a reduction of mitochondrial respiration. Nestin-NXN-/- mice were healthy and behaved normally in behavioral tests of anxiety, activity and sociability. They had no cognitive deficits in touchscreen-based learning and memory tasks, but omitted more trials, showing a lower interest in the reward. They also engaged less in rewarding voluntary wheel running and in exploratory behavior in IntelliCages. Accuracy was enhanced owing to the loss of exploration. The data suggested that NXN maintained the oxidative state of Camk2a and thereby its activity. In addition, it supported the oxidation of other synaptic and mitochondrial proteins, and mitochondrial respiration. The loss of NXN-dependent pro-oxidative functions manifested in a loss of exploratory drive and reduced interest in reward in behaving mice.
The effects of the extreme summer drought and heatwave of 2018 in Central Europe on wood properties of oaks at four sandy valley river sites (Quercus robur L.) and one south-exposed schist slope (Qu. petraea (Matt.) Liebl.) in the middle Rhine and lower Main valleys were studied and compared to well-watered trees from a riparian stand. While properties of the 2018 tree rings mostly resembled those of the previous (wet) year, significant decreases in Δ13C, wood density and ring width occurred in 2019 at most drought-prone sites. In the sandy sites, ring widths correlated with previous-year precipitation from June to August over a 20-year period. In organs formed in 2018, in general, decreasing Δ13C values were obtained in the order leaves, twigs, wood and acorns, with the values from acorns often resembling those from 2019-year rings. The observed changes indicated an increased intrinsic water use efficiency and lack of starch reserve formation during the unprecedented hot and dry summer 2018. Qu. petraea revealed quite different values from Qu. robur (lower Δ13C, wider and denser year rings), but qualitatively showed the same reaction to the drought in 2018, except for an enhanced formation of tyloses in recent-year tree rings.
The prevalence and specificity of local protein synthesis during neuronal synaptic plasticity
(2021)
To supply proteins to their vast volume, neurons localize mRNAs and ribosomes in dendrites and axons. While local protein synthesis is required for synaptic plasticity, the abundance and distribution of ribosomes and nascent proteins near synapses remain elusive. Here, we quantified the occurrence of local translation and visualized the range of synapses supplied by nascent proteins during basal and plastic conditions. We detected dendritic ribosomes and nascent proteins at single-molecule resolution using DNA-PAINT and metabolic labeling. Both ribosomes and nascent proteins positively correlated with synapse density. Ribosomes were detected at ~85% of synapses with ~2 translational sites per synapse; ~50% of the nascent protein was detected near synapses. The amount of locally synthesized protein detected at a synapse correlated with its spontaneous Ca2+ activity. A multifold increase in synaptic nascent protein was evident following both local and global plasticity at respective scales, albeit with substantial heterogeneity between neighboring synapses.
The Specialized Information Service Biodiversity Research (BIOfid) has been launched to mobilize valuable biological data, spanning the past 250 years, from printed literature hidden in German libraries. In this project, we annotate German texts converted by OCR from historical scientific literature on the biodiversity of plants, birds, moths and butterflies. Our work enables the automatic extraction of biological information previously buried in the mass of papers and volumes. For this purpose, we generated training data for the tasks of Named Entity Recognition (NER) and Taxa Recognition (TR) in biological documents. We use this data to train a number of leading machine learning tools and create a gold standard for TR in biodiversity literature. More specifically, we perform a practical analysis of our newly generated BIOfid dataset through various downstream-task evaluations and establish a new state of the art for TR with an 80.23% F-score. In this sense, our paper lays the foundations for future work in the field of information extraction in biology texts.
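The F-score reported for Taxa Recognition is the usual harmonic mean of precision and recall. A minimal sketch of that metric (the count values below are invented for illustration, not the BIOfid evaluation data):

```python
# Hypothetical sketch (not the BIOfid evaluation code): F1 score from
# counts of true positives (tp), false positives (fp) and false negatives (fn),
# as typically used for token-level NER/TR evaluation.

def f_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall (F1)."""
    precision = tp / (tp + fp)   # fraction of predicted taxa that are correct
    recall = tp / (tp + fn)      # fraction of gold taxa that were found
    return 2 * precision * recall / (precision + recall)

# Invented example: 803 correctly recognized taxa, 199 spurious, 197 missed.
print(f"F1 = {f_score(803, 199, 197):.4f}")
```

In practice NER evaluations usually score at the entity-span level rather than raw token counts, but the formula is the same.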
For a very long time, drug research was governed by the paradigm "one gene, one drug, one disease." More recently, however, this paradigm has been shifting because of redundant functions and alternative, mutually compensating signaling patterns, which are particularly prevalent in cancer. The logical consequence can therefore only be to consider multi-target strategies over single-target approaches. Because it is difficult to achieve consistent biodistribution and pharmacokinetics with a combination of two individual agents, in this case BET and HDAC inhibitors, single molecules exhibiting multiple inhibitory activities were sought. Here, this was initially achieved by the simple conjugation of two different pharmacophores.
In total, four different ligands of this type were synthesized, and one of them, compound 14, showed very promising results. Compound 14 combines the BET inhibitor JQ1 with the HDAC inhibitor CI994 and inhibits both BRD4 and HDAC proteins, as demonstrated by DSF and nanoBRET assays. Moreover, in vitro assays in PDAC cells showed that 14 is an even more potent dual BET/HDAC inhibitor than the combination of JQ1 and CI994. While the effects of 14 on the BETi response gene MYC are quite similar to those of JQ1, the HDAC-inhibitory effects in particular are more sustained and enhanced, probably owing to a longer residence time of 14 on HDAC than is the case for CI994. This is evident from the high level of acetylated histone H3 lysines on Western blot. This altered expression behavior had a major impact on cell growth and survival in all PDAC cell lines tested. Here the superiority of 14 over simultaneous treatment of the cells with JQ1 and CI994 became very clear: treating PDAC cells with the dual inhibitor 14 reduced cancer-cell growth and survival more strongly than either parent molecule, whether administered individually or simultaneously. In addition, 14 was combined with gemcitabine, a well-tolerated chemotherapeutic agent that on its own has only limited activity in PDAC. It turned out that the order in which the drugs were administered had a major influence on efficacy. The cell-cycle arrest induced by 14 prevents the incorporation of gemcitabine into the DNA when 14 is given before or together with gemcitabine. If, however, treatment with 14 follows the administration of gemcitabine, the gemcitabine-induced S-phase arrest and replication stress are maintained.
Compared with most previous studies on dual BET/HDAC inhibitors, this is a major improvement, since until now no significant difference had been observed between using a dual BET/HDAC inhibitor and combining two individual inhibitors.
As a proof of concept, these data supported further efforts to develop additional dual BET/HDAC inhibitors. Two further generations of dual BET/HDAC inhibitors were therefore developed, although so far none has matched the properties of 14. The third generation in particular leaves room for optimization, so a potent dual inhibitor may yet be found there. Should an approved dual BET/HDAC inhibitor emerge in the future, it is nevertheless not unlikely that none of the BET-inhibiting structures used here will be employed, while the structure of the HDAC-inhibiting part will still be comparable. The reason is that HDAC inhibitors are for the most part relatively simple in structure: as long as the essential zinc-binding group is present, the linker and the capping group appear to be of secondary importance. The greater challenge will presumably be finding the right BET inhibitor, and the options are already numerous.
In general, the concept of dual BET/HDAC inhibitors is extremely promising and worth pursuing further, above all because of the good test results obtained with compound 14. With the help of this class of inhibitors, it may become possible in the future to increase the survival rate of PDAC patients, if not as a stand-alone drug then perhaps as an adjunct to chemotherapy. Moreover, the use of dual BET/HDAC inhibitors does not appear to be limited to the treatment of PDAC and may also be applicable to other cancers. NMC, for example, is a subtype of poorly differentiated squamous cell carcinoma that is as rare as it is deadly and is characterized by a fusion of the NUT gene with BRD4, which renders it potentially susceptible to BET inhibition. Indeed, 14 also showed a greater positive effect on the NMC cells tested than JQ1 or CI994 and, among other things, prompted the cells to differentiate. ...
Hepatic inflammasome activation as origin of Interleukin-1α and Interleukin-1β in liver cirrhosis
(2020)
The metaphor of DIADEM informs the way in which Proverbs depicts the character of a woman of strength and her place in society. The metaphor serves Proverbs to conceptualise a prudent, virtuous and reasonable character in relation to the divine and the human, and thus to present such a character as the main support of a successful life.
The Greenlandic oral story-telling tradition, Oqaluttuaq, meaning "history," "legend," and "narrative," is recognized as an important entry point into Arctic collective memory. The graphic artist Nuka K. Godtfredsen and his literary and scientific collaborators have used the term as the title of graphic narratives published from 2009 to 2018, focused on four moments or 'snippets' from Greenland's history (from the periods of Saqqaq, late Dorset, Norse settlement, and European colonization). Adopting a fragmentary and episodic approach to historical narrativization, the texts frame the modern European presence in Greenland as one of multiple migrations to and settlements in the Arctic, rather than its central axis. We argue that, in consequence, the Oqaluttuaq narratives not only "provincialize" the tradition of hyperborean colonial memories, but also provide a postcolonial mnemonic construction of Greenland as a place of multiple histories, plural peoples, and heterogeneous temporalities. As such, the books also narrativize loss and disappearance—of people, cultures, and environments—as a distinctive melancholic strand in Greenlandic history. Informed by approaches in the field of cultural memory and in the study of memorial objects, by Marks' haptic visuality, and by Keenan and Weizman's forensic aesthetics, we analyze the graphic narratives of Oqaluttuaq with regard to their aesthetic dimensions, and investigate the role of material objects and artifacts, which work as narrative "props" for multiple stories of encounter and survival in the Arctic.
Objectives: The aim of this study was to develop a prognostic tool to estimate long-term tooth retention in periodontitis patients at the beginning of active periodontal therapy (APT). Material and methods: Tooth-related factors (type, location, bone loss (BL), infrabony defects, furcation involvement (FI), abutment status) and patient-related factors (age, gender, smoking, diabetes, plaque control record) were investigated in patients who had completed APT 10 years before. Descriptive analysis was performed, and a generalized linear mixed-model tree was used to identify predictors for the main outcome variable, tooth loss. To evaluate goodness of fit, the area under the curve (AUC) was calculated using cross-validation. A bootstrap approach was used to robustly identify risk factors while avoiding overfitting. Results: Only a small percentage of teeth was lost during 10 years of supportive periodontal therapy (SPT; 0.15/year/patient). The risk factors abutment function and diabetes and the risk indicators BL, FI, and age (≤ 61 vs. > 61) were identified as predictors of tooth loss. The prediction model reached an AUC of 0.77. Conclusion: This quantitative prognostic model supports data-driven decision-making when establishing a treatment plan for periodontitis patients. In light of this, the presented prognostic tool may be of supporting value. Clinical relevance: In daily clinical practice, a quantitative prognostic tool may support dentists with data-based decision-making. However, it should be stressed that treatment planning is strongly associated with the patient's wishes and adherence. The tool described here may support the establishment of an individual treatment plan for periodontally compromised patients.
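The evaluation step described above (an AUC estimated via cross-validation) can be sketched as follows. This is a hedged illustration on synthetic data with a plain logistic regression, not the paper's generalized linear mixed-model tree; the predictor names mirror the abstract, but all values are random:

```python
# Illustrative sketch only: cross-validated AUC for a binary tooth-loss
# outcome. Synthetic data; logistic regression stands in for the paper's
# generalized linear mixed-model tree.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: bone loss (%), furcation involvement (0-3),
# abutment status (0/1), age (years). Names only; values are simulated.
X = np.column_stack([
    rng.uniform(0, 80, n),    # bone loss
    rng.integers(0, 4, n),    # furcation involvement
    rng.integers(0, 2, n),    # abutment status
    rng.normal(60, 10, n),    # age
])
# Synthetic outcome loosely driven by bone loss and abutment status.
logit = 0.05 * X[:, 0] + 1.0 * X[:, 2] - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# 5-fold cross-validated AUC, averaged across folds.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

Cross-validation here serves the same purpose as in the study: the AUC is computed on held-out folds, so it estimates discrimination on unseen patients rather than training fit.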
Introduction: Deep brain stimulation (DBS) has become a well-established treatment modality for a variety of conditions over the last decades. Multiple surgeries are an essential part of the postoperative course of DBS patients if nonrechargeable implanted pulse generators (IPGs) are applied. So far, the rate of subclinical infections in this field is unknown. In this prospective cohort study, we used sonication to evaluate possible microbial colonization of IPGs from replacement surgery. Methods: All consecutive patients undergoing IPG replacement between May 1, 2019 and November 15, 2020 were evaluated. The removed hardware was investigated using sonication to detect biofilm-associated bacteria. Demographic and clinical data were analyzed. Results: A total of 71 patients with a mean (±SD) age of 64.5 ± 15.3 years were evaluated. In 23 of these (i.e., 32.4%) patients, a positive sonication culture was found. In total, 25 microorganisms were detected. The most commonly isolated microorganisms were Cutibacterium acnes (formerly known as Propionibacterium acnes) (68%) and coagulase-negative Staphylococci (28%). Within the follow-up period (5.2 ± 4.3 months), none of the patients developed a clinically manifest infection. Discussion/Conclusions: Bacterial colonization of IPGs without clinical signs of infection is common but does not lead to manifest infection. Further, larger studies are warranted to clarify the impact of low-virulence pathogens in clinically asymptomatic patients.
Purpose: The prospective, randomized ERGO2 trial investigated the effect of a calorie-restricted ketogenic diet and intermittent fasting (KD-IF) on re-irradiation for recurrent brain tumors. The study did not meet its primary endpoint of improved progression-free survival in comparison to a standard diet (SD). Here we report the quality-of-life/neurocognition results and a detailed analysis of the diet diaries. Methods: 50 patients were randomized 1:1 to re-irradiation combined with either SD or KD-IF. The KD-IF schedule included 3 days of ketogenic diet (KD: 21–23 kcal/kg/d, carbohydrate intake limited to 50 g/d), followed by 3 days of fasting and again 3 days of KD. Follow-up included examination of cognition, quality of life and serum samples. Results: The 20 patients who completed KD-IF met the prespecified goals for calorie and carbohydrate restriction. Substantial decreases in leptin and insulin and an increase in uric acid were observed. The SD group, of note, had a lower calorie intake than expected (21 kcal/kg/d instead of 30 kcal/kg/d). Neither quality of life nor cognition was affected by the diet. Low glucose emerged as a significant prognostic parameter in a best-responder analysis. Conclusion: The strict caloric goals of the ERGO2 trial were well tolerated by patients with recurrent brain cancer. The short diet schedule led to significant metabolic changes, with low glucose emerging as a candidate marker of better prognosis. The unexpectedly lower calorie intake of the control group complicates the interpretation of the results. ClinicalTrials.gov number: NCT01754350; Registration: 21.12.2012.
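The per-kilogram targets in the abstract translate into absolute daily goals once a body weight is fixed. A small arithmetic sketch, using only numbers stated above and a hypothetical 70 kg patient:

```python
# Illustrative arithmetic only (limits taken from the ERGO2 abstract):
# daily targets for a hypothetical 70 kg patient.
weight_kg = 70                 # hypothetical body weight
kcal_low, kcal_high = 21, 23   # kcal/kg/d on ketogenic-diet days
carb_limit_g = 50              # carbohydrate cap per day (g)
sd_expected = 30               # expected kcal/kg/d in the standard-diet arm

kd_low = weight_kg * kcal_low
kd_high = weight_kg * kcal_high
print(f"KD days: {kd_low}-{kd_high} kcal/d, <= {carb_limit_g} g carbohydrate")
print(f"SD expectation: {weight_kg * sd_expected} kcal/d")
```

This also illustrates why the control group's observed 21 kcal/kg/d matters: at that intake the SD arm ate roughly as little as the KD arm, blurring the intended contrast between study arms.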