Universitätspublikationen
The putative effects of dark matter are most easily explained by a collisionless fluid on cosmological scales and by Modified Newtonian Dynamics (MOND) on galactic scales. Hybrid MOND dark matter models combine the successes of dark matter on cosmological scales and those of MOND on galactic scales. An example of such a model is superfluid dark matter (SFDM) which postulates that this differing behavior with scale is caused by a single underlying substance with two phases. In this thesis, I highlight successful observational tests of SFDM regarding strong lensing and the Milky Way rotation curve. I also discuss three problems due to the double role of the aforementioned single underlying substance and show how these may be avoided. Finally, I introduce a novel Cherenkov radiation constraint for hybrid MOND dark matter models. This constraint is different from standard modified gravity Cherenkov radiation constraints because such hybrid models allow even non-relativistic objects like stars to emit Cherenkov radiation.
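For orientation, the MOND phenomenology referred to here is commonly summarized by the textbook interpolation relation (a standard statement, not a formula quoted from the thesis):

```latex
% Standard MOND interpolation relation (illustrative):
\mu\!\left(\frac{a}{a_0}\right) a = a_N,
\qquad \mu(x) \to 1 \;\; (x \gg 1), \qquad \mu(x) \to x \;\; (x \ll 1),
```

so that Newtonian dynamics is recovered for accelerations a ≫ a₀, while in the deep-MOND regime a ≈ √(a_N a₀), with a₀ ≈ 1.2 × 10⁻¹⁰ m s⁻².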
For the academic audience, this paper presents the outcome of a well-identified, large change in the monetary policy rule through the lens of a standard New Keynesian model and asks whether the model properly captures the effects. For policymakers, it presents a cautionary tale of the dismal effects of ignoring basic macroeconomics. The Turkish monetary policy experiment of the past decade, stemming from a belief of the government that higher interest rates cause higher inflation, provides an unfortunately clean exogenous variance in the policy rule. The mandate to keep rates low, and the frequent policymaker turnover orchestrated by the government to enforce this, led to the Taylor principle not being satisfied and eventually a negative coefficient on inflation in the policy rule. In such an environment, was the exchange rate still a random walk? Was inflation anchored? Does the “standard model” suffice to explain the broad contours of macroeconomic outcomes in an emerging economy with large identifying variance in the policy rule? There are no surprises for students of open-economy macroeconomics; the answers are no, no, and yes.
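For readers outside macroeconomics, the Taylor principle invoked above can be stated with a generic textbook policy rule (illustrative notation, not the paper's estimated specification):

```latex
% Generic Taylor-type rule (illustrative notation):
i_t = r^* + \pi^* + \phi_\pi \,(\pi_t - \pi^*) + \phi_x\, x_t,
\qquad \text{Taylor principle: } \phi_\pi > 1,
```

where i_t is the policy rate, π_t inflation, and x_t the output gap. A coefficient φ_π below one, let alone a negative one as described above, means real rates fall when inflation rises, leaving inflation unanchored.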
Reproducible annotations
(2022)
This bachelor thesis presents a software solution which implements reproducible annotations in the context of the UIMA framework. This is achieved by automatically containerizing arbitrary analysis engines and annotating every analysis engine configuration in the processed CAS document. Any CAS document created by this solution is self-sufficient and able to reproduce the exact environment under which it was created.
A review of the state-of-the-art software in the field of UIMA reveals that there are many implementations trying to increase reproducibility for a given application relying on UIMA, but no publication trying to increase the reproducibility of UIMA itself. This thesis addresses that technological gap and concludes with a thorough analysis, which shows a negligible overhead in memory consumption but a significant performance regression depending on the complexity of the analysis engine under examination.
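A minimal sketch of the general idea, with hypothetical helper names and no claim to reflect the thesis's actual UIMA integration: build an image for an analysis engine via the Docker CLI and attach the image digest together with the engine parameters to the document's metadata, so the processing environment can be reconstructed later.

```python
import json
import subprocess

def containerize_engine(context_dir: str, tag: str) -> str:
    """Build a Docker image for an analysis engine and return its immutable ID."""
    subprocess.run(["docker", "build", "-t", tag, context_dir], check=True)
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.Id}}", tag],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()  # e.g. "sha256:..."

def record_engine_config(document_metadata: dict, engine_name: str,
                         image_id: str, parameters: dict) -> None:
    """Attach the exact engine configuration to the processed document's metadata."""
    document_metadata.setdefault("analysis_engines", []).append({
        "engine": engine_name,
        "image": image_id,          # pin the environment, not just a mutable tag
        "parameters": parameters,   # the configuration used for this run
    })

# Illustrative usage (paths and names are hypothetical):
meta = {}
image_id = containerize_engine("./my-engine", "my-engine:latest")
record_engine_config(meta, "TokenizerEngine", image_id, {"language": "de"})
print(json.dumps(meta, indent=2))
```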
This paper analyses disclosure duties in insurance contract law in Germany on the basis of questions developed in preparation of the World Congress of the International Insurance Law Association (AIDA) 2018. As risk factors are within the policyholder’s sphere of knowledge, the insurer naturally depends on gaining such knowledge from its policyholder in order to calculate and evaluate premium and risk. Legal approaches as to how the insurer may obtain relevant information and the legal consequences differ in national insurance contract laws around the globe. Taking part in this legal comparison, the paper describes the key elements of such a mechanism from a German perspective and comprises both duties of the policyholder and duties of the insurer.
As for the policyholder, the issues addressed include the difference between a duty to (spontaneously) disclose and a duty not to misrepresent in response to the insurer's questions, the prerequisites for and remedies attached to such a duty, the subjective standard of the disclosure duty, and a duty to notify material changes during the contract term. On the other hand, the paper also addresses an insurer's duty to investigate, a duty to ascertain the policyholder's understanding of the policy and a duty to inform during the contract term or after the occurrence of an insured event. In doing so, the paper offers a comprehensive and critical overview of the transfer of knowledge in the insurance (pre-)contractual relationship.
The recognition of pharmacological substances, compounds and proteins is an essential preliminary work for the recognition of relations between chemicals and other biomedically relevant units. In this paper, we describe an approach to Task 1 of the PharmaCoNER Challenge, which involves the recognition of mentions of chemicals and drugs in Spanish medical texts. We train a state-of-the-art BiLSTM-CRF sequence tagger with stacked Pooled Contextualized Embeddings, word and sub-word embeddings using the open-source framework FLAIR. We present a new corpus composed of articles and papers from Spanish health science journals, termed the Spanish Health Corpus, and use it to train domain-specific embeddings which we incorporate in our model training. We achieve a result of 89.76% F1-score using pre-trained embeddings and are able to improve these results to 90.52% F1-score using specialized embeddings.
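A minimal sketch of this kind of FLAIR training setup, using the FLAIR API of that period; the corpus path, column layout, and the particular pre-trained Spanish embedding identifiers are illustrative assumptions, not the authors' exact configuration:

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import StackedEmbeddings, WordEmbeddings, PooledFlairEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# CoNLL-style corpus: token in column 0, NER tag in column 1 (illustrative layout).
corpus = ColumnCorpus("data/pharmaconer", {0: "text", 1: "ner"},
                      train_file="train.conll", dev_file="dev.conll", test_file="test.conll")
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")

# Stack word embeddings with pooled contextualized (character-LM) embeddings.
embeddings = StackedEmbeddings([
    WordEmbeddings("es"),                    # Spanish fastText word embeddings
    PooledFlairEmbeddings("es-forward"),     # embedding names are assumptions
    PooledFlairEmbeddings("es-backward"),
])

# BiLSTM-CRF sequence tagger on top of the stacked embeddings.
tagger = SequenceTagger(hidden_size=256, embeddings=embeddings,
                        tag_dictionary=tag_dictionary, tag_type="ner", use_crf=True)

ModelTrainer(tagger, corpus).train("models/pharmaconer",
                                   learning_rate=0.1, mini_batch_size=32, max_epochs=100)
```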
Despite the great importance of the Latin language in the past, there are relatively few resources available today to develop modern NLP tools for this language. Therefore, the EvaLatin Shared Task for Lemmatization and Part-of-Speech (POS) tagging was published in the LT4HALA workshop. In our work, we dealt with the second EvaLatin task, that is, POS tagging. Since most of the available Latin word embeddings were trained on either little or inaccurate data, we first trained several embeddings on better data. Based on these embeddings, we trained several state-of-the-art taggers and used them as input for an ensemble classifier called LSTMVoter. We were able to achieve the best results for both the cross-genre and the cross-time task (90.64% and 87.00%) without using additional annotated data (closed modality). In the meantime, we have further improved the system and achieved even better results (96.91% on classical, 90.87% on cross-genre, and 87.35% on cross-time).
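To picture the ensembling step, here is a naive majority vote over aligned tagger outputs; this is only a simplified stand-in, since LSTMVoter learns a recurrent model over the taggers' outputs rather than voting directly:

```python
from collections import Counter

def majority_vote(predictions_per_tagger):
    """Combine per-token POS predictions from several taggers by majority vote.

    predictions_per_tagger: list of tag sequences, one per tagger,
    all aligned to the same token sequence.
    """
    combined = []
    for token_tags in zip(*predictions_per_tagger):
        tag, _count = Counter(token_tags).most_common(1)[0]
        combined.append(tag)
    return combined

# Illustrative usage with three hypothetical taggers:
tagger_outputs = [
    ["NOUN", "VERB", "ADJ"],
    ["NOUN", "VERB", "NOUN"],
    ["NOUN", "AUX",  "ADJ"],
]
print(majority_vote(tagger_outputs))  # ['NOUN', 'VERB', 'ADJ']
```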
We present new results on nonlocal Dirichlet problems established by means of suitable spectral-theoretic and variational methods, taking care of the nonlocal feature of the operators. We mainly address the following. First, we estimate the Morse index of radially symmetric sign-changing bounded weak solutions to a semilinear Dirichlet problem involving the fractional Laplacian. In particular, we prove a conjecture due to Bañuelos and Kulczycki on the geometric structure of the second Dirichlet eigenfunctions. Secondly, we study the small-order asymptotics, with respect to the parameter s, of the Dirichlet eigenvalue problem for the fractional Laplacian. Thirdly, we deal with the logarithmic Schrödinger operator. In particular, we provide an alternative derivation of the singular integral representation corresponding to the associated Fourier symbol and introduce tools and a functional analytic framework for variational studies. Finally, we study nonlocal operators of order strictly below one. In particular, we investigate interior regularity properties of weak solutions to the associated Poisson problem depending on the regularity of the right-hand side.
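For reference, the fractional Laplacian appearing here has the standard singular-integral (principal value) representation, quoted in textbook normalization rather than from the thesis:

```latex
% Singular-integral representation of the fractional Laplacian, 0 < s < 1:
(-\Delta)^s u(x) = c_{N,s}\,\mathrm{P.V.} \int_{\mathbb{R}^N} \frac{u(x) - u(y)}{|x - y|^{N + 2s}}\, dy,
```

with a normalization constant c_{N,s} > 0 chosen so that the Fourier symbol of the operator is |ξ|^{2s}.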
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and have developed a portal to access the semantics of German-language biodiversity texts, mainly from the 19th and 20th century. However, to make such a portal work, a number of methods had to be developed or adapted first. In particular, text-technological information extraction methods were needed to extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained on annotated data. To this end, among other things, we gathered the BIOfid text corpus, a cooperatively built resource developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple annotation approach, which takes into account both general and biology-specific classifications and thereby goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons, in particular regarding multiple annotation projects, are drawn.
Are nearby places (e.g., cities) described by related words? In this article, we transfer this research question in the field of lexical encoding of geographic information onto the level of intertextuality. To this end, we explore Volunteered Geographic Information (VGI) to model texts addressing places at the level of cities or regions with the help of so-called topic networks. This is done to examine how language encodes and networks geographic information on the aboutness level of texts. Our hypothesis is that the networked thematizations of places are similar, regardless of their distances and the underlying communities of authors. To investigate this, we introduce Multiplex Topic Networks (MTN), which we automatically derive from Linguistic Multilayer Networks (LMN) as a novel model, especially of thematic networking in text corpora. Our study shows a Zipfian organization of the thematic universe in which geographical places (especially cities) are located in online communication. We interpret this finding in the context of cognitive maps, a notion which we extend by so-called thematic maps. According to our interpretation of this finding, the organization of thematic maps as part of cognitive maps results from a tendency of authors to generate shareable content that ensures the continued existence of the underlying media. We test our hypothesis by example of special wikis and extracts of Wikipedia. In this way, we come to the conclusion that geographical places, whether close to each other or not, are located in neighboring semantic places that span similar subnetworks in the topic universe.
In the model of randomly perturbed graphs we consider the union of a deterministic graph G with minimum degree αn and the binomial random graph G(n, p). This model was introduced by Bohman, Frieze, and Martin, and for Hamilton cycles their result bridges the gap between Dirac's theorem and the results by Pósa and Korshunov on the threshold in G(n, p). In this note we extend this result in G ∪ G(n, p) to sparser graphs with α = o(1). More precisely, for any ε > 0 and α : ℕ → (0, 1) we show that a.a.s. G ∪ G(n, β/n) is Hamiltonian, where β = −(6 + ε) log(α). If α > 0 is a fixed constant this gives the aforementioned result by Bohman, Frieze, and Martin, and if α = O(1/n) the random part G(n, p) is sufficient for a Hamilton cycle. We also discuss embeddings of bounded-degree trees and other spanning structures in this model, which lead to interesting questions on almost-spanning embeddings into G(n, p).
The annotation of texts and other material in the field of digital humanities and Natural Language Processing (NLP) is a common task of research projects. At the same time, the annotation of corpora is certainly the most time- and cost-intensive component in research projects and often requires a high level of expertise, depending on the research interest. However, for the annotation of texts, a wide range of tools is available, both for automatic and manual annotation. Since automatic pre-processing methods are not error-free and there is an increasing demand for the generation of training data, also with regard to machine learning, suitable annotation tools are required. This paper defines criteria of flexibility and efficiency of complex annotations for the assessment of existing annotation tools. To extend this list of tools, the paper describes TextAnnotator, a browser-based multi-annotation system, which has been developed to perform platform-independent multimodal annotations and to annotate complex textual structures. The paper illustrates the current state of development of TextAnnotator and demonstrates its ability to evaluate annotation quality (inter-annotator agreement) at runtime. In addition, it is shown how annotations of different users can be performed simultaneously and collaboratively on the same document from different platforms, using UIMA as the basis for annotation.
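As an illustration of what an inter-annotator agreement check involves, a minimal example using Cohen's kappa for two annotators; the abstract does not state which agreement measure TextAnnotator computes, and scikit-learn is used here purely for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Labels assigned by two annotators to the same sequence of annotation units.
annotator_a = ["PER", "LOC", "O", "ORG", "O", "LOC"]
annotator_b = ["PER", "LOC", "O", "O",   "O", "LOC"]

# 1.0 = perfect agreement, 0 = agreement expected by chance.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ≈ 0.76 for this toy example
```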
We present a deterministic workflow for genotyping single and double transgenic individuals directly upon nascence that prevents overproduction and reduces the number of wasted animals by two-thirds. In our vector concepts, transgenes are accompanied by two of four clearly distinguishable transformation markers that are embedded in interwoven but incompatible Lox site pairs. Following Cre-mediated recombination, the genotypes of single and double transgenic individuals were successfully identified by specific marker combinations in 461 scorings.
Drawing on insights found in both philosophy and psychology, this paper offers an analysis of hate and distinguishes between its main types. I argue that hate is a sentiment, i.e., a form of regarding the other as evil which on certain occasions can be acutely felt. On the basis of this definition, I develop a typology which, unlike the main typologies in philosophy and psychology, does not explain hate in terms of patterns of other affective states. By examining the developmental history and intentional structure of hate, I obtain two variables: the replaceability/irreplaceability of the target and the determinacy/indeterminacy of the focus of concern. The combination of these variables generates the four-types model of hate, according to which hate comes in the following kinds: normative, ideological, retributive, and malicious.
We consider algorithms for strategic communication with commitment power between two rational parties with their own interests. If a party has commitment power, it commits to a strategy of action, publishes it, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) is able to observe it directly. The second party (R), however, makes a decision by choosing one of n actions whose types are unknown to it. This type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S has commitment power. Here, S commits to a signaling scheme φ and communicates it to R. The scheme describes which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowing φ allows R to update its beliefs about the state of the world depending on the received signals. S must take this into account when designing φ, since R will not follow recommendations that advantage S at R's expense. We consider the problem from S's perspective and describe signaling schemes that guarantee S as large a utility as possible.
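As a minimal illustration of this commitment logic, consider the classic two-state, two-action example from the persuasion literature (not one of the schemes developed in the thesis): S always recommends the action in the good state and, in the bad state, recommends it just often enough that R's posterior after a recommendation still makes following it rational.

```python
def optimal_binary_persuasion(prior_good: float, threshold: float):
    """Optimal signaling scheme in the classic two-state persuasion example.

    prior_good: R's prior probability that the state is 'good'.
    threshold:  minimal posterior P(good | recommendation) at which R acts.
    Returns (q, sender_utility), where q is the probability of recommending
    the action when the state is bad; S gets utility 1 iff R acts.
    """
    if prior_good >= threshold:
        return 1.0, 1.0  # R acts even without any information
    # Choose q so that P(good | recommend) = prior / (prior + (1 - prior) * q) == threshold.
    q = prior_good * (1 - threshold) / ((1 - prior_good) * threshold)
    sender_utility = prior_good + (1 - prior_good) * q  # probability a recommendation is sent
    return q, sender_utility

# Example: prior 0.3, R acts only if P(good) >= 0.5.
q, value = optimal_binary_persuasion(0.3, 0.5)
print(f"recommend in bad state with prob {q:.2f}, S's expected utility {value:.2f}")
# -> q = 0.43, utility = 0.60, versus 0.30 under full revelation.
```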
First, we consider the offline case. Here, S learns the complete state of the world and then sends a signal to R. We consider a scenario with a bounded number of k ≤ n signals. With only k signals, S can recommend at most k different actions. For various symmetric instances, we describe a polynomial-time algorithm for computing an optimal signaling scheme with k signals.
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We describe polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signaling scheme with k signals.
In the online case, the action types are revealed one by one in rounds. After observing the current action, S sends a signal and R must react immediately by accepting or rejecting the action. The process ends once an action is chosen; otherwise, the next action type is revealed and previous actions can no longer be chosen. As benchmark for our online signaling schemes we use the best offline signaling scheme.
First, we consider a scenario with independent distributions. We show how an optimal signaling scheme can be computed in polynomial time. However, there are instances in which S, unlike in the offline case, cannot obtain any positive value in the online case. We then consider a subset of instances for which a simple signaling scheme guarantees a constant approximation factor and show its optimality.
In addition, we consider 16 different scenarios with different levels of information for S and R and different objective functions for S and R, assuming that the action types are a priori unknown but are revealed in uniformly random order. For 14 cases we describe signaling schemes with constant approximation factors; such schemes do not exist for the remaining two cases. Moreover, we show for most cases that the stated approximation guarantees are optimal.
In the second part, we consider an online variant of delegated search. Here, R has commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For each type, φ specifies the probability with which R accepts it. Consequently, S tries to find an action with a type that is good for itself and accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it. Only proposed actions can be chosen by R.
For the offline case, constant approximation factors compared to an action of optimal value for R are known for identically distributed action types. We show that in the online case, R can in general only achieve a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since this bound requires an exponential discrepancy in the types' values for S, we consider parameterized instances. The parameters bound the values for S and the ratio of the values for R and S, respectively. We show (nearly) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
The field of high-energy heavy-ion research is devoted to the study of the quark-gluon plasma (QGP). A QGP is a very hot and dense state of matter that filled the universe for a few microseconds shortly after the Big Bang. Under these extreme conditions the fundamental building blocks of matter, the quarks and gluons, are quasi-free, i.e., not confined in hadrons as is the case under ordinary conditions. Hadrons are particles composed of quarks and gluons. The best-known hadrons are protons and neutrons, the constituents of atomic nuclei, from which, together with electrons, all known matter is built.
To create a QGP in the laboratory, ultra-relativistic heavy ions, for example Pb-208 nuclei, are collided with each other. This is done at CERN, the largest nuclear research center in the world. The particle accelerator that accelerates protons and Pb nuclei and brings them to collision is called the Large Hadron Collider (LHC) and, with a circumference of 27 km, is the largest in the world. In a single Pb-Pb collision at the LHC, several thousand particles and antiparticles are produced. The dedicated experiment for the study of heavy-ion collisions at the LHC is ALICE. ALICE is equipped with several particle detectors that make it possible to measure and identify thousands of particles simultaneously.
Among the produced particles there are also light (anti)nuclei, although these are produced only very rarely. The number of produced particles per species depends on their mass: in Pb-Pb collisions at the LHC, the yield of produced (anti)nuclei drops exponentially, by a factor of about 1/330 for each additional nucleon. The amount of produced particles per species provides information about the production mechanism during the transition from the QGP to the hadron gas. Light (anti)nuclei are of particular interest here, since they are comparatively large and their binding energies are up to two orders of magnitude smaller than the temperatures prevailing when the hadrons are created. To this day it is not understood how light (anti)nuclei can be produced and survive under these conditions.
For this thesis, about 270 million Pb-Pb collisions at a center-of-mass energy of 5.02 TeV, recorded by ALICE in November 2018, were analyzed. The production of (anti)triton and (anti)alpha was studied. Because of their large mass, both nuclei are produced very rarely, by far not in every collision. Antialpha is the heaviest antinucleus ever measured. Due to this rarity, the size of the available data set is crucial. It was possible to extract the first antialpha transverse-momentum spectrum ever measured. Transverse-momentum spectra were also determined for (anti)triton and alpha.
The results were compared with theoretical models and other ALICE measurements.
Finally, an outlook is given on the recently completed upgrade of the ALICE Time Projection Chamber (TPC). In the upcoming data-taking period, which will start soon, the LHC will increase its collision rate considerably, making it possible to record more than 100 times as much data as before. The (anti)triton and (anti)alpha analyses described in this thesis will benefit considerably from this. To cope with the substantially higher collision rates, several detectors, among them the TPC, had to be extensively upgraded. In the first two data-taking periods the TPC was operated with multi-wire proportional chambers. These, however, are far too slow for the planned collision rates. Therefore, in 2019, during a long shutdown of the LHC, they were replaced by readout chambers based on quadruple GEM (Gas Electron Multiplier) foils, which allow continuous readout of the TPC. Since this is the first large-scale GEM TPC ever built, an extensive research and development (R&D) program was necessary to characterize and test the GEM readout chambers. Within this R&D program, systematic measurements were carried out at the beginning of this doctoral project on a small test TPC with quadruple-GEM readout, built specifically for this purpose. The backflow of the ions produced during gas amplification into the drift volume of the TPC and the energy resolution were measured with different GEM foil types and different arrangements. The goal was to achieve the smallest possible ion backflow together with the best possible energy resolution. A compromise had to be found, since the two quantities behave in opposite ways. It was nevertheless possible, with several GEM configurations, to identify voltage settings at which both quantities met the desired requirements.
Objectives: To compare dual-energy CT (DECT) and MRI for assessing presence and extent of traumatic bone marrow edema (BME) and fracture line depiction in acute vertebral fractures. Methods: Eighty-eight consecutive patients who underwent dual-source DECT and 3-T MRI of the spine were retrospectively analyzed. Five radiologists assessed all vertebrae for presence and extent of BME and for identification of acute fracture lines on MRI and, after 12 weeks, on DECT series. Additionally, image quality, image noise, and diagnostic confidence for overall diagnosis of acute vertebral fracture were assessed. Quantitative analysis of CT numbers was performed by a sixth radiologist. Two radiologists analyzed MRI and grayscale DECT series to define the reference standard. Results: For assessing BME presence and extent, DECT showed high sensitivity (89% and 84%, respectively) and specificity (98% in both), and similarly high diagnostic confidence compared to MRI (2.30 vs. 2.32; range 0–3) for the detection of BME (p = .72). For evaluating acute fracture lines, MRI achieved high specificity (95%), moderate sensitivity (76%), and a significantly lower diagnostic confidence compared to DECT (2.42 vs. 2.62, range 0–3) (p < .001). A cutoff value of − 0.43 HU provided a sensitivity of 89% and a specificity of 90% for diagnosing BME, with an overall AUC of 0.96. Conclusions: DECT and MRI provide high diagnostic confidence and image quality for assessing acute vertebral fractures. While DECT achieved high overall diagnostic accuracy in the analysis of BME presence and extent, MRI provided moderate sensitivity and lower confidence for evaluating fracture lines.
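For orientation, a cutoff and AUC of this kind are typically derived from per-vertebra attenuation values with a standard ROC analysis; the sketch below uses hypothetical CT numbers (e.g., from virtual non-calcium maps) and Youden's index, and is not the study's actual analysis:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical per-vertebra CT numbers (HU) and reference-standard labels
# (1 = bone marrow edema, 0 = no edema); edema tends toward higher values.
ct_numbers = np.array([12.4, 5.1, -0.1, 8.8, -35.2, -48.9, -20.5, -12.6])
edema      = np.array([1,    1,    1,   1,    0,     0,     0,     0])

auc = roc_auc_score(edema, ct_numbers)            # discrimination of edema by CT number
fpr, tpr, thresholds = roc_curve(edema, ct_numbers)

# Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}, cutoff = {thresholds[best]:.2f} HU, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```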
Evaluation of stability and inactivation methods of SARS-CoV-2 in context of laboratory settings
(2021)
The novel coronavirus SARS-CoV-2 is the causative agent of the acute respiratory disease COVID-19, which has become a global concern due to its rapid spread. Work with SARS-CoV-2 in a laboratory setting is rated at biosafety level 3 (BSL-3). However, certain research applications, in particular in molecular biology, require incomplete denaturation of the proteins, which might cause safety issues when handling contaminated samples. In this study, we evaluated lysis buffers that are commonly used in molecular biological laboratories for their ability to inactivate SARS-CoV-2. In addition, viral stability in cell culture media at 4 °C and on display glass and plastic surfaces used in the laboratory environment was analyzed. Furthermore, we evaluated chemical and non-chemical inactivation methods including heat inactivation, UV-C light, and addition of ethanol, acetone-methanol, or PFA, which might be used as a subsequent inactivation step in the case of insufficient inactivation. We infected susceptible Caco-2 and Vero cells with pre-treated SARS-CoV-2 and determined the 50% tissue culture infectious dose (TCID50) using crystal violet staining and microscopy. In addition, lysates of infected cells and virus-containing supernatant were subjected to RT-qPCR analysis. We found that guanidine thiocyanate and most of the tested detergent-containing lysis buffers were effective in inactivating SARS-CoV-2; however, the M-PER lysis buffer, which contains a proprietary detergent, failed to inactivate the virus. In conclusion, careful evaluation of the inactivation methods used is required, especially for non-denaturing buffers. Additional inactivation steps might be necessary before lysed viral samples are removed from BSL-3.
Background: Myelosuppression is a potential dose-limiting factor in radioligand therapy (RLT). This study aims to investigate occurrence, severity and reversibility of hematotoxic adverse events in patients undergoing RLT with 177Lu-PSMA-617 for metastatic castration-resistant prostate cancer (mCRPC). The contribution of pretreatment risk factors and cumulative treatment activity is taken into account specifically. Methods: RLT was performed in 140 patients receiving a total of 497 cycles. A mean activity of 6.9 ± 1.3 GBq 177Lu-PSMA-617 per cycle was administered, and mean cumulative activity was 24.6 ± 15.9 GBq. Hematological parameters were measured at baseline, prior to each treatment course, 2 to 4 weeks thereafter and throughout follow-up. Toxicity was graded based on Common Terminology Criteria for Adverse Events v5.0. Results: Significant (grade ≥ 3) hematologic adverse events occurred in 13 (9.3%) patients, with anemia in 10 (7.1%), leukopenia in 5 (3.6%) and thrombocytopenia in 6 (4.3%). Hematotoxicity was reversible to grade ≤ 2 through a median follow-up of 8 (IQR 9) months in all but two patients who died from disease progression within less than 3 months after RLT. Myelosuppression was significantly more frequent in patients with pre-existing grade 2 cytopenia (OR: 3.50, 95%CI 1.08–11.32, p = 0.04) or high bone tumor burden (disseminated or diffuse based on PROMISE miTNM, OR: 5.08, 95%CI 1.08–23.86, p = 0.04). Previous taxane-based chemotherapy was associated with an increased incidence of significant hematotoxicity (OR: 4.62, 95%CI 1.23–17.28, p = 0.02), while treatment with 223Ra-dichloride, cumulative RLT treatment activity and activity per cycle were not significantly correlated (p = 0.93, 0.33, 0.29). Conclusion: Hematologic adverse events after RLT have an acceptable overall incidence and are frequently reversible. High bone tumor burden, previous taxane-based chemotherapy and pretreatment grade 2 cytopenia may be considered as risk factors for developing clinically relevant myelosuppression, whereas cumulative RLT activity and previous 223Ra-dichloride treatment show no significant contribution to incidence rates.
Purpose: To analyze refractive and topographic changes secondary to Descemet membrane endothelial keratoplasty (DMEK) in pseudophakic eyes with Fuchs’ endothelial dystrophy (FED). Methods: Eighty-seven pseudophakic eyes of 74 patients who underwent subsequent DMEK surgery for corneal endothelial decompensation and associated visual impairment were included. Median post-operative follow-up time was 12 months (range: 3–26 months). Main outcome measures were pre- and post-operative manifest refraction, anterior and posterior corneal astigmatism, simulated keratometry (CASimK) and Q value obtained by Scheimpflug imaging. Secondary outcome measures included corrected distance visual acuity (CDVA), central corneal densitometry, central corneal thickness, corneal volume (CV), anterior chamber volume (ACV) and anterior chamber depth (ACD). Results: After DMEK surgery, mean pre-operative spherical equivalent (± SD) changed from + 0.04 ± 1.73 D to + 0.37 ± 1.30 D post-operatively (p = 0.06). CDVA, proportion of emmetropic eyes, ACV and ACD increased significantly during follow-up. There was also a significant decrease in posterior corneal astigmatism, central corneal densitometry, central corneal thickness and corneal volume over time (p = 0.001). Only anterior corneal astigmatism and simulated keratometry (CASimK) remained fairly stable after DMEK. Conclusion: Despite tendencies toward a hyperopic shift, changes in SE were not significant and refraction remained overall stable in pseudophakic patients undergoing DMEK for FED. Analysis of corneal parameters by Scheimpflug imaging mainly revealed changes in posterior corneal astigmatism pointing out the relevance of posterior corneal profile changes during edema resolution after DMEK.