Responding to inadequate awareness of the outstanding importance of biodiversity, the BioFrankfurt network was founded in 2004 in the State of Hesse, Germany. It is presented here as a case study and may serve as a model for other parts of the world, such as the Middle East. In 2007, only about 26% of the German population were familiar with the term “biodiversity”, and most of them had only a vague idea of its meaning. The BioFrankfurt network of institutions addressed this problem by raising public awareness and supporting research, education and conservation. A regional biodiversity education program has been developed and delivered to more than 500 schools. Since 2007, an innovative public relations campaign has combined raising awareness of regional biodiversity issues with activities to improve the public image of the Frankfurt area. Because of its geographical focus, the network’s activities gained the attention of local and regional politicians and other decision makers, culminating in the joint establishment of a new Biodiversity and Climate Research Centre by BioFrankfurt member institutions. The success of current activities attracts new partners, resulting in ambitious cooperation initiatives. The authors are convinced that the network’s concepts and activities have great potential to profoundly enhance the understanding and acceptance of biodiversity issues elsewhere. Keywords: BioFrankfurt, biodiversity network, education, public awareness, scientific communication
COPA syndrome is a newly discovered hereditary immunodeficiency affecting the lungs, kidneys, and joints. The mutated gene encodes the α subunit of coatomer complex I, a protein transporter from the Golgi back to the endoplasmic reticulum. The impaired return of proteins leads to intracellular stress. The syndrome is an autoimmune and autoinflammatory disease that can be grouped among the interferonopathies. Knowledge about COPA syndrome and its treatment is still limited. In this paper, we describe an additional patient, a 15-year-old girl with rheumatoid factor-positive polyarthritis and rheumatoid nodules since the age of 2, who developed interstitial lung disease. The detected mutation c.698G>A was identified as disease-causing. The patient presented with symmetric polyarthritis of the wrists, fingers, and hip and ankle joints, with significant functional impairment and high disease activity. Laboratory parameters demonstrated chronic inflammation, hypergammaglobulinemia, high-titre ANA (antinuclear antibodies) and CCP (anti-citrullinated protein) antibodies, and rheumatoid factors. Therapies with various DMARDs (disease-modifying anti-rheumatic drugs) and biologicals failed. Upon baricitinib application, clinical activity decreased dramatically, with disappearance of joint pain and morning stiffness and a significant decrease in joint swelling. Low disease activity was reached after 12 months, with complete disappearance of the rheumatoid nodules. In contrast to IL-1 (interleukin-1), IL-6, and TNF (tumor necrosis factor) inhibitors, baricitinib was very successful, probably because it acts as a JAK-1/2 (Janus kinase-1/2) inhibitor in the IFNα/β (interferon α/β) pathway. A relatively higher dose is necessary in children. COPA syndrome represents a novel disorder of intracellular transport. Reviewing the published literature on COPA syndrome, 31 further cases have been described in addition to our patient.
The new building for Faculty 09, »Sprach- und Kulturwissenschaften« (Linguistics and Cultural Studies), on the Westend campus is receiving its finishing touches. The internationally renowned artist collective Raqs Media Collective, founded in New Delhi in 1992, won the »Kunst am Bau« (art in architecture) competition announced by the State of Hesse with its design. The three-part work »All, Humans« will be ceremonially inaugurated on the evening of 2 November 2023 in the foyer of the SKW building. Students of the master's programme »Curatorial Studies« have closely followed its creation over recent months and, in dialogue with the artists and experts, offer insights into and engagement with the filmic installation.
We analyze the impact of decreases in available lending resources on quantitative and qualitative dimensions of firms’ patenting activities. We use the European Banking Authority’s capital exercise to carve out the causal effect of bank lending on firm innovation. To do so, we combine various datasets to derive information on firms’ financials, their patenting behavior, and their relationships with their lenders. Building on this self-generated dataset, we provide support for the “less finance, less innovation” view. At the same time, we show that lower available financial resources lead firms to improve the qualitative dimensions of their patents. Hence, we carve out a “less finance, less but better innovation” pattern.
Irradiating tumors that move with respiration poses a challenge for modern radiotherapy. This thesis first presents the physical, technical, and medical fundamentals in order to ease the reader into this complex subject. It then introduces various techniques for irradiating respiration-moved target volumes and discusses the safety margins required to compensate for errors in the treatment chain when defining the planning target volume.
Within this work, a concept was developed that further reduces the safety margin for moving tumors in radiosurgery with the Cyberknife tumor-tracking system, thereby further increasing the so-called therapeutic window of the treatment. To this end, a 4D-CT and a gating system were introduced into clinical operation. The developed technique is based on the ten individual respiratory phases of the 4D-CT and allows moving organs at risk to be taken into account already during treatment planning. This method was compared with current irradiation techniques by means of a treatment-plan comparison based on ten patient cases. The treatment planning systems from Varian (Eclipse 13.5) and Accuray (Multiplan 4.6) were used to create the plans. In particular, the doses to organs at risk and the volumes of selected isodoses were examined. A clear dependence of the burden on healthy tissue on the irradiation technique used became apparent. This permits the conclusion that a reduction of the safety margin, which depends on the planning and irradiation technique used, goes hand in hand with an enlargement of the therapeutic window. In addition, a low burden on the surrounding healthy tissue keeps the option of further irradiation open.
Subsequently, measurements were carried out with calculated test plans on a measurement phantom modified for this work, on the Varian Clinac DHX and the Cyberknife VSI. The irradiation techniques from the plan comparison were used here in order to compare calculated and actually delivered dose. The phantom simulates the patient's breathing and simultaneously allows verification of the dose distribution with EBT3 films as well as measurements with ionization chambers. For the techniques that actively account for respiration (Synchrony on the Cyberknife and gating on the Varian Clinac), good agreement between measured and calculated dose distributions was found even in the low-dose region. When the motion of the target volume is already considered during treatment planning, the agreement increases further. For techniques that include respiration only in the target-volume definition (ITV concept), both the values measured with ionization chambers and the agreement between calculated and measured dose distributions lie outside the tolerance range.
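Agreement between a calculated and a measured dose distribution within a tolerance is commonly quantified with a gamma analysis. As a minimal illustration (not the evaluation software used in the thesis), a 1D gamma index with assumed 3%/3 mm criteria on a synthetic dose profile can be sketched as:

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_meas, dd=0.03, dta=3.0):
    """1D gamma analysis: for each measured point, take the minimum
    combined dose-difference / distance-to-agreement over all reference
    points. dd is the dose tolerance as a fraction of the maximum
    reference dose, dta the distance tolerance in mm; gamma <= 1 passes."""
    d_norm = dd * dose_ref.max()
    gammas = np.empty_like(dose_meas)
    for i, (xi, di) in enumerate(zip(x, dose_meas)):
        dose_term = (dose_ref - di) / d_norm
        dist_term = (x - xi) / dta
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# synthetic example: a Gaussian dose profile, measured 1 mm off target
x = np.linspace(-20, 20, 201)                      # position in mm
ref = 100 * np.exp(-x**2 / (2 * 5**2))             # calculated dose
meas = 100 * np.exp(-(x - 1.0)**2 / (2 * 5**2))    # measured, shifted
g = gamma_index_1d(x, ref, meas)
pass_rate = (g <= 1).mean()                        # fraction passing
```

A 1 mm shift is well within the assumed 3 mm distance tolerance, so every point passes.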
A further question of this work concerns the targeting accuracy of the Cyberknife tumor-tracking system (Synchrony). Measurements were performed with the XSightLung phantom and different safety margins intended to compensate for tumor motion. This was done both with the cube provided for the phantom, with inserts for EBT3 films, and with a film sandwich made of Flab material to investigate a three-dimensional dose distribution. The film analysis showed that, at least for a phantom with a simple craniocaudal motion, it is not necessary to compensate for the motion of the target volume with an asymmetric safety margin in the direction of motion in order to ensure coverage of the target volume with the desired dose.
This work additionally yielded further valuable insights for clinical routine: investigating tumor motion in free breathing as well as at maximum inspiration and expiration showed that tumor motion in the extreme breathing positions (3-phase CT) sometimes deviates markedly from that in free breathing. This leads to the conclusion that, for irradiation in free breathing, a 4D-CT reflects the tumor motion considerably more realistically than a 3-phase CT, especially since the latter entails a higher dose burden for the patient.
A retrospective investigation of lung tumors also showed that, when calculating treatment plans for tumors in inhomogeneous tissue, the ray-tracing algorithm sometimes greatly overestimates the dose in the target volume. To obtain a realistic dose distribution, the Monte Carlo algorithm should therefore be used, particularly for tumors in the lung.
This thesis deals with the construction and calibration of a neutron detector array for low energies (Low Energy Neutron detector Array, "LENA") at the upcoming R³B setup (Reactions with Relativistic Radioactive Beams) at FAIR (Facility for Antiproton and Ion Research) at GSI in Darmstadt. The detection of low-energy neutrons in the range of 100 keV to 1 MeV is required to study charge-exchange reactions, especially (p,n) reactions in inverse kinematics. Detection in this energy range is extremely difficult, since methods for both thermal and high-energy (100 MeV to 1 GeV) neutrons fail. Besides the construction of the detector, the relevance of the experiment for nuclear astrophysics is illustrated. The theoretical part of this thesis lays the foundations for understanding neutron detection, the operating principle of the LENA detector, and the nuclear reactions detectable with it. Furthermore, a simulation of the detector was carried out with GEANT4 (GEometry And Tracking), a C++-based platform for simulating the interactions of particles with detector material. The results were used to evaluate measurements performed during a beam time in March 2011 at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig. The aim of this work is to determine the efficiency of the detector.
Gridded maps of meteorological variables are needed for the evaluation of weather and climate models and for climate change monitoring. To produce them, values at locations without observing stations must be estimated from point-wise observations. For the interpolation of meteorological observations, deterministic and stochastic methods are often combined. Deterministic methods can account for ancillary information such as elevation, continentality or satellite observations. Stochastic methods such as kriging reproduce observed values at the station locations and also account for spatial variability. In the first two studies of this thesis, a flexible interpolation method for the gridding of locally observed daily extreme temperatures is developed that also provides an optimal estimate of the interpolation uncertainty. In the third study, an observational dataset is created using this interpolation method and then applied to evaluate a climate simulation for Africa.
In the first study, the Regression-Kriging-Kriging (RKK) method is tested for the interpolation of daily minimum and maximum temperatures (Tmin and Tmax) in different regions of Europe. RKK accounts for elevation, a continentality index and the zonal mean temperature, and is applicable in regions of differing station density and climate. The accuracy of RKK is compared to Inverse Distance Weighting, a common deterministic interpolation method, and to Ordinary Kriging, a common stochastic interpolation method. The first step of RKK is to interpolate climatological means using regression kriging, in which multiple linear regression accounts for topographical effects on the temperature field and kriging minimizes the regression error. In the second step, daily deviations from the monthly climatology are interpolated using simple kriging. Owing to the large climatological differences across the investigation area, the interpolation is performed in homogeneous subregions defined according to the Köppen-Geiger climate classification. Cross validation demonstrates the superiority of RKK over the simpler algorithms in terms of accuracy and preservation of spatial variability. The interpolation performance, however, varies strongly across Europe, being considerably higher over Central Europe (highest station density) than over Greenland (few stations along the coastline). This illustrates the strong impact of the station density on the accuracy of the interpolation result.
Satellites provide comprehensive observations of climate variables such as land surface temperature (LST) and cloud cover (CC). However, LST is associated with high uncertainty (standard error ~ 1-2°C), preventing its direct application in meteorology and climatology. The second study investigates the usefulness of LST and CC as predictors for the gridding of daily Tmin and Tmax.
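The regression-kriging core of these methods, a linear trend on physiographic predictors followed by kriging of the residuals, can be sketched on synthetic data. Elevation is used as the sole predictor and an exponential covariance model with assumed sill and range stands in for a fitted variogram; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stations: temperature driven by elevation plus a smooth field
n = 60
coords = rng.uniform(0, 100, size=(n, 2))     # station locations (km)
elev = rng.uniform(0, 2000, size=n)           # elevation (m), the predictor
smooth = np.sin(coords[:, 0] / 15) + np.cos(coords[:, 1] / 20)
temp = 15.0 - 0.0065 * elev + smooth          # lapse rate + spatial signal

# step 1: multiple linear regression on the predictor(s)
X = np.column_stack([np.ones(n), elev])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# step 2: simple kriging of the regression residuals
def exp_cov(h, sill=1.0, rang=30.0):
    """Exponential covariance model (sill and range are assumed)."""
    return sill * np.exp(-h / rang)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = exp_cov(d) + 1e-8 * np.eye(n)             # tiny nugget for stability

def predict(target_xy, target_elev):
    """Regression-kriging prediction at one target location."""
    trend = np.array([1.0, target_elev]) @ beta
    c0 = exp_cov(np.linalg.norm(coords - target_xy, axis=1))
    weights = np.linalg.solve(C, c0)          # simple-kriging weights
    return trend + weights @ resid
```

At a station location the prediction honours the observed value, illustrating the exact-interpolation property of kriging mentioned above.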
The RKK algorithm is compared with similar interpolation methods that apply LST and CC in addition to the predictors used in the RKK algorithm. The investigation is conducted in two regions, Central Europe and the Iberian Peninsula, which differ strongly in average cloud cover (Central Europe is approximately 30% cloud free and the Iberian Peninsula approximately 60% cloud free). RKKLST (in which monthly mean LST is used as an additional predictor) yields no clear improvement over RKK for Central Europe, yet it reduces the interpolation error over the Iberian Peninsula. This finding can be explained by the higher percentage of cloud-free pixels over that region in summer, which enables a more robust determination of monthly mean LST. Adding a regression step for the daily anomalies (using the predictor CC) yields the RKRK method and improves the preservation of spatial variability over the Iberian Peninsula. Moreover, a successive reduction of the station number (from 140 to 10 stations) reveals an increasing superiority of RKKLST and RKRK over RKK in both regions.
The application of a gridded observational dataset for climate monitoring or climate model validation requires knowledge of the uncertainties associated with the dataset. The estimation of the interpolation uncertainty (here the interquartile range is used as the uncertainty measure) is therefore an important issue within the frame of this thesis. By means of cross validation it is shown that the largest uncertainties occur in regions of low station density (e.g. Greenland), in mountainous regions and along coastlines; in these regions model evaluation results should be interpreted carefully. The magnitude of the interpolation error mainly depends on the station density, while the complexity of the terrain has substantially less influence. On average over all regions and investigation days the target precision of the uncertainty estimate is reached; on local scales and for single days, however, it can be clearly over- or underestimated. The application of satellite-derived predictors (LST and CC) yields no noteworthy improvement of the uncertainty estimate.
In the last study, two regional climate simulations for Africa using the ERA-Interim-driven COSMO-CLM (CCLM) model at two different horizontal resolutions (0.22° and 0.44°) are validated. It is assessed whether observed patterns and statistical properties of daily Tmin and Tmax are correctly represented in the model. The ERA-Interim reanalysis and a specially created observational dataset are used as reference; the observational dataset is generated by applying the RKRK algorithm developed in the second study. The investigations show an occasionally large bias in Tmin and Tmax. The hemispheric summers are generally too warm and the temporal variability in temperature is too high, particularly over extratropical Africa. The diurnal temperature range is overestimated by about 2°C in the northern subtropics but underestimated by about 2°C over large parts of the African tropics. CCLM reproduces the observed frequency distribution of daily Tmin and Tmax in all African climate regions, and the extreme values in the lower percentiles (5, 10, 20%) of Tmin are well simulated. The higher percentiles (80, 90, 95%) of Tmax are, however, overestimated by 2-5°C. For both Tmin and Tmax the 0.22° simulation is on average 0.5°C warmer than the 0.44° simulation. Additionally, the higher percentiles are about 1°C warmer for both Tmin and Tmax in the higher-resolution run, while the lower percentiles in both runs match very well. Although the temperature pattern is represented in more detail along the coastlines and in topographically complex regions, the higher-resolution simulation yields no qualitative improvement.
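The percentile-based comparison used in such an evaluation can be illustrated with synthetic data. The numbers below are made up and only mimic the reported pattern of a warm bias combined with overestimated temporal variability:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical daily Tmax: observations vs. a model with a small warm
# bias and too much temporal variability (illustrative values only)
obs = rng.normal(30.0, 3.0, 5000)
mod = rng.normal(30.5, 4.0, 5000)

# bias of each percentile of the model distribution against observations
biases = {q: np.percentile(mod, q) - np.percentile(obs, q)
          for q in (5, 10, 20, 80, 90, 95)}
```

Because the model's spread is inflated, the upper percentiles are overestimated far more strongly than the lower ones, which is the kind of signature diagnosed for the higher Tmax percentiles above.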
To summarize, the choice of the appropriate algorithm mainly depends on the interpolation conditions. In cases where the station density is high across the target region and the predictor space is adequately covered by observing stations, the computationally less demanding RKK algorithm should be preferred. In regions where the station density is low, the more robust RKRK algorithm should be the first choice: due to the strong physical relation of both CC and LST to Tmin and Tmax, the missing information is at least partially compensated for. The estimation of the interpolation uncertainty could be improved by applying a normal score transformation to the data prior to the kriging step, because the kriging assumption that the increments of the variable of interest are second-order stationary can then be approximately met.
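The normal score transformation suggested above can be sketched as follows. This is a rank-based sketch using a standard plotting position, not the exact procedure of the thesis:

```python
import numpy as np
from statistics import NormalDist

def normal_score_transform(values):
    """Map data to standard-normal scores via their ranks, using the
    plotting position p = (rank - 0.5) / n, which stays inside (0, 1)."""
    n = len(values)
    order = np.argsort(values)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    p = (ranks - 0.5) / n
    return np.array([NormalDist().inv_cdf(pi) for pi in p])

rng = np.random.default_rng(1)
skewed = rng.exponential(scale=2.0, size=1000)   # strongly non-Gaussian
z = normal_score_transform(skewed)
# z is approximately standard normal and preserves the ranking, so a
# kriging step on z better satisfies the stationarity assumption
```

Kriging is then performed on the scores, and predictions are mapped back through the empirical quantiles of the original data.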
This paper investigates the potential impact of secondary information on rainfall mapping using ordinary kriging. The secondary information tested is a natural-area indicator, a combination of topographic features and weather conditions. Cross validation shows that the secondary information only marginally improves the final mapping, indicating that a one-day accumulation time is possibly too short.
This study presents a method for adjusting long-term climate data records (CDRs) for integrated use with near-real-time data, using the example of surface incoming solar irradiance (SIS). Recently, a 23-year (1983–2005) continuous SIS CDR has been generated based on the visible channel (0.45–1 μm) of the MVIRI radiometers onboard the geostationary Meteosat First Generation platform. The CDR is available from the EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF). Here, it is assessed whether a homogeneous extension of the SIS CDR to the present is possible with operationally generated surface radiation data provided by CM SAF using the SEVIRI and GERB instruments onboard the Meteosat Second Generation satellites. Three extended CM SAF SIS CDR versions, consisting of MVIRI-derived SIS (1983–2005) and three different SIS products derived from the SEVIRI and GERB instruments onboard the MSG satellites (2006 onwards), were tested. A procedure to detect shift inhomogeneities in the extended data record (1983–present) was applied that combines the Standard Normal Homogeneity Test (SNHT) and a penalized maximal T-test with visual inspection. Shifts were detected by comparing the SIS time series with the mean of the ground stations, taking statistical significance into account. Several stations of the Baseline Surface Radiation Network (BSRN) and about 50 stations of the Global Energy Balance Archive (GEBA) over Europe were used as the ground-based reference. The analysis indicates several breaks in the data record between 1987 and 1994, probably due to artefacts in the raw data and instrument failures. After 2005 the MVIRI radiometer was replaced by the narrow-band SEVIRI and the broadband GERB radiometers, and a new retrieval algorithm was applied. This poses significant challenges for the homogenisation across the satellite generations.
Homogenisation is performed by applying a mean-shift correction to each segment between two break points, with the correction depending on the size of that segment's shift relative to the last segment (2006–present). Corrections are applied for the most significant breaks that can be related to satellite changes. This study focuses on the European region, but the methods can be generalized to other regions. To account for the seasonal dependence of the mean shifts, the correction was performed independently for each calendar month. In comparison to the ground-based reference, the homogenised data record shows an improvement over the original data record in terms of anomaly correlation and bias. In general, the method can also be applied to adjust satellite datasets of other variables in order to bridge the gap between CDRs and near-real-time data.
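A minimal sketch of the break detection and correction described here combines an SNHT statistic with a mean-shift adjustment toward the most recent segment. This is illustrative only; the study additionally uses a penalized maximal T-test, ground-station references and visual inspection:

```python
import numpy as np

def snht(series):
    """Standard Normal Homogeneity Test statistic: for each candidate
    break position k, T(k) = k*m1**2 + (n-k)*m2**2 on the standardized
    series; returns the position maximizing T and the maximum value."""
    z = (series - series.mean()) / series.std()
    n = len(z)
    t = np.array([k * z[:k].mean()**2 + (n - k) * z[k:].mean()**2
                  for k in range(1, n)])
    k_best = int(np.argmax(t)) + 1
    return k_best, float(t.max())

def mean_shift_adjust(series, k):
    """Adjust the segment before the break so its mean matches the
    segment after it (the most recent data are kept unchanged)."""
    adjusted = series.copy()
    adjusted[:k] += series[k:].mean() - series[:k].mean()
    return adjusted

# synthetic anomaly series with an artificial break of +5 at index 120
rng = np.random.default_rng(2)
anom = rng.normal(0.0, 1.0, 240)
anom[120:] += 5.0
k, tmax = snht(anom)
fixed = mean_shift_adjust(anom, k)
```

In practice the series compared would be the difference between satellite and station means, and, as described above, the adjustment would be computed separately for each calendar month.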
Despite various treatment options, the prognosis of malignant glioma remains very poor. Although a standard therapy for the primary setting has been established since 2005, a uniform treatment for the recurrent setting is still lacking. The aim of this retrospective data collection was to compare the prognostic value of clinico-pathological parameters and to develop a consensus recommendation. In addition, part of these data was collected and used within a multicentre retrospective analysis of the DKTK to validate the prognostic RRRS developed in that context.
The basis for this was the data stored in the internal database "Orbis" and in archived patient records of patients treated between 07/2009 and 02/2017 at the Department of Radiotherapy of the University Hospital Frankfurt am Main. These were patients with histologically confirmed WHO grade IV glioblastoma at the time of re-irradiation (ReRT). The median total dose was 28 Gy (20-60 Gy), the median fraction dose 3.5 Gy/day (1.8-4 Gy).
A total of 102 patients were included; as primary diagnosis, two patients had a low-grade glioma WHO grade I/II, six an astrocytoma WHO grade III, and 96 a glioblastoma WHO grade IV. The mean age was 55 years and the mean time between initial and repeat RT was 21.07 months. At recurrence, 40 patients underwent surgical intervention, a total resection in 32 cases and a subtotal resection in eight. Furthermore, 52 patients received chemotherapy with temozolomide, 20 with CCNU, 17 with Avastin, and five and eight patients, respectively, received another or no chemotherapeutic agent.
The median overall survival (mOS) after the initial diagnosis of malignant glioma was 42.64 months, the progression-free survival 14.77 months. The mOS after ReRT was 11.8 months, and the median time to renewed progression was 4.25 months.
Regarding the primary diagnosis, initial histology (p = 0.002), age (p = 0.016) and MGMT promoter status (p = 0.001) were identified as statistically significant factors: younger patients with lower-grade histology and hypermethylation of the MGMT promoter had a better prognosis. KPS (p < 0.001), the time between first and second irradiation (p = 0.003), MGMT promoter status (p = 0.025) and tumor growth (p = 0.024) were determinant factors for outcome after ReRT. Moreover, a total radiation dose of more than 28.90 Gy was statistically significantly (p = 0.042) associated with longer OS after repeat RT, and a parietal or temporal lobe location (p = 0.009) with longer progression-free survival. Regarding treatment modalities, none proved superior to the others.
Re-validation of these data with the RRRS likewise yielded a statistically significant result with respect to the mean survival time between the individual prognostic groups from the time of ReRT.
The results of this work show that there is still no optimal therapy for patients with recurrent glioblastoma and that further research is needed on modifying existing treatment options and developing new therapies. They also underline the importance and value of specific prognostic factors and the need to include important new molecular markers according to the 2016 WHO classification in future studies.
"More light!", in order to capture the "soul" of nature: this is what connected the Impressionists with the preceding generation of artists around Camille Corot. Claude Monet and his colleagues sought out forests and parks to capture light, atmosphere and color in their painting. In doing so, they responded rather intuitively to the optical laws that were being intensively researched at the time.
The increasing demand for high-value ω-3 fatty acids, owing to their beneficial role in human health, explains the great need for alternative ways of producing ω-3 fatty acids. The oleaginous alga Phaeodactylum tricornutum is a prominent candidate and has been investigated as a biofactory for ω-3 fatty acids, e.g. for the synthesis of eicosapentaenoic acid (EPA). In general, the growth and lipid content of diatoms can be enhanced by genetic engineering or are influenced by environmental factors such as nutrients, light or temperature.
In this study, the potential of P. tricornutum as a biofactory was improved by heterologously expressing the hexose uptake protein 1 (HUP1) from the chlorophyte Chlorella kessleri.
An in situ localization study revealed that only the full-length HUP1 protein fused to eGFP was correctly targeted to the plasma membrane, whereas the N-terminal sequence of the protein is sufficient only to enter the ER. Protein and gene expression data showed that the gene-promoter combination determined the expression level of HUP1; only cells expressing the protein under the light-inducible fcpA promoter showed significant expression. In these mutants, efficient glucose uptake was detectable under mixotrophic growth conditions, low light intensities and low glucose concentrations, leading to an increased cell dry weight.
In a second approach, the growth and lipid content of wild-type cells were analyzed in a small 1 L photobioreactor. Here, a commercial F/2 medium, a common culture medium (ASP) and modified versions thereof were compared. Neither the supplementation of trace elements nor elevated salt concentrations in the media had a significant impact on the growth and lipid content of P. tricornutum cells. In a modified version of the ASP medium with adapted nitrate and phosphate concentrations, a constantly high biomass productivity was achieved, yielding the highest value of 82 mg l-1 d-1 during the first three days, even though the light intensity was reduced by 40%. The differences in biomass productivity as well as in lipid content and lipid composition underline the importance of the choice of culture medium and harvest time for enhanced growth and EPA yields in P. tricornutum.
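A volumetric biomass productivity of the kind reported above (mg l-1 d-1) is simply the change in cell dry weight concentration per elapsed day. The sample values below are hypothetical and merely reproduce the reported magnitude:

```python
# hypothetical cell dry weight measurements (mg per litre) on days 0..3;
# these are illustrative numbers, not the study's data
days = [0, 1, 2, 3]
cdw_mg_per_l = [120.0, 205.0, 288.0, 366.0]

# productivity over the first three days: change in CDW per elapsed day
productivity = (cdw_mg_per_l[-1] - cdw_mg_per_l[0]) / (days[-1] - days[0])
print(productivity)  # 82.0 mg per litre per day
```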
The transcription of many genes is regulated via the degree of histone acetylation. Accordingly, the discovery of histone deacetylase inhibitors considerably expanded the understanding of transcriptional repressors and their role in pathogenesis. At present, the modifications of histone deacetylases (HDACs) and the biological roles of the various HDAC isoenzymes are the focus of intensive research.
In the present work, it was demonstrated in various cell lines and in murine primary material that the well-tolerated antiepileptic drug valproic acid (VPA) is a potent HDAC inhibitor. This is evident from the fact that VPA abolishes HDAC-mediated transcriptional repression in vivo and leads to the accumulation of hyperacetylated histones. In vitro enzyme assays indicate that VPA itself, and not a hypothetical metabolite, inhibits the histone deacetylases. Furthermore, binding and competition studies established that VPA interacts with the catalytic center of the HDACs.
Further analyses showed that VPA preferentially inhibits class I HDACs. Through this feature of increased specificity combined with good bioavailability, VPA defines a new class of HDAC inhibitors. This provides clues to the structural requirements an HDAC inhibitor must meet in order to act more specifically and less toxically than conventional chemotherapeutics. In addition, the newly discovered pharmacological activity of VPA on HDACs yielded insights into additional therapeutic applications of this established drug. VPA is already being administered to cancer patients in clinical trials.
HDAC inhibitors are regarded as potential drugs for the therapy of malignant neoplasias. There is therefore great interest in the molecular mechanisms by which substances of this class inhibit the growth of transformed cells in vitro and in vivo. In the human melanoma cell lines SK-Mel-37 and Mz-Mel-19, clinically relevant VPA doses cause a time- and dose-dependent accumulation of cell-cycle inhibitors and hyperacetylated histones, morphological changes and a reduced proliferation rate. The reduced proliferation is accompanied by an altered cell-cycle profile and by apoptosis involving both the extrinsic and the intrinsic caspase cascade. This manifests in the cleavage of caspases 3, 8 and 9, damage to the mitochondria, apoptotic PARP cleavage, degradation of the genomic DNA and inactivation of the GFP protein.
These analyses in melanoma cells suggest that the largely selective action of VPA on class I HDACs is the mechanism by which this substance inhibits the growth of certain tumor cells. Gene expression analyses furthermore allowed new models of the influence of VPA on solid tumors to be postulated. In addition, it was found that the expression and inducibility of the cell-cycle regulators p21WAF/CIP1 and p27Kip1 and of the latent cytoplasmic transcription factor Stat1 are biomarkers for the sensitivity of melanoma cells to HDAC inhibitors. Consistent with this, the proapoptotic effect of VPA is markedly enhanced by the cytokine interferon α and the S-phase inhibitor hydroxyurea. These results argue for the use of VPA in animal and clinical studies.
Because of the key role of HDACs in physiological and aberrant gene expression, it is important to understand the mechanisms of their regulation. In the present work, it was shown in numerous cultured cell lines and in a mouse model that therapeutically applicable VPA doses, in addition to inhibiting enzymatic activity, also lead to an isoenzyme-specific reduction of the class I histone deacetylase HDAC2. Increased poly-ubiquitination and proteasomal degradation were identified as the cause, while the involvement of several proteases and an altered synthesis or processing of the HDAC2 mRNA were excluded as mechanisms.
Expression analyses identified the E2 ubiquitin conjugase Ubc8 as a gene induced by HDAC inhibitors. Using transient overexpression ("gain of function") and siRNA experiments ("loss of function"), this gene was recognized as the limiting factor of HDAC2 turnover in vivo. It was further shown that the E3 ubiquitin ligase RLIM specifically interacts with HDAC2; the expression of RLIM and its enzymatic function influence the HDAC2 concentration in vivo. Here VPA can be clearly distinguished from the HDAC inhibitor trichostatin A (TSA), which inhibits a broad spectrum of HDACs and induces Ubc8 but at the same time leads to proteasome-mediated degradation of the RLIM protein. Analyses with overexpressed RLIM showed that, owing to this mechanism, TSA is unable to induce the degradation of HDAC2. Thus, within this work the ubiquitination machinery for HDAC2 has been characterized, revealing new aspects of the interplay between the ubiquitin-proteasome system and transcriptional repression.
Isoenzymspezifische HDAC-Inhibitoren können zur Aufklärung der Funktion einzelner Histondeacetylasen beitragen, insbesondere wenn Knock-Out-Studien zu aufwendig oder aufgrund embryonaler Letalität nicht durchführbar sind. Die Wichtigkeit dieser Analysen wird gerade bei HDAC2 deutlich, da diese Histondeacetylase in vielen soliden und hämatologischen Tumoren überexprimiert ist, und ihre Deregulation möglicherweise zur Krebsentstehung beiträgt. Die in der vorliegenden Arbeit identifizierte Regulation dieses HDAC-Isoenzyms könnte Hinweise auf den Ablauf eines malignen Transformationsprozesses geben. Darüber hinaus zeigt der nachgewiesene Regulationsmechanismus Erfordernisse und potentielle Zielstrukturen einer pharmakologischen Intervention auf. Schließlich könnten die Selektivität von VPA für Klasse I HDACs zusammen mit der Spezifität für HDAC2 die Gründe für die geringen Nebenwirkungen der VPA-Behandlung bei gleichzeitigem Auftreten antitumoraler Effekte sein.
In this dissertation, the Late Pleistocene and Holocene landscape development around the Bronze Age settlement of Tell Chuera, situated in the valley of Wadi Chuera in northern Syria, was investigated. By combining high-precision surveying, satellite image analyses and investigations of the wadi sediments, several phases of fluvial development could be worked out and placed within a chronostratigraphic framework. Through a coarse-sandy to gravelly braided river system, thick gravel layers were deposited in the study area until at least the Upper Pleistocene. Loess-like sediments deposited within a fossil channel, which partially overlie the gravel sequences, could be placed in the Upper Pleistocene by relative chronology and presumably document a dry phase. The pelitic flood sediments deposited with a sharp unconformity over the gravels demonstrate an abrupt change in river dynamics, from an originally braided river to a meandering river with overbank sedimentation. IRSL dating places the onset of flood sediment deposition in the last glacial. Most of the sediments, however, were deposited in the early and middle Holocene (ca. 9 and 5 ka BP), so that by the beginning of the main settlement phase at Tell Chuera (3rd millennium BC) the floodplain surface had almost reached its present level. Until then, extensive floods led to overbank sedimentation in the floodplain. A renewed change in fluvial geomorphodynamics and sedimentation conditions is shown by the fact that over the last ca. 5000 years no appreciable sedimentation has occurred on the floodplain. Instead, a lateral migration of the wadi's meanders, continuing to the present day, has reworked parts of the gravels and flood sediments.
Traces of settlement along the wadi course point to a periodicity in the discharge of Wadi Chuera between about 4.7 and 4.2 ka BP. The theory that colluvia from the surrounding heights accumulated more strongly in the wadi valley as a direct consequence of rising settlement pressure during the main settlement phase could be refuted. Rather, the supposed colluvia are fluvially reworked flood sediments. Anthropogenic interventions in the landscape can be demonstrated in the form of calcrete quarries and a complex network of tracks.
This chapter analyzes the risk and return characteristics of investments in artists from the Middle East and Northern Africa (MENA) region over the sample period 2000 to 2012. Using hedonic regression modeling, we create an annual index based on 3,544 paintings created by 663 MENA artists. Our empirical results indicate that investing in such a hypothetical index provides strong financial returns. While the results show exponential growth in sales since 2006, the geometric annual return of the MENA art index is a stable 13.9 percent over the whole period. We conclude that investing in MENA paintings would have been profitable, but also note that we examined the performance of an emerging art market that has so far seen only an upward trend without any correction.
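As background for the method named above, a minimal sketch of a hedonic index construction: regress log prices on year dummies plus hedonic characteristics, then exponentiate the year coefficients. The data and the single characteristic below are invented for illustration; this is not the study's specification.

```python
import numpy as np

def hedonic_index(log_prices, years, X):
    """Annual hedonic price index: OLS of log prices on year dummies plus
    hedonic characteristics; index level is exp(year coefficient),
    normalised to 100 in the base (first) year."""
    yrs = np.unique(years)
    # Year dummies; the first year serves as the base.
    D = (years[:, None] == yrs[None, 1:]).astype(float)
    Z = np.column_stack([np.ones(len(years)), D, X])
    beta, *_ = np.linalg.lstsq(Z, log_prices, rcond=None)
    deltas = np.concatenate([[0.0], beta[1:len(yrs)]])
    return {int(y): 100.0 * float(np.exp(d)) for y, d in zip(yrs, deltas)}
```

With noiseless toy data the year effect is recovered exactly; with real auction data the characteristics matrix would carry artist, size and medium dummies.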
After nearly two decades of US leadership during the 1980s and 1990s, are Europe’s venture capital (VC) markets in the 2000s finally catching up in the provision of financing and successful exits, or is the performance gap as wide as ever? Are we amid an overall VC performance slump with no encouraging news? We attempt to answer these questions by tracking over 40,000 VC-backed firms from six industries in 13 European countries and the US between 1985 and 2009, determining the type of exit, if any, that each particular firm’s investors chose for the venture.
Direct financing of consumer credit by individual investors or non-bank institutions through marketplace lending is a relatively new phenomenon in financial markets. The emergence of online platforms has made this type of financial intermediation widely available. This paper analyzes the performance of marketplace lending using proprietary cash flow data for each individual loan from the largest platform, Lending Club. While individual loan characteristics would matter for amateur investors holding a few loans, sophisticated lenders, including institutional investors, usually form broad portfolios to benefit from diversification. We find high risk-adjusted performance of approximately 40 basis points per month for these basic loan portfolios. This abnormal performance indicates that Lending Club, and similar marketplace lenders, are likely to attract capital to finance a growing share of the consumer credit market. In the absence of a competitive response from traditional credit providers, these loans lower costs for the ultimate borrowers and increase returns for the ultimate lenders.
We analyze the performance of marketplace lending using loan cash flow data from the largest platform, Lending Club. We find substantial risk-adjusted performance of about 40 basis points per month for the entire loan portfolio. Other loan portfolios grouped by risk category have similar risk-adjusted performance. We show that characteristics of the local bank sector for each loan, such as concentration of deposits and the presence of national banks, are related to the performance of loans. Thus, marketplace lending has the potential to finance a growing share of the consumer credit market in the absence of a competitive response from the traditional incumbents.
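The roughly 40 basis points per month of risk-adjusted performance reported in these two abstracts is an alpha, i.e. the intercept of a regression of portfolio excess returns on factor excess returns. A minimal sketch with made-up return series, not Lending Club data:

```python
import numpy as np

def monthly_alpha(portfolio_returns, factor_returns, risk_free):
    """Risk-adjusted performance (alpha): intercept of an OLS regression
    of portfolio excess returns on factor excess returns."""
    y = np.asarray(portfolio_returns) - risk_free
    F = np.asarray(factor_returns).reshape(len(y), -1)
    X = np.column_stack([np.ones(len(y)), F])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[0])  # alpha per period, e.g. 0.004 = 40 bp per month
```

In practice the factor set and the standard errors of the intercept matter as much as the point estimate; this sketch shows only the decomposition itself.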
The record-breaking prices observed in the art market over the last three years raise the question of whether we are experiencing a speculative bubble. Given the difficulty of determining the fundamental value of artworks, we apply a right-tailed unit root test with forward recursive regressions (SADF test) to detect explosive behavior directly in the time series of four different art market segments (“Impressionist and Modern”, “Post-war and Contemporary”, “American”, and “Latin American”) for the period from 1970 to 2013. We identify two historical speculative bubbles and find an explosive movement in today’s “Post-war and Contemporary” and “American” fine art market segments.
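A simplified sketch of the SADF idea: take the supremum of ADF t-statistics over forward-expanding sample windows. The version below omits lag augmentation and critical values, so it illustrates the recursion rather than replacing the full test.

```python
import numpy as np

def adf_tstat(y):
    """t-statistic of rho in the ADF regression dy_t = a + rho*y_{t-1} + e_t
    (no lag augmentation, for illustration only)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = (e @ e) / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return float(beta[1] / np.sqrt(cov[1, 1]))

def sadf(y, r0=0.3):
    """Sup-ADF statistic: supremum of ADF statistics over forward-expanding
    windows that all start at the first observation and end at fraction
    r0..1 of the sample (right-tailed: large values signal explosiveness)."""
    n = len(y)
    k0 = int(np.floor(r0 * n))
    return max(adf_tstat(y[:k]) for k in range(k0, n + 1))
```

An explosive series (autoregressive root above one) produces a large positive statistic, while a random walk does not; the published test additionally compares the statistic to simulated right-tail critical values.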
Euro crash risk
(2015)
This paper sets the background for the Special Issue of the Journal of Empirical Finance on the European Sovereign Debt Crisis. It identifies the channel through which risks in the financial industry leaked into the public sector. It discusses the role of the bank rescues in igniting the sovereign debt crisis and reviews approaches to detect early warning signals to anticipate the buildup of crises. It concludes with a discussion of potential implications of sovereign distress for financial markets.
We investigate the effect of overreaction in the fine art market. Using a unique sample of auction prices of modern prints, we define an overvalued (undervalued) print as one that was bought for a price above (below) its high (low) auction pricing estimate. Based on the overreaction hypothesis, we predict that overvalued (undervalued) prints generate a negative (positive) excess return at a subsequent sale. Our empirical findings confirm our expectations. We report that prints bought for a price 10 percent above (below) their high (low) pricing estimate generate a negative (positive) excess return of 12 percent (17 percent) after controlling for the general price movement in the prints market. The price correction for overvalued (undervalued) prints is more pronounced during recessions (expansions).
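The classification and the index-adjusted excess-return measure described above can be sketched as follows; all numbers in the test are invented for illustration:

```python
def classify_print(hammer_price, low_estimate, high_estimate):
    """Label a purchase relative to the auction house estimate band."""
    if hammer_price > high_estimate:
        return "overvalued"
    if hammer_price < low_estimate:
        return "undervalued"
    return "fairly valued"

def excess_return(buy_price, sell_price, index_at_buy, index_at_sell):
    """Return between two sales of the same print, minus the general
    movement of the prints-market index over the same interval."""
    raw = sell_price / buy_price - 1.0
    market = index_at_sell / index_at_buy - 1.0
    return raw - market
```

Under the overreaction hypothesis, prints classified as overvalued at purchase should show negative excess returns at resale, and vice versa.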
This paper investigates the long-term impact of news media sentiment on financial market returns and volatility. We hypothesize that the way the media formulate and present news to the public produces different perceptions and thus induces different investor behavior. To analyze such framing effects we distinguish between optimistic and pessimistic news frames. We construct a monthly media sentiment indicator by taking the ratio of the number of newspaper articles that contain predetermined negative words to the number of newspaper articles that contain predetermined positive words in the headline and/or the lead paragraph. Our results indicate that pessimistic news media sentiment is positively related to global market volatility and negatively related to global market returns 12 to 24 months in advance. We show that our media sentiment indicator closely reflects the financial market crises and pricing bubbles of the past 20 years.
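The indicator construction described above, a ratio of article counts, can be sketched as follows; the word lists here are illustrative stand-ins, not the study's dictionaries:

```python
# Illustrative word lists only; the study uses predetermined dictionaries.
NEGATIVE = {"crisis", "crash", "recession", "slump"}
POSITIVE = {"rally", "growth", "boom", "recovery"}

def pessimism_ratio(articles):
    """Monthly media sentiment indicator: number of articles containing a
    predetermined negative word divided by the number containing a
    predetermined positive word (here applied to headline text)."""
    neg = sum(1 for a in articles if NEGATIVE & set(a.lower().split()))
    pos = sum(1 for a in articles if POSITIVE & set(a.lower().split()))
    return neg / pos if pos else float("inf")
```

A ratio above one marks a pessimistically framed month; in the paper this monthly series is then related to returns and volatility 12 to 24 months ahead.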
This study examines the recent literature on the expectations, beliefs and perceptions of investors who incorporate environmental, social and governance (ESG) considerations in investment decisions with the aim of generating superior performance while also making a societal impact. Through the lens of equilibrium models of agents with heterogeneous tastes for ESG investments, green assets are expected to generate lower returns in the long run than their non-ESG counterparts. In the short run, however, ESG investment can outperform non-ESG investment through various channels. Empirically, results on ESG outperformance are mixed. We find consensus in the literature that some investors have an ESG preference and that their actions can generate positive social impact. The shift towards more sustainable policies in firms is motivated by the increased market values and the lower cost of capital of green firms, driven by investors’ choices.
We examine whether the uncertainty related to environmental, social, and governance (ESG) regulation developments is reflected in asset prices. We proxy the sensitivity of firms to ESG regulation uncertainty by the disparity across the components of their ESG ratings. Firms with high ESG disparity have a higher option-implied cost of protection against downside tail risk. The impact of the misalignment across the different dimensions of the ESG score is distinct from that of ESG score level itself. Aggregate downside risk bears a negative price for firms with low ESG disparity.
We study the relevance of signaling and marketing as explanations for the discount control mechanisms that a closed-end fund (CEF) may choose to adopt in its prospectus. These policies are designed to narrow the potential gap between share price and net asset value, measured by the fund’s discount. The two most common discount control mechanisms are explicit discretion to repurchase shares based on the magnitude of the fund discount and mandatory continuation votes that give shareholders the opportunity to liquidate the fund. We find very limited evidence that a discount control mechanism serves as a costly signal of information. Funds with mandatory voting are not more likely to delist than other CEFs, either in general or whenever the fund discount is large. Similarly, funds that explicitly discuss share repurchases as a potential response do not subsequently buy back shares more often when discounts do increase. Instead, the existence of these policies is more consistent with marketing explanations, because the policies are associated with an increased probability of issuing more equity in subsequent periods.
The discount control mechanisms that closed-end funds often choose to adopt before IPO are supposedly implemented to narrow the difference between share price and net asset value. We find evidence that non-discretionary discount control mechanisms such as mandatory continuation votes serve as costly signals of information that reveal higher fund quality to investors. Rents of the skill signaled through the announcement of such policies accrue to managers rather than investors, as differences in skill are revealed through growing assets under management rather than risk-adjusted performance.
Venture capital (VC) funds backed by large multi-fund families tend to perform substantially better due to cross-fund cash flows (CFCFs), a liquidity support mechanism provided by matching distributions and capital calls within a VC fund family. The dynamics of this mechanism coincide with the sensitivity of different-stage projects to market liquidity conditions. We find that early-stage funds demand relatively more intra-family CFCFs than later-stage funds during liquidity stress periods. We show that the liquidity improvement based on the timing of CFCF allocation reflects how fund families arrange internal liquidity provision and explains a large part of their outperformance.
This paper provides a review of the development of the market for non-fungible tokens (NFTs), with a particular focus on its pricing determinants, its current applications and future opportunities. We investigate the current state of the NFT markets and highlight the perception and expectations of investors towards these products. We summarize and compare the financial and econometric models that have been used in the literature for the pricing of non-fungible tokens, with a special focus on their predictive performance. Our intention is to design a framework that helps explain the price formation of NFTs. We further aim to shed light on the value-creating determinants of NFTs in order to better understand investors’ behavior on the blockchain.
Bartonellae are facultative intracellular alpha-proteobacteria often transmitted by arthropods. Ixodes ricinus is the most important vector for arthropod-borne pathogens in Europe. However, its vector competence for Bartonella spp. is still unclear. This study aimed to experimentally compare its vector competence for three Bartonella species: B. henselae, B. grahamii, and B. schoenbuchensis. A total of 1333 ticks (1021 nymphs and 312 adults) were separated into four groups, one for each pathogen and a negative control group. Ticks were fed artificially with bovine blood spiked with the respective Bartonella species. DNA was extracted from selected ticks to verify Bartonella infection by PCR. DNA of Bartonella spp. was detected in 34% of nymphs and females after feeding. The best engorgement results were obtained by ticks fed with blood spiked with B. henselae (65.3%) and B. schoenbuchensis (61.6%). Significantly more nymphs fed on infected blood (37.3%) molted into adults compared to the control group (11.4%). Bartonella DNA was found in 22% of eggs laid by previously infected females and in 8.6% of adults molted from infected nymphs. This transovarial and transstadial transmission of bartonellae suggests that I. ricinus could be a potential vector for these three bacteria.
Correction of the distortions caused by an inhomogeneous magnetic field in a tracking drift chamber
(1995)
Attention-deficit/hyperactivity disorder (ADHD) is often accompanied by problems in social behaviour, which are sometimes similar to some symptoms of autism-spectrum disorders (ASD). However, the neuronal mechanisms of ASD-like deficits in ADHD have rarely been studied. The processing of biological motion, recently discussed as a marker of social cognition, has been found to be disrupted in ASD in several studies. In the present study, we therefore tested whether biological motion processing is disrupted in ADHD. We used 64-channel EEG and spatio-temporal source analysis to assess event-related potentials associated with human motion processing in 21 children and adolescents with ADHD and 21 matched typically developing controls. On the behavioural level, all subjects were able to differentiate between human and scrambled motion. However, in response to both scrambled and biological motion, the N200 amplitude was decreased in subjects with ADHD. A spatio-temporal dipole analysis revealed a human-motion-specific activation in occipital-temporal regions, with a reduced and more diffuse activation in ADHD subjects. These results point towards neuronally determined alterations in the processing of biological motion in ADHD.
The electron-capture process was studied for Xe54+ colliding with H2 molecules at the internal gas target of the Experimental Storage Ring (ESR) at GSI, Darmstadt. Cross-section values for electron capture into excited projectile states were deduced from the observed emission cross section of Lyman radiation, being emitted by the hydrogenlike ions subsequent to the capture of a target electron. The ion beam energy range was varied between 5.5 and 30.9 MeV/u by applying the deceleration mode of the ESR. Thus, electron-capture data were recorded at the intermediate and, in particular, the low-collision-energy regime, well below the beam energy necessary to produce bare xenon ions. The obtained data are found to be in reasonable qualitative agreement with theoretical approaches, while a commonly applied empirical formula significantly overestimates the experimental findings.
We suggest a new method to compute the spectrum and wave functions of excited states. We construct a stochastic basis of Bargmann link states, drawn from a physical probability density distribution, and compute transition amplitudes between stochastic basis states. From this transition matrix we extract wave functions and the energy spectrum. We apply the method to U(1) lattice gauge theory in 2+1 dimensions. As a test we compute the energy spectrum, wave functions and thermodynamical functions of the electric Hamiltonian and compare them with analytical results. We find excellent agreement. We observe scaling of energies and wave functions in the variable of time. We also present first results on a small lattice for the full Hamiltonian including the magnetic term.
Hepatic lipid deposition and inflammation represent risk factors for hepatocellular carcinoma (HCC). The mRNA-binding protein tristetraprolin (TTP, gene name ZFP36) has been suggested as a tumor suppressor in several malignancies, but it increases insulin resistance. The aim of this study was to elucidate the role of TTP in hepatocarcinogenesis and HCC progression. Employing liver-specific TTP-knockout (lsTtp-KO) mice in the diethylnitrosamine (DEN) hepatocarcinogenesis model, we observed a significantly reduced tumor burden compared to wild-type animals. Upon short-term DEN treatment, modelling early inflammatory processes in hepatocarcinogenesis, lsTtp-KO mice exhibited a reduced monocyte/macrophage ratio as compared to wild-type mice. While short-term DEN strongly induced an abundance of saturated and poly-unsaturated hepatic fatty acids, lsTtp-KO mice did not show these changes. These findings suggested anti-carcinogenic actions of TTP deletion due to effects on inflammation and metabolism. Interestingly, though, investigating effects of TTP on different hallmarks of cancer suggested tumor-suppressing actions: TTP inhibited proliferation, attenuated migration, and slightly increased chemosensitivity. In line with a tumor-suppressing activity, we observed a reduced expression of several oncogenes in TTP-overexpressing cells. Accordingly, ZFP36 expression was downregulated in tumor tissues in three large human data sets. Taken together, this study suggests that hepatocytic TTP promotes hepatocarcinogenesis, while it shows tumor-suppressive actions during hepatic tumor progression.
In this paper, we developed a method to extract item-level response times from log data that are available in computer-based assessments (CBA) and in paper-based assessments (PBA) with digital pens. Based on response times that were extracted using only time differences between responses, we used the bivariate generalized linear IRT model framework (B-GLIRT, [1]) to investigate response times as indicators of response processes. A parameterization that includes an interaction between the latent speed factor and the latent ability factor in the cross-relation function was found to fit the data best in both CBA and PBA. Data were collected with a within-subject design in a national add-on study to PISA 2012 administering two clusters of PISA 2009 reading units. After investigating the invariance of the measurement models for ability and speed between boys and girls, we found the expected gender effect in reading ability to coincide with a gender effect in speed in CBA. Taking this result as an indication of the validity of the time measures extracted from time differences between responses, we analyzed the PBA data and found the same gender effects for ability and speed. Analyzing the PBA and CBA data together, we identified the mode effect in ability as the latent difference between reading measured in CBA and in PBA. Similar to the gender effect, the mode effect in ability was observed together with a difference in latent speed between modes. However, while the relationship between speed and ability is identical for boys and girls, we found hints of mode differences in the estimated parameters of the cross-relation function used in the B-GLIRT model.
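The extraction step, taking an item's response time to be the time difference between consecutive responses in the log, can be sketched as follows; the log format here is a hypothetical simplification of real CBA or digital-pen logs:

```python
def response_times(event_log):
    """Item-level response times from an ordered log of
    (item_id, timestamp) pairs, computed purely from time differences
    between consecutive responses. The first item gets no response time,
    since no preceding response timestamp is available."""
    times = {}
    for (_, t0), (item, t1) in zip(event_log, event_log[1:]):
        times[item] = t1 - t0
    return times
```

Real logs additionally require handling revisits, omitted items and session breaks, which this sketch ignores.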
Intrinsic motivation for honesty is perceived as an important determinant of large and persistent variation in cheating behavior. However, little is known about its actual role, owing to the challenges of obtaining precise measures of motivation for honesty as well as field outcomes on cheating. We fill these gaps using the unique setting of informal milk markets in India. A novel behavioral experiment, which combines a standard die-roll task with Bluetooth technology, is used to measure milkmen's motivation for honesty at both the extensive and the intensive margin. We then buy milk from the same milkmen and show that cheating in the field, measured by the amount of water added to milk, widens significantly with a milkman’s degree of dishonesty. Additional analyses show that the conventional binary measure of motivation for honesty suffers from measurement error, resulting in underestimation of this association.
Interpretation bias and dysfunctional social assumptions are proposed to play a pivotal role in the development and maintenance of social phobia (SP), especially in youth. In this study, we aimed to investigate disorder-specific implicit assumptions of rejection and implicit interpretation bias in youth with severe, chronic SP and in healthy controls. Twenty-seven youth with SP in inpatient/day-care treatment (M age = 15.6 years, 74% female) and 24 healthy controls (M age = 15.7 years, 54% female) were included. The Implicit Association Test (IAT) and the Affect Misattribution Procedure (AMP) were completed to assess implicit assumptions and interpretation bias related to the processing of social and affective stimuli. No group differences were observed for the IAT when controlling for depressive symptoms in the analyses. However, group differences were found regarding interpretation bias (p = .017, ηp² = .137). Correlations between implicit scores and explicit questionnaire results were medium to large in the SP group (r = |.28| to |.54|, all p ≤ .05), but lower in the control group (r = |.04| to |.46|, all p ≤ .05). Our results confirm the finding of an interpretation bias in youth with SP, especially regarding the implicit processing of faces, whereas implicit dysfunctional social assumptions of being rejected do not seem to be specific to SP. Future research should investigate the causal relationship between assumptions/interpretation bias and SP.
Lecture given at the symposium of the Frankfurt am Main University Library, held in cooperation with the Frankfurt Book Fair 2011, "Economy and Acceptance of Open Access Strategies", on 14 October 2011.
Motor imagery is conceptualized as an internal simulation that uses motor-related parts of the brain as its substrate. Many studies have investigated this sharing of common neural resources between the two modalities of motor imagery and motor execution. They have shown overlapping but not identical activation patterns that thereby result in a modality-specific neural signature. However, it is not clear how far this neural signature depends on whether the imagined action has previously been practiced physically or only imagined. The present study aims to disentangle whether the neural imprint of an imagined manual pointing sequence within cortical and subcortical motor areas is determined by the nature of this prior practice modality. Each participant practiced two sequences physically, practiced two other sequences mentally, and did a behavioural pre-test without any further practice on a third pair of sequences. After a two-week practice intervention, participants underwent fMRI scans while imagining all six sequences. Behavioural data demonstrated practice-related effects as well as very good compliance with instructions. Functional MRI data confirmed the previously known motor imagery network. Crucially, we found that mental and physical practice left a modality-specific footprint during mental motor imagery. In particular, activation within the right posterior cerebellum was stronger when the imagined sequence had previously been practiced physically. We conclude that cerebellar activity is shaped specifically by the nature of the prior practice modality.
Background: The elderly population faces multimorbidity (three or more chronic conditions) and increasing drug use with age. A comprehensive characterisation of the medication of elderly patients in primary care, including prescription and over-the-counter (OTC) drugs, is still lacking.
Objectives: This study aims to characterise the medication (prescription and OTC) of multimorbid elderly patients in primary care who live at home, by identifying drug patterns, evaluating the relationships between drugs and drug groups, and revealing associations with recently published multimorbidity clusters of the same cohort.
Methods: MultiCare was a multicentre, prospective, observational cohort study of 3189 multimorbid patients aged 65 to 85 years in primary care in Germany. Patients and general practitioners were interviewed between 2008 and 2009. Drug patterns were identified using exploratory factor analysis. The relations between the drug patterns and the three multimorbidity clusters were analysed with Spearman rank correlation.
Results: Patients (59.3% female) used a mean of 7.7 drugs; in total, 24,535 drugs (23.7% OTC) were recorded. Five drug patterns for men (drugs for obstructive pulmonary diseases (D-OPD); drugs for coronary heart diseases and hypertension (D-CHD); drugs for osteoporosis (D-Osteo); drugs for heart failure; and drugs for pain) and four drug patterns for women (D-Osteo, D-CHD, D-OPD, and diuretics and gout drugs) were detected. Significant associations between multimorbidity clusters and drug patterns were detectable (D-CHD and CMD: male: ρ = 0.376, CI 0.322–0.430; female: ρ = 0.301, CI 0.624–0.340).
Conclusion: The drug patterns demonstrate non-random relations in drug use among multimorbid elderly patients, and systematic associations between drug patterns and multimorbidity clusters were found in primary care.
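The Spearman rank correlation used to relate drug patterns to multimorbidity clusters can be sketched as follows; this minimal version assumes no tied values (ties would require average ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Minimal version without tie handling."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))
```

Because only ranks enter, the coefficient captures any monotone association between pattern scores and cluster scores, not just linear ones.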
Objectives Our study aimed to assess the frequency of potentially inappropriate medication (PIM) use (according to three PIM lists) and to examine the association between PIM use and cognitive function among participants in the MultiCare cohort. Design MultiCare is conducted as a longitudinal, multicentre, observational cohort study. Setting The MultiCare study is located in eight different study centres in Germany. Participants 3189 patients (59.3% female). Primary and secondary outcome measures The study had a cross-sectional design using baseline data from the German MultiCare study. Prescribed and over-the-counter drugs were classified using the FORTA (Fit fOR The Aged), PRISCUS (Latin for ‘time-honoured’) and EU(7)-PIM lists. A mixed-effect multivariate linear regression was performed to calculate the association between PIM use and patients’ cognitive function (measured with the letter digit substitution test, LDST). Results Patients (3189) used 2152 FORTA PIM (mean 0.9±1.03 per patient), 936 PRISCUS PIM (0.3±0.58) and 4311 EU(7)-PIM (1.4±1.29). The most common FORTA PIM was phenprocoumon (13.8%); the most prevalent PRISCUS PIM was amitriptyline (2.8%); the most common EU(7)-PIM was omeprazole (14.0%). The lists rate PIM differently, with an overall overlap of 6.6%. Increasing use of PIM is significantly associated with reduced cognitive function, with correlation coefficients of −0.60 for FORTA PIM (p=0.002), −0.72 for PRISCUS PIM (p=0.025) and −0.44 for EU(7)-PIM (p=0.005). Conclusion The three lists identified PIM differently, and PIM use is associated with cognitive impairment according to the LDST, whereby the FORTA list best explained cognitive decline for the German population. These findings are consistent with a negative impact of PIM use on outcomes in multimorbid elderly patients.
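Counting PIM per patient against a list, as done above for FORTA, PRISCUS and EU(7)-PIM, reduces to a set lookup. The drug sets below are tiny illustrative subsets built from the examples named in the abstract, not the actual lists:

```python
# Illustrative subsets only; the real FORTA / PRISCUS lists are far longer
# and criterion-based.
FORTA_PIM = {"phenprocoumon", "amitriptyline"}
PRISCUS_PIM = {"amitriptyline"}

def pim_count(medication, pim_list):
    """Number of a patient's drugs that appear on a given PIM list."""
    return sum(1 for drug in medication if drug.lower() in pim_list)
```

Per-patient counts of this kind are the exposure variable entered into the regression on cognitive function.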
Objectives The aims of our study were to examine anticholinergic drug use and to assess the association between anticholinergic burden and cognitive function in multimorbid elderly patients of the MultiCare cohort.
Setting MultiCare was conducted as a longitudinal cohort study in primary care, located in eight different study centres in Germany.
Participants 3189 patients (59.3% female).
Primary and secondary outcome measures Baseline data were used for the following analyses. Drugs were classified according to the well-established anticholinergic drug scale (ADS) and the recently published German anticholinergic burden score (German ACB). Cognitive function was measured using the letter digit substitution test (LDST), and a mixed-effect multivariate linear regression was performed to calculate the influence of anticholinergic burden on cognitive function.
Results Patients used 1764 anticholinergic drugs according to the ADS and 2750 anticholinergics according to the German ACB score (prevalence 38.4% and 53.7%, respectively). The mean ADS score was 0.8 (±1.3), and the mean German ACB score was 1.2 (±1.6) per patient. The most common ADS anticholinergic was furosemide (5.8%) and the most common ACB anticholinergic was metformin (13.7%). The majority of the identified anticholinergics were drugs with low anticholinergic potential: 80.2% (ADS) and 73.4% (ACB), respectively. Increasing ADS and German ACB scores were associated with reduced cognitive function according to the LDST (−0.26; p=0.008 and −0.24; p=0.003, respectively).
Conclusion Multimorbid elderly patients are at high risk of using anticholinergic drugs according to both the ADS and the German ACB score. Greater awareness is especially needed of the contribution of cardiovascular drugs with low anticholinergic potential. As anticholinergic drug use is associated with reduced cognitive function in multimorbid elderly patients, the importance of rational prescribing, and also of deprescribing, needs to be further evaluated.
Trial registration number ISRCTN89818205.
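The mixed-effect regressions in the two abstracts above relate a per-patient medication score to LDST performance while accounting for the study-centre grouping. The following sketch is not the authors' model: it uses synthetic data with hypothetical centre baselines and a within-centre demeaning step (which absorbs centre-level intercepts, mimicking the grouping structure of a random-intercept model) before fitting a single slope by ordinary least squares, using only the Python standard library.

```python
import random

random.seed(1)

# Synthetic data: 8 study centres, each with its own baseline LDST level;
# a negative true slope links the drug-burden score to the cognitive score.
TRUE_SLOPE = -0.25  # assumed for illustration, not a value from the studies
data = []           # (centre, burden_score, ldst_score)
for centre in range(8):
    baseline = random.gauss(35.0, 3.0)  # hypothetical centre-specific intercept
    for _ in range(50):
        burden = random.choice([0, 0, 0, 1, 1, 2, 3])
        ldst = baseline + TRUE_SLOPE * burden + random.gauss(0.0, 1.0)
        data.append((centre, burden, ldst))

# Demean burden and LDST within each centre to absorb the centre intercepts.
by_centre = {}
for c, x, y in data:
    by_centre.setdefault(c, []).append((x, y))

xs, ys = [], []
for pairs in by_centre.values():
    mx = sum(x for x, _ in pairs) / len(pairs)
    my = sum(y for _, y in pairs) / len(pairs)
    for x, y in pairs:
        xs.append(x - mx)
        ys.append(y - my)

# OLS slope on the demeaned data: cov(x, y) / var(x)
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(f"estimated slope: {slope:.3f}")  # recovers a value close to TRUE_SLOPE
```

A full random-effects fit (e.g. with a dedicated statistics package) would additionally estimate the variance of the centre intercepts; the demeaning shortcut only illustrates why grouping must be handled before interpreting the slope.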
The specific and precise arrangement of proteins and biomolecules in 3D is an important prerequisite for the study of cell migration, cellular signal transduction and the production of artificial tissue. In a variety of research approaches, proteins have been immobilized on rigid surfaces such as glass or gold to observe protein-protein or protein-cell interactions. While these commonly used analytical platforms offer advantages such as rapid washing steps and ease of use, due to their rigidity and two-dimensionality they cannot replicate the extracellular matrix (ECM), the native environment of cells. This severe deviation from the natural environment results in significant changes in cell structure and in cellular processes such as cell polarization, morphology, and signal transduction. In order to maintain the functionality of the immobilized proteins, it is also enormously important that the proteins are oriented and anchored in the material under mild conditions.
An immobilization strategy that makes this possible is bioaffinity. Here, the specific interaction of a biomolecule with an interaction partner anchored on a surface is used to immobilize the biomolecule. One such interaction is the nitrilotriacetic acid (NTA)/His-tag binding. NTA is a chelator molecule that, when loaded with divalent metal ions such as Ni(II), forms an octahedral complex with oligohistidines. The oligohistidine tag can be competed out of the complex by free histidine or imidazole due to their structural similarity; this is exploited in immobilized metal affinity chromatography (IMAC). The binding of a monoNTA/His-tag complex (KD = 10 µM) is not stable enough to be used for immobilization. Therefore, multivalent variants of the chelator were developed, such as trisNTA, which has a high affinity for His6-tagged proteins (KD = 10 nM). The PA-trisNTA developed in preliminary work was the first light-activatable system based on the trisNTA chelator head.
The aim of this work was to synthesize a new two-photon (2P) activatable trisNTA (TPA-trisNTA) interaction molecule, to analyze its photophysical characteristics and to apply it to two- and three-dimensional (2D/3D) biomolecule patterning. The final goal was to use TPA-trisNTA for cellular applications in order to manipulate membrane protein organization. TPA-trisNTA was therefore designed to maintain a stable autoinhibition, enabling the immobilization of proteins under physiological conditions with high precision in the x/y as well as the z dimension only upon light activation. 2P activation brings some outstanding advantages: i) near-infrared (NIR) light is less harmful to cells than ultraviolet (UV) light, ii) the longer wavelength allows the radiation to penetrate deeper into tissue, iii) focal irradiation is more precise because only a focal volume (about 1 fL) is excited and, unlike with UV light, scattered light does not lead to activation.
Several backbones for TPA-trisNTA were considered as 2P-cleavable groups due to their 2P absorption ability and small size: 3-nitrodibenzofuran (NDBF), 6-bromo-7-hydroxycoumarin (Bhc), and 7-diethylaminocoumarin (DEAC). Initially, suitable synthetic routes were developed for the respective carbaldehydes, since these represented important intermediates for the construction of both amino acid (aa) derivatives and β-hydroxy acids. β-Hydroxy acids were important intermediates because their photocleavage differs from that of aa derivatives. To establish the conversion from carbaldehydes to β-hydroxy acids via the Reformatsky reaction, commercially available carbaldehydes of the nitroveratryl (NV) and nitropiperonyl (NP) groups were additionally used. The conversion of NDBF, NV and NP proved difficult, whereas the β-hydroxy acid was successfully synthesized from both Bhc and DEAC.
Starting from the DEAC β-hydroxy acid, an Fmoc-protected amino acid derivative was synthesized. To ensure high cleavage efficiency, the DEAC β-hydroxy acid was linked to mono-Fmoc-ethylenediamine through a carbamate linker. Subsequently, the photocleavable group was successfully incorporated into the linker of TPA-trisNTA by solid-phase peptide synthesis (SPPS).
The functional principle of TPA-trisNTA, like that of PA-trisNTA, is based on the autoinhibition of the multivalent chelator head trisNTA, which is linked to an intramolecular oligohistidine sequence by a peptide linker. In the presence of Ni(II) ions, trisNTA forms a metal-ion-mediated complex with the histidines, causing TPA-trisNTA to self-inactivate. The cleavage site is the DEAC-based photocleavable amino acid. In contrast to PA-trisNTA, the incorporation of two photocleavable amino acids was omitted; instead, only one photocleavable DEAC was incorporated in front of the His-tag. To avoid a second DEAC group within the His-tag, a His5 tag was used instead of a His6 tag. It is known from preliminary work that a His5 tag is sufficient to maintain autoinhibition in the presence of His6-tagged proteins of interest (POIs), but can be displaced from the complex after light-driven cleavage of the peptide backbone. Placement of a cysteine in the peptide linker between the trisNTA and the DEAC group allowed for permanent surface anchoring after photocleavage of the linker.
...
Photoresponsive hydrogels can be employed to coordinate the organization of proteins in three dimensions (3D) and thus to spatiotemporally control their physicochemical properties by light. However, reversible and user-defined tethering of proteins and protein complexes to biomaterials poses a considerable challenge, as it is a cumbersome process which, in many cases, does not support the precise localization of biomolecules in the z direction. Here, we report on the 3D patterning of proteins with polyhistidine tags based on in situ two-photon lithography. By exploiting a two-photon-activatable multivalent chelator head, we established the protein mounting of hydrogels with micrometer precision. In the presence of photosensitizers, a substantially enhanced two-photon activation of the developed tool inside hydrogels was detected, enabling user-defined 3D protein immobilization in hydrogels with high specificity, micrometer-scale precision, and mild light doses. Our protein-binding strategy allows the patterning of a wide variety of proteins and offers the possibility to dynamically modify the biofunctional properties of materials at defined subvolumes in 3D.
In this study we show how size-resolved measurements of aerosol particles and cloud condensation nuclei (CCN) can be used to characterize the supersaturation of water vapor in a cloud. The method was developed and applied for the investigation of a cloud event during the ACRIDICON-Zugspitze campaign (17 September to 4 October 2012) at the high-alpine research station Schneefernerhaus (German Alps, 2650 m a.s.l.). Number size distributions of total and interstitial aerosol particles were measured with a scanning mobility particle sizer (SMPS), and size-resolved CCN efficiency spectra were recorded with a CCN counter system operated at different supersaturation levels.
During the evolution of a cloud, aerosol particles are exposed to different supersaturation levels. We outline and compare different estimates for the lower and upper bounds (Slow, Shigh) and the average value (Savg) of peak supersaturation encountered by the particles in the cloud. For the investigated cloud event, we derived Slow ≈ 0.19–0.25%, Shigh ≈ 0.90–1.64% and Savg ≈ 0.38–0.84%. Estimates of Slow, Shigh and Savg based on aerosol size distribution data require specific knowledge or assumptions of aerosol hygroscopicity, which are not required for the derivation of Slow and Savg from the size-resolved CCN efficiency spectra.
In this study we show how size-resolved measurements of aerosol particles and cloud condensation nuclei (CCN) can be used to characterize the supersaturation of water vapor in a cloud. The method was developed and applied during the ACRIDICON-Zugspitze campaign (17 September to 4 October 2012) at the high-Alpine research station Schneefernerhaus (German Alps, 2650 m a.s.l.). Number size distributions of total and interstitial aerosol particles were measured with a scanning mobility particle sizer (SMPS), and size-resolved CCN efficiency spectra were recorded with a CCN counter system operated at different supersaturation levels.
During the evolution of a cloud, aerosol particles are exposed to different supersaturation levels. We outline and compare different estimates for the lower and upper bounds (Slow, Shigh) and the average value (Savg) of peak supersaturation encountered by the particles in the cloud. A major advantage of the derivation of Slow and Savg from size-resolved CCN efficiency spectra is that it does not require the specific knowledge or assumptions about aerosol hygroscopicity that are needed to derive estimates of Slow, Shigh, and Savg from aerosol size distribution data. For the investigated cloud event, we derived Slow ≈ 0.07–0.25%, Shigh ≈ 0.86–1.31% and Savg ≈ 0.42–0.68%.
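The link between particle size, hygroscopicity and activation that the two abstracts above exploit is commonly expressed through κ-Köhler theory, in which a dry particle of diameter D and hygroscopicity κ activates at a critical supersaturation S_c ≈ sqrt(4A³/(27κD³)), with A the Kelvin term. The sketch below is a standard textbook approximation, not necessarily the exact formulation used in this study; the parameter values are illustrative.

```python
import math

def critical_supersaturation(d_dry_m: float, kappa: float, temp_k: float = 283.0) -> float:
    """Critical supersaturation (in %) from kappa-Koehler theory:
    S_c = sqrt(4 A^3 / (27 kappa D^3)), with A = 4 sigma M_w / (R T rho_w)."""
    sigma = 0.072   # surface tension of water, N/m (approximate)
    m_w = 0.018     # molar mass of water, kg/mol
    rho_w = 1000.0  # density of water, kg/m^3
    r_gas = 8.314   # universal gas constant, J/(mol K)
    a = 4.0 * sigma * m_w / (r_gas * temp_k * rho_w)  # Kelvin parameter, m
    return 100.0 * math.sqrt(4.0 * a**3 / (27.0 * kappa * d_dry_m**3))

# A 100 nm particle with kappa = 0.3 activates at roughly 0.2 % supersaturation,
# the same order as the lower-bound estimates reported above; larger particles
# activate at lower supersaturation.
print(critical_supersaturation(100e-9, 0.3))
print(critical_supersaturation(200e-9, 0.3))
```

This dependence is also why the abstracts stress hygroscopicity assumptions: inverting an activation diameter into a supersaturation requires a value for κ, whereas size-resolved CCN efficiency spectra measure activation directly.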
This thesis presents the Certainty Tool, an extension for the Unity-based part of the Stolperwege project. It carries forward the idea of the VAnnotatoR and allows the visualization of informational uncertainty in the buildings digitally reconstructed in the Stolperwege practical course. The tool incorporates the concept behind BIM (Building Information Modelling), a novel planning method in the AEC industry that enables building components to carry information about themselves. Within the Certainty Tool, levels of informational uncertainty are defined and assigned to parts of a building. The tool is demonstrated on a digital reconstruction of the destroyed Rothschild-Palais. Furthermore, an evaluation based on the Usability Metric for User Experience was conducted, and further developments and improvements of the tool are discussed.
The structural diversity of terpenoids is limited by the isoprene rule, which states that all primary terpene synthase products derive from methyl-branched building blocks with five carbon atoms. In this study we discovered a broad spectrum of novel terpenoids with eleven carbon atoms as byproducts of bacterial 2-methylisoborneol or 2-methylenebornane synthases. Both enzymes use 2-methyl-GPP as substrate, which is synthesized from GPP by the action of a methyltransferase. We used E. coli strains that heterologously produce different C11-terpene synthases together with the GPP methyltransferase and the mevalonate pathway enzymes. With this de novo approach, 35 different C11-terpenes could be produced. In addition to eleven known compounds, it was possible to detect 24 novel C11-terpenes which have not yet been described as terpene synthase products. Four of them (3,4-dimethylcumene, 2-methylborneol and the two diastereomers of 2-methylcitronellol) could be identified. Furthermore, we showed that an E. coli strain expressing the GPP-methyltransferase can produce the C16-terpene 6-methylfarnesol, which indicates the condensation of 2-methyl-GPP and IPP to 6-methyl-FPP by the E. coli FPP-synthase. Our study demonstrates the broad range of unusual terpenes accessible by expression of GPP-methyltransferases and C11-terpene synthases in E. coli and provides an extended mechanism for C11-terpene synthases.
We develop a state-space model to decompose bid and ask quotes of CDS into two components, a fair default premium and a liquidity premium. This approach gives a better estimate of the default premium than mid quotes, and it allows us to disentangle and compare the liquidity premium earned by the protection buyer and the protection seller. In contrast to other studies, our model is structurally much simpler, while it also allows for correlation between liquidity and default premia, as supported by empirical evidence. The model is implemented and applied to a large data set of 118 CDS for a period ranging from 2004 to 2010. The model-generated output variables are analyzed in a difference-in-difference framework to determine how the default premium, as well as the liquidity premium of protection buyers and sellers, evolved during different periods of the financial crisis and to what extent they differ for financial institutions compared with non-financials.
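The filtering machinery behind such a state-space decomposition can be illustrated with a minimal scalar Kalman filter. This sketch is not the paper's model (which separates default and liquidity premia from bid and ask quotes); it only shows the core idea of extracting a latent random-walk "fair premium" from a single noisy quote series, with assumed noise variances.

```python
import random

random.seed(7)

# Simulate a latent random-walk premium and noisy observed quotes.
Q, R = 0.01, 0.25          # state and observation noise variances (assumed)
truth, quotes = [], []
x = 1.0
for _ in range(500):
    x += random.gauss(0.0, Q ** 0.5)
    truth.append(x)
    quotes.append(x + random.gauss(0.0, R ** 0.5))

# Scalar Kalman filter: predict (random walk), then update with each quote.
est, p = quotes[0], 1.0    # initial state estimate and its variance
filtered = []
for z in quotes:
    p += Q                 # predict: uncertainty grows by the state noise
    k = p / (p + R)        # Kalman gain
    est += k * (z - est)   # update: move toward the observation
    p *= (1.0 - k)
    filtered.append(est)

# The filtered series tracks the latent premium more closely than raw quotes.
mse_raw = sum((q - t) ** 2 for q, t in zip(quotes, truth)) / len(truth)
mse_filt = sum((f - t) ** 2 for f, t in zip(filtered, truth)) / len(truth)
print(mse_filt < mse_raw)
```

The paper's two-component model would extend this to a vector state (default premium, liquidity premium) observed through two equations (bid and ask), but the predict/update recursion is the same.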
[Obituary] Editha Platte
(2010)
The Museum Giersch can sometimes seem off the beaten track, falling into oblivion time and again in the shadow of Frankfurt's great galleries. It is all the more important to counteract this, for with the exhibition Frobenius – Die Kunst des Forschens (Frobenius – The Art of Research), the museum on the Museumsufer is currently presenting a piece of Frankfurt's cultural identity. In contrast to the much-visited Städel, the villa-like museum welcomes its visitors with dimmed light and warm wood: a calm atmosphere spreads that is conducive not only to concentration while reading the detailed explanatory texts, but also to the effect of the more than 200 works on display. ...
This doctoral thesis deals with the structural and dynamical NMR characterization of biomolecules, covering a broad range of proteins, from small peptides to large G-protein-coupled receptors (GPCRs). The work consists of two projects, which are presented in chapters II and III. Chapter II focuses on the structural screening of peptides and small proteins ranging from 14 to 71 amino acids, while chapter III describes the structure and light-induced dynamics of the disease-relevant rhodopsin G90D mutant. The main method used to investigate both types of proteins is NMR spectroscopy. Each chapter comprises its own general introduction, materials and methods, results and discussion sections, and a final conclusion.
‘Chapter I: Methodological aspects of protein NMR spectroscopy’ presents an overview of different NMR methods developed for the rapid characterization of protein structure and dynamics. Multidimensional NMR, routinely used in structural biology, is indispensable for protein structure determination in solution. However, obtaining detailed information at atomic resolution is time-consuming, requiring weeks of expensive measurement time followed by manual data analysis. The development of time-saving NMR techniques is therefore in high demand for screening studies of large numbers of proteins, and can also be helpful for studying unstable biomolecules, whose short lifetime often restricts the experimental procedure.
This chapter covers the two main approaches to accelerating a multidimensional NMR experiment: fast-pulsing techniques, which aim to reduce the duration of an individual scan, and non-uniform sampling (NUS), which was developed to reduce the overall number of increments in the indirect time domains. A combination of both approaches allows the measurement time to be shortened by 2-3 orders of magnitude. Furthermore, the recently developed software TA (targeted acquisition) combines several of these time-saving approaches. The targeted-acquisition algorithm records a set of multidimensional NMR spectra in a semi-interleaved incremental mode. This makes it possible to monitor the quality of the recorded spectra in real time and therefore to stop the experiments once the desired quality is achieved. Using this approach greatly reduces the measurement time without losing important structural information. The implemented automated FLYA assignment further contributes to the rapid and simplified readout of the chemical shift assignment progress within the TA program. During this doctoral work, a scientific collaboration with the TA software developer Prof. Vladislav Orekhov (Sweden) took place and resulted in the successful establishment of this new NMR technology in the Schwalbe laboratory, where TA is now routinely applied for the structure elucidation of small proteins.
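The claimed 2-3 orders of magnitude can be made concrete with simple back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not the thesis' actual acquisition parameters: a 3D experiment with 128 increments in each indirect dimension, NUS retaining a few percent of the (t1, t2) grid, and a fast-pulsing scheme shortening the recycle delay by a further factor of about three.

```python
# Illustrative speedup arithmetic for a 3D NMR experiment (assumed numbers).
n1, n2 = 128, 128          # increments in the two indirect dimensions
nus_fraction = 0.03        # fraction of (t1, t2) grid points actually sampled
fast_pulsing_gain = 3.0    # e.g., shorter recycle delays in SOFAST/BEST-type schemes

conventional_points = n1 * n2
nus_points = int(conventional_points * nus_fraction)

# NUS and fast pulsing act on different parts of the experiment,
# so their gains multiply.
speedup = (conventional_points / nus_points) * fast_pulsing_gain
print(f"sampled points: {nus_points} of {conventional_points}")
print(f"combined speedup: ~{speedup:.0f}x")
```

With these assumptions the combined gain is about two orders of magnitude; more aggressive sampling schedules push it toward three.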
‘Chapter II: Rapid NMR and biophysical characterization of small proteins’ describes the structural analysis of peptides and small proteins that were recently identified within the framework of the Priority Program (SPP 2002). Due to technical limitations in detecting small systems and strict assumptions about the smallest gene size that can be translated, small open reading frames (sORFs) were excluded from automated gene annotation for a very long time. Thanks to newly developed computational and experimental approaches, the ability to identify and detect small proteins of fewer than approximately 70 amino acids has sparked growing scientific interest among microbiologists. In the past years, hundreds of new short protein sequences have been discovered. Although some peptides were found to be involved in diverse essential biological processes, the functional elucidation of a large number of recently discovered peptides and small proteins remains a challenging task. It is well established that the structure of proteins is often linked to their function. However, the small size of these constructs often restricts the possible diversity of secondary structure elements that a protein might adopt. Furthermore, as was shown for intrinsically disordered proteins (IDPs), the absence of a well-defined three-dimensional structure does not necessarily mean lack of function. Moreover, peptides that are initially unstructured in isolated form can fold into a stable, structured conformation upon interaction with their biological partners. Solution-state NMR spectroscopy is perfectly amenable to the structural characterization of systems of this size. It provides a rapid and unambiguous readout of the conformational state of small peptides, distinguishing between folded, molten-globule and unstructured conformations.
During this doctoral work, a workflow protocol for the fast screening of peptides and small proteins was established and applied to 20 candidates ranging from 14 to 71 amino acids, which were identified and selected by six microbiological groups, all members of the Priority Program on small proteins (SPP 2002) funded by the German Research Foundation (DFG). The screening protocol includes sample preparation and biochemical characterization. Peptides of fewer than 30 amino acids were synthesized by solid-phase peptide synthesis (SPPS), while small proteins of more than 30 amino acids were heterologously expressed in E. coli.
...
The RHO gene encodes the G-protein-coupled receptor (GPCR) rhodopsin. Numerous mutations associated with an impaired visual cycle have been reported; the G90D mutation leads to a constitutively active form of rhodopsin that causes congenital stationary night blindness (CSNB). We report on the structural investigation of the retinal configuration and conformation in the binding pocket in the dark and light-activated states by solution and MAS NMR spectroscopy. We found two long-lived dark states of the G90D mutant, with the 11-cis retinal bound as a Schiff base in both populations. The second, minor population in the dark state is attributed to a slight shift in the conformation of the covalently bound 11-cis retinal caused by the mutation-induced distortion of the salt bridge in the binding pocket. Time-resolved UV/Vis spectroscopy was used to monitor the functional dynamics of the G90D mutant rhodopsin on all relevant time scales of the photocycle. The G90D mutant retains its conformational heterogeneity during the photocycle.
Proteins encoded by small open reading frames (sORFs) have a widespread occurrence in diverse microorganisms and can be of high functional importance. However, due to annotation biases and their technically challenging direct detection, these small proteins have been overlooked for a long time and were only recently rediscovered. The currently rapidly growing number of such proteins requires efficient methods to investigate their structure–function relationship. Herein, a method is presented for fast determination of the conformational properties of small proteins. Their small size makes them perfectly amenable for solution-state NMR spectroscopy. NMR spectroscopy can provide detailed information about their conformational states (folded, partially folded, and unstructured). In the context of the priority program on small proteins funded by the German research foundation (SPP2002), 27 small proteins from 9 different bacterial and archaeal organisms have been investigated. It is found that most of these small proteins are unstructured or partially folded. Bioinformatics tools predict that some of these unstructured proteins can potentially fold upon complex formation. A protocol for fast NMR spectroscopy structure elucidation is described for the small proteins that adopt a persistently folded structure by implementation of new NMR technologies, including automated resonance assignment and nonuniform sampling in combination with targeted acquisition.
1H, 13C, and 15N backbone chemical shift assignments of coronavirus-2 non-structural protein Nsp10
(2020)
The international Covid19-NMR consortium aims at the comprehensive spectroscopic characterization of SARS-CoV-2 RNA elements and proteins and will provide NMR chemical shift assignments of the molecular components of this virus. The SARS-CoV-2 genome encodes approximately 30 different proteins. Four of these proteins are involved in forming the viral envelope or in the packaging of the RNA genome and are therefore called structural proteins. The other proteins fulfill a variety of functions during the viral life cycle and comprise the so-called non-structural proteins (nsps). Here, we report the near-complete NMR resonance assignment for the backbone chemical shifts of the non-structural protein 10 (nsp10). Nsp10 is part of the viral replication-transcription complex (RTC). It aids in synthesizing and modifying the genomic and subgenomic RNAs. Via its interaction with nsp14, it ensures transcriptional fidelity of the RNA-dependent RNA polymerase, and through its stimulation of the methyltransferase activity of nsp16, it aids in synthesizing the RNA cap structures which protect the viral RNAs from being recognized by the innate immune system. Both of these functions can be potentially targeted by drugs. Our data will aid in performing additional NMR-based characterizations, and provide a basis for the identification of possible small molecule ligands interfering with nsp10 exerting its essential role in viral replication.
To date, there is insufficient insight into inflammatory bowel disease (IBD)-associated stress, recognized disability, and contact with the social care system. We aimed to assess these parameters in IBD patients and a non-IBD control group, who were invited to participate in an online survey developed specifically for this study (www.soscisurvey.de) with the help of IBD patients. 505 IBD patients and 166 volunteers (i.e., control group) participated in the survey. IBD patients reported significantly increased levels of stress within the last six months and five years (p<0.0001) and were more likely to have a recognized disability (p<0.0001). A low academic status was the strongest indicator of a disability (p = 0.006). Only 153 IBD patients (30.3%) reported contact with the social care system, and a disability was the strongest indicator for this (p<0.0001). Our study provides data on stress and disability in a large unselected German IBD cohort. We showed that patients with IBD suffer more often from emotional stress and more often have a recognized disability. As only about 1/3 of the patients had come into contact with the social care system and the corresponding support, this patient group is undersupplied in this area.
Background and Aim: The main disadvantage of plastic stents is the high rate of stent occlusion. The usual replacement interval of biliary plastic stents is 3 months. This study aimed to investigate if a shorter interval of 6–8 weeks impacts the median premature exchange rate (mPER) in benign and malignant biliary strictures.
Methods: All cases with endoscopic retrograde cholangiopancreatography (ERCP) and plastic stent placement were retrospectively analyzed since establishing an elective replacement interval of every 6–8 weeks at our institution and mPER was determined.
Results: A total of 3979 ERCPs (1199 patients) were analyzed, including 1262 (31.7%) malignant and 2717 (68.3%) benign cases, respectively. The median stent patency (mSP) was 41 days (range 14–120) for scheduled stent exchanges, whereas it was 17 days (1–75) for prematurely exchanged stents. The mPER was significantly higher for malignant (28.1%, 35–50%) compared with benign strictures (15.2%, 10–28%), P < 0.0001, respectively. mSP was significantly shorter in cases with only one stent (34 days [1–87] vs 41 days [1–120]) and in cases with only a 7-Fr stent (28 days [2–79]) compared with a larger stent (34 days [1–87], P = 0.001). Correspondingly, mPER was significantly higher in cases with only one stent (23% vs 16.2%, P < 0.0001) and only a 7-Fr stent (31.3% vs 22.4%, P = 0.03).
Conclusion: A shorter replacement interval does not seem to lead to a clinically meaningful reduction of the mPER in benign and malignant strictures. Large stents and multiple stenting should be favored whenever possible.
Background: Vitamin D is required to maintain the integrity of the intestinal barrier and inhibits inflammatory signaling pathways.
Objective: We hypothesized that vitamin D deficiency is involved in cirrhosis-associated systemic inflammation and the risk of hepatic decompensation in patients with liver cirrhosis.
Methods: Outpatients of the Hepatology Unit of the University Hospital Frankfurt with advanced liver fibrosis and cirrhosis were prospectively enrolled. 25-hydroxyvitamin D (25(OH)D3) serum concentrations were quantified and associated with markers of systemic inflammation / intestinal bacterial translocation and hepatic decompensation.
Results: A total of 338 patients with advanced liver fibrosis or cirrhosis were included. Of those, 51 patients (15%) were hospitalized due to hepatic decompensation during follow-up. Overall, 72 patients (21%) had severe vitamin D deficiency. However, patients receiving vitamin D supplements had significantly higher 25(OH)D3 serum levels compared with patients without supplements (37 ng/mL vs. 16 ng/mL, P<0.0001). Uni- and multivariate analyses revealed an independent association of severe vitamin D deficiency with the risk of hepatic decompensation during follow-up (multivariate P = 0.012; OR = 3.25, 95% CI = 1.30–8.2), together with MELD score, low hemoglobin concentration, low coffee consumption, and presence of diabetes. Of note, serum levels of C-reactive protein, IL-6 and soluble CD14 were significantly higher in patients with versus without severe vitamin D deficiency, and serum levels of soluble CD14 declined in patients with de novo supplementation of vitamin D (median 2.15 vs. 1.87 ng/mL, P = 0.002).
Conclusions: In this prospective cohort study, baseline vitamin D levels were inversely associated with liver-cirrhosis related systemic inflammation and the risk of hepatic decompensation.
Background and Aims: The IL-12/23 inhibitor ustekinumab (UST) opened up new treatment options for patients with Crohn’s disease (CD). Due to the recent approval, real-world German data on long-term efficacy and safety are lacking. This study aimed to assess the clinical course of CD patients under UST therapy and to identify potential predictive markers.
Methods: Patients with CD receiving UST treatment in three hospitals and two outpatient centers were included and retrospectively analyzed. Rates for short- and long-term remission and response were analyzed with the help of clinical (Harvey–Bradshaw Index (HBI)) and biochemical (C-reactive protein (CRP), Fecal calprotectin (fCal)) parameters for disease activity.
Results: Data from 180 patients were evaluated. 106 patients had a follow-up of at least eight weeks and were included. 96.2% of the patients were pre-exposed to anti-TNFα agents and 34.4% to both anti-TNFα and anti-integrin antibodies. The median follow-up was 49.1 weeks (95% CI 42.03–56.25). At week 8, 51 patients (54.8%) showed a response to UST and 24 (24.7%) were in remission. At week 48, 48 patients (51.6%) responded to UST and 25 (26.9%) were in remission. Steroid-free response and remission at week 8 were achieved by 30.1% and 19.3% of patients, respectively. At week 48, 37.6% showed a steroid-free response to UST, and 20.4% of the initial patient population was in steroid-free remission.
Conclusion: Our study confirms short- and long-term UST effectiveness and tolerability in a cohort of multi-treatment-exposed patients.
Successful retrieval from memory is a desirably difficult learning event that reduces the recall decrement of studied materials over longer delays more than restudying does. The present study was the first to test this direct testing effect for performed and read action events (e.g., “light a candle”) in terms of both recall accuracy and recall speed. To this end, subjects initially encoded action phrases by either enacting them or reading them aloud (i.e., encoding type). After this initial study phase, they received two practice phases, in which the same number of action phrases were restudied or retrieval-practiced (Exp. 1–3), or not further processed (Exp. 3; i.e., practice type). This learning session was followed by a final cued-recall test both after a short delay (2 min) and after a long delay (1 week: Exp. 1 and 2; 2 weeks: Exp. 3). To test the generality of the results, subjects performed retrieval practice with either noun-cued recall of verbs (Exp. 1 and 3) or verb-cued recall of nouns (Exp. 2) during the intermediate and final tests (i.e., test type). We demonstrated direct benefits of testing on both recall accuracy and recall speed. Repeated retrieval practice, relative to repeated restudy and study-only practice, reduced the recall decrement over the long delay and enhanced the phrases’ recall speed already after 2 min, independently of the type of encoding and recall test. However, a benefit of testing on long-term retention only emerged (Exp. 3) when the recall delay was prolonged from 1 to 2 weeks and different sets of phrases were used for the immediate and delayed final tests. Thus, the direct testing benefit appears to be highly generalizable even with more complex, action-oriented stimulus materials and encoding manipulations. We discuss these results in terms of the distribution-based bifurcation model.
This work proposes to employ the (bursty) GLO model from Bingmer et al. (2011) to model the occurrence of tropical cyclones. We develop a Bayesian framework to estimate the parameters of the model, employing in particular a Markov chain Monte Carlo algorithm. This also allows us to develop a forecasting framework for future events.
Moreover, we assess the default probability of an insurance company that is exposed to claims that occur according to a GLO process and show that the model is able to substantially improve actuarial risk management if events occur in oscillatory bursts.
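As an aside, the Bayesian MCMC estimation described above can be illustrated with a minimal sketch. This is not the GLO model itself: it uses a plain Poisson event rate as a simplified stand-in, hypothetical yearly counts rather than real cyclone data, and a random-walk Metropolis sampler.

```python
import math
import random

def log_posterior(lam, counts):
    # Log posterior for a Poisson event rate `lam` given yearly event
    # counts, with an Exponential(1) prior on the rate.
    if lam <= 0:
        return float("-inf")
    log_lik = sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)
    return log_lik - lam  # add the log of the Exponential(1) prior

def metropolis(counts, n_iter=5000, step=0.5, seed=1):
    # Random-walk Metropolis sampler for the posterior of `lam`.
    random.seed(seed)
    lam, samples = 1.0, []
    for _ in range(n_iter):
        proposal = lam + random.gauss(0.0, step)
        if math.log(random.random()) < log_posterior(proposal, counts) - log_posterior(lam, counts):
            lam = proposal
        samples.append(lam)
    return samples

# Hypothetical yearly cyclone counts (illustrative, not real data)
counts = [4, 6, 5, 3, 7, 5, 4, 6]
samples = metropolis(counts)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The posterior samples then feed directly into forecasting: simulating next-year event counts from sampled rates propagates parameter uncertainty into the forecast.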
This paper documents that the bond investments of insurance companies transmit shocks from insurance markets to the real economy. Liquidity windfalls from household insurance purchases increase insurers' demand for corporate bonds. Exploiting the fact that insurers persistently invest in a small subset of firms for identification, I show that these increases in bond demand raise bond prices and lower firms' funding costs. In response, firms issue more bonds, especially when their bond underwriters are well connected with investors. Firms use the proceeds to raise investment rather than equity payouts. The results emphasize the significant impact of investor demand on firms' financing and investment activities.
Macro-finance theory predicts that financial fragility builds up when volatility is low. This “volatility paradox” challenges traditional systemic risk measures. I explore a new dimension of systemic risk, spillover persistence, which is the average time horizon at which a firm’s losses increase future risk in the financial system. Using firm-level data covering more than 30 years and 50 countries, I document that persistence declines when fragility builds up: before crises, during stock market booms, and when banks take more risks. In contrast, persistence increases with loss amplification: during crises and fire sales. These findings support key predictions of recent macro-finance models.
We study the impact of estimation errors of firms on social welfare. For this purpose, we present a model of the insurance market in which insurers face parameter uncertainty about expected loss sizes. As consumers react to under- and overestimation by increasing and decreasing demand, respectively, insurers require a safety loading for parameter uncertainty. If the safety loading is too small, less risk averse consumers benefit from less informed insurers by speculating on them underestimating expected losses. Otherwise, social welfare increases with insurers’ information. We empirically estimate safety loadings in the US property and casualty insurance market, and show that these are likely to be sufficiently large for consumers to benefit from more informed insurers.
This paper sheds light on the life insurance sector’s liquidity risk exposure. Life insurers are important long-term investors on financial markets. Due to their long-term investment horizon they cannot quickly adapt to changes in macroeconomic conditions. Rising interest rates in particular can expose life insurers to run-like situations, since a slow interest rate pass-through incentivizes policyholders to terminate insurance policies and invest the proceeds at relatively high market interest rates. We develop and empirically calibrate a granular model of policyholder behavior and life insurance cash flows to quantify insurers’ liquidity risk exposure stemming from policy terminations. Our model predicts that a sharp interest rate rise by 4.5pp within two years would force life insurers to liquidate 12% of their initial assets. While the associated fire sale costs are small under reasonable assumptions, policy terminations plausibly erase 30% of life insurers’ capital due to mark-to-market accounting. Our analysis reveals a mechanism by which monetary policy tightening increases liquidity risk exposure of non-bank financial intermediaries with long-term assets.
Life insurance convexity
(2023)
Life insurers sell savings contracts with surrender options, which allow policyholders to prematurely receive guaranteed surrender values. These surrender options move toward the money when interest rates rise. Hence, higher interest rates raise surrender rates, as we document empirically by exploiting plausibly exogenous variation in monetary policy. Using a calibrated model, we then estimate that surrender options would force insurers to sell up to 2% of their investments during an enduring interest rate rise of 25 bps per year. We show that these fire sales are fueled by surrender value guarantees and insurers’ long-term investments.
Life insurance convexity
(2021)
Life insurers massively sell savings contracts with surrender options which allow policyholders to withdraw a guaranteed amount before maturity. These options move toward the money when interest rates rise. Using data on German life insurers, we estimate that a 1 percentage point increase in interest rates raises surrender rates by 17 basis points. We quantify the resulting liquidity risk in a calibrated model of surrender decisions and insurance cash flows. Simulations predict that surrender options can force insurers to sell up to 3% of their assets, depressing asset prices by 90 basis points. The effect is amplified by the duration of insurers' investments, and its impact on the term structure of interest rates depends on life insurers' investment strategy.
Common systemic risk measures focus on the instantaneous occurrence of triggering and systemic events. However, systemic events may also occur with a time-lag to the triggering event. To study this contagion period and the resulting persistence of institutions' systemic risk we develop and employ the Conditional Shortfall Probability (CoSP), which is the likelihood that a systemic market event occurs with a specific time-lag to the triggering event. Based on CoSP we propose two aggregate systemic risk measures, namely the Aggregate Excess CoSP and the CoSP-weighted time-lag, that reflect the systemic risk aggregated over time and average time-lag of an institution's triggering event, respectively. Our empirical results show that 15% of the financial companies in our sample are significantly systemically important with respect to the financial sector, while 27% of the financial companies are significantly systemically important with respect to the American non-financial sector. Still, the aggregate systemic risk of systemically important institutions is larger with respect to the financial market than with respect to non-financial markets. Moreover, the aggregate systemic risk of insurance companies is similar to the systemic risk of banks, while insurers are also exposed to the largest aggregate systemic risk among the financial sector.
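The core idea of measuring systemic events at a time-lag to the triggering event can be sketched with a simple empirical frequency. This is an illustrative toy with synthetic returns and hypothetical function names, not the authors' CoSP estimator: it merely counts how often a market tail event follows a firm tail event at a given lag.

```python
def tail_events(returns, q=0.05):
    # Flag returns at or below the empirical q-quantile as tail events.
    cut = sorted(returns)[int(q * len(returns))]
    return [r <= cut for r in returns]

def lagged_coexceedance(firm, market, lag, q=0.05):
    # Empirical frequency of a market tail event occurring exactly
    # `lag` periods after a firm tail event (the triggering event).
    f, m = tail_events(firm, q), tail_events(market, q)
    triggers = [t for t in range(len(f) - lag) if f[t]]
    if not triggers:
        return 0.0
    return sum(m[t + lag] for t in triggers) / len(triggers)

# Synthetic returns: the market crashes one day after each firm crash.
firm = [i * 0.001 for i in range(100)]
market = [i * 0.001 + 0.0005 for i in range(100)]
for t in (10, 30, 50, 70):
    firm[t] = -0.10
    market[t + 1] = -0.10
p1 = lagged_coexceedance(firm, market, lag=1)
```

Evaluating such a frequency across a grid of lags yields a contagion profile over time, which is the kind of object the aggregate measures (Aggregate Excess CoSP, CoSP-weighted time-lag) summarize.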
This paper studies insurance demand for individuals with limited financial literacy. We propose uncertainty about insurance payouts, resulting from contract complexity, as a novel channel that affects decision-making of financially illiterate individuals. Then, a trade-off between second-order (risk aversion) and third-order (prudence) risk preferences drives insurance demand. Sufficiently prudent individuals raise insurance demand upon an increase in contract complexity, while the effect is reversed for less prudent individuals. We characterize competitive market equilibria that feature complex contracts since firms face costs to reduce complexity. Based on the equilibrium analysis, we propose a monetary measure for the welfare cost of financial illiteracy and show that it is mainly driven by individuals’ risk aversion. Finally, we discuss implications for regulation and consumer protection.
Through the lens of market participants' objective to minimize counterparty risk, we provide an explanation for the reluctance to clear derivative trades in the absence of a central clearing obligation. We develop a comprehensive understanding of the benefits and potential pitfalls with respect to a single market participant's counterparty risk exposure when moving from a bilateral to a clearing architecture for derivative markets. Previous studies suggest that central clearing is beneficial for single market participants in the presence of a sufficiently large number of clearing members. We show that three elements can render central clearing harmful for a market participant's counterparty risk exposure regardless of the number of its counterparties: 1) correlation across and within derivative classes (i.e., systematic risk), 2) collateralization of derivative claims, and 3) loss sharing among clearing members. Our results have substantial implications for the design of derivatives markets, and highlight that recent central clearing reforms might not incentivize market participants to clear derivatives.
Central clearing counterparties (CCPs) were established to mitigate default losses resulting from counterparty risk in derivatives markets. In a parsimonious model, we show that clearing benefits are distributed unevenly across market participants. Loss sharing rules determine who wins or loses from clearing. Current rules disproportionately benefit market participants with flat portfolios. Instead, those with directional portfolios are relatively worse off, consistent with their reluctance to voluntarily use central clearing. Alternative loss sharing rules can address cross-sectional disparities in clearing benefits. However, we show that CCPs may favor current rules to maximize fee income, with externalities on clearing participation.
Different insurance activities exhibit different levels of persistence of shocks and volatility. For example, life insurance is typically more persistent but less volatile than non-life insurance. We examine how diversification among life, non-life insurance, and active reinsurance business affects an insurer's contribution and exposure to the risk of other companies. Our model shows that a counterparty's credit risk exposure to an insurance group substantially depends on the relative proportion of the insurance group's life and non-life business. The empirical analysis confirms this finding with respect to several measures for spillover risk. The optimal proportion of life business that minimizes spillover risk decreases with leverage of the insurance group, and increases with active reinsurance business.
The decision to pursue a doctorate, and the subsequent process of realizing it, marks a special phase in the life course of researchers. The individual's biography becomes the frame of reference for shaping this phase, and (attempts at) steering one's own life course become the object of individual career aspirations. This situates the present contribution to the 2016 annual conference of the Adult Education Section of the DGfE within the theme "Biografie – Lebenslauf – Generation" (Biography – Life Course – Generation); it takes up some results of a recently published longitudinal study (Kubsch 2016). ...
This article claims that the institution of the market is structurally exploitative. It allows for exploitation, it encourages actors to engage in it, and it even pressures them to do so. Within Marxism, this claim is well known, but can it be defended without relying strongly on Marxist concepts and arguments? I will answer in the affirmative and identify the forces of market competition as a fundamental source of exploitative pressures...
Happy Birthday, CAMPUSERVICE : 15th anniversary of the Goethe University subsidiary
(2017)
Attenuated NOX2 expression impairs ROS production during the hypoinflammatory phase of sepsis
(2012)
Background: The multicomponent phagocytic NADPH oxidase produces reactive oxygen species (ROS) after activation by microorganisms or inflammatory mediators. In the hypoinflammatory phase of sepsis, macrophages are alternatively activated by contact with apoptotic cells or their secretion products. This inhibits NADPH oxidase, leads to attenuated ROS production, and furthermore contributes, among other effects, to a hyporeactive host defense. Due to this immune paralysis, sepsis patients suffer from recurrent and secondary infections. We focused on the catalytic subunit of NADPH oxidase, the transmembrane protein NOX2. We hypothesized that after induction of sepsis the expression of NOX2 is reduced and hence ROS production is decreased.
Methods: We induced polymicrobial sepsis in mice by cecal ligation and puncture. The ability of peritoneal macrophages (PMs) to produce ROS was determined by FACS via a hydroethidine assay. NOX2 expression of PMs was determined by western blot and qPCR. To elucidate the mechanism causing mRNA destabilization, we performed in vitro experiments using J774 macrophages. To obtain an alternatively activated phenotype, macrophages were stimulated with conditioned medium from apoptotic T cells (CM). Luciferase assays revealed a 3'UTR-dependent regulation of NOX2 mRNA stability. Assuming that a protein is involved in the mRNA degradation, we performed an RNA pulldown with biotinylated NOX2-3'UTR constructs followed by mass spectrometry. We verified the role of SYNCRIP by an siRNA approach. Additionally, we overexpressed NOX2 in J774 cells and analyzed ROS production (with or without CM treatment) by FACS.
Results: We found impaired expression of NOX2 at the RNA and protein level, along with decreased ROS production, after induction of sepsis in mice as well as after stimulating J774 macrophages with CM of apoptotic T cells. This is due to a time-dependent degradation of NOX2 mRNA that depends on SYNCRIP, an RNA-binding protein which stabilizes NOX2 mRNA through binding to its 3'UTR under normal conditions. In line with this, knockdown of SYNCRIP also decreases NOX2 mRNA expression. We assume that a CM-dependent modification or degradation of SYNCRIP prevents its stabilizing function. As overexpression of NOX2 restores ROS production of CM-treated J774 cells, we conclude that NOX2 expression is crucial for maintaining NADPH oxidase activity during the hypoinflammatory phase of sepsis.
Conclusion: Our data imply a regulatory impact of SYNCRIP on NOX2 stability during the late phase of sepsis. Therefore, further understanding of the regulation of NADPH oxidase could lead to the design of a therapy to reconstitute NADPH oxidase function, finally improving immune function in sepsis patients.
Empathy is a multidimensional psychological construct comprising several facets (Decety & Ickes, 2011). Empathy can be assumed to be an important mechanism for connecting people with one another and enabling group cohesion (Rameson & Lieberman, 2009). Beyond the ability to reconstruct another person's experiential world with one's own mental representations, it triggers emotions that closely resemble those of the other person. At the same time, this emotional experience differs, for example, from pure emotional contagion, because a self-other differentiation takes place and, in an empathic episode, the awareness that one feels this way because of the other person's feelings always remains in the foreground (Altmann, 2015). Imitation plays an important role here in grasping the other person's experiential world (Meltzoff & Decety, 2003). Empathic action and understanding are particularly important for teachers (Tausch & Tausch, 2008). Various studies have shown positive effects of empathy on students and on teaching quality: students become more confident, there is less anxiety in the classroom, and the quality of classroom contributions rises (cf. Tausch & Tausch, 1998). Empathy itself consists of state and trait components, so at least parts of it are trainable (Butters, 2010). The teaching-learning format Service Learning (SL) appears to be one potential way to foster empathy. This is a course concept in which academic, usually subject-specific, content is combined with volunteer work outside the university (Reinders, 2016). Research from the Anglo-American world indicates that empathy can be fostered through such formats (Lundy, 2007; Wilson, 2011).
Since most measures of empathy are based on self-report and can therefore capture components such as affective resonance only indirectly, the first step of this work was to develop an objective, video-based test, which was then to be administered together with other measurement instruments. In two expert surveys, ten video clips, each with four items and corresponding response options, were extracted from a pool of video sequences showing classroom situations. In a subsequent validation with students of Goethe University (N = 112), these vignettes were administered together with various measures of empathy and the correlations were analyzed. The reliabilities of the three test scores in the two assembled test versions ranged from Cronbach's α = .53 (behavior score of test version 1) to α = .76 (intensity score of test version 2). Correlations with all questionnaires were in the expected direction, with small to medium effects. Item difficulties for most items lay between 50 and 65, and item-total correlations between .18 and .70.
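The reliabilities reported here are Cronbach's α coefficients. For reference, the coefficient can be computed directly from item scores; the following is a minimal sketch with hypothetical scores, not the study's data:

```python
def cronbach_alpha(items):
    # items: one list of scores per item, all covering the same respondents.
    k = len(items)
    n = len(items[0])
    def var(xs):
        # Sample variance with Bessel's correction.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores of four respondents on three items
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]])
```

Higher inter-item consistency drives the item-variance share down and α toward 1, which is why dropping poorly correlated items (as in the rescoring described below) can move the reported reliabilities so strongly.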
In the next development step, the vignettes were presented in newly assembled test versions to student teachers only (N = 41), and video recordings of the participants' faces were additionally made in order to analyze them with FaceReader and capture the facet of affective resonance. With a new scoring, the reliabilities of the test versions now ranged from α = .24 (emotion recognition score, pre-test version) to α = .57 (intensity score, pre-test version), and from α = .10 (emotion recognition score, post-test version) to α = .77 (intensity score, post-test version). The difficulties and item-total correlations also changed after the scoring was adapted, now ranging in both test versions from 30 to 89 (difficulty) and from .0 to .5 (item-total correlation). The FaceReader analyses showed emotions congruent with the self-report data and the rated intensities in the video sequences only in part, but then with medium to large effects, so that affective resonance can be assumed to some extent. Since the internal consistencies deteriorated compared with the validation, the compositions of the test versions were reverted to the validation versions for use in the field.
In the field study, student teachers were recruited in SL and non-SL courses and compared with one another. A total of N = 68 persons participated at three measurement points (n = 30 in SL and n = 38 in non-SL seminars). The analyses showed no significant differences between the groups on the instruments used. Over time, after Bonferroni correction, there was only one significant effect (F(2, 52) = 6.57, p = .003, η² = .20). These results are presumably due above all to methodological limitations of the developed test procedure and to remaining room for its improvement. Further possibilities are discussed.
The FIRE AND ICE Trial (ClinicalTrials.gov, identifier NCT01490814) was initiated in 2012 as a multicenter, randomized, head‐to‐head comparison of radiofrequency current (RFC) and cryoballoon catheter ablation for the treatment of patients with drug‐refractory symptomatic paroxysmal atrial fibrillation (AF). Six years on, it remains the largest, randomized comparison of safety and efficacy between 2 catheter ablation modalities used in the treatment of patients with AF. This landmark trial not only established noninferiority between cryoballoon and RFC ablation for pulmonary vein isolation (PVI) with regard to the study's efficacy and safety primary end points,1 but also, it evaluated secondary end points that were critical for a representative study interpretation. ...
Aims: The primary safety and efficacy endpoints of the randomized FIRE AND ICE trial have recently demonstrated non-inferiority of cryoballoon vs. radiofrequency current (RFC) catheter ablation in patients with drug-refractory symptomatic paroxysmal atrial fibrillation (AF). The aim of the current study was to assess outcome parameters that are important for the daily clinical management of patients using key secondary analyses. Specifically, reinterventions, rehospitalizations, and quality-of-life were examined in this randomized trial of cryoballoon vs. RFC catheter ablation.
Methods and results: Patients (374 subjects in the cryoballoon group and 376 subjects in the RFC group) were evaluated in the modified intention-to-treat cohort. After the index ablation, log-rank testing over 1000 days of follow-up demonstrated that there were statistically significant differences in favour of cryoballoon ablation with respect to repeat ablations (11.8% cryoballoon vs. 17.6% RFC; P = 0.03), direct-current cardioversions (3.2% cryoballoon vs. 6.4% RFC; P = 0.04), all-cause rehospitalizations (32.6% cryoballoon vs. 41.5% RFC; P = 0.01), and cardiovascular rehospitalizations (23.8% cryoballoon vs. 35.9% RFC; P < 0.01). There were no statistical differences between groups in the quality-of-life surveys (both mental and physical) as measured by the Short Form-12 health survey and the EuroQol five-dimension questionnaire. There was an improvement in both mental and physical quality-of-life in all patients that began at 6 months after the index ablation and was maintained throughout the 30 months of follow-up.
Conclusion: Patients treated with cryoballoon as opposed to RFC ablation had significantly fewer repeat ablations, direct-current cardioversions, all-cause rehospitalizations, and cardiovascular rehospitalizations during follow-up. Both patient groups improved in quality-of-life scores after AF ablation.
Clinical trial registration: ClinicalTrials.gov identifier: NCT01490814.
In the extensive and differentiated body of Bruegel research, religious and moral interpretations dominate, suggesting a pessimistic worldview on the part of the artist. In contrast, the present study prioritizes secular and material readings that could pave the way for an optimistic worldview. This ambitious shift of emphasis is legitimated in principle by the contemporary context and illustrated in practice by pictorial evidence.
A more concrete art-historical positioning of Bruegel brings out both his systematic differentiation from his Romanist competitors and his original contribution to the further development of the non-Romanist painting of his time. His innovative pictorial concepts share a common denominator: the expanded involvement of the viewer, who is elevated from passive recipient to active interpreter, not least with the aim of making contemporary reality and its contradictions, deficits, and alternatives open to discussion.
A modified socio-historical positioning of Bruegel starts from the dual character of the contemporary transformation processes in the Spanish Netherlands. As a rule, religious controversies and political conflicts are placed in the foreground, that is, the destructive potentials of social change and their reflection in Bruegel's work; accordingly, the artist tends to be characterized as a religious dissident and political opponent. Here, by contrast, economic expansion and economic dynamism are regarded as paramount, a dynamism that turned Bruegel's home region, the metropolis of Antwerp and its surroundings, into a leading European region. This directs attention to the reflection of the socio-economic context in Bruegel's oeuvre, which makes visible the development of society's productive forces and the role of labor, and not only agrarian labor.
This prepares the ground for the question of secular critique and social utopia in the paintings to which the two following main parts of the study are devoted: the “Fall of Icarus” (Part B) and the “Tower of Babel” (Part C). Both chapters engage intensively with reference texts and reference images as well as with the reception history, and each culminates in the outline of an interpretation that competes with the traditional theses of the punishment of human hubris:
The competing interpretation of the “Fall of Icarus” concludes that the socio-historical substance of the painting is to be discovered in the surpassing of ancient forms of labor and transport by early modern ones. A latent social-utopian perspective is suggested by the indications of praise for labor and technology imagined there.
The competing interpretation of the Tower of Babel paintings culminates in the proposal to derive their social-utopian potentials from the transition from the “Vienna Tower of Babel” to the “Rotterdam Tower of Babel”. It concludes that Bruegel's Tower of Babel sequence makes conceivable the development of a utopia of labor and a utopia of architecture, as well as their synthesis into a social utopia. The optimism, however, is by no means unbroken, and the construct not free of skepticism, since threats to the project from within and without are part of the pictorial action.
Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. Perceived loudness is mediated by the inner hair cells of the cochlea, which are driven only very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active sound amplification performed by outer hair cells (OHCs): so-called spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes in cochlear physiology. We show that a short exposure to perceptually unobtrusive LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, thus strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing.
1. The lifespan and developmental capacity of unfertilized sea urchin eggs in caffeine solutions (1:250 to 1:2000) is markedly prolonged compared with that in normal seawater.
2. The optimum of this life prolongation lies at a concentration of roughly 1:1000 to 1:1250.
3. Under the influence of caffeine the cell nucleus can “swell”, up to three times its normal size.
4. At constant cell size, caffeine reduces the viscosity of the cell surface and, in jelly-free eggs that touch one another, allows the eggs to attach to and flatten against each other, up to rows of eggs and pavement-like formations. After being returned to seawater and fertilized, such eggs can still develop into plutei of all grades of normality on the fourth and fifth day.
5. Fusion of the eggs in these rows can give rise to brown-colored giant eggs that are no longer capable of development but are highly resistant to disintegration.
Thoughts and experiments on the serious use of scientific film in university teaching
(1956)
HIV ist heutzutage eine gut behandelbare, chronische Erkrankung. Insbesondere bei chronischen Erkrankungen ist es entscheidend, auch die psychischen und physischen Auswirkungen auf die Lebenssituation zu untersuchen und dabei auch geschlechtsspezifische Aspekte in der gesundheitsbezogenen Lebensqualität von PLWH mit einzuschließen.
Ziel dieser monozentrischen Beobachtungsstudie ist es, die gesundheitsbezogene Lebensqualität von Patientinnen und Patienten des HIVCENTERs Frankfurt darzustellen und diesbezügliche Einflussfaktoren zu identifizieren. Im Zuge dessen wurden zusätzlich geschlechtsspezifische Unterschiede ausgewertet. Der Mental Component Score stellte die primäre Zielgröße der Studie dar, der Physical Component Score die sekundäre.
Zur Erhebung der gesundheitsbezogenen Lebensqualität wurde der SF-12v2 Fragebogen verwendet, der insgesamt zwölf Fragen beinhaltet. Inhaltlich gliedert sich der Bogen in acht Skalen und zwölf Items, die den Mental- und Physical Component Score bilden. Zur näheren Erfassung der aktuellen Lebenssituation der Patientinnen und Patientinnen des HIVCENTERs Frankfurt, wurde ein eigens für die Studie entwickelter Fragebogen verwendet. Dieser erfasste mit 19 Fragen unter anderem soziodemographische Daten, sowie Parameter zu Religiosität oder Sexualität. Retrospektive Daten aus der Epidem-Datenbank des HIVCENTERs und aus den Patientinnen- und Patientenakten wurden ebenfalls in die Auswertung einbezogen.
Die statistische Auswertung beinhaltete neben deskriptiven Methoden, einfache Varianzanalysen für geschlechtsunabhängige Zusammenhänge und Varianzanalysen mit Interaktion für das Geschlecht zur Ermittlung von geschlechtsspezifischen Einflussgrößen. Des Weiteren wurden Spearmankorrelationen berechnet und zur Identifikation von potenziellen Prädiktoren Regressionen mit Rückwärtsausschluss durchgeführt. Für beide Zielgrößen wurde identisch verfahren. Alle statistischen Tests waren zweiseitig und nutzen ein Signifikanzniveau von alpha=5%.
Im Zeitraum von September 2016 bis Mai 2017 wurden insgesamt 275 Patientinnen und Patienten in die Studie eingeschlossen, darunter 123 Frauen, 150 Männer und 2 transgender Personen. Letztere wurden aufgrund der geringen Fallzahl nicht über die deskriptive Statistik hinaus in den Berechnungen berücksichtigt. Das durchschnittliche Alter in der Studienpopulation betrug 46 Jahre. Frauen hatten ein Durchschnittsalter von 44, Männer von 48 Jahren. 97% der Patientinnen und Patienten waren zum Erhebungszeitpunkt unter antiretroviraler Therapie. Im Durchschnitt erzielten die Teilnehmerinnen und Teilnehmer im Vergleich mit der Referenzpopulation einen unterdurchschnittlichen Mental Component Score von 46. Die Frauen der Studie erzielten einen signifikant schlechteren MCS als die Männer (45 vs. 48; p=0,02). Im Kontext mit den übrigen Prädiktoren des Regressionsmodells erreichten Frauen einen um durchschnittlich 13 Punkte schlechteren MCS als Männer (B=-13; p=<0,001). Als geschlechtsunabhängige negative Prädiktoren auf den MCS stellten sich unter anderem regelmäßiger Alkohol- und Drogenkonsum heraus, sowie das Unterlassen von regelmäßigem Sport oder eine negative Zukunftsaussicht. Als geschlechtsabhängiger negativer Prädiktor erwies sich bei den Frauen eine afrikanische versus westeuropäische Herkunft (B=-5; p=0,028). Arbeitslosigkeit stellte sich bei Männern als geschlechtsabhängiger negativer Prädiktor heraus (B=-5; p=0,033).
This dissertation demonstrates that people living with HIV (PLWH) still have a below-average health-related quality of life today and that, moreover, marked sex-dependent differences exist. HIV-positive women in this study achieved significantly worse Mental Component Scores than men and were over-represented with respect to negative influencing factors. The questionnaires used here enable treating physicians to monitor HRQoL regularly within routine consultations and to assess its course over time. In addition, screening for the negative predictors of HRQoL identified in this work would be possible, allowing the patients concerned to be offered target-group-specific support.
Power and law in enlightened absolutism : Carl Gottlieb Svarez' theoretical and practical approach
(2012)
The term Enlightened Absolutism reflects a certain tension between its two components. This tension is in a way a continuation of the dichotomy between power on the one hand and law on the other. The present paper shall provide an analysis of these two concepts from the perspective of Carl Gottlieb Svarez, who, in his position as a high-ranking Prussian civil servant and legal reformer, had an unparalleled influence on the legislative history of the Prussian states towards the end of the 18th century. Working side by side with Johann Heinrich Casimir von Carmer, who held the post of Prussian minister of justice from 1779 to 1798, Svarez was able to put his talent for reforming and legislating to use. From 1780 to 1794 he was primarily responsible for the elaboration of the codification of Prussian private law – the "Allgemeines Landrecht für die Preußischen Staaten" of 1794. In the present paper, Svarez' approach to the relation between law and power shall be analysed on two different levels. Firstly, on a theoretical level, the reformer's thoughts and reflections as laid down in his numerous works, papers and memorandums shall be discussed. Secondly, on a practical level, the question of the extent to which he implemented his ideas in Prussian legal reality shall be explored.
"A manager in the minds of doctors" : a comparison of new modes of control in European hospitals
(2013)
Background: Hospital governance increasingly combines management and professional self-governance. This article maps the newly emergent modes of control in a comparative perspective and aims to better understand the relationship between medicine and management as hybrid and context-dependent. Theoretically, we critically review approaches to the managerialism–professionalism relationship; methodologically, we expand cross-country comparison towards the meso-level of organisations; and empirically, the focus is on processes and actors in a range of European hospitals.
Methods: The research is explorative and was carried out as part of the FP7 COST action IS0903 Medicine and Management, Working Group 2. Comprising seven European countries, the focus is on doctors and public hospitals. We use a comparative case study design that primarily draws on expert information and document analysis as well as other secondary sources.
Results: The findings reveal that managerial control is not simply an external force but increasingly integrated in medical professionalism. These processes of change are relevant in all countries but shaped by organisational settings, and therefore create different patterns of control: (1) ‘integrated’ control with high levels of coordination and coherent patterns for cost and quality controls; (2) ‘partly integrated’ control with diversity of coordination on hospital and department level and between cost and quality controls; and (3) ‘fragmented’ control with limited coordination and gaps between quality control more strongly dominated by medicine, and cost control by management.
Conclusions: Our comparison highlights how organisations matter and brings into focus the crucial relevance of 'coordination' between medicine and management across the levels (hospital/department) and the substance (cost/quality-safety) of control. Consequently, coordination may serve as a taxonomy of emergent modes of control, opening new directions for cost-efficient and quality-effective hospital governance.
Background: As health workforce policy is gaining momentum, data sources and monitoring systems have significantly improved in the European Union and internationally. Yet data remain poorly connected to policy-making and implementation and often do not adequately support integrated approaches. This brings the importance of governance and the need for innovation into play.
Case: The present case study introduces a regional health workforce monitor in the German Federal State of Rhineland-Palatinate and seeks to explore the capacity of monitoring to innovate health workforce governance. The monitor applies an approach from the European Network on Regional Labour Market Monitoring to the health workforce. The novel aspect of this model is an integrated, procedural approach that promotes a 'learning system' of governance based on three interconnected pillars: mixed methods and bottom-up data collection, strong stakeholder involvement with complex communication tools, and shared decision- and policy-making. Selected empirical examples illustrate the approach and the tools, focusing on two aspects: the connection between sectoral, occupational and mobility data to analyse skill/qualification mixes and supply–demand matching, and the connection between monitoring and stakeholder-driven policy.
Conclusion: Regional health workforce monitoring can promote effective governance in high-income countries like Germany, which have an overall high density of health workers but a maldistribution of staff and skills. The regional stakeholder networks are cost-effective and easily accessible and might therefore also appeal to low- and middle-income countries.
Background: Women’s participation in medicine and the need for gender equality in healthcare are increasingly recognised, yet little attention is paid to leadership and management positions in large publicly funded academic health centres. This study illustrates such a need, taking the case of four large European centres: Charité – Universitätsmedizin Berlin (Germany), Karolinska Institutet (Sweden), Medizinische Universität Wien (Austria), and Oxford Academic Health Science Centre (United Kingdom).
Case: The percentage of female medical students and doctors in all four countries is now well within the 40–60% gender-balance zone. Women are less well represented among specialists and remain significantly under-represented among senior doctors and full professors. All four centres have made progress in closing the gender leadership gap on boards and other top-level decision-making bodies, but a gender leadership gap remains. The level of gender balance achieved varies significantly between the centres and largely mirrors country-specific welfare state models, with more equal gender relations in Sweden than in the other countries. Notably, there are also similar trends across countries and centres: gender inequality is stronger within academic enterprises than within hospital enterprises, and stronger in middle management than at the top level. These novel findings reveal fissures in the 'glass ceiling' at top-level management, while the barriers for women shift to middle-level management and remain strong in academic positions. The uneven shifts in the leadership gap are highly relevant and have policy implications.
Conclusion: Setting gender balance objectives exclusively for top-level decision-making bodies may not effectively promote a wider goal of gender equality. Academic health centres should pay greater attention to gender equality as an issue of organisational performance and good leadership at all levels of management, with particular attention to academic enterprises and newly created management structures. Developing comprehensive gender-sensitive health workforce monitoring systems and comparing progress across academic health centres in Europe could help to identify the gender leadership gap and utilise health human resources more effectively.
Solute carriers (SLCs) are implicated in various human diseases and are promising pharmaceutical targets, but more structural and functional information on SLCs is required to expand their use in drug design and therapy. The 7-transmembrane segment inverted repeat (7-TMIR) fold was identified for SLC families 4, 23 and 26 in the last decade; a detailed analysis of the structure–function relationship of one of these families might therefore also yield insights into the other two. SVCT1 and SVCT2 from the SLC23 family are the sodium-dependent ascorbic acid transporters in humans, but structural analysis of the SLC23 family is based exclusively on two homologs – UraA from E. coli and UapA from A. nidulans – yielding two inward-facing and one occluded conformation. In combination with outward-facing conformations from SLC4 transporters, and additional information from the SLC26 family, an elevator transport mechanism was identified for all 7-TMIR proteins, but detailed mechanistic features of the transport remain elusive owing to the lack of multiple conformations of individual transporters.
To deepen the understanding of 7-TMIR protein structure and function, this study analyzed the transport mechanism of SLC23 transporters by two strategies. The first was the selection of alpaca-derived nanobodies and synthetic nanobodies against UraA as a prokaryotic model protein of the SLC23 family. The second strategy involved mutagenesis of UraA at functionally relevant positions with respect to the conformational change during transport. To this end, available structures of 7-TMIR proteins and more distantly related elevator transporters were analyzed and a common motif was identified: the alpha-helical inter-domain linkers. The rigid-body movement proposed for transport, combined with the characteristic alpha-helical secondary structure of the linkers connecting both rigid bodies, led to the hypothesis that the linkers are functionally relevant and that a conformational hinge is located in close proximity to them. These positions were identified and used to modulate the biophysical properties of the transporter. Mutagenesis at three relevant positions led to loss of transport function, and these UraA variants could be recombinantly produced and purified to further examine the underlying mechanistic effects. The variants UraA(G320P) and UraA(P330G) from the periplasmic inter-domain linker showed increased dimerization and thermal stability as well as substrate binding in solution. The substrate affinity of UraA(G320P) was found to be 5-fold higher than that of the wildtype. The solvent accessibility of the substrate-binding site in UraA(G320P) and UraA(P330G) revealed a reduced open probability, indicating an altered conformational space compared with wildtype UraA. This phenomenon was analyzed in more detail by differential hydrogen–deuterium exchange mass spectrometry; the results supported the hypothesis of a reduced open probability and gave further insights into the impact of the two mutations in the periplasmic inter-domain linker of UraA.
This thesis further presents strategies for phage-display selection of nanobodies with epitope bias and a post-selection analysis pipeline to identify nanobodies with the desired binding characteristics. Whole-cell transport inhibition highlighted binders of periplasmic epitopes and conformational selectivity. For one cytoplasmic-side binder, the cytoplasmic epitope could be identified by pulldown with inside-out membrane vesicles. Thermal stabilization of the target protein was analyzed by differential scanning fluorimetry (DSF) in the presence of two different nanobodies to identify simultaneous binding, indicated by additional thermal stabilization, or competition, indicated by intermediate melting temperatures. Combining epitope information with simultaneous DSF made it possible to identify the stabilization of different UraA conformations by a set of binders, and this presents a general nanobody selection strategy for other SLCs. Synthetic nanobodies (sybodies) were also included in the analysis pipeline, and Sy45 was identified as a promising candidate for co-crystallization, giving rise to wildtype UraA crystals under several conditions in the presence or absence of uracil. Similar crystals could be obtained in combination with UraA(G320P) and were further optimized to gain structural information on this mutant. The structure was solved by molecular replacement and the model refined at 3.1 Å resolution, confirming the cytoplasmic epitope of Sy45 as predicted by the selection pipeline. The stabilized conformation was inward-facing, similar to the reported UapA structure but significantly different from the previously reported inward-facing structure of UraA. The structure further confirmed the structural integrity of the UraA mutant G320P.
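The DSF-based pairwise logic described above (additional thermal stabilization indicating simultaneous binding, an intermediate melting temperature indicating competition) can be sketched as a toy decision rule. The function, the Tm values and the 1 °C noise margin below are hypothetical illustrations, not the thesis's actual analysis, which would also account for measurement error and binder concentrations.

```python
def classify_binder_pair(tm_a, tm_b, tm_ab, tol=1.0):
    """Toy epitope-binning call from DSF melting temperatures (in °C).

    tm_a / tm_b: target melting temperature with nanobody A or B alone;
    tm_ab: with both nanobodies present. tol is a hypothetical noise margin.
    """
    best_single = max(tm_a, tm_b)
    if tm_ab > best_single + tol:
        # Stabilization beyond the best single binder: both nanobodies
        # bind at once, i.e. they occupy non-overlapping epitopes.
        return "simultaneous binding"
    if min(tm_a, tm_b) - tol <= tm_ab <= best_single + tol:
        # Intermediate melting temperature: the binders compete
        # for an overlapping epitope.
        return "competition"
    return "inconclusive"

# Hypothetical readings: 58 °C with both binders exceeds either single Tm,
# while 51.5 °C falls between the two single-binder values.
print(classify_binder_pair(50.0, 53.0, 58.0))  # simultaneous binding
print(classify_binder_pair(50.0, 53.0, 51.5))  # competition
```

In practice such calls are made per binder pair across a panel, which is how the combination of epitope information and simultaneous-DSF data can map a set of binders onto distinct conformations.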
Despite the monomeric state of UraA in the structure, the gate domain aligned reasonably well with the gate domain of the previously published dimeric UraA structure in the occluded conformation, allowing a detailed analysis of the conformational transition of UraA from inward-facing to occluded as a single rigid-body movement. In contrast to a previously reported transport mechanism, little movement was observed in the gate domain of UraA. Rotation of the core domain around an axis parallel to the substrate barrier was found to explain the major part of the conformational transition from inward-facing to occluded, experimentally supporting the mechanism hypothesized by Chang et al. (2017). Additionally, the conformational hinge around position G320 in UraA could be identified, as could the impact on the conformational transition of the backbone rigidity introduced by the highly conserved proline residue at position 330. This position was found to serve as an anchoring point for the inter-domain linker and to determine the coordinated movement of inter-domain linker and core domain. The functional analysis further highlighted the requirement for alpha-helical secondary structure within the inter-domain linker, which serves as an amphipathic structural entity that can adjust to changing core–gate domain distances and angles during transport by extension/compression or bending while preserving the rigid linkage.
The strategies applied here to modulate the conformational space of UraA by mutagenesis at the hinge positions in the inter-domain linkers are transferable to other transporters and might facilitate their structural and functional characterization.
Further, this study discusses the conformational thermostabilization of UraA, which manifests as increased melting temperatures upon restriction of its conformational freedom. The term 'conformational thermostabilization' introduced by Serrano-Vega et al. (2007) could be experimentally supported, and the direct correlation between conformational freedom and thermostabilization was qualitatively analyzed for UraA. The concept of conformational thermostabilization might aid the characterization of other dynamic transport systems as well.
Highlights
• Germany plans more long-distance water transfers to secure its drinking water supply.
• Long-distance water transfers can produce lock-ins that limit adaptive water governance.
• Our interdisciplinary case study shows how lock-ins emerge over different spaces and times.
• Commercialisation of water but also local protests contributed to various lock-ins.
• We therefore call for context-specific assessments of the potentials and risks of long-distance water transfers (LDWT).
Abstract
Germany plans to expand water transfers over long distances in light of numerous pressing challenges for drinking water supply. Research on inter- and intra-basin water transfers warns, however, that major investments in large-scale infrastructure systems, accompanied by institutional logics and political interests, often lead to so-called lock-ins. As a consequence, long-distance water transfers can limit the potential for adaptive water governance in the supply areas involved for decades, with negative impacts for people and the environment. Using a case study in Germany as an example, we investigated when, where and how such lock-ins around long-distance water transfers emerge. In the infrastructural development of the Elbaue-Ostharz transfer system we found various lock-ins that overlap in space and time. Some are located at the centre of the infrastructure, others at its margins; commercialisation of the water sector as well as hydraulic and hygienic concerns interlock with local protests in such a way that the expansion of the long-distance water transfer infrastructure is continuously presented as imperative. Our findings contribute to a relational understanding of lock-ins of long-distance water transfers as contingent and diverse processes. Given the widespread occurrence of lock-ins, we argue for a context-specific assessment of the potentials and risks of long-distance water transfers in times of multiple crises.