University publications
The following is an engagement with the newspaper article by Patrick Bahners (FAZ, 5 September 2015) and, at the same time, with his most important authority, Gerd Althoff (FmSt 48, 2014, pp. 261-76: "Das Amtsverständnis Gregors VII. und die neue These vom Friedenspakt in Canossa"). A few remarks on both texts are called for. The present version of my account was preceded by earlier versions, sent to Jürgen Petersohn and others, then, in expanded form, to Nikolas Jaspert, and most recently to Folker Reichert. At the urging of colleagues and friends (and contrary to my original intention), I am hereby putting this expanded version online.
Aims: We sought to describe perfusion dyssynchrony analysis, a novel approach that specifically exploits the high temporal resolution of stress perfusion CMR by detecting differences in the temporal distribution of contrast-agent wash-in across the left ventricular wall.
Methods and results: Ninety-eight patients with suspected coronary artery disease (CAD) were retrospectively identified. All patients had undergone perfusion CMR at 3T and invasive angiography with fractional flow reserve (FFR) measurement of lesions visually judged to have >50% stenosis. Stress images were analysed using four different perfusion dyssynchrony indices: the variance and coefficient of variation of the time to maximum signal upslope (V-TTMU and C-TTMU) and the variance and coefficient of variation of the time to peak myocardial signal enhancement (V-TTP and C-TTP). Patients were classified according to the number of vessels with haemodynamically significant CAD, indicated by FFR <0.8. All indices of perfusion dyssynchrony were capable of identifying the presence of significant CAD. C-TTP >10% identified CAD with a sensitivity of 0.889 and a specificity of 0.857 (P < 0.0001). All indices correlated with the number of diseased vessels. C-TTP >12% identified multi-vessel disease with a sensitivity of 0.806 and a specificity of 0.657 (P < 0.0001). C-TTP was also the dyssynchrony index with the best inter- and intra-observer reproducibility. The perfusion dyssynchrony indices showed only weak correlations with other invasive and non-invasive measures of ischaemia severity, including FFR, visual ischaemic burden, and myocardial perfusion reserve (MPR).
Conclusion: These findings suggest that perfusion dyssynchrony analysis is a robust novel approach to the analysis of first-pass perfusion and has the potential to add complementary information to aid assessment of CAD.
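The timing indices described above can be sketched in a few lines. The sketch below assumes per-segment first-pass signal-intensity curves and takes the index names literally (variance and percent coefficient of variation of per-segment timing parameters); the study's actual preprocessing and segmentation are not reproduced here.

```python
import numpy as np

def dyssynchrony_indices(curves, dt=1.0):
    """Compute simple perfusion-dyssynchrony indices from per-segment
    first-pass signal-intensity curves (array: segments x frames).

    Assumed definitions (inferred from the index names, not taken
    verbatim from the paper):
      TTP  - time of the maximum signal in each segment
      TTMU - time of the maximum frame-to-frame upslope
      V-*  - variance across segments; C-* - coefficient of variation (%).
    """
    curves = np.asarray(curves, dtype=float)
    ttp = np.argmax(curves, axis=1) * dt                     # time to peak
    ttmu = np.argmax(np.diff(curves, axis=1), axis=1) * dt   # time to max upslope
    idx = {}
    for name, t in (("TTP", ttp), ("TTMU", ttmu)):
        idx["V-" + name] = float(np.var(t))
        idx["C-" + name] = float(100.0 * np.std(t) / np.mean(t))
    return idx
```

A perfectly synchronous wash-in (all segments peaking at the same frame) yields C-TTP = 0; segments with delayed wash-in, as in ischaemic territories, raise the index.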
The work presented in this thesis aims to raise the degree of automation in analog circuit design. To this end, a framework was developed that provides the mechanisms needed to carry out fully automated analog circuit synthesis, i.e., the construction of an analog circuit fulfilling all previously defined (electrical) specifications. Today, analog circuit design is a very time-consuming process compared to a digital design flow. Owing to its discrete nature, the digital design process is highly automated and thus very efficient compared to analog circuit design. In modern Very-Large-Scale Integration (VLSI) circuits, the analog parts usually occupy only a small portion of the overall chip area; nevertheless, this small portion is known to consume a major part of the design workforce. Paired with ever-shorter product cycles, the time needed to develop the analog parts of an integrated circuit (IC) becomes a determining factor. Beyond this, the ongoing progress in semiconductor processing technologies promises more speed with less power consumption on smaller areas, forcing IC developers to keep pace with the technology nodes in order to remain competitive. Analog circuitry is inherently hard to reuse, as porting from one technology node to another imposes critical changes in operating conditions (e.g., supply voltage), mostly leading to a full redesign of most analog modules. This productivity gap between digital and analog design is the primary motivation for this thesis. Because commercial sizing tools are available, this work deliberately focuses on the construction of circuit topologies, as distinct from parameter synthesis, which can be handled by a dedicated sizing tool. The focus on circuit construction enables the development of a framework that supports a full design-space exploration.
This thesis describes the concepts and methods needed to realize a deterministic, explorative analog synthesis framework. In addition, a reference implementation is presented that demonstrates its applicability in current analog design flows.
This study examines the urban heat island (UHI) of Brussels for both current (2000–2009) and projected future (2060–2069) climate conditions by employing very high resolution (250 m) modelling experiments with the urban boundary-layer climate model UrbClim. Meteorological parameters related to the intensity of the UHI are identified, and it is investigated how these parameters and the magnitude of the UHI evolve under two plausible trajectories for future climate conditions. UHI intensity is found to be strongly correlated with the inversion strength in the lowest 100 m of the atmosphere. The results for the future scenarios indicate that the magnitude of the UHI is expected to decrease slightly due to global warming. This can be attributed to increased incoming longwave radiation, caused by higher air temperature and humidity values. The presence of the UHI also has a significant impact on the frequency of extreme temperature events in the city area, both in present and future climates, and exacerbates the impact of climate change on the urban population, as the number of heat-wave days in the city increases twice as fast as in the rural surroundings.
The frequency of intensional and non-first-order definable operators in natural languages constitutes a challenge for automated reasoning with the kind of logical translations that are deemed adequate by formal semanticists. Whereas linguists employ expressive higher-order logics in their theories of meaning, the most successful logical reasoning strategies with natural language to date rely on sophisticated first-order theorem provers and model builders. In order to bridge the fundamental mathematical gap between linguistic theory and computational practice, we present a general translation from a higher-order logic frequently employed in the linguistics literature, two-sorted Type Theory, to first-order logic under Henkin semantics. We investigate alternative formulations of the translation, discuss their properties, and evaluate the availability of linguistically relevant inferences with standard theorem provers in a test suite of inference problems stated in English. The results of the experiment indicate that translation from higher-order logic to first-order logic under Henkin semantics is a promising strategy for automated reasoning with natural languages.
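The core of such a translation can be illustrated with the standard applicative encoding, under which higher-order application becomes a first-order function symbol and Henkin models of the typed theory correspond to first-order models of the image. The sketch below is a toy illustration of that single idea (the `app` and `holds` names are our own); the paper's full translation also handles types, lambda abstraction, and quantifiers.

```python
def to_fol(term):
    """Translate an application term of a higher-order logic into a
    first-order term via the applicative encoding: every higher-order
    application f(x) becomes the first-order term app(f, x).
    Terms are ('app', fun, arg) tuples or constant/variable strings.
    (Illustrative sketch only; types and binders are omitted.)"""
    if isinstance(term, str):          # constant or variable
        return term
    tag, fun, arg = term
    assert tag == "app"
    return f"app({to_fol(fun)}, {to_fol(arg)})"

def to_fol_formula(term):
    """A term of the type of truth values is wrapped in a first-order
    predicate (here called 'holds') to obtain a FOL formula."""
    return f"holds({to_fol(term)})"
```

For example, the curried higher-order term love(mary)(john) is flattened into the first-order term app(app(love, mary), john), which a standard first-order theorem prover can process.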
The following paper aims to reconstruct the internal structure that constitutes meaningful (or intentional) human actions, as such a reconstruction can be carried out from the standpoint of Karl-Otto Apel's transcendental pragmatics of language. Highlighting (I) the decisive role that language and discourse play in the very constitution of intentional actions, we proceed to make explicit an internal structure of validity claims similar to the one found in explicit discourse, as Karl-Otto Apel's classical accounts did. After discussing some criticisms that can be raised against this reconstruction (II), we finally arrive at some conclusions concerning the presuppositions of non-linguistic actions (III).
This guide was produced as part of the project "IndUK – Individuelles Umwelthandeln und Klimaschutz – Ergebnisintegration und transdisziplinäre Verwertung von Erkenntnissen aus der SÖF-Forschung zu den sozialen Dimensionen von Klimaschutz und Klimawandel" (IndUK – individual environmental action and climate protection: integrating results and transdisciplinary use of findings from SÖF research on the social dimensions of climate protection and climate change). The IndUK project was funded by the German Federal Ministry of Education and Research (BMBF) within the funding priority Social-Ecological Research.
In the context of the debate on the "globalization of management" and the resulting thesis of a transnational class, this paper examines the significance of international career experience among bank executives in Germany and worldwide. Previous research (e.g. Pohlmann 2009) argues that, in the top 100 industrial companies in the USA, East Asia, and Germany, careers in middle and top management are hardly internationalized and that in-house careers are the rule. Our own exploratory study suggests that the situation in the German and the global banking sector is different. In Germany in particular, top careers are, in contrast to those in industrial companies, markedly more international, which points to a different configuration of personnel in the field of the globally networked financial sector. In the German as well as the global financial sector, we may thus be dealing with the phenomenon of a "transnationalization without migration".
Methodologically, our study draws attention to the limits of quantitative research designs in studying international career experience and international working practices. We therefore argue for a qualitative research design, based on the categories of Bourdieu's social theory, for investigating the formation of a global class in globalized financial markets.
Global financial centres in comparison: Frankfurt and Sydney between Global City and local variation
(2015)
Frankfurt and Sydney are internationally important nodes of the global-cities network. As transnational financial centres, they achieve similar placements in the Global Financial Centres Index (GFCI). Popular rankings such as the GFCI exert their influence in a political discourse that emphasizes competition between financial centres in a hierarchical network of cities and thereby pushes an orientation toward the champions among the financial metropolises. The contrastive comparison of Frankfurt and Sydney undertaken here shows, by contrast, that these cities, strongly shaped by globalization and financialization tendencies, do not simply converge on an ideal type of the Global City. Rather, their embedding in different lines of development (in Frankfurt's case the tradition of a coordinated market economy, in Sydney's case that of a liberal market economy) produces financial systems of different character and reach. Thus the Frankfurt financial centre shows, in comparison with Sydney, strong global interconnectedness, even though features of the coordinated market economy persist: lower stock-market capitalization of firms, primarily credit-based corporate financing, and a weaker financial-market orientation of the population. Sydney's financial centre, by contrast, benefits from a thoroughly financialized economy, which is expressed in the financial-market orientation of both firms and the general population, but it exhibits a stronger domestic orientation, i.e. a focus on the national market.
National Model United Nations New York 2015: Delegation of Goethe University Frankfurt am Main
(2015)
Since their founding in 1945, the United Nations have become the most important and influential international organization. As an association of highly diverse states under international law, the United Nations have a general competence in questions of peace, security, and international coexistence. Among the six principal organs of the United Nations, the Security Council and the General Assembly deserve particular emphasis. The latter, with representatives from all 193 member states, is the world's largest regular gathering of official state representatives. ...
Background: Alzheimer's disease (AD) is the most common form of dementia and one of the major diseases of old age, causing the impairment of cognitive functions. The disease not only confronts society with financial burdens but also puts severe stress on individuals suffering from AD and their relatives alike. One of the symptoms commonly described in AD is impaired learning and recognition of face-name associations. Beginning at age 60, the risk of developing AD grows exponentially with increasing age, making age a major risk factor. Additionally, the e4 allele of the apolipoprotein E (APOE) polymorphism has been associated with an increased risk of developing AD compared to the more common e3 allele. While strong evidence shows a steeper decline in cognitive function with rising age for e4 carriers, some studies have demonstrated better cognitive function in e4 carriers at a young age.
This led to the hypothesis of antagonistic pleiotropy of the APOE gene, wherein the e4 allele may benefit cognitive function in young carriers yet lead to a faster decline later in life, encouraging the development of cognitive dysfunction such as AD. Several functional magnetic resonance imaging (fMRI) studies examining functional activation patterns found APOE-related differences in key areas of episodic memory, such as the hippocampus, where e4 carriers show aberrant activation similar to AD patients. However, associative memory (encoding and retrieval of face-name pairs) has not been well examined for APOE-related differences. Interaction effects of age and APOE genotype, such as those postulated by the hypothesis of antagonistic pleiotropy, have not been addressed in face-name association tasks either.
Leading Question: Is it possible to detect interaction effects between age and APOE genotype on cognitive performance or neuronal activation patterns in healthy young and old participants during an fMRI face-name association task, supporting the hypothesis of antagonistic pleiotropy of the APOE genotype?
Methods: Participants were stratified by age, and APOE e4 carriers were randomly matched with homozygous e3 carriers. A neuropsychological examination (CVLT and CERAD) was administered. Participants underwent structural MRI analysis via voxel-based morphometry (VBM) as well as fMRI during a face-name association task.
Results: Apart from strong age-related effects on cognitive function detected during neuropsychological testing, neither the behavioral data from the face-name association task nor the structural MRI analysis showed an association with the APOE genotype. Nevertheless, analysis of the functional MRI data showed age- as well as APOE-dependent effects on activation patterns for the encoding and retrieval of face-name pairs, in the absence of differences in cognitive performance. Further analysis revealed eight clusters of significant age x APOE genotype interactions in areas previously associated with working and visual associative memory, including the fusiform gyri bilaterally. These interactions show different patterns, of which a relative hypoactivation of young e4 carriers together with a hyperactivation of old e4 carriers is the most prominent.
Conclusions: With regard to the leading question, this study successfully found age x APOE interactions in a face-name pair retrieval task, although no interaction effects were present in the encoding task, the structural analysis, or cognitive performance. The age-mediated effect of the APOE e4 allele on functional activation patterns may be explained by the compensatory hypothesis, which describes the relative hyperactivation of old e4 carriers as compensatory and interprets the relative hypoactivation of younger e4 participants as reduced effort to achieve the same cognitive performance as non-carriers.
These findings present further evidence of an antagonistic pleiotropy of the APOE genotype, showing age-dependent effects of the e4 allele even in healthy carriers. Nevertheless, previously described differences in cognitive performance and brain structure, even in young participants, were not found. By contrast, functional MRI analysis showed APOE-related differences in young and old participants, suggesting that this modality may be more sensitive in detecting APOE-mediated changes. Among the clusters demonstrating an interaction effect, the fusiform gyri were most prominent, which might be due to their important role in visual associative memory. As previous studies indicate an early and strong involvement of this area in AD pathology, this interaction effect of age and APOE genotype in healthy participants underlines the importance of this region in the development of AD and should be the focus of further research. Such research is also required to determine how exactly the APOE genotype influences brain function in healthy humans, and to clarify its relationship to the pathological processes facilitating the development of AD.
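Statistically, an age x APOE interaction of the kind reported here is a nonzero coefficient on the product term in a linear model of activation. The sketch below simulates the described crossover pattern with hypothetical effect sizes (all numbers are illustrative, not the study's data) and recovers the interaction coefficient by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data under the antagonistic-pleiotropy pattern described
# above (hypothetical effect sizes, for illustration only): e4 carriers
# show lower activation when young and higher activation when old.
n = 200
age = rng.integers(0, 2, n)   # 0 = young, 1 = old
e4 = rng.integers(0, 2, n)    # 0 = e3 homozygote, 1 = e4 carrier
activation = (1.0 - 0.5 * e4 + 0.2 * age + 1.0 * age * e4
              + rng.normal(0, 0.3, n))

# Design matrix: intercept, main effects, and the age x genotype product
X = np.column_stack([np.ones(n), age, e4, age * e4])
beta, *_ = np.linalg.lstsq(X, activation, rcond=None)
# beta[3] estimates the interaction: a crossover (sign flip of the e4
# effect with age) appears as a nonzero interaction coefficient even
# when the average e4 effect across ages is small.
```

In the study itself this model would be fitted voxel-wise within the fMRI analysis; the least-squares mechanics are the same.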
The current debate on pornography asks different questions than in the combative 1970s. The interdisciplinary contributions to this edited volume treat pornography as a cultural artefact, as a concept woven into discourses on sexuality and modernity, on identity and youth. Using empirical social-science methods, the authors address questions about user behaviour in online pornography and adolescent pornography consumption, offer theory-driven approaches to the indeterminacy of pornography and its necessary embedding in other social contexts, and present artistic interventions on its emancipatory potential. The contributions provide a successful overview of the current state of debate in this still young field.
Recent STAR data for the directed flow of protons, antiprotons, and charged pions obtained within the beam energy scan program are analyzed within the Parton-Hadron-String Dynamics (PHSD/HSD) transport models. Both versions of the kinetic approach are used to clarify the role of partonic degrees of freedom. The PHSD results, which simulate a partonic phase and its coexistence with a hadronic one, are roughly consistent with the STAR data. Generally, the semi-qualitative agreement between the measured data and the model results supports the idea of a crossover type of quark-hadron transition, which softens the nuclear EoS but shows no indication of a first-order phase transition. Furthermore, the directed flow of kaons and antikaons is evaluated in the PHSD/HSD approaches from √sNN ≈ 3 to 200 GeV, which shows a high sensitivity to hadronic potentials in the FAIR/NICA energy regime √sNN ≤ 8 GeV.
The present study, which arose within the research project "Förderung von Modellbildungs- und Falsifikationsprozessen im Elementar- und Primarbereich" (fostering model-building and falsification processes in early childhood and primary education) conducted between 2011 and 2013, investigated, on the basis of recent findings in developmental psychology, the possibilities of fostering scientific reasoning in preschool children. After situating the topic theoretically and reviewing the state of research, the empirical part first assessed the competencies of children aged four to ten (divided into four age groups) in reasoning within the domain of elasticity and plasticity and in their understanding of science; in addition, the links between the two competency areas were examined. The instruments were a previously tested reasoning test and a newly developed test measuring understanding of science. In primary school the tests were administered as group tests, in kindergarten as individual tests. The sample comprised 142 children: 82 from primary school and 60 from preschool. With respect to reasoning, children of all age groups found it markedly easier to deal with events that confirm a conjecture than with events that refute one. Dealing with events that are irrelevant to a conjecture proved even more difficult. Competence increased with age. The analysis of the tests also revealed a relationship between understanding of science and reasoning, as well as a clear influence of executive functions.
In a second step, two selected training interventions for fostering the coordination of theory and evidence in children aged five to six were tested for effectiveness: on the one hand, support through adaptive questioning in response to incorrect answers, and on the other, intensive support with modelling. The training study, conducted with a sample of 63 children, used a pre-post design and included a test of the knowledge acquired. The study showed that the intensively supported children had acquired markedly higher competencies than those supported through adaptive questioning. In addition, a transfer test in the content area of floating and sinking was administered, in which both training groups received the same adaptive-questioning support. Children of both training groups showed markedly higher reasoning competencies than in the post-test; nevertheless, children who had previously received intensive support with modelling again showed higher competencies in the transfer test than children from the adaptive-support group. Finally, an argumentation test was administered in which children of all three experimental groups (training group 1, training group 2, control group) could demonstrate overarching reasoning competencies. In this test there was no difference between the three groups with regard to appropriate answers in reasoning.
Cancer is characterized by a remarkable intertumoral, intratumoral, and cellular heterogeneity that might be explained by the cancer stem cell (CSC) and/or the clonal evolution models. CSCs have the ability to generate all the different cells of a tumor and to reinitiate the disease after remission. In the clonal evolution model, a consecutive accumulation of mutations starting in a single cell results in competitive growth of subclones with divergent fitness in either a linear or a branching succession. Acute lymphoblastic leukemia (ALL) is a highly malignant cancer of the lymphoid system in the bone marrow with a dismal prognosis after relapse. However, stable phenotypes and functional data for CSCs in ALL, the so-called leukemia-initiating cells (LICs), remain highly controversial, and the question remains whether there is evidence for their existence. This review discusses the concepts of CSCs and clonal evolution with respect to LICs, mainly in B-ALL, and sheds light on the technical controversies in LIC isolation and evaluation. These aspects are important for the development of strategies to eradicate cells with LIC capacity. Common properties of LICs within different subclones need to be defined for future ALL diagnostics, treatment, and disease monitoring to improve patients' outcomes in ALL.
We investigate the properties of QCD matter across the deconfinement phase transition within the parton-hadron-string dynamics (PHSD) transport approach. We present, in particular, results on electromagnetic radiation, i.e. photon and dilepton production, in relativistic heavy-ion collisions. By comparing our calculations for heavy-ion collisions to the available data, we determine the relative importance of the various production sources and address the possible origin of the observed strong elliptic flow v2 of direct photons. We argue that the different centrality dependence of the hadronic and partonic sources for direct photon production in nucleus-nucleus collisions can be employed to shed more light on the origin of the photon v2 "puzzle". While the dilepton spectra at low invariant mass show in-medium effects such as an enhancement from multiple baryonic resonance formation or a collisional broadening of the vector meson spectral functions, the dilepton yield at high invariant masses (above 1.1 GeV) is dominated by QGP contributions for central heavy-ion collisions at ultra-relativistic energies. This allows an independent view of the parton dynamics via their massive electromagnetic radiation.
The high collision energies reached at the LHC lead to significant production yields of light (anti-)nuclei and (hyper-)nuclei in proton–proton, proton–lead and, in particular, lead–lead collisions. The excellent particle identification capabilities of the ALICE apparatus, based on the specific energy loss in the Time Projection Chamber and the velocity information in the Time-Of-Flight detector, allow for the detection of these rarely produced particles. Furthermore, the Inner Tracking System makes it possible to separate primary nuclei from those coming from the weak decay of heavier systems. One example of such a weak decay is the measurement of the (anti-)hypertriton decay to 3He + π− (anti-3He + π+). The aforementioned capabilities of the ALICE apparatus offer the unique opportunity to search for exotica, such as the bound state of a Λ and a neutron, which would decay into a deuteron and a pion, or the bound state of two Λ's. Results on the production of stable nuclei in Pb–Pb collisions at √sNN = 2.76 TeV are presented and compared with thermal model predictions. We further present the current status of the searches, in the form of upper limits on the production yields, and compare the results to thermal and coalescence model expectations.
The pA system is typically regarded in heavy ion collisions as a “cold” nuclear matter environment and thought to isolate and identify initial state effects due to the presence of multiple nucleons in the incoming nucleus. Moreover, pA collisions bridge the gap between peripheral AA collisions and the pp baseline to create a more complete understanding of underlying production mechanisms and how they evolve with multiplicity. Recent measurements at both RHIC and the LHC provide an indication, however, that the “cold” nuclear matter picture may be somewhat naïve.
Recent LHC results from the 2013 p–Pb run at √sNN = 5.02 TeV will be discussed.
The field of interdisciplinary discourse research has gained importance in recent years and has developed into an established research perspective at the intersection of language and society, of knowledge and power. The theoretical and methodological diversity of this research perspective, however, repeatedly leads to uncertainties and difficulties, particularly in designing and carrying out research of this kind. Three works that respond, in different ways, to the need for systematization and orientation arising from the diversity of this field are presented below. It should be clearly emphasized that these works are not to be misread as methods handbooks or instructions for the "correct" conduct of discourse-oriented research, but rather as stimuli for reflection and exchange on the questions, problems, and directions of discourse research, across national and disciplinary boundaries.
In this review, I argue that this textbook edited by BENNETT and CHECKEL is exceptionally valuable in at least four respects. First, with regard to form, the editors provide a paragon of how an edited volume should look: well-connected articles "speak to" and build on each other. The contributors refer to and grapple with the theoretical framework of the editors who, in turn, give heed to the conclusions of the contributors. Second, the book is packed with examples from research practice. These are not only named but thoroughly discussed and evaluated for their methodological potential in all chapters. Third, the book aims at improving and popularizing process tracing, but does not shy away from systematically considering the potential weaknesses of the approach. Fourth, the book combines and bridges various approaches to (mostly) qualitative methods and still manages to provide abstract and easily accessible standards for "good" process tracing. As such, it is a must-read for scholars working with qualitative methods. However, BENNETT and CHECKEL struggle to fulfill their promise of bridging positivist and interpretive approaches, for while they do indeed take the latter into account, their general research framework remains largely unchanged by these considerations. On these grounds, I argue that, especially for scholars in the positivist camp, the book can function as a "how-to" guide for designing and implementing research. Although this may not apply equally to interpretive researchers, the book is still a treasure chest for them, providing countless conceptual clarifications and highlighting potential pitfalls of process tracing practice.
Recent developments in medical education have created increasing challenges for medical teachers, which is why the majority of German medical schools already offer educational and instructional skills training for their teaching staff. However, to date no framework of educational core competencies for medical teachers exists that might serve as guidance for the qualification of teaching faculty. Against the background of the discussion about competency-based medical education, and based on the international literature, the GMA Committee for Faculty and Organizational Development in Teaching developed a model of core teaching competencies for medical teachers. This framework is designed not only to provide guidance with regard to individual qualification profiles but also to support the further development of the content, training formats, and evaluation of faculty development initiatives, and thus to establish uniform quality criteria for such initiatives in German-speaking medical schools. The model comprises a framework of six competency fields, subdivided into competency components and learning objectives. Additional examples of their use in medical teaching scenarios illustrate and clarify each specific teaching competency. The model has been designed for routine application in medical schools and is intended to be complemented, in a future step, by additional competencies for teachers with special duties and responsibilities.
Membrane proteins are biological macromolecules located in a cell’s membrane that are responsible for essential functions within an organism, which makes them prominent drug targets. Extracting membrane proteins from the hydrophobic membrane bilayer to determine high-resolution crystal structures is a difficult task, and only 2% of all solved protein structures are membrane proteins. Computational methods may help to gain deeper insights into membrane protein structures and their functions. This study gives an overview of such computational methods on a representative set of membrane proteins and provides ideas for future computational and experimental research on membrane proteins.
In a first step (chapter 2), I updated an earlier, manually curated data set of homologous membrane proteins (HOMEP) to more recent versions in 2010 (HOMEP2) and 2013 (HOMEP3) using an automated clustering approach. High-resolution structures of membrane proteins listed in the PDB_TM database were structurally aligned and subsequently clustered based on their structural similarity scores. Both data sets served as gold-standard reference sets for the subsequent work.
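The clustering step described above can be sketched roughly as follows. This is a hypothetical, minimal single-linkage grouping on pairwise structural similarity scores (e.g., TM-scores); the function names, toy score table and the 0.5 threshold are illustrative assumptions, not the actual pipeline.

```python
# Hypothetical sketch: group protein chains into homologous families
# from pairwise structural similarity scores. Threshold and data are
# invented for illustration only.

def cluster_by_similarity(chains, score, threshold=0.5):
    """Single-linkage clustering: a chain joins a cluster if it scores
    above the threshold against any member of that cluster; clusters
    linked through the new chain are merged."""
    clusters = []
    for chain in chains:
        linked = [c for c in clusters
                  if any(score(chain, member) >= threshold for member in c)]
        merged = {chain}
        for c in linked:
            merged |= c
            clusters.remove(c)
        clusters.append(merged)
    return clusters

# Toy similarity table standing in for structural alignment scores.
scores = {frozenset(p): s for p, s in [
    (("A", "B"), 0.8), (("B", "C"), 0.6), (("A", "C"), 0.4),
    (("A", "D"), 0.1), (("B", "D"), 0.2), (("C", "D"), 0.3),
]}
sim = lambda x, y: scores[frozenset((x, y))]

print(cluster_by_similarity(["A", "B", "C", "D"], sim))
```

With these toy scores, A, B and C end up in one cluster and D remains a singleton.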
Subsequently, I updated and applied the sequence alignment program AlignMe to determine protein descriptors that are suitable for detecting evolutionary relationships between homologous α-helical membrane proteins. Input descriptors were tested alone and in combination with each other in different modes of AlignMe by optimizing gap penalties on the HOMEP2 data set. The most accurate alignments and homology models on the HOMEP2 data set were obtained when using position-specific substitution information (P), secondary-structure propensities (S) and transmembrane propensities (T) in the AlignMe PST mode. An evaluation on an independent reference set of membrane protein sequence alignments from the BAliBASE collection showed that different modes of AlignMe are suitable for different levels of sequence similarity: the AlignMe PST mode improved alignment accuracy significantly for distantly related proteins, whereas for closely related proteins from the BAliBASE set the AlignMe PS mode was more suitable. This work was published in March 2013 in PLOS ONE. To make the AlignMe program easier to use, I implemented a web server for AlignMe (chapter 4) that provides the optimized settings and gap penalties for the AlignMe P, PS and PST modes. A comparison with other recent alignment web servers showed that the alignments of AlignMe are as accurate as, or more accurate than, those of other methods, especially for very distantly related proteins, for which the inclusion of membrane-specific information has been shown to be beneficial. This work was published in the NAR web server issue in July 2014.
Although membrane-specific information has been shown to be useful for aligning distantly related membrane proteins at the sequence level, such information has not been incorporated into structural alignment programs, making it unclear which method is the most suitable for aligning membrane protein structures. I therefore compared 13 widely used pairwise structural alignment methods on an updated reference set of homologous membrane protein structures (HOMEP3) and evaluated their accuracy by building models based on the underlying sequence alignments and rating model accuracy with scoring functions (e.g., AL4 or CAD-score) (chapter 5). The analysis showed that fragment-based approaches such as FR-TM-align are the most useful for aligning structures of membrane proteins that have undergone large conformational changes, whereas rigid-body approaches are more suitable for proteins solved in the same or a similar state. However, no method showed significantly higher accuracy than any other. Additionally, all methods lack a measure of how reliable the alignment is at a specific position. To address these problems, I propose a consensus-type approach that combines alignments from four different methods, namely FR-TM-align, DaliLite, MATT and FATCAT, and assigns to each position of the alignment a confidence value that describes the agreement between the methods. This work was published in 2015 in the journal “PROTEINS: structure, function and bioinformatics”.
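The consensus idea can be illustrated with a small sketch: each structural alignment is reduced to a set of residue pairings (query position, template position), and the confidence of a pairing is the fraction of methods that propose it. The alignments below are invented placeholders standing in for the output of FR-TM-align, DaliLite, MATT and FATCAT; this is not the actual program.

```python
# Illustrative consensus-confidence sketch over four toy alignments.
from collections import Counter

def consensus_confidence(alignments):
    """alignments: list of {query_pos: template_pos} dicts, one per method.
    Returns {(query_pos, template_pos): fraction of methods agreeing}."""
    counts = Counter(pair for aln in alignments for pair in aln.items())
    n = len(alignments)
    return {pair: c / n for pair, c in counts.items()}

# Toy alignments standing in for the four structural alignment methods.
alignments = [
    {1: 1, 2: 2, 3: 3},
    {1: 1, 2: 2, 3: 4},
    {1: 1, 2: 2, 3: 3},
    {1: 1, 2: 3, 3: 3},
]
conf = consensus_confidence(alignments)
print(conf[(1, 1)])  # 1.0: all four methods agree on this pairing
```

Positions where the methods disagree (here, position 3) receive proportionally lower confidence values.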
Consensus alignments were then generated for each pair of proteins in the HOMEP3 data set and subsequently analyzed for single evolutionary events within membrane-spanning segments and for irregular structures (e.g., 3₁₀- and π-helices) (chapter 6). Interestingly, with the help of the consensus alignments, single insertions and deletions could be observed in the conserved membrane-spanning segments of membrane proteins in four protein families. The detection of such single InDels might help to identify residues crucial for a protein's function.
Introduction: In 2008, the German Council of Science advised universities to establish a quality management system (QMS) that conforms to international standards. The system was to be implemented within 5 years, i.e., by 2014 at the latest. The aim of the present study was to determine whether a QMS suitable for the electronic learning (eLearning) domain of medical education, to be used across Germany, has meanwhile been identified.
Methods: We approached all medical universities in Germany (n=35), using an anonymous questionnaire (8 domains, 50 items).
Results: Our results (response rate 46.3%) indicated a very reluctant application of QMS in eLearning and a major information deficit at the various institutions.
Conclusions: The authors conclude that, within the limitations of this study, there seems to be a considerable need to improve current knowledge of QMS for eLearning, and that clear guidelines and standards for their implementation should be defined.
Background: The West African country of Burkina Faso (BFA) is an example of the enduring importance of traditional plant use today. A large proportion of its 17 million inhabitants lives in rural communities and strongly depends on local plant products for their livelihood. However, literature on traditional plant use is scarce, and a comprehensive analysis for the country is still missing.
Methods: In this study we combine the information of a recently published plant checklist with information from ethnobotanical literature for a comprehensive, national scale analysis of plant use in Burkina Faso. We quantify the application of plant species in 10 different use categories, evaluate plant use on a plant family level and use the relative importance index to rank all species in the country according to their usefulness. We focus on traditional medicine and quantify the use of plants as remedy against 22 classes of health disorders, evaluate plant use in traditional medicine on the level of plant families and rank all species used in traditional medicine according to their respective usefulness.
Results: A total of 1033 species (50%) in Burkina Faso had a documented use. Traditional medicine, human nutrition and animal fodder were the most important use categories. The 12 most common plant families in BFA differed considerably in their usefulness and application. Fabaceae, Poaceae and Malvaceae were the plant families with the most used species. In this study Khaya senegalensis, Adansonia digitata and Diospyros mespiliformis were ranked as the most useful plants in BFA. Infections/infestations, digestive system disorders and genitourinary disorders were the health problems most commonly addressed with medicinal plants. Fabaceae, Poaceae, Asteraceae, Apocynaceae, Malvaceae and Rubiaceae were the most important plant families in traditional medicine. Tamarindus indica, Vitellaria paradoxa and Adansonia digitata were ranked as the most important medicinal plants.
Conclusions: The national-scale analysis revealed systematic patterns of traditional plant use throughout BFA. These results are of interest for applied research, as a detailed knowledge of traditional plant use can a) help to communicate conservation needs and b) facilitate future research on drug screening.
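A relative importance index of the kind used to rank species can be sketched as follows. This shows one common formulation (usefulness scored from the number of use categories and number of recorded uses, each normalised to the data-set maximum); the paper's exact definition may differ, and the species counts below are invented for illustration.

```python
# Sketch of a relative importance (RI) index on a 0-100 scale.
# Counts are made-up illustration values, not data from the study.

def relative_importance(species_uses):
    """species_uses: {species: (n_use_categories, n_uses)}."""
    max_cat = max(c for c, _ in species_uses.values())
    max_use = max(u for _, u in species_uses.values())
    return {sp: 100 * ((c / max_cat) + (u / max_use)) / 2
            for sp, (c, u) in species_uses.items()}

ri = relative_importance({
    "Khaya senegalensis": (9, 40),
    "Adansonia digitata": (8, 36),
    "Some rarely used sp.": (2, 5),
})
print(max(ri, key=ri.get))  # the species maximal in both counts ranks first
```

A species that tops both normalised counts reaches the maximum score of 100.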
Proteins of the secretin family form large macromolecular complexes, which assemble in the outer membrane of Gram-negative bacteria. Secretins are major components of type II and III secretion systems and are linked to extrusion of type IV pili (T4P) and to DNA uptake. By electron cryo-tomography of whole Thermus thermophilus cells, we determined the in situ structure of a T4P molecular machine in the open and the closed state. Comparison reveals a major conformational change whereby the N-terminal domains of the central secretin PilQ shift by ∼30 Å, and two periplasmic gates open to make way for pilus extrusion. Furthermore, we determine the structure of the assembled pilus.
Background: Second-hand smoke (ETS)-associated particulate matter (PM) contributes considerably to indoor air contamination and constitutes a health risk for passive smokers. Easy to measure, PM is a useful parameter to estimate the dose of ETS that passive smokers are exposed to. Apart from its suitability as a surrogate parameter for ETS exposure, PM itself affects human morbidity and mortality in a dose-dependent manner. We think that ETS-associated PM should be considered an independent hazard factor, separately from the many other known harmful compounds of ETS. We believe that brand-specific and tobacco-product-specific differences in the release of PM matter and that these differences are of public interest.
Methods: To generate ETS from cigarettes and cigarillos in as standardized and reproducible a manner as possible, an automatic second-hand smoke emitter (AETSE) was developed and placed in a glass chamber. L&M cigarettes ("without additives", "red label", "blue label"), L&M filtered cigarillos ("red") and 3R4F standard research cigarettes (as reference) were smoked automatically according to a self-developed, standardized protocol until the tobacco product was smoked down to 8 mm from the tipping paper of the filter.
Results: Mean concentration (Cmean) and area under the curve (AUC) in a plot of PM2.5 against time were measured and compared. CmeanPM2.5 was found to be 518 μg/m3 for 3R4F cigarettes, 576 μg/m3 for L&M "without additives" ("red"), 448 μg/m3 for L&M "blue label", 547 μg/m3 for L&M "red label", and 755 μg/m3 for L&M filtered cigarillos ("red"). AUCPM2.5 values were 208,214 μg/m3·s for 3R4F reference cigarettes, 204,629 μg/m3·s for L&M "without additives" ("red"), 152,718 μg/m3·s for L&M "blue label", 238,098 μg/m3·s for L&M "red label" and 796,909 μg/m3·s for L&M filtered cigarillos ("red").
Conclusion: Considering the large and significant differences in particulate matter emissions between cigarettes and cigarillos, we think that a favorable taxation of cigarillos is not justifiable.
Background: Measurement of prostate-specific antigen (PSA) advanced the diagnostic and prognostic potential for prostate cancer (PCa). However, due to PSA’s lack of specificity, novel biomarkers are needed to improve risk assessment and ensure optimal personalized therapy. A set of protein molecules as potential biomarkers was therefore evaluated in serum of PCa patients.
Methods: Serum samples from patients undergoing radical prostatectomy (RPE) for biopsy-proven PCa without neoadjuvant treatment were compared to serum samples from healthy subjects. Preliminary screening of 119 proteins in 10 PCa patients and 10 controls was carried out by the Proteome Profiler Antibody Array. Those markers showing distinct differences between patients and controls were then further evaluated by ELISA in the serum of 165 PCa patients and 19 controls. Uni- and multivariate as well as correlation analysis were performed to test the capability of these molecules to detect disease and predict pathological outcome.
Results: Screening showed that soluble (s)E-cadherin, E-selectin, MMP2, MMP9, TIMP1, TIMP2, Galectin and Clusterin warranted further evaluation. sE-cadherin, TIMP1, Galectin and Clusterin were significantly over-expressed and MMP9 under-expressed in PCa compared to controls. The concentration of sE-cadherin, MMP2 and Clusterin correlated negatively, and that of MMP9 and TIMP1 positively, with the Gleason sum at prostatectomy. Only sE-cadherin correlated significantly with the highest Gleason pattern. Compared to serum PSA, sE-cadherin provided independent and better predictive ability for discriminating PCa with an upgrade at RPE and aggressive tumors with a Gleason sum ≥7.
Conclusions: sE-cadherin performed most favorably from a large panel of serum proteins in terms of diagnostic and predictive potential in curatively treatable PCa. sE-cadherin merits further investigation as a biomarker for PCa.
Background: The interaction between β-HCG and TSH during pregnancy poses a problem for differential diagnosis, because it complicates the interpretation of suppressed TSH levels and, in the worst case, can lead to misinterpretation. The aim of the present study was to place this interaction, in a large cohort, in a temporal context with the course of pregnancy, since the timing of screening has a decisive influence on the TSH level. In addition, reference ranges for pregnant women were calculated from the available data and the influence of iodine medication was examined.
Patients and methods: From an unselected pool of patients of a group nuclear medicine practice, the data of 1283 pregnant women with healthy thyroids, aged between 16 and 48 years, were evaluated. Besides TSH measurement, a focus was placed on the temporal course, so the pregnant women were analyzed in subgroups of 2 weeks each. The influence of iodine medication on TSH values was examined. Finally, new TSH reference ranges for pregnant women were determined by logarithmic transformation using 2-sigma limits.
Results: At the beginning of pregnancy, mean TSH values rise from 1.22 mU/l in week 2 of gestation to 1.7 mU/l around week 7, followed by a decrease to 0.9 mU/l by week 16 (corresponding to 52.9%). The largest drop occurs in weeks 12 to 14, i.e., at the time of the first screening. Iodine medication has no substantial influence on the TSH value. The calculation of pregnancy-corrected reference ranges yields TSH values of 0.08–3.67 mU/l in the first trimester, 0.04–2.88 mU/l in the second and 0.17–3.19 mU/l in the third trimester.
Conclusions: The study shows that the lowest TSH values are to be expected at the time of the first screening and may therefore lead to erroneous decisions. No relevant association between iodine medication and the TSH value could be demonstrated. New reference ranges for pregnant women could help to avoid this diagnostic dilemma.
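The reference-range computation described in the methods (logarithmic transformation, 2-sigma limits, back-transformation) can be sketched as follows; the TSH values below are synthetic, not data from the study.

```python
# Sketch of a log-normal 2-sigma reference range: take mean +/- 2 SD
# on the log scale, then transform the limits back. Values are synthetic.
import math

def reference_range_2sigma(values):
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    return math.exp(mean - 2 * sd), math.exp(mean + 2 * sd)

# Synthetic, roughly log-normally distributed TSH values (mU/l):
tsh = [0.4, 0.8, 1.0, 1.2, 1.5, 1.8, 2.2, 3.0]
lower, upper = reference_range_2sigma(tsh)
print(f"{lower:.2f} - {upper:.2f} mU/l")
```

Working on the log scale keeps the lower limit positive and reflects the skewed distribution of TSH values.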
Diffusion tensor imaging (DTI) is amongst the simplest mathematical models available for diffusion magnetic resonance imaging, yet still by far the most used one. Despite the success of DTI as an imaging tool for white matter fibers, its anatomical underpinnings on a microstructural basis remain unclear. In this study, we used 65 myelin-stained sections of human premotor cortex to validate modeled fiber orientations and often-used microstructure-sensitive scalar measures of DTI on the level of individual voxels. We performed this validation on high-spatial-resolution diffusion MRI acquisitions investigating both white and gray matter. We found a very good agreement between DTI and myelin orientations, with the majority of voxels showing angular differences of less than 10°. The agreement was strongest in white matter, particularly in unidirectional fiber pathways. In gray matter, the agreement was good in the deeper layers, highlighting radial fiber directions even at lower fractional anisotropy (FA) compared to white matter. This result has potentially important implications for tractography algorithms applied to high-resolution diffusion MRI data if the aim is to move across the gray/white matter boundary. We found strong relationships between myelin microstructure and DTI-based microstructure-sensitive measures. High FA values were linked to high myelin density and a sharply tuned histological orientation profile. Conversely, high values of mean diffusivity (MD) were linked to bimodal or diffuse orientation distributions and low myelin density. At high spatial resolution, DTI-based measures can be highly sensitive to white and gray matter microstructure despite being relatively unspecific to concrete microarchitectural aspects.
Aim: We investigated the long-term impact of adjunctive systemic antibiotics on periodontal disease progression. Periodontal therapy is frequently supplemented by systemic antibiotics, although their impact on the course of disease is still unclear.
Material & Methods: This prospective, randomized, double-blind, placebo-controlled multi-centre trial comprising patients suffering from moderate to severe periodontitis evaluated the impact of rational adjunctive use of systemic amoxicillin 500 mg plus metronidazole 400 mg (3×/day, 7 days) on attachment loss. The primary outcome was the percentage of sites showing further attachment loss (PSAL) ≥1.3 mm after the 27.5-month observation period. Standardized therapy comprised mechanical debridement in conjunction with antibiotic or placebo administration, and maintenance therapy at 3-month intervals.
Results: Of 506 participating patients, 406 were included in the intention-to-treat analysis. Median PSAL was 7.8% (Q25 4.7%/Q75 14.1%) in the placebo group compared to 5.3% (Q25 3.1%/Q75 9.9%) in the antibiotics group (p < 0.001).
Conclusions: Both treatments were effective in preventing disease progression. Compared to placebo, the prescription of empiric adjunctive systemic antibiotics showed a small absolute, although statistically significant, additional reduction in further attachment loss. Therapists should consider the patient's overall risk for periodontal disease when deciding for or against adjunctive antibiotics prescription.
Background: Detailed injury data are not available for international tournaments in field hockey. We investigated the epidemiology of field hockey injuries during major International Hockey Federation (Fédération Internationale de Hockey, FIH) tournaments in 2013.
Materials and methods: FIH injury reports were used for data collection. All major FIH tournaments for women (n=5) and men (n=11) in 2013 were included. The main focus of this study was to assess the pattern, time, site on the pitch, body site and mechanism of each injury. We calculated the average number of injuries per match and the number of injuries per 1000 player match hours.
Results: The average number of injuries was 0.7 (95% CI 0.5 to 1.0) per match in women's tournaments and 1.2 (95% CI 0.8 to 1.7) per match in men's tournaments. The number of injuries per 1000 player match hours ranged from 23.4 to 44.2 (average 29.1; 95% CI 18.6 to 39.7) in women and 20.8 to 90.9 (average 48.3; 95% CI 30.9 to 65.8) in men. Most injuries occurred in the circle (n=25, 50%, in women, n=95, 51%, in men). The rate of injuries increased after the first quarter. Injuries to the head and face (n=20, 40%) were most common in women. The head/face (n=51, 27%) and the thigh/knee (n=52, 28%) were equally affected in men. The ball caused the most injuries, followed by the stick, collisions and tripping/falling. There were no deaths or injuries that required hospital treatment in the entire cohort.
Summary: Field hockey has a low incidence of acute injuries during competition.
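The exposure-based rate reported above can be sketched arithmetically. The exposure assumptions below (22 players on the pitch, 70-minute matches) are illustrative; the study's exact exposure definition may differ.

```python
# Sketch of "injuries per 1000 player match hours" with assumed exposure
# values; numbers are illustrative, not taken from the study.

def injury_incidence(n_injuries, n_matches, players=22, match_minutes=70):
    """Rate per 1000 player match hours under the stated assumptions."""
    exposure_hours = n_matches * players * match_minutes / 60
    return 1000 * n_injuries / exposure_hours

# Example: 25 injuries observed over 38 matches.
rate = injury_incidence(25, 38)
print(round(rate, 1))  # 25.6
```

Normalising by player match hours makes rates comparable across tournaments with different numbers of matches.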
Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater–surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyperresolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.
Magnetoencephalography (MEG) measures neural activity non-invasively and at an excellent temporal resolution. Since its invention (Cohen, 1968, 1972), MEG has proven a most valuable tool in neurocognitive (Salmelin et al., 1994) and clinical research (Stufflebeam et al., 2009; Van ’t Ent et al., 2003). MEG is able to measure rapid changes in electrophysiological neural signals related to sensory and cognitive processes. The magnetic fields measured outside the head by MEG directly reflect the cortical currents generated by the synchronised activity of thousands of neuronal sources. This distinguishes MEG from functional magnetic resonance imaging (fMRI), where measurements are only indirectly related to electrophysiological activity through neurovascular coupling...
Background: In German breast cancer care, the S1-guidelines of the 1990s were substituted by national S3-guidelines in 2003. The application of guidelines became mandatory for certified breast cancer centers. The aim of the study was to assess guideline adherence according to time intervals and its impact on survival.
Methods: Women with primary breast cancer treated in three rural hospitals of one German geographical district were included. A cohort study design encompassed women from 1996–97 (N = 389) and from 2003–04 (N = 488). Quality indicators were defined along inpatient therapy sequences for each time interval and distinguished as guideline-adherent and guideline-divergent medical decisions. Based on all of the quality indicators, a binary overall adherence index was defined and served as a group indicator in multivariate Cox-regression models. A corrected group analysis estimated adjusted 5-year survival curves.
Results: From a total of 877 patients, 743 (85 %) and 504 (58 %) were included to assess the 104 developed quality indicators and the resulting binary overall adherence index. The latter significantly increased from 13–15 % (1996–97) to 33–35 % (2003–04). Within each time interval, no significant survival differences between guideline-adherent and guideline-divergent treated patients were detected. Across time intervals, within the group of guideline-adherent treated patients only, survival increased but did not significantly differ between time intervals. Across time intervals, within the group of guideline-divergent treated patients only, survival increased and significantly differed between time intervals.
Conclusions: Infrastructural efforts contributed to the increase in process quality at the examined certified breast cancer center. Paradoxically, a systematic impact on 5-year survival was observed for patients treated divergently from the guideline recommendations. This is an indicator of the appropriate application of guidelines. A maximization of guideline-based decisions, instead of the ubiquitous demand for maximal guideline adherence, is advocated.
An accurate quantification of low viremic HCV RNA plasma samples has gained importance since the approval of direct-acting antivirals, as a single measurement predicts the necessity of a prolonged or shortened therapy. As reported previously, HCV quantification assays such as Abbott RealTime HCV and Roche COBAS AmpliPrep/COBAS TaqMan HCV version 2 (CTM v2) may vary in sensitivity and precision, particularly at low-level viremia. Importantly, substantial variations were previously demonstrated between some of these assays and the Roche High Pure System/COBAS TaqMan (HPS) reference assay, which was used to establish the clinical decision points in clinical studies. In this study, the reproducibility of assay performance across several laboratories was assessed by analysing quantification results generated by six independent laboratories (3× RealTime, 3× CTM v2) in comparison with one HPS reference laboratory. The 4th WHO Standard was diluted to 100, 25 and 10 IU/ml, and aliquots were tested in triplicate in 5 independent runs by each assay in the different laboratories to assess assay precision and detection rates. In a second approach, 2 clinical samples (GT 1a & GT 1b) were diluted to 100 and 25 IU/ml and tested as described above. While the result range for the WHO 100 IU/ml replicates was similar across all laboratories, the CVs of the RealTime laboratories ranged from 19.3 to 25.6 % and were lower than those of the CTM v2 laboratories (26.1–47.3 %) and of the HPS reference laboratory (34.9 %). At the WHO standard dilution of 25 IU/ml, 24 replicates were quantified by RealTime compared to 8 replicates by CTM v2. Results of the clinical samples again revealed a higher variation of CTM v2 results compared to RealTime values (CVs at 100 IU/ml: RealTime 13.1–21.0 % and CTM v2 15.0–32.3 %; CVs at 25 IU/ml: RealTime 17.6–34.9 % and CTM v2 28.2–54.9 %).
These findings confirm the superior precision of RealTime versus CTM v2 at low-level viremia, even across different laboratories and including the new clinical decision point at 25 IU/ml. Highly precise monitoring of HCV viral load during therapy will remain crucial for patient management with regard to futility rules, therapy efficacy and SVR.
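The precision measure compared throughout, the coefficient of variation (CV = 100 × SD / mean over replicate quantifications), can be computed as follows; the replicate values are invented for illustration.

```python
# CV (%) over replicate quantifications; replicate HCV RNA results (IU/ml)
# below are hypothetical, not data from the study.
import statistics

def cv_percent(replicates):
    """Coefficient of variation as a percentage (sample SD / mean)."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicates at a nominal 100 IU/ml:
replicates = [92.0, 105.0, 88.0, 118.0, 97.0]
print(round(cv_percent(replicates), 1))  # 11.9
```

Because the CV normalises the scatter by the mean, it allows precision to be compared across different viral load levels and assays.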
Aerosol particles act as condensation nuclei for cloud droplets (cloud condensation nuclei, CCN) or ice crystals and are therefore decisive for cloud and precipitation formation. Both aerosol particles and clouds can scatter sunlight efficiently, exerting a cooling effect on the climate. Some of the particles, such as wind-blown dust or sea salt, are injected directly into the atmosphere; the largest fraction of particles, and about half of the CCN, are however formed by the condensation of gaseous substances. This process is called nucleation or new particle formation (NPF). Despite intensive research, NPF is not yet fully understood, owing to the complexity of the chemical processes in the atmosphere and to the difficulty of identifying and quantifying the relevant substances at extremely low mixing ratios (about one molecule or cluster per 10^12 to 10^15 molecules). Besides the question of which substances are involved in nucleation, it is also still unclear whether ion-induced nucleation is an important process for the climate. The CLOUD project (Cosmics Leaving OUtdoor Droplets) at CERN addresses these questions by simulating particle formation in a chamber experiment under extremely well-controlled conditions. The chemical systems discussed in this thesis comprise the binary (H2SO4-H2O), the ternary ammonia (H2SO4-H2O-NH3) and the ternary dimethylamine (H2SO4-H2O-(CH3)2NH) system.
Some of the key results of experiments at the CLOUD chamber are discussed. They show that the binary and the ternary ammonia system can explain atmospheric nucleation at low temperatures, whereas the ternary dimethylamine system is in principle able to describe the high nucleation rates near the ground at atmospherically relevant sulfuric acid concentrations. Furthermore, two measurement methods essential for nucleation studies are presented. The Chemical Ionization Mass Spectrometer (CIMS) is used to measure gaseous sulfuric acid, since H2SO4 is presumably the most important substance in atmospheric nucleation. The Chemical Ionization-Atmospheric Pressure interface-Time Of Flight (CI-APi-TOF) mass spectrometer measures sulfuric acid and neutral clusters. Both instruments were optimized for use at CLOUD, and instrumental developments were made with respect to the ion source, which uses a corona discharge. In addition, a calibration unit providing defined sulfuric acid concentrations was developed, and the CI-APi-TOF was set up. For the ternary dimethylamine system, nucleation rates and the first measurements of large nucleating neutral clusters are presented. Sulfuric acid monomer and dimer concentrations, measured with the CIMS at low temperatures, were used to derive the thermodynamic properties of dimer formation in the binary and the ternary ammonia system. To determine nucleation rates as accurately as possible, a new method was developed that allows the effect of self-coagulation during nucleation to be taken into account.
The studies summarized here contribute significantly to the understanding of new particle formation.
The present work examines the clash of the neo-idealist philosopher Giovanni Gentile (1875-1944) with Catholic Modernism. Its aim is to offer, for the first time, a complete examination of the controversy, which lasted six years, from 1903 to 1909, through a church-historical and philosophical-historical contextualization, based throughout on the publication of unpublished archival sources.
Background aims: Immunomagnetic enrichment of CD34+ hematopoietic “stem” cells (HSCs) using a paramagnetic nanobead-coupled CD34 antibody and immunomagnetic extraction with the CliniMACS plus system is the standard approach to generating T-cell-depleted stem cell grafts. Their clinical benefit in selected indications is established. Even though CD34+ selected grafts are typically given in the context of severely immunosuppressive conditioning with anti-thymocyte globulin or similar, the degree of T-cell depletion appears to affect clinical outcomes and thus, in addition to CD34+ cell recovery, critically describes process quality. An automatic immunomagnetic cell processing system, CliniMACS Prodigy, including a protocol for fully automatic CD34+ cell selection from apheresis products, was recently developed. We performed a formal process validation to support submission of the protocol for CE release, a prerequisite for clinical use of Prodigy CD34+ products.
Methods: Granulocyte-colony stimulating factor–mobilized healthy-donor apheresis products were subjected to CD34+ cell selection using Prodigy with clinical reagents and consumables and advanced beta versions of the CD34 selection software. Target and non-target cells were enumerated using sensitive flow cytometry platforms.
Results: Nine successful clinical-scale CD34+ cell selections were performed. Beyond setup, no operator intervention was required. Prodigy recovered 74 ± 13% of target cells with a viability of 99.9 ± 0.05%. Per 5 × 10E6 CD34+ cells, which we consider a per-kilogram dose of HSCs, products contained 17 ± 3 × 10E3 T cells and 78 ± 22 × 10E3 B cells.
Conclusions: The process for CD34 selection with Prodigy is robust and labor-saving but not time-saving. Compared with clinical CD34+ selected products concurrently generated with the predecessor technology, product properties, importantly including CD34+ cell recovery and T-cell contents, were not significantly different. The automatic system is suitable for routine clinical application.
This contribution is framed within the fields of cultural studies and migration and ethnic relations, examining how the Italian American experience has been imaginatively (re)created and received. It takes an interdisciplinary approach to the cultural and literary analysis of the Italian diaspora in the United States, from a gender perspective that recovers the voice and historical presence of women as transmitted in the arts and critical methods. Focusing on the media and literary representations of Italian migration to the United States since the last decades of the 19th century, its reception, and its later development up to the present day, I make particular reference to a community mainly conceived in the masculine, as major receptions and persistent stereotypes about family relations and ethnicity attest. At the same time, I analyse other works that either contest or balance this cultural and gender stereotyping of the Italian American experience and community.
This paper investigates changes in the domestic work sector when passing from the informal to the formal labor market. The issue is explored within the context of the housework voucher policy (titres-services), which allows households to officially purchase weekly housework services from an authorized agency through vouchers. The contribution therefore has a twofold focus: observing changes in labor market dynamics and investigating workers’ perception of this change. To discuss these issues, I first look at the step from the informal to the formal labor market through two aspects: ethnic niches and individual labor dynamics – two bedrocks of the Brussels domestic work market. I then analyze workers’ personal experiences of acquiring a declared job in the voucher system.
Analyzing objective and subjective changes, a central question of this article is to what extent the switch to the housework voucher system can bring empowerment to domestic workers. Work quality in the sector, in objective and subjective terms, has improved mainly through the setting of rules and by allowing workers to enjoy labor rights and a work status. The formal market dynamics of the housework voucher system nevertheless remain profoundly ethnicized and marked by women’s presence, as was/is the shadow market.
The article shows that workers’ understanding of the transition from an informal to a formal sector is largely a result of their previous experiences and social position, mainly regarding migration status. The change is thus felt much more strongly by workers whose migrant status regularization and work formalization processes occurred concomitantly, demonstrating that the most empowering shift is acquiring papers, not entering declared work.
In the ‘age of transnationalization’, spatial mobility is highly valued as a resource, and ‘sedentariness’ is accordingly often symbolically devalued. Migration between Poland and Germany (mainly from Poland to Germany) has a century-long tradition. Not only has it yielded the emergence of a dense transnational social space, but it is also considered a re-enactor of cultural traits and symbolic meanings. Spatial mobility is tied to notions of social mobility and to projects of life-making. Since legal restrictions on Polish migrants seeking to work and settle in Germany have vanished, the quest for ‘normalcy’ has intensified and pressures towards even more migration have increased. I argue that symbolic meanings of mobility are decisive for hierarchies in transnational social spaces. I place the main emphasis on families’ practices of caring for and caring about each other: the former being a more physical or material activity, the latter a more symbolic and emotional one. The interviews reveal that people draw multiple differentiations between migrant populations in terms of their migration reasons, as well as between the mobile and the immobile. These differentiations are embedded in the distinct character of the transnational social space between Poland and Germany, with assumed differences in terms of ‘modernity’. Finally, the symbolic meanings of mobility also help explain the puzzle of why emigration rates from Poland remain constantly high, although Poland is a comparatively wealthy country.
Often adopting a feminist perspective, the sociological literature on migrant domestic services (MDS) does not make explicit which feminist paradigm it speaks from. This article situates this literature within ongoing debates in feminist theory, in particular the tension between materialist and poststructuralist approaches. Then, it discusses the empirical relevance of each of those two paradigms on the example of the results of original research into the personalization of employment relationships in MDS.
The contribution proposes a new way of making sense of the diversity of feminist theories, distinguishing between modern and postmodern approaches. Indeed, since the 1980s, feminist theory in the US and Western Europe has undergone a ‘postmodern turn’, which renders previous typologies much less attuned to recent developments in the field. The article then examines which paradigms are implicit in the sociological literature on MDS. Initially, personalization in MDS was mainly seen in materialist terms, as a way to maximize the quantity and quality of labour (including emotional labour) extracted from domestic workers. The emergence of postmodern approaches in feminist theory set off a progressive shift in the MDS literature. First, this literature showed that personalization also fulfils identity functions for employers and workers; it then widened its focus to include the affective dimensions of domestic labour (not to be confused with emotional labour). The final section shows how modern and postmodern feminist approaches can be combined within a single research project, using the example of original research on personalization in MDS in Belgium and Poland. In particular, the contribution shows that the distinction between the material functions of personalization on the one hand and its emotional/identity functions on the other is not empirically operative. Indeed, migrant domestic workers generally use emotional/identity categories to frame material questions, and vice versa. This final part shows that, rather than representing incompatible approaches, modern and postmodern feminisms complement each other, in this case yielding a fuller image of personalization processes in MDS.
Recent advances in basic cardiovascular research, as well as their translation into the clinical situation, were the focus of the last "New Frontiers in Cardiovascular Research" meeting. Major topics included the characterization of new targets and procedures in cardioprotection, deciphering new players and inflammatory mechanisms in ischemic heart disease, and uncovering microRNAs and other biomarkers as versatile and possibly causal factors in cardiovascular pathogenesis. Although a number of pathological situations such as ischemia-reperfusion injury or atherosclerosis can be simulated and manipulated in diverse animal models, also to test new drugs for intervention, patient studies are the ultimate litmus test for obtaining unequivocal information about the validity of biomedical concepts and their application in the clinic. Thus, an open and bidirectional exchange between bench and bedside is crucial to advance the field of ischemic heart disease, with a particular emphasis on understanding long-lasting approaches in cardioprotection.
BACKGROUND AND PURPOSE: We evaluated cerebral white and gray matter changes in patients with iRLS in order to shed light on the pathophysiology of this disease.
METHODS: Twelve patients with iRLS were compared to 12 age- and sex-matched controls using whole-head diffusion tensor imaging (DTI) and voxel-based morphometry (VBM) techniques. Evaluation of the DTI scans included the voxelwise analysis of the fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD).
RESULTS: Diffusion tensor imaging revealed areas of altered FA in subcortical white matter bilaterally, mainly in temporal regions as well as in the right internal capsule, the pons, and the right cerebellum. These changes overlapped with changes in RD. Voxel-based morphometry did not reveal any gray matter alterations.
CONCLUSIONS: We showed altered diffusion properties in several white matter regions in patients with iRLS. White matter changes could mainly be attributed to changes in RD, a parameter thought to reflect altered myelination. Areas with altered white matter microstructure included parts of the internal capsule that carry the corticospinal tract to the lower limbs, supporting studies that suggest changes in sensorimotor pathways associated with RLS.
Molecular cause and functional impact of altered synaptic lipid signaling due to a prg‐1 gene SNP
(2015)
Loss of plasticity-related gene 1 (PRG-1), which regulates synaptic phospholipid signaling, leads to hyperexcitability via increased glutamate release altering the excitation/inhibition (E/I) balance in cortical networks. A recently reported SNP in prg-1 (R345T/mutPRG-1) affects ~5 million European and US citizens in a monoallelic variant. Our studies show that this mutation leads to a loss of PRG-1 function at the synapse due to its inability to control lysophosphatidic acid (LPA) levels via a cellular uptake mechanism which appears to depend on proper glycosylation altered by this SNP. PRG-1(+/-) mice, which are animal correlates of human PRG-1(+/mut) carriers, showed an altered cortical network function and stress-related behavioral changes indicating altered resilience against psychiatric disorders. These could be reversed by modulation of phospholipid signaling via pharmacological inhibition of the LPA-synthesizing molecule autotaxin. In line with this, EEG recordings in a human population-based cohort revealed an E/I balance shift in monoallelic mutPRG-1 carriers and an impaired sensory gating, which is regarded as an endophenotype of stress-related mental disorders. Intervention in bioactive lipid signaling is thus a promising strategy to interfere with glutamate-dependent symptoms in psychiatric diseases.
A handling study to assess use of the Respimat(®) Soft Mist™ inhaler in children under 5 years old
(2015)
Background: Respimat® Soft Mist™ Inhaler (SMI) is a hand-held device that generates an aerosol with a high, fine-particle fraction, enabling efficient lung deposition. The study objective was to assess inhalation success among children using Respimat SMI, and the requirement for assistance by the parent/caregiver and/or a valved holding chamber (VHC).
Methods: This open-label study enrolled patients aged <5 years with respiratory disease and history of coughing and/or recurrent wheezing. Patients inhaled from the Respimat SMI (air only; no aerosol) using a stepwise configuration: “1” (dose released by child); “2” (dose released by parent/caregiver), and “3” (Respimat SMI with VHC, facemask, and parent/caregiver help). Co-primary endpoints included the ability to perform successful inhalation as assessed by the investigators using a standardized handling questionnaire and evaluation of the reasons for success. Inhalation profile in the successful handling configuration was verified with a pneumotachograph. Patient satisfaction and preferences were investigated in a questionnaire.
Results: Of the children aged 4 to <5 years (n=27) and 3 to <4 years (n=30), 55.6% and 30.0%, respectively, achieved success without a VHC or help; with assistance, another 29.6% and 10.0%, respectively, achieved success, and the remaining children were successful with VHC. All children aged 2 to <3 years (n=20) achieved success with the Respimat SMI and VHC. Of those aged <2 years (n=22), 95.5% had successful handling of the Respimat SMI with VHC and parent/caregiver help. Inhalation flow profiles generally confirmed the outcome of the handling assessment by the investigators. Most parent/caregiver and/or child respondents were satisfied with operation, instructions for use, handling, and ease of holding the Respimat SMI with or without a VHC.
Conclusions: The Respimat SMI is suitable for children aged <5 years; however, children in this age group are advised to use it with a VHC.
Polo-like kinase 1 inhibition sensitizes neuroblastoma cells for vinca alkaloid-induced apoptosis
(2015)
High polo-like kinase 1 (PLK1) expression has been linked to poor outcome in neuroblastoma (NB), indicating that it represents a relevant therapeutic target in this malignancy. Here, we identify a synergistic induction of apoptosis by the PLK1 inhibitor BI 2536 and vinca alkaloids in NB cells. Synergistic drug interaction of BI 2536 together with vincristine (VCR), vinblastine (VBL) or vinorelbine (VNR) is confirmed by calculation of combination index (CI). Also, BI 2536 and VCR act in concert to reduce long-term clonogenic survival. Importantly, BI 2536 significantly enhances the antitumor activity of VCR in an in vivo model of NB. Mechanistically, BI 2536/VCR co-treatment triggers prolonged mitotic arrest, which is necessary for BI 2536/VCR-mediated apoptosis, since pharmacological inhibition of mitotic arrest by the CDK1 inhibitor RO-3306 significantly reduces cell death. Prolonged mitotic arrest leads to phosphorylation-mediated inactivation of BCL-2 and BCL-XL as well as downregulation of MCL-1, since inhibition of mitotic arrest by RO-3306 also prevents phosphorylation of BCL-2 and BCL-XL and MCL-1 downregulation. This inactivation of antiapoptotic BCL-2 proteins promotes activation of BAX and BAK, cleavage of caspase-9 and -3 and caspase-dependent apoptosis. Engagement of the mitochondrial pathway of apoptosis is critically required for BI 2536/VCR-induced apoptosis, since ectopic expression of a non-degradable MCL-1 phospho-mutant, BCL-2 overexpression or BAK knockdown significantly reduce BI 2536/VCR-mediated apoptosis. Thus, PLK1 inhibitors may open new perspectives for chemosensitization of NB.
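The synergy reported above is quantified by the combination index (CI), with CI < 1 indicating synergism. A minimal sketch of a CI calculation under the Chou-Talalay median-effect model is shown below; all dose-response parameters (Dm, m) and doses are illustrative assumptions, not values from the study:

```python
# Hedged sketch of the Chou-Talalay combination index (CI).
# CI < 1 indicates synergy; the abstract reports CI < 0.1.
# All parameters below are hypothetical, for illustration only.

def dose_for_effect(fa, Dm, m):
    """Median-effect equation solved for dose: Dx = Dm * (fa/(1-fa))**(1/m)."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, drug1, drug2):
    """CI = d1/Dx1 + d2/Dx2 for a combined fractional effect fa."""
    dx1 = dose_for_effect(fa, *drug1)
    dx2 = dose_for_effect(fa, *drug2)
    return d1 / dx1 + d2 / dx2

# Illustrative single-agent parameters (Dm = median-effect dose, m = slope):
bi2536 = (10.0, 1.5)   # hypothetical, nM
vcr = (2.0, 1.2)       # hypothetical, nM

# Combined doses (1.0 nM + 0.2 nM) achieving 90% effect in this toy setting:
ci = combination_index(1.0, 0.2, 0.9, bi2536, vcr)
```

With these toy numbers the combined doses are far below the single-agent doses needed for the same effect, so CI comes out well below 1, the qualitative pattern the abstract describes.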
Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making.
Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation increases as one possible measure of agricultural intensification that could limit the expansion of cropland required in response to climate change and growing food demand. This example shows that current impact model uncertainties pose an important challenge to long-term mitigation planning and must not be ignored in long-term strategic decision making.
Glioblastoma multiforme (GBM) is treated by surgical resection followed by radiochemotherapy. Bevacizumab is commonly deployed for anti‐angiogenic therapy of recurrent GBM; however, innate immune cells have been identified as instigators of resistance to bevacizumab treatment. We identified angiopoietin‐2 (Ang‐2) as a potential target in both naive and bevacizumab‐treated glioblastoma. Ang‐2 expression was absent in normal human brain endothelium, while the highest Ang‐2 levels were observed in bevacizumab‐treated GBM. In a murine GBM model, VEGF blockade resulted in endothelial upregulation of Ang‐2, whereas the combined inhibition of VEGF and Ang‐2 leads to extended survival, decreased vascular permeability, depletion of tumor‐associated macrophages, improved pericyte coverage, and increased numbers of intratumoral T lymphocytes. CD206+ (M2‐like) macrophages were identified as potential novel targets following anti‐angiogenic therapy. Our findings imply a novel role for endothelial cells in therapy resistance and identify endothelial cell/myeloid cell crosstalk mediated by Ang‐2 as a potential resistance mechanism. Therefore, combining VEGF blockade with inhibition of Ang‐2 may potentially overcome resistance to bevacizumab therapy.
This study aims at evaluating the combination of the tumor-necrosis-factor-related apoptosis-inducing ligand (TRAIL)-receptor 2 (TRAIL-R2)-specific antibody Drozitumab and the Smac mimetic BV6 in preclinical glioblastoma models. To this end, the effect of BV6 and/or Drozitumab on apoptosis induction and signaling pathways was analyzed in glioblastoma cell lines, primary glioblastoma cultures and glioblastoma stem-like cells. Here, we report that BV6 and Drozitumab synergistically induce apoptosis and reduce colony formation in several glioblastoma cell lines (combination index<0.1). Also, BV6 profoundly enhances Drozitumab-induced apoptosis in primary glioblastoma cultures and glioblastoma stem-like cells. Importantly, BV6 cooperates with Drozitumab to suppress tumor growth in two glioblastoma in vivo models including an orthotopic, intracranial mouse model, underlining the clinical relevance of these findings. Mechanistic studies reveal that BV6 and Drozitumab act in concert to trigger the formation of a cytosolic receptor-interacting protein (RIP) 1/Fas-associated via death domain (FADD)/caspase-8-containing complex and subsequent activation of caspase-8 and -3. BV6- and Drozitumab-induced apoptosis is blocked by the caspase inhibitor zVAD.fmk, pointing to caspase-dependent apoptosis. RNA interference-mediated silencing of RIP1 almost completely abolishes the BV6-conferred sensitization to Drozitumab-induced apoptosis, indicating that the synergism critically depends on RIP1 expression. In contrast, both necrostatin-1, a RIP1 kinase inhibitor, and Enbrel, a TNFα-blocking antibody, do not interfere with BV6/Drozitumab-induced apoptosis, demonstrating that apoptosis occurs independently of RIP1 kinase activity or an autocrine TNFα loop. In conclusion, the rational combination of BV6 and Drozitumab presents a promising approach to trigger apoptosis in glioblastoma, which warrants further investigation.
BACKGROUND: Vermeulen et al. 2014 published a meta-regression analysis of three relevant epidemiological US studies (Steenland et al. 1998, Garshick et al. 2012, Silverman et al. 2012) that estimated the association between occupational diesel engine exhaust (DEE) exposure and lung cancer mortality. The DEE exposure was measured as cumulative exposure to estimated respirable elemental carbon in μg/m(3)-years. Vermeulen et al. 2014 found a statistically significant dose-response association and described elevated lung cancer risks even at very low exposures.
METHODS: We performed an extended re-analysis using different modelling approaches (fixed and random effects regression analyses, Greenland/Longnecker method) and explored the impact of varying input data (modified coefficients of Garshick et al. 2012, results from Crump et al. 2015 replacing Silverman et al. 2012, modified analysis of Moehner et al. 2013).
RESULTS: We reproduced the individual and main meta-analytical results of Vermeulen et al. 2014. However, our analysis demonstrated a heterogeneity of the baseline relative risk levels between the three studies. This heterogeneity was reduced after the coefficients of Garshick et al. 2012 were modified while the dose coefficient dropped by an order of magnitude for this study and was far from being significant (P = 0.6). A (non-significant) threshold estimate for the cumulative DEE exposure was found at 150 μg/m(3)-years when extending the meta-analyses of the three studies by hockey-stick regression modelling (including the modified coefficients for Garshick et al. 2012). The data used by Vermeulen and colleagues led to the highest relative risk estimate across all sensitivity analyses performed. The lowest relative risk estimate was found after exclusion of the explorative study by Steenland et al. 1998 in a meta-regression analysis of Garshick et al. 2012 (modified), Silverman et al. 2012 (modified according to Crump et al. 2015) and Möhner et al. 2013. The meta-coefficient was estimated to be about 10-20 % of the main effect estimate in Vermeulen et al. 2014 in this analysis.
CONCLUSIONS: The findings of Vermeulen et al. 2014 should not be used without reservations in any risk assessment. This is particularly true for the low end of the exposure scale.
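The hockey-stick regression mentioned in the results models a flat relative risk below a threshold and a log-linear increase above it. A minimal sketch of that functional form follows; the slope and the use of 150 μg/m3-years as threshold are for illustration only, not the published fit:

```python
# Hedged sketch of the hockey-stick (threshold) dose-response form:
# ln RR(x) = beta * max(0, x - tau), i.e. no excess risk below tau.
# beta is a hypothetical slope; tau echoes the 150 μg/m3-years estimate
# mentioned in the abstract but is used here purely illustratively.
import math

def hockey_stick_log_rr(x, beta, tau):
    """Log relative risk: flat below threshold tau, linear above it."""
    return beta * max(0.0, x - tau)

tau = 150.0     # μg/m3-years cumulative exposure (illustrative threshold)
beta = 0.0008   # illustrative slope per μg/m3-years

rr_low = math.exp(hockey_stick_log_rr(100.0, beta, tau))   # below threshold
rr_high = math.exp(hockey_stick_log_rr(500.0, beta, tau))  # above threshold
```

Below the threshold the model returns RR = 1 exactly, which is what distinguishes it from the log-linear model without a threshold used in the original meta-regression.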
In recent years, there have been prominent calls for a new social contract that accords a more central role to citizens in health research. Typically, this has been understood as citizens and patients having a greater voice and role within the standard research enterprise. Beyond this, however, it is important that the renegotiated contract specifically addresses the oversight of a new, path-breaking approach to health research: participant-led research. In light of the momentum behind participant-led research and its potential to advance health knowledge by challenging and complementing traditional research, it is vital for all stakeholders to work together in securing the conditions that will enable it to flourish.
A wide variety of enzymatic pathways that produce specialized metabolites in bacteria, fungi and plants are known to be encoded in biosynthetic gene clusters. Information about these clusters, pathways and metabolites is currently dispersed throughout the literature, making it difficult to exploit. To facilitate consistent and systematic deposition and retrieval of data on biosynthetic gene clusters, we propose the Minimum Information about a Biosynthetic Gene cluster (MIBiG) data standard.
The article proposes connecting and extending analyses of (neoliberal) governing with non-subject-centered and affect-theoretical approaches. Through an analysis of how social policy and social work deal with homeless people, it traces the gains that can result from combining governmental and affect-theoretical perspectives. From a governmental perspective, the article first shows how affects and emotions become objects of caring intervention in spaces of assisted living for the homeless. Assisted living employs micro-techniques that work toward a "balanced" emotional attachment to living spaces and their inventory. It is pervaded by problematizations that interpret homelessness as an emotional attitude of restlessness and unrest, as a lack of attachment to places and things. At the same time, residents are often assumed to have an excessive affective attachment to things, which supposedly prevents so-called "hoarders" and "messies" from keeping a socially inconspicuous household. A governmental analysis can make visible the therapeutic rationality underlying these problematizations. A governmental analysis alone, however, offers no way to develop alternative narratives about the significance of affective relationships for dwelling. Drawing on various affect-theoretical approaches, the article therefore also pursues the question of how dwelling and the significance of attachments to places and things can be thought beyond therapeutizing perspectives. Non-subject-centered concepts of affectivity make such alternative narratives possible and open up new lines of flight for critique: dwelling becomes visible as always already "assisted", embedded in a network of intersubjective and interobjective relationships.
Background: Although the risk of developing colorectal cancer (CRC) is 2-4 times higher for persons with a positive family history, risk-adapted screening programs for family members of CRC patients do not exist in the German health care system. CRC screening recommendations for persons under 55 years of age with a family predisposition have been published in several guidelines.
The primary aim of this study is to determine the frequency of a positive family history of CRC (1st degree relatives with CRC) among 40- to 54-year-old persons in a general practitioner (GP) setting in Germany. Secondary aims are to determine the frequency of colorectal neoplasms (CRC and advanced adenomas) in 1st degree relatives of CRC patients and to identify the variables (e.g. demographic, genetic, epigenetic and proteomic characteristics) associated with them. This study also explores whether evidence-based information contributes to informed decisions and how screening participation correlates with anxiety and (anticipated) regret.
Methods/Design: Prior to the beginning of the study, the GP team (GP and one health care assistant) in around 50 practices will be trained, and about 8,750 persons that are registered with them will be asked to complete the “Network against colorectal cancer” questionnaire. The 10 % who are expected to have a positive family history will then be invited to give their informed consent to participate in the study. All individuals with positive family history will be provided with evidence-based information and prevention strategies. We plan to examine each participant’s family history of CRC in detail and to collect information on further variables (e.g. demographics) associated with increased risk. Additional stool and blood samples will be collected from study-participants who decide to undergo a colonoscopy (n ~ 350) and then analyzed at the German Cancer Research Center (DKFZ) Heidelberg to see whether further relevant variables are associated with an increased risk of CRC. One screening list and four questionnaires will be used to collect the data, and a detailed statistical analysis plan will be provided before the database is closed (expected to be June 30, 2015).
Discussion: It is anticipated that when persons with a family history of colorectal cancer have been provided with professional advice by the practice team, there will be an increase in the availability of valid information on the frequency of affected individuals and an increase in the number of persons making informed decisions. We also expect to identify further variables that are associated with colorectal cancer. This study therefore has translational relevance from lab to practice.
Trial registration: German Clinical Trials Register DRKS00006277
Profiles of CFC-11 (CCl3F) and CFC-12 (CCl2F2) of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European satellite Envisat have been retrieved from versions MIPAS/4.61 to MIPAS/4.62 and MIPAS/5.02 to MIPAS/5.06 level-1b data using the scientific level-2 processor run by Karlsruhe Institute of Technology (KIT), Institute of Meteorology and Climate Research (IMK) and Consejo Superior de Investigaciones Científicas (CSIC), Instituto de Astrofísica de Andalucía (IAA). These profiles have been compared to measurements taken by the balloon-borne cryosampler, Mark IV (MkIV) and MIPAS-Balloon (MIPAS-B), the airborne MIPAS-STRatospheric aircraft (MIPAS-STR), the satellite-borne Atmospheric Chemistry Experiment Fourier transform spectrometer (ACE-FTS) and the High Resolution Dynamic Limb Sounder (HIRDLS), as well as the ground-based Halocarbon and other Atmospheric Trace Species (HATS) network for the reduced spectral resolution period (RR: January 2005–April 2012) of MIPAS. ACE-FTS, MkIV and HATS also provide measurements during the high spectral resolution period (full resolution, FR: July 2002–March 2004) and were used to validate MIPAS CFC-11 and CFC-12 products during that time, as well as profiles from the Improved Limb Atmospheric Spectrometer, ILAS-II. In general, we find that MIPAS shows slightly higher values for CFC-11 at the lower end of the profiles (below ∼ 15 km) and in a comparison of HATS ground-based data and MIPAS measurements at 3 km below the tropopause. Differences range from approximately 10 to 50 pptv ( ∼ 5–20 %) during the RR period. In general, differences are slightly smaller for the FR period. An indication of a slight high bias at the lower end of the profile exists for CFC-12 as well, but this bias is far less pronounced than for CFC-11 and is not as obvious in the relative differences between MIPAS and any of the comparison instruments. 
Differences at the lower end of the profile (below ∼ 15 km) and in the comparison of HATS and MIPAS measurements taken at 3 km below the tropopause mainly stay within 10–50 pptv (corresponding to ∼ 2–10 % for CFC-12) for the RR and the FR period. Between ∼ 15 and 30 km, most comparisons agree within 10–20 pptv (10–20 %), apart from ILAS-II, which shows large differences above ∼ 17 km. Overall, relative differences are usually smaller for CFC-12 than for CFC-11. For both species – CFC-11 and CFC-12 – we find that differences at the lower end of the profile tend to be larger at higher latitudes than in tropical and subtropical regions. In addition, MIPAS profiles have a maximum in their mixing ratio around the tropopause, which is most obvious in tropical mean profiles. Comparisons of the standard deviation in a quiescent atmosphere (polar summer) show that only the CFC-12 FR error budget can fully explain the observed variability, while for the other products (CFC-11 FR and RR and CFC-12 RR) only two-thirds to three-quarters can be explained. Investigations regarding the temporal stability show very small negative drifts in MIPAS CFC-11 measurements. These instrument drifts vary between ∼ 1 and 3 % decade−1. For CFC-12, the drifts are also negative and close to zero up to ∼ 30 km. Above that altitude, larger drifts of up to ∼ 50 % decade−1 appear which are negative up to ∼ 35 km and positive, but of a similar magnitude, above.
Reconstructions of biomass burning from sediment charcoal records to improve data–model comparisons
(2015)
The location, timing, spatial extent, and frequency of wildfires are changing rapidly in many parts of the world, producing substantial impacts on ecosystems, people, and potentially climate. Paleofire records based on charcoal accumulation in sediments enable modern changes in biomass burning to be considered in their long-term context. Paleofire records also provide insights into the causes and impacts of past wildfires and emissions when analyzed in conjunction with other paleoenvironmental data and with fire models. Here we present new 1000-year and 22 000-year trends and gridded biomass burning reconstructions based on the Global Charcoal Database version 3 (GCDv3), which includes 736 charcoal records (57 more than in version 2). The new gridded reconstructions reveal the spatial patterns underlying the temporal trends in the data, allowing insights into likely controls on biomass burning at regional to global scales. In the most recent few decades, biomass burning has sharply increased in both hemispheres but especially in the north, where charcoal fluxes are now higher than at any other time during the past 22 000 years. We also discuss methodological issues relevant to data–model comparisons and identify areas for future research. Spatially gridded versions of the global data set from GCDv3 are provided to facilitate comparison with and validation of global fire simulations.
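Compositing heterogeneous charcoal records before gridding typically requires standardizing each record so that influx values with very different units become comparable; a sketch of a simple z-score composite (an illustration of the general approach, not the GCDv3 processing pipeline):

```python
from statistics import mean, stdev

def to_zscores(series, base):
    """Standardize a charcoal-influx series against a base-period mean and
    standard deviation, so records with different units become comparable."""
    m, s = mean(base), stdev(base)
    return [(v - m) / s for v in series]

def composite(records):
    """Average the z-scored records at each common time step."""
    return [mean(vals) for vals in zip(*records)]

# Two invented records with different magnitudes but the same trend
r1 = to_zscores([1, 2, 3, 4, 5], base=[1, 2, 3])
r2 = to_zscores([10, 20, 30, 40, 50], base=[10, 20, 30])
print(composite([r1, r2]))   # → [-1.0, 0.0, 1.0, 2.0, 3.0]
```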
Amines are potentially important for atmospheric new particle formation, but their concentrations are usually low with typical mixing ratios in the pptv range or even smaller. Therefore, the demand for highly sensitive gas-phase amine measurements has emerged in the last several years. Nitrate chemical ionization mass spectrometry (CIMS) is routinely used for the measurement of gas-phase sulfuric acid in the sub-pptv range. Furthermore, extremely low volatile organic compounds (ELVOCs) can be detected with a nitrate CIMS. In this study we demonstrate that a nitrate CIMS can also be used for the sensitive measurement of dimethylamine (DMA, (CH3)2NH) using the NO3−•(HNO3)1 − 2• (DMA) cluster ion signal. Calibration measurements were made at the CLOUD chamber during two different measurement campaigns. Good linearity between 0 and ∼ 120 pptv of DMA as well as a sub-pptv detection limit of 0.7 pptv for a 10 min integration time are demonstrated at 278 K and 38 % RH.
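A calibration like the one described amounts to a linear fit of instrument signal against known DMA mixing ratio, with the detection limit derived from the blank scatter; a hedged sketch (the calibration points, blank noise, and 3-sigma criterion are invented assumptions, not the CLOUD data):

```python
def fit_calibration(mixing_ratios, signals):
    """Ordinary least-squares line: signal = a + b * mixing_ratio."""
    n = len(signals)
    mx = sum(mixing_ratios) / n
    my = sum(signals) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(mixing_ratios, signals))
         / sum((x - mx) ** 2 for x in mixing_ratios))
    return my - b * mx, b

def detection_limit(blank_sd, slope, k=3.0):
    """k-sigma detection limit in the units of the calibration axis (pptv)."""
    return k * blank_sd / slope

# Invented calibration points (pptv vs. normalized signal) and blank noise
x = [0, 20, 40, 80, 120]
y = [0.1, 2.1, 4.0, 8.1, 12.0]
a, b = fit_calibration(x, y)
print(round(detection_limit(0.02, b), 2))   # sub-pptv for this toy data set
```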
MIPAS-Envisat is a satellite-borne sensor which measured vertical profiles of a wide range of trace gases from 2002 to 2012 using IR emission spectroscopy. We present geophysical validation of the MIPAS-Envisat operational retrieval (version 6.0) of N2O, CH4, CFC-12, and CFC-11 by the European Space Agency (ESA). The geophysical validation data are derived from measurements of samples collected by a cryogenic whole air sampler flown to altitudes of up to 34 km by means of large scientific balloons. In order to increase the number of coincidences between the satellite and the balloon observations, we applied a trajectory matching technique. The results are presented for different time periods owing to a change in the spectral resolution of MIPAS in early 2005. Retrieval results for N2O, CH4, and CFC-12 show partly good agreement in some altitude regions, which differs between the periods of different spectral resolution. The more recent low-spectral-resolution data above 20 km altitude agree within the combined uncertainties, while the earlier high-spectral-resolution data set tends to underestimate these species above 25 km and significantly overestimates the mixing ratios of N2O, CH4, and CFC-12 below 20 km. These differences need to be considered when using these data. The CFC-11 results from the operational retrieval version 6.0 cannot be recommended for scientific studies due to a systematic overestimation of the CFC-11 mixing ratios at all altitudes.
Modelling short-term variability in carbon and water exchange in a temperate Scots pine forest
(2015)
The vegetation–atmosphere carbon and water exchange at one particular site can strongly vary from year to year, and understanding this interannual variability in carbon and water exchange (IAVcw) is a critical factor in projecting future ecosystem changes. However, the mechanisms driving this IAVcw are not well understood. We used data on carbon and water fluxes from a multi-year eddy covariance study (1997–2009) in a Dutch Scots pine forest and forced a process-based ecosystem model (Lund–Potsdam–Jena General Ecosystem Simulator; LPJ-GUESS) with local data to, firstly, test whether the model can explain IAVcw and seasonal carbon and water exchange from direct environmental factors only. Initial model runs showed low correlations with estimated annual gross primary productivity (GPP) and annual actual evapotranspiration (AET), while monthly and daily fluxes showed high correlations. The model underestimated GPP and AET during winter and drought events. Secondly, we adapted the temperature inhibition function of photosynthesis to account for the observation that at this particular site, trees continue to assimilate at very low atmospheric temperatures (up to daily averages of −10 °C), resulting in a net carbon sink in winter. While we were able to improve daily and monthly simulations during winter by lowering the modelled minimum temperature threshold for photosynthesis, this did not increase explained IAVcw at the site. Thirdly, we implemented three alternative hypotheses concerning water uptake by plants in order to test which one best corresponds with the data. In particular, we analyse the effects during the 2003 heatwave. These simulations revealed a strong sensitivity of the modelled fluxes during dry and warm conditions, but no single formulation was consistently superior in reproducing the data for all timescales and the overall model–data match for IAVcw could not be improved. 
Most probably, access to deep soil water leads to the higher AET and GPP simulated during the heatwave of 2003. We conclude that photosynthesis at lower temperatures than assumed in most models can be important for winter carbon and water fluxes in pine forests. Furthermore, details of the model representations of water uptake, which are often overlooked, need further attention, and deep water access should be treated explicitly.
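The lowered temperature threshold for photosynthesis mentioned above is commonly implemented as a ramp scaling assimilation between a minimum and an optimum temperature; a sketch of such a scalar (the threshold values are illustrative, not the calibrated LPJ-GUESS parameters):

```python
def temperature_scalar(t_air_c, t_min=-10.0, t_opt=10.0):
    """Scaling factor for potential photosynthesis: 0 at/below t_min,
    rising linearly to 1 at t_opt (degrees Celsius)."""
    if t_air_c <= t_min:
        return 0.0
    if t_air_c >= t_opt:
        return 1.0
    return (t_air_c - t_min) / (t_opt - t_min)

# Lowering t_min from 0 to -10 °C lets modelled trees assimilate in winter
for t in (-12.0, -5.0, 0.0, 15.0):
    print(t, temperature_scalar(t))
```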
We present the application of time-of-flight mass spectrometry (TOF MS) for the analysis of halocarbons in the atmosphere after cryogenic sample preconcentration and gas chromatographic separation. For the described field of application, the quadrupole mass spectrometer (QP MS) is a state-of-the-art detector. This work aims at comparing two commercially available instruments, a QP MS and a TOF MS, with respect to mass resolution, mass accuracy, stability of the mass axis and instrument sensitivity, detector sensitivity, measurement precision and detector linearity. Both mass spectrometers are operated on the same gas chromatographic system by splitting the column effluent to both detectors. The QP MS had to be operated in optimised single ion monitoring (SIM) mode to achieve a sensitivity which could compete with the TOF MS. The TOF MS provided full mass range information in any acquired mass spectrum without losing sensitivity. Whilst the QP MS showed the performance already achieved in earlier tests, the sensitivity of the TOF MS was on average higher than that of the QP MS in the "operational" SIM mode by a factor of up to 3, reaching detection limits of less than 0.2 pg. Measurement precision determined for the whole analytical system was as good as 0.2 %, depending on substance and sampled volume. The TOF MS instrument used for this study displayed significant non-linearities of up to 10 % for two-thirds of all analysed substances.
We present the characterization and application of a new gas chromatography time-of-flight mass spectrometry instrument (GC-TOFMS) for the quantitative analysis of halocarbons in air samples. The setup comprises three fundamental enhancements compared to our earlier work (Hoker et al., 2015): (1) full automation, (2) a mass resolving power R = m/Δm of the TOFMS (Tofwerk AG, Switzerland) increased up to 4000 and (3) a fully accessible data format of the mass spectrometric data. Automation in combination with the accessible data allowed an in-depth characterization of the instrument. Mass accuracy was found to be approximately 5 ppm on average after automatic recalibration of the mass axis in each measurement. A TOFMS configuration giving R = 3500 was chosen to provide an R-to-sensitivity ratio suitable for our purpose. Calculated detection limits are as low as a few femtograms by means of the accurate mass information. The precision for substance quantification was at best 0.15 % for an individual measurement and in general mainly determined by the signal-to-noise ratio of the chromatographic peak. Detector non-linearity was found to be insignificant up to a mixing ratio of roughly 150 ppt at 0.5 L sampled volume. At higher concentrations, non-linearities of a few percent were observed (precision level: 0.2 %) but could be attributed to a potential source within the detection system. A straightforward correction for those non-linearities was applied in data processing, again by exploiting the accurate mass information. Based on the overall characterization results, the GC-TOFMS instrument was found to be very well suited for the task of quantitative halocarbon trace gas observation and a big step forward compared to scanning quadrupole MS with low mass resolving power and a TOFMS technique reported to be non-linear and restricted by a small dynamic range.
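A slightly saturating detector of the kind described can be corrected in data processing by fitting a response model and inverting it; a generic sketch (the quadratic response model and its coefficient are assumptions for illustration, not the instrument's measured behaviour):

```python
def make_response(alpha):
    """Toy detector: signal = amount * (1 - alpha * amount), i.e. a response
    that falls a few percent below linear at high loadings."""
    return lambda amount: amount * (1.0 - alpha * amount)

def correct_signal(signal, alpha):
    """Invert signal = a - alpha * a**2 for the true amount a
    (smaller root of the quadratic, valid below saturation)."""
    return (1.0 - (1.0 - 4.0 * alpha * signal) ** 0.5) / (2.0 * alpha)

alpha = 2e-4                          # ~3 % signal loss at amount = 150
respond = make_response(alpha)
for true_amount in (10.0, 75.0, 150.0):
    measured = respond(true_amount)
    print(round(measured, 2), round(correct_signal(measured, alpha), 2))
```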
Be it in the case of opening a website, sending an email, or high-frequency trading, bits and bytes of information have to cross numerous nodes at which micro-decisions are made. These decisions concern the most efficient path through the network, the processing speed, or the priority of incoming data packets.
Despite their multifaceted nature, micro-decisions are a dimension of control and surveillance in the twenty-first century that has received little critical attention. They represent the smallest unit and the technical precondition of a contemporary network politics – and of our potential opposition to it. The current debates regarding net neutrality and Edward Snowden’s revelation of NSA surveillance are only the tip of the iceberg. What is at stake is nothing less than the future of the Internet as we know it.
Knowledge about mass discrimination effects in a chemical ionization mass spectrometer (CIMS) is crucial for quantifying, e.g., the recently discovered extremely low volatile organic compounds (ELVOCs) and other compounds for which no calibration standard exists so far. Here, we present a simple way of estimating mass discrimination effects of a nitrate-based chemical ionization atmospheric pressure interface time-of-flight (CI-APi-TOF) mass spectrometer. Characterization of the mass discrimination is achieved by adding different perfluorinated acids to the mass spectrometer in amounts sufficient to deplete the primary ions significantly. The relative transmission efficiency can then be determined by comparing the decrease of signals from the primary ions and the increase of signals from the perfluorinated acids at higher masses. This method is already in use for PTR-MS; however, its application to a CI-APi-TOF brings additional difficulties, namely clustering and fragmentation of the measured compounds, which can be treated with statistical analysis of the measured data, leading to self-consistent results. We also compare this method to a transmission estimation obtained with a setup using an electrospray ion source, a high-resolution differential mobility analyzer and an electrometer, which estimates the transmission of the instrument without the CI source. Both methods give different transmission curves, indicating non-negligible mass discrimination effects of the CI source. The absolute transmission of the instrument without the CI source was estimated with the HR-DMA method to plateau between the m∕z range of 127 and 568 Th at around 1.5 %; however, with the CI source included, the depletion method showed a steady increase in relative transmission efficiency from the m∕z range of the primary ion (mainly at 62 Th) to around 550 Th by a factor of around 5.
The main advantages of the depletion method are that the instrument is used in the same operation mode as during standard measurements and no knowledge of the absolute amount of the measured substance is necessary, which results in a simple setup.
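The core arithmetic of the depletion method is a comparison of the primary-ion signal lost with the product-ion signal gained; a minimal sketch under the stated assumption that each depleted primary ion yields one detected product ion (all count rates invented):

```python
def relative_transmission(primary_before, primary_after,
                          product_before, product_after):
    """Transmission at the product m/z relative to the primary m/z:
    product-ion signal gained divided by primary-ion signal lost,
    assuming one product ion per depleted primary ion."""
    depleted = primary_before - primary_after   # counts/s lost at primary m/z
    gained = product_after - product_before     # counts/s gained at product m/z
    return gained / depleted

# Invented count rates: half the primary ions are depleted, while the
# heavier product ions appear with a 5x larger signal change
print(relative_transmission(1e4, 5e3, 0.0, 2.5e4))   # → 5.0
```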
Recently, significant advances have been made in the collection, detection, and characterization of ice nucleating particles (INP). Ice nuclei are particles that facilitate the heterogeneous formation of ice within the atmospheric aerosol by lowering the free energy barrier to spontaneous nucleation and growth of ice from atmospheric water and/or vapor. The Frankfurt isostatic diffusion chamber (FRIDGE) is an INP collection and offline detection system that has become widely deployed and shows additional potential for ambient measurements. Since its initial development, FRIDGE has gone through several iterations and improvements. Here we describe improvements that have been made in the collection and analysis techniques. We detail the uncertainties inherent in the measurement method and suggest a systematic method of error analysis for FRIDGE measurements. Thus what is presented herein should serve as a foundation for the dissemination of all current and future measurements using FRIDGE instrumentation.
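One generic ingredient of such an error budget is the Poisson counting uncertainty on the number of ice crystals counted on a sampling substrate; a sketch of counting statistics only (an illustration, not the FRIDGE error model itself):

```python
def inp_concentration(n_counted, volume_sampled_l):
    """INP number concentration per litre of sampled air together with
    its sqrt(N) Poisson standard error."""
    conc = n_counted / volume_sampled_l
    sigma = n_counted ** 0.5 / volume_sampled_l
    return conc, sigma

conc, sigma = inp_concentration(25, 100.0)
print(conc, sigma)   # → 0.25 0.05, i.e. 20 % relative uncertainty
```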
Background: In addition to controlled post-translational modifications proteins can be modified with highly reactive compounds. Usually this leads to a compromised functionality of the protein. Methylglyoxal is one of the most common agents that attack arginine residues. Methylglyoxal is also regarded as a pro-oxidant that affects cellular redox homeostasis by contributing to the formation of reactive oxygen species. Antioxidant enzymes like catalase are required to protect the cell from oxidative damage. These enzymes are also targets for methylglyoxal-mediated modification which could severely affect their catalytic activity in breaking down reactive oxygen species to less reactive or inert compounds.
Results: Here, bovine liver catalase was incubated with high levels of methylglyoxal to induce its glycation. This treatment did not lead to a pronounced reduction of enzymatic activity. Subsequently methylglyoxal-mediated arginine modifications (hydroimidazolone and dihydroxyimidazolidine) were quantitatively analysed by sensitive nano high performance liquid chromatography/electron spray ionisation/tandem mass spectrometry. Whereas several arginine residues displayed low to moderate levels of glycation (e.g., Arg93, Arg365, Arg444) Arg354 in the active centre of catalase was never found to be modified.
Conclusions: Bovine liver catalase is able to tolerate very high levels of the modifying α-oxoaldehyde methylglyoxal so that its essential enzymatic function is not impaired.
Electronic supplementary material: The online version of this article (doi:10.1186/s13104-015-1793-5) contains supplementary material, which is available to authorized users.
Vimentin is currently used to differentiate between malignant renal carcinomas and benign oncocytomas. Recent reports showing Vimentin-positive oncocytomas seriously question the validity of this diagnostic approach. Vimentin 3 is a spliced variant that ends with a unique C-terminal sequence after exon 7, which differentiates it from the full-length version with 9 exons. The protein sizes therefore differ: the full-length Vimentin is ~57 kDa and the truncated version ~47 kDa. We designed an antibody, called Vim3, against the unique C-terminal ending of the Vimentin 3 variant. Using immunohistology, immunofluorescence, Western blot, and qRT-PCR analysis, a Vim3 overexpression was detectable exclusively in oncocytoma, making the detection of Vim3 a potential specific marker for benign kidney tumors. This antibody is the first to clearly differentiate benign oncocytomas from the mimicking eosinophilic variants of the RCCs. This differentiation between malignant and benign RCCs is essential for operative planning, follow-up therapy, and patients' survival. In the future, Vimentin antibodies in routine pathology have to be applied with care: consideration must be given to Vimentin-specific binding epitopes, otherwise a misdiagnosis of the patients' tumor samples may result.
The Central Nigerian Nok Culture has been well known for its elaborate terracotta sculptures and evidence of iron metallurgy since its discovery by British archaeologist Bernard Fagg in the 1940s. Dating to the first millennium BCE, both the sculptures and the ironworking are among the earliest of their kind in sub-Saharan Africa. After a period of destruction of Nok sites by looting, scientific research resumed in 2006, when a team of archaeologists from Goethe University in Germany started to explore different aspects of the Nok Culture, one of which focused on chronology. Establishing a chronology for the Nok Culture employed two approaches: a comprehensive pottery analysis based on decoration and form elements, and a wealth of radiocarbon dates from a large number of excavated sites. This volume presents the radiocarbon dates and the methods, data and results of the chronological pottery analysis, conducted within the scope of a dissertation project completed in 2015. Combining the two strands of information, a chronology emerges that divides the Nok Culture into three phases from the middle of the second millennium BCE to the last centuries BCE and defines seven pottery groups which can be arranged, to some extent, in a chronological order.
Capital maintenance rules are part of a legal capital regime that consists of rules on raising capital and rules on maintaining it. The function of these rules is the protection of the corporation’s creditors. This is evidenced by the fact that in public as well as private companies the provisions on legal capital are not open to disapplication or variation even with unanimous shareholder consent. Thus, providing the company with a minimum of funding and ensuring equal treatment of shareholders are mere reflexes of creditor protection or, at best, ancillary purposes of legal capital. Legal capital is part of a corporation’s equity. The key feature of equity is that it ranks behind the claims of other stakeholders in the distribution of a corporation’s assets. Consequently, equity will also be the first part of a corporation’s funds to be depleted by losses. Capital maintenance rules seek to enforce this order of priority of different groups of stakeholders by restricting distributions to shareholders. Such restrictions are not unique to legal systems that have adopted a legal capital regime. A prominent example of a statute that has eliminated mandatory legal capital is the Delaware General Corporation Law. § 154 DGCL leaves it up to the directors to decide whether any part of the consideration received by the corporation for its shares shall be attributed to capital. Thus, a Delaware corporation need not have any stated capital. This has significant impact on the funds available for distribution to shareholders. Pursuant to § 170 (a) DGCL dividends may only be paid out of surplus or, in the absence of surplus, out of net profits of the current or the preceding fiscal year. § 154 DGCL defines surplus as the excess of a corporation’s net assets over the amount of its capital, and net assets as the amount by which total assets exceed total liabilities.
A corporation without stated capital may, therefore, distribute all of its net assets to its shareholders and continue business without any equity on its balance sheet. This highlights the difference between the different approaches to creditor protection in Germany and the U.S. Both legal systems acknowledge the priority of creditors over shareholders in corporate distributions. However, German law seeks to give creditors additional comfort by requiring companies to raise and maintain additional layers of assets above and beyond those corresponding to the company’s liabilities that may not be depleted by way of distributions to shareholders. While private companies must merely raise and maintain their stated capital, public companies are required to raise and maintain additional equity accounts unavailable for distributions to shareholders such as the share premium account1 and the legal reserve.2
In recent years a number of objections have been raised against this concept of creditor protection. Critics argue that contractual arrangements are a more efficient means for protecting the interests of creditors.3 Capital maintenance does not prevent creditors from negotiating for more stringent protection of their claims such as collateral or financial covenants. It does, however, provide a minimum standard of protection for the benefit of creditors who lack the commercial experience or the bargaining power or who, like tort victims, are simply unable to negotiate for contractual safeguards. Capital maintenance ensures that their protection against excessive distributions does not depend on large creditors who are free to waive covenants that, in effect, benefit all creditors in exchange for individual arrangements that work exclusively in their favour.
Bonds are typically split into numerous partial debentures (Teilschuldverschreibungen), which are sold to different investors. This creates, corresponding to the number of partial debentures in circulation, a separate debt relationship between the issuer and each investor. If an investor holds several partial debentures, several legally distinct obligations with identical content accordingly arise.1 Each of these may have a different legal fate; for example, they may be transferred separately from one another. Apart from atypical arrangements, each may also be terminated individually by the creditor if the bond terms make no provision in this respect. The following remarks first address the disputed question of whether a creditor may also terminate for cause pursuant to §§ 490(1), 314 BGB (sections I.-VII. below).
The so-called Business Judgment Rule was inserted into the Aktiengesetz as the new § 93(1) sentence 2 by Art. 1 no. 1a of the UMAG,1 following corresponding proposals in the literature.2 In substance, it had already been recognized in case law3 and scholarship.4 On the common understanding, the Business Judgment Rule is meant to provide a "safe harbour" protecting organ members from having entrepreneurial failures sanctioned as breaches of the duty of care on the basis of hindsight. According to the clearly prevailing view, the significance of § 93(1) sentence 2 AktG is not limited to clarifying, through express regulation of elements of the duty of care, that the strict standard of the prudent and conscientious manager does not amount to liability for mere lack of success. Rather, the Business Judgment Rule is understood as a privilege compared to the otherwise applicable liability standard of § 93(1) sentence 1 AktG. Express positions on how this privilege operates range from the assumption of entrepreneurial discretion exempt from judicial review,5 through its classification as an irrebuttable presumption of objectively lawful conduct,6 to the view that, within the scope of the Business Judgment Rule, liability towards the company arises only above the threshold of gross negligence.7 But even the numerous contributions that do not expressly address the question of reduced liability presuppose a privileging effect of the Business Judgment Rule. Otherwise, the detailed considerations on distinguishing entrepreneurial decisions from other decisions, in particular legally bound ones, to which a stricter standard of care and liability is apparently meant to apply, would have no practical significance whatsoever.
1. With regard to the liability of organ members towards the company for misjudgments of the legal situation, no standard other than that governing liability for errors in entrepreneurial decisions applies (see II. below).
2. The Business Judgment Rule of § 93(1) sentence 2 AktG contains no liability privilege; in particular, it does not, as a matter of principle, release organ members from liability for gross negligence. Rather, it merely concretizes the standard of care required of a prudent and conscientious manager and makes clear that liability cannot be founded on hindsight. For this reason, it is unobjectionable that liability for entrepreneurial, legal and other errors is governed by uniform liability principles (see III. below).
Ribosomes are the central cellular assembly lines for protein synthesis. To cope with its translational needs, a proliferating mammalian cell can produce up to 7500 ribosomes per minute. However, under growth-limiting conditions, such as nutrient depletion, ribosome synthesis is rapidly shut down, exemplifying the importance of a tight coordination between ribosome supply and cellular energy status. In addition to this quantitative regulation, a strict quality control of ribosome synthesis is equally important, because alterations in the composition or function of ribosomes can lead to a variety of pathologies. To cope with these challenges, a highly regulated, multi-step pathway of ribosome biogenesis has evolved. In mammals this pathway generates the mature 80S ribosomes that comprise the large 60S and the small 40S subunits. Together they contain around 80 ribosomal proteins and the 28S, 18S, 5.8S and 5S rRNAs. The 28S, 5.8S and 5S rRNAs are assembled into the large subunit, while the 18S rRNA is part of the small subunit. Ribosome biogenesis is a multi-step cellular process whose specific stages occur in distinct subcellular compartments. Transcription of the 47S rRNA, which is the precursor for the 28S, 18S and 5.8S species, occurs in the nucleolus. Modification of distinct bases and early processing of this precursor also take place in the nucleolus. Subsequently, the 40S and 60S pre-ribosomes take separate maturation routes through the nucleoplasm before their export and final assembly in the cytoplasm. The various stages of pre-ribosomal maturation require the constant and sequential action of a large number of non-ribosomal proteins, known as trans-acting factors. These factors coordinate the delicate remodeling of the pre-ribosomal intermediates and thereby ensure proper progression of the maturation process. The remodeling events largely depend on the dynamics of post-translational modifications, such as phosphorylation or SUMOylation.
This requires that the enzymes controlling these modifications are properly targeted to their sites of activity as they fulfill their functions within specific compartments. Here we studied the regulatory principles that govern the subcellular partitioning of the SUMO-specific isopeptidase SENP3 and its associated factor PELP1. Previous work from our laboratory has delineated the importance of the SUMO system for proper ribosome biogenesis in mammalian cells. In particular, we have shown that SENP3 is critically involved in 28S rRNA formation, which is a key step for pre-60S subunit maturation. A critical involvement of SENP3 at this stage of the maturation process is in agreement with the observed enrichment of SENP3 in the nucleolus, since 28S rRNA processing is considered to occur in the nucleolus. Our subsequent work identified the nucleolar scaffold protein NPM1 and the ribosomal trans-acting factor PELP1 as bona fide substrates of SENP3. For both proteins we could demonstrate modification by SUMO2/3 and define SENP3 as the demodifying enzyme. Depletion of SENP3 enhanced the conjugation of SUMO to both proteins and concomitantly reduced conversion of the 32S pre-rRNA to the mature 28S rRNA. PELP1 is part of a larger protein complex consisting of the core components PELP1, TEX10 and WDR18. We could show that the balanced SUMOylation/deSUMOylation of PELP1 controls the nucleolar/nucleoplasmic distribution of this complex. Enhanced SUMOylation, which is observed in the absence of SENP3, triggers the nucleolar release of the complex, suggesting that SENP3-mediated deSUMOylation controls the dynamics of nucleolar trans-acting factors. Based on these findings, we first wanted to understand in which cellular compartment(s) SENP3 exerts its function on 28S maturation. Next, we wanted to tackle the question of how the subcellular distribution of SENP3 is controlled. Finally,
we addressed the question of how the SUMOylation of PELP1 determines the subnuclear distribution of the PELP1 complex. This work initially revealed that the nucleolar localization of SENP3 is crucial for proper 28S rRNA formation and 60S ribosome maturation. Importantly, we could demonstrate that the nucleolar compartmentalization of SENP3 depends on its direct physical interaction with NPM1. Further, we could show that the amino-terminal region of SENP3 is necessary for its binding to NPM1 and nucleolar recruitment. Strikingly, this interaction requires the phosphorylation of SENP3, which is brought about by the mTOR kinase. By in vitro kinase assays and mass-spectrometric approaches we identified five serine/threonine residues within the amino-terminal region of SENP3 that are targeted by mTOR (S/T 25, 26, 141, 142, 143). We could further demonstrate by mutagenesis that these sites in SENP3 are in fact critical for the phospho-dependent binding of SENP3 to NPM1 and its nucleolar recruitment.
Consistent with these data, we found that chemical inhibitors of the mTOR kinase trigger the nucleolar release of SENP3 and impair its interaction with NPM1. Strikingly, this is accompanied by severe 28S rRNA maturation defects, demonstrating the physiological importance of mTOR signaling in the regulation of SENP3 function and rRNA processing. By specifically depleting components of either mTORC1 or mTORC2, we could attribute the observed effects to signaling by mTORC1 rather than mTORC2. In an attempt to identify the negative regulators of SENP3 phosphorylation, we found PP1-γ to be the candidate phosphatase in this pathway. We observed a strong physical interaction of SENP3 with PP1-γ and a loss of SENP3 nucleolar localization upon ectopic expression of PP1-γ. Thus we could define mTOR/PP1-γ-mediated phosphorylation/dephosphorylation of SENP3 as an important
mechanism in the control of ribosome maturation. Given that mTOR activity is controlled by nutrient availability, SENP3 thus functions as a sensor that couples ribosome synthesis to the nutrient status of the cell. The second part of this work delineated the role of SUMOylated PELP1 in the nucleolar/nucleoplasmic partitioning of the SENP3-PELP1 complex. It revealed that the AAA-ATPase MDN1 binds preferentially to SUMO-modified PELP1 and likely segregates SUMOylated PELP1 from nucleolar pre-60S particles. We initially found that the PELP1 complex associates with MDN1, a factor known to be involved in 28S rRNA maturation. Notably, depletion of MDN1 led to an enhanced accumulation of the PELP1 complex in the nucleolus and a strong association of PELP1 with pre-60S particles, suggesting that MDN1 is required for the release of this complex from pre-ribosomes. Intriguingly, the interaction of PELP1 with MDN1 requires SUMO2/3, and SUMOylated PELP1 shows enhanced binding to MDN1 compared with unmodified PELP1. Taken together, this work provides new insights into the control of SENP3-PELP1 complex dynamics. We could define several layers of coordinated spatial regulation of SENP3 and the PELP1 complex. This work therefore underscores the crucial importance of dynamic post-translational modifications for the control of ribosome maturation.
A record number of 39 209 HSCT in 34 809 patients (14 950 allogeneic (43%) and 19 859 autologous (57%)) were reported by 658 centers in 48 countries to the 2013 survey. Trends include more growth in allogeneic than in autologous HSCT, increasing use of sibling and unrelated donors, and a pronounced increase in haploidentical family donors compared with cord blood donors for patients without a matched related or unrelated donor. Main indications were leukemias, 11 190 (32%; 96% allogeneic); lymphoid neoplasias, 19 958 (57%; 11% allogeneic); solid tumors, 1543 (4%; 4% allogeneic); and nonmalignant disorders, 1975 (6%; 91% allogeneic). In patients without a matched sibling or unrelated donor, alternative donors are used. Since 2010 there has been a marked increase of 96% in the number of transplants performed from haploidentical relatives (802 in 2010 to 1571 in 2013), whereas the number of unrelated cord blood transplants has slightly decreased (789 in 2010 to 666 in 2013). The use of the different donor types varies greatly across Europe.
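The reported 96% growth figure follows directly from the raw transplant counts given in the survey. A minimal sketch of the check (variable names are mine, not from the survey):

```python
# Transplant counts reported in the 2013 EBMT survey abstract.
haplo_2010, haplo_2013 = 802, 1571   # haploidentical relatives
cord_2010, cord_2013 = 789, 666     # unrelated cord blood

# Percentage change: (new - old) / old * 100
haplo_change = (haplo_2013 - haplo_2010) / haplo_2010 * 100
cord_change = (cord_2013 - cord_2010) / cord_2010 * 100

print(f"Haploidentical: {haplo_change:+.0f}%")  # roughly +96%
print(f"Cord blood:     {cord_change:+.0f}%")   # roughly -16%
```

The haploidentical figure matches the abstract's "marked increase of 96%"; the cord blood counts correspond to a decrease of about 16%.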
The naturally occurring polyphenol resveratrol (3,4′,5-(E)-trihydroxystilbene) is a potent chemopreventive compound that is active in many different cancer cell lines. It also has anti-inflammatory, anti-oxidative and pro-apoptotic effects. Since resveratrol has also shown favorable effects in animal models of type 2 diabetes and non-alcoholic fatty liver disease, its use for the prevention and treatment of metabolic diseases is under consideration. However, owing to rapid metabolization and low bioavailability, the effective concentrations lie in the micromolar range. Methylation of the free hydroxyl groups appears to be a suitable strategy for improving the anti-tumor activity and bioavailability of resveratrol. However, several studies indicate that this structural modification of the stilbene scaffold changes the antiproliferative mechanism of action of the methylated compounds. In the first part of this work we therefore carried out more detailed investigations to characterize the changes in biological activity caused by methylation of the free hydroxyl groups of (E)- and (Z)-resveratrol. One focus was the determination of the metabolic effects of the methylated compounds, in particular whether the analogs are still able to modulate known resveratrol targets such as AMPK, SIRT1 and phosphodiesterases. We first confirmed that the methylated resveratrol analogs ST911 (3,4′,5-(Z)-trimethoxystilbene) and ST912 (3,4′,5-(E)-trimethoxystilbene) exert a strong antiproliferative effect on various cancer cell lines. As described previously, we observed that ST911 and ST912 affect tumor cell growth more strongly than the hydroxylated compounds (E)- and (Z)-resveratrol.
This, together with negligible cytotoxicity and a markedly weaker antiproliferative effect on primary cells, suggests that ST911 should be investigated further as a potential new chemotherapeutic agent. In addition, ST911 and ST912 showed significant pro-apoptotic effects in CaCo-2 cells. Resveratrol was also able to induce apoptosis in these cells, but only at considerably higher concentrations than ST911 and ST912. A closer characterization of the antitumor activity of ST911 in HT-29 cells showed that ST911 interferes with the polymerization of tubulin into microtubules and induces cell cycle arrest in mitosis. In contrast, resveratrol causes cell cycle arrest in the S phase and does not affect tubulin polymerization. These observations reinforced the assumption that ST911 is an antimitotic agent and again highlighted the mechanistic differences between resveratrol and its methylated analogs. Interestingly, ST911 did not affect hepatic fat accumulation in an in vitro steatosis model, whereas treatment with resveratrol led to a significant reduction of intrahepatic triglycerides. This experiment suggests that the stronger antiproliferative effect of ST911 does not translate into increased activity in models of metabolic disease. The observed differences in the steatosis model raised the question of whether the methylated analogs are still able to modulate the same metabolic targets described in the literature for resveratrol. Recently, phosphodiesterases (PDEs) were identified as direct targets of resveratrol. Inhibition of PDEs by resveratrol leads to an increase in the intracellular cAMP concentration, which in turn activates the known resveratrol targets AMPK and SIRT1.
Our experiments showed that ST911 and ST912 have no influence on the intracellular cAMP concentration. In addition, we observed no AMPK- or SIRT1-dependent changes in gene expression, indicating that the compounds probably do not mediate their cellular effects through modulation of PDEs, AMPK or SIRT1. In summary, the first part of this work provides evidence that ST911 exerts no beneficial effects in models of metabolic disease, presumably because of a loss of activity toward the metabolic targets of resveratrol. Furthermore, our results support earlier work showing that ST911 binds to tubulin and prevents its polymerization into microtubules. Our data also confirm that methylation of resveratrol fundamentally changes the mechanism of action of these compounds, accompanied by a complete loss of metabolic activity. This should be taken into account in future lead-structure optimizations based on resveratrol. The first part of this work also showed that resveratrol strongly induces transcription of the nuclear receptor SHP (small heterodimer partner). The mechanism of this induction appears to depend on the activity of AMPK and SIRT1. These results extend our understanding of the diverse biological effects of resveratrol. Nevertheless, the relevance of SHP induction for the effects of resveratrol on metabolic diseases and tumor growth remains to be investigated. During the experiments for the first part of this work we noticed that the AMPK inhibitor compound C (CC) was able to significantly reduce the growth-inhibitory effect of ST911.
The investigation of this so-called "rescue effect" is motivated by the fact that a growing number of tumors are resistant to chemotherapeutic agents, and that specific antidotes for acute intoxication with antimitotic drugs are lacking. The following experiments therefore aimed to characterize the rescue effect in more detail and to elucidate the underlying mechanisms. Knockdown experiments first showed that the rescue effect is mediated independently of the AMPK-inhibiting activity of CC. Since CC is an ATP-competitive inhibitor of AMPK and has previously been shown to inhibit a large number of other kinases as well, we suspected that the rescue effect is related to these off-target effects of CC. We next tested whether the growth-inhibitory effects of other antimitotic agents can also be abrogated by CC. We selected several established compounds known to interact with microtubules: colchicine, the vinca alkaloid vinblastine, disorazole A, and paclitaxel, isolated from Taxus species. The first three of these compounds depolymerize microtubules, whereas paclitaxel promotes polymerization; moreover, these compounds bind to three different binding sites on tubulin. Interestingly, our experiments showed that CC can attenuate the antiproliferative effect of all tested antimitotic agents on HT-29 cells, regardless of the binding site. Furthermore, CC was unable to reduce the effect of the pro-apoptotic compound staurosporine. These results indicate that the tubulin-binding rather than the pro-apoptotic properties of ST911 are responsible for the rescue effect. To investigate whether the rescue effect can be explained by competitive binding of CC and the antimitotic agents to microtubules, we performed immunofluorescence staining of α-tubulin.
We observed that tubulin polymerization and the function of the spindle apparatus were markedly impaired in cells treated with antimitotic agents. We also found that CC was unable to prevent the destruction of the tubulin cytoskeleton by the antimitotic agents; treatment with CC alone had no effect on the polymerization of tubulin into microtubules. Taken together, these data suggest that CC cannot bind directly to microtubules and thus does not compete with the antimitotic agents for binding. To strengthen this hypothesis, we performed SPR experiments with chips carrying immobilized tubulin, in cooperation with Dr. Jennifer Herrmann (Helmholtz-Institut für Pharmazeutische Forschung, Saarbrücken). The measurements showed that CC was unable to displace bound disorazole A from its binding site on tubulin, clearly demonstrating that the rescue effect is not based on competition between CC and the antimitotic agents for tubulin binding sites. Cell cycle analyses showed that combined treatment with ST911 and CC attenuates the G2/M arrest caused by ST911. Since we had already excluded effects on the direct targets of CC and the antimitotic agents, AMPK and tubulin, we concluded that CC probably interacts with other cellular signaling pathways that lead to the observed changes in cell growth and cell cycle progression. A literature search revealed that elevated intracellular polyamine levels, activation of the PI3K/Akt pathway, or increased activity of the transcription factor c-Myc can attenuate a G2/M arrest. We therefore focused the subsequent experiments on a possible involvement of these targets in mediating the rescue effect. We showed that CC can increase the expression of spermidine/spermine N1-acetyltransferase (SSAT).
SSAT is an enzyme involved in the biosynthesis of polyamines. In addition, we observed that treatment with CC leads to an increase in phosphorylated and thus activated Akt (pAkt) after 4 h. Additional treatment with wortmannin, a compound that can inhibit the phosphorylation of Akt, attenuated the rescue effect. Overall, these results indicate that activation of Akt signaling and an influence on polyamine biosynthesis may be connected, at least in part, to the rescue effect. Overexpression of c-Myc, a transcription factor closely linked to Akt signaling and polyamine biosynthesis, is often associated with increased cell proliferation. We examined cellular c-Myc protein levels by western blot and found that additional c-Myc bands appeared on the blots after treatment with antimitotic agents, pointing to a post-translational modification of c-Myc. Combination with CC attenuated these additional bands, and the total amount of c-Myc protein decreased rapidly after longer incubation times. This suggests that the post-translational modification of c-Myc leads to degradation of the protein and that CC can attenuate this. Several studies have already shown that c-Myc is phosphorylated and, after conjugation with ubiquitin, degraded by the proteasome. We therefore tested whether inhibition of the proteasome with MG-132 leads to a rescue effect similar to that of CC. Indeed, treatment with ST911 in combination with MG-132 led to an increase in cell proliferation, as previously observed for CC, supporting the theory that proteasomal degradation of c-Myc may play a role in the rescue effect. Next, we examined the phosphorylation of c-Myc at Ser62 and Thr58.
These phosphorylations play an important role in the degradation of c-Myc by marking the protein for conjugation with ubiquitin. Densitometric analysis of the western blots showed that treatment with ST911 initially increases phospho-c-Myc, followed by a rapid decrease at later time points, and that this increase in phospho-c-Myc was reduced by combination with CC. This supports the hypothesis that ST911 promotes the proteasomal degradation of c-Myc and that CC can prevent this, offering a possible explanation for the increased proliferation observed in cells "rescued" by CC. However, the direct target responsible for mediating the rescue effect of CC has not yet been identified. DYRKs (dual-specificity tyrosine-phosphorylation-regulated kinases) are important regulators of protein stability and degradation during cell cycle progression. Recently, DYRK1A and DYRK2 were shown to phosphorylate c-Myc at Ser62, thereby marking it for proteasomal degradation. Interestingly, CC had already been described in an earlier publication as a potent inhibitor of several DYRKs, although DYRK inhibition by CC was tested at only a single concentration in that study. In an in vitro kinase assay performed in cooperation with Dr. Matthias Engel (Universität des Saarlandes, Saarbrücken), we therefore determined the IC50 values of CC against DYRK1A, DYRK1B and DYRK2. Our results clearly showed that CC is a preferential inhibitor of DYRK1A and DYRK1B (IC50 of about 1 µM) but can also inhibit DYRK2 (IC50 of about 5 µM). Since the presumed binding site of CC lies within the highly conserved kinase domain, nonspecific inhibition of several DYRKs is not surprising.
Gene expression analyses showed that HT-29 and HepG2 cells express comparable amounts of DYRK1A, whereas DYRK1B and DYRK2 are present at markedly lower levels in HepG2 cells. Previous experiments had shown that HepG2 cells were less sensitive to ST911 and to the CC-mediated rescue effect. We concluded that the differential expression of the DYRK isoforms could be a possible explanation for these differences, and therefore decided to investigate DYRK1B and DYRK2 more closely. Experiments with various DYRK inhibitors showed that, similar to CC, these compounds were able to attenuate the antiproliferative effect of ST911; these results were confirmed in subsequent knockdown experiments. This suggests that DYRKs are at least partly responsible for mediating the rescue effect. In summary, the rescue effect is probably connected to polyamine biosynthesis, Akt signaling and the proteasomal degradation of c-Myc. Furthermore, direct inhibition of DYRKs by CC appears to be a promising explanation for the effect. However, none of the experiments demonstrated a complete abrogation of the CC-mediated rescue effect; we therefore assume that several targets are involved in its mediation, most likely owing to nonspecific, ATP-competitive inhibition of various kinases by CC. Nonetheless, a closer investigation of DYRKs in the context of tumor therapy resistance, and a more detailed elucidation of the signaling pathways involved in the rescue effect, constitute an interesting field for further studies.
In addition to infectious viral particles, hepatitis B virus-replicating cells secrete high amounts of SVPs, which are assembled from HBsAg in the shape of spheres and filaments but lack a capsid and genome. Filaments are characterized by a much higher amount of the surface protein LHBs compared with spheres. Spheres are
released via the constitutive secretory pathway, while viral particles are released in an ESCRT-dependent manner via MVBs. The interaction of virions with the ESCRT machinery is mediated by α-taxilin, which connects the PreS1 domain of LHBs with the ESCRT component tsg101. Since both viral particles and filaments contain a significant amount of LHBs, it is unclear whether filaments are secreted like spheres or released like viral particles. To study the release pathways of HBV filaments in the absence of viral particles, a core-deficient
HBV mutant (1.2×HBVΔCore) was generated by site-directed mutagenesis based on wild-type 1.2× HBV. The start codon of the core protein was mutated into a stop codon, which was confirmed by DNA sequencing. Data from HBsAg ELISA, Western blot, immunofluorescence microscopy and immunoelectron microscopy showed that the lack of core protein affected neither the production nor the secretion of HBV SVPs. The intracellular distribution of
LHBs and SHBs showed no difference between cells expressing wtHBV and the core-deficient mutant. Therefore, this system is suitable for investigating the release pathway of HBV filaments in the absence of viral particles. Confocal microscopy of cells cotransfected with the core-deficient mutant and either peYFP-Rab7, as a marker for the endosomal/MVB pathway, or pGalT-eGFP, as a marker for the trans-Golgi apparatus, showed that YFP-Rab7, but not GalT-GFP, partially colocalized with LHBs. Furthermore, LHBs could be found in dilated MVBs by immunoelectron microscopy of ultrathin sections. This was confirmed by isolation of MVBs by cell fractionation using discontinuous sucrose gradient ultracentrifugation and Percoll-based linear gradient ultracentrifugation, indicating that filaments enter MVBs in the absence of virion formation. Moreover, inhibition of MVB biogenesis by the small-molecule inhibitor U18666A significantly reduced the release of filaments in a dose-dependent manner, while their production was unaffected. In contrast, no inhibition of the secretion or production of spheres could be
detected. Inhibition of ESCRT functionality by coexpression of transdominant-negative mutants (Vps4A, Vps4B, CHMP3) abolished the release of filaments, while the secretion of spheres was not affected. These data indicate that, in contrast to spheres, which are secreted via the secretory pathway, filaments are released via the ESCRT/MVB pathway like infectious viral particles.
Ataxia telangiectasia (A-T) is a rare, progressive, multisystem disease with a large number of complex and diverse manifestations that vary with age. Patients with A-T die prematurely, with the leading causes of death being respiratory diseases and cancer. Respiratory manifestations include immune dysfunction leading to recurrent upper and lower respiratory infections; aspiration resulting from dysfunctional swallowing due to neurodegenerative deficits; inefficient cough; and interstitial lung disease/pulmonary fibrosis. Malnutrition is a significant comorbidity. The increased radiosensitivity and increased risk of cancer should be borne in mind when requesting radiological investigations. Aggressive proactive monitoring and treatment of these various aspects of lung disease, drawing on the multidisciplinary expertise and experience of national multidisciplinary clinics internationally, forms the basis of this statement on the management of lung disease in A-T. Neurological management is outwith the scope of this document.
In his recently published book "Citizen Science", the philosopher of science Peter Finke examines the role of lay people in science. His aim is to demonstrate their importance both for the progress of knowledge and for practice-oriented civic engagement. From numerous angles, Finke varies the basic idea of a continuity between the activities of lay people and those of professional scientists, a continuity obscured by the institutionalized forms of science. In contrast, the present contribution emphasizes aspects of discontinuity that must be taken into account, especially if one is convinced of the importance of establishing and promoting "citizen science".
In the past few years a multidisciplinary team of scholars based at Goethe Universität Frankfurt has been involved in the development of three projects: the research project “Political language in the Middle Ages: Semantic Approaches”, and two online platforms, “Computational Historical Semantics” and “eHumanities Desktop”. These are closely related to each other, as they bring together historical research on Latin medieval texts and Digital Humanities. This article will offer an overview of the projects, focusing particularly on the digital tools which have been developed by the team.
At the conference "Literaturwissenschaften in Frankfurt, 1914–1945" ("Literary Studies in Frankfurt, 1914–1945"), organized by Bernd Zegowitz (German studies) and Frank Estelmann (Romance studies) at the Universität Frankfurt am Main on 20–21 June 2014, thirteen speakers offered insights over two days into both the history and exemplary works of literary scholars who taught and conducted research at the Universität Frankfurt between the university's foundation in 1914 and the end of National Socialism in 1945.