Denisovite is a rare mineral occurring as aggregates of fibres typically 200–500 nm in diameter. It was confirmed as a new mineral in 1984, but important facts about its chemical formula, lattice parameters, symmetry and structure have remained incompletely known since then. Results recently obtained from studies using microprobe analysis, X-ray powder diffraction (XRPD), electron crystallography, modelling and Rietveld refinement are reported here. The electron crystallography methods include transmission electron microscopy (TEM), selected-area electron diffraction (SAED), high-angle annular dark-field imaging (HAADF), high-resolution transmission electron microscopy (HRTEM), precession electron diffraction (PED) and electron diffraction tomography (EDT). A structural model of denisovite was developed from HAADF images and subsequently completed on the basis of quasi-kinematic EDT data by ab initio structure solution using direct methods and least-squares refinement. The model was confirmed by Rietveld refinement. The lattice parameters are a = 31.024 (1), b = 19.554 (1) and c = 7.1441 (5) Å, β = 95.99 (3)°, V = 4310.1 (5) Å³ and space group P12/a1. The structure consists of three topologically distinct dreier silicate chains, viz. two xonotlite-like dreier double chains, [Si6O17]10−, and a tubular loop-branched dreier triple chain, [Si12O30]12−. The silicate chains occur between three walls of edge-sharing (Ca,Na) octahedra. The chains of silicate tetrahedra and the octahedra walls extend parallel to the z axis and form a layer parallel to (100). Water molecules and K+ cations are located at the centre of the tubular silicate chain; the K+ cations also occupy positions close to the centres of eight-membered rings in the silicate chains. The silicate chains are geometrically constrained by neighbouring octahedra walls and present an ambiguity with respect to their z position along these walls, with displacements between neighbouring layers being either Δz = c/4 or −c/4.
Such behaviour is typical for polytypic sequences and leads to disorder along [100]. In fact, the diffraction pattern does not show any sharp reflections with l odd, but continuous diffuse streaks parallel to a* instead. Only reflections with l even are sharp. The diffuse scattering is caused by (100) nanolamellae separated by stacking faults and twin boundaries. The structure can be described according to the order–disorder (OD) theory as a stacking of layers parallel to (100).
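The reported cell volume can be checked directly against the other lattice parameters, since for a monoclinic cell V = a·b·c·sin β. A minimal sketch of that consistency check (the small residual versus the published V = 4310.1 Å³ comes from rounding of the quoted parameters):

```python
import math

# Monoclinic unit-cell volume V = a * b * c * sin(beta),
# using the denisovite lattice parameters quoted above (in angstroms).
a, b, c = 31.024, 19.554, 7.1441
beta = math.radians(95.99)

V = a * b * c * math.sin(beta)
print(f"V = {V:.1f} A^3")  # agrees with the reported 4310.1 A^3 to within ~0.2 A^3
```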
I summarize recent developments in the hard-thermal-loop approach to QCD. I first discuss a finite-temperature and finite-density calculation of QCD thermodynamics at NNLO in hard-thermal-loop perturbation theory. I then discuss a generalization of the hard-thermal-loop framework to the magnetic scale g²T, from which a novel non-Abelian massless mode is uncovered.
We study a random matrix model for QCD at finite density via complex Langevin dynamics. This model has a phase transition to a phase with nonzero baryon density. We study the convergence of the algorithm as a function of the quark mass and the chemical potential and focus on two main observables: the baryon density and the chiral condensate. For simulations close to the chiral limit, the algorithm has wrong convergence properties when the quark mass is in the spectral domain of the Dirac operator. A possible solution of this problem is discussed.
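The convergence issue described here can be illustrated on a much simpler testbed than the random matrix model of the abstract. The sketch below (a standard one-variable illustration, not the model studied in the text) runs complex Langevin dynamics for the Gaussian complex action S(x) = σx²/2 with σ = 1 + i, where the exact moment ⟨x²⟩ = 1/σ = 0.5 − 0.5i is known and the dynamics converges correctly:

```python
import numpy as np

# Toy complex Langevin: one variable with complex action S(x) = sigma*x^2/2,
# sigma = 1 + 1j. The variable is complexified, z = x + i*y, and evolved with
# the drift -dS/dz = -sigma*z plus real Gaussian noise. For this Gaussian
# action the stationary average satisfies <z^2> = 1/sigma = 0.5 - 0.5j.
rng = np.random.default_rng(0)
sigma = 1 + 1j
dt, n_steps, burn_in = 0.01, 400_000, 5_000

z = 0.0 + 0.0j
acc, n_acc = 0.0 + 0.0j, 0
for step in range(n_steps):
    z += -sigma * z * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if step >= burn_in:
        acc += z * z
        n_acc += 1

mean_z2 = acc / n_acc
print(mean_z2)  # close to the exact value 0.5 - 0.5j
```

Here the linearized drift is stable, so the long-time average reproduces the exact moment; in the random matrix model near the chiral limit, by contrast, the abstract reports that this convergence fails when the quark mass lies in the spectral domain of the Dirac operator.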
The high collision energies reached at the LHC lead to significant production yields of light (anti-)nuclei and (hyper-)nuclei in proton–proton, proton–lead and, in particular, lead–lead collisions. The excellent particle identification capabilities of the ALICE apparatus, based on the specific energy loss in the Time Projection Chamber and the velocity information in the Time-Of-Flight detector, allow for the detection of these rarely produced particles. Furthermore, the Inner Tracking System makes it possible to separate primary nuclei from those originating in the weak decay of heavier systems. One example of such a weak decay is the measurement of the (anti-)hypertriton decay to ³He + π⁻ (and the charge conjugate for the anti-hypertriton). The aforementioned capabilities of the ALICE apparatus offer the unique opportunity to search for exotica, such as the bound state of a Λ and a neutron, which would decay into a deuteron and a pion, or the bound state of two Λs. Results on the production of stable nuclei in Pb–Pb collisions at √sNN = 2.76 TeV are presented and compared with thermal model predictions. We further present the current status of these searches in the form of upper limits on the production yields, and compare the results to thermal and coalescence model expectations.
In heavy-ion collisions, the pA system is typically regarded as a "cold" nuclear matter environment and is thought to isolate and identify initial-state effects due to the presence of multiple nucleons in the incoming nucleus. Moreover, pA collisions bridge the gap between peripheral AA collisions and the pp baseline, helping to create a more complete understanding of the underlying production mechanisms and how they evolve with multiplicity. Recent measurements at both RHIC and the LHC indicate, however, that the "cold" nuclear matter picture may be somewhat naïve.
Recent LHC results from the 2013 p–Pb run at √sNN = 5.02 TeV will be discussed.
O Sentido do Direito
(2014)
Time resolved measurements of the biased disk effect at an Electron Cyclotron Resonance Ion Source
(1999)
First results are reported from time resolved measurements of ion currents extracted from the Frankfurt 14 GHz Electron Cyclotron Resonance Ion Source with pulsed biased-disk voltage. It was found that the ion currents react promptly to changes of the bias. From the experimental results it is concluded that the biased disk effect is mainly due to improvements of the extraction conditions for the source and/or an enhanced transport of ions into the extraction area. By pulsing the disk voltage, short current pulses of highly charged ions can be generated with amplitudes significantly higher than the currents obtained in continuous mode.
A small electrostatic storage ring is the central machine of the Frankfurt Ion Storage Experiments (FIRE), which will be built at the new Stern-Gerlach Center of Frankfurt University. As a true multi-user, multi-purpose facility with ion energies up to 50 keV, it will allow new methods to analyze complex many-particle systems from atoms to very large biomolecules. With envisaged storage times of some seconds and beam emittances on the order of a few mm mrad, measurements with resolutions up to six orders of magnitude better than in single-pass experiments become possible. In comparison to earlier designs, the ring lattice was modified in many details: problems in those designs were related to, e.g., the detection of light particles and of highly charged ions with different charge states. The deflectors were therefore redesigned completely, allowing a more flexible positioning of the diagnostics. Here, after an introduction to the concept of electrostatic machines, an overview of the planned FIRE is given and the ring lattice and elements are described in detail.
We report results of the dynamical modelling of cluster formation with the new combined PHSD+FRIGA model at Nuclotron and NICA energies. The FRIGA clusterization algorithm, which can be applied to transport models, is based on the simulated annealing technique to obtain the most bound configuration of fragments and nucleons. The PHSD+FRIGA model is able to predict isotope yields as well as hypernucleus production. Based on the present predictions of the combined model, we study the possibility of detecting such clusters and hypernuclei in the BM@N and MPD/NICA detectors.
The properties of matter at finite baryon densities play an important role for the astrophysics of compact stars as well as for heavy ion collisions or the description of nuclear matter. Because of the sign problem of the quark determinant, lattice QCD cannot be simulated by standard Monte Carlo at finite baryon densities. I review alternative attempts to treat dense QCD with an effective lattice theory derived by analytic strong coupling and hopping expansions, which close to the continuum is valid for heavy quarks only, but shows all qualitative features of nuclear physics emerging from QCD. In particular, the nuclear liquid gas transition and an equation of state for baryons can be calculated directly from QCD. A second effective theory based on strong coupling methods permits studies of the phase diagram in the chiral limit on coarse lattices.
The topic of global trade has become central to debates on global justice and on duties to the global poor, two important concerns of contemporary political theory. However, the leading approaches fail to directly address the participants in trade and provide them with normative guidance for making choices in non-ideal circumstances. This paper contributes an account of individuals’ responsibilities for global problems in general, an account of individuals’ responsibilities as market actors, and an explanation of how these responsibilities coexist. The argument is developed through an extended case study of a consumer’s choice between conventional and fair trade coffee. My argument is that the coffee consumer’s choice requires consideration of two distinct responsibilities. First, she has responsibilities to help meet foreigners’ claims for assistance. Second, she has moral responsibilities to ensure that trades, such as between herself and a coffee farmer, are fair rather than exploitative.
Aims: The purpose of this study was to analyze the prevalence of depression, anxiety, adjustment disorders, and somatoform disorders in patients diagnosed with age-related macular degeneration (AMD) in Germany.
Methods: This study included 7,580 patients between the ages of 40 and 90 diagnosed with AMD between January 2011 and December 2014 in 1,072 primary care practices (index date). The last follow-up was in July 2016. We also included 7,580 controls without AMD, who were matched (1:1) to the AMD cases by age, sex, type of health insurance (private or statutory), physician, and Charlson comorbidity score as a generic marker of comorbidity. The outcome of the study was the prevalence of depression, anxiety, adjustment disorders, and somatoform disorders recorded in the database between the index date and the end of follow-up.
Results: The mean age among subjects was 75.7 years (SD=10.1 years), 34.0% were men, and 7.8% had private health insurance coverage. The Charlson comorbidity index was 2.0 (SD=1.8). Depression was the most frequent disease (33.7% in AMD patients versus 27.3% in controls), followed by somatoform disorders (19.6% and 16.7%), adjustment disorders (14.8% and 10.5%), and anxiety disorders (11.7% and 8.2%). Depression (OR=1.37, 95% CI: 1.27–1.47), anxiety (OR=1.50, 95% CI: 1.35–1.67), adjustment disorders (OR=1.50, 95% CI: 1.36–1.65), and somatoform disorders (OR=1.22, 95% CI: 1.12–1.32) were all positively associated with AMD.
Conclusion: Overall, a significant association was found between AMD and depression, anxiety, adjustment disorders, and somatoform disorders.
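The reported odds ratios can be roughly sanity-checked from the prevalences given in the Results section. The crude (unadjusted) ORs computed below only approximate the published values, which come from the matched analysis:

```python
# Crude (unadjusted) odds ratio from two prevalences, as a plausibility
# check against the matched ORs reported in the Results.
def crude_or(p_case: float, p_control: float) -> float:
    """Odds ratio computed from two proportions."""
    return (p_case / (1 - p_case)) / (p_control / (1 - p_control))

for name, p1, p0 in [
    ("depression",           0.337, 0.273),  # reported OR 1.37
    ("somatoform disorders", 0.196, 0.167),  # reported OR 1.22
    ("adjustment disorders", 0.148, 0.105),  # reported OR 1.50
    ("anxiety disorders",    0.117, 0.082),  # reported OR 1.50
]:
    print(f"{name}: crude OR = {crude_or(p1, p0):.2f}")
```

For depression, for instance, the crude OR of about 1.35 is close to the reported matched OR of 1.37.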
Green tea (GT) and green tea extracts (GTE) have been postulated to decrease cancer incidence. In vitro results indicate a possible effect; however, epidemiological data do not support cancer chemoprevention. We performed a PubMed literature search for green tea consumption and its correlation with the common tumor types lung, colorectal, breast, prostate, esophageal and gastric cancer, with cohorts from both Western and Asian countries. We additionally included selected mechanistic studies for a possible mode of action. Comparability between studies was limited due to major differences in study outlines; a meta-analysis was thus not possible, and studies were evaluated individually. Only for breast cancer could a possible small protective effect be seen in Asian and Western cohorts, whereas for esophageal and stomach cancer, green tea increased the cancer incidence, possibly due to heat stress. No effect was found for colonic/colorectal and prostatic cancer in any country; for lung cancer, Chinese studies found a protective effect, but studies from outside China did not. Epidemiological studies thus do not support a cancer-protective effect. GT as an indicator of as yet undefined parameters in lifestyle, environment and/or ethnicity may explain some of the observed differences between China and other countries.
Roman coins in medieval female graves from the territory of Serbia: possibilities of interpretation
(2016)
This paper examines the phenomenon of the secondary use of Roman coins (2nd–4th century) in medieval necropolises (10th–15th century) from the territory of Serbia. The research focuses on graves in which Roman coins were used as decorative elements of the deceased's clothing, most often reworked into pendants. This type of secondary use of Roman coinage has been attested only in female graves. The aim of the paper is to propose an interpretation of this phenomenon through an analysis of the value and significance of secondarily used coins in the creation of family valuables, which are defined at important and critical moments in the social life of a community. Particular attention is paid to the possibility of interpreting these finds as the graves of women buried with parts of their dowry. The construction of the meaning and value of these objects is analysed through their exchange in customs connected with marriage and, finally, in funerary practices. Since the Roman coins from these graves are few in number and are always bronze denominations, we may assume that the definition of their value and significance rested on a symbolic and representational level. The starting point of this paper is the body of work investigating the phenomenon of the reuse of things in the past; building on this, the paper explores in greater depth the relationship between medieval social structure and the valuation of coins in the rural communities of the central Balkans.
Depth hermeneutics – as developed by LORENZER within the framework of the Frankfurt School's program of critical social research – represents a methodological and systematic approach to psychoanalytic research. The new ways and means by which a neo-Nazi utilises his visit to the Auschwitz Memorial to arouse further anti-Semitism are investigated by means of a scene-by-scene interpretation of his filmed appearances – first as a good-humoured tourist, then as an angry right-wing extremist, as a competent expert, and as a defiant adolescent. The aim is to demonstrate how the meaning of these role plays develops within the tension between a manifest and a latent significance. The results of this process of interpretation form the basis for clarifying the question of what patterns of socialisation this "yuppie-neo-Nazi" uses to fascinate adolescents in particular. In conclusion, it is analysed how, through his post-modern film production, the director turns Auschwitz into a testing ground where the neo-Nazi can perform "a merry dance on the volcano".
In this contribution I present some particularities and problems of the concept of an "ethnographic police research". The empirical reference is an ethnographic study of several Hessian police stations conducted in 1995. "Participant observation of the monopoly on violence" is not new, but it remains compelling in several respects, because it involves a view of an exclusive relation of violence that, although enacted by individual actors, structurally points to the context-dependence of the action: violence is not simply violence; state violence is different from "resistance" against it. This tension, according to the initial thesis, is also found in the texts and actions that police officers habitually shape every day. The ethnographic study of the police draws mainly on officers' narratives and on observation of their everyday work. It is described how, in contrast to the official images of the police (i.e. police culture), the so-called patterns of action of officers "on the street" (which I call cop culture) are essentially oriented towards an informally transmitted everyday pragmatism, frequently coupled with an expressive accomplishment of masculinity.
The book contains, on the one hand, a series of case studies on various pedagogical questions. What they have in common is the use of qualitative methods that enable students to develop a "research attitude". Other contributions describe the conditions for integrating scientific methods into teacher training. The book does not work through theory; rather, it shows how the analysis of empirically collected data can meaningfully contribute to students' self-reflection in the first phase of teacher education. The practical philosophy of the Freiburg conception of an interpretation workshop can encourage teachers to work with case studies and offer students an insight into the possibilities of case studies.
Wolfgang KÖHLER's monograph on understanding a person as an individual is an attempt to differentiate, through philosophical reflection, the concepts of understanding others and of self-understanding, which are used in an undifferentiated way in everyday language, as in statements like "Maria understands her husband" or "I no longer understand myself". This reflection shows that the concept of understanding is used in three different ways: with the claim of a knowing, an ability, and a feeling. It turns out that understanding one's own person or another person in the sense of knowing is an exception, especially since this knowledge can only be presented in a more or less poetic description. The normal case of understanding a person as an unmistakable individual, by contrast, consists in an ability or a feeling – and thus does not even deserve to be called "understanding" in the sense of knowing who someone is and who one is oneself. The book makes it possible to examine critically what high demands must be placed on attempts to understand other persons, and also oneself. KÖHLER stays close to his everyday subject matter and, in the best sense, enlightens us about the understanding of persons.
Quest and query: interpreting a biographical interview with a Turkish woman laborer in Germany
(2003)
Hülya, a young woman who came to Germany from Turkey at the age of 17 in pursuit of a better life, looks back at the age of 31. In her biographical query she relates her experiences to a social commentary on the hard and inhuman conditions of contract labor. At the same time, she is critical of the common-sense notion that suffering and social problems are the main consequences of labor migration. In our analytical query of "doing biographical analysis" we discuss how we interpreted Hülya's narrative and commentary in socio-historical context and also in relation to the discourse on migration from Turkey. We looked for terms to analyze agency and suffering within biographical accounts without giving priority to either of them. Referring to the analysis of another case and to the concept of "twofold perspectivity", we describe how both suffering and the pursuit of one's potential are negotiated in biographical quests and queries.
The volume gathers heterogeneous contributions on "pedagogical research in the context of ethnography and biography", whose common point of reference lies in the research workshops at the University of Kassel. The review presents the 14 articles against the background of the volume's aim: to document the variety of ethnographic approaches to pedagogical fields through a method-reflexive presentation of research results, and thereby to contribute to the discussion of methods in educational science. Most of the individual contributions present their research results in a correspondingly method-reflexive manner, though they orient themselves towards ethnography, and relate to pedagogical fields, in very different and sometimes rather vague ways. Unfortunately, the editors do not systematise the yield of this compilation of partly disparate research approaches, so that – despite interesting individual contributions – the volume's potential for methodological "reassurance work" in educational science remains barely visible. Rather, reading the volume leaves the overall impression of a certain arbitrariness in the use of the term "ethnography".
At the margins of discourses. Beyond the distinction between discursive and non-discursive practices
(2007)
If a prize question were to be posed by and for discourse analysts, one of the first questions to be answered would surely be what a "non-discursive practice" actually is. The question marks, as it were, the boundary of discourse, for the very naming suggests that "non-discursive practices" are no longer discourse. In what follows, we discuss this problem of the non-discursive and the various possibilities of thinking this margin, this boundary – of walking its ridge or undercutting it – first on the basis of the theoretical-methodological debate and then on the basis of some concrete interpretations of texts and observations from various empirical research projects. In doing so, we orient ourselves towards the conceivable boundaries of discourse – power, everyday practice, the body, the subject – and develop the thesis that the distinction between discursive and non-discursive is precisely not suited to bringing clarity to the debate.
Over the last decade, discourse research following Michel FOUCAULT has developed steadily and across disciplines in the German-speaking world. It is in the process of establishing itself within qualitative social research, as well as in orientation towards linguistic procedures.
At the international and interdisciplinary conference "Sprache – Macht – Wissen" ("Language – Power – Knowledge"), held from 10 to 12 October 2007 in Augsburg, the current state of discourse theory and discourse analysis was surveyed and discussed. This conference essay is intended to give an insight into the current discussion. We first trace the questions and objectives of the conference; a brief summary of the presentations follows. In the course of the conference, various focal points crystallised that were repeatedly taken up and discussed: the relationship between discourse analysis and critique, between subject(ivity) and discourse, between power, discourse and dispositif, and between discourse analysis and visuality. By systematising these four points, we undertake a critical review of the "results" of the conference. Finally, we point to two current network initiatives for interdisciplinary discourse research that were presented during the conference.
The book "Qualitative Evaluation" demonstrates the procedure step by step and thus provides, in a very application-oriented form, the tools for a qualitative evaluation. Readers are thereby enabled to carry out such an evaluation even with little prior knowledge, though this carries the risk that, in interpreting the findings, important preconditions for the success of a qualitative study – such as grounding the interpretation in the actually recorded text – may be neglected.
Based on experiences from a research project with migrants of Iranian origin, this contribution explores the question of how the comprehensively regulated and thus largely heteronomous life situation of refugees in the German asylum system affects biographical self-thematisation in research contexts.
Regardless of the particular research topic, the context of the interview situation, and the relationship between researchers and researched that emerges within it, fundamentally shape the form of the biographical narrative. As a consequence of the power procedures in the "total refugee space", which involve far-reaching institutional interventions in the biographies of asylum seekers, a more or less pronounced intensification of the already existing hierarchical relationship could be observed in the interviews studied. In view of these empirical observations, the contribution argues for a reflexive biographical migration research that systematically analyses how power relations in transnational space affect the research process. Researchers and researched are to be considered not merely in their cultural differences, but also in their different intersectional positionings, which are determined by further moments of power such as socioeconomic status, nationality, gender, sexuality, and so on.
Review of "Klassiker neu übersetzen. Zum Phänomen der Neuübersetzungen deutscher und italienischer Klassiker / Ritradurre i classici. Sul fenomeno delle ritraduzioni di classici italiani e tedeschi". Ed. Barbara Kleiner, Michele Vangi and Ada Vigliani. Stuttgart: Franz Steiner, 2014 (Villa Vigoni im Gespräch; vol. 8). 147 pp.
Each year the "Arbeitsgemeinschaft Objektive Hermeneutik" (Working Group on Objective Hermeneutics) organises a conference at which projects and/or results of researchers working with the method of objective hermeneutics are presented and discussed. This year's conference set a thematic focus on "education and teaching", discussed in four blocks: "professional action in the context of educational institutions", "effects of teaching and their analysis", and "on the order of the classroom"; a fourth block took up questions of method, e.g. the extent to which foreign-language classroom transcripts can be analysed. One of the central discussions of the conference concerned the relationship between educational science and the social sciences. A contentious question was whether the profitable application of the method of objective hermeneutics in classroom research is tied to a theoretical perspective committed to the field of research and its "intrinsic structure".
In his book "Pädagogischer Pessimismus" ("Pedagogical Pessimism"), the sociologist Johannes TWARDELLA analyses the complete course of a German lesson in the 10th grade of a Hauptschule. A "wonderful good morning" – so the transcript begins – turns into a small catastrophe. How could that happen? TWARDELLA's detailed analysis shows impressively that the teacher's relationship to the pupils, and to her profession, is disturbed and contradictory. On the one hand, the lesson is shaped by a negative anthropology of the pupil: pedagogical pessimism. On the other hand, from the teacher's perspective there is the optimistic belief in the didactic solution of action-oriented teaching. Ultimately it becomes apparent that, through a self-fulfilling prophecy, this abysmal combination hardens into an ideology resistant to change, until in the end all that remains is keeping the operation going – however pointless it may have become. The book is placed in the context of the current educational-policy and educational-science discussion.
The article focuses on the way events are connected with preceding events of the same type, based on participant observation of a golden wedding celebrated in a small village in central Germany. Events are formally connected by their participants. In contrast to participant networks, the chronological order of event–event networks is evident. Different models for the connection of events are discussed with reference to a classic dataset from the "Deep South" study of DAVIS, GARDNER and GARDNER (1941). A stability of forms (in the sense of SIMMEL's "formal sociology" [1908]) was found, with variation of some elements. The main reason for this stability is the uncertainty that arises when people temporarily change their position from guest to host: they fall back on approved forms for their celebration. Professionals are the other important position; they ensure that events take place as they did in the past. It is proposed that analysing the chronological order of networks between events can lead to a renaissance in the cultural analysis of forms. The analysis presented is an approach to investigating the development of culture.
The field of interdisciplinary discourse research has gained increasing importance in recent years and has developed into an established research perspective at the intersection of language and society, of knowledge and power. The theoretical and methodological diversity of this perspective, however, repeatedly gives rise to uncertainties and difficulties, particularly in the conception and conduct of research projects of this kind. Three works that respond – in different ways – to the resulting need for systematisation and orientation are presented below. It should be clearly emphasised that these works are not to be misunderstood as methods handbooks or instructions for the "correct" conduct of discourse-oriented research; rather, they are to be read as stimulus and exchange on questions, problems and directions of discourse research, across national and disciplinary boundaries.
The question of whether, and in what respects, ADORNO can be understood as a precursor of a paradigm of qualitative social research is discussed on the basis of two letters ADORNO wrote to Paul LAZARSFELD in 1938, when he began to collaborate in the latter's "Radio Research Project" at Princeton University. Here ADORNO had to come to terms for the first time with empirical social research of the American type; lacking practical experience in this field, he initially had to rely entirely on his own resources as a philosopher and artist. The correspondence with LAZARSFELD articulated for the first time reflections that found their canonical form in ADORNO's post-war writings on social research. In criticizing quantifying procedures, he developed, as it were spontaneously, a model of qualitative research that was, however, subject to certain restrictions never overcome later, which were rooted above all in reservations about methodically regulated procedures as such.
In his book "Subjekt", the cultural scientist Andreas RECKWITZ engages with a number of structuralist and poststructuralist authors by interpreting their works as contributions to an analysis of the subject in modernity. The present article not only outlines these interpretations but also attempts to "test" their power to open up empirical material. Drawing on an excerpt from everyday school life, a German lesson, it shows how these different "subject-theoretical analytical strategies" can lead to different, interesting and revealing interpretations. Beyond this, it also becomes clear that the notion of subjectivity, and with it of education and maturity, cannot be dispensed with. Without this notion, pedagogical practice would be cynical, and understanding it would be impossible.
In this review, I argue that this textbook edited by BENNETT and CHECKEL is exceptionally valuable in at least four respects. First, with regard to form, the editors provide a paragon of how an edited volume should look: well-connected articles "speak to" and build on each other. The contributors refer to and grapple with the theoretical framework of the editors who, in turn, give heed to the conclusions of the contributors. Second, the book is packed with examples from research practice. These are not only named but thoroughly discussed and evaluated for their methodological potential in all chapters. Third, the book aims at improving and popularizing process tracing, but does not shy away from systematically considering the potential weaknesses of the approach. Fourth, the book combines and bridges various approaches to (mostly) qualitative methods and still manages to provide abstract and easily accessible standards for conducting "good" process tracing. As such, it is a must-read for scholars working with qualitative methods. However, BENNETT and CHECKEL struggle to fulfil their promise of bridging positivist and interpretive approaches, for while they do indeed take the latter into account, their general research framework remains largely unchanged by these considerations. On these grounds, I argue that, especially for scholars in the positivist camp, the book can function as a "how-to" guide for designing and implementing research. Although this may not apply equally to interpretive researchers, the book is still a treasure chest for them, providing countless conceptual clarifications and highlighting potential pitfalls of process tracing practice.
The book under review attempts an integration of childhood and biographical research. It offers a comprehensive overview of the two research fields, assembling almost all authors working in these areas. Some of these contributions are considered with respect to the question of what biographical research can contribute to the new childhood studies. "New childhood studies" is understood here as the kind of childhood research that asks about the "perspective of children". With regard to the book, the result is that a multitude of new connections between the two research fields is opened up. One of the essential connections is seen in the fact that biographical research can provide pointers toward a different understanding of qualitative research in the context of childhood studies.
The following paper is about artists doing experimental and performative art who expect the spectators to become participants in the process of artwork production. The artwork is thus produced through a process of participation. As a researcher, I was similarly expected to participate in the artwork process. As I observed, the artists worked at having their agency in the artwork process recognized by the participating spectators. At the same time, the artists create a certain proximity to the spectators-participants through performing art, which I call "performing proximity." By involving the participants in their art-in-process, they make use of their agency to redefine the artworld and enlarge it into other social worlds. I also discuss how artists' ability to enact redefined social worlds can be compared to agency in performative social science and in biographical research.
In his book "Interview und dokumentarische Methode. Anleitungen für die Forschungspraxis", the educational scientist Arnd-Michael NOHL explains how the documentary method can be made fruitful for the interpretation of interviews. His central idea is that the research process should proceed in stages: from the stage of "formulating interpretation" through that of "reflecting interpretation" to the stage of "type formation". With regard to the question of how a research process can be organized, this appears to be a sensible procedure. The central problem of interpreting "utterances" or "sequences", however, remains largely unaddressed by NOHL.
The aim of this thesis is to find a geometric configuration that allows electron insertion into a Gabor plasma lens, in order to increase the density of the confined electrons and provide ignition conditions at parameters where ignition is otherwise not possible. First, simulations using CST and bender were conducted to investigate several geometric configurations in terms of their performance in inserting electrons manually. One particular design was chosen as the basis for an experiment. To prepare the experiment, further simulations with the code bender investigated the density distribution that forms inside the Gabor lens when electrons are inserted transversally in accordance with the chosen design. Additionally, bender was used to investigate the impact of the initial electron energy on the distribution inside the lens. Simulations with and without space charge effects showed a significant impact of space charge on the resulting density distribution; space charge effects therefore proved to be the major electron redistribution process. A given electron source was characterised to determine its performance under the conditions inside a Gabor lens. In particular, a transversal magnetic field that will be present in the experiment has to be compensated by shielding the inner regions of the source with a μ-metal layer. With a μ-metal shield, transversal magnetic fields are sufficiently tolerable to perform measurements in a Gabor lens. Additionally, operating close to 100 eV electron energy yields a maximum in the emitted current, and adding a Wehnelt cylinder to the electron source further improves the extracted current to roughly 1 mA. A test stand consisting of a newly designed anode for the Gabor lens, as well as a terminal for the electron source, was constructed. The electron source was thoroughly characterised in the environment of the Gabor lens and the ignition properties of the new system were evaluated.
In further experiments, electron beam assisted ignition by increasing the residual gas pressure was observed, and the impact of the position of the electron source on the ignition properties was investigated. In addition, a sub-critical state, i.e. a combination of potential, magnetic field and pressure that did not yet ignite by itself, was ignited by increasing the current extracted from the electron source. Finally, the electron source was used to influence a pre-ignited plasma; the measured density increased with use of the electron source in most cases. This project is part of the EDEN collaboration (Electron DENsity boosting) of the NNP Group at IAP Frankfurt with INFN institutes in Bologna and Catania.
Osteoid osteoma is a benign bone tumor of undetermined etiology, composed of a central zone, termed the nidus, which is atypical bone completely enclosed within a well-vascularized stroma, and a peripheral sclerotic reaction zone. There are three types of radiographic features: cortical, medullary and subperiosteal. Forty-four patients with osteoid osteoma were studied retrospectively. On plain films, 35 patients presented as the cortical type, six lesions were located in the medullary zone and three were subperiosteal. In all cases, the nidus was visualized on computed tomography (CT). The nidus was visible in four of the five patients who had also undergone magnetic resonance imaging (MRI). The double-density sign on radionuclide bone scans was positive in all patients. MRI is more sensitive for diagnosing bone marrow and soft tissue abnormalities adjacent to the lesion, and for a nidus located closer to the medullary zone; CT, on the other hand, is more specific for detecting the nidus itself.
Developments in medical education in recent years increasingly confront teachers with new didactic challenges. Numerous institutions in German-speaking countries already offer qualification programmes for teachers, but a framework for medical education competencies that defines a qualification profile for teachers has so far been lacking.
Against the background of the discussion about competency orientation in medical studies and on the basis of current international literature, the GMA Committee for Faculty and Organizational Development in Teaching developed a core competency model for medical teachers. The model is intended not only to provide teachers with orientation regarding their qualification profile, but also to facilitate the content-related alignment of continuing education in higher education didactics and the evaluation of faculty development processes, and, not least, to define uniform criteria for assessing teaching qualifications in German-speaking countries.
The model consists of six competency fields, for each of which component competencies were defined and learning objectives described. Application examples illustrate the respective competencies.
The model is designed for practical application and, in a next step, is to be complemented by specific competencies for teachers with special responsibilities.
Recent developments in medical education have created increasing challenges for medical teachers, which is why the majority of German medical schools already offer educational and instructional skills training for their teaching staff. However, to date no framework of educational core competencies for medical teachers exists that might serve as guidance for the qualification of the teaching faculty. Against the background of the discussion about competency-based medical education and based upon the international literature, the GMA Committee for Faculty and Organizational Development in Teaching developed a model of core teaching competencies for medical teachers. This framework is designed not only to provide guidance with regard to individual qualification profiles but also to support further advancement of the content, training formats and evaluation of faculty development initiatives, and thus to establish uniform quality criteria for such initiatives in German-speaking medical schools. The model comprises a framework of six competency fields, subdivided into competency components and learning objectives. Additional examples of their use in medical teaching scenarios illustrate and clarify each specific teaching competency. The model has been designed for routine application in medical schools and is to be complemented in a future step by additional competencies for teachers with special duties and responsibilities.
Objective: The glucose stimulation of insulin secretion (GSIS) by pancreatic β-cells critically depends on increased production of metabolic coupling factors, including NADPH. Nicotinamide nucleotide transhydrogenase (NNT) typically produces NADPH at the expense of NADH and ΔpH in energized mitochondria. Its spontaneous inactivation in C57BL/6J mice was previously shown to alter ATP production, Ca2+ influx, and GSIS, thereby leading to glucose intolerance. Here, we tested the role of NNT in the glucose regulation of mitochondrial NADPH and glutathione redox state and reinvestigated its role in GSIS coupling events in mouse pancreatic islets.
Methods: Islets were isolated from female C57BL/6J mice (J-islets), which lack functional NNT, and genetically close C57BL/6N mice (N-islets). Wild-type mouse NNT was expressed in J-islets by adenoviral infection. Mitochondrial and cytosolic glutathione oxidation was measured with glutaredoxin 1-fused roGFP2 probes targeted or not to the mitochondrial matrix. NADPH and NADH redox state was measured biochemically. Insulin secretion and upstream coupling events were measured under dynamic or static conditions by standard procedures.
Results: NNT is largely responsible for the acute glucose-induced rise in the islet NADPH/NADP+ ratio and the decrease in mitochondrial glutathione oxidation, with a small impact on cytosolic glutathione. However, contrary to current views on NNT in β-cells, these effects resulted from a glucose-dependent reduction in NADPH consumption by NNT's reverse mode of operation, rather than from a stimulation of its forward mode of operation. Accordingly, the lack of NNT in J-islets decreased their sensitivity to exogenous H2O2 at non-stimulating glucose. Surprisingly, the lack of NNT did not alter the glucose stimulation of Ca2+ influx and upstream mitochondrial events, but it markedly reduced both phases of GSIS by altering Ca2+-induced exocytosis and its metabolic amplification.
Conclusion: These results drastically modify current views on NNT operation and mitochondrial function in pancreatic β-cells.
Photodynamic treatment of oral squamous cell carcinoma cells with low curcumin concentrations
(2017)
Objective: Curcumin is known for its anti-oxidative, anti-inflammatory and anti-tumorigenic properties at concentrations ranging from 3.7 µg/ml to 55 µg/ml, making it a promising candidate for tumour therapy. However, because of the high oral doses that have to be administered and the low bioavailability of curcumin, new therapy concepts have to be developed. One such concept is the combination of low curcumin concentrations with UVA or visible light. The aim of our study was to investigate the influence of this treatment regime on oral squamous cell carcinoma cells.
Materials and Methods: A human oral squamous cell carcinoma cell line (HN) was pre-incubated with low curcumin concentrations (0.01 µg/ml to 1 µg/ml). Thereafter, cell cultures were either left un-irradiated or were irradiated with 1 J/cm2 UVA or for 5 min with visible light. Quantitative analyses of proliferation, membrane integrity, oxidative potential and DNA fragmentation were performed.
Results: Low curcumin concentrations alone influenced neither proliferation, nor cell morphology, nor cell integrity, nor apoptosis. When these curcumin concentrations were combined with UVA or visible light irradiation, cell proliferation and the development of reactive oxygen species were reduced, whereas DNA fragmentation was increased. Effects specific to both the concentration and the type of light were observed.
Conclusions: The present findings substantiate the potential of the combination of low curcumin concentrations and light as a new therapeutic concept to increase the efficacy of curcumin in the treatment of cancer of the oral mucosa.
The production of 77,79,85,85mKr and 77Br via the Se(α,x) reaction was investigated between Eα = 11 and 15 MeV using the activation technique. The irradiation of natural selenium targets on aluminium backings was conducted at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany. The spectroscopic analysis of the reaction products was performed using a high-purity germanium detector located at PTB and a low-energy photon spectrometer at Goethe University Frankfurt, Germany. Thick-target yields were determined, and the corresponding energy-dependent production cross sections of 77,79,85,85mKr and 77Br were calculated from them. Good agreement between the experimental data and theoretical predictions using the TALYS-1.6 code was found.
The "licences for everyone" from Creative Commons (CC) give people the opportunity to release their creative works for use under certain conditions. Because authors have different motives and interests, there are six licence variants. The most popular is the restriction that works may only be used non-commercially. This, however, has far-reaching consequences for the dissemination of the content, and at the same time many Creative Commons users do not actually achieve their intended goals with it. This brochure provides information about the consequences, risks and side effects of restricting a CC licence to non-commercial use.
Editorial
(2017)
Bevacizumab for patients with recurrent gliomas presenting with a gliomatosis cerebri growth pattern
(2017)
Bevacizumab has been shown to improve progression-free survival and neurologic function, but failed to improve overall survival in newly diagnosed glioblastoma and at first recurrence. Nonetheless, bevacizumab is widely used in patients with recurrent glioma. However, its use in patients with gliomas showing a gliomatosis cerebri growth pattern is contentious. Due to the marked diffuse and infiltrative growth with less angiogenic tumor growth, it may appear questionable whether bevacizumab can have a therapeutic effect in those patients. However, the development of nodular, necrotic, and/or contrast-enhancing lesions in patients with a gliomatosis cerebri growth pattern is not uncommon and may indicate focal neo-angiogenesis. Therefore, control of growth of these lesions as well as control of edema and reduction of steroid use may be regarded as rationales for the use of bevacizumab in these patients. In this retrospective patient series, we report on 17 patients with primary brain tumors displaying a gliomatosis cerebri growth pattern (including seven glioblastomas, two anaplastic astrocytomas, one anaplastic oligodendroglioma, and seven diffuse astrocytomas). Patients have been treated with bevacizumab alone or in combination with lomustine or irinotecan. Seventeen matched patients treated with bevacizumab for gliomas with a classical growth pattern served as a control cohort. Response rate, progression-free survival, and overall survival were similar in both groups. Based on these results, anti-angiogenic therapy with bevacizumab should also be considered in patients suffering from gliomas with a mainly infiltrative phenotype.
Overrepresentation of bidirectional connections in local cortical networks has been repeatedly reported and is a focus of the ongoing discussion of nonrandom connectivity. Here we show in a brief mathematical analysis that in a network in which connection probabilities are symmetric in pairs, Pij = Pji, the occurrences of bidirectional connections and nonrandom structures are inherently linked; an overabundance of reciprocally connected pairs emerges necessarily when some pairs of neurons are more likely to be connected than others. Our numerical results imply that such overrepresentation can also be sustained when connection probabilities are only approximately symmetric.
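The core claim of this analysis can be illustrated numerically: if pairwise connection probabilities are symmetric but heterogeneous, the expected count of reciprocally connected pairs is proportional to E[p²], which by Jensen's inequality exceeds the Erdős–Rényi prediction (E[p])² based on the mean connection probability. The following is a minimal simulation sketch, not the authors' code; all parameters are illustrative:

```python
import random

random.seed(0)
n = 200
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

# symmetric but heterogeneous pair probabilities: p_ij = p_ji, mean 0.1
p = {pair: random.choice([0.02, 0.18]) for pair in pairs}

# count reciprocally connected pairs (both directions drawn independently with p_ij)
recip = sum(1 for pr in pairs if random.random() < p[pr] and random.random() < p[pr])

# Erdos-Renyi expectation with the same mean probability
mean_p = sum(p.values()) / len(pairs)
expected_er = mean_p ** 2 * len(pairs)
# recip substantially exceeds expected_er, since E[p^2] > (E[p])^2
```

Here the bimodal choice of 0.02 vs. 0.18 stands in for any heterogeneity in connection probabilities; the overrepresentation grows with the variance of p.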
Celiac disease (CD) is an immune-mediated enteropathy that is characterized by intraepithelial lymphocytosis, crypt hyperplasia, and villous atrophy. Prevalence is high and has been estimated to range between 0.5% and 1.5%. Capsule endoscopy (CE) has a sensitivity and specificity of approximately 90%. CD is an important differential diagnosis for diagnostic workup for anemia, malabsorption, or diarrhea, and must be recognized reliably by the investigator. Moreover, CE is the preferred method to screen for complications in CD, such as enteropathy-associated T-cell lymphoma, ulcerative jejunitis, and small bowel adenocarcinoma. This article is part of an expert video encyclopedia.
The von Willebrand factor (VWF) is a glycoprotein in the blood that plays a central role in hemostasis. Among other functions, VWF is responsible for platelet adhesion at sites of injury via its A1 domain. Its adjacent VWF domain A2 exposes a cleavage site under shear to degrade long VWF fibers in order to prevent thrombosis. Recently, it has been shown that VWF A1/A2 interactions inhibit the binding of platelets to VWF domain A1 in a force-dependent manner prior to A2 cleavage. However, whether and how this interaction also takes place in longer VWF fragments as well as the strength of this interaction in the light of typical elongation forces imposed by the shear flow of blood remained elusive. Here, we addressed these questions by using single molecule force spectroscopy (SMFS), Brownian dynamics (BD), and molecular dynamics (MD) simulations. Our SMFS measurements demonstrate that the A2 domain has the ability to bind not only to single A1 domains but also to VWF A1A2 fragments. SMFS experiments of a mutant [A2] domain, containing a disulfide bond which stabilizes the domain against unfolding, enhanced A1 binding. This observation suggests that the mutant adopts a more stable conformation for binding to A1. We found intermolecular A1/A2 interactions to be preferred over intramolecular A1/A2 interactions. Our data are also consistent with the existence of two cooperatively acting binding sites for A2 in the A1 domain. Our SMFS measurements revealed a slip-bond behavior for the A1/A2 interaction and their lifetimes were estimated for forces acting on VWF multimers at physiological shear rates using BD simulations. Complementary fitting of AFM rupture forces in the MD simulation range adequately reproduced the force response of the A1/A2 complex spanning a wide range of loading rates. 
In conclusion, we here characterized the auto-inhibitory mechanism of the intramolecular A1/A2 bond as a shear dependent safeguard of VWF, which prevents the interaction of VWF with platelets.
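Slip-bond lifetimes of the kind estimated here for the A1/A2 interaction are commonly described by Bell's model, in which the bond lifetime decreases exponentially with the applied force. A minimal sketch follows; the parameter values are illustrative placeholders, not the fitted parameters of the A1/A2 complex:

```python
import math

def bell_lifetime(force_pN, tau0_s, x_beta_nm, kT_pN_nm=4.11):
    """Bell model for a slip bond: tau(F) = tau0 * exp(-F * x_beta / kT).

    force_pN  : constant pulling force in piconewtons
    tau0_s    : zero-force lifetime in seconds (illustrative)
    x_beta_nm : distance to the transition state in nanometres (illustrative)
    kT_pN_nm  : thermal energy, ~4.11 pN*nm at room temperature
    """
    return tau0_s * math.exp(-force_pN * x_beta_nm / kT_pN_nm)

# illustrative parameters: 10 s lifetime at zero force, 0.5 nm barrier distance
tau_unloaded = bell_lifetime(0.0, 10.0, 0.5)
tau_loaded = bell_lifetime(20.0, 10.0, 0.5)  # much shorter at 20 pN
```

For a slip bond the lifetime is monotonically decreasing in force, which is the behaviour the SMFS data indicated for A1/A2.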
The adaptor protein Src homology 2 domain-containing leukocyte phosphoprotein of 76 kDa (SLP-76) plays a crucial role in T cell activation by linking antigen receptor (T cell receptor, TCR) signals to downstream pathways. At its N terminus, SLP-76 has three key tyrosines (Tyr-113, Tyr-128, and Tyr-145, “3Y”) as well as a sterile α motif (SAM) domain whose function is unclear. We showed previously that the SAM domain has two binding regions that mediate dimer and oligomer formation. In this study, we have identified SAM domain-carrying non-receptor tyrosine kinase, activated Cdc42-associated tyrosine kinase 1 (ACK1; also known as Tnk2, tyrosine kinase non-receptor 2) as a novel binding partner of SLP-76. Co-precipitation, laser-scanning confocal microscopy, and in situ proximity analysis confirmed the binding of ACK1 to SLP-76. Further, the interaction was induced in response to the anti-TCR ligation and abrogated by the deletion of SLP-76 SAM domain (ΔSAM) or mutation of Tyr-113, Tyr-128, and Tyr-145 to phenylalanine (3Y3F). ACK1 induced phosphorylation of the SLP-76 N-terminal tyrosines (3Y) dependent on the SAM domain. Further, ACK1 promoted calcium flux and NFAT-AP1 promoter activity and decreased the motility of murine CD4+ primary T cells on ICAM-1-coated plates, an event reversed by a small molecule inhibitor of ACK1 (AIM-100). These findings identify ACK1 as a novel SLP-76-associated protein-tyrosine kinase that modulates early activation events in T cells.
Until recently the Nigerian Nok Culture had primarily been known for its terracotta sculptures and the existence of iron metallurgy, providing some of the earliest evidence for artistic sculpting and iron working in sub-Saharan Africa. Research was resumed in 2005 to understand the Nok Culture phenomenon, employing a holistic approach in which the sculptures and iron metallurgy remain central, but which likewise covers other archaeological aspects including chronology, settlement patterns, economy, and the environment as key research themes. In the beginning of this endeavour the development of social complexity during the duration of the Nok Culture constituted a focal point. However, after nearly ten years of research and an abundance of new data the initial hypothesis can no longer be maintained. Rather than attributes of social complexity like signs of inequality, hierarchy, nucleation of settlement systems, communal and public monuments, or alternative African versions of complexity discussed in recent years, it has become apparent that the Nok Culture, no matter which concept is followed, developed complexity only in terms of ritual. Relevant information and arguments for the transition of the theoretical background are provided here.
The growing interest of the Arabs in Arabic translations from Greek since the 8th century has been interpreted as a sign of humanism in Islam. This is comparable to the humanists in Europe who, since the 14th century, considered Greek and Latin literature the foundation of spiritual and moral education. We will have to address the question of whether a similar ideal of education was developed in harmony with religion in the Islamic cultural sphere. The perceived tension between the humanists of antiquity and Christianity has a parallel in the tensions between Islamic religiosity and a rational Islamic worldview. However, there are past and present approaches to developing an educational ideal comparable to the European concept of a moral shaping of the individual. The Qur'ān and Islamic tradition do not impede the free development of personality and creative responsibility if their historicity is taken into account and if they are not elevated to an unreflected norm.
Keywords: Humanism, Islamic and European, education, individuality, solidarity, free will and subordination, Ibn al-Muqaffa’, Fārābī, Yaḥyā Ibn’Adī, Miskawayh, Rāghib al-Isfahānī, Ghazzālī, Ibn Khaldūn, Renaissance of Islam - Italian Renaissance, Pico della Mirandola, Nahda, Tāhā Husayn, Sadik J. Al-Azm, Edward W. Said, Naquib Al-Attas
Stroke patients with proximal occlusions of the main stems of cerebral arteries are not optimal candidates for i.v. thrombolysis. For many years, interventional stroke treatment could not be established as an alternative. This changed with the introduction of stent retrievers and flexible large-lumen aspiration catheters. Randomized trials have now proven a significant benefit from intervention for a wide spectrum of severely compromised stroke patients in time windows of up to 8 hrs. However, the randomized trials leave open questions concerning proper patient selection. The benefit for patients with larger infarcts (ASPECTS between 3 and 5) or patients in time windows beyond 8 hrs is still uncertain. Especially for critical candidates, imaging for reliable detection of the ischemic core and the surrounding salvageable brain tissue plays an important role. Technical equivalence of new aspiration techniques as an alternative to stent retrievers has not been conclusively proven. Recanalization of tandem occlusions requiring acute stenting demands better materials for plaque coverage and thrombus retention. Management of cases with occlusions due to intracranial atherosclerosis is also debatable. The positive trial results pose new challenges, in particular the establishment of countrywide neurointerventional services. Even in developed countries, recruitment and training of interventional radiologists, as well as priority transportation of stroke patients, are challenging to organize.
This is an abstract presented in the 33rd Iranian congress of radiology (ICR) and the 15th congress of Iranian radiographic science association (IRSA).
Disabling practices
(2017)
Following Foucault’s theory of discourse, this article aims at reformulating the established concept of disability. To this end, the author reconstructs ways in which disabling practices of subjectivation occur in and through public media discourses. The article focuses on the discursive production of infantile identities in people with cognitive disabilities. The examples demonstrate that this discursive production occurs in self-representational media formats and in outside media representations. Hence, the author develops a concept of disability as a discursively produced ordering category, from which follows a reformulation of the disability concept. This reformulated concept, which grasps disability as discourse disability, allows in turn for a perspective on disability as practice and thus as independent from the subject. To conclude, the article discusses implications of such a perspective on disability for pedagogy and the social sciences, ultimately arguing for a broader definition of disability and for making the respective benefits a matter of social pedagogy.
Membrane proteins are biological macromolecules located in the cell membrane that are responsible for essential functions within an organism, which makes them prominent drug targets. The extraction of membrane proteins from the hydrophobic membrane bilayer to determine high-resolution crystal structures is a difficult task, and only 2% of all solved protein structures are membrane proteins. Computational methods may help to gain deeper insights into membrane protein structures and their functions. This study gives an overview of such computational methods applied to a representative set of membrane proteins and provides ideas for future computational and experimental research on membrane proteins.
In a first step (chapter 2), I updated an earlier, manually curated data set of homologous membrane proteins (HOMEP) to more recent versions in 2010 (HOMEP2) and 2013 (HOMEP3) using an automated clustering approach. High-resolution structures of membrane proteins listed in the PDB_TM database were structurally aligned and subsequently clustered using structural similarity scores. Both data sets were used as gold-standard reference sets for the subsequent work.
Subsequently, I updated and applied the sequence alignment program AlignMe to determine protein descriptors that are suitable for detecting evolutionary relationships between homologous α-helical membrane proteins. Single input descriptors were tested alone and in combination with each other in different modes of AlignMe by optimizing gap penalties on the HOMEP2 data set. The most accurate alignments and homology models on the HOMEP2 data set were obtained when using position-specific substitution information (P), secondary structure propensities (S) and transmembrane propensities (T) in the AlignMe PST mode. An evaluation on an independent reference set of membrane protein sequence alignments from the BAliBASE collection showed that different modes of AlignMe are suitable for different sequence similarity levels. The AlignMe PST mode improved alignment accuracy significantly for distantly related proteins, whereas for closely related proteins from the BAliBASE set the AlignMe PS mode was more suitable. This work was published in March 2013 in PLOS ONE. To make the AlignMe program easier to use, I implemented an AlignMe web server (chapter 4) that provides the optimized settings and gap penalties for the AlignMe P, PS and PST modes. A comparison with other recent alignment web servers shows that the alignments of AlignMe are similar to, or even more accurate than, those of other methods, especially for very distantly related proteins, for which the inclusion of membrane protein information has been shown to be beneficial. This work was published in the NAR web server issue in July 2014.
Although membrane-specific information has proven useful for aligning distantly related membrane proteins at the sequence level, such information had not been incorporated into structural alignment programs, leaving it unclear which method is most suitable for aligning membrane protein structures. I therefore compared 13 widely used pairwise structural alignment methods on an updated reference set of homologous membrane protein structures (HOMEP3), building models from the underlying sequence alignments and rating model accuracy with scoring functions such as AL4 and the CAD-score (chapter 5). The analysis showed that fragment-based approaches such as FR-TM-align are most useful for aligning structures that have undergone large conformational changes, whereas rigid-body approaches are more suitable for proteins solved in the same or a similar state. However, no method was significantly more accurate than any other, and all methods lack a measure of the reliability of a specific position within a structure alignment. To address these problems, I propose a consensus-type approach that combines alignments from four methods (FR-TM-align, DaliLite, MATT and FATCAT) and assigns each alignment position a confidence value describing the agreement between the methods. This work was published in 2015 in the journal PROTEINS: Structure, Function, and Bioinformatics.
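The consensus idea can be sketched as a vote over the per-residue correspondences proposed by the individual methods, with the agreement fraction serving as the per-position confidence. The method outputs below are toy mappings, not real alignment results.

```python
# Sketch of the consensus idea: several structure-alignment methods each
# propose, for every residue of protein A, an aligned residue in protein B
# (or None). The consensus takes the majority choice and reports confidence
# as the fraction of methods that agree.

from collections import Counter

def consensus(alignments):
    """alignments: list of dicts mapping residue index in A -> index in B."""
    positions = sorted({i for aln in alignments for i in aln})
    result = {}
    for i in positions:
        votes = Counter(aln.get(i) for aln in alignments)
        partner, count = votes.most_common(1)[0]
        result[i] = (partner, count / len(alignments))
    return result

# Four hypothetical method outputs (e.g. FR-TM-align, DaliLite, MATT, FATCAT):
methods = [
    {1: 1, 2: 2, 3: 3},
    {1: 1, 2: 2, 3: 4},
    {1: 1, 2: 2, 3: 3},
    {1: 1, 2: 3, 3: 3},
]
for i, (partner, conf) in consensus(methods).items():
    print(f"A:{i} -> B:{partner}  confidence {conf:.2f}")
```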
Consensus alignments were then generated for each pair of proteins in the HOMEP3 data set and analyzed for single evolutionary events within membrane-spanning segments and for irregular structures such as 3₁₀- and π-helices (chapter 6). Notably, with the help of the consensus alignments, single insertions and deletions could be observed in the conserved membrane-spanning segments of proteins from four families. The detection of such single InDels may help to identify residues that are crucial for a protein's function.
If the biotechnological production of chemicals can increasingly replace or complement conventional synthetic chemistry, industry will be able to move away from fossil oil towards renewable sources. In many cases, however, the necessary adaptation of biotechnological production systems has not yet reached the required level.
For processes that require short fatty acids (FA), for example the microbial production of biofuels in the gasoline range, protein engineering had not yet delivered feasible solutions. In this thesis, several approaches to introducing chain-length control into type I fatty acid synthases (FAS) were established and made available in a publication and two patents. The engineering focused on rational design based on available structural information.
First, the type I FAS from C. ammoniagenes was used as a model enzyme to probe FAS modifications in a low-complexity in vitro environment and thereby gain information about structure-function relationships. Engineering proceeded in several rounds, first addressing ways to alter product distributions by changing substrate affinities through targeted mutations in the binding channels. Several FAS constructs were generated, ranging from first successes, in which short FA appeared as side products, to FAS variants in which the native chain-length programming was overridden and only short FA were produced.
Furthermore, a second engineering target was addressed by modifying domain-domain interactions in FAS. To exploit these interactions for directing synthesis, contact surfaces on catalytic domains were altered to interfere with acyl carrier protein binding. This channeling of the kinetic process on the enzyme led to similar successes, with short FA becoming the primary product.
Both approaches proved to be potent tools for introducing chain-length control into FAS. This rational engineering has the advantage of being largely minimally invasive, and owing to the high conservation of de novo FA synthesis, individual mutations could readily be transferred to other FAS (and their host organisms). Even heterologous expression of modified FAS genes is feasible.
The engineering was tested not only in a defined in vitro environment but also in S. cerevisiae as an exemplary in vivo system. The results confirmed the in vitro findings and showed that the chosen engineering can be transferred to more complex systems. Even before any optimization for maximum output, titers of short FA from S. cerevisiae fermentation reached 118 mg/L, matching previous reports.
In sum, this work spans several layers from basic research to preliminary application. The modifications presented for creating short-FA-producing FAS can be a key step in synthesis pathways and will likely enable a range of follow-up research. They constitute a valuable contribution towards establishing novel routes for producing chemicals from renewable sources.
Background and Purpose. Leukocyte migration into the alveolar space plays a critical role in pulmonary inflammation resulting in lung injury. Acute ethanol (EtOH) exposure exerts anti-inflammatory effects, but the clinical use of EtOH is problematic because of its side effects. Here, we compared the effects of EtOH and ethyl pyruvate (EtP) on neutrophil adhesion and on the activation of cultured alveolar epithelial cells (A549). Experimental Approach. The time course and dose dependence of interleukin- (IL-) 6 and IL-8 release from A549 cells were measured after pretreatment of A549 with EtP (2.5–10 mM), sodium pyruvate (NaP, 10 mM), or EtOH (85–170 mM) and subsequent stimulation with lipopolysaccharide or IL-1beta. Neutrophil adhesion to pretreated and stimulated A549 monolayers and CD54 surface expression were determined. Key Results. Treating A549 with EtOH or EtP substantially reduced the cytokine-induced release of IL-8 and IL-6. EtOH and EtP (but not NaP) reduced neutrophil adhesion to the monolayers in a dose- and time-dependent fashion. CD54 expression on A549 decreased when EtOH or EtP was applied before IL-1beta stimulation. Conclusions and Implications. EtP reduces the secretory and adhesive potential of lung epithelial cells under inflammatory conditions. These findings suggest EtP as a potential treatment alternative that mimics the anti-inflammatory effects of EtOH in the early inflammatory response in the lungs.
Nosological delineation of congenital ocular motor apraxia type Cogan : an observational study
(2016)
Background: The nosological assignment of congenital ocular motor apraxia type Cogan (COMA) is still controversial. While regarded as a distinct entity by some authorities including the Online Mendelian Inheritance in Man catalog of genetic disorders, others consider COMA merely a clinical symptom.
Methods: We performed a retrospective multicenter data collection study with re-evaluation of clinical and neuroimaging data of 21 previously unreported patients (8 female, 13 male, ages ranging from 2 to 24 years) diagnosed as having COMA.
Results: Ocular motor apraxia (OMA) was recognized during the first year of life and was confined to horizontal pursuit in all patients. OMA attenuated over the years in most cases, regressed completely in two siblings, and persisted unimproved in one individual. Accompanying clinical features included early-onset ataxia in most patients and cognitive impairment with learning disability (n = 6) or intellectual disability (n = 4). Re-evaluation of the MRI data sets revealed a hitherto unrecognized molar tooth sign diagnostic of Joubert syndrome in 11 patients, neuroimaging features of Poretti-Boltshauser syndrome in one case, and a cerebral malformation suspicious of a tubulinopathy in another subject. In the remainder, MRI showed vermian hypo-/dysplasia in 4 patients and no abnormalities in another 4. There was a strong trend toward more severe cognitive impairment in patients with Joubert syndrome compared with those with inconclusive MRI, but otherwise no significant difference in clinical phenotypes between these two groups.
Conclusions: Systematic reanalysis of the neuroimaging data led to a diagnostic reappraisal in the majority of patients with early-onset OMA in the cohort reported here. This finding further challenges the notion that COMA constitutes a separate entity and underlines the need for expert assessment of neuroimaging in children with COMA, especially if they show cognitive impairment.
Introduction: In 2008, the German Council of Science and Humanities advised universities to establish a quality management system (QMS) conforming to international standards. The system was to be implemented within 5 years, i.e., by 2014 at the latest. The aim of the present study was to determine whether a QMS suitable for the electronic learning (eLearning) domain of medical education and usable across Germany has meanwhile been identified.
Methods: We approached all medical universities in Germany (n=35), using an anonymous questionnaire (8 domains, 50 items).
Results: Our results (response rate 46.3%) indicated a very hesitant adoption of QMS in eLearning and a major information deficit at the various institutions.
Conclusions: We conclude that, within the limitations of this study, there appears to be a considerable need to improve current knowledge of QMS for eLearning and to define clear guidelines and standards for their implementation.
Homeodomain proteins are encoded by homeobox genes and regulate development and differentiation in many neuronal systems. The mouse vomeronasal organ (VNO) generates mature chemosensory neurons in situ from stem cells, but the roles of homeodomain proteins in neuronal differentiation in the VNO are poorly understood. Here we characterized the expression patterns of 28 homeobox genes in the VNO of C57BL/6 mice at postnatal stages using multicolor fluorescent in situ hybridization. We identified 11 homeobox genes (Dlx3, Dlx4, Emx2, Lhx2, Meis1, Pbx3, Pknox2, Pou6f1, Tshz2, Zhx1, Zhx3) that were expressed exclusively in neurons; 4 homeobox genes (Pax6, Six1, Tgif1, Zfhx3) that were expressed in all non-neuronal cell populations, with Pax6, Six1 and Tgif1 also expressed in some neuronal progenitors and precursors; 12 homeobox genes (Adnp, Cux1, Dlx5, Dlx6, Meis2, Pbx2, Pknox1, Pou2f1, Satb1, Tshz1, Tshz3, Zhx2) with expression in both neuronal and non-neuronal cell populations; and one homeobox gene (Hopx) that was expressed exclusively in the non-sensory epithelium. We then studied the expression of Emx2, Lhx2, Meis1, and Meis2 in greater detail and found that expression of Emx2 and Lhx2 begins between the neuronal progenitor and neuronal precursor stages. Among the sensory neurons of the VNO, Meis1 and Meis2 were expressed only in the apical layer, together with Gnai2, but not in the basal layer.
Infectious diseases remain a considerable health threat to humans and animals. In the past, the epidemiology, etiology and pathology of infectious agents affecting humans and animals have mostly been investigated in separate studies. It is evident, however, that combined approaches are needed to understand the geographical distribution, transmission and infection biology of zoonotic agents. The genus Bartonella is a prime example of the synergistic benefits that such combined approaches can yield: Bartonella spp. infect a broad variety of animals, are linked with a constantly increasing number of human diseases, and are transmitted via arthropod vectors. The genus Bartonella is therefore predestined to play a pivotal role in establishing a One Health concept combining veterinary and human medicine.
Criticality meets learning : criticality signatures in a self-organizing recurrent neural network
(2017)
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
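The avalanche analysis mentioned above can be sketched in a few lines: an avalanche is a contiguous run of non-empty time bins in a binned activity trace, and its size is the total activity in that run. The trace below is invented; real analyses use binned spike counts and then test the size distribution against a power law.

```python
# Toy illustration of one criticality signature discussed above: neuronal
# avalanches are contiguous runs of time bins with non-zero activity; at
# criticality their sizes follow an approximate power law.

def avalanche_sizes(activity):
    """Split a binned activity trace into avalanches (runs between empty
    bins) and return the total activity (size) of each avalanche."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

activity = [0, 2, 3, 0, 0, 1, 0, 4, 1, 1, 0]
print(avalanche_sizes(activity))  # [5, 1, 6]
```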
Background: Currently, there is inadequate evidence on which to base clinical management of neurotoxic snakebite envenoming, especially in the choice of initial antivenom dosage. This randomised controlled trial compared the effectiveness and safety of high versus low initial antivenom dosage in victims of neurotoxic envenoming.
Methodology/Principal findings: This was a balanced, randomised, double-blind trial conducted in three health care centers located in the Terai plains of Nepal. Participants received either a low (two vials) or high (10 vials) initial dose of Indian polyvalent antivenom. The primary composite outcome consisted of death, the need for assisted ventilation and worsening/recurrence of neurotoxicity. Hourly evaluations followed antivenom treatment. Between April 2011 and October 2012, 157 snakebite victims were enrolled, of whom 154 were analysed (76 in the low and 78 in the high initial dose group). Sixty-seven (43.5%) participants met the primary outcome definition. The proportions were similar in the low (37, or 48.7%) vs. high (30, or 38.5%) initial dose groups (difference = 10.2%, 95% CI [-6.7 to 27.1], p = 0.264). The mean number of vials used was similar between treatment groups. Overall, patients bitten by kraits fared worse than those bitten by cobras. The occurrence of treatment-related adverse events did not differ between treatment groups. A total of 19 serious adverse events occurred, including seven attributed to antivenom.
Conclusions: This first robust trial investigating antivenom dosage for neurotoxic snakebite envenoming shows that the antivenom currently used in Nepal performs poorly. Although the high initial dose regimen is not more effective than the low initial dose, it offers the practical advantage of being a single dose, while not incurring higher consumption or enhanced risk of adverse reaction. The development of new and more effective antivenoms that better target the species responsible for bites in the region will help improve future patients’ outcomes.
Trial registration: The study was registered on clinicaltrials.gov (NCT01284855).
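As a side note, the effect estimate quoted in the findings (difference = 10.2%) can be recomputed from the reported counts (37/76 vs. 30/78). The simple Wald interval below will not exactly reproduce the published confidence interval, which may have been computed with a different method.

```python
# Risk difference between two groups with a Wald 95% confidence interval.
# Counts are taken from the abstract (37/76 low dose vs. 30/78 high dose).

from math import sqrt

def risk_difference(e1, n1, e2, n2, z=1.96):
    p1, p2 = e1 / n1, e2 / n2
    diff = p1 - p2
    # Standard error of a difference of two independent proportions.
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = risk_difference(37, 76, 30, 78)
print(f"difference = {diff:.1%}, 95% CI [{lo:.1%} to {hi:.1%}]")
```

The interval straddles zero, consistent with the non-significant p-value reported above.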
Purpose. Locoregional recurrence is the main cause of treatment failure after primary multimodal treatment of squamous cell carcinoma of the head and neck (SCCHN). We compared the efficacy and toxicity of cisplatin versus cetuximab administered concurrently with re-irradiation (ReRT) for inoperable recurrent SCCHN. In addition, a prognostic score was to be established on the basis of various clinical and pathological factors.
Patients and methods. From 2007 to 2014, 66 patients with SCCHN recurrences in previously irradiated regions were re-irradiated with concurrent cetuximab (n=33) or cisplatin-based chemotherapy (n=33). Toxicity was assessed weekly and at each follow-up visit. Clinical examination, endoscopy, and CT or MRI scans were used to assess treatment response and disease control.
Results. After a mean follow-up of 18.3 months, 1-year overall survival (OS) was 44.4% for ReRT with cetuximab and 45.5% with cisplatin-based chemotherapy (p=0.352). One-year local control rates were 46.4% and 54.2%, respectively (p=0.625); rates of freedom from distant metastasis were 73.6% and 81% (p=0.842). Hematologic toxicity ≥ grade 3 was more frequent in the cisplatin group (p<0.001), whereas pain ≥ grade 3 occurred more often in the cetuximab group (p=0.034). A normal hemoglobin level and a longer interval between primary RT and ReRT proved to be significant prognostic factors for OS (multivariate: p=0.003 and p=0.002). Recurrence site and gross tumor volume (GTV) showed no significant influence on OS in multivariate analysis (p=0.160 and p=0.167). A prognostic score constructed from these variables (0 to 4 points) showed significant survival differences: 1-year OS for 0/1/2/3/4 points was 10%, 38%, 76%, 80% and 100%, respectively (p<0.001).
Conclusion. Both cetuximab- and cisplatin-based ReRT for recurrent SCCHN are feasible and effective treatment options with comparable outcomes in terms of tumor control and survival. Acute toxicities may vary slightly. Our prognostic score could help identify patients suitable for ReRT and serve as a stratification tool in future clinical trials.
Since HRS cells constitute only a minority of cells in cHL, while CD4+ T cells make up the majority of the accompanying infiltrate, this dissertation compared the accompanying infiltrate and tumor cell content of 24 HIV-associated cHL cases with 15 HIV-negative cHL cases by immunohistochemistry. The reactive infiltrate in HIV-associated cHL showed a markedly lower number of CD4+ T cells and a higher content of CD163+ macrophages than HIV-negative cHL. No difference between the two groups was found in the numbers of CD30+ HRS cells and S100+ dendritic cells. Co-culture experiments followed by cell smears of these co-cultures confirmed that CD14+ monocytes can arrange themselves as rosettes around HRS cells just as well as CD4+ T cells. In immunocompromised HIV patients, the long-lived CD163+ macrophages replace the CD4+ T cells. The macrophages are presumably recruited to the tumor tissue, like CD4+ T cells, via cytokines/chemokines (e.g., CCL5), form rosettes around the tumor cells, and support their proliferation.
Given the particular composition of the accompanying infiltrate, HIV-associated cHL should be regarded by pathologists as a distinct subtype of cHL.
Furthermore, the accompanying infiltrate of the typically nodular NLPHL patterns A and C was compared immunohistochemically with that of the diffuse NLPHL pattern E (THRLBCL-like NLPHL) and with THRLBCL. Because of histological and clinical similarities between diffuse NLPHL and THRLBCL, differentiating these entities is difficult. The accompanying infiltrate in THRLBCL-like NLPHL was found to resemble that of THRLBCL more than that of typical NLPHL, specifically with regard to macrophage content and the number of follicular T helper (TFH) cells. Rosettes were detected in the infiltrate of THRLBCL, although rosette formation around tumor cells is not described as a characteristic feature of THRLBCL in the literature. It is plausible that THRLBCL-like NLPHL and THRLBCL are one and the same disease, possibly representing a more aggressive variant of NLPHL.
Considering all results, a patient's immune status plays a decisive role in shaping the accompanying infiltrate in tumor tissue and thereby also influences the clinical course of the lymphoma.
The Fire Modeling Intercomparison Project (FireMIP), phase 1: experimental and analytical protocols
(2016)
The important role of fire in regulating vegetation community composition and contributions to emissions of greenhouse gases and aerosols make it a critical component of dynamic global vegetation models and Earth system models. Over two decades of development, a wide variety of model structures and mechanisms have been designed and incorporated into global fire models, which have been linked to different vegetation models. However, there has not yet been a systematic examination of how these different strategies contribute to model performance. Here we describe the structure of the first phase of the Fire Model Intercomparison Project (FireMIP), which for the first time seeks to systematically compare a number of models. By combining a standardized set of input data and model experiments with a rigorous comparison of model outputs to each other and to observations, we will improve the understanding of what drives vegetation fire, how it can best be simulated, and what new or improved observational data could allow better constraints on model behavior. Here we introduce the fire models used in the first phase of FireMIP, the simulation protocols applied, and the benchmarking system used to evaluate the models.
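One common benchmarking metric in this kind of model evaluation is the normalised mean error (NME), which scores a simulated field against observations so that a value of 1 corresponds to a model no better than predicting the observed mean everywhere. The sketch below uses invented data and is only a schematic stand-in for FireMIP's full benchmarking system.

```python
# Minimal sketch of a benchmarking metric: the normalised mean error (NME)
# compares a simulated field to observations, scaled so that NME = 1 means
# the model does no better than predicting the observed mean everywhere.

def nme(model, obs):
    obs_mean = sum(obs) / len(obs)
    num = sum(abs(m - o) for m, o in zip(model, obs))
    den = sum(abs(o - obs_mean) for o in obs)
    return num / den

obs = [1.0, 2.0, 3.0, 4.0]
good_model = [1.1, 2.1, 2.9, 4.2]
poor_model = [2.5, 2.5, 2.5, 2.5]
print(nme(good_model, obs))  # well below 1
print(nme(poor_model, obs))  # exactly 1: equivalent to predicting the mean
```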
Hematopoietic differentiation is controlled by key transcription factors, which regulate stem cell functions and differentiation. TAL1 is a central transcription factor for hematopoietic stem cell development in the embryo and for gene regulation during erythroid/megakaryocytic differentiation. Knowledge of the target genes controlled by a given transcription factor is important for understanding its contribution to normal development and disease. To uncover direct target genes of TAL1, we used high-affinity streptavidin/biotin-based chromatin precipitation (Strep-CP) followed by Strep-CP-on-chip analysis using ChIP promoter arrays. We identified 451 TAL1 target genes in K562 cells. Furthermore, we analysed the regulation of one of these genes, the catalytic subunit beta of protein kinase A (PRKACB), during megakaryopoiesis of K562 cells and primary human CD34+ stem/progenitor cells. We found that TAL1, together with the hematopoietic transcription factors RUNX1 and GATA1, binds to the promoter of isoform 3 of PRKACB (Cβ3). During megakaryocytic differentiation, a coactivator complex on the Cβ3 promoter, which includes WDR5 and p300, is replaced by a corepressor complex. In this manner, activating chromatin modifications are removed and expression of the PRKACB-Cβ3 isoform is reduced during megakaryocytic differentiation. Our data uncover a role of the TAL1 complex in controlling differential isoform expression of PRKACB. These results reveal a novel function of TAL1, RUNX1 and GATA1 in the transcriptional control of protein kinase A activity, with implications for cellular signalling control during differentiation and disease.
A recent report showed PINK1 transcript levels to be up- or down-regulated by the gain or loss of Ataxin-2 function, respectively, in human blood, in a human neural cell line and in mouse tissues. These observations may have profound implications for the regulation of cell growth and may be medically exploited for the treatment of cancer and neural atrophy...
Body image dissatisfaction is a serious, global problem that negatively affects life satisfaction. Several claims have been made about the possible psychological benefits of naturist activities, but very little empirical research has investigated these benefits or any plausible explanations for them. In three studies (one large-scale cross-sectional study, n = 849, and two prospective studies, n = 24 and n = 100), this research developed and applied knowledge about the possible benefits of naturist activities. More participation in naturist activities predicted greater life satisfaction, a relationship mediated by more positive body image and higher self-esteem (Study 1). Applying these findings, participation in actual naturist activities was found to lead to an increase in life satisfaction, an effect also mediated by improvements in body image and self-esteem (Studies 2 and 3). The potential benefits of naturism are discussed, as well as possible future research and implications for the use of naturist activities.
In this study, we aim to reconstruct a relevant new database of the monthly zonal mean distribution of carbon dioxide (CO2) at global scale, extending from the upper troposphere (UT) to the stratosphere (S). This product can be used for model and satellite validation in the UT/S, as a prior for inversion modelling, and in particular to analyse features of stratosphere–troposphere exchange as well as the stratospheric circulation and its variability. To this end, we investigate the ability of a Lagrangian trajectory model driven by ERA-Interim reanalysis to reconstruct the CO2 abundance in the UT/S. From 10-year backward trajectories and tropospheric observations of CO2, we reconstruct upper-tropospheric and stratospheric CO2 over the period 2000–2010. Intercomparisons of the reconstructed CO2 with mid-latitude vertical profiles measured by balloon samples, as well as quasi-horizontal air samples from the ER-2 aircraft during the SOLVE and CONTRAIL campaigns, show remarkable agreement, demonstrating the potential of the Lagrangian model to reconstruct CO2 in the UT/S. The zonal mean distribution exhibits relatively high CO2 in the tropical stratosphere due to the seasonal variation of the tropical upwelling of the Brewer-Dobson circulation. During winter and spring, the tropical pipe is relatively isolated, but it is less confined during summer and autumn, so that high CO2 values are more readily transported out of the tropics to the mid- and high-latitude stratosphere. The shape of the vertical profiles suggests that relatively high CO2 above 20 km altitude mainly enters the stratosphere through tropical upwelling. The CO2 mixing ratio is relatively low in the polar and tropical regions above 25 km. On average, the CO2 mixing ratio decreases with altitude by 6–8 ppmv from the UT to the stratosphere (e.g., up to 35 km) and is nearly constant with altitude thereafter.
In this study, we construct a new monthly zonal mean carbon dioxide (CO2) distribution from the upper troposphere to the stratosphere over the 2000–2010 time period. This reconstructed CO2 product is based on a Lagrangian backward trajectory model driven by ERA-Interim reanalysis meteorology and tropospheric CO2 measurements. Comparisons of our CO2 product to extratropical in situ measurements from aircraft transects and balloon profiles show remarkably good agreement. The main features of the CO2 distribution include (1) relatively large mixing ratios in the tropical stratosphere; (2) seasonal variability in the extratropics, with relatively high mixing ratios in the summer and autumn hemisphere in the 15–20 km altitude layer; and (3) decreasing mixing ratios with increasing altitude from the upper troposphere to the middle stratosphere (∼35 km). These features are consistent with the expected variability due to the transport of long-lived trace gases by the stratospheric Brewer–Dobson circulation. The method used here to construct this CO2 product differs from other modelling efforts and should be useful for model and satellite validation in the upper troposphere and stratosphere, as a prior for inversion modelling, and for analysing features of stratosphere–troposphere exchange as well as the stratospheric circulation and its variability.
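The reconstruction principle can be reduced to a one-line idea: because CO2 is effectively inert in the stratosphere, a parcel's mixing ratio equals the tropospheric value at the time the backward trajectory shows it entered the stratosphere. The boundary-condition function below (linear growth plus a seasonal cycle) is an invented stand-in for the tropospheric CO2 data actually used.

```python
# Conceptual sketch: a stratospheric parcel's CO2 is estimated by following
# its backward trajectory to its tropospheric entry time and sampling an
# idealised tropospheric CO2 boundary condition there. All numbers are
# illustrative, not the study's actual boundary conditions.

from math import sin, pi

def tropospheric_co2(t_years):
    """Idealised tropospheric CO2 (ppmv): linear growth + seasonal cycle."""
    return 370.0 + 1.9 * t_years + 2.0 * sin(2 * pi * t_years)

def reconstructed_co2(t_obs_years, transit_time_years):
    """CO2 of a parcel observed at t_obs that entered the stratosphere
    transit_time years earlier (chemistry neglected: CO2 is inert there)."""
    return tropospheric_co2(t_obs_years - transit_time_years)

# Older stratospheric air (longer transit time) carries lower CO2:
for transit in (0.5, 2.0, 5.0):
    print(f"transit {transit:4.1f} yr -> {reconstructed_co2(10.0, transit):.1f} ppmv")
```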
The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point in the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the Ozone Depletion Potential (ODP). To apply FRF in this context, steady-state values are needed, thus representing a molecular property for a given atmospheric situation. In particular, these values should be independent of the tropospheric trends of the respective halogenated trace gases.
We analyzed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean age levels and found that the current formulation of FRF yields highly time-dependent values. We show that this is caused by the way the tropospheric trend is handled in the current calculation method of FRF.
Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time-dependence in correlations of different tracers. Therefore we implemented a loss term in the formulation of FRF and applied the parameterization of a "mean arrival time" to our data set.
We find that the time-dependence in FRF can almost be compensated by applying a new trend correction in the calculation of FRF. We suggest that this new method should be used to calculate time-independent FRF, which can then be used e.g. for the calculation of ODP
The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point into the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the ozone depletion potential (ODP). In this context time-independent values are needed which, in particular, should be independent of the trends in the tropospheric mixing ratios (tropospheric trends) of the respective halogenated trace gases. For a given atmospheric situation, such FRF values would represent a molecular property.
We analysed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean age levels and found that the widely used formulation of FRF yields highly time-dependent values. We show that this is caused by the way that the tropospheric trend is handled in the widely used calculation method of FRF.
Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time dependence in FRFs. Therefore we implemented a loss term in the formulation of the FRF and applied the parameterization of a mean arrival time to our data set.
We find that the time dependence in the FRF can almost be compensated for by applying a new trend correction in the calculation of the FRF. We suggest that this new method should be used to calculate time-independent FRFs, which can then be used e.g. for the calculation of ODP.
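The widely used FRF formulation discussed above is conventionally defined from the mixing ratio a tracer had when its air entered the stratosphere and the mixing ratio observed at a stratospheric location. The sketch below implements only that conventional definition; the trend correction and mean-arrival-time parameterization proposed in the abstract are not reproduced here, and the numbers are illustrative, not model output.

```python
import numpy as np

def fractional_release(chi_entry, chi_observed):
    """Conventional FRF: the fraction of a halocarbon converted from its
    source form to the inorganic form.

    chi_entry    : mixing ratio the air parcel carried at stratospheric entry
    chi_observed : mixing ratio observed at the stratospheric location
    """
    chi_entry = np.asarray(chi_entry, dtype=float)
    chi_observed = np.asarray(chi_observed, dtype=float)
    return (chi_entry - chi_observed) / chi_entry

# Illustrative example: a CFC-like tracer with 40% of its burden released
print(fractional_release(250.0, 150.0))  # -> 0.4
```

Because chi_entry keeps changing with the tropospheric trend, this quantity is inherently time-dependent, which is exactly the problem the loss-term correction described above addresses.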
Malignant gliomas are intrinsic brain tumors with a dismal prognosis. They are well-adapted to hypoxic conditions and poorly immunogenic. NKG2D is one of the major activating receptors of natural killer (NK) cells and binds to several ligands (NKG2DL).
Here we evaluated the impact of microRNAs (miRNAs) on the expression of NKG2DL in glioma cells, including stem-like glioma cells. Three of the candidate miRNAs predicted to target NKG2DL were expressed in various glioma cell lines as well as in glioblastomas in vivo: miR-20a, miR-93 and miR-106b. LNA inhibitor-mediated miRNA silencing up-regulated cell surface NKG2DL expression, which translated into increased susceptibility to NK cell-mediated lysis. This effect was reversed by neutralizing NKG2D antibodies, confirming that the enhanced lysis upon miRNA silencing was mediated through the NKG2D system. Hypoxia, a hallmark of glioblastomas in vivo, down-regulated the expression of NKG2DL on glioma cells, associated with reduced susceptibility to NK cell-mediated lysis. This process, however, was not mediated through any of the examined miRNAs. Accordingly, both hypoxia and the expression of miRNAs targeting NKG2DL may contribute to the immune evasion of glioma cells at the level of the NKG2D recognition pathway. Targeting these miRNAs may therefore represent a novel approach to increase the immunogenicity of glioblastoma.
Background: The West African country of Burkina Faso (BFA) is an example of the enduring importance of traditional plant use today. A large proportion of its 17 million inhabitants live in rural communities and strongly depend on local plant products for their livelihood. However, literature on traditional plant use is still scarce and a comprehensive analysis for the country is still missing.
Methods: In this study we combine information from a recently published plant checklist with information from the ethnobotanical literature for a comprehensive, national-scale analysis of plant use in Burkina Faso. We quantify the application of plant species in 10 different use categories, evaluate plant use at the plant family level and use the relative importance index to rank all species in the country according to their usefulness. We focus on traditional medicine and quantify the use of plants as remedies against 22 classes of health disorders, evaluate plant use in traditional medicine at the plant family level and rank all species used in traditional medicine according to their respective usefulness.
Results: A total of 1033 species (50%) in Burkina Faso had a documented use. Traditional medicine, human nutrition and animal fodder were the most important use categories. The 12 most common plant families in BFA differed considerably in their usefulness and application. Fabaceae, Poaceae and Malvaceae were the plant families with the most used species. In this study, Khaya senegalensis, Adansonia digitata and Diospyros mespiliformis were ranked as the most useful plants in BFA. Infections/infestations, digestive system disorders and genitourinary disorders were the health problems most commonly addressed with medicinal plants. Fabaceae, Poaceae, Asteraceae, Apocynaceae, Malvaceae and Rubiaceae were the most important plant families in traditional medicine. Tamarindus indica, Vitellaria paradoxa and Adansonia digitata were ranked as the most important medicinal plants.
Conclusions: The national-scale analysis revealed systematic patterns of traditional plant use throughout BFA. These results are of interest for applied research, as a detailed knowledge of traditional plant use can a) help to communicate conservation needs and b) facilitate future research on drug screening.
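The relative-importance ranking described in the methods can be sketched as follows. Whether the study used exactly this variant of the index (scoring each species by its normalized number of use categories plus its normalized number of individual uses, after Bennett and Prance) is an assumption, and the species data below are purely illustrative.

```python
def relative_importance(species_uses):
    """Rank species by a simple relative-importance score.

    species_uses: dict mapping species name -> dict of
                  use_category -> list of specific uses.
    Score = (n categories / max categories) + (n uses / max uses).
    """
    max_cats = max(len(u) for u in species_uses.values())
    max_uses = max(sum(len(v) for v in u.values()) for u in species_uses.values())
    scores = {}
    for species, uses in species_uses.items():
        n_cat = len(uses) / max_cats
        n_use = sum(len(v) for v in uses.values()) / max_uses
        scores[species] = n_cat + n_use
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative (hypothetical) use records, not data from the study
data = {
    "Khaya senegalensis": {"medicine": ["fever", "malaria"], "fodder": ["leaves"]},
    "Adansonia digitata": {"food": ["fruit"], "medicine": ["diarrhoea"]},
    "Sorghum bicolor": {"food": ["grain"]},
}
print(relative_importance(data))
```

With these toy records, the species covering more use categories and more individual uses ranks first, mirroring how the most versatile species rise to the top of the national ranking.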
Proteins of the secretin family form large macromolecular complexes, which assemble in the outer membrane of Gram-negative bacteria. Secretins are major components of type II and III secretion systems and are linked to extrusion of type IV pili (T4P) and to DNA uptake. By electron cryo-tomography of whole Thermus thermophilus cells, we determined the in situ structure of a T4P molecular machine in the open and the closed state. Comparison reveals a major conformational change whereby the N-terminal domains of the central secretin PilQ shift by ∼30 Å, and two periplasmic gates open to make way for pilus extrusion. Furthermore, we determine the structure of the assembled pilus.
Background: Second-hand smoke (environmental tobacco smoke, ETS)-associated particulate matter (PM) contributes considerably to indoor air contamination and constitutes a health risk for passive smokers. Being easy to measure, PM is a useful parameter for estimating the dosage of ETS that passive smokers are exposed to. Apart from its suitability as a surrogate parameter for ETS exposure, PM itself affects human morbidity and mortality in a dose-dependent manner. We think that ETS-associated PM should be considered an independent hazard factor, separately from the many other known harmful compounds of ETS. We believe that brand- and tobacco-product-specific differences in PM release matter and that these differences are of public interest.
Methods: To generate ETS from cigarettes and cigarillos in as standardized and reproducible a manner as possible, an automatic environmental tobacco smoke emitter (AETSE) was developed and placed in a glass chamber. L&M cigarettes ("without additives", "red label", "blue label"), L&M filtered cigarillos ("red") and 3R4F standard research cigarettes (as reference) were smoked automatically according to a self-developed, standardized protocol until the tobacco product was smoked down to 8 mm from the tipping paper of the filter.
Results: Mean concentration (Cmean) and area under the curve (AUC) in a plot of PM2.5 concentration against time were measured and compared. Cmean values for PM2.5 were 518 μg/m³ for 3R4F reference cigarettes, 576 μg/m³ for L&M "without additives" ("red"), 448 μg/m³ for L&M "blue label", 547 μg/m³ for L&M "red label" and 755 μg/m³ for L&M filtered cigarillos ("red"). AUC values for PM2.5 were 208,214 μg/m³·s for 3R4F reference cigarettes, 204,629 μg/m³·s for L&M "without additives" ("red"), 152,718 μg/m³·s for L&M "blue label", 238,098 μg/m³·s for L&M "red label" and 796,909 μg/m³·s for L&M filtered cigarillos ("red").
Conclusion: Considering the large and significant differences in particulate matter emissions between cigarettes and cigarillos, we think that a favorable taxation of cigarillos is not justifiable.
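The Cmean and AUC metrics reported above can be computed from a PM2.5 time series by trapezoidal integration, with Cmean being the AUC divided by the sampling interval. The series below is hypothetical, not measurement data from the study.

```python
import numpy as np

# Hypothetical PM2.5 concentrations (µg/m³) sampled once per second;
# values are illustrative only.
t = np.arange(0.0, 11.0)                         # time in seconds
pm25 = np.array([0, 100, 300, 600, 800, 700,
                 500, 400, 300, 200, 100.0])     # concentration in µg/m³

# Trapezoidal area under the concentration-time curve (µg/m³·s)
auc = float(np.sum((pm25[:-1] + pm25[1:]) / 2.0 * np.diff(t)))

# Time-weighted mean concentration over the whole interval (µg/m³)
c_mean = auc / (t[-1] - t[0])

print(auc, c_mean)
```

AUC captures the total exposure over the smoking event, while Cmean summarizes its average intensity, which is why both metrics are reported per product.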
Background: Patients with Ph-negative myeloproliferative neoplasms (MPN), such as polycythemia vera (PV), essential thrombocythemia (ET), and primary myelofibrosis (PMF), are at increased risk for thrombosis/thromboembolism and major bleeding. Due to the morbidity and mortality of these events, antiplatelet and/or anticoagulant agents are commonly employed as primary and/or secondary prophylaxis. On the other hand, disease-related bleeding complications (i.e., from esophageal varices) are common in patients with MPN. This analysis was performed to define the frequency of such events, identify risk factors, and assess antiplatelet/anticoagulant therapy in a cohort of patients with MPN.
Methods: The MPN registry of the Study Alliance Leukemia is a non-interventional prospective study including adult patients with an MPN according to WHO criteria (2008). For statistical analysis, descriptive methods and tests for significant differences as well as contingency tables were used to identify the odds of potential risk factors for vascular events.
Results: MPN subgroups significantly differed in sex distribution, age at diagnosis, blood counts, LDH levels, JAK2V617F positivity, and spleen size (length). While most thromboembolic events occurred around the time of MPN diagnosis, one third of these events occurred after that date. Splanchnic vein thrombosis was most frequent in post-PV-MF and MPN-U patients. The chance of developing a thromboembolic event was significantly elevated in patients with post-PV-MF (OR = 3.43; 95% CI = 1.39–8.48) or splenomegaly (OR = 1.76; 95% CI = 1.15–2.71). Significant odds for major bleeding were previous thromboembolic events (OR = 2.71; 95% CI = 1.36–5.40), splenomegaly (OR = 2.22; 95% CI = 1.01–4.89), and the administration of heparin (OR = 5.64; 95% CI = 1.84–17.34). Major bleeding episodes were significantly less frequent in ET patients compared to other MPN subgroups.
Conclusions: Together, this report on an unselected "real-world" cohort of German MPN patients reveals important data on the prevalence, diagnosis, and treatment of thromboembolic and major bleeding complications of MPN.
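The odds ratios reported above are the kind of quantity obtained from the 2×2 contingency tables mentioned in the methods. A minimal sketch with a Wald confidence interval follows; the counts are purely hypothetical and do not come from the registry.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 contingency table:

                  event   no event
        exposed     a        b
        control     c        d
    """
    odds_ratio = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper

# Hypothetical counts: 20/100 exposed patients vs 10/100 controls had an event
print(odds_ratio_ci(20, 80, 10, 90))
```

As in the results above, a confidence interval whose lower bound stays above 1 would indicate significantly elevated odds; with these toy counts the interval just crosses 1.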
Background: Measurement of prostate-specific antigen (PSA) advanced the diagnostic and prognostic potential for prostate cancer (PCa). However, due to PSA’s lack of specificity, novel biomarkers are needed to improve risk assessment and ensure optimal personalized therapy. A set of protein molecules as potential biomarkers was therefore evaluated in serum of PCa patients.
Methods: Serum samples from patients undergoing radical prostatectomy (RPE) for biopsy-proven PCa without neoadjuvant treatment were compared to serum samples from healthy subjects. Preliminary screening of 119 proteins in 10 PCa patients and 10 controls was carried out with the Proteome Profiler Antibody Array. Those markers showing distinct differences between patients and controls were then further evaluated by ELISA in the serum of 165 PCa patients and 19 controls. Uni- and multivariate analyses as well as correlation analyses were performed to test the ability of these molecules to detect disease and predict pathological outcome.
Results: Screening showed that soluble (s)E-cadherin, E-selectin, MMP2, MMP9, TIMP1, TIMP2, galectin and clusterin warranted further evaluation. sE-cadherin, TIMP1, galectin and clusterin were significantly over-expressed and MMP9 under-expressed in PCa compared to controls. The concentrations of sE-cadherin, MMP2 and clusterin correlated negatively, and those of MMP9 and TIMP1 positively, with the Gleason sum at prostatectomy. Only sE-cadherin correlated significantly with the highest Gleason pattern. Compared to serum PSA, sE-cadherin provided independent and superior predictive ability for discriminating PCa cases with a Gleason upgrade at RPE and aggressive tumors with a Gleason sum ≥7.
Conclusions: Of a large panel of serum proteins, sE-cadherin performed most favorably in terms of diagnostic and predictive potential in curatively treatable PCa. sE-cadherin merits further investigation as a biomarker for PCa.
In ∼30% of families affected by colorectal adenomatous polyposis, no germline mutations have been identified in the previously implicated genes APC, MUTYH, POLE, POLD1, and NTHL1, although a hereditary etiology is likely. To uncover further genes with high-penetrance causative mutations, we performed exome sequencing of leukocyte DNA from 102 unrelated individuals with unexplained adenomatous polyposis. We identified two unrelated individuals with differing compound-heterozygous loss-of-function (LoF) germline mutations in the mismatch-repair gene MSH3. The impact of the MSH3 mutations (c.1148delA, c.2319−1G>A, c.2760delC, and c.3001−2A>C) was indicated at the RNA and protein levels. Analysis of the diseased individuals’ tumor tissue demonstrated high microsatellite instability of di- and tetranucleotides (EMAST), and immunohistochemical staining illustrated a complete loss of nuclear MSH3 in normal and tumor tissue, confirming the LoF effect and causal relevance of the mutations. The pedigrees, genotypes, and frequency of MSH3 mutations in the general population are consistent with an autosomal-recessive mode of inheritance. Both index persons have an affected sibling carrying the same mutations. The tumor spectrum in these four persons comprised colorectal and duodenal adenomas, colorectal cancer, gastric cancer, and an early-onset astrocytoma. Additionally, we detected one unrelated individual with biallelic PMS2 germline mutations, representing constitutional mismatch-repair deficiency. Potentially causative variants in 14 more candidate genes identified in 26 other individuals require further workup. In the present study, we identified biallelic germline MSH3 mutations in individuals with a suspected hereditary tumor syndrome. Our data suggest that MSH3 mutations represent an additional recessive subtype of colorectal adenomatous polyposis.
For infectious diseases caused by highly pathogenic agents (e.g., Ebola/Lassa fever virus, SARS-/MERS-CoV, pandemic influenza virus), which have the potential to spread across several continents within only a few days, international health protection authorities have taken appropriate measures to limit the consequences of a possible spread. A crucial point in this context is the disinfection of an aircraft that had a passenger on board who is suspected of being infected with one of the mentioned diseases. Although basic advice on hygiene and sanitation on board an aircraft is given by the World Health Organization, these guidelines lack details on available and effective substances as well as standardized operating procedures (SOPs). The purpose of this paper is to give guidance on the choice of substances that were tested by a laboratory of Lufthansa Technik and found compatible with aircraft components, and to describe procedures that ensure a safe and efficient disinfection of civil aircraft. This guidance and the accompanying SOPs have been made publicly available as described in this paper.
Background: Butanol isomers are regarded as more suitable fuel substitutes than bioethanol. n-Butanol is naturally produced by some Clostridia species, but due to inherent problems with clostridial fermentations, industrially more relevant organisms have been genetically engineered for n-butanol production. Although the yeast Saccharomyces cerevisiae holds significant advantages in terms of scalable industrial fermentation, the n-butanol yields and titers obtained so far have remained low.
Results: Here we report a thorough analysis and significant improvements of n-butanol production from glucose with yeast via the acetoacetyl-CoA-derived pathway. First, we established an improved n-butanol pathway by testing various isoenzymes of different pathway reactions. This resulted in n-butanol titers around 15 mg/L in synthetic medium after 74 h. As the initial substrate of the n-butanol pathway is acetyl-coenzyme A (acetyl-CoA) and most intermediates are bound to coenzyme A (CoA), we increased CoA synthesis by overexpression of the pantothenate kinase coaA gene from Escherichia coli. Supplementation with pantothenate increased n-butanol production up to 34 mg/L. Additional reduction of ethanol formation by deletion of the alcohol dehydrogenase genes ADH1-5 led to n-butanol titers of 71 mg/L. Further expression of a mutant form of an ATP-independent acetylating acetaldehyde dehydrogenase, adhE A267T/E568K, converting acetaldehyde into acetyl-CoA, resulted in 95 mg/L n-butanol. In the final strain, the n-butanol pathway genes, coaA and adhE A267T/E568K, were stably integrated into the yeast genome, with concomitant deletion of another alcohol dehydrogenase gene, ADH6, and of GPD2, which encodes glycerol-3-phosphate dehydrogenase. This led to a further decrease in ethanol and glycerol by-product formation and elevated redox power in the form of NADH. With the addition of pantothenate, this strain produced n-butanol up to a titer of 130 ± 20 mg/L and a yield of 0.012 g/g glucose. These are the highest values reported so far for S. cerevisiae in synthetic medium via an acetoacetyl-CoA-derived n-butanol pathway.
Conclusions: By gradually increasing substrate supply and redox power in the form of CoA, acetyl-CoA, and NADH, and decreasing ethanol and glycerol formation, we could stepwise increase n-butanol production in S. cerevisiae. However, still further bottlenecks in the n-butanol pathway must be deciphered and improved for industrially relevant n-butanol production levels.
Biallelic mutations in TMEM126B cause severe complex I deficiency with a variable clinical phenotype
(2016)
Complex I deficiency is the most common biochemical phenotype observed in individuals with mitochondrial disease. With 44 structural subunits and over 10 assembly factors, it is unsurprising that complex I deficiency is associated with clinical and genetic heterogeneity. Massively parallel sequencing (MPS) technologies including custom, targeted gene panels or unbiased whole-exome sequencing (WES) are hugely powerful in identifying the underlying genetic defect in a clinical diagnostic setting, yet many individuals remain without a genetic diagnosis. These individuals might harbor mutations in poorly understood or uncharacterized genes, and their diagnosis relies upon characterization of these orphan genes. Complexome profiling recently identified TMEM126B as a component of the mitochondrial complex I assembly complex alongside proteins ACAD9, ECSIT, NDUFAF1, and TIMMDC1. Here, we describe the clinical, biochemical, and molecular findings in six cases of mitochondrial disease from four unrelated families affected by biallelic (c.635G>T [p.Gly212Val] and/or c.401delA [p.Asn134Ilefs*2]) TMEM126B variants. We provide functional evidence to support the pathogenicity of these TMEM126B variants, including evidence of founder effects for both variants, and establish defects within this gene as a cause of complex I deficiency in association with either pure myopathy in adulthood or, in one individual, a severe multisystem presentation (chronic renal failure and cardiomyopathy) in infancy. Functional experimentation including viral rescue and complexome profiling of subject cell lines has confirmed TMEM126B as the tenth complex I assembly factor associated with human disease and validates the importance of both genome-wide sequencing and proteomic approaches in characterizing disease-associated genes whose physiological roles have been previously undetermined.
Iron deficiency anemia (IDA) is associated with a number of pathological gastrointestinal conditions other than inflammatory bowel disease, and also with liver disorders. Different factors such as chronic bleeding, malabsorption and inflammation may contribute to IDA. Although patients with symptoms of anemia are frequently referred to gastroenterologists, the approach to diagnosis, selection of treatment and follow-up is not standardized and is often suboptimal. Iron deficiency, even without anemia, can substantially impact physical and cognitive function and reduce quality of life. Therefore, regular iron status assessment and awareness of the clinical consequences of impaired iron status are critical. While the range of options for treatment of IDA is increasing due to the availability of effective and well-tolerated parenteral iron preparations, a comprehensive overview of IDA and its therapy in patients with gastrointestinal conditions is currently lacking. Furthermore, definitions and assessment of iron status lack harmonization and there is a paucity of expert guidelines on this topic. This review summarizes current thinking concerning IDA as a common co-morbidity in specific gastrointestinal and liver disorders, and thus encourages a more unified treatment approach to anemia and iron deficiency, while offering gastroenterologists guidance on treatment options for IDA in everyday clinical practice.
Aims: History of bleeding strongly influences decisions for anticoagulation in atrial fibrillation (AF). We analyzed outcomes in relation to history of bleeding and randomization in ARISTOTLE trial patients.
Methods and results: The on-treatment safety population included 18,140 patients receiving at least one dose of study drug (apixaban or warfarin). Centrally adjudicated outcomes in relation to bleeding history were analyzed using a Cox proportional hazards model adjusted for randomized treatment and established risk factors. Efficacy end points were analyzed in the randomized (intention-to-treat) population. A bleeding history was reported at baseline in 3,033 patients (16.7%), who were more often male, with a history of prior stroke/transient ischemic attack/systemic embolism and diabetes; higher CHADS2 scores, age, and body weight; and lower creatinine clearance and mean systolic blood pressure. Major (but not intracranial) bleeding occurred more frequently in patients with versus without a history of bleeding (adjusted hazard ratio 1.35, 95% CI 1.14–1.61). There were no significant interactions between bleeding history and treatment for stroke/systemic embolism, hemorrhagic stroke, death, or major bleeding, with fewer events under apixaban than under warfarin for all of these outcomes independent of the presence or absence of a bleeding history.
Conclusion: In patients with AF in a randomized clinical trial of oral anticoagulants, a history of bleeding was associated with several risk factors for stroke and portended a higher risk of major (but not intracranial) bleeding during anticoagulation. However, the beneficial effects of apixaban over warfarin for stroke, hemorrhagic stroke, death, and major bleeding remained consistent regardless of bleeding history.