Cnestus mutilatus (Blandford) (Coleoptera: Curculionidae: Scolytinae) is reported from Pennsylvania for the first time, a new state record. Specimens were collected using baited Lindgren funnels as early as 2013. Within Pennsylvania, C. mutilatus is now reported from Berks, Bucks, Lehigh, Montgomery, and York Counties.
Five new species of Bakerius Bondar (Hemiptera: Aleyrodidae: Aleurodicinae) are described and illustrated from the Americas and Vietnam based on the adult, nymph, and pupal stages: Bakerius asiaticus, Bakerius colombianus, Bakerius hondurensis, Bakerius leei, and Bakerius peruvianus. The following six species are redescribed: Bakerius attenuatus Bondar 1923, Bakerius calmoni Bondar 1928, Bakerius marmoratus (Hempel 1923), Bakerius phrygilanthi Bondar 1923, Bakerius sanguineus Bondar 1928, and Bakerius sublatus Bondar 1928. An identification key to the New World genera of the subfamily Aleurodicinae and a key to the adults and the puparia of Bakerius species are provided.
A recent decision of the Bundesgerichtshof on the requirements for the notification under § 20 AktG of the acquisition of a shareholding gives occasion to reflect on the legal consequences of a breach of notification duties by indirectly participating shareholders.
The Bundesgerichtshof, without addressing dissenting views, confirmed the prevailing opinion that, where a controlling undertaking breaches a notification duty, the legal consequence of the loss of rights also strikes the directly participating subsidiary, even if the latter has duly fulfilled its own notification duty. With regard to the (temporary) loss of dividend claims, which was at issue in the case decided by the BGH, the decisive substantive consideration is likely to be that the controlling undertaking would otherwise retain the indirect benefits of the profit distribution even where it knew, or ought to have known, of its own breach of the notification duty and the resulting temporary lapse of the right to receive profits.
In recent years it has been recognized that a quantum field theory (QFT) called quantum chromodynamics (QCD) is the correct theory of the strong interactions. QCD successfully describes the strong interactions that bind quarks into nucleons and nucleons into atomic nuclei. However, the theoretical description of many strong-interaction phenomena is difficult because of the strong coupling at low energies. Heavy-ion collision experiments are one possible way to study the characteristic phenomena and properties of QCD matter. In such experiments, heavy (i.e. large) atomic nuclei are collided with each other, for example gold (at RHIC) or lead (at the CERN LHC), at an ultrarelativistic centre-of-mass energy √s. In this way it is possible to produce a large amount of matter at high energy density. The goal of heavy-ion collisions is to create and characterize a macroscopic phase of free quarks and gluons in local thermal equilibrium. Such a state of matter can provide new information about the QCD phase diagram and the QCD phase transition. It is believed that such a transition took place when the matter of the early universe converted from a plasma of quarks and gluons (QGP) into a gas of hadrons...
The elliptic flow of heavy-flavour decay electrons is measured at midrapidity |eta| < 0.8 in three centrality classes (0-10%, 10-20% and 20-40%) of Pb-Pb collisions at sqrt(sNN) = 2.76 TeV with ALICE at the LHC. The collective motion of the particles inside the medium created in heavy-ion collisions can be analyzed by a Fourier decomposition of the azimuthal anisotropic particle distribution with respect to the event plane. Elliptic flow is the component of the collective motion characterized by the second harmonic moment of this decomposition. It is a direct consequence of the initial geometry of the collision, which is translated into a particle-number anisotropy by the strong interactions inside the medium. The amount of elliptic flow of low-momentum heavy quarks is related to their thermalization with the medium, while high-momentum heavy quarks provide a way to assess the path-length dependence of the energy loss induced by the interaction with the medium.
The heavy-quark elliptic flow is measured using a three-step procedure.
First, the v2 coefficient of the inclusive electrons is measured using the event-plane and scalar-product methods. The electron background from light flavours and direct photons is then simulated by calculating the decay kinematics of the electron sources, which are initialised with their respective measured spectra. The final result of this work is obtained by subtracting the background from the inclusive measurement. A significant elliptic flow is observed after this subtraction. Its value decreases from low to intermediate pT and from semi-central to central collisions.
The results are described by model calculations with significant elastic interactions of the heavy quarks with the expanding strongly-interacting medium.
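The event-plane method mentioned above can be illustrated with a minimal numerical sketch (not from this analysis; the v2 value, reaction-plane angle and multiplicity are invented for illustration): azimuthal angles are drawn from dN/dφ ∝ 1 + 2 v2 cos(2(φ − Ψ)), the second-harmonic event plane Ψ2 is estimated from the Q-vector, and v2 is recovered as the mean of cos 2(φ − Ψ2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event: angles phi sampled from dN/dphi ∝ 1 + 2*v2_true*cos(2*(phi - psi))
# by accept-reject. v2_true, psi and the multiplicity n are illustrative values.
v2_true, psi, n = 0.08, 0.3, 200_000
phi = np.empty(0)
while phi.size < n:
    cand = rng.uniform(0.0, 2.0 * np.pi, n)
    accept = rng.uniform(0.0, 1.0 + 2.0 * v2_true, n) \
        < 1.0 + 2.0 * v2_true * np.cos(2.0 * (cand - psi))
    phi = np.concatenate([phi, cand[accept]])
phi = phi[:n]

# Event-plane angle Psi_2 from the second-harmonic Q-vector...
qx, qy = np.cos(2.0 * phi).sum(), np.sin(2.0 * phi).sum()
psi_2 = 0.5 * np.arctan2(qy, qx)

# ...then v2 as the mean second-harmonic modulation relative to that plane.
v2_obs = np.cos(2.0 * (phi - psi_2)).mean()
print(f"estimated v2 = {v2_obs:.3f} (input {v2_true})")
```

In a real analysis the estimate is additionally corrected for the finite event-plane resolution; with a single high-multiplicity toy event that correction is negligible.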
At sufficiently high temperatures and baryon densities, nuclear matter is expected to undergo a transition into the Quark-Gluon Plasma (QGP), consisting of deconfined quarks and gluons and accompanied by chiral symmetry restoration. Signals of these two fundamental characteristics of quantum chromodynamics (QCD) can be studied in ultra-relativistic heavy-ion collisions, which produce a relatively large volume of matter at energy and nucleon densities like those of the early universe. Dileptons are unique bulk-penetrating probes for this purpose, since they traverse the surrounding medium with negligible interaction and are created throughout the entire evolution of the initially created fireball. A multitude of experiments at SIS18, SPS and RHIC have taken on the challenging task of measuring these rare probes in a heavy-ion environment. NA60's results from high-quality dimuon measurements have identified the broadened ρ spectral function as the favored scenario to explain the low-mass dilepton excess, and partonic sources as dominant at intermediate dilepton masses.
Enabled by the addition of a TOF detector system in 2010, the first phase of the Beam Energy Scan (BES-I) at RHIC allows STAR to conduct an unprecedented energy-dependent study of dielectron production within a homogeneous experimental environment, and hence to close the wide gap in the QCD phase diagram between SPS and top RHIC energies. This thesis concentrates on understanding the low-mass region (LMR) enhancement with regard to its invariant-mass, transverse-momentum and energy dependence. It studies dielectron production in Au+Au collisions at beam energies of 19.6, 27, 39, and 62.4 GeV with sufficient statistics. In conjunction with the published STAR results at top RHIC energy, this thesis presents the first comprehensive energy-dependent study of dielectron production.
This includes invariant-mass and transverse-momentum spectra for the four beam energies, measured in 0-80% minimum-bias Au+Au collisions with high statistics up to 3.5 GeV/c² and 2.2 GeV/c, respectively. Their comparison with cocktail simulations of hadronic sources reveals a sizeable and steadily increasing excess yield in the LMR at all beam energies. The scenario of broadened in-medium ρ spectral functions proves not only to serve well as the dominant underlying source but also to be universal in nature, since it quantitatively and qualitatively explains the LMR enhancements measured over the wide range from SPS to top RHIC energies. It shows that most of the enhancement is governed by interactions of the ρ meson with thermal resonance excitations in the late(r)-stage hot and dense hadronic phase. This conclusion is supported by the energy-dependent measurement of integrated LMR excess yields and enhancement factors. The former do not exhibit a strong dependence on beam energy, as expected from the approximately constant total baryon density above 20 GeV, and the latter agree with the CERES measurement at SPS energy. The consistency in excess yields and the agreement with model calculations over the wide RHIC energy regime make a strong case for LMR enhancements on the order of a factor of 2-3.
The extent of the results presented here enables a more solid discussion of their relation to chiral symmetry restoration from a theoretical point of view. High-statistics measurements at BES-II hold the promise of confirming these conclusions, along with the LMR enhancement's relation to total baryon density with decreasing beam energy.
Identification of the vertebrate-specific protein C7orf43 as a novel TRAPPII complex subunit
(2016)
The transport protein particle (TRAPP) complexes are a family of protein complexes, each consisting of several subunits. In the present work, the protein C7orf43 was identified as a new potential TRAPPII subunit which, like the two other TRAPPII-specific components TRAPPC9 and TRAPPC10, is required both for the maintenance of the ERGIC, the Golgi apparatus and COPI vesicles and for the ER-to-Golgi transport pathway.
The following remarks are devoted to the aesthetic-literary techniques Müller used to heighten sensory experience and to school his readers in the ethnological perspective. Alongside the direct address to the reader, dictated by the letter form, broadly applied and already examined by scholarship, with which he made what he had seen present to the reader, Müller inserted original documents that guaranteed authenticity and vividness and conveyed the appealing character of the foreign. Müller reinforced the vividness of his account by repeatedly drawing on the well-known depictions of Roman everyday life by the engraver and painter Bartolomeo Pinelli (1781-1835), which circulated widely even outside Italy. I will first direct attention to their tense relationship with the folk scenes described and the original documents collected, before presenting, with the 'Scenen aus Rom' (1825) of the late-Enlightenment author Christian August Vulpius, an entirely different technique that likewise aims to bring the reader as close as possible to Italian popular life.
The forest is one of the most important landscape elements in literature's stock of settings, with a correspondingly wide range of meanings: the Christian-occidental tradition sees it as a place of darkness that must be illuminated by divine light. In the famous opening verses of the 'Divina Commedia' it appears as a rough, wild and dark space; Dante's "selva oscura" is a site of bewilderment and a sign of earthly sinfulness. On account of its transcendent immeasurability beyond purely geographical dimensions, Gaston Bachelard concludes in his 'Poetics of Space': "The forest is a state of the soul". As a metaphor for the human soul, the forest is popular not only in Romanticism; as an expression of the collective unconscious it also interests psychoanalysis, for instance in the form of C. G. Jung's theory of archetypes. In his study 'Masse und Macht' ('Crowds and Power'), Elias Canetti counts the forest among the "crowd symbols", with decidedly unpleasant connotations for the individual. Since every single trunk of which it is composed stands firmly rooted and immovable in the earth, "the forest has become the symbol of the army: an army in formation, an army that under no circumstances flees, that lets itself be cut to pieces to the last man before it yields a foot of ground". In diametrical opposition to this scenario of the individual threatened by the mass of trees banded together, however, the forest equally appears as a place of freedom, as an image of primordial nature beyond the constraints of human civilization, as a fertile wilderness in which the individual can develop freely. Two texts stand in this pictorial tradition, written roughly 100 years apart and on two different continents: Henry David Thoreau's 'Walden' (1854) and Ernst Jünger's 'Der Waldgang' (1951).
If these two essays, so different in their historical preconditions, are compared with one another in what follows, it is because in the works of Jünger and Thoreau the familiar talk of the freedom of the forest is combined with a decidedly individual-anarchist programme that lends this spatial symbolism a new dimension of meaning, one that has so far remained underexposed in the relevant topological research.
The analogies between Barthes's theory and the concepts of German Romanticism have certainly been registered by scholarship, but observations in this direction have so far hardly led beyond naming the desideratum. This is surprising, since, as is well known, a multitude of actualizing readings of Romanticism against the foil of postmodern aesthetics and epistemology have been published in recent decades. The aim of this contribution is accordingly to help remedy this deficit. In what follows, approaches are to be developed for a systematic treatment of the manifold connections that can be shown between the texts of German Romanticism and Barthes's works. In methodological terms, however, such a comparison moves on far from easy terrain. This results first from the considerable volume of the theoretical output of Roland Barthes as well as of the Romantic authors, which makes a strict reduction of the textual basis necessary. Moreover, these outputs are each encoded in rigorously fragmented and hermetic modes of writing that deliberately seek out paradox and conceptual ambiguity. This multiplies the number of possible perspectives and raises the question of which interpretive approaches to choose. To escape this dilemma, the following deliberately draws on a single account of Romantic theory distinguished by its particular concision and above-average analytical level: Walter Benjamin's dissertation on the concept of art criticism in German Romanticism.
The present contribution is an attempt to analyse the intercultural dimension of the modern metropolis from its reverse side. Does the 'global city' really represent, in all its areas, that open space of intercultural combinatorics as which it presents itself on its display side of production and consumption? Or does it perpetuate mechanisms of exclusion and segregation that remain effective beneath the surface? The analysis proceeds from two premises. First, it presupposes that the handling of refuse is understood as a cultural, i.e. symbolic, practice. It conceives of refuse not as something superfluous but as sign-refuse that can be read, and thus follows on from approaches in cultural and social anthropology to the study of waste. Second, it is not the symbolic practices of waste treatment themselves that are examined, but their literary representations. This is done on the assumption that modern literature has developed a special sensorium for the symbolic order that becomes apparent in refuse.
"La cosmogonie est un genre littéraire d’une remarquable persistance et d’une étonnante variété, l’un des genres les plus antiques qui soient." Mit diesen Worten umreißt Paul Valéry in einem Essay über Edgar Allan Poes Prosagedicht "Eureka" die Bedeutung des literarischen Genres der Kosmogonie, der mythischen Erzählung der Weltentstehung, und hebt dabei deren anhaltende, bis in die Moderne reichende Wirkungsgeschichte hervor. Man wird Valérys Bemerkung ohne weiteres zustimmen können: Ist doch die Idee des Kosmos nicht nur eine alte, bis in die Antike zurückreichende Denkfigur, die ihrer Herkunft nach mythischen und religiösen Vorstellungen entstammt. Sie ist darüber hinaus eine Figur, die auch im neuzeitlichen Denken, etwa in verschiedenen philosophischen und literarischen Richtungen der Renaissance, eine erneute Konjunktur erfährt und deren Nachwirkungen sich bis in die Moderne verfolgen lassen. Dabei mag man nicht nur an die Rekurrenz des Kosmischen in moderner Esoterik, New Age oder Fantasy-Literatur denken, sondern mehr noch an neuere philosophische und epistemologische Ansätze, die in entscheidenden Hinsichten an Momente des alten Kosmos-Denkens anschließen und letzteres unter wenngleich veränderten Vorzeichen wieder aufnehmen. Doch wie kommt es zu dieser eigentümlichen Persistenz des Kosmos-Begriffs?
Purpose: High precision radiosurgery demands comprehensive delivery-quality-assurance techniques. The use of a liquid-filled ion-chamber-array for robotic-radiosurgery delivery-quality-assurance was investigated and validated using several test scenarios and routine patient plans.
Methods and material: Preliminary evaluation consisted of beam profile validation and analysis of source–detector-distance and beam-incidence-angle response dependence. The delivery-quality-assurance analysis is performed in four steps: (1) Array-to-plan registration, (2) Evaluation with standard Gamma-Index criteria (local-dose-difference ⩽ 2%, distance-to-agreement ⩽ 2 mm, pass-rate ⩾ 90%), (3) Dose profile alignment and dose distribution shift until maximum pass-rate is found, and (4) Final evaluation with 1 mm distance-to-agreement criterion. Test scenarios consisted of intended phantom misalignments, dose miscalibrations, and undelivered Monitor Units. Preliminary method validation was performed on 55 clinical plans in five institutions.
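The Gamma-Index evaluation in step (2) can be sketched numerically. The following is a minimal 1-D illustration with an invented Gaussian dose profile, using the global-normalisation variant for simplicity (the plans above use local dose difference): each reference point passes if some evaluated point lies within the combined dose-difference/distance-to-agreement ellipse, i.e. gamma ⩽ 1.

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """Gamma index for 1-D dose profiles (global normalisation).

    dd  : dose-difference criterion as a fraction of the reference dose maximum
    dta : distance-to-agreement criterion in mm
    Returns one gamma value per reference point; a point passes if gamma <= 1.
    """
    norm = dd * ref_dose.max()
    gam = np.empty(len(ref_pos))
    for i, (r, d) in enumerate(zip(ref_pos, ref_dose)):
        dist_term = ((eval_pos - r) / dta) ** 2
        dose_term = ((eval_dose - d) / norm) ** 2
        gam[i] = np.sqrt((dist_term + dose_term).min())
    return gam

# Toy check: a Gaussian profile misaligned by 0.5 mm should pass 2%/2 mm easily.
x = np.linspace(-20.0, 20.0, 401)                 # positions in mm, 0.1 mm grid
ref = 2.0 * np.exp(-x**2 / (2 * 5.0**2))          # reference dose (arbitrary units)
ev = 2.0 * np.exp(-(x - 0.5)**2 / (2 * 5.0**2))   # "measured", shifted 0.5 mm
pass_rate = (gamma_1d(x, ref, x, ev) <= 1.0).mean()
print(f"pass rate: {100 * pass_rate:.1f}%")
```

Real delivery-quality-assurance tools evaluate this in 2-D or 3-D on interpolated grids; the search structure is the same.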
Results: The 1000SRS profile measurements showed sufficient agreement with a microDiamond detector for all collimator sizes. The relative response changes can be up to 2.2% per 10 cm change in source–detector-distance, but remain within 1% for the clinically relevant source–detector-distance range. Planned and measured dose under different beam-incidence-angles showed deviations below 1% for angles between 0° and 80°. Small intended errors were detected by the 1 mm distance-to-agreement criterion, while the 2 mm criterion failed to reveal some of these deviations. All analyzed delivery-quality-assurance clinical patient plans were within our tight tolerance criteria.
Conclusion: We demonstrated that a high-resolution liquid-filled ion-chamber-array can be suitable for robotic radiosurgery delivery-quality-assurance and that small errors can be detected with tight distance-to-agreement criterion. Further improvement may come from beam specific correction for incidence angle and source–detector-distance response.
Chronic hepatitis C is a major cause of cirrhosis and hepatocellular carcinoma and a leading indication for liver transplantation. The development of direct-acting antiviral agents led to (pegylated) interferon-alfa-free antiviral therapy regimens with a remarkable increase in sustained virologic response (SVR) rates and opened therapeutic options for patients with advanced cirrhosis and liver graft recipients. This concise review gives an overview of the most current prospective trials and cohort analyses on the treatment of patients with liver cirrhosis and liver graft recipients. In patients with compensated cirrhosis, Child-Pugh-Turcotte (CTP) class A, all approved agents are safe and SVR rates do not differ significantly from those of patients without cirrhosis in general. In patients with decompensated cirrhosis, CTP class B or C, daclatasvir, ledipasvir, velpatasvir, and sofosbuvir are approved, and SVR rates higher than 90% can be achieved. Especially for patients with a model for end-stage liver disease score higher than 15, and therefore eligible for liver transplantation, data are scarce. Reported SVR rates in patients with cirrhosis CTP class C are lower than in patients with less severe liver disease. In liver transplant recipients with a maximum of CTP class A, SVR rates are comparable to those of patients without liver transplantation. Patients with decompensated graft cirrhosis should be treated on an individual basis.
Global investment in biomedical research has grown significantly over the last decades, reaching approximately a quarter of a trillion US dollars in 2010. However, not all of this investment is distributed evenly by gender. It follows, arguably, that scarce research resources may not be optimally invested (by either not supporting the best science or by failing to investigate topics that benefit women and men equitably). Women across the world tend to be significantly underrepresented in research both as researchers and research participants, receive less research funding, and appear less frequently than men as authors on research publications. There is also some evidence that women are relatively disadvantaged as the beneficiaries of research, in terms of its health, societal and economic impacts. Historical gender biases may have created a path dependency that means that the research system and the impacts of research are biased towards male researchers and male beneficiaries, making it inherently difficult (though not impossible) to eliminate gender bias. In this commentary, we – a group of scholars and practitioners from Africa, America, Asia and Europe – argue that gender-sensitive research impact assessment could become a force for good in moving science policy and practice towards gender equity. Research impact assessment is the multidisciplinary field of scientific inquiry that examines the research process to maximise scientific, societal and economic returns on investment in research. It encompasses many theoretical and methodological approaches that can be used to investigate gender bias and recommend actions for change to maximise research impact. We offer a set of recommendations to research funders, research institutions and research evaluators who conduct impact assessment on how to include and strengthen analysis of gender equity in research impact assessment and issue a global call for action.
Comparative literature is today solidly established in French university curricula. Since the turn of the 2000s it has undertaken a series of assessments of and reflections on its history, the conditions of a renewal that would allow it to confront new contemporary challenges, notably those of the digital age and of globalization. It is in fact difficult to understand its place in French teaching and research without recalling, even very briefly, the history of its constitution as a critical practice and as a university discipline, and without taking into account the institutional frameworks within which it operates. This presentation will therefore begin there, before sketching a survey of the current state of the field and of perspectives for the future.
Systemic lupus erythematosus (SLE) is a chronic disease characterized by progressive tissue damage. In recent decades, novel treatments have greatly extended the life span of SLE patients. This creates a high demand for identifying the overarching symptoms associated with SLE and developing therapies that improve their quality of life under chronic care. We hypothesized that SLE patients would present dysphonic symptoms. Given that voice disorders can reduce quality of life, identifying a potential SLE-related dysphonia could be relevant for the appraisal and management of this disease. We measured objective vocal parameters and perceived vocal quality with the GRBAS (Grade, Roughness, Breathiness, Asthenia, Strain) scale in SLE patients and compared them to matched healthy controls. SLE patients also filled out a questionnaire reporting perceived vocal deficits. SLE patients had significantly lower vocal intensity and harmonics-to-noise ratio, as well as increased jitter and shimmer. All subjective parameters of the GRBAS scale were significantly abnormal in SLE patients. Additionally, the vast majority of SLE patients (29/36) reported at least one perceived vocal deficit, the most prevalent being vocal fatigue (19/36) and hoarseness (17/36). Self-reported voice deficits were highly correlated with altered GRBAS scores. Additionally, tissue damage scores in different organ systems correlated with dysphonic symptoms, suggesting that some features of SLE-related dysphonia are due to tissue damage. Our results show that a large fraction of SLE patients suffer from perceivable dysphonia and may benefit from voice therapy in order to improve quality of life.
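For readers unfamiliar with the acoustic measures above, a minimal sketch on synthetic data (the perturbation levels are invented; clinical tools such as Praat extract periods from the recorded waveform with more refined methods): local jitter is the mean absolute difference between consecutive glottal periods relative to the mean period, and local shimmer is the analogous measure on cycle peak amplitudes.

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive periods, relative to the mean period."""
    p = np.asarray(periods, float)
    return np.abs(np.diff(p)).mean() / p.mean()

def local_shimmer(amplitudes):
    """Same measure applied to consecutive cycle peak amplitudes."""
    a = np.asarray(amplitudes, float)
    return np.abs(np.diff(a)).mean() / a.mean()

rng = np.random.default_rng(1)
# Synthetic sustained phonation at ~200 Hz: 5 ms base period with 1% period
# perturbation and 3% amplitude perturbation (illustrative, invented levels).
periods = 5.0 * (1.0 + 0.01 * rng.standard_normal(200))   # ms
amps = 1.0 + 0.03 * rng.standard_normal(200)              # arbitrary units
jit, shim = local_jitter(periods), local_shimmer(amps)
print(f"jitter  = {100 * jit:.2f}%")
print(f"shimmer = {100 * shim:.2f}%")
```

Higher cycle-to-cycle variability, as observed in the SLE patients, raises both values.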
Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater–surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyperresolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.
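The multi-criteria validation and calibration proposed above typically rely on efficiency metrics comparing simulated with observed output variables such as river discharge. A minimal sketch of two widely used metrics (the discharge values are invented toy numbers, not GHM output): the Nash–Sutcliffe efficiency and the Kling–Gupta efficiency.

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def kge(obs, sim):
    """Kling–Gupta efficiency combining correlation, bias and variability terms."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]    # linear correlation
    beta = sim.mean() / obs.mean()     # bias ratio
    alpha = sim.std() / obs.std()      # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

# Toy monthly discharge (m^3/s); the "simulation" underestimates variability.
obs = np.array([12.0, 18.0, 35.0, 22.0, 15.0, 10.0])
sim = 0.9 * obs + 1.0
nse_val, kge_val = nse(obs, sim), kge(obs, sim)
print(f"NSE = {nse_val:.3f}, KGE = {kge_val:.3f}")
```

Multi-criteria calibration evaluates several such metrics (and several variables, e.g. discharge and total water storage anomalies) simultaneously rather than optimizing a single score.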
On 10 May 2014, Elisabeth Frenzel, née Lüttig-Niese, died at the advanced age of 99 at her last place of residence in Berlin. As a noteworthy obituary by David Ensikat in the Berlin 'Tagesspiegel' reports, 15 people attended her funeral. The pastor placed the service under the theme of a difficult inheritance, of guilt, and of the possibilities of forgiveness. That was perhaps not in the spirit of the deceased, but in view of the deceased's life story it was certainly a very good way to take leave of Elisabeth Frenzel.
Obituary for Eberhard Lämmert
(2015)
The Germanist Eberhard Lämmert called the narrator Thomas Mann 'buchtenreich' (rich in bays). His own oeuvre, however, also deserves this description: it consists, of course, of countless books and essays in literary studies, of prefaces and afterwords, of official speeches, public statements, appeals, expert opinions, and so on; the topics are manifold, the relations between them complex.
The world in a 'Zeitschrift'
(2015)
The relaunching of the Jahrbuch 'Komparatistik' in 2015 takes place at a time of ferment in comparative literary studies, as a discipline long focused primarily on Western Europe seeks to reconsider its position in a global landscape, and in the process to rethink the contours of European literature itself. Here I would like to discuss one new manifestation of this rethinking: the founding of the 'Journal of World Literature', which will be debuting in 2016. Published in Amsterdam by Brill, with its managing editors located in Leuven and in Göttingen, the 'JWL' represents a European initiative in comparative and world literary studies, and the journal has a global presence as well. It is overseen by an international board of editors (myself among them), and it has an association with the Institute for World Literature, a Harvard-based program supported by five dozen institutions around the world, which will be responsible for one of its quarterly issues each year. Global in outlook and outreach, the 'JWL' can equally be thought of as carrying on an originally German project: to embody the potentially vast field of comparative and world literature within the pages available in a scholarly journal. To this end, very different approaches were tried in the last quarter of the nineteenth century by two foundational journals: the 'Acta Comparationis Litterarum Universarum', published in Cluj from 1877-88 by the Transylvanian scholars Hugo Meltzl and Sámuel Brassai, and the 'Zeitschrift für vergleichende Litteraturgeschichte', founded in 1886, published in Berlin under the editorship of Max Koch. Probably the very first journals in the field – the French 'Revue de littérature comparée', for example, dates only from 1921 – these pioneering journals divided up the literary territory in very different ways. 
Meltzl and Brassai’s 'Acta' reflected an idealistic globalism grounded in a radical multilingualism, whereas Koch opted for a more pragmatic but markedly nationalistic conception of the field. The new 'Journal of World Literature' will need to draw on the strengths of each approach even as its editors seek to avoid the pitfalls of both.
The venture capital industry is relevant to entrepreneurs looking for money to finance an innovative project, to investors seeking to make money by investing in entrepreneurial firms, and to governments trying to promote innovation and entrepreneurship. Venture capital investment can facilitate innovation and thus strengthen the economy.
Venture capital has enabled the U.S. to support its entrepreneurial talent by turning ideas into world-famous products and services, building companies from mere business plans into mature and powerful organizations. Three of the five largest U.S. public companies by market capitalization – Apple, Google and Microsoft – received most of their early external funding from venture capital. Despite its ups and downs, venture capital investment in the U.S. expanded from virtually zero in the mid-1970s to $8 billion in 1995 and $49.3 billion in 2014. Venture-backed companies have been a prime driver of economic growth in the U.S. Across the Pacific, venture capital investment in China has grown out of the transition from a centrally planned economy to a free-market economy over the past three decades, becoming an important pillar of China’s innovation system. In 2015, a total of 2,824 venture capital investment deals provided an aggregate investment of $36.9 billion. Venture capital has long been a hot topic in China’s capital market, particularly since the government decided to boost “mass entrepreneurship and innovation” in 2014.
In the U.S., most venture capital firms are organized as limited partnerships, with the venture capitalists as general partners and the investors as limited partners. Studies have shown that investors choose to invest through venture funds as intermediaries rather than placing their money directly with entrepreneurs: because of the high-risk nature of the entrepreneur's business, it is hard for entrepreneurs to obtain bank loans or direct equity investments. Conflicts may also arise, however, between the venture capitalists acting as agents and the investors as principals. This agency problem may be particularly severe, since venture capital provides money for businesses with high potential and high risk, although the limited partnership has certain merits and remains the business form most commonly chosen for venture capital funds. At the same time, the fact that general partners have total control of the partnership business makes it necessary to address the agency problem through legal rules, contracts and other mechanisms.
Meanwhile, despite the rapid growth of venture capital investment in China, little attention has been paid to the organizational form of venture capital funds. In contrast to the U.S., most Chinese venture funds have been structured as corporations. One may argue that this was for legislative reasons: the limited partnership was not recognized by Chinese law when venture capital first appeared in China. However, even after a chapter governing limited partnerships was added to the Partnership Enterprise Law (PEL) in 2007, most venture funds kept their corporate form, while those opting for the limited partnership have encountered difficulties: the limited partners have trouble trusting the general partners with their money and therefore interfere with the operation of the partnership business, which may lead to dissolution of the partnership.
This thesis applies transaction cost theory to explain the benefits and costs of choosing the limited partnership as a business form in the special context of venture capital investment, showing that the potential agency conflict between the general partners and the limited partners has been mitigated by legal and other mechanisms in the United States, and that U.S. investors could therefore exploit the merits of the limited partnership form in venture capital financing. In China, investors have found different answers to the agency problem. As in the U.S., Chinese partners employ contract terms to deal with agency problems, and legislators enact laws aimed at regulating the limited partnership form; some legislation was even transplanted from the U.S., such as the part of the PEL that governs limited partnerships. It seems, then, that similar mechanisms for dealing with agency problems also exist in China. However, given the unique history of the development of China's innovation system and venture capital market, the effectiveness of these constraints is questionable. Chinese venture capital investors have therefore characteristically behaved differently from U.S. investors. Rather than relying on these questionable mechanisms, Chinese investors as well as the Chinese government have developed different approaches to addressing these agency problems.
During the 1970s, industrial countries, including the US and continental Europe, experienced a combination of slow productivity growth and high unemployment. Subsequent research has shown that the standard model of unemployment actually gives counterfactual predictions. Motivated by the observation that the 1970s were also characterized by high and rising inflation, Tesfaselassie and Wolters examine the effect of growth on unemployment in the presence of nominal price rigidity.
The authors demonstrate that the effect of growth on unemployment may be positive or negative. Faster growth leads to lower unemployment if the rate of inflation is high enough. There is a threshold level of inflation below which faster growth leads to higher unemployment and above which faster growth leads to lower unemployment. The threshold level in turn depends on labor market characteristics, such as hiring efficiency, the job destruction rate, workers' relative bargaining power and the opportunity cost of work.
To broaden the scope of monetary policy, cash abolishment is often suggested as a means of breaking through the zero lower bound. However, practically nothing is said about the welfare costs of such a proposal. Rösl, Seitz and Tödter argue that the welfare costs of bypassing the zero lower bound can be analyzed analytically and empirically by assuming negative interest rates on cash holdings. They gauge the welfare effects of abolishing cash both for the euro area and for Germany.
Their findings suggest that the welfare losses of negative interest rates incurred by money holders are large, notably if implemented in the current low interest rate environment. Imposing a negative interest rate of 3 percentage points on cash holdings and reducing the interest on all assets included in M3 creates a deadweight loss of €62bn for the euro area and €18bn for Germany. The authors therefore argue that abolishing cash or imposing negative interest rates on cash in order to break through the zero lower bound at any price can hardly be a meaningful policy goal.
The current debate on monetary and fiscal policy is heavily influenced by estimates of the equilibrium real interest rate. Beyer and Wieland re-estimate the U.S. equilibrium rate using the methodology of Laubach and Williams along with further modifications. They provide new estimates for the United States, the euro area and Germany and subject them to sensitivity tests. Beyer and Wieland conclude that, owing to the great uncertainty and sensitivity of the estimates, the observed decline is not a reliable indicator of a need for expansionary monetary and fiscal policy. If such estimates are nonetheless employed to determine the appropriate monetary policy stance, they are best used together with the consistent estimate of the level of potential output.
I propose a dynamic stochastic general equilibrium model in which the leverage of borrowers as well as banks and housing finance play a crucial role in the model dynamics. The model is used to evaluate the relative effectiveness of a policy to inject capital into banks versus a policy to relieve households of mortgage debt. In normal times, when the economy is near the steady state and policy rates are set according to a Taylor-type rule, capital injections to banks are more effective in stimulating the economy in the long run. However, in the middle of a housing debt crisis, when households are highly leveraged, the short-run output effects of the debt relief are more substantial. When the zero lower bound (ZLB) is additionally considered, the debt relief policy can be much more powerful in boosting the economy both in the short run and in the long run. Moreover, the output effects of the debt relief become increasingly larger the longer the ZLB is binding.
We use the Italian Survey of Household Income and Wealth, a rather unique dataset with a long time dimension of panel information on consumption, income and wealth, to structurally estimate a buffer-stock saving model. We exploit the information contained in the joint dynamics of income, consumption and wealth to quantify the degree of insurance against income risk. The estimated model implies that Italian households can insure between 89 and 95 percent of a transitory and between 7 and 9 percent of a permanent income shock. Compared to existing empirical estimates for the same dataset, our findings suggest that Italian households do not have access to significant insurance beyond self-insurance.
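The degree of insurance against income shocks described above is commonly summarized by a Blundell–Preston–Pistaferri-style insurance coefficient: the share of a shock's variance that does not pass through to consumption growth. The sketch below is a generic illustration of that idea, not the authors' structural estimator; all variable names are hypothetical.

```python
def insurance_coefficient(dc, shock):
    """BPP-style insurance coefficient: phi = 1 - cov(dc, shock) / var(shock).

    dc: consumption growth per household-period
    shock: the income shock (transitory or permanent) in the same periods
    phi = 1 means the shock does not move consumption at all (full insurance);
    phi = 0 means the shock passes through one-for-one (no insurance).
    """
    n = len(dc)
    m_dc = sum(dc) / n
    m_sh = sum(shock) / n
    cov = sum((a - m_dc) * (b - m_sh) for a, b in zip(dc, shock)) / n
    var = sum((b - m_sh) ** 2 for b in shock) / n
    return 1.0 - cov / var
```

Under this convention, the paper's estimates correspond to phi of roughly 0.89–0.95 for transitory and 0.07–0.09 for permanent shocks.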
Immunopathogenic mechanisms of autoimmune hepatitis: how much do we know from animal models?
(2016)
Autoimmune hepatitis (AIH) is characterized by a progressive destruction of the liver parenchyma and a chronic fibrosis. The current treatment of autoimmune hepatitis is still largely dependent on the administration of corticosteroids and cytostatic drugs. For a long time the development of novel therapeutic strategies was hampered by a lack of understanding of the basic immunopathogenic mechanisms of AIH and the absence of valid animal models. However, in the past decade, knowledge from clinical observations in AIH patients and the development of innovative animal models have led to a situation where critical factors driving the disease have been identified and alternative treatments are being evaluated. Here we review the insights into the immunopathogenesis of AIH gained from clinical observation and from animal models.
The Internet is ubiquitous – so ubiquitous that in certain circles it has become en vogue to cut oneself off from it completely from time to time. Fitting the pre-Easter season, one might call this Internet fasting. But what happens when the Internet is simply switched off entirely, for everyone? What is primarily an academic question for us is a reality in Cameroon, India, Pakistan and many other countries. These examples illustrate not only how Internet shutdowns serve as an instrument of social and political control, they also show their dramatic consequences. The topic should interest us here as well...
Curcumin, the active constituent of Curcuma longa L. (family Zingiberaceae), has gained increasing interest because of its anti-cancer, anti-inflammatory, anti-diabetic, and anti-rheumatic properties associated with good tolerability and safety up to very high doses of 12 g. Nanoscaled micellar formulations based on Tween 80 represent a promising strategy to overcome its low oral bioavailability. We therefore aimed to investigate the uptake and transepithelial transport of native curcumin (CUR) vs. a nanoscaled micellar formulation (Sol-CUR) in a Caco-2 cell model. Sol-CUR afforded a higher flux than CUR (39.23 vs. 4.98 μg min−1 cm−2, respectively). This resulted in a higher Papp value of 2.11 × 10−6 cm/s for Sol-CUR compared to a Papp value of 0.56 × 10−6 cm/s for CUR. Accordingly, a nearly 9.5-fold higher amount of curcumin was detected on the basolateral side at the end of the transport experiments after 180 min with Sol-CUR compared to CUR. The determined 3.8-fold improvement in the permeability of curcumin is in agreement with an up to 185-fold increase in the AUC of curcumin observed in humans following the oral administration of the nanoscaled micellar formulation compared to native curcumin. The present study demonstrates that the enhanced oral bioavailability of micellar curcumin formulations is likely a result of enhanced absorption into and increased transport through small intestinal epithelial cells.
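The apparent permeability coefficient (Papp) quoted above is conventionally computed from a transport experiment as Papp = (dQ/dt) / (A · C0). The sketch below shows that standard calculation; the input values in the docstring are illustrative, not the study's experimental parameters.

```python
def apparent_permeability(dq_dt_ug_per_s, area_cm2, c0_ug_per_ml):
    """Papp (cm/s) = (dQ/dt) / (A * C0).

    dq_dt_ug_per_s: steady-state appearance rate on the receiver side (ug/s)
    area_cm2:       area of the cell monolayer / filter insert (cm^2)
    c0_ug_per_ml:   initial donor concentration; ug/mL equals ug/cm^3,
                    so the units reduce cleanly to cm/s.
    """
    return dq_dt_ug_per_s / (area_cm2 * c0_ug_per_ml)
```

For example, with a hypothetical insert area of 1.13 cm² and donor concentration of 50 μg/mL, a receiver appearance rate of 1.0 μg/s would give Papp = 1.0 / (1.13 × 50) cm/s.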
50 years of amino acid hydrophobicity scales: revisiting the capacity for peptide classification
(2016)
Background: Physicochemical properties are frequently analyzed to characterize protein sequences of known and unknown function. The hydrophobicity of amino acids, in particular, is often used for structural prediction or for the detection of membrane-associated or embedded β-sheets and α-helices. For this purpose many scales classifying amino acids according to their physicochemical properties have been defined over the past decades. In parallel, several hydrophobicity parameters have been defined for the calculation of peptide properties. We analyzed the performance of separating sequence pools using 98 hydrophobicity scales and five different hydrophobicity parameters, namely the overall hydrophobicity, the hydrophobic moment for detection of α-helical and β-sheet membrane segments, the alternating hydrophobicity and the exact β-strand score.
Results: Most of the scales are capable of discriminating between transmembrane α-helices and transmembrane β-sheets, but assignment of peptides to pools of soluble peptides of different secondary structures is not achieved at the same quality. The separation capacity, as a measure of the discrimination between different structural elements, is best when using the five different hydrophobicity parameters, but addition of the alternating hydrophobicity does not provide a large benefit. An in silico evolutionary approach shows that scales have limitations in separation capacity, with a maximal threshold of 0.6 in general. We observed that scales derived from the evolutionary approach performed best in separating the different peptide pools when the values for arginine and tyrosine were largely distinct from the value for glutamate. Finally, the separation of secondary structure pools via hydrophobicity can be supported by specific detectable patterns of four amino acids.
Conclusion: It can be assumed that the separation capacity of a given scale depends on the spacing of the hydrophobicity values of certain amino acids. Despite the wealth of hydrophobicity scales, no single scale separates all kinds of secondary structures or distinguishes between soluble and transmembrane peptides, reflecting that properties other than hydrophobicity also affect secondary structure formation. Nevertheless, applying hydrophobicity scales allows distinguishing between peptides with transmembrane α-helices and β-sheets. Furthermore, the overall separation capacity score of 0.6 obtained with the different hydrophobicity parameters can be assisted by pattern searches at the protein sequence level for specific peptides with a length of four amino acids.
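Two of the parameters named above, the overall hydrophobicity and the hydrophobic moment, have standard textbook definitions: the mean residue hydrophobicity, and the Eisenberg-style vector sum of hydrophobicities placed at a fixed angular step around the sequence. The sketch below illustrates those two quantities using the Kyte–Doolittle scale; it is a generic illustration, not the scoring pipeline used in the study.

```python
import math

# Kyte-Doolittle hydrophobicity values (one of the 98 scales analyzed).
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def mean_hydrophobicity(seq, scale=KYTE_DOOLITTLE):
    """Overall hydrophobicity: average scale value over the peptide."""
    return sum(scale[a] for a in seq) / len(seq)

def hydrophobic_moment(seq, angle_deg=100.0, scale=KYTE_DOOLITTLE):
    """Eisenberg hydrophobic moment: magnitude of the vector sum of
    residue hydrophobicities spaced angle_deg apart (100 deg for an
    ideal alpha-helix; larger angles model a beta-strand periodicity)."""
    delta = math.radians(angle_deg)
    sin_sum = sum(scale[a] * math.sin(i * delta) for i, a in enumerate(seq))
    cos_sum = sum(scale[a] * math.cos(i * delta) for i, a in enumerate(seq))
    return math.hypot(sin_sum, cos_sum)
```

An amphipathic helix (hydrophobic residues clustered on one face) yields a large moment relative to its mean hydrophobicity, which is what makes the parameter useful for detecting membrane-associated segments.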
Bears are iconic mammals with a complex evolutionary history. Natural bear hybrids and studies of a few nuclear genes indicate that gene flow among bears may be more common than expected and not limited to polar and brown bears. Here we present a genome analysis of the bear family with representatives of all living species. Phylogenomic analyses of 869 megabase pairs divided into 18,621 genome fragments yielded a well-resolved coalescent species tree despite signals for extensive gene flow across species. However, genome analyses using different statistical methods show that gene flow is not limited to closely related species pairs. Strong ancestral gene flow between the Asiatic black bear and the ancestor of the polar, brown and American black bear explains uncertainties in reconstructing the bear phylogeny. Gene flow across the bear clade may be mediated by intermediate species such as the geographically widespread brown bears, leading to large amounts of phylogenetic conflict. Genome-scale analyses lead to a more complete understanding of complex evolutionary processes. Evidence for extensive inter-specific gene flow, found also in other animal species, necessitates shifting the attention from speciation processes achieving genome-wide reproductive isolation to the selective processes that maintain species divergence in the face of gene flow.
The role of endogenous melatonin in the control of the circadian system under entrained conditions and in the determination of the chronotype is still poorly understood. Mice with deletions in the melatoninergic system (melatonin deficiency or the lack of melatonin receptors, respectively) do not display any obvious defects in either their spontaneous (circadian) or entrained (diurnal) rhythmic behavior. However, there are effects that can be detected by analyzing the periodicity of the locomotor behaviors in some detail. We found that melatonin-deficient mice (C57Bl), as well as melatonin-proficient C3H mice that lack the melatonin receptors (MT) 1 and 2 (C3H MT1,2 KO), reproduce their diurnal locomotor rhythms with significantly less accuracy than mice with an intact melatoninergic system. However, their respective chronotypes remained unaltered. These results show that one function of the endogenous melatoninergic system might be to stabilize internal rhythms under conditions of steady entrainment, while it has no effect on the chronotype.
In its soluble form, the extracellular matrix proteoglycan biglycan triggers the synthesis of the macrophage chemoattractants, chemokine (C-C motif) ligand CCL2 and CCL5 through selective utilization of Toll-like receptors (TLRs) and their adaptor molecules. However, the respective downstream signaling events resulting in biglycan-induced CCL2 and CCL5 production have not yet been defined. Here, we show that biglycan stimulates the production and activation of sphingosine kinase 1 (SphK1) in a TLR4- and Toll/interleukin (IL)-1R domain-containing adaptor inducing interferon (IFN)-β (TRIF)-dependent manner in murine primary macrophages. We provide genetic and pharmacological proof that SphK1 is a crucial downstream mediator of biglycan-triggered CCL2 and CCL5 mRNA and protein expression. This is selectively driven by biglycan/SphK1-dependent phosphorylation of the nuclear factor NF-κB p65 subunit, extracellular signal-regulated kinase (Erk)1/2 and p38 mitogen-activated protein kinases. Importantly, in vivo overexpression of soluble biglycan causes Sphk1-dependent enhancement of renal CCL2 and CCL5 and macrophage recruitment into the kidney. Our findings describe the crosstalk between biglycan- and SphK1-driven extracellular matrix- and lipid-signaling. Thus, SphK1 may represent a new target for therapeutic intervention in biglycan-evoked inflammatory conditions.
Background: In oldest-old patients (>80 years), few trials have shown efficacy of treating hypertension, and those that exist included mostly the healthiest elderly. The resulting lack of knowledge has led to inconsistent guidelines, based mainly on systolic blood pressure (SBP) and cardiovascular disease (CVD) but not on frailty, despite its high prevalence in the oldest-old. This may lead to variation in how general practitioners (GPs) treat hypertension. Our aim was to investigate GPs' treatment variation in the oldest-old across countries and to identify the role of frailty in that decision.
Methods: Using a survey, we compared treatment decisions in cases of oldest-old patients varying in SBP, CVD, and frailty. GPs were asked if they would start antihypertensive treatment in each case. In 2016, we invited GPs in Europe, Brazil, Israel, and New Zealand. We compared the percentage of cases that would be treated per country. A logistic mixed-effects model was used to derive the odds ratio (OR) for frailty with 95% confidence intervals (CI), adjusted for SBP, CVD, and GP characteristics (sex, location and prevalence of oldest-old patients per GP office, and years of experience). The mixed-effects model was used to account for the multiple assessments per GP.
Results: The 29 countries yielded 2543 participating GPs: 52% were female, 51% were located in a city, 71% reported a high prevalence of oldest-old patients in their offices, and 38% had >20 years of experience. Across countries, considerable variation was found in the decision to start antihypertensive treatment in the oldest-old, ranging from 34 to 88%. In 24/29 (83%) countries, frailty was associated with GPs' decision not to start treatment even after adjustment for SBP, CVD, and GP characteristics (OR 0.53, 95% CI 0.48–0.59; ORs per country 0.11–1.78).
Conclusions: Across countries, we found considerable variation in starting antihypertensive medication in the oldest-old. The frail oldest-old had an odds ratio of 0.53 of receiving antihypertensive treatment. Future hypertension trials should also include frail patients to acquire evidence on the efficacy of antihypertensive treatment in oldest-old patients with frailty, with the aim of providing evidence-based data for clinical decision-making.
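The odds ratio and confidence interval reported above come from the standard back-transformation of a logistic-regression coefficient: the model estimates a log-odds coefficient beta with standard error SE, and OR = exp(beta) with a Wald CI of exp(beta ± 1.96·SE). A minimal sketch of that transformation, with illustrative (not the study's) inputs:

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds scale) and its
    standard error into an odds ratio with a Wald 95% confidence interval.

    Returns (OR, lower bound, upper bound)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```

A beta of roughly −0.63 with SE around 0.05 (hypothetical values back-calculated for illustration) would reproduce an OR near 0.53 with a CI of about 0.48–0.59, matching the scale of the reported frailty effect.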