This paper investigates the relation between TT-MCTAG, a formalism used in computational linguistics, and RCG. RCGs are known to describe exactly the class PTIME; simple RCGs have even been shown to be equivalent to linear context-free rewriting systems, i.e., to be mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, it is NP-complete. In this paper, we put an additional limitation on the derivations licensed in TT-MCTAG and show that TT-MCTAG with this additional limitation can be transformed into equivalent simple RCGs. This result is interesting for theoretical reasons, since it shows that TT-MCTAG in this limited form is mildly context-sensitive, and also for practical reasons: we use the proposed transformation from TT-MCTAG to RCG in an actual parser that we have implemented.
The distribution of linguistic structures in the world is the joint product of universal principles, inheritance from ancestor languages, language contact, social structures, and random fluctuation. This paper proposes a method for evaluating the relative significance of each factor — and in particular, of universal principles — via regression modeling: statistical evidence for universal principles is found if the odds for families to have skewed responses (e.g. all or most members have postnominal relative clauses) as opposed to having an opposite response skewing or no skewing at all, is significantly higher for some condition (e.g. VO order) than for another condition, independently of other factors.
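The core of the statistical argument above can be illustrated with a minimal sketch: compare the odds of family-level response skewing under two conditions. The family counts below are invented for illustration only; the paper itself uses full regression modeling with controls for inheritance, contact, and other factors, not a single 2x2 table.

```python
# Toy illustration of the odds comparison described above. A "skewed"
# family is one in which all or most members share a response (e.g.
# postnominal relative clauses). All counts are hypothetical.
families = {
    ("VO", "skewed"): 40, ("VO", "not_skewed"): 10,
    ("OV", "skewed"): 15, ("OV", "not_skewed"): 35,
}

odds_vo = families[("VO", "skewed")] / families[("VO", "not_skewed")]
odds_ov = families[("OV", "skewed")] / families[("OV", "not_skewed")]
odds_ratio = odds_vo / odds_ov  # a ratio >> 1 would suggest a universal preference
print(round(odds_ratio, 2))
```

In the full method, this comparison is carried out "independently of other factors" by including those factors as predictors in the regression rather than by tabulating raw counts.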
The paper provides novel insights on the effect of a firm’s risk management objective on the optimal design of risk transfer instruments. I analyze the interrelation between the structure of the optimal insurance contract and the firm’s objective to minimize the required equity it has to hold to accommodate losses in the presence of multiple risks and moral hazard. In contrast to the case of risk aversion and moral hazard, the optimal insurance contract involves a joint deductible on aggregate losses in the present setting.
Arthropods use fluid medium motion-sensing filiform hairs on their exoskeleton to detect aerodynamic or hydrodynamic stimuli in their surroundings that affect their behaviour. The hairs, often of different lengths and organized in groups or arrays, respond to particular fluid motion amplitudes and frequencies produced by prey, predators, or conspecifics, even in the presence of background noise peculiar to the environment. While long known to biologists and experimentally investigated by them, it is only relatively recently that comprehensive physical-mathematical models have emerged offering an alternative methodology for investigating the biomechanics of filiform hair motion. These models have been developed and applied to quantitatively predict the performance characteristics of filiform hairs in air and water as a function of the relevant parameters that affect their physical behaviour. They even allow the exploration of possible biological evolutionary paths for filiform hair changes resulting from physical selection pressures. In this chapter we review the state of knowledge of filiform hair biomechanics and discuss two physical-mathematical models to predict hair dynamical behaviour. One modelling approach is analytically exact, serving for quantitative purposes, while the other, derived from it, is approximate, serving for qualitative guidance concerning the parameter dependencies of hair motion. Using these models we look in turn at the influence of these parameters and the fluid media physical properties on hair motion, including the possibility of medium-facilitated viscous coupling between hairs. The models point to areas where data are currently lacking and future research could be focused. In addition, new results are presented pertaining to transient flows.
We qualitatively explore the possibility of an overlapping water-air niche adaptation potential that may explain how, over many generations, the filiform hairs of an arthropod living in water could have evolved to function in air. Because flow-sensing hairs have served to inspire corresponding artificial medium-motion microsensors, we discuss recent advances in this area. Significant challenges remain to be overcome, especially with respect to the materials and fabrication techniques used. In spite of the impressive technological advances made, nature still remains unrivalled.
Writing against the odds : the South’s cultural and literary struggle against progress and modernity
(2008)
The literature and culture of the American South are decisively shaped by their orientation toward the region's own history and past. The dark past that overshadows the present and determines the future is a Southern theme par excellence, omnipresent in the South's culture and literature. After the Civil War and the Reconstruction era, the South was culturally and economically drained, prostrate, and isolated. After the war the gulf between the Northern and Southern states widened ever further, a process, however, as old as the United States itself, already beginning at the start of the eighteenth century. This isolation was at once intended and unintended, conscious and unconscious. The shame of the lost war and the experience of marginalization were the catalysts for the cultivation and preservation of the South's distinctive features, with its supposedly superior culture and morality. Thus began the commercialized, highly ideologized construction of Southern history and identity, which radiates into every sphere of life. The melancholy look back at the past as the most important reference point and cultural vanishing point replaced entry into modernity, with its fast pace, interchangeability, and surrender of tradition to a rushing present. Instead of a flood of choices and options, the Southern individual faced a constricting society that left little room for deviation and maintained a harsh system of control. It is a unique mixture of pride, shame, and a feeling of simultaneous inferiority and superiority that produces an especially fertile literary soil.
This thesis traces the historical, cultural, and literary roots of Southern literature since the Southern Renaissance, and then presents the constantly perpetuated formal and thematic structures that have undergone little change. This perpetuation results from the unique situation of the South, from a historical burden that remains undiminished in its relevance and is far from worked through. Southern authors could not, and still cannot, abandon the traditional forms and themes as long as these remain constitutive components of Southern culture and identity. The South refuses modernity and perceives progress and modern mass society not only as a threat but as a Northern influence that endangers its own culture, an interference from outside that must be warded off. Tradition-minded, reactionary tendencies and elements run even through supposedly progressive, modern developments and phenomena. I combine identity-constituting, isolating, and melancholic elements and examine them historically, culturally, and literarily in order to gain a multilayered perspective. Understanding this historical burden and its undiminished significance for, and effect on, the literature and culture of the South is essential for deeper insight into their structures and meaning.
Religious conversion has become a dangerous social and individual problem. In Latin America, a traditional Catholic area, Protestant sects are successfully converting more and more Catholics into their own communities; the Pope therefore demands strict control of these activities. In India, for example, the Catholic hierarchy is criticizing the Indian governments which have forbidden conversion for non-spiritual reasons. Hindu organizations have even begun, very successfully, to reconvert Indian Christians, particularly those of Dalit and tribal background. Buddhists are very successful in the indirect and even direct conversion of many Westerners. Wahhabi missionaries spread their Neo-Islam in Muslim societies and win more and more converts, even non-Muslims. To this we should add the forcible and sometimes extremely cruel conversions that atheistic states have carried out since the last century. ...
Poster presentation. A central problem in neuroscience is to bridge local synaptic plasticity and the global behavior of a system. It has been shown that Hebbian learning of connections in a feedforward network performs PCA on its inputs [1]. In a recurrent Hopfield network with binary units, the Hebbian-learnt patterns form the attractors of the network [2]. Starting from a random recurrent network, Hebbian learning reduces system complexity from chaotic to fixed point [3]. In this paper, we investigate the effect of Hebbian plasticity on the attractors of a continuous dynamical system. In a Hopfield network with binary units, it can be shown that Hebbian learning of an attractor stabilizes it, deepening the energy landscape and enlarging the basin of attraction. We are interested in how these properties carry over to continuous dynamical systems. Consider a system of the form dx_i/dt = -x_i + Σ_j T_ij f_j(g_j x_j) (1), where x_i is a real variable and f_i a nondecreasing nonlinear function with range [-1, 1]. T is the synaptic matrix, assumed to have been learned from orthogonal binary ({1, -1}) patterns ξ^μ by the Hebbian rule T_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ. As in the continuous Hopfield network [4], the ξ^μ are no longer attractors unless the gains g_i are big. Assume that the system settles down to an attractor X* and undergoes Hebbian plasticity: T' = T + εX*(X*)ᵀ, where ε > 0 is the learning rate. We study how the attractor dynamics change following this plasticity. We show that in system (1), under certain general conditions, Hebbian plasticity moves the attractor towards its corner of the hypercube. Linear stability analysis around the attractor shows that the maximum eigenvalue becomes more negative with learning, indicating a deeper landscape. This in a way improves the system's ability to retrieve the corresponding stored binary pattern, although the attractor itself is no longer stabilized the way it is in binary Hopfield networks.
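A minimal numerical sketch of this setup (not the authors' code; the network size, shared gain, learning rate, and use of Euler integration are assumptions) simulates system (1) with f = tanh, relaxes to an attractor near a stored pattern, applies the Hebbian update, and checks that the maximum Jacobian eigenvalue becomes more negative:

```python
import numpy as np

# Toy continuous network dx/dt = -x + T tanh(g x), with T learned from
# two orthogonal binary patterns by the Hebbian rule T = (1/N) sum xi xi^T.
N, g, dt = 8, 3.0, 0.1
xi1 = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
xi2 = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
T = (np.outer(xi1, xi1) + np.outer(xi2, xi2)) / N

def relax(T, x):
    # Euler integration until the state settles on a fixed point
    for _ in range(3000):
        x = x + dt * (-x + T @ np.tanh(g * x))
    return x

def max_eig(T, x):
    # maximum eigenvalue of the flow's Jacobian at the fixed point x
    D = np.diag(g * (1 - np.tanh(g * x) ** 2))
    return np.linalg.eigvals(-np.eye(N) + T @ D).real.max()

x_star = relax(T, 0.5 * xi1)                # attractor near pattern xi1
lam_before = max_eig(T, x_star)
T2 = T + 0.05 * np.outer(x_star, x_star)    # Hebbian plasticity T' = T + eps X* X*^T
x_star2 = relax(T2, x_star)
lam_after = max_eig(T2, x_star2)
print(lam_after < lam_before)               # plasticity deepens the landscape
```

In this toy example the attractor also moves outward toward its corner of the hypercube after the update, consistent with the behavior the abstract describes.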
Oscillatory activity in the human electro- or magnetoencephalogram has been related to cortical stimulus representations and their modulation by cognitive processes. Whereas previous work has focused on gamma-band activity (GBA) during attention or maintenance of representations, there is little evidence for GBA reflecting individual stimulus representations. The present study aimed at identifying stimulus-specific GBA components during auditory spatial short-term memory. A total of 28 adults were assigned to 1 of 2 groups who were presented with only right- or left-lateralized sounds, respectively. In each group, 2 sample stimuli were used which differed in their lateralization angles (15° or 45°) with respect to the midsagittal plane. Statistical probability mapping served to identify spectral amplitude differences between 15° versus 45° stimuli. Distinct GBA components were found for each sample stimulus in different sensors over parieto-occipital cortex contralateral to the side of stimulation, peaking during the middle 200–300 ms of the delay phase. The differentiation between "preferred" and "nonpreferred" stimuli during the final 100 ms of the delay phase correlated with task performance. These findings suggest that the observed GBA components reflect the activity of distinct networks tuned to spatial sound features which contribute to the maintenance of task-relevant information in short-term memory.
Market uptake of pegylated interferons for the treatment of hepatitis C in Europe : meeting abstract
(2008)
Introduction and Objectives: Hepatitis C virus (HCV) infection is a leading cause of chronic liver disease, with life-threatening sequelae such as end-stage liver cirrhosis and liver cancer. It is estimated that the infection annually causes about 86,000 deaths, 1.2 million disability-adjusted life years (DALYs), and a quarter of the liver transplants in the WHO European region. Presently, only antiviral drugs can prevent the progression to severe liver disease. Pegylated interferons combined with ribavirin are considered the current state-of-the-art treatment. The objective of this investigation was to assess the market uptake of these drugs across Europe in order to find out whether there is unequal access to optimised therapy.

Material and Methods: We used IMS launch and sales data (April 2000 to December 2005) for peginterferons and ribavirin for 21 countries of the WHO European region. Market uptake was investigated by comparing the development of country-specific sales rates. For the market access analysis, we converted sales figures into numbers of treated patients and related those to country-specific hepatitis C prevalence. To convert sales figures into patient figures, the amount of active pharmaceutical ingredient (API) sold was divided by the average total patient dose (ATPD), derived by a probability-tree-based calculation algorithm accounting for genotype distribution, early stopping rules, body weight, unscheduled treatment stops, and dose reductions:

    N_total = API_(PegIFNalpha-2a) / ATPD_(PegIFNalpha-2a) + API_(PegIFNalpha-2b) / ATPD_(PegIFNalpha-2b)

For more concise presentation of the results, the 21 included countries were aggregated into four categories: 1. EU founding members (1957): Belgium, France, Germany, Italy, and the Netherlands; 2. countries joining the EU before 2000: Austria (1995), Denmark (1973), Finland (1995), Greece (1981), Republic of Ireland (1973), Spain (1986), Sweden (1995), and the UK (1973); 3. countries joining the EU after 2000: Czech Republic (2004), Hungary (2004), Poland (2004), and Romania (2007); 4. EU non-member states: Norway, Russia, Switzerland, and Turkey.

Results: Market launch and market uptake of the investigated drugs differed considerably across countries. The earliest, most rapid, and highest increases in sales rates were observed in the EU founding member states, followed by countries that joined the EU before 2000, countries that joined the EU after 2000, and EU non-member states. Most new EU member states showed a noticeable increase in sales after joining the EU. The market access analysis showed that by the end of 2005 about 308,000 patients had been treated with peginterferon in the 21 countries. Treatment rates differed across Europe: the number of patients ever treated with peginterferon per 100 prevalent cases ranged from 16 in France to less than one in Romania, Poland, Greece, and Russia.

Discussion: Peginterferon market uptake and prevalence-adjusted treatment rates were found to vary considerably across the 21 countries of the WHO European region, suggesting unequal access to optimised therapy. Poor market access was especially common in low-resource countries. Besides budget restrictions, national surveillance and prevention policies should be considered as explanations for the variation in market access. Although our results allowed for a ranking of countries in order of market access, no final conclusions on over- or undertreatment can be drawn, because the number of patients who really require antiviral treatment is unknown. Further research based on pan-European decision models is recommended to determine the fraction of not yet successfully treated but treatable patients among those ever diagnosed with HCV. ...
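The conversion from sales volumes to patient numbers described in the methods amounts to dividing the active pharmaceutical ingredient sold by the average total patient dose for each peginterferon and summing. The sketch below uses invented placeholder figures, not the IMS data or the real ATPD values:

```python
# Sketch of N_total = API / ATPD summed over the two peginterferons.
# All quantities below are illustrative placeholders.
api_mg = {"PegIFNalpha-2a": 1.8e6, "PegIFNalpha-2b": 1.2e6}    # active ingredient sold (mg)
atpd_mg = {"PegIFNalpha-2a": 7200.0, "PegIFNalpha-2b": 4800.0}  # average total patient dose (mg)

n_total = sum(api_mg[drug] / atpd_mg[drug] for drug in api_mg)
print(int(n_total))  # 250 + 250 = 500 treated patients in this toy example
```

In the study itself the ATPD is not a fixed constant but the output of the probability-tree algorithm accounting for genotype distribution, stopping rules, body weight, and dose reductions.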
In this work, data of the NA49 experiment at the CERN SPS on the energy dependence of multiplicity fluctuations in central Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV, as well as on the system size dependence at 158A GeV, are analysed for positively, negatively and all charged hadrons. Furthermore, the rapidity and transverse momentum dependence of multiplicity fluctuations is studied. The experimental results are compared to predictions of statistical hadron-gas and string-hadronic models. Multiplicity fluctuations are expected to be sensitive to the phase transition to the quark-gluon plasma (QGP) and to the critical point of strongly interacting matter, and both the onset of deconfinement, the lowest energy at which QGP is created, and the critical point are predicted to lie in the SPS energy range. Furthermore, the predictions of statistical and string-hadronic models for multiplicity fluctuations differ, so the experimental data might make it possible to distinguish between them. The measure of multiplicity fluctuations used here is the scaled variance omega, defined as the ratio of the variance to the mean of the multiplicity distribution. In the NA49 experiment the tracks of charged particles are detected in four large-volume time projection chambers (TPCs). In order to remove possible detector effects, a detailed study of event and track selection criteria is performed. Naively one would expect Poisson fluctuations in central heavy-ion collisions. A suppression of fluctuations compared to a Poisson distribution is observed for positively and negatively charged hadrons at forward rapidity in Pb+Pb collisions. At midrapidity, and for all charged hadrons, the fluctuations are larger than the Poisson ones. The fluctuations seem to increase with decreasing system size; it is suggested that this is due to increased relative fluctuations in the number of participants. Furthermore, omega was found to increase with decreasing rapidity and transverse momentum.
A hadron-gas model predicts different values of omega for different statistical ensembles. In the grand-canonical ensemble, where all conservation laws are fulfilled only on average, not on an event-by-event basis, the predicted fluctuations are the largest. In the canonical ensemble the charges, namely the electric charge, the baryon number and the strangeness, are conserved in each event; the scaled variance in this ensemble is smaller than in the grand-canonical ensemble. In the micro-canonical ensemble not only the charges but also the energy and the momentum are conserved in each event, and the predicted omega is the smallest. The grand-canonical and canonical formulations of the hadron-gas model over-predict the fluctuations in the forward acceptance and, in contrast to the experimental data, predict no dependence of omega on rapidity and transverse momentum. For the micro-canonical formulation, which predicts small fluctuations in the total phase space, no quantitative calculation is available yet for the limited experimental acceptance; however, the increase of fluctuations at low rapidities and transverse momenta can be qualitatively understood in a micro-canonical ensemble as an effect of energy and momentum conservation. The string-hadronic model UrQMD significantly over-predicts the mean multiplicities but approximately reproduces the scaled variance of the multiplicity distributions at all measured collision energies, systems and phase-space intervals. String-hadronic models predict for Pb+Pb collisions a monotonic increase of omega with collision energy, similar to the observations for p+p interactions. This is in contrast to the predictions of the hadron-gas model, where omega shows no energy dependence at higher energies. At SPS energies the predictions of the string-hadronic and hadron-gas models are of the same order of magnitude, but at RHIC and LHC energies the difference in omega in the full phase space is much larger.
Experimental data should thus be able to distinguish between them rather easily. The narrower-than-Poissonian (omega < 1) multiplicity fluctuations measured in the forward kinematic region (1 < y(pi) < y_beam) can be related to the reduced fluctuations predicted for relativistic gases with imposed conservation laws. This general feature of relativistic gases may be preserved also for some non-equilibrium systems as modeled by the string-hadronic approaches. A quantitative estimate shows that the predicted maximum in fluctuations due to a first-order phase transition from hadron gas to QGP is smaller than the experimental errors of the present experiment and can therefore neither be confirmed nor disproved. No sign of increased fluctuations, as expected for a freeze-out near the critical point of strongly interacting matter, is observed.
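The scaled variance used throughout this analysis is straightforward to compute from event-wise multiplicities. The following toy sketch (synthetic samples, not NA49 events) checks the Poisson baseline, for which omega equals 1; values below 1 signal suppressed fluctuations:

```python
import numpy as np

# Scaled variance omega = Var[N] / <N> of a multiplicity distribution.
def scaled_variance(multiplicities):
    n = np.asarray(multiplicities, dtype=float)
    return n.var() / n.mean()

rng = np.random.default_rng(0)
poisson_mult = rng.poisson(lam=20.0, size=200_000)  # synthetic event multiplicities
omega = scaled_variance(poisson_mult)
print(round(float(omega), 2))  # close to 1 for a Poisson sample
```

With real data, the same estimator is applied within the chosen rapidity and transverse-momentum acceptance, after the event and track selection described above.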
A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures
(2008)
As an alternative approach to lattices and space groups, this work explores graph theory as a means to model crystal structures. The approach uses quotient graphs and nets - the graph theoretical equivalent of cells and lattices - to represent crystal structures. After a short review of related work, new classes of cycles in nets are introduced, and their ability to distinguish between non-isomorphic nets and their computational complexity are evaluated. Then, two methods to estimate a structure’s density from the corresponding net are proposed. The first uses coordination sequences to estimate the number of nodes in a sphere, whereas the second method determines the maximal volume of a unit cell. Based on the quotient graph only, methods are proposed to determine whether nets consist of islands, chains, planes, or penetrating, disconnected sub-nets. An algorithm for the enumeration of crystal structures is revised and extended to a search for structures possessing certain properties. Particular attention is given to the exclusion of redundant nets and of those which, by the nature of their connectivity, cannot correspond to a crystal structure. Nets with four four-coordinated nodes, corresponding to sp3 hybridised carbon polymorphs with four atoms per unit cell, are completely enumerated in order to demonstrate the approach. In order to render quotient graphs and nets independent from crystal structures, they are reintroduced in a purely graph-theoretical way. Based on this, the issue of iso- and automorphism of nets is reexamined. It is shown that the topology of a net (that is, the bonds in a crystal) severely constrains the symmetry of the embedding (that is, the crystal) and, in the case of connected nets, determines the space group up to the setting. Several examples are studied and conclusions on phases are drawn (pseudo-cubic FeS2 versus pyrite; α- versus β-quartz; marcasite- versus rutile-like phases).
As the automorphisms of certain quotient graphs stipulate a translational symmetry higher than an arbitrary embedding of the corresponding net would show, they are examined in more detail, and a method to reduce the size of such quotient graphs is proposed. Besides two instructional examples with 2-dimensional graphs, the structures of halite, calcite, magnesite, barytocalcite, and a strontium feldspar are discussed. For some of the structures it is shown that the quotient graph equivalent to a centred cell reduces to a quotient graph equivalent to the primitive cell. For the partially disordered strontium feldspar, it is shown that even if it could be annealed to an ordered structure, the unit cell would likely remain unchanged. For the calcite and barytocalcite structures it is shown that the equivalent nets are not isomorphic.
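The coordination-sequence idea behind the first density estimate can be sketched with a plain breadth-first search: shell k of the sequence counts the nodes at graph distance exactly k from an origin node. The example below computes the well-known coordination sequence of the two-dimensional square-lattice net; it is an illustrative toy, not the thesis' implementation, which works on quotient graphs of three-dimensional nets.

```python
from collections import deque

# Coordination sequence of an (infinite) net by breadth-first search,
# expanding only out to graph distance kmax.
def coordination_sequence(neighbours, origin, kmax):
    dist = {origin: 0}
    queue = deque([origin])
    shells = [0] * (kmax + 1)
    shells[0] = 1
    while queue:
        v = queue.popleft()
        if dist[v] == kmax:
            continue  # do not expand beyond the last requested shell
        for w in neighbours(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                shells[dist[w]] += 1
                queue.append(w)
    return shells

# Square-lattice net: each node (x, y) bonds to its four grid neighbours.
square = lambda p: [(p[0] + 1, p[1]), (p[0] - 1, p[1]),
                    (p[0], p[1] + 1), (p[0], p[1] - 1)]
print(coordination_sequence(square, (0, 0), 5))  # [1, 4, 8, 12, 16, 20]
```

Summing the shells up to k then gives the node count within a topological "sphere" of radius k, which is the quantity the density estimate relates to a physical volume.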
This paper explores the role of trade integration—or openness—for monetary policy transmission in a medium-scale New Keynesian model. Allowing for strategic complementarities in price-setting, we highlight a new dimension of the exchange rate channel by which monetary policy directly impacts domestic inflation. Although the strength of this effect increases with economic openness, it also requires that import prices respond to exchange rate changes. In this case domestic producers find it optimal to adjust their prices to exchange rate changes which alter the domestic currency price of their foreign competitors. We pin down key parameters of the model by matching impulse responses obtained from a vector autoregression on U.S. time series relative to an aggregate of industrialized countries. While we find evidence for strong complementarities, exchange rate pass-through is limited. Openness has therefore little bearing on monetary transmission in the estimated model.
The popular Nelson-Siegel (1987) yield curve is routinely fit to cross sections of intra-country bond yields, and Diebold and Li (2006) have recently proposed a dynamized version. In this paper we extend Diebold-Li to a global context, modeling a potentially large set of country yield curves in a framework that allows for both global and country-specific factors. In an empirical analysis of term structures of government bond yields for Germany, Japan, the U.K., and the U.S., we find that global yield factors do indeed exist and are economically important, generally explaining significant fractions of country yield curve dynamics, with interesting differences across countries.
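For reference, the Nelson-Siegel curve underlying the model maps a maturity tau to a yield via three factors that act as level, slope, and curvature. The sketch below evaluates the standard functional form; the parameter values are illustrative only, not estimates from the paper:

```python
import math

# Nelson-Siegel (1987) yield curve:
# y(tau) = b0 + b1*(1-exp(-lam*tau))/(lam*tau)
#             + b2*((1-exp(-lam*tau))/(lam*tau) - exp(-lam*tau))
def nelson_siegel(tau, b0, b1, b2, lam):
    h = (1 - math.exp(-lam * tau)) / (lam * tau)
    return b0 + b1 * h + b2 * (h - math.exp(-lam * tau))

# For large tau the curve approaches the level factor b0,
# and for tau near zero it approaches b0 + b1.
y_long = nelson_siegel(1000.0, b0=0.05, b1=-0.02, b2=0.01, lam=0.6)
print(round(y_long, 3))
```

In the Diebold-Li dynamization, the factors b0, b1, b2 become time-varying state variables; the global extension described above further decomposes them into global and country-specific components.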
Measuring financial asset return and volatility spillovers, with application to global equity markets
(2008)
We provide a simple and intuitive measure of interdependence of asset returns and/or volatilities. In particular, we formulate and examine precise and separate measures of return spillovers and volatility spillovers. Our framework facilitates study of both non-crisis and crisis episodes, including trends and bursts in spillovers, and both turn out to be empirically important. In particular, in an analysis of nineteen global equity markets from the early 1990s to the present, we find striking evidence of divergent behavior in the dynamics of return spillovers vs. volatility spillovers: Return spillovers display a gently increasing trend but no bursts, whereas volatility spillovers display no trend but clear bursts.
Comparative studies suggest that at least some bird species have evolved mental skills similar to those found in humans and apes. This is indicated by feats such as tool use, episodic-like memory, and the ability to use one's own experience in predicting the behavior of conspecifics. It is, however, not yet clear whether these skills are accompanied by an understanding of the self. In apes, self-directed behavior in response to a mirror has been taken as evidence of self-recognition. We investigated mirror-induced behavior in the magpie, a songbird species from the crow family. As in apes, some individuals behaved in front of the mirror as if they were testing behavioral contingencies. When provided with a mark, magpies showed spontaneous mark-directed behavior. Our findings provide the first evidence of mirror self-recognition in a non-mammalian species. They suggest that essential components of human self-recognition have evolved independently in different vertebrate classes with a separate evolutionary history.
THIS PAPER WILL conduct a critical investigation of the famous argument against atomism first made by the 4th-century CE Indian Buddhist philosopher Vasubandhu in his idealist treatise Viṃśatikā Vijñaptimātratāsiddhi (The Twenty Verses of Mind-Only). Although the present exposition will be more conceptual than historical in focus, it will first unfold the Abhidharmic Buddhist precursors of the Mind-Only epistemology. With the necessary background in place, I shall then attempt a rational reconstruction of the substance of Vasubandhu's argument against atomism, rendering it intelligible to the modern reader by transposing it into contemporary philosophical idiom. Finally, I will employ the analysis of atomism and the external world in the Mind-Only school as a point of departure from which to further probe closely related concerns of Buddhist transcendental philosophy having to do with the nature of empirical knowledge, the power of skeptical argument, and the status of apperception. ...
Synchronized neural activity in the visual cortex is associated with small time delays (up to ~10 ms). The magnitude and direction of these delays depend on stimulus properties. Thus, synchronized neurons produce fast sequences of action potentials, and the order in which units tend to fire within these sequences is stimulus-dependent, but not stimulus-locked. In the present thesis, I investigated whether such preferred firing sequences repeat with sufficient accuracy to serve as a neuronal code. To this end, I developed a method for extracting the preferred sequence of firing in a group of neurons from their pair-wise preferred delays, as measured by the offsets of the centre peaks in their cross-correlation histograms. This analysis method was then applied to highly parallel recordings of neuronal spiking activity made in area 17 of anaesthetized cats in response to simple visual stimuli, like drifting gratings and moving bars. Using a measure of effect size, I then analyzed the accuracy with which preferred firing sequences reflected stimulus properties, and found that in the presence of gamma oscillations, the time at which a unit fired in the firing sequence conveyed stimulus information almost as precisely as the firing rate of the same unit. Moreover, the stimulus-dependent changes in firing rates and firing times were largely unrelated, suggesting that the information they carry is not redundant. Thus, despite operating at a time scale of only a few milliseconds, firing sequences have the strong potential to provide a precise neural code that can complement firing rates in the cortical processing of stimulus information.
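The reconstruction of a preferred firing order from pairwise delays can be illustrated with a simple averaging heuristic. This is a hedged sketch of the general idea, not the thesis' actual method, and the delay matrix is invented: if d[i][j] is the preferred delay of unit j relative to unit i (positive meaning j tends to fire after i), then the mean of row i estimates how much later the other units fire, so units with large row means fire early.

```python
# Invented, internally consistent pairwise delay matrix (ms), where
# delays[i][j] is the preferred delay of unit j relative to unit i.
delays = [
    [0.0,  2.0,  5.0],
    [-2.0, 0.0,  3.0],
    [-5.0, -3.0, 0.0],
]
n = len(delays)
row_mean = [sum(delays[i]) / n for i in range(n)]
# Sort units from earliest to latest in the preferred firing sequence.
order = sorted(range(n), key=lambda i: -row_mean[i])
print(order)  # unit 0 fires first, then unit 1, then unit 2
```

With real cross-correlogram offsets the matrix is noisy and only approximately antisymmetric, so a robust estimate would need to weight delay measurements by their reliability.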
Genetic analysis of salt adaptation in Methanosarcina mazei Gö1 : the role of abl, ota and otb genes
(2008)
1. M. mazei is a halotolerant methanogenic archaeon and accumulates compatible solutes as a long-term adaptation to increased environmental osmolarity. At intermediate salt concentrations (~400 mM NaCl) α-glutamate is formed preferentially, while at higher salt concentrations (~800 mM NaCl) Nε-acetyl-β-lysine is synthesized in addition to α-glutamate.

2. NMR analysis of the intracellular solute composition showed that M. mazei can accumulate glycine betaine as an osmolyte. Two putative glycine betaine transporters, Ota ("osmoprotectant transporter A") and Otb ("osmoprotectant transporter B"), were identified in M. mazei. After the genome of M. mazei had been fully sequenced, it was searched for genes that might play a role in the uptake of glycine betaine or other compatible solutes, using as a reference the sequence of the substrate-binding protein of a known bacterial glycine betaine transporter, OpuAC from B. subtilis. A homologue, otaC, was identified in M. mazei; otaC is part of a gene cluster encoding an ABC transporter. otb was identified in a genome-wide expression analysis of salt adaptation in M. mazei: genes of a putative ABC transporter were slightly induced under high-salt conditions, and this turned out to be a second putative glycine betaine transporter. Otb also belongs to the family of ABC transporters. Comparative analyses showed that the two transporters share little similarity with each other. The function and role of the two ABC transporters, especially of Otb, was unclear at the beginning of this work.

3. Analysis of the intracellular solute pool in the M. mazei wild type revealed that in the presence of glycine betaine the concentrations of glutamate and Nε-acetyl-β-lysine were reduced. At 400 mM NaCl, glycine betaine reduced the glutamate concentration by 16%, and at 800 mM NaCl by 29%. The influence of glycine betaine on the accumulation of Nε-acetyl-β-lysine was especially pronounced: at 400 mM NaCl, glycine betaine reduced the Nε-acetyl-β-lysine concentration by 60%, and at 800 mM NaCl by 50%. The influence of glycine betaine could be observed at several levels in M. mazei. It was shown that the relative transcript amount of ota increases under high-salt conditions, and that glycine betaine reduced ota transcription at various salt concentrations. The relative ota mRNA level, quantified by quantitative real-time PCR (qRT-PCR), was reduced by up to 52% in cells grown in the presence of glycine betaine. The otb transcript level was unaffected under the same conditions and generally showed no increase with the salinity of the medium. Furthermore, an effect of glycine betaine on the transport activity of Ota was demonstrated: cells grown at 400 mM NaCl in the presence of glycine betaine showed 90% lower transport activity than cells grown at 400 mM NaCl without glycine betaine. It must be kept in mind, however, that for cells grown without glycine betaine this represented a net uptake of glycine betaine, whereas cells grown in the presence of glycine betaine can be assumed to perform an exchange reaction between pre-existing intracellular and externally supplied glycine betaine. [The data underlying this last point were obtained by Silke Schmidt in a diploma thesis that I co-supervised; they are included here for a complete account of the project.]

4. To further clarify the role and function of the two putative glycine betaine transporters Ota and Otb, mutant studies were to be carried out. A prerequisite for generating mutants is that the organism grows on agar plates and forms single colonies originating from a single cell. This is an important point for Methanosarcina spp., which form cell packets known as sarcinae. Therefore, optimal plating conditions were first sought under which M. mazei does not form sarcinae and the plating efficiency is highest; the plating efficiency averaged 54%. For introducing DNA into the cells, liposome-mediated transformation was tested. A similar procedure had already been described for Methanosarcina acetivorans but had not yet been applied successfully to M. mazei Gö1 or other strains of M. mazei. First steps in adapting the transformation protocol included testing DOTAP from different suppliers as well as the amount of DNA used. The target gene/operon to be deleted was replaced by a pac cassette, which encodes a puromycin transacetylase and confers puromycin resistance on the organism. The pac cassette was flanked by regions surrounding the target locus and, by means of these flanking regions, integrated into the genome via double homologous recombination.

5. Using the procedure described above, ota::pac and otb::pac mutants were generated and verified by Southern blot analysis. Initial characterization of the mutants by qRT-PCR showed that at the mRNA level no transcripts of ota were detectable in M. mazei ota::pac and none of otb in M. mazei otb::pac. In addition, at the protein level the substrate-binding proteins OtaC in M. mazei ota::pac and OtbC in M. mazei otb::pac could not be detected with antibodies against the respective substrate-binding protein, confirming the successful deletions. First phenotypic characterizations showed that growth of M. mazei ota::pac and M. mazei otb::pac under high-salt conditions was unimpaired and comparable to that of the wild type. The mutants also grew without a phenotype at a lower growth temperature of 22°C.

6. Radioactive transport studies with M. mazei otb::pac showed that this mutant, which still possesses a functional Ota, can take up [14C]glycine betaine. It turned out that this mutant had a higher transport rate for glycine betaine than the wild type: the uptake rate was higher by a factor of 2. In addition, qRT-PCR analyses showed that the relative ota transcript level in the otb::pac mutant was higher by a factor of 2 than in the wild type. The converse effect, an increased otb transcript level in M. mazei ota::pac, was not observed. At the protein level, the intracellular OtaC concentration in the mutant was slightly higher than in the wild type. However, the intracellular glycine betaine concentration at 400 mM NaCl was not increased in the mutant compared with the wild type; the concentrations were equal. At higher salt concentrations (800 mM NaCl) a different picture emerged: the intracellular glycine betaine concentration was 60% higher in the mutant, which could be due to the increased transport activity of M. mazei otb::pac. The concentrations of other compatible solutes such as glutamate and Nε-acetyl-β-lysine were reduced by up to 48% in these cells. Previous studies had shown that heterologously overproduced Ota from M. mazei could restore glycine betaine uptake in E. coli MKH13, an E. coli mutant lacking all glycine betaine transporters [the ota data in E. coli MKH13 were obtained in the diploma thesis of Silke Schmidt mentioned above]. To clarify the function of Otb, the same experiment was performed with otb in E. coli MKH13; however, heterologous production of Otb from M. mazei could not restore glycine betaine uptake in E. coli MKH13. Western blot analysis confirmed that Otb was indeed present in the membrane. Transport studies with the mutant M. mazei ota::pac showed that it could no longer take up [14C]glycine betaine, and no accumulation of glycine betaine could be measured by NMR in this mutant. Furthermore, the intracellular concentrations of glutamate and Nε-acetyl-β-lysine at 400 mM and 800 mM NaCl in this mutant were unaffected by the glycine betaine concentration in the medium. Further transport studies with M. mazei ota::pac on the uptake of [14C]choline showed that this molecule was taken up neither by the wild type nor by the mutant, a result confirmed by NMR measurement of the solute pool. It can thus be excluded that Otb is either a glycine betaine transporter or a choline transporter in M. mazei under the conditions tested. These observations clearly demonstrate that Ota is the only functional glycine betaine transporter in M. mazei, while the role of Otb remains unresolved.

7. Nε-acetyl-β-lysine, the dominant compatible solute in M. mazei at 800 mM NaCl, is synthesized by the enzymes AblA, a lysine-2,3-aminomutase, and AblB, a β-lysine acetyltransferase. In this work, a Δabl::pac mutant was generated to clarify whether the two enzymes are encoded by the postulated abl operon and, if so, what phenotype an Nε-acetyl-β-lysine-free mutant shows under salt stress. NMR analyses showed that no Nε-acetyl-β-lysine was detectable in the abl::pac mutant. This demonstrates that the genes ablA and ablB and their gene products are essential for the synthesis of Nε-acetyl-β-lysine in M. mazei. Under high-salt conditions, growth of M. mazei abl::pac was markedly slower than that of the wild type. This result was unexpected, since an abl::pac mutant of Methanococcus maripaludis was no longer able to grow under high-salt conditions. Under low-salt and intermediate salt concentrations, growth of M. mazei abl::pac was unimpaired and resembled that of the wild type. In the presence of glycine betaine, the abl::pac mutant of M. mazei accumulated 2.4 times more glycine betaine than the wild type under high-salt conditions, compensating the deficit in its solute pool and enabling growth at high salt. It was thereby able to grow like the wild type again.

8. Under high-salt conditions, the loss of Nε-acetyl-β-lysine was compensated by increased concentrations of glutamate and of a new compatible solute. NMR analyses showed that this solute was alanine. Up to now, the use of alanine as a compatible solute had never been described. To confirm that alanine serves as a compatible solute in M. mazei abl::pac, its concentration was measured at various salt concentrations. The alanine concentration increased with increasing salt concentration and at 800 mM NaCl was 12-fold higher than at 400 mM NaCl. Moreover, glycine betaine reduced the alanine concentration at 800 mM NaCl by 58%. Transport experiments showed that M. mazei cannot take up alanine from the medium.

9. Initial analyses of possible biosynthetic pathways for alanine showed that alanine dehydrogenase was not induced at the transcript level under high-salt conditions and is therefore unlikely to play a role in the synthesis of alanine as a compatible solute. Aminotransferases, however, might be involved in alanine biosynthesis. Furthermore, the enzymes responsible for the synthesis of glutamate as a compatible solute are unknown; this holds for all organisms studied so far that use glutamate as a compatible solute. In this work, the abl::pac mutant, which produces increased amounts of glutamate for osmoprotection, was used to address the question of which genes/enzymes might play a role in the synthesis of glutamate as a compatible solute. For this purpose, the transcript levels of various genes possibly involved in glutamate synthesis were examined in the mutant and the wild type under high-salt conditions. Several genes of different enzymes turned out to be slightly induced in the mutant under high-salt conditions. One of these enzymes is glutamine synthetase, which is responsible for the conversion of glutamate to glutamine at the expense of ATP. M. mazei possesses two genes encoding a putative glutamine synthetase. In M. mazei abl::pac, the gene glnA2 was slightly induced under high-salt conditions (7.63 ± 2.2) compared with the wild type (4.03 ± 1.14). Furthermore, a slight induction of gltB1, gltB2 and gltB3 was observed in the mutant under high salt; these genes encode the individual domains of a glutamate synthase. These first analyses indicate that the synthesis of glutamate as a compatible solute might proceed via a coupled reaction of glutamine synthetase and glutamate synthase.
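Several of the points above compare relative transcript amounts between conditions by qRT-PCR. As an illustration only (the summary does not state the normalization scheme used), the standard 2^-ΔΔCt calculation for such fold changes can be sketched as:

```python
def fold_change(ct_target_cond, ct_ref_cond, ct_target_ctrl, ct_ref_ctrl):
    """Relative transcript level by the 2^-ddCt method:
    Ct values of a target gene and a reference gene are compared
    between a condition of interest (e.g. high salt) and a control
    condition. Lower Ct = more template = earlier amplification."""
    dct_cond = ct_target_cond - ct_ref_cond    # normalize within condition
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl    # normalize within control
    return 2 ** -(dct_cond - dct_ctrl)

# Target amplifies 2 cycles earlier (relative to the reference gene)
# under high salt than under control conditions -> 4-fold induction.
print(fold_change(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```

The Ct values here are invented; the point is only that an n-cycle shift relative to the reference gene corresponds to a 2^n-fold change in transcript level.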
In the present work, the photo-protection mechanisms in plants and purple bacteria were investigated experimentally at the molecular level. For this purpose, several spectroscopic methods were combined and applied to elucidate the function of carotenoids, pigments of the photosynthetic apparatus, in photo-protection. The experiments focused on the mechanisms involved in quenching of singlet and triplet states of the electronically excited (bacterio)chlorophylls. These photosynthetic reaction events occur on an ultrafast time-scale. Measuring such short-lived events, and understanding the underlying principles, demands some of the most precise experiments and exact measurement technologies currently available. This implies certain requirements for the light source used: a suitable wavelength within the absorption band of the sample, sufficient power, and, most importantly, a pulse duration short compared to the studied reaction. Nowadays, all of these requirements can be met using femtosecond-spectroscopic systems, which produce laser pulses shorter than 100 femtoseconds (fs). Transient absorption spectroscopy provides important information on molecular dynamics by interrogating electronic transitions. The technique is based on the photochemical generation of transient species with femtosecond pump pulses and the measurement of transient absorption changes of the sample using a second, time-delayed probe pulse, which in this case is a spectrally broad white-light pulse.
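The quantity recorded in such a pump-probe experiment is the transient absorption change, computed from the probe intensity transmitted through the sample with and without a preceding pump pulse. A minimal sketch of this standard relation (variable names are illustrative, not taken from the thesis):

```python
import math

def delta_absorbance(i_pumped, i_unpumped):
    """Transient absorption change dA = -log10(I_pump / I_no_pump),
    where i_pumped and i_unpumped are the transmitted probe
    intensities with and without the pump pulse. Repeating this
    for each probe wavelength and pump-probe delay yields the
    full dA(lambda, t) map."""
    return -math.log10(i_pumped / i_unpumped)

# A pump-induced ground-state bleach transmits more probe light,
# giving a negative absorption change.
print(delta_absorbance(1.10, 1.00))
```

Positive signals (excited-state absorption) and negative signals (bleach, stimulated emission) overlap in real spectra, which is why the spectrally broad white-light probe mentioned above is needed to disentangle them.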
The function of APOBEC3G in the innate immune response against the HIV infection of primary cells
(2008)
In the past few years the regulation of HIV-1 replication by cellular cofactors has been a major topic of ongoing research. These factors potentially represent new targets for antiviral therapy, as the emergence of resistance would be minimized. However, this requires a better understanding of the interaction of HIV-1 with these cellular factors and the immune system. The virus infects the cells of the immune system, beginning with macrophages and dendritic cells as primary target cells during transmission. The cellular cofactor APOBEC3G was found to be an antiviral factor in macrophages, dendritic cells and primary T cells. APOBEC3G is a cytidine deaminase which causes G->A hypermutations in the HIV genome. Another protein which has a strong inhibitory effect on HIV infection is interferon alpha (IFN-alpha); however, the exact reason for this has not yet been elucidated. The bacterial component lipopolysaccharide (LPS) also induces a strong antiviral state in macrophages. Micro-array analysis showed that APOBEC3G was upregulated in macrophages after stimulation with both IFN-alpha and LPS. The goal of this work was to investigate the role of APOBEC3G in the innate immune response to HIV-1. For this, the expression of APOBEC3G was examined in HIV-1 target cells after stimulation with IFN-alpha or LPS, and the effect of the protein on viral infection was examined. In the first experiments it could be shown through real-time quantitative PCR that APOBEC3G was overexpressed after stimulation with IFN-alpha or LPS. This result could be shown in monocyte-derived macrophages from different blood donors. It was also shown that the overexpression of APOBEC3G correlated directly with the concentration of IFN-alpha. Through mutational analysis it could then be shown that the overexpressed APOBEC3G protein was also functional in the cells. In order to show that this effect was the result of APOBEC3G, the protein was then regulated through lentiviral vectors.
After transduction of cell lines with lentiviral vectors containing APOBEC3G, the infection was inhibited by up to 70%. The infection was restored after the addition of shRNAs against APOBEC3G. For the further experiments, CD34+ stem cells were used. The cells were transduced the day after thawing with lentiviral vectors containing an eGFP marker gene and either APOBEC3G or shRNAs against APOBEC3G. The CD34+ cells were then cultivated and differentiated into macrophages. The cells transduced with lentiviral vectors containing APOBEC3G showed very high expression of APOBEC3G; however, the cells transduced with shRNA against APOBEC3G did not show a reduction in protein expression. The infectivity of the transduced CD34+ cells and CD34+-derived macrophages was then examined. It was expected that the cells transduced with APOBEC3G would show reduced HIV-1 infection, and that the cells transduced with shRNA against APOBEC3G would show an increase in infection. After transduction and differentiation, the CD34+ cells from the three donors were stimulated and infected with wild-type HIV-1 and Vif-defective HIV-1 virus. Vif is a viral protein that binds APOBEC3G and leads it to the proteasome for degradation. The cells from the first donor transduced with APOBEC3G were very difficult to infect. In general, the shRNA against APOBEC3G had little effect on the course of infection; presumably, the shRNA against APOBEC3G was not active in most of these cells. Only the cells from the first donor showed an increase in HIV infection after transduction with the shRNAs against APOBEC3G; this was most notably the case in the cells stimulated with IFN-alpha, which usually show very little infection. This work showed that APOBEC3G plays an important role in the innate immune response to HIV-1. The effect of APOBEC3G is both cell-type and donor dependent.
Recently, an interesting study also showed that there is a correlation between the expression of APOBEC3G in HIV-infected individuals and their progression to AIDS. A better understanding of the role that APOBEC3G plays in the innate immune response would help in the search for new therapeutic possibilities. This could be done by inhibiting the Vif-APOBEC3G interaction in order to increase the amount of active APOBEC3G in the cells, or by increasing the APOBEC3G concentration in the cells in some other manner.
2-Aminopyrimidinium picrate
(2008)
The geometric parameters of the title compound, C4H6N3+·C6H2N3O7-, are in the usual ranges. While two nitro groups are almost coplanar with the aromatic picrate ring [dihedral angles 3.0 (2) and 4.4 (3)°], the third is significantly twisted out of this plane [dihedral angle 46.47 (8)°]. Anions and cations are connected via N-H...O hydrogen bonds. The molecules crystallize in planes parallel to (1\overline{2}1). Key indicators: single-crystal X-ray study; T = 173 K; mean σ(C–C) = 0.002 Å; R factor = 0.036; wR factor = 0.099; data-to-parameter ratio = 10.9.
A novel experimental approach for studying exotic transitions in few-electron high-Z ions was developed. In this approach, few-electron ions with selectively produced single K-shell holes are used for the investigation of the transition modes that follow the decay of the excited ions. The feasibility of the developed approach was confirmed by an experimental study of the production of low-lying excited states in He-like uranium, produced by K-shell ionization of initially Li-like species. It was found that K-shell ionization is a very selective process that leads to the production of only two excited states, namely the 1s2s 1S0 and 1s2s 3S1. This high level of selectivity stays undisturbed by the rearrangement processes. These experimental findings can be explained using perturbation theory and an independent-particle model, and are a result of the very different impact-parameter dependencies of K-shell ionization and L-intrashell excitation. The L-shell electron can be assumed to stay passive in the collision, whereas the K-shell electron is ionized. It was stressed that the current result might directly be applied to accurate studies of the two-photon decay in He-like ions. Up to now, the experimental challenge in conventional 2E1 experiments has been the photon-photon coincidence technique, which is required to separate the true 2E1 events from the x-ray background associated with single-photon transitions. In contrast, by exploiting K-shell ionization, the spectral distribution of the two-photon decay could be obtained simply by a measurement of the photon emission, using only a single x-ray detector in coincidence with projectile ionization. One further particular advantage arises from the fact that the 1s2p 3P0 state is not populated, and does not contribute to the continuum distribution of the two-photon emission. At high Z, this state also undergoes a two-photon E1M1 decay, which would be indistinguishable from the 2E1 decay of the 1s2s 1S0.
The first measurement of the two-photon energy distribution from the decay of the 1s2s 1S0 level in He-like tin was performed by adopting the technique developed in this thesis. In this technique, excited He-like heavy ions were formed by K-shell ionization of initially Li-like species in collisions with a low-Z gas target, and x-ray spectra following the decay of the He-like ions were measured in coincidence with the up-charged tin ions. The observed intense production of the 2E1 transitions, and a very high level of selectivity, make this process particularly suited for the study of the two-photon continuum, and thus for a detailed investigation of the structure of high-Z He-like systems. The method allowed for a background-free measurement of the distribution of the two-photon decay (2 1S0 -> 1 1S0) in He-like tin. The measured distribution could also be discriminated from that of other He-like ions, and confirmed, for the first time, the fully relativistic calculations. In addition, the feasibility of the method was confirmed by studying another exotic transition, namely the two-electron one-photon transition (TEOP) in Li-like high-Z ions. An experimental investigation of the radiative decay modes of the 1s2s2 state in Li-like heavy ions has been started. In the first dedicated beam time at the ESR, selective population of this state via K-shell ionization of initially Be-like species was achieved. The x-rays produced in this process were measured by a multitude of x-ray detectors, each placed under a different observation angle with respect to the ion beam direction. The spectra associated with projectile electron loss consist (in all cases) of one single x-ray transition, which was attributed to the TEOP decay to the 1s2 2p1/2 level, possibly contaminated by the M1 decay to the 1s2 2s. Thus it was proven that, by adopting the developed approach, one can indeed produce the desired initial state.
This makes this method perfectly suited for studies of TEOP transitions in high-Z systems. An extension of this study, by the inclusion of an electron spectrometer, would also allow for measurements of the autoionization channel, which would provide complete information on the various decay modes of the 1s2s2 state.
Purpose of the study: There is a clinical need for antiretroviral therapy (ART) regimens that simplify dosing and make adherence easier for specific patient groups such as former intravenous drug users (IVDU) receiving opiate substitution. Availability of tenofovir DF (TDF) and other once-daily (OD) agents could offer a viable OD regimen. The 3OD study was designed to evaluate the use of OD HAART in IVDU patients.
Methods: 3OD was a single-arm, multicentre, 48-week trial to assess efficacy, tolerability and adherence to an OD TDF-containing HAART regimen in former IVDU patients receiving opiate substitution. Of 67 patients enrolled, 27 were antiretroviral treatment naïve, 10 were virologically suppressed (<400 copies/mL), and 30 were re-starting HAART without prior virological failure. Opiate substitution was adjusted according to subject symptoms of opiate overdosing or withdrawal. Various methods were used to assess adherence: besides pill count, patients were asked to fill in a MASRI (Medication Adherence Self-Report Inventory) questionnaire and an electronic log pad diary. Calculation of adherence by pill count assumed that unreturned pills had been taken by the subjects.
Summary of results: Overall, 55% (n = 37, ITT, M = F) of patients had viral load <400 copies/mL at week 48. Using an ITT, M = E analysis, 90% (37/41) of patients reached undetectable VL (<400 copies/mL), 56% (23/41 patients) had plasma HIV-1 RNA concentrations <50 copies/mL at week 48. Only 30 patients (45%) completed the full study and the follow-up period. In 51% of patients, TDF adherence was >100% using pill count. MASRI showed adherence rates of 80–100% in 83–85% of patients; however, 15 patients never entered any data. Diary data were entered by 57 patients; diary data were entered for fewer days than patients received treatment (mean difference 113 days, calculated from treatment start and stop dates).
Conclusion: TDF in combination with other OD antiretrovirals in former IVDU patients showed comparable efficacy to that seen in the average HIV-1 infected population. However, measurement of adherence to self-administered HAART via pill count, MASRI or diary may be misleading in this population.
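The pill-count adherence values above 100% follow directly from the stated assumption that every unreturned pill was taken. A hypothetical example (the numbers are invented, not taken from the study):

```python
def pill_count_adherence(dispensed, returned, prescribed):
    """Adherence by pill count, as a percentage, under the
    convention that every unreturned pill was taken."""
    taken = dispensed - returned
    return 100.0 * taken / prescribed

# A patient dispensed 30 pills for a 28-day analysis window returns
# an empty bottle: all 30 pills count as taken against 28 prescribed
# doses, so adherence exceeds 100%.
print(pill_count_adherence(dispensed=30, returned=0, prescribed=28))
```

This is one mechanism by which pill counts can overstate adherence, consistent with the conclusion above that pill count may be misleading in this population.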
It has been recognized that molecular classifications will form the basis for neuropathological diagnostic work in the future. Consequently, in order to reach a diagnosis of Alzheimer's disease (AD), the presence of hyperphosphorylated tau (HP-tau) and beta-amyloid protein in brain tissue must be unequivocal. In addition, the stepwise progression of pathology needs to be assessed. This paper deals exclusively with the regional assessment of AD-related HP-tau pathology. The objective was to provide straightforward instructions to aid in the assessment of AD-related immunohistochemically (IHC) detected HP-tau pathology and to test the concordance of assessments made by 25 independent evaluators. The assessment of progression in 7-µm-thick sections was based on assessment of IHC labeled HP-tau immunoreactive neuropil threads (NTs). Our results indicate that good agreement can be reached when the lesions are substantial, i.e., the lesions have reached isocortical structures (stage V–VI absolute agreement 91%), whereas when only mild subtle lesions were present the agreement was poorer (I–II absolute agreement 50%). Thus, in a research setting when the extent of lesions is mild, it is strongly recommended that the assessment of lesions should be carried out by at least two independent observers.
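One simple per-case notion of absolute agreement like that reported above is the fraction of evaluators assigning the modal stage. The sketch below uses invented ratings and is not necessarily the exact statistic used in the study:

```python
from collections import Counter

def absolute_agreement(stage_calls):
    """Fraction of evaluators assigning the most common (modal)
    stage to a single case."""
    counts = Counter(stage_calls)
    return counts.most_common(1)[0][1] / len(stage_calls)

# 25 hypothetical evaluators staging one isocortical (V-VI) case:
calls = ["V-VI"] * 23 + ["III-IV"] * 2
print(round(100 * absolute_agreement(calls)))  # -> 92 (percent)
```

Averaging such per-case fractions over all cases of a given stage range would give summary figures of the kind quoted (91% for stage V-VI, 50% for I-II).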
Background: In Germany, 17% of 59,000 persons living with HIV/AIDS are female. Accordingly, the research focus in clinical studies as well as in cohort analyses has been almost exclusively on HIV-positive men. As a consequence, there is an urgent need to characterize and evaluate the outcome of HAART in HIV-positive women and to identify special requirements of this particular patient population.
Methods: Cross-sectional multicentre (n = 31 centres) evaluation to observe characteristics of 1,557 HIV-positive women receiving medical care in Germany between June 2007 and March 2008. Data acquisition was performed using standardized questionnaires.
Summary of results: Of 1,557 HIV-positive women studied, 1,191 (77%) received HAART. Mean age was 40 years and average time of known HIV infection was 9 years. Routes of HIV transmission were: 40% heterosexual intercourse in Germany, 36% heterosexual intercourse in a high-prevalence country, 17% IDU, and 7% other routes of transmission. 46% of the women had a migration background. Mean time on antiretroviral treatment was 7 years. 53% of the female participants had been treated with >2 HAART regimens. 47% of the study subjects received a PI-based regimen, 33% an NNRTI-based regimen; 20% were on other combinations. The most commonly used PI and NNRTI were lopinavir/r and nevirapine, respectively. Only 48% of all women under HAART achieved a viral load <40 copies/ml. There was a significant difference between the PI-treated group with 44% of patients <40 copies/ml and the NNRTI-treated group with 56% <40 copies/ml (p = 0.003).
Conclusion: We found that HIV-positive women showed an inferior virological response to HAART compared with the response rates previously published in German cohort analyses dominated by men (>75%). Possible differences in adherence or drug resistance may have influenced these results and are currently being evaluated in ongoing sub-analyses. Of note, the lack of a study arm with male patients is a limitation of this investigation. However, this is partly offset by the fact that good comparative data for the male population are available from other cohorts. We conclude that our results contradict the popular assumption that there are no gender-specific differences in the virological treatment outcome of HAART.
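The reported PI-versus-NNRTI comparison is a test of two proportions. The sketch below shows a standard two-proportion z-test; the group sizes are back-calculated assumptions (47% and 33% of the 1,191 treated women), so the resulting p-value is illustrative and need not reproduce the reported p = 0.003 exactly:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions,
    using the pooled standard error. Returns (z, p_value)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return z, p_value

# Assumed group sizes: ~560 PI-treated and ~393 NNRTI-treated women;
# suppression rates <40 copies/ml from the abstract: 44% vs 56%.
z, p = two_proportion_z(0.44, 560, 0.56, 393)
print(round(z, 2), round(p, 4))
```

With these assumed denominators the difference is clearly significant, in line with the abstract's conclusion.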
The relation between reality and language, the instability of language as a signification system, the representation crisis, and the borders of interpretation are controversial issues that have engaged not only philosophers, but also many authors, translators, and literary critics. Some philosophers, like Derrida, accuse Western thinking of being obsessed with binary oppositions. In Derrida's view, the Western tradition resorts to external references such as God, truth, origin, center and reason to stabilize the signification system. Since these concepts lack an internal sense and there is no transcendental signified that can fix these signifiers, language turns into an unstable system by means of which no fixed meaning can be created. Many authors, like Beckett, Stoppard, and Caryl Churchill, also noticed this impossibility of language. While Derrida's deconstructive approach to this crisis has an epistemological nature, these playwrights present an aesthetic solution by turning the deconstructive potential of language against itself in text and performance. This dissertation aims at exploring their performing methods and dramatic texts to demonstrate how their delogocentric strategies work. By analyzing their plays, I will examine whether their use of signifiers that have no references in reality, intentional misconceptions, disintegrated subjectivities, decentered narratives, and experimental performances can help them undermine the prevailing logocentrism of Western thought. The examination of the change in aesthetic strategies from Beckett, who belongs to the earlier stages of postmodernism, to Caryl Churchill, who performs in a globalized world with an increasing dominance of speed and information, is another aim of this research.
In my view, Beckett's obsession with the unspeakable, absurdity, and the disintegration of subjectivity develops into Stoppard's language games, metadrama, and anti-representation, and culminates in Churchill's anti-narrative texts and pluralistic performances. The monophony of Beckett's dramatic texts is replaced by the polyphony of Churchill's performances, which are a mixture of theater, dance and music. However, all the dramatic texts explored in this dissertation have something in common: they are language games which make no claim to a faithful representation of reality or transcendental truth.
The title compound, C16H14N2O2, was derived from 1-(2-hydroxyphenyl)-3-(2-methoxyphenyl)propane-1,3-dione. The molecule is essentially planar (r.m.s. deviation for all non-H atoms = 0.089 Å). Two intramolecular hydrogen bonds stabilize the molecular conformation and one N-H...O hydrogen bond stabilizes the crystal structure. Key indicators: single-crystal X-ray study; T = 173 K; mean σ(C–C) = 0.003 Å; R factor = 0.035; wR factor = 0.091; data-to-parameter ratio = 9.3.
In 1998 the German universities of Kassel and Giessen organised a workshop on water and solute transport in large drainage basins. The workshop focused on analysing and summarising the state of research, existing problems and perspectives in this research area. It was the second of a series of annual workshops held since 1997 that has become an important discussion forum for the German-speaking research community in the field of hydrological modelling. Now the 11th Workshop on Large-scale Hydrological Modelling returned to the same questions as posed in 1998 in order to evaluate the developments and advances of the last ten years. Based on keynote presentations, the workshop focused on discussion in working groups, where posters were also presented. This volume of "Advances in Geosciences" comprises seven papers based on the poster contributions. At the end of the volume, an overview paper summarises the outcome of the workshop presentations and discussions (Döll et al.). ...
This dissertation analyzes tax policy, corporations, and capital market effects. First, the Savings Directive, which has left a loophole by providing grandfathering for some securities, is examined. It can be shown that investors are not willing to pay a premium for bonds that are exempt from the withholding rate, so it may be concluded that the supply of existing loopholes is large enough to allow tax evaders to continue evasion at no additional cost. Second, tax neutrality towards alternative financing instruments for corporate investment is a ubiquitous demand in the political debate. However, the magnitude of possible efficiency costs of a departure from tax neutrality is hardly discussed. Against this background, this dissertation discusses the theory of capital structure and provides back-of-the-envelope calculations of the possible efficiency cost of a tax distortion of the debt-equity decision. Third, the ex-dividend-day effect in relation to the German tax reform of 2000/2001 is discussed. The abolishment of the imputation system allows reinvestigating the size of the ex-dividend-day effect. I find no structural break in the size of the German ex-dividend-day effect and no evidence of an ex-dividend-day price drop that exceeds the dividend paid. Fourth, an account of the quantitative development of tax legislation in post-war Germany is presented. It can be shown that the legislative output did not increase over the decades and is not affected by a split majority in the upper and lower houses. Finally, it turns out that an increasing fraction of this legislation is passed in December.
The animated statues, robots and monsters in German Romantic narratives, as I will argue throughout, tell us something about the Romantic conception of the mutually embedded relationship between art and life. In the works of the German Romantics, the theme of artificial humans thus has an essentially autopoetic, or self-reflexive, function (cf. Schmitz-Emans 1993, 168f). It corresponds in exemplary fashion with Friedrich Schlegel's idea of transcendental poetry, which should always be “poetry and simultaneously the poetry of poetry” (Schlegel [1985], 50). In the theme of artificial life as well as in transcendental poetry, the observation of the world is integrally bound up with the observation of art and the self (cf. Kremer [1996], 8ff).
Increasingly, individuals are in charge of their own financial security and are confronted with ever more complex financial instruments. However, there is evidence that many individuals are not well-equipped to make sound saving decisions. This paper demonstrates widespread financial illiteracy among the U.S. population, particularly among specific demographic groups. Those with low education, women, African-Americans, and Hispanics display particularly low levels of literacy. Financial literacy impacts financial decision-making. Failure to plan for retirement, lack of participation in the stock market, and poor borrowing behavior can all be linked to ignorance of basic financial concepts. While financial education programs can result in improved saving behavior and financial decision-making, much can be done to improve these programs’ effectiveness.
Background This study was carried out to compare the HRQoL of patients in general practice with differing chronic diseases with the HRQoL of patients without chronic conditions, to evaluate the HRQoL of general practice patients in Germany compared with the HRQoL of the general population, and to explore the influence of different chronic diseases on patients' HRQoL, independently of the effects of multiple confounding variables. Methods A cross-sectional questionnaire survey including the SF-36, the EQ-5D and demographic questions was conducted in 20 general practices in Germany. 1009 consecutive patients aged 15–89 participated. The SF-36 scale scores of general practice patients with differing chronic diseases were compared with those of patients without chronic conditions. Differences in the SF-36 scale/summary scores and proportions in the EQ-5D dimensions between patients and the general population were analyzed. Independent effects of chronic conditions and demographic variables on the HRQoL were analyzed using multivariable linear regression and polynomial regression models. Results The HRQoL for general practice patients with differing chronic diseases tended to show more physical than mental health impairments compared with the reference group of patients without chronic conditions. Patients in general practice in Germany had considerably lower SF-36 scores than the general population (P < 0.001 for all) and showed significantly higher proportions of problems in all EQ-5D dimensions except for the self-care dimension (P < 0.001 for all). The mean EQ VAS for general practice patients was lower than that for the general population (69.2 versus 77.4, P < 0.001). The HRQoL for general practice patients in Germany seemed to be more strongly affected by diseases like depression, back pain, OA of the knee, and cancer than by hypertension and diabetes. 
Conclusion General practice patients with differing chronic diseases in Germany had impaired quality of life, especially in terms of physical health. The independent impacts on the HRQoL differed depending on the type of chronic disease. Findings from this study might help health professionals pay greater attention to the diseases with the strongest impact on HRQoL in primary care, from the patient's perspective.
In a charter issued on 5 May 1513, the mayor and city council of the city of Freiburg/Breisgau reported that several citizens wanted to be allowed to establish a bruderschaft der sengerye, a confraternity of singing. “God, the almighty, would be praised thereby, the souls would be consoled, and all men listening to the concerts would be kept from blasphemy, gaming and other secular vices” (“gott der allmechtig [würde] dardurch gelopt, die selen getröst und die menschen zu zyten, so sy dem gesang zuhorten, von gotslesterung, ouch vom spyl vnd anderer weltlicher uppigkeyt gezogen”). Considering not least the “positive effects on the poor souls” (“guettaeten, so den armen selen dardurch nachgeschechen mocht”), the request was granted. But the petitioners had to establish their bruderschaft in exactly the form that is described in detail in the regulations (ordnung) added to the request and cited “word for word” (“von wort zu wort”) in 17 articles in the foundation charter of the confraternity.
An interior delta in the lower course of the Ntem River near the sub-prefecture Ma’an was identified through interpretation of satellite images, topographical maps of SW Cameroon, geological and hydrological references, and a reconnaissance field trip to the study area. Here, neotectonic processes have initiated the establishment of a ‘sediment trap’ (step fault), which, in combination with environmental changes, strongly shaped the fluvial morphology. At times this led to temporary lacustrine and palustrine conditions in parts of this river section. Inside the interior delta an anastomosing, multi-branched river system has developed, which contains ‘stillwater locations’, periodically inundated sections, islands and rapids. Following geomorphological, physiogeographical and sedimentological research approaches, the alluvial plain has been prospected and studied extensively. 91 hand-corings, including three NE–SW transects, were carried out on river benches, levees, cut-off and periodical branches, islands as well as terraces throughout the entire alluvial plain and have revealed multi-layered, sandy to clayey alluvia reaching down to 440 cm depth. At many locations, fossil organic horizons and palaeosurfaces were discovered, containing valuable palaeoenvironmental proxy data. At these sites, additional detailed stratigraphical analysis (close-meshed hand-coring and exposure digging) provided a comprehensive insight into the stratification (lamination) of the alluvia, clarifying the processes and conditions that prevailed in the catchment area during the period of their deposition. 32 radiocarbon dates of macro-remains (leaves, wood), charcoal and organic sediment sampled from these horizons yielded ages between 48,230 ± 6,411 and 217 ± 46 years BP (not calibrated). 
This underlines the importance of the alluvia as an additional, innovative palaeoarchive of proxy data contributing to the reconstruction of the palaeoenvironment and palaeoclimate of western Equatorial Africa. Further examination of the alluvia will provide additional information not only on the dynamics of vegetation, climate and hydrology (esp. fluvial morphology) in SW Cameroon since the ‘First Millennium BC Crisis’ (around 3,000 years BP), the main focus of the DFG research project, but also on the conditions prevailing since the Late Pleistocene, during the Last Glacial Maximum (~18,000 years BP), the Younger Dryas impact (~11,000 years BP) and the ‘African Humid Period’ (~9,000–6,000 years BP). δ13C values (–31.4 to –26.4‰) indicate that rain forest prevailed at the particular drilling sites during the corresponding time period (rain forest refuge theory). The sampled macro-remains all indicate rain-forest-dominated ecosystems, which were able to persist in fluvial habitats even during arid periods.
The influence of visual tasks on short- and long-term memory for visual features was investigated using a change-detection paradigm. Subjects completed 2 tasks: (a) describing objects in natural images, reporting a specific property of each object when a crosshair appeared above it, and (b) viewing a modified version of each scene, and detecting which of the previously described objects had changed. When tested over short delays (seconds), no task effects were found. Over longer delays (minutes), we found that the describing task influenced what types of changes were detected in a variety of explicit and incidental memory experiments. Furthermore, we found surprisingly high performance in the incidental memory experiment, suggesting that simple tasks are sufficient to instill long-lasting visual memories. Keywords: visual working memory, natural scenes, natural tasks, change detection
Friedrich Schlegel's lasting contribution to linguistics is usually seen in the impact that his book "Über die Sprache und Weisheit der Indier" from 1808 left on comparative linguistics and on the study of Sanskrit. Schlegel was one of the first European scholars to have studied Sanskrit extensively and he made a number of translations of Sanskrit literature into German which make up one third of "Über die Sprache und Weisheit der Indier". Schlegel's book is widely regarded as a founding document both of comparative linguistics and of indology, a fact which is quite remarkable in light of the development of Schlegel's thought after this text. His interest in Indian studies ceased more or less directly with the publication of this work, while his thoughts on language became more and more suffused by transcendental philosophy.
The impact of European integration on the German system of pharmaceutical product authorization
(2008)
The European Union has evolved since 1965 into an influential political player in the regulation of pharmaceutical safety standards. The objective of establishing a single European market for pharmaceuticals makes it necessary for member-states to adopt uniform safety standards and marketing authorization procedures. This article investigates the impact of the European integration process on the German marketing authorization system for pharmaceuticals. The analysis shows that the main focal points and objectives of European regulation of pharmaceutical safety have shifted since 1965. The initial phase saw the introduction of uniform European safety standards as a result of which Germany was obliged to undertake “catch-up” modernization. From the mid-1970s, these standards were extended and specified in greater detail. Since the mid-1990s, a process of reorientation has been under way. The formation of the European Agency for the Evaluation of Medicinal Products (EMEA) and the growing importance of the European authorization procedure, combined with intensified global competition on pharmaceutical markets, are exerting indirect pressure for EU member-states to adjust their medicines policies. Consequently, over the past few years Germany has been engaged in a competition-oriented reorganization of its pharmaceutical product authorization system the outcome of which will be to give higher priority to economic interests.
The dynamics of many systems are described by ordinary differential equations (ODEs). Solving ODEs with standard methods (i.e. numerical integration) requires a large amount of computing time but only a small amount of memory. For some applications, e.g. short-term weather forecasting or real-time robot control, long computation times are prohibitive. Is there a method that uses less computing time (but has drawbacks in other respects, e.g. memory), so that the computation of ODE solutions becomes faster? We discuss this question under the assumption that the alternative computation method is a neural network trained on the ODE dynamics, and we compare both methods at the same approximation error. This comparison is done with two different errors. First, we use the standard error, which measures the difference between the approximation and the solution of the ODE but is hard to characterize. In many cases, however, as for physics engines used in computer games, the shape of the approximation curve is important rather than the exact values of the approximation. Therefore, we introduce a subjective error based on the Total Least Square Error (TLSE), which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it depending on the resolution of the interpolation points and the inter-point distance. Our conclusion yields a method to evaluate where neural nets are advantageous over numerical ODE integration and where they are not. Index Terms—ODE, neural nets, Euler method, approximation complexity, storage optimization.
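The baseline the abstract compares against can be illustrated with the simplest of the standard integrators it names, the explicit Euler method. The following sketch (an illustration, not the authors' implementation) integrates dy/dt = −y and measures the approximation error against the analytic solution exp(−t); the step count n stands in for the computing-time cost that a trained network would trade for storage:

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 with the explicit Euler method, using n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    ys = [y0]
    for _ in range(n):
        y = y + h * f(t, y)  # one Euler step: follow the tangent for step size h
        t = t + h
        ys.append(y)
    return ys

# Test problem: dy/dt = -y, y(0) = 1, with exact solution y(t) = exp(-t).
n = 1000
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, n)
exact = [math.exp(-i / n) for i in range(n + 1)]

# Standard (pointwise) error between the numerical and the exact solution.
max_err = max(abs(a - e) for a, e in zip(approx, exact))
```

Since Euler is a first-order method, halving the step size roughly halves `max_err` while doubling the number of function evaluations; this computing-time/accuracy trade-off is exactly what the paper pits against the fixed lookup cost of a trained network.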
This paper discusses the implications of transnational media production and diasporic networks for the cultural politics of migrant minorities. How are fields of cultural politics transformed if Hirschman’s famous options ‘exit’ and ‘voice’ no longer constitute mutually exclusive responses to dissent within a nation-state, but are modes of action that can combine and build upon each other in the context of migration and diasporic media activism? Two case studies are discussed in more detail, relating to Alevi amateur television production in Germany and to a Kurdish satellite television station that reaches out to a diaspora across Europe and the Middle East. Keywords: migrant media, transnationalism, Alevis, Kurds, Turkey, Germany
After the pioneering German “Aktiengesetz” of 1965 and the Brazilian “Lei das Sociedades Anónimas” of 1976, Portugal became the third country in the world to enact specific regulation on groups of companies. The Code of Commercial Companies (“Código das Sociedades Comerciais”, hereinafter abbreviated CSC), enacted in 1986, contains a unitary set of rules regulating the relationships between companies, in general, and the groups of companies, in particular (arts. 481° to 508°-E CSC). With this set of rules, the Portuguese legislator has dealt with one of the major topics of modern Company Law. While this branch of law is traditionally conceived as the law of the individual company, modern economic reality is characterized by the massive emergence of large-scale enterprise networks, where parts of a whole business are allocated and insulated in several legally independent companies subject to a unified economic direction. As Tom HADDEN put it: “Company lawyers still write and talk as if the single independent company, with its shareholders, directors and employees, was the norm. In reality, the individual company ceased to be the most significant form of organization in the 1920s and 1930s. The commercial world is now dominated both nationally and internationally by complex groups of companies”. This trend, now observable in any of the largest economies in the world, also holds true for small markets such as Portugal. Although the Portuguese economy is still dominated by small and medium-sized enterprises, the organizational structure of the group has always been extremely common. During the 1970s, it was estimated that the seven largest groups of companies owned about 50% of the equity capital of all domestic enterprises and were alone responsible for three quarters of the internal national product. 
Such a trend has continued and even intensified in the following decades, surviving different political and economic scenarios: during the 1980s, as a result of the state nationalization of these groups, an enormous public group with more than one thousand controlled companies was created (“IPE - Instituto de Participações do Estado”); and from the 1990s until today, thanks to the reprivatisation movement and the opening of our national market, we have witnessed the re-emergence of some large private groups, composed of several hundred subsidiaries each, some of which are listed on foreign stock exchanges (e.g., in the banking sector, “BCP – Banco Comercial Português”; in the industrial area, “SONAE”; and in the media and communication area, “Portugal-Telecom”).
The market reaction to legal shocks and their antidotes : lessons from the sovereign debt market
(2008)
This Article examines the market reaction to a series of legal events concerning the judicial interpretation of the pari passu clause in sovereign debt instruments. More generally, the Article provides insights into the reactions of investors (predominantly financial institutions), issuers (sovereigns), and those who draft bond covenants (lawyers), to unanticipated changes in the judicial interpretation of certain covenant terms.
Reform of the securities class action is once again the subject of national debate. The impetus for this debate is the reports of three different groups – The Committee on Capital Market Regulation, The Commission on the Regulation of U.S. Capital Markets In the 21st Century, and McKinsey & Company. Each of the reports focuses on a single theme: how the contemporary regulatory culture places U.S. capital markets at a competitive disadvantage to foreign markets. While multiple regulatory forces are targeted by each report’s call for reform, each of the reports singles out securities class actions as one of the prime villains that place U.S. capital markets at a competitive disadvantage. The reports’ recommendations range from insignificant changes to drastic curtailments of private class actions. Surprisingly, these current-day cries echo calls for reform heeded by Congress in the not too distant past. Major reform of the securities class action occurred with the Private Securities Litigation Reform Act of 1995. Among the PSLRA’s contributions is the introduction of procedures by which the court chooses from among competing petitioners a lead plaintiff for the class. The statute commands that the petitioner with the largest financial loss suffered as a consequence of the defendant’s alleged misrepresentation is presumed to be the most adequate plaintiff. Thus, the lead plaintiff provision supplants the traditional “first to file” rule for selecting the suit’s plaintiff with a mechanism that seeks to harness the plaintiff’s economic self-interest to the suit’s prosecution. Also, by eliminating the race to be the first to file, the lead plaintiff provision seeks to avoid “hair trigger” filings by overly eager plaintiffs’ counsel, which Congress believed too frequently gave rise to incomplete and insubstantially pled causes of action. 
The PSLRA also introduced for securities class actions a heightened pleading requirement as well as a bar to the plaintiff obtaining any discovery prior to the district court disposing of the defendants’ motions to dismiss. By introducing the requirement that allegations involving fraud must be pled not only with particularity, but also that the pled facts must establish a “strong inference” of fraud, the PSLRA cast aside, albeit only for securities actions, the much lower notice pleading requirement that has been a fixture of American civil procedure for decades. Substantive changes to the law were also introduced by the PSLRA. With few exceptions, joint and several liability was replaced by proportionate liability so that a particular defendant’s liability is capped by that defendant’s relative degree of fault. Similarly, contribution rights among co-violators are also based on the proportionate fault of each defendant. Three years after the PSLRA, Congress returned to the topic again by enacting the Securities Litigation Uniform Standards Act; this provision was prompted by aggressive efforts of plaintiff lawyers to bypass the limitations of the PSLRA, most notably the bar to discovery and the higher pleading requirement, by bringing suit in state court. Post-SLUSA, securities fraud class actions are exclusively the domain of the federal court. In this paper, we examine the impact of the PSLRA and, more particularly, the impact of the type of lead plaintiff on the size of settlements in securities fraud class actions. We thus provide insight into whether the type of plaintiff that heads the class action impacts the overall outcome of the case. Furthermore, we explore possible indicia that may explain why some suits settle for extremely small sums – small relative to the “provable losses” suffered by the class, small relative to the asset size of the defendant company, and small relative to other settlements in our sample. 
This evidence bears heavily on the debate over “strike suits.” Part I of this paper sets forth the contemporary debate surrounding the need for further reforms of securities class actions. In this section, we set forth the insights advanced in three prominent reports focused on the competitiveness of U.S. capital markets. In Part II we first provide descriptive statistics of our extensive data set, and then use multivariate regression analysis to explore the underlying relationships. In Part III, we closely examine small settlements for clues to whether they reflect evidence of strike suits. We conclude in Part IV with a set of policy recommendations based on our analysis of the data. Our goals in this paper are more modest than those of the Committee Report, the Chamber Report and the McKinsey Report, each of which called for wide-ranging reforms: we focus on how the PSLRA changed securities fraud settlements so as to determine whether the reforms it introduced accomplished at least some of the Act’s important goals. If the PSLRA was successful, and we think it was, then one must be somewhat skeptical of the need for further cutbacks in private securities class actions so soon after the Act was passed.