Oligonucleotides suppress PKB/Akt and act as superinductors of apoptosis in human keratinocytes
(2009)
DNA oligonucleotides (ODN) applied to an organism are known to modulate the innate and adaptive immune system. Previous studies showed that a CpG-containing ODN (CpG-1-PTO) and, interestingly, also a non-CpG-containing ODN (nCpG-5-PTO) suppress inflammatory markers in skin. In the present study it was investigated whether these molecules also influence cell apoptosis. Here we show that CpG-1-PTO, nCpG-5-PTO, and also natural DNA suppress the phosphorylation of PKB/Akt in a cell-type-specific manner. Interestingly, only epithelial cells of the skin (normal human keratinocytes, HaCaT and A-431) show a suppression of PKB/Akt. This suppressive effect depends on ODN length, sequence and backbone. Moreover, it was found that TGFα-induced levels of PKB/Akt and EGFR were suppressed by the ODN tested. We hypothesize that this suppression might facilitate programmed cell death. Testing this hypothesis, we found an increase of apoptosis markers (caspase 3/7, 8, 9, cytosolic cytochrome c, histone-associated DNA fragments, apoptotic bodies) when cells were treated with ODN in combination with low doses of staurosporine, a well-known pro-apoptotic stimulus. In summary, the present data demonstrate that DNA is a modulator of apoptosis which specifically targets skin epithelial cells.
Global warming is expected to be associated with diverse changes in freshwater habitats in north-western Europe. Increasing evaporation, lower oxygen concentration due to increased water temperature and changes in precipitation patterns are likely to affect the survival ratio and reproduction rate of freshwater gastropods (Pulmonata, Basommatophora). This work is a comprehensive analysis of the climatic factors influencing their ranges, both in the past and in the near future. A macroecological approach showed that for a great proportion of genera the ranges were projected to contract by 2080, even if unlimited dispersal was assumed. The forecast warming predicted the emergence of newly suitable areas in the cooler northern ranges, but also drastically reduced the available habitat in the southern part of the studied region. In order to better understand the range dynamics in the past and the postglacial colonisation patterns, an approach combining ecological niche modelling and phylogeography was used for two model species, Radix balthica and Ancylus fluviatilis. Phylogeographic model selection on a COI mtDNA dataset confirmed that R. balthica most likely spread from two disjunct central European refugia after the last glacial maximum. The phylogeographic analysis of A. fluviatilis, using 16S and COI mtDNA datasets, also inferred central European refugia. The absence of niche conservatism (adaptive potential) inferred for A. fluviatilis puts a cautionary note on the use of climate envelope models to predict the future range of this species. The other model species, however, exhibited strong niche conservatism, which lends confidence to such predictions. A profound faunal shift will take place in Central Europe within the next century, either permitting the establishment of species currently living south of the studied region or the proliferation of organisms relying on the same food resources.
This study points out the need for further investigations on the dispersal modes of freshwater snails, since the future range size of a species depends on its ability to establish in newly available habitats. Likewise, the mixed mating system of these organisms gives them the possibility to found a new population from a single individual, which will probably affect colonisation success and needs further investigation.
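The climate-envelope reasoning above can be illustrated with a minimal BIOCLIM-style sketch. All occurrence records and grid cells below are invented for illustration; this is not the thesis's data or modelling pipeline, which used continuous suitability scores and many bioclimatic variables.

```python
# Minimal BIOCLIM-style climate envelope sketch (illustrative only).
# A species is assumed "present" wherever every climate variable lies
# within the range observed at its known occurrence sites; a uniform
# warming then shifts cells outside the fitted envelope.

def fit_envelope(occurrences):
    """Per-variable (min, max) bounds from occurrence records."""
    n_vars = len(occurrences[0])
    return [(min(o[i] for o in occurrences), max(o[i] for o in occurrences))
            for i in range(n_vars)]

def suitable(cell, envelope):
    """A cell is suitable if every variable falls inside its bounds."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(cell, envelope))

# Hypothetical occurrences: (mean annual temperature in C, annual precipitation in mm)
occurrences = [(8.0, 700), (9.5, 650), (10.0, 800), (11.0, 750)]
envelope = fit_envelope(occurrences)

# A toy landscape of grid cells, then the same cells under +3 C warming
current = [(7.0, 700), (9.0, 720), (10.5, 760), (12.0, 740)]
future = [(t + 3.0, p) for t, p in current]

range_now = sum(suitable(c, envelope) for c in current)
range_2080 = sum(suitable(c, envelope) for c in future)
print(range_now, range_2080)  # the projected range contracts under warming
```

The toy version only shows why a fitted envelope contracts when the climate moves outside the observed range; it ignores dispersal limits and the niche-conservatism caveat raised for A. fluviatilis.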
In the outer plexiform layer (OPL) of the mammalian retina, the photoreceptors are wired to the horizontal and bipolar cells. This first level of neuronal wiring in the visual system harbours a highly complex architecture of chemical and electrical synapses, which enables the modulation of the light signal as well as the splitting of the signals into parallel transmission pathways. In this doctoral thesis, various synapse systems in the OPL of mouse, rabbit and macaque retinae were examined by light and electron microscopy using immunohistochemical staining techniques. In the mouse retina, the anatomical properties of the pedicles of blue-sensitive (S) cones were investigated. The S-cone pedicles were about 15% smaller than those of the M-cones, whereas the invagination areas at the S-cone pedicle were about 35% smaller. This was accompanied by a clear reduction in horizontal cell contacts. The number of glutamate receptor (GluR) subunits GluR1 and GluR5 expressed postsynaptically by OFF bipolar cells was almost 50% lower at S-cone pedicles than at M-cones. This finding reflected the small number of synaptic contacts made by OFF bipolar cells at S-cones; OFF bipolar cell types 1 and/or 2 were responsible for this reduction. These findings are a first indication of the so-called green-OFF pathway in the mouse retina. In the macaque retina, the distribution of protocadherin β16 (Pcdh β16) was investigated. It could be demonstrated on the postsynaptic side at the photoreceptor terminals: Pcdh β16 was located on the invaginating tips of dendrites of H1 horizontal cells as well as at their desmosome-like junctions below the cone pedicles. At these sites, the Pcdh β16-immunoreactive puncta coincided with the GluR subunits GluR2-4 and GluR6/7 expressed there by H1 cells.
In the course of this analysis, a colocalisation of these AMPA (GluR2-4) and kainate (GluR6/7) receptors at the desmosome-like junctions was also observed. Furthermore, an electron microscopic examination showed that Pcdh β16 is also found at flat synapses in a triad-associated position at the cone pedicle. This can be taken as evidence for expression by flat midget bipolar cells or bipolar cells of type DB3, and suggests that this protein is involved in the formation of cell-type-specific contacts and synapses. A comparative study of the synaptic distribution of the cytoplasmic scaffolding protein zonula occludens-1 (ZO-1) in macaque, rabbit and mouse retinae revealed a very uniform as well as cell-type-specific distribution pattern. In all species, ZO-1 coincided with connexin 36 (Cx36) at the gap junctions between photoreceptor terminals and between the dendrites of OFF bipolar cells. In addition, ZO-1 is associated with gap junctions of particular horizontal cells: in the OPL of the rabbit retina it coincided with Cx50, the connexin of the axonless A-type horizontal cells. At the large gap junctions between the primary dendrites of these cells, however, ZO-1 formed a fence-like structure surrounding the gap junctions instead of being directly colocalised with the connexins. A direct interaction with the connexins is largely ruled out by this spatial arrangement, which points to a function of ZO-1 as a tight- or adherens-junction protein. In the mouse retina, ZO-1 coincided with Cx57 at dendro-dendritic gap junctions between mouse horizontal cells. In the macaque retina, the connexins of the horizontal cells are not yet known; nevertheless, ZO-1 could be assigned to the dendro-dendritic gap junctions between H1 horizontal cells.
Furthermore, these dendro-dendritic gap junctions were closely associated with the GluRs below the cone pedicles at the desmosome-like junctions. The calcium influx enabled by the GluRs could, owing to this spatial proximity to the connexins, modulate the conductance of the electrical synapses.
Lentiviral vectors mediate gene transfer into dividing and most non-dividing cells, stably integrating the transgene into the host cell genome. For this reason, lentiviral vectors are a promising tool for gene therapy. However, the safety and efficiency of lentiviral gene transfer still need to be optimised. Ideally, cell entry should be restricted to the cell population relevant for a particular therapeutic application. Furthermore, lentiviral vectors able to transduce quiescent lymphocytes are desirable. Although many approaches have been followed to engineer retroviral envelope proteins, an effective and universally applicable system for retargeting lentiviral cell entry is still not available. Just before the experimental work of this thesis was started, retargeting of measles virus (MV) cell entry was achieved. This virus has two types of envelope glycoproteins, the hemagglutinin (H) protein responsible for receptor recognition and the fusion (F) protein mediating membrane fusion. For retargeting, the H protein was mutated in its interaction sites for the native MV receptors and a ligand or a single-chain antibody (scAb) was fused to its ectodomain. It was hypothesised that the retargeting system of MV could be transferred to lentiviral vectors by pseudotyping human immunodeficiency virus-1 (HIV-1) derived vector particles with the MV glycoproteins. As the unmodified MV glycoproteins did not pseudotype HIV vectors, two F and 15 H protein variants carrying stepwise truncations or amino acid (aa) exchanges in their cytoplasmic tails were screened for their ability to form MV-HIV pseudotypes. The combinations Hcd18/Fcd30, Hcd19/Fcd30 and Hcd24+4A/Fcd30 led to the most efficient pseudotype formation, with titers above 10^6 transducing units/ml using concentrated particles.
The F cytoplasmic tail was truncated by 30 aa and the H cytoplasmic tail by 18, 19 or 24 residues, with four alanines added after the start methionine in the latter case. Western blot analysis indicated that particle incorporation of the MV glycoproteins was enhanced upon truncation of their cytoplasmic tails. With the MV-HIV vectors, high titers were obtained on different cell lines expressing one or both MV receptors, whereas MV receptor-negative cells remained untransduced. Titers were enhanced using an optimal H to F plasmid ratio (1:7) during vector particle production. Based on the described pseudotyping with the MV glycoprotein variants, HIV vectors retargeted to the epidermal growth factor receptor (EGFR) or the B cell surface marker CD20 were generated. For the production of the retargeted vectors MVaEGFR-HIV and MVaCD20-HIV, Fcd30 was used together with a native-receptor-blind Hcd18 protein displaying at its ectodomain either the ligand EGF or a scAb directed against CD20. With these vectors, gene transfer into target receptor-positive cells was several orders of magnitude more efficient than into control cells. The almost complete absence of background transduction of non-target cells was demonstrated, for example, in mixed cell populations, where the CD20-targeting vector selectively eliminated CD20-positive cells upon suicide gene transfer. Remarkably, transduction of activated primary human CD20-positive B cells was much more efficient with the MVaCD20-HIV vector than with the standard pseudotype vector VSV-G-HIV. Even more surprisingly, MVaCD20-HIV vectors were able to transduce quiescent primary human B cells, which until then had been resistant to lentiviral gene transfer. The most critical step during the production of MV-HIV pseudotypes was the identification of H cytoplasmic tail mutants that allowed pseudotyping while retaining the fusion helper function.
In contrast to previously inefficient targeting strategies, the success of this novel targeting system most likely rests on the separation of the receptor recognition and fusion functions onto two different proteins. Furthermore, with the CD20-targeting vector, transduction of quiescent B cells was demonstrated for the first time. Our data and the literature suggest that CD20 binding and hyper-cross-linking by the vector particles result in calcium influx and thus activation of quiescent B cells. Alternatively, this feature may be based on a residual binding activity of the MV glycoproteins to the native MV receptors that is insufficient for entry but induces cytoskeleton rearrangements dissolving the post-entry block of HIV vectors. Hence, this thesis combined efficient retargeting of lentiviral vectors with transduction of quiescent cells. This novel targeting strategy should be easily adaptable to many other target molecules by extending the modified MV H protein with appropriate specific domains or scAbs. It should now be possible to tailor lentiviral vectors for highly selective gene transfer into any desired target cell population with an unprecedented degree of efficiency.
Neutron stars are very dense objects: one teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that an object falling from just one meter high would hit the surface of the neutron star at two thousand kilometers per second. In such dense bodies, particles different from the ones present in atomic nuclei, the nucleons, can exist. These particles can be hyperons, which carry non-zero strangeness, or broader resonances. There can also be different states of matter inside neutron stars, such as meson condensates and, if the density is high enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account, the most important of them being the restoration of chiral symmetry. This symmetry is spontaneously broken, a fact related to the presence of a condensate of scalar quark-antiquark pairs, which for this reason is called the chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of vacuum is far from emptiness: it is full of virtual particles that are constantly created and annihilated, their existence allowed by the uncertainty principle. At very high temperature or density, when the composite particles are dissolved into their constituents, the chiral condensate vanishes and chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars we use a model called the non-linear sigma model. This is an effective relativistic quantum model that was developed to describe systems of hadrons interacting via meson exchange. The model was constructed from symmetry relations, which allow it to be chiral invariant.
The first consequence of this invariance is that there are no bare mass terms in the Lagrangian density, so that all, or most, of the particle masses come from the interactions with the medium. There are still other interesting features in neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry. In a normal nucleus, the numbers of protons and neutrons are roughly equal; in a neutron star the number of neutrons is much higher than that of protons. The resulting extra energy (called Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence, in early stages of the neutron star evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages, and consequently the maximum mass that the star can support against gravity is smaller. This, among many other features, shows how the microscopic phenomena of the star are reflected in its macroscopic properties. Another important property of neutron stars is charge neutrality. It is a required assumption for stability in neutron stars, but there are others; one example is chemical equilibrium, which means that the number of particles of each kind is not conserved, but that they are created and annihilated through specific reactions that happen at the same rate in both directions. Although the space-time of special relativity, the Minkowski space, can be used to calculate the microscopic physics of neutron stars, this is not true for the global properties of the star; in this case general relativity has to be used. The solution of Einstein's equations simplified to static, spherical and isotropic stars corresponds to the configurations in which the star is in hydrostatic equilibrium. This means that the internal pressure, coming mainly from the Fermi energy of the neutrons, balances gravity and prevents the collapse.
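The hydrostatic-equilibrium configurations just described are solutions of the Tolman-Oppenheimer-Volkoff (TOV) equation. A minimal numerical sketch follows, using an illustrative Gamma = 2 polytropic equation of state and a standard central density in geometrized units, not the non-linear sigma model EOS of this work:

```python
# Sketch: forward-Euler integration of the TOV equation for a static,
# spherical star in geometrized units (G = c = solar mass = 1).
# The polytrope p = K * rho**2 with K = 100 and the central density
# are a common illustrative test case, not the EOS of the thesis.
import math

K = 100.0  # polytropic constant (illustrative)

def eos_p(rho):    # pressure from rest-mass density
    return K * rho ** 2

def eos_rho(p):    # invert the Gamma = 2 polytrope
    return (p / K) ** 0.5

def tov(rho_c, dr=1e-3):
    """Integrate dp/dr and dm/dr outward until the pressure vanishes."""
    r, m, p = dr, 0.0, eos_p(rho_c)
    while p > 1e-12:
        rho = eos_rho(p)
        # TOV: relativistic corrections multiply the Newtonian -rho*m/r**2 term
        dp = -(rho + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        dm = 4.0 * math.pi * r**2 * rho
        p += dp * dr
        m += dm * dr
        r += dr
    return r, m  # stellar radius and gravitational mass (geometrized units)

R, M = tov(rho_c=1.28e-3)
print(R, M)  # a softer EOS (e.g. with hyperons) would lower the maximum mass
```

This simple Euler scheme already shows the key point of the surrounding text: the EOS fixes the mass-radius relation, so adding degrees of freedom that soften the EOS lowers the supported mass.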
When rotation is included the star becomes more stable and, consequently, can be more massive. The movement also makes it non-spherical, which requires the metric of the star to also be a function of the polar coordinate. Another important feature that has to be taken into account is the dragging of the local inertial frame. It generates centrifugal forces that do not originate in interactions with other bodies, but in the non-rotation of the frame of reference within which observations are made. These modifications are introduced through Hartle's approximation, which solves the problem by applying perturbation theory. In the mean field approximation, the couplings as well as the parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and Sigma-, in the case in which the baryon octet is included, and Lambda and Delta-,0,+,++, in the case in which the baryon decuplet is included. The leptons are included to ensure charge neutrality. We chose to proceed with our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depth of their potentials in nuclei. In this case the chiral symmetry restoration can be observed through the behavior of the related order parameter. The symmetry begins to be restored inside neutron stars and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. Finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system.
Different schemes are considered, with constant temperature, metric-dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, which also controls the amount of electrons and protons (for charge neutrality). The balance between these two features is delicate and influenced mainly by baryon number conservation. Isolated stars have a fixed number of baryons, which creates a link between different stages of the cooling. The maximum masses allowed in each stage of the cooling process are determined: the stage with high entropy and trapped neutrinos, the deleptonized stage with high entropy, and the cold stage in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included the star becomes more stable and, consequently, can be more massive. The movement also deforms it, requiring the metric of the star to include modifications that are introduced through the use of perturbation theory. The analysis of the first stages of the neutron star, when it is called a proto-neutron star, gives certain constraints on the possible rotation frequencies in the colder stages. Instability windows are calculated in which the star can be stable during certain stages but collapses into a black hole during the cooling process. In the last part of the work the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential for the order parameter for deconfinement, the Polyakov loop, connects the physics at low chemical potential and high temperature in the QCD phase diagram with the high chemical potential and low temperature part. This is done through the introduction of a chemical potential dependence in the already temperature-dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can draw a phase diagram for symmetric as well as for star matter.
The diagram contains a crossover region as well as a first-order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD results, including the position of the critical point. Finally, this matter containing different degrees of freedom (depending on which phase of the diagram we are in) is used to calculate hybrid star properties.
Shape complementarity is a compulsory condition for molecular recognition. In our 3D ligand-based virtual screening approach called SQUIRREL, we combine shape-based rigid body alignment with fuzzy pharmacophore scoring. Retrospective validation studies demonstrate the superiority of methods which combine both shape and pharmacophore information on the family of peroxisome proliferator-activated receptors (PPARs). We demonstrate the real-life applicability of SQUIRREL by a prospective virtual screening study, where a potent PPARalpha agonist with an EC50 of 44 nM and 100-fold selectivity against PPARgamma has been identified...
For libraries, dealing with electronic resources is one of the greatest challenges of the 21st century. The collection, cataloguing and permanent preservation of electronic resources vastly expands the range of tasks libraries face today. Libraries must also address the building of long-term digital repositories.
The challenge of long-term digital preservation concerns all memory institutions - libraries, archives, museums - and can be mastered effectively and affordably only through cooperation. With this in mind, the competence network for long-term digital preservation "nestor" was founded in Germany in 2003, with qualification, standardisation and networking as its main areas of work.
The transmission of cultural heritage, traditionally one of the tasks of libraries, archives and museums, has become considerably more demanding with the introduction of digital media and innovative information technologies. Nowadays, more and more information is created and published (only) in digital form. These digital assets, the goods of the information and knowledge age, are on the one hand valuable cultural and scientific resources; on the other hand they are highly ephemeral, for example because of the short lifespan of many formats. The storage media are subject to ageing, as are the data formats and the hardware and software needed to render them. To ensure the long-term usability of digital assets, precautions must be taken early on: strategies for long-term digital preservation must be developed and implemented. ...
Background The evidence to date for a dose-response relationship between physical workload and the development of lumbar disc diseases is limited. We therefore investigated the possible etiologic relevance of cumulative occupational lumbar load to lumbar disc diseases in a multi-center case-control study. Methods In four study regions in Germany (Frankfurt/Main, Freiburg, Halle/Saale, Regensburg), patients seeking medical care for pain associated with clinically and radiologically verified lumbar disc herniation (286 males, 278 females) or symptomatic lumbar disc narrowing (145 males, 206 females) were prospectively recruited. Population control subjects (453 males and 448 females) were drawn from the regional population registers. Cases and control subjects were between 25 and 70 years of age. In a structured personal interview, a complete occupational history was elicited to identify subjects with certain minimum workloads. On the basis of job task-specific supplementary surveys performed by technical experts, the situational lumbar load, represented by the compressive force at the lumbosacral disc, was determined via biomechanical model calculations for any working situation involving object handling or load-intensive postures during the total working life. For this analysis, all manual handling of objects of about 5 kilograms or more and postures with trunk inclination of 20 degrees or more were included in the calculation of cumulative lumbar load. Confounder selection was based on biologic plausibility and on the change-in-estimate criterion. Odds ratios (OR) and 95% confidence intervals (CI) were calculated separately for men and women using unconditional logistic regression analysis, adjusted for age, region, and unemployment as a major life event (in males) or psychosocial strain at work (in females), respectively. To further elucidate the contribution of past physical workload to the development of lumbar disc diseases, we performed lag-time analyses.
Results We found a positive dose-response relationship between cumulative occupational lumbar load and lumbar disc herniation as well as lumbar disc narrowing among men and women. Even past lumbar load seems to contribute to the risk of lumbar disc disease. Conclusions According to our study, cumulative physical workload is related to lumbar disc diseases among men and women.
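As a simple illustration of the odds-ratio arithmetic behind such results, the sketch below computes an OR with a Woolf-type 95% CI from a made-up 2x2 exposure table. The counts are invented; the study itself used unconditional logistic regression with covariate adjustment, which this crude calculation does not reproduce.

```python
# Sketch: odds ratio with a Woolf-type 95% confidence interval from a
# 2x2 table. Counts are hypothetical, chosen only to match the study's
# group sizes; no adjustment for age, region or psychosocial strain.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed/unexposed cases; c/d: exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 120 of 286 male cases vs. 100 of 453 male controls
# exceeded some cumulative lumbar-load threshold.
or_, lo, hi = odds_ratio_ci(120, 286 - 120, 100, 453 - 100)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI entirely above 1, as here, is what a positive dose-response finding at a given exposure level would look like in crude form.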
Background Since June 2002, revised regulations in Germany have required "Emergency Medical Care" as an interdisciplinary subject, and state that emergency treatment should be of increasing importance within the curriculum. A survey of the current status of undergraduate medical education in emergency medical care establishes the basis for further committee work. Methods Using a standardized questionnaire, all medical faculties in Germany were asked to answer questions concerning the structure of their curriculum, representation of disciplines, instructors' qualifications, teaching and assessment methods, as well as evaluation procedures. Results Data from 35 of the 38 medical schools in Germany were analysed. In 32 of 35 medical faculties, the local Department of Anaesthesiology is responsible for the teaching of emergency medical care; in two faculties, emergency medicine is taught mainly by the Department of Surgery and in another by Internal Medicine. Lectures, seminars and practical training units are scheduled in varying composition at 97% of the locations. Simulation technology is integrated at 60% (n=21); problem-based learning at 29% (n=10), e-learning at 3% (n=1), and internship in ambulance service is mandatory at 11% (n=4). In terms of assessment methods, multiple-choice exams (15 to 70 questions) are favoured (89%, n=31), partially supplemented by open questions (31%, n=11). Some faculties also perform single practical tests (43%, n=15), objective structured clinical examination (OSCE; 29%, n=10) or oral examinations (17%, n=6). Conclusion Emergency Medical Care in undergraduate medical education in Germany has a practical orientation, but is very inconsistently structured. The innovative options of simulation technology or state-of-the-art assessment methods are not consistently utilized. 
Therefore, an exchange of experiences and concepts between faculties and disciplines should be promoted to guarantee a standard level of education in emergency medical care.
Since its development in the publications of Brace, Gatarek and Musiela (1997) on the one hand, and, independently of these, Miltersen, Sandmann and Sondermann (1997) on the other, the LIBOR market model (LMM) has become the most widely accepted instrument for modelling the term structure of interest rates and for the associated pricing of the relevant interest rate derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; three- or six-month tenors are customary in connection with the LMM. Research on improving this model has gained momentum in recent years: in attempting to reduce the error of the fit to the daily observed prices of interest rate options such as caps and swaptions, one subsequently also obtains more accurate valuations of other, more exotic derivatives. The central underlying idea of the LMM is to regard the forward rates directly as a primary (vector) process of several LIBOR rates and to model them simultaneously, instead of merely deriving them from a superordinate, infinite-dimensional forward rate process, as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretisation is that the LIBOR rates are directly observable in the market, and their volatilities can be related in a natural way to already liquidly traded products, namely those caps and swaptions. Nevertheless, the model contains a serious deficiency in that it reproduces no curvature of the volatility surface across options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution manifest themselves clearly in missing heavy tails; smile and skew effects are visible.
In the classical LIBOR market model, only an affine structure is generated along the strike dimension, which can at best serve as an approximation to the desired surface. The observed distortions naturally lead to an inaccurate picture of reality and to a faulty reproduction of prices in regions somewhat away from the at-the-money area. Unwanted dissonances of this kind in profit-and-loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is today the Royal Bank of Scotland. ...
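The lognormal dynamics responsible for the missing smile are those of Black's caplet formula, which the classical LMM reproduces for each forward rate: one flat volatility per expiry, independent of strike. A minimal sketch with illustrative inputs:

```python
# Sketch: Black's caplet formula, the lognormal pricing formula
# underlying the classical LIBOR market model. A single Black vol
# prices every strike, which is exactly why the model produces a
# flat (smile-free) implied-volatility surface. Inputs are illustrative.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_caplet(F, K, sigma, T, delta, df):
    """F: forward LIBOR, K: strike, sigma: Black volatility,
    T: fixing time, delta: accrual period, df: discount factor
    to the payment date."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return df * delta * (F * norm_cdf(d1) - K * norm_cdf(d2))

# 3-month caplet on a 4% forward rate, struck at the money,
# 20% Black vol, one year to fixing
price = black_caplet(F=0.04, K=0.04, sigma=0.20, T=1.0, delta=0.25, df=0.96)
print(price)
```

Extensions that add smile and skew (local volatility, stochastic volatility, jumps) replace this lognormal assumption while keeping the caplet quotes as calibration targets.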
The NADH:ubiquinone oxidoreductase (complex I) is a large membrane-bound protein complex coupling the redox reaction of NADH oxidation and quinone reduction to vectorial proton translocation across bioenergetic membranes. The mechanism of proton pumping is still unknown; it seems, however, that the reduction of quinone induces conformational changes which drive proton uptake on one side and release on the other side of the membrane. In this study the proposed quinone and inhibitor binding pocket located at the interface of the 49-kDa and PSST subunits was explored by a large number of point mutations introduced into complex I from the strictly aerobic yeast Yarrowia lipolytica. Point mutations were systematically chosen based on the crystal structure of the hydrophilic domain of complex I from Thermus thermophilus. In total, the properties of 94 mutants at 39 positions, which completely cover the lining of the large putative quinone and inhibitor binding cavity, are described and discussed here. A structure/function analysis allowed the identification of functional domains within the large putative quinone binding cavity. A possible quinone access path ranging from the N-terminal beta-sheet of the 49-kDa subunit into the pocket to tyrosine 144 could be defined, since all exchanges introduced here caused an almost complete loss of complex I activity. A region located deeper in the proposed quinone binding pocket is apparently not important for complex I activity. In contrast, all exchanges of tyrosine 144, even the very conservative mutant Y144F, essentially abolished the dNADH:DBQ oxidoreductase activity of complex I. However, with higher concentrations of Q1 or Q2 the dNADH:Q oxidoreductase activity was largely restored in the mutants with the more conservative exchanges. Proton pumping experiments showed that this activity was also coupled to proton translocation, indicating that these quinones were reduced at the physiological site.
However, the apparent Km values for Q1 or Q2 were drastically increased, clearly demonstrating that tyrosine 144 is central for quinone binding and reduction. These results further prove that the enzymatically relevant quinone binding site of complex I is located at the interface of the 49-kDa and PSST subunits. The quinone binding pocket is thought to comprise the binding sites for a plethora of specific complex I inhibitors that are usually grouped into three classes. The large array of mutants targeting the quinone binding cavity was examined with a representative of each inhibitor class. Many mutants conferring resistance were identified which, depending on the inhibitor tested, clustered in well defined and partially overlapping regions of the large putative quinone and inhibitor binding cavity. Mutants with effects on type A (DQA) and type B (rotenone) inhibitors were found in a subdomain corresponding to the former [NiFe] site in homologous hydrogenases, whereby the type A inhibitor DQA seems to bind deeper in this domain. Mutants with effects on the type C inhibitor (C12E8) were found in a narrow crevice. Exchanging more exposed residues at the border of these well defined domains affected all three inhibitor types. Therefore, the results as a whole provide further support for the concept that different inhibitor classes bind to different but partially overlapping binding sites within a single large quinone binding pocket. In addition, they also indicate the approximate location of the binding sites within the structure of the large quinone and inhibitor binding cavity at the interface of the 49-kDa and PSST subunits. It has been proposed earlier that the highly conserved HRGXE motif in the 49-kDa subunit forms a part of the quinone binding site of complex I. Mutagenesis of the HRGXE motif revealed that these residues are rather critical for complex I assembly and seem to have an important structural role.
The question why iron-sulfur cluster N1a is not detectable by EPR in many model organisms is not yet solved. Introducing polar and positively charged amino acid residues close to this cluster in order to increase its midpoint potential did not result in the appearance of the cluster N1a EPR signal in mitochondrial membranes from the mutants. Clearly, further research will be necessary to gain insights into the function of this iron-sulfur cluster in complex I. In an additional project, a new and simple in vivo screen for complex I deficiency in Y. lipolytica was developed and optimized. This assay probes for defects in complex I assembly and stability, oxidoreductase activity and also proton pumping activity by complex I. Most importantly, this assay is applicable to all Y. lipolytica strains and could be used to identify loss-of-function mutants, gain-of-function mutants (i.e. resistance towards complex I inhibitors) and revertants due to mutations in both nuclear and mitochondrially encoded genes of complex I subunits.
The light-harvesting complex of photosystem II (LHC-II) is the major antenna complex in plant photosynthesis. It accounts for roughly 30% of the total protein in plant chloroplasts, which makes it arguably the most abundant membrane protein on Earth, and binds about half of plant chlorophyll (Chl). The complex assembles as a trimer in the thylakoid membrane and binds a total of 54 pigment molecules, including 24 Chl a, 18 Chl b, 6 lutein (Lut), 3 neoxanthin (Neo) and 3 violaxanthin (Vio). LHC-II has five key roles in plant photosynthesis. It: (1) harvests sunlight and transmits excitation energy to the reaction centres of photosystems II and I, (2) regulates the amount of excitation energy reaching each of the two photosystems, (3) has a structural role in the architecture of the photosynthetic supercomplexes, (4) contributes to the tight appression of thylakoid membranes in chloroplast grana, and (5) protects the photosynthetic apparatus from photodamage by non-photochemical quenching (NPQ). A major fraction of NPQ is accounted for by its energy-dependent component, qE. Despite being critical for plant survival and having been studied for decades, the exact details of how excess absorbed light energy is dissipated under qE conditions remain enigmatic. Today it is accepted that qE is regulated by the magnitude of the pH gradient (ΔpH) across the thylakoid membrane. It is also well documented that the drop in pH in the thylakoid lumen during high-light conditions activates the enzyme violaxanthin de-epoxidase (VDE), which converts the carotenoid Vio into zeaxanthin (Zea) as part of the xanthophyll cycle. Additionally, studies with Arabidopsis mutants revealed that the photosystem II subunit PsbS is necessary for qE.
How these physiological responses switch LHC-II from the active, energy-transmitting state to the quenched, energy-dissipating state, in which the solar energy is not transmitted to the photosystems but instead dissipated as heat, remains unclear and is the subject of this thesis. From the results obtained during this doctoral work, five main conclusions can be drawn concerning the mechanism of qE: 1. Substitution of Vio by Zea in LHC-II is not sufficient for efficient dissipation of excess excitation energy. 2. Aggregation quenching of LHC-II does not require Vio, Neo, or a specific Chl pair. 3. With one exception, the pigment structure in LHC-II is rigid. 4. The two X-ray structures of LHC-II show the same energy-transmitting state of the complex. 5. Crystalline LHC-II resembles the complex in the thylakoid membrane. Models of the aggregation quenching mechanism in vitro and the qE mechanism in vivo are presented as a corollary of this doctoral work. LHC-II aggregation quenching in vitro is attributed to the formation of energy sinks on the periphery of LHC-II through random interaction with other trimers, free pigments or impurities. A similar but unrelated process is proposed to occur in the thylakoid membrane, by which excess excitation energy is dissipated upon specific interaction between LHC-II and a PsbS monomer carrying Zea. At the end of this thesis, an innovative experimental model for the analysis of all key aspects of qE is proposed in order to finally solve the qE enigma, one of the last unresolved problems in photosynthesis research.
Samples of freshly fallen snow were collected at the high alpine research station Jungfraujoch (Switzerland) in February and March 2006 and 2007, during the Cloud and Aerosol Characterization Experiments (CLACE) 5 and 6. In this study, a new technique was developed and demonstrated for the measurement of organic acids in fresh snow. The melted snow samples were subjected to solid phase extraction and the resulting solutions were analysed for organic acids by HPLC-MS-TOF using negative electrospray ionization. A series of linear dicarboxylic acids from C5 to C13, as well as phthalic acid, were identified and quantified. In several samples the biogenic pinonic acid was also observed. In fresh snow the median concentration of the most abundant acid, adipic acid, was 0.69 µg L⁻¹ in 2006 and 0.70 µg L⁻¹ in 2007. Glutaric acid was the second most abundant dicarboxylic acid found, with median values of 0.46 µg L⁻¹ in 2006 and 0.61 µg L⁻¹ in 2007, while the aromatic phthalic acid showed a median concentration of 0.34 µg L⁻¹ in 2006 and 0.45 µg L⁻¹ in 2007. The concentrations in the samples from various snowfall events varied significantly, and were found to be dependent on the back trajectory of the air mass arriving at Jungfraujoch. Air masses of marine origin showed the lowest concentrations of acids, whereas the highest concentrations were measured when the air mass was strongly influenced by boundary layer air.
Current atmospheric models do not include secondary organic aerosol (SOA) production from gas-phase reactions of polycyclic aromatic hydrocarbons (PAHs). Recent studies have shown that primary semivolatile emissions, previously assumed to be inert, undergo oxidation in the gas phase, leading to SOA formation. This opens the possibility that low-volatility gas-phase precursors are a potentially large source of SOA. In this work, SOA formation from gas-phase photooxidation of naphthalene, 1-methylnaphthalene (1-MN), 2-methylnaphthalene (2-MN), and 1,2-dimethylnaphthalene (1,2-DMN) is studied in the Caltech dual 28-m³ chambers. Under high-NOx conditions and aerosol mass loadings between 10 and 40 µg m⁻³, the SOA yields (mass of SOA per mass of hydrocarbon reacted) ranged from 0.19 to 0.30 for naphthalene, 0.19 to 0.39 for 1-MN, 0.26 to 0.45 for 2-MN, and were constant at 0.31 for 1,2-DMN. Under low-NOx conditions, the SOA yields were measured to be 0.73, 0.68, and 0.58 for naphthalene, 1-MN, and 2-MN, respectively. The SOA was observed to be semivolatile under high-NOx conditions and essentially nonvolatile under low-NOx conditions, owing to the higher fraction of ring-retaining products formed under low-NOx conditions. When applying these measured yields to estimate SOA formation from primary emissions of diesel engines and wood burning, PAHs are estimated to yield 3–5 times more SOA than light aromatic compounds. PAHs can also account for up to 54% of the total SOA from oxidation of diesel emissions, representing a potentially large source of urban SOA.
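The yield definition used above (mass of SOA formed per mass of hydrocarbon reacted) can be written as a one-line helper. This is a minimal illustrative sketch; the function name and the example numbers are ours, not taken from the study:

```python
def soa_yield(delta_soa, delta_hc):
    """SOA yield: mass of SOA formed per mass of hydrocarbon reacted.

    Both arguments must be in the same units (e.g. micrograms per cubic
    metre of chamber air); the result is dimensionless.
    """
    if delta_hc <= 0:
        raise ValueError("reacted hydrocarbon mass must be positive")
    return delta_soa / delta_hc

# Illustrative numbers only: 7.3 mass units of SOA formed from 10 mass
# units of hydrocarbon reacted corresponds to a yield of 0.73.
print(round(soa_yield(7.3, 10.0), 2))
```

Because the yield is a ratio of masses in identical units, it is independent of the unit chosen, which is why yields from different chamber loadings can be compared directly.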
It has become popular for journalists who are trying to sell newspapers, and politicians who are trying to solicit votes, to refer to this financial crisis as the worst since the Great Depression or WWII. I don't know whether it is the worst or not, so I will leave that question to the historians and economists of the future once the storm has passed. But it is indeed a "storm" as described by Vince Cable, Member of Parliament, in his UK bestselling book entitled "The Storm – The World Economic Crisis and What it Means". He describes this "storm" as a very destructive one, displacing jobs, businesses, banks and whole economies from Iceland to the United Kingdom to the United States. I propose to offer a short chronology and summary of the causes of the current economic crisis. Then I will review several of the regulatory responses to the crisis, focusing on the Turner Report, the de Larosière Group and certain US Treasury statements. I will offer my critiques of these proposals and then make some predictions of what the financial services industry may look like in the future.
Since the Investment Amendment Act (Investmentänderungsgesetz) came into force on 28 December 2007, the externally managed investment stock corporation (fremdverwaltete Investmentaktiengesellschaft) has been available to the investment industry as a new structure for an investment vehicle. The externally managed investment stock corporation appoints a capital investment company (Kapitalanlagegesellschaft) as its management company and transfers to it the general administrative activities as well as the investment and management of its assets. The following article examines the liability of the management company towards the shareholders of the externally managed investment stock corporation. It concludes that a statutory obligation exists, for the breach of which the shareholders of the investment stock corporation can claim damages from the management company pursuant to §§ 280(1), 249 et seq. BGB.
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy ion reactions has been developed. After a short introduction that motivates the study of heavy ion reactions as the tool to get insights about the QCD phase diagram, the most important theoretical approaches to describe the system are reviewed. To model the dynamical evolution of the collective system assuming local thermal equilibrium, ideal hydrodynamics seems to be a good tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured. For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies, and the new approach leads to reasonable results over the whole energy range. Studies of phase diagram trajectories using hydrodynamics are performed as a first step towards the development of the hybrid approach. The hybrid approach that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision.
The initial energy and baryon number density distributions are not smooth and not symmetric in any direction, and the initial velocity profiles are non-trivial since they are generated by the non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS and a bag model EoS including a strong first order phase transition. For the freeze-out transition from hydrodynamics to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame, or a gradual freeze-out that mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula, and UrQMD takes care of the final decoupling procedure of the particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation function is calculated for pions, protons and kaons in the energy range from $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean transverse mass excitation function as the equation of state.
The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced if a first order phase transition with a large latent heat is applied or the EoS is effectively softened due to non-equilibrium effects in the hadronic transport calculation. The HBT correlations of the negatively charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It has been found that the latent heat visibly influences the emission of particles and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlations, since a large number of collisions and decays still take place during this period.
Background Heme oxygenase-1 is an inducible cytoprotective enzyme which handles oxidative stress by generating anti-oxidant bilirubin and vasodilating carbon monoxide. A (GT)n dinucleotide repeat and a -413A>T single nucleotide polymorphism in the promoter region of HMOX1 have both been reported to influence the occurrence of coronary artery disease and myocardial infarction. We sought to validate these observations in persons scheduled for coronary angiography. Methods We included 3219 subjects in the current analysis, 2526 with CAD, including a subgroup with CAD and MI (n = 1339), and 693 controls. Coronary status was determined by coronary angiography. Risk factors and biochemical parameters (bilirubin, iron, LDL-C, HDL-C, and triglycerides) were determined by standard procedures. The dinucleotide repeat was analysed by PCR and subsequent sizing by capillary electrophoresis, the -413A>T polymorphism by PCR and RFLP. Results In the LURIC study the allele frequencies for the -413A>T polymorphism are A = 0.589 and T = 0.411. The (GT)n repeat lengths ranged from 14 to 39 repeats, with 22 (19.9%) and 29 (47.1%) as the two most common alleles. We found no association of the genotypes or allele frequencies with any of the biochemical parameters, nor with CAD or previous MI. Conclusion Although an association of these polymorphisms with the appearance of CAD and MI has been published before, our results strongly argue against a relevant role of the (GT)n repeat or the -413A>T SNP in the HMOX1 promoter in CAD or MI.
We calculate leading-order dilepton yields from a quark-gluon plasma which has a time-dependent anisotropy in momentum space. Such anisotropies can arise during the earliest stages of quark-gluon plasma evolution due to the rapid longitudinal expansion of the created matter. A phenomenological model for the proper time dependence of the parton hard momentum scale, p_hard, and the plasma anisotropy parameter, xi, is proposed. The model describes the transition of the plasma from a 0+1 dimensional collisionally-broadened expansion at early times to a 0+1 dimensional ideal hydrodynamic expansion at late times. We find that high-energy dilepton production is enhanced by pre-equilibrium emission by up to 50% at LHC energies, if one assumes an isotropization/thermalization time of 2 fm/c. Given sufficiently precise experimental data, this enhancement could be used to determine the plasma isotropization time experimentally.
Introduction Impaired renal function and/or pre-existing atherosclerosis in the deceased donor increase the risk of delayed graft function and impaired long-term renal function in kidney transplant recipients. Case presentation We report delayed graft function occurring simultaneously in two kidney transplant recipients, aged 57 and 39 years, who received renal allografts from the same deceased donor. The 62-year-old donor died of cardiac arrest during an asthmatic state. Renal-allograft biopsies performed in both kidney recipients because of delayed graft function revealed cholesterol-crystal embolism. An empiric statin therapy in addition to low-dose acetylsalicylic acid was initiated. After 10 and 6 hemodialysis sessions every 48 hours, respectively, both renal allografts started to function. Glomerular filtration rates at discharge were 26 ml/min/1.73 m2 and 23.9 ml/min/1.73 m2, and remained stable in follow-up examinations. Possible donor- and surgical-procedure-dependent causes for cholesterol-crystal embolism are discussed. Conclusion Cholesterol-crystal embolism should be considered as a cause of delayed graft function and long-term impaired renal allograft function, especially in the older donor population.
Methods for dichoptic stimulus presentation in functional magnetic resonance imaging : a review
(2009)
Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments using visual stimulation. These studies include investigation into binocular rivalry, interocular information transfer, and three-dimensional depth perception, as well as impairments of the visual system like amblyopia and stereodeficiency. In this paper, we review various approaches to displaying dichoptic stimuli used in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies as well as newer approaches using bi-screen goggles.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (with 20 sentences per phenomenon) from both TiGer and TüBa-D/Z resources. This provides a 2 × 100-sentence comparable test suite which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two different treebanks. In the remaining part of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our error classification of potential parser errors.
We present several parsing algorithms for Range Concatenation Grammars (RCG), including a new Earley-style algorithm, within the deductive parsing paradigm. Our work is motivated by the recent interest in this type of grammar and fills a gap in the existing literature.
In the recent literature the phenomenon of long distance agreement has become the focus of several studies as it seems to violate certain locality conditions which require that agreeing elements in general stand in clause-mate relationships. In particular, it involves a verb agreeing with a constituent which is located in the verb's clausal complement and hence poses a challenge for theories that assume a strictly local relationship for agreement. In this paper we present empirical evidence from Greek and Romanian for the reality of long distance agreement. Specifically, we focus on raising constructions in these two languages and we show that they do not involve movement but rather instantiate long distance agreement. We further argue that subjunctives allowing long distance agreement lack both a CP layer and semantic Tense. However, since the embedded verb also bears phi-features, these constructions pose a further problem for assumptions that view the presence of phi-features as evidence for the presence of a C layer. Finally, we raise the question of the common properties that these languages have that lead to the presence of long distance agreement.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgariff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (Mc- Carthy et al., 2004) or semantical role labeling (Gordon and Swanson, 2007). We present a model that is built from Webbased corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. The paper presents four experiments that aim at improving parsing performance on coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for each conjunct, 3) reranking the parser output for all possible conjunct scopes that are permissible with regard to clause structure. Experiment 4 reranks a combination of parses from experiments 1 and 3. The experiments presented show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses results in an increase in F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment results in higher recall (75.48% vs. 73.69%) and the third one in higher precision (75.43% vs. 73.26%). Combining the two methods yields the best result, with an F-score of 76.69.
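The F-scores quoted above combine precision and recall by the usual harmonic mean. A minimal sketch of that relation (the helper name is ours, and the sample values below are illustrative, not the paper's results):

```python
def f_score(precision, recall):
    """Balanced F-score (F1): harmonic mean of precision and recall.

    Both inputs and the result are on the same scale (here, percent).
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: precision 60% and recall 40% give F1 = 48.0,
# below the arithmetic mean of 50 -- the harmonic mean penalises imbalance.
print(f_score(60.0, 40.0))
```

This penalty for imbalance is why two systems with similar F-scores can still differ noticeably in precision and recall, as in the comparison of experiments 1 and 3 above.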
The aim of this paper is to address two main counterarguments raised in Landau (2007) against the movement analysis of Control, and especially against the phenomenon of Backward Control. The paper shows that unlike the situation described in Tsez (Polinsky & Potsdam 2002), Landau's objections do not hold for Greek and Romanian, where all obligatory control verbs exhibit Backward Control. Our results thus provide stronger empirical support for a theoretical approach to Control in terms of Movement, as defended in Hornstein (1999 and subsequent work).
The recent financial crisis has led to a vigorous debate about the pros and cons of fair-value accounting (FVA). This debate presents a major challenge for FVA going forward and standard setters’ push to extend FVA into other areas. In this article, we highlight four important issues as an attempt to make sense of the debate. First, much of the controversy results from confusion about what is new and different about FVA. Second, while there are legitimate concerns about marking to market (or pure FVA) in times of financial crisis, it is less clear that these problems apply to FVA as stipulated by the accounting standards, be it IFRS or U.S. GAAP. Third, historical cost accounting (HCA) is unlikely to be the remedy. There are a number of concerns about HCA as well and these problems could be larger than those with FVA. Fourth, although it is difficult to fault the FVA standards per se, implementation issues are a potential concern, especially with respect to litigation. Finally, we identify several avenues for future research. JEL Classification: G14, G15, G30, K22, M41, M42
The utility-maximizing consumption and investment strategy of an individual investor receiving an unspanned labor income stream seems impossible to find in closed form and very difficult to find using numerical solution techniques. We suggest an easy procedure for finding a specific, simple, and admissible consumption and investment strategy, which is near-optimal in the sense that the wealth-equivalent loss compared to the unknown optimal strategy is very small. We first explain and implement the strategy in a simple setting with constant interest rates, a single risky asset, and an exogenously given income stream, but we also show that the success of the strategy is robust to changes in parameter values, to the introduction of stochastic interest rates, and to endogenous labor supply decisions.
In this paper, we analyze economies of scale for German mutual fund complexes. Using 2002-2005 data of 41 investment management companies, we specify a hedonic translog cost function. Applying a fixed-effects regression to a one-way error component model, we find clear evidence of significant overall economies of scale. On the level of individual mutual fund complexes we find significant economies of scale for all of the companies in our sample. With regard to cost efficiency, we find that the average mutual fund complexes in all size quartiles deviate considerably from the best-practice cost frontier. JEL Classification: G2, L25 Keywords: mutual fund complex, investment management company, cost efficiency, economies of scale, hedonic translog cost function, fixed effects regression, one-way error component model
This article examines whether the majority shareholder of a company attempts to influence capital market expectations negatively in the run-up to a compulsory exclusion of minority shareholders (a so-called squeeze-out). Such "manipulative" behaviour is frequently assumed in both the legal and the business literature, since the share price forms the lower bound for the amount of compensation. Our empirical study of the financial reporting and press release policy of squeeze-out companies ahead of the announcement of such a measure on the German capital market shows that a significant increase (decrease) in pessimistically (optimistically) toned press releases is indeed observable during this period. However, it also emerges that the shares of squeeze-out candidates already earn such high positive abnormal returns in the run-up to and on the day of the announcement that the cumulative effect of the information policy on the stock market valuation, as quantified by us, has only a very minor overall influence and is dominated by other factors (e.g. speculation about the compensation). JEL: M41, M40, G14, K22
Gauging risk with higher moments : handrails in measuring and optimising conditional value at risk
(2009)
The aim of the paper is to study empirically the influence of higher moments of the return distribution on conditional value at risk (CVaR). To be more exact, we attempt to reveal the extent to which the risk given by CVaR can be estimated when relying on the mean, standard deviation, skewness and kurtosis. Furthermore, it is intended to study how this relationship can be utilised in portfolio optimisation. First, based on a database of 600 individual equity returns from 22 emerging world markets, factor models incorporating the first four moments of the return distribution have been constructed at different confidence levels for CVaR, and the contribution of the identified factors in explaining CVaR was determined. Following this, the influence of higher moments was examined in a portfolio context, i.e. asset allocation decisions were simulated by creating emerging market portfolios from the viewpoint of US investors. This can be regarded as a normal decision-making process of a hedge fund focusing on investments in emerging markets. In our analysis we compared and contrasted two approaches with which one can overcome the shortcomings of the variance as a risk measure. First of all, we solved, in the presence of conflicting higher moment preferences, a multi-objective portfolio optimisation problem for different sets of preferences. In addition, portfolio optimisation was performed in the mean-CVaR framework characterised by using CVaR as a measure of risk. As a part of the analysis, the pair-wise comparison of the different higher moment metrics of the mean-variance and the mean-CVaR efficient portfolios was also made. Throughout the work special attention was given to implied preferences for the different higher moments in optimising CVaR. We also examined the extent to which model risk, namely the risk of wrongly assuming normally distributed returns, can degrade our optimal portfolio choice. JEL Classification: G11, G15, C61
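As a rough illustration of the risk measure being optimised, the empirical CVaR (expected shortfall) of a return sample can be computed as the average of the worst (1 − alpha) fraction of losses. This is a generic textbook sketch under simple assumptions (equal-weighted historical sample, losses as negated returns), not the factor-model estimation used in the paper:

```python
def empirical_cvar(returns, alpha=0.95):
    """Empirical CVaR at confidence level alpha: the mean of the worst
    (1 - alpha) fraction of losses, returned as a positive loss number."""
    losses = sorted((-r for r in returns), reverse=True)  # largest loss first
    k = max(1, int(round(len(losses) * (1 - alpha))))     # size of the tail
    return sum(losses[:k]) / k                            # average tail loss

# Illustrative sample: losses of 1%, 2%, ..., 100% in 1% steps.
# At alpha = 0.95 the tail holds the worst 5 losses (96%..100%).
returns = [-i / 100 for i in range(1, 101)]
print(round(empirical_cvar(returns, 0.95), 4))  # prints 0.98
```

Unlike VaR, which reads off a single quantile, CVaR averages over the entire tail, which is what makes it sensitive to the skewness and kurtosis of the return distribution discussed above.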
At the 67th German Jurists' Forum (Deutscher Juristentag, DJT) in Erfurt, a fundamental question of German stock corporation law was discussed. A stronger differentiation between listed and unlisted stock corporations was called for. Individual deregulation proposals in this context concerned the scope of the principle of mandatory charter provisions (Satzungsstrenge), restrictions on the transferability of registered shares (Vinkulierung), and multiple voting rights. The following study addresses the question of whether a differentiation between listed and unlisted stock corporations is convincing, particularly in light of a comparative-law and empirical analysis. Specifically, Bayer's proposal at the 67th DJT is first briefly presented (II.). Next, the significance of over-the-counter trading in Germany is examined (III.). This is followed by a comparative examination of German, English and, cursorily, US stock corporation and capital market law (IV.). A comment on Bayer's reform proposal follows (V.). A conclusion completes the study (VI.).
Islamic mystical Quranic exegesis (at-tafsir al-isari) is a school of Quranic interpretation. The exegetes belonging to this school interpret individual verses of the Quran through kasf (lit. unveiling, discovery) and ilham (inspiration). According to their conviction, the meaning of these verses was placed by God into their hearts.