This paper makes a case for the future development of European corporate law through regulatory competition rather than EC legislation. It is for the first time becoming legally possible for firms within the EU to select the national company law that they wish to govern their activities. A significant number of firms can be expected to exercise this freedom, and national legislatures can be expected to respond by seeking to make their company laws more attractive to firms. Whilst the UK is likely to be the single most successful jurisdiction in attracting firms, the presence of different models of corporate governance within Europe makes it quite possible that competition will result in specialisation rather than convergence, and that no Member State will come to dominate as Delaware has done in the US. Procedural safeguards in the legal framework will direct the selection of laws which increase social welfare, as opposed simply to the welfare of those making the choice. Given that European legislators cannot be sure of the ‘optimal’ model for company law, the future of European company law-making would be better left with Member States than take the form of harmonized legislation.
Virtual screening of potential bioactive substances using the support vector machine approach
(2005)
This dissertation is a cumulative work presented in a total of eight scientific publications (five published, two submitted and one in preparation). In this research project, machine-learning methods were applied to the virtual screening of molecular databases. The primary goal was to introduce and validate the support vector machine (SVM) approach for virtual screening for potential drug candidates. The introduction of the thesis describes the role of virtual screening in drug design. Virtual screening methods can be applied in almost every area of pharmaceutical research; machine learning can be employed from the selection of the first molecules, through lead-structure optimization, up to the prediction of ADMET (absorption, distribution, metabolism, excretion, toxicity) properties. Section 4.2 presents the methods that can be used to describe chemical structures, i.e. to convert them into a format (descriptors) that can serve as input for machine-learning methods such as neural networks or SVMs. The focus is on the methods used in this work. Most methods compute descriptors based only on the two-dimensional (2D) structure; standard examples are physicochemical properties, atom and bond counts, etc. (Section 4.2.1). CATS descriptors, a topological pharmacophore concept, are likewise 2D-based (Section 4.2.2). Another type of descriptor captures properties derived from a three-dimensional (3D) molecular model. The success of such a description depends strongly on how representative the 3D conformation used to compute the descriptor is.
A further description used in this work was fingerprints. In our case the fingerprints were unsuitable for training neural networks, because the fingerprint vector had too many dimensions (~10^5). Training SVMs with fingerprints, by contrast, worked well: compared with other methods, SVMs have the advantage of classifying well in very high-dimensional spaces. This combination of SVMs and fingerprints was a novelty that we introduced to chemoinformatics for the first time. Section 4.3 focuses on the SVM method, which was used for almost all classification tasks in this work and formed a central topic of the dissertation. Because of space constraints, the attached publications omit a detailed description of SVMs; Section 4.3 therefore gives a complete introduction, including a full discussion of SVM theory: the optimal hyperplane, the soft-margin hyperplane, and quadratic programming as the technique for finding this optimal hyperplane. Section 4.3 also discusses kernel functions, which determine the exact shape of the optimal hyperplane. Section 4.4 introduces the various methods we used for descriptor selection and works out the difference between "filter"- and "wrapper"-based descriptor selection. In Publication 3 (Section 7.3) we compared the advantages and disadvantages of filter- and wrapper-based methods in virtual screening. Section 7 consists of the publications containing our research results. Our first publication (Publication 1) was a review article (Section 7.1).
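The point about fingerprints is dimensionality: if only the set bits are stored, a linear decision function costs time proportional to the bits per molecule, not to the ~10^5 total dimensions. A minimal sketch of this idea, using a simple perceptron in place of a full SVM purely for illustration (all data are invented):

```python
# Sparse linear classification over binary fingerprints: a fingerprint is
# represented as the set of its active bit indices, so a ~1e5-dimensional
# vector costs only O(bits set) per operation. A perceptron stands in for
# the SVM here; it is not the thesis implementation.
def dot(w, fp):
    """Sparse dot product with a fingerprint given as a set of bit indices."""
    return sum(w.get(i, 0.0) for i in fp)

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of (fingerprint, label) pairs with labels in {-1, +1}."""
    w = {}
    for _ in range(epochs):
        for fp, y in data:
            if y * dot(w, fp) <= 0:          # misclassified: nudge active bits
                for i in fp:
                    w[i] = w.get(i, 0.0) + lr * y
    return w

# Toy data: bits 1 and 2 mark "active" molecules, bits 7 and 8 "inactive".
data = [({1, 2, 5}, +1), ({1, 2, 9}, +1), ({7, 8, 5}, -1), ({7, 8, 9}, -1)]
w = train_perceptron(data)
```

Only the weights of bits that ever appear in a fingerprint are materialized, which is why the approach scales to very large descriptor spaces.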
In this article we gave an overview of SVM applications in bioinformatics and chemoinformatics. We discuss applications of SVMs to gene-chip analysis, DNA sequence analysis, and the prediction of protein structures and protein interactions. We also described examples in which SVMs were used to predict the subcellular localization of proteins. It became clear that SVMs were not yet widespread in virtual screening. To justify the use of SVMs as the main method of our research, our next publication (Publication 2, Section 7.2) carried out a detailed comparison between SVMs and several neural networks, which had established themselves as a standard method in virtual screening. The comparison concerned the separation of drug-like from non-drug-like molecules ("druglikeness" prediction). The SVM classified 82% of all molecules correctly, and the classification was more robust than that of three-layer feedforward ANNs with varying numbers of hidden neurons. In this project we computed several descriptors to characterize the molecules: Ghose-Crippen fragment descriptors [86], physicochemical properties [9] and topological pharmacophores (CATS) [10]. The development of further methods building on the SVM concept is described in the publications in Sections 7.3 and 7.8. Publication 3 presents a new SVM-based method for selecting the descriptors relevant to a given activity. The same descriptors as in the project described above were used. As characteristic groups of molecules we selected several subsets of the COBRA database: 195 thrombin inhibitors, 226 kinase inhibitors and 227 factor Xa inhibitors.
We succeeded in reducing the number of descriptors from the original 407 to roughly 50 without a significant loss of classification accuracy. We compared our method with a standard method for this task, the Kolmogorov-Smirnov statistic; the SVM-based method proved superior to the reference methods in every case considered, in terms of prediction accuracy at the same number of descriptors. A detailed description is given in Section 4.4, where various "wrappers" for descriptor selection are also described. Publication 8 describes the application of active learning with SVMs. The idea of active learning is to select molecules for the learning procedure from the region at the boundary between the molecule classes to be distinguished; in this way the local classification can be improved. The following groups of molecules were used: ACE (angiotensin-converting enzyme), COX2 (cyclooxygenase 2), CRF (corticotropin-releasing factor) antagonists, DPP (dipeptidyl peptidase) IV, HIV (human immunodeficiency virus) protease, nuclear receptors, NK (neurokinin) receptors, PPAR (peroxisome proliferator-activated receptor), thrombin, GPCR and matrix metalloproteinases. As this retrospective study showed, active learning improved the performance of virtual screening. It remains to be seen whether the method will become established: despite the gain in prediction accuracy, it is computationally expensive because of the repeated SVM training. The publications in Sections 7.5, 7.6 and 7.7 (Publications 5-7) show practical applications of our SVM methods in drug design in combination with other techniques, such as similarity searching and neural networks for property prediction. In two cases the approach yielded novel ligands for COX-2 (cyclooxygenase 2) and dopamine D3/D2 receptors.
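The selection rule of the active-learning step described above can be sketched as follows; the linear decision function and all numbers are illustrative assumptions, not the published implementation:

```python
# Active-learning query selection: rank the unlabeled pool by the distance
# of the decision value from the boundary f(x) = 0 and query the closest.
def decision_value(weights, descriptor):
    """Linear stand-in for the trained model's decision function."""
    return sum(w * x for w, x in zip(weights, descriptor))

def select_queries(weights, unlabeled, k):
    """Return the k descriptor vectors with the smallest |f(x)|."""
    return sorted(unlabeled, key=lambda d: abs(decision_value(weights, d)))[:k]

weights = [1.0, -1.0]
pool = [[0.9, 0.8],   # |f| = 0.1: near the boundary, informative
        [3.0, 0.1],   # |f| = 2.9: confidently classified, skip
        [0.2, 0.3]]   # |f| = 0.1: near the boundary, informative
queries = select_queries(weights, pool, 2)
```

Each queried molecule is then labeled (e.g. assayed) and the model retrained, which is exactly why the procedure is accurate but expensive.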
We could thus clearly show that SVM methods can be applied usefully to the virtual screening of compound collections. In the course of this work, a fast procedure for generating large combinatorial molecule libraries, based on the SMILES notation, was also developed. In the early phase of drug design it is important to test as "diverse" a group of molecules as possible. Several established methods exist for selecting such a subset. We developed a new method intended to be more accurate than the well-known MaxMin method. As a first step, a probability density estimate (PDE) was computed for the available molecules [78]: each molecule was described with descriptors and the PDE was computed in the N-dimensional descriptor space. Molecules were then selected with the Metropolis algorithm [87]. The idea is to select few molecules from regions of high density and more molecules from regions of low density. The results obtained, however, revealed two drawbacks: first, molecules with unrealistic descriptor values were selected, and second, our algorithm was too slow. This aspect of the work was therefore not pursued further. In Publication 6 (Section 7.6) we developed, in collaboration with the molecular modeling group of Aventis Pharma Deutschland (Frankfurt), an SVM-based ADME filter for the early detection of CYP 2C9 ligands. This nonlinear SVM filter achieved a significantly higher prediction accuracy (q2 = 0.48) than a PLS model developed on the same data (q2 = 0.34). Three-point pharmacophore descriptors based on a three-dimensional molecular model were used. One of the important problems in computer-based drug design is the selection of a suitable conformation for a molecule. We attempted to apply SVMs to this problem.
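The density-based selection idea — few molecules from dense regions of descriptor space, more from sparse ones — can be sketched as below. The one-dimensional setting, the bandwidth, and the exact acceptance rule are assumptions made for the example, not the thesis implementation:

```python
import math
import random

# Density-inverse subset selection: estimate a probability density over
# descriptor values, then accept candidates with probability inversely
# proportional to the local density, so sparse regions are over-sampled.
def kde(x, points, bw=0.5):
    """Crude Gaussian kernel density estimate at x."""
    norm = len(points) * bw * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / bw) ** 2) for p in points) / norm

def select_diverse(points, n, rng):
    """Metropolis-style acceptance: low local density -> high acceptance."""
    dens = {p: kde(p, points) for p in points}
    dmin = min(dens.values())
    chosen = []
    while len(chosen) < n:
        cand = rng.choice(points)
        if cand not in chosen and rng.random() < dmin / dens[cand]:
            chosen.append(cand)
    return chosen

rng = random.Random(1)
# A dense cluster around 0 plus two outliers in descriptor space.
pool = [0.0, 0.1, 0.2, 0.3, 5.0, 9.0]
subset = select_diverse(pool, 3, rng)
```

The sketch also hints at the two drawbacks noted above: isolated points (possibly with unrealistic descriptor values) are accepted almost surely, and the repeated density evaluations are slow for large libraries.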
To this end, the training data set was enriched with several conformations per molecule and an SVM model was computed. The conformations with the worst-predicted IC50 values were then discarded. The remaining conformations, those preferred according to the SVM model, were however unrealistic. This result shows the limits of the SVM approach. We nevertheless believe that further research in this area can lead to better results.
After a brief introduction on QCD and effective models in the first chapter, I analyze the dependence of the QCD transition temperature on the quark (or pion) mass in the second chapter. I found that a linear sigma model, which links the transition to chiral symmetry restoration, predicts a much stronger dependence of T_c on m_pi than seen in present lattice data for m_pi >~ 0.4 GeV. On the other hand, an effective Lagrangian for the Polyakov loop requires only small explicit symmetry breaking to describe T_c(m_pi) in the above mass range. In the third and fourth chapter, I study the linear sigma model with O(N) symmetry at nonzero temperature in the framework of the Cornwall-Jackiw-Tomboulis formalism. Extending the set of two-particle irreducible diagrams by adding sunset diagrams to the usual Hartree-Fock (or Hartree) contributions, I derive a new approximation scheme which extends the standard Hartree-Fock (or Hartree) approximation by the inclusion of nonzero decay widths.
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate the terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5’ by 5’ grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries, the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 of the 116 countries with information on artificially drained areas could sub-national information be taken into account. Due to the coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and discusses the data uncertainty.
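A minimal sketch of the downscaling step implied above — distributing a national drained-area total over grid cells in proportion to each cell's cropland area. The proportional rule, cell names and numbers are illustrative assumptions, not the documented procedure:

```python
# Distribute a national artificially-drained area over grid cells in
# proportion to the cropland area per cell (a simplifying assumption).
def distribute_drainage(national_drained_km2, cropland_km2_by_cell):
    total = sum(cropland_km2_by_cell.values())
    return {cell: national_drained_km2 * area / total
            for cell, area in cropland_km2_by_cell.items()}

cells = {"c1": 40.0, "c2": 10.0, "c3": 0.0}   # cropland per cell, km^2
drained = distribute_drainage(25.0, cells)
```

Cells without cropland receive no drained area, and the national total is conserved by construction.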
We find that on average consumers chose the contract that ex post minimized their net costs. A substantial fraction of consumers (about 40%) still chose the ex post sub-optimal contract, with some incurring hundreds of dollars of avoidable interest costs. Nonetheless, the probability of choosing the sub-optimal contract declines with the dollar magnitude of the potential error, and consumers with larger errors were more likely to subsequently switch to the optimal contract. Thus most of the errors appear not to have been very costly, with the exception that a small minority of consumers persists in holding substantially sub-optimal contracts without switching. Classification: G11, G21, E21, E51
Using a set of regional inflation rates, we examine the dynamics of inflation dispersion within the U.S.A. and Japan, and across U.S. and Canadian regions. We find that inflation rate dispersion is significant throughout the sample period in all three samples. Based on methods applied in the empirical growth literature, we provide evidence in favor of significant mean reversion (β-convergence) in inflation rates in all considered samples. The evidence on σ-convergence is mixed, however. Observed declines in dispersion are usually associated with decreasing overall inflation levels, which indicates a positive relationship between mean inflation and overall inflation rate dispersion. Our findings for the within-distribution dynamics of regional inflation rates show that dynamics are largest for Japanese prefectures, followed by U.S. metropolitan areas. For the combined U.S.-Canadian sample, we find a pattern of within-distribution dynamics that is comparable to that found for regions within the European Monetary Union (EMU). In line with findings in the so-called 'border literature', these results suggest that frictions across European markets are at least as large as they are, e.g., across North American markets. Classification: E31, E52, E58
Using a unique data set of regional inflation rates, we examine the extent and dynamics of inflation dispersion in major EMU countries before and after the introduction of the euro. For both periods, we find strong evidence in favor of mean reversion (β-convergence) in inflation rates. However, half-lives to convergence are considerable and seem to have increased after 1999. The results indicate that the convergence process is nonlinear in the sense that its speed decreases the further convergence has proceeded. An examination of the dynamics of overall inflation dispersion (σ-convergence) shows that there was a decline in dispersion in the first half of the 1990s. For the second half of the 1990s, no further decline can be observed; at the end of the sample period, dispersion even increased. The existence of large persistence in European inflation rates is confirmed when distribution dynamics methodology is applied. At the end of the paper, we present evidence for the sustainability of the ECB's inflation target of an EMU-wide average inflation rate of less than but close to 2%. Classification: E31, E52, E58
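The β-convergence regression behind both inflation abstracts can be illustrated with a toy calculation: regress the change in each region's inflation gap on its initial level; a negative slope indicates mean reversion, and the implied half-life follows from the slope. The data here are invented so that the gap exactly halves each period:

```python
import math

# OLS slope of the gap change on the initial gap: beta < 0 => mean reversion.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

initial = [2.0, -1.0, 0.5, -0.3]      # regional inflation gaps, period 0
change = [-1.0, 0.5, -0.25, 0.15]     # gap change to period 1 (gap halves)
beta = ols_slope(initial, change)     # -0.5 by construction
half_life = math.log(0.5) / math.log(1 + beta)   # periods until the gap halves
```

With β = -0.5 the half-life is one period; the "considerable half-lives" in the abstract correspond to β estimates much closer to zero.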
The paper documents lack of awareness of financial assets in the 1995 and 1998 Bank of Italy Surveys of Household Income and Wealth. It then explores the determinants of awareness, and finds that the probability that survey respondents are aware of stocks, mutual funds and investment accounts is positively correlated with education, household resources, long-term bank relations and proxies for social interaction. Lack of financial awareness has important implications for understanding the stockholding puzzle and for estimating stock market participation costs. Classification: E2, D8, G1
The theory of intertemporal consumption choice makes sharp predictions about the evolution of the entire distribution of household consumption, not just about its conditional mean. In the paper, we study the empirical transition matrix of consumption using a panel drawn from the Bank of Italy Survey of Household Income and Wealth. We estimate the parameters that minimize the distance between the empirical and the theoretical transition matrix of the consumption distribution. The transition matrix generated by our estimates matches the empirical matrix remarkably well, both in the aggregate and in samples stratified by education. Our estimates strongly reject the consumption insurance model and suggest that households smooth income shocks to a lesser extent than implied by the permanent income hypothesis. Classification: D52, D91, I30
Trusting the stock market
(2005)
We provide a new explanation for the limited stock market participation puzzle. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function not only of the objective characteristics of the stock, but also of the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less. The calibration of the model shows that this problem is sufficiently severe to account for the lack of participation of some of the richest investors in the United States as well as for differences in the rate of participation across countries. We also find evidence consistent with these propositions in Dutch and Italian micro data, as well as in cross-country data. Classification: D1, D8
Credit card debt puzzles
(2005)
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully-specified accountant-shopper model, to show that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers. Classification: E210, G110
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when they face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare. It can lead to contagion between the two sectors and increase the risk of crises. Classification: G21, G22
How do markets spread risk when events are unknown or unknowable and were not anticipated in an insurance contract? While the policyholder can "hold up" the insurer for extra-contractual payments, the continuing gains from trade on a single contract are often too small to yield useful coverage. We show that, by acting as a repository of the reputations of the parties, brokers provide a coordinating mechanism to leverage the collective hold-up power of policyholders. This extends the degree of both implicit and explicit coverage. This role is reflected in the terms of broker engagement, specifically in the broker's ownership of the renewal rights. Finally, we argue that brokers can be motivated to play this role when they receive commissions that are contingent on insurer profits. This last feature calls into question a recent, well-publicized attack on broker compensation by New York Attorney General Eliot Spitzer. Classification: G22, G24, L14
Biophysical investigation of the ligand-induced assembling of the human type I interferon receptor
(2005)
Type I interferons (IFNs) elicit antiviral, antiproliferative and immunomodulatory responses through binding to a shared receptor consisting of the transmembrane proteins ifnar1 and ifnar2. Differential signaling by different interferons – in particular IFNalphas and IFNbeta – suggests different modes of receptor engagement. In this work, both single ligand-receptor interactions and the formation of the extracellular part of a signaling complex were investigated with respect to thermodynamics, kinetics, stoichiometry and structural organization. Initially, an expression and purification strategy for the extracellular domain of ifnar1 (ifnar1-EC) using Sf9 insect cells was established, yielding milligram amounts of glycosylated protein. Using reflectometric interference spectroscopy (RIfS), the interactions between IFNalpha2/beta and ifnar1-EC and ifnar2-EC were studied in order to understand the individual energetic contributions within the ternary complex. For IFNalpha2, a Kd of 5 µM for the interaction with ifnar1-EC was determined. Substantially tighter binding of IFNbeta to both ifnar2-EC and ifnar1-EC compared to IFNalpha2 was observed. For neither IFNalpha2 nor IFNbeta was stabilization of the complex with ifnar1-EC detectable in the presence of soluble ifnar2-EC. In addition, no direct interaction between ifnar2 and ifnar1 could be shown. Thus, stem-stem interactions between the extracellular domains of ifnar1 and ifnar2 do not seem to play a role in ternary complex formation. Furthermore, ligand-induced cross-talk between ifnar1-EC and ifnar2-EC tethered onto solid-supported, fluid lipid bilayers was investigated by RIfS and total internal reflection fluorescence spectroscopy. Very stable binding of IFNalpha2 at high receptor surface concentrations was observed, with an apparent kd approximately 200 times lower than for ifnar2-EC alone.
This apparent kd was strongly dependent on the surface concentration of the receptor components, suggesting kinetic rather than static stabilization, which was corroborated by competition experiments. These results indicate that signaling is activated by transient cross-talk between ifnar1 and ifnar2, which is engaged several orders of magnitude more efficiently by IFNbeta than by IFNalpha2. With respect to the differential recognition of different IFNs, ifnar1-EC was dissected into sub-fragments containing different subsets of the four Ig-like domains. The appropriate folding and glycosylation of these proteins, also purified in milligram amounts, were confirmed by SDS-PAGE, size exclusion chromatography and CD spectroscopy. Surprisingly, only one construct, containing all three N-terminal Ig-like domains, was active in terms of ligand binding, indicating that these domains are required. Competitive binding of IFNalpha2 and IFNbeta to both this fragment and ifnar1-EC was demonstrated. Cellular binding assays with different fragments, however, highlight the key role of the membrane-proximal Ig-like domain for the formation of an in situ IFN-receptor complex and the ensuing signal activation. Even substitution with Ig-like domains from homologous cytokine receptors did not restore high-affinity ligand binding. Receptor assembly analysis on supported lipid bilayers revealed that appropriate orientation of the receptor is required, which is controlled by the membrane-proximal Ig-domain. All results indicate that differential signaling is encoded in the efficiency of signaling complex formation, which is controlled by the binding affinity of the IFNs to the extracellular domains of ifnar1 and ifnar2.
Here I analyse 23 populations of D. galeata, a large-lake cladoceran, distributed mainly across the Palaearctic. I detected high levels of clonal diversity and population differentiation across Europe using variation at six microsatellite loci. Most populations were characterised by deviations from Hardy-Weinberg equilibrium and significant heterozygote deficiencies. The observed heterozygote deficiencies might be a consequence of the simultaneous hatching of individuals produced during different times of the year, or of the coexistence of ecologically and genetically differentiated subpopulations. Significant isolation by distance was found only over large geographic distances (> 700 km). This pattern is mainly due to the high genetic differentiation among neighbouring populations. My results suggest that historic populations of Daphnia were once interconnected by gene flow but that current populations are now largely isolated. Thus, local ecological conditions, which determine the level of biparental sexual reproduction and local adaptation, are the main factors mediating the population structure of D. galeata. The population genetic structure and diversity of D. galeata were investigated at a European scale using six microsatellite loci and 12S rDNA sequence data to infer and compare historical and contemporary patterns of gene flow. D. galeata has the potential for long-distance dispersal via ephippial resting eggs carried by wind and other dispersal vectors (waterfowl), but in general shows strong population differentiation even among neighbouring populations. A total of 427 individuals were analysed for microsatellite data and 85 individuals for mitochondrial (mtDNA) sequence data from 12 populations across Europe. I detected genetic differentiation among populations across Europe and among locations within sampling regions for both genetic marker systems (average values: mtDNA FST = 0.574; microsatellite FST = 0.389), resulting in a lack of isolation by distance.
Furthermore, several microsatellite alleles and one haplotype were shared across populations. The partitioning of molecular variance was inconsistent between the two marker systems: microsatellite variation was higher within than among populations, whereas the mtDNA data yielded the inverse pattern. Relatively high levels of nuclear DNA diversity were found across Europe, while the amount of mitochondrial diversity was low in Spain, Hungary and Denmark. Gene flow analysis at a European scale did not reveal the typical pattern of population recolonization expected under postglacial colonization hypotheses. Populations that recently experienced an expansion or bottleneck were observed in both middle and northern Europe. Since these populations revealed high genetic diversity in both marker systems, I suggest that these areas represent postglacial zones of secondary contact among divergent lineages of D. galeata. To reveal the relationship between the population genetic structure of D. galeata and the relative contribution of environmental factors, I used a statistical framework based on canonical correspondence analysis. Although I detected no single ecological gradient mediating genetic differentiation in either lake region, it is noteworthy that the same ecological factors were significantly correlated with intra- and interspecific genetic variation of D. galeata. For example, I found a relationship between genetic variation of D. galeata and differentiation with higher and lower trophic levels (phytoplankton, submerged macrophytes and fish), and a relationship between clonal variation and species diversity within the Cladocera. Variance partitioning showed only a minor contribution of each environmental category (abiotic, biomass/density and diversity) to the genetic diversity of D. galeata, while the largest proportion of variation was explained by shared components.
My work illustrates the important role of ecological differentiation and adaptation in structuring genetic variation, and it highlights the need for approaches incorporating a landscape context for population divergence.
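The FST statistics reported above quantify how much allele-frequency variance lies among rather than within populations. A toy illustration of Wright's FST = (HT − HS)/HT computed from allele frequencies (the numbers are invented):

```python
# Wright's F_ST from allele frequencies: H_S is the mean within-population
# expected heterozygosity, H_T the heterozygosity of the pooled frequencies.
def expected_het(freqs):
    return 1.0 - sum(p * p for p in freqs)

def fst(pop_freqs):
    """pop_freqs: one allele-frequency list per population (same alleles)."""
    hs = sum(expected_het(f) for f in pop_freqs) / len(pop_freqs)
    pooled = [sum(col) / len(col) for col in zip(*pop_freqs)]
    ht = expected_het(pooled)
    return (ht - hs) / ht

# Two strongly differentiated populations at one biallelic locus.
pops = [[0.9, 0.1], [0.1, 0.9]]
value = fst(pops)   # (0.5 - 0.18) / 0.5 = 0.64
```

Values near the reported mtDNA FST of 0.574 thus correspond to populations with very different allele (or haplotype) frequencies, consistent with the strong differentiation described above.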
This thesis concerns the characterization of the ALTRO chip (ALICE TPC Readout), an integral and important component of the readout chain of the TPC (Time Projection Chamber) detector of ALICE (A Large Ion Collider Experiment). ALICE is an experiment at the LHC (Large Hadron Collider), still under construction at CERN, whose central aim is the study of heavy-ion collisions. These are of particular interest because they provide experimental access to the QGP (Quark Gluon Plasma), the only phase transition predicted by the Standard Model that is reachable under laboratory conditions. In 2004, measurements were carried out at a test beam at the CERN PS (Proton Synchrotron). The prototype was fully equipped with FECs, corresponding to 5400 channels, and filled with a different gas mixture (Ne/N2/CO2 90%/5%/5%). For optimal performance of the ALICE TPC, the digital processor in the ALTRO, consisting of four processing units, must be configured with suitable values. The data flow begins with the BCS1 (Baseline Correction and Subtraction 1) module, which removes systematic perturbations and the baseline. Since the ALTRO samples the incoming signal continuously, it automatically removes slow baseline drifts, which can arise, for example, from temperature changes. It is followed by the TCF (Tail Cancellation Filter), which removes the tail of the slowly falling signal generated by the PASA. To remove non-systematic baseline perturbations, the BCS2 (Baseline Correction and Subtraction 2) follows, based on a moving-average calculation that excludes detector signals above a double threshold. The final signal-processing unit is the ZSU (Zero Suppression Unit), which removes samples below a defined threshold. This thesis describes how the TCF and BCS1 parameters can be extracted from existing detector data.
During the analysis of cosmic-ray data, an additional structure in the signal tail was noticed for signals with high amplitude (>700 ADC). The monitor was therefore extended with a moving-average filter, whereupon this structure also appeared in smaller signals (>200 ADC). This signal is generated by ions drifting to the cathode or to the pads; so far, however, neither the spread of the electron avalanche at the anode nor the variation among the generated electron avalanches has been understood or measured. A successful measurement and characterization is described in this thesis. In the summer of 2005, installation of the TPC gas chambers in ALICE begins, with the electronics following at the end of the year. In parallel, the TPC prototype was recommissioned, and in spring a complete sector will be equipped with the detector electronics. With these two setups, the ALTRO characterization will be continued, refined, and completed.
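The BCS2 scheme described above, a moving-average baseline estimate that excludes samples lying more than a double threshold above the baseline, can be sketched as follows. This is an illustrative sketch only: the window length, threshold, and update rule are my assumptions, not the actual ALTRO register settings or firmware logic.

```python
from collections import deque

def bcs2(samples, window=8, threshold=5):
    """Moving-average baseline subtraction with double-threshold
    exclusion: samples more than 2*threshold above the current
    baseline estimate are treated as detector signal and excluded
    from the average (sketch, not the ALTRO firmware)."""
    if not samples:
        return []
    baseline = samples[0]
    recent = deque([baseline], maxlen=window)
    out = []
    for s in samples:
        if s - baseline <= 2 * threshold:
            recent.append(s)           # quiet sample: update the estimate
            baseline = sum(recent) / len(recent)
        out.append(s - baseline)       # subtract the current baseline
    return out
```

On a flat pedestal with a single pulse, the pulse is left untouched while the surrounding samples are pulled to zero, which is the behaviour the abstract attributes to BCS2.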
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions are studied within the HSD and UrQMD transport models. The scaled variances of negative, positive, and all charged hadrons in Pb+Pb at 158 AGeV are analyzed in comparison to the data from the NA49 Collaboration. We find a dominant role of the fluctuations in the nucleon participant number for the final hadron multiplicity fluctuations. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the final multiplicity fluctuations as a function of collision centrality. The analysis reveals surprising effects in the recent NA49 data which indicate a rather strong mixing of the projectile and target hadron production sources even in peripheral collisions. PACS numbers: 25.75.-q, 25.75.Gz, 24.60.-k
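The scaled variance used in such analyses is the standard quantity ω = Var(N)/⟨N⟩ of the event-by-event multiplicity distribution (ω = 1 for a Poisson distribution). A minimal sketch of computing it from a list of per-event multiplicities, with invented sample data:

```python
def scaled_variance(multiplicities):
    """Scaled variance omega = Var(N) / <N> of event-by-event
    hadron multiplicities; equals 1 for a Poisson distribution."""
    n = len(multiplicities)
    mean = sum(multiplicities) / n
    var = sum((m - mean) ** 2 for m in multiplicities) / n
    return var / mean

# e.g. per-event charged-hadron counts (invented numbers):
# scaled_variance([412, 398, 407, 401, 395])
```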
Mitochondrial NADH:ubiquinone oxidoreductase (complex I), the largest multiprotein enzyme of the respiratory chain, catalyses the transfer of two electrons from NADH to ubiquinone, coupled to the translocation of four protons across the membrane. In addition to the 14 strictly conserved central subunits, it contains a variable number of accessory subunits. At present, the best characterized enzyme is complex I from bovine heart, with a molecular mass of about 980 kDa and 32 accessory proteins. In this study, the subunit composition of mitochondrial complex I from the aerobic yeast Y. lipolytica was analysed by a combination of proteomic and genomic approaches. The sequences of 37 complex I subunits were identified. The sum of their individual molecular masses (about 930 kDa) was consistent with the native molecular mass of approximately 900 kDa obtained for Y. lipolytica complex I by BN-PAGE. A genomic search of Y. lipolytica and other eukaryotic databases for homologues of complex I subunits revealed 31 proteins conserved among the examined species. A novel protein named “X” was found in purified Y. lipolytica complex I by MALDI-MS. This protein exhibits homology to the thiosulfate sulfurtransferase enzyme known as rhodanese. The finding of a rhodanese-like protein in isolated complex I of Y. lipolytica suggests a special regulatory mechanism of complex I activity through control of the status of its iron-sulfur clusters. The second part of this study investigated the possible role of one of these extra subunits, the 39 kDa (NUEM) subunit, which is related to the SDR enzyme family. The members of this family function in different redox and isomerization reactions and contain a conserved NAD(P)H-binding site. It has been proposed that the 39 kDa subunit may be involved in a biosynthetic pathway, but its role in complex I is unknown. In contrast to the situation in N. crassa, deletion of the gene encoding the 39 kDa subunit in Y. lipolytica led to the absence of fully assembled complex I. This result might indicate different pathways of complex I assembly in the two organisms. Several site-directed mutations were generated in the nucleotide-binding motif. These either had no effect on enzyme activity and NADPH binding, or prevented complex I assembly. Mutations of arginine-65, which is located at the end of the second β-strand and is responsible for the selective interaction with the 2’-phosphate group of NADPH, retained complex I activity in mitochondrial membranes, but the affinity for the cofactor was markedly decreased. Purification of complex I from these mutants resulted in a decrease or loss of ubiquinone reductase activity. It is very likely that replacement of R65 not only decreased the affinity for NADPH but also destabilized the enzyme through steric changes in the 39 kDa subunit. These data indicate that NADPH bound to the 39 kDa subunit (NUEM) is not essential for complex I activity, but is probably involved in complex I assembly in Y. lipolytica.
The thesis entitled "Investigations on the significance of nucleo-cytoplasmic transport for the biological function of cellular proteins" aimed to unravel molecular mechanisms in order to improve our understanding of the impact of nucleo-cytoplasmic transport on cellular functions. Within the scope of this work, it could be shown that regulated nucleo-cytoplasmic transport of a subfamily of homeobox transcription factors controlled their intra- and intercellular transport and thereby also influenced their transcriptional activity. This study describes a novel regulatory mechanism that could in general play an important role in the ordered differentiation of complex organisms. Besides cis-acting transport signals, post-translational modifications can also influence the localization and biological activity of proteins in trans. In addition to the known impact of phosphorylation on the transport and activity of STAT1, experimental evidence was provided demonstrating that acetylation affected the interaction of STAT1 with NF-kB p65 and subsequently modulated the expression of apoptosis-inducing NF-kB target genes. The impact of nucleo-cytoplasmic transport on the regulation of apoptosis was underlined by showing that the evolutionary conservation of an NES within the anti-apoptotic protein survivin plays an essential role in its dual function in the inhibition of apoptosis and in ordered cell division. Since survivin is considered a bona fide cancer therapy target, these results strongly encourage future work to identify molecular decoys that specifically inhibit the nuclear export of survivin as novel therapeutics. In order to further dissect the regulation of nuclear transport and to efficiently identify transport inhibitors, cell-based assays are urgently required.
Therefore, the cellular assay systems developed in this work may not only serve to identify synthetic nuclear export and import inhibitors but may also be applied in systematic RNAi screening approaches to identify novel components of the transport machinery. In addition, the translocation-based protease and protein-interaction biosensors can be applied in various biological systems, in particular to identify protein-protein interaction inhibitors of cancer-relevant proteins. In summary, this work not only underlines the general significance of nucleo-cytoplasmic transport for cell biology but also demonstrates its potential for the development of novel therapies against diseases such as cancer and viral infections.
Plural semantics for natural language understanding : a computational proof-theoretic approach
(2005)
The semantics of natural language plurals poses a number of intricate problems, both from a formal and from a computational perspective. In this thesis I investigate problems of representing, disambiguating and reasoning with plurals from a computational perspective. The work defines a computationally suitable representation for important plural constructions, proposes a tractable resolution algorithm for semantic plural ambiguities, and integrates an automatic reasoning component for plurals. My solution combines insights from formal semantics, computational linguistics and automated theorem proving, and is based on the following main ideas. Whereas many existing approaches to plural semantics work on a model-theoretic basis using higher-order representation languages, I propose a proof-theoretic approach to plural semantics based on a flat first-order semantic representation language, thus showing that a trade-off between expressive power and logical tractability can be found. The problem of automatic plural disambiguation is tackled by a deliberate decision to drastically reduce recourse to contextual knowledge and to rely instead on structurally available and thus computationally manageable information. A further central aspect of the solution lies in carefully drawing the borderline between real ambiguity and mere indeterminacy in the interpretation of plural noun phrases. As a practical result of my computational proof-theoretic approach to plural semantics, I can use my methods to perform automated reasoning with plurals by applying advanced first-order theorem provers and model generators available off the shelf. The results are prototypically implemented within the two logic-oriented natural language understanding applications DRoPs and Attempto. DRoPs provides an automatic plural disambiguation component for uncontrolled natural language, whereas Attempto works with a constructive disambiguation strategy for controlled natural language.
Both systems provide tools for the automated analysis of technical texts, allowing users, for example, to detect inconsistencies automatically, to perform question answering, to check whether a conjecture follows from a text, or to find equivalences and redundancies.
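To illustrate the kind of ambiguity such a representation must keep apart, consider a sentence like "three students wrote a paper" (the example sentence and the notation are mine, not taken from the thesis). A flat first-order encoding can reify the group of students as an individual G and then distinguish the two readings:

```latex
% Distributive reading: each member of the group wrote a (possibly different) paper.
\forall x\,\bigl(\mathit{member}(x, G) \rightarrow
  \exists y\,(\mathit{paper}(y) \land \mathit{wrote}(x, y))\bigr)
% Collective reading: the group as a whole wrote one paper.
\exists y\,\bigl(\mathit{paper}(y) \land \mathit{wrote}(G, y)\bigr)
```

Because both formulas are first-order, off-the-shelf theorem provers and model generators can reason with them directly, which is the tractability point the abstract makes.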