Jet physics in ALICE
(2005)
This work assesses the performance of the ALICE detector for the measurement of high-energy jets at mid-pseudo-rapidity in ultra-relativistic nucleus-nucleus collisions at the LHC, and the potential of such measurements for characterizing the partonic matter created in these collisions. In our approach, high-energy jets with E_{T} > 50 GeV are reconstructed with a cone jet finder, as is typically done for jet measurements in hadronic collisions. Within the ALICE framework we study the capabilities of measuring high-energy jets and quantify the obtainable rates and the quality of reconstruction, both in proton-proton and in lead-lead collisions at LHC conditions. In particular, we address whether modification of the jet fragmentation in the charged-particle sector can be detected within the high particle-multiplicity environment of central lead-lead collisions. We treat these topics comparatively in view of an EMCAL proposed to complement the central ALICE tracking detectors. The main activities of the thesis are the following: a) determination of the potential for exclusive jet measurements in ALICE; b) determination of jet rates that can be acquired with the ALICE setup; c) development of a parton energy-loss model; d) simulation and study of the energy-loss effect on jet properties.
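The cone-based reconstruction described above can be illustrated with a minimal sketch. This is illustrative only: the actual analysis uses a full iterative cone algorithm with heavy-ion background subtraction, and the particle list, cone radius and threshold below are hypothetical.

```python
import math

def cone_jets(particles, R=0.7, et_min=50.0):
    """Toy seeded-cone jet finder.

    particles: list of (et, eta, phi) tuples.  The highest-ET remaining
    particle seeds a cone of radius R in (eta, phi); the summed ET inside
    the cone defines a jet if it exceeds et_min.  No axis iteration or
    split/merge step, unlike a production cone algorithm.
    """
    remaining = sorted(particles, reverse=True)  # highest-ET first
    jets = []
    while remaining:
        seed = remaining[0]

        def dist(p):
            # distance in the (eta, phi) plane, phi wrapped to [-pi, pi)
            dphi = (p[2] - seed[2] + math.pi) % (2 * math.pi) - math.pi
            return math.hypot(p[1] - seed[1], dphi)

        cone = [p for p in remaining if dist(p) < R]
        et = sum(p[0] for p in cone)
        if et > et_min:
            jets.append((et, seed[1], seed[2]))
        remaining = [p for p in remaining if p not in cone]
    return jets
```

A jet here is just (summed ET, seed eta, seed phi); a real heavy-ion measurement additionally corrects the cone energy for the underlying-event background.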
The results presented here strongly indicate that ubiquitination of the recombinant human alpha1 GlyR at the plasma membrane of Xenopus oocytes is involved in receptor internalisation and degradation. Ubiquitination of the human alpha1 GlyR was demonstrated by radio-iodination of plasma membrane-bound alpha1 GlyRs, whose subunits differed in molecular weight by an additional 7, 14 or 21 kDa, corresponding to the molecular weights of one, two and three conjugated ubiquitin molecules, respectively, and by co-isolation of the non-tagged human alpha1 GlyR through hexahistidyl-tagged ubiquitin. Ubiquitin-conjugated GlyRs were prominent at the plasma membrane but could hardly be detected in total cell homogenates, indicating that ubiquitination takes place exclusively at the plasma membrane. Ubiquitination of the alpha1 GlyR at the plasma membrane was no longer detectable when the ten lysine residues of the cytoplasmic loop between transmembrane segments M3 and M4 were replaced by arginines. Nevertheless, proteolytic cleavage continued to take place to the same extent as with the wild-type alpha1 GlyR, suggesting that removal of GlyRs from the plasma membrane and routing to lysosomes for degradation were not dependent on ubiquitination. Replacing a tyrosine in position 339, which had been speculated to be part of an additional endocytosis motif, likewise did not lead to a significant reduction in cleavage of the GlyR alpha1 subunits. However, a mutant lacking both the ubiquitination sites and 339Y was significantly less processed. These results suggest that the GlyR alpha1 subunit harbors at least two endocytosis motifs, which may act independently to regulate the density of the alpha1 GlyR; apparently, each of the two signals can entirely compensate for the loss of the other.
Part two of this Dissertation demonstrates that the correct topology of the glycine receptor alpha1 subunit depends critically on six positively charged residues within a basic cluster, RFRRKRR, located in the large cytoplasmic loop following the C-terminal end of M3. Neutralization of one or more charges of this cluster, but not of other charged residues in the M3-M4 loop, led to aberrant translocation of the M3-M4 loop into the endoplasmic reticulum lumen. However, when two of the three basic charges located in the ectodomain linking M2 and M3 were neutralized in addition to two charges of the basic cluster, endoplasmic reticulum disposition of the M3-M4 loop was prevented. We conclude that a high density of basic residues C-terminal to M3 is required to compensate for the presence of positively charged residues in the M2-M3 ectodomain, which otherwise impair correct membrane integration of the M3 segment. Part three of this Dissertation describes my contribution (blue native PAGE analysis of metabolically labeled alpha7 and 5HT3A receptors and the examination of the glycosylation state of metabolically labeled alpha7 subunits) to a work on the limited assembly capacity of Xenopus oocytes for nicotinic alpha7 subunits. While 5HT3A subunits combined efficiently into pentamers, alpha7 subunits existed in various assembly states, including trimers, tetramers, pentamers and aggregates. Only alpha7 subunits that completed the assembly process to homopentamers acquired complex-type carbohydrates and appeared at the cell surface. We conclude that Xenopus oocytes have a limited capacity to guide the assembly of alpha7 subunits, but not of 5HT3A subunits, to homopentamers. Accordingly, ER retention of imperfectly assembled alpha7 subunits, rather than inefficient routing of fully assembled alpha7 receptors to the cell surface, limits surface expression levels of alpha7 nicotinic acetylcholine receptors.
Part four of this Dissertation describes my contribution (the biochemical analysis of the human P2X2 and P2X6 subtypes) to studies on the quaternary structure of P2X receptors. Armaz Aschrafi, the main author of the paper, showed that, following isolation under non-denaturing conditions from Xenopus oocytes, the His-rP2X2 protein migrated on blue native PAGE predominantly in an aggregated form; the only discrete protein band detectable could be assigned to homotrimers of the His-rP2X2 subunit. Because of the exceptional assembly behaviour of the rP2X2 protein compared with the rP2X1, rP2X3, rP2X4 and rP2X5 proteins, its human orthologue was investigated in the same manner. In contrast to rP2X2 subunits, hP2X2 subunits migrated under virtually identical conditions in a single defined assembly state, which could be clearly assigned to a trimer. P2X6 subunits represent the sole P2X subtype that is unable to form functional homomeric receptors in Xenopus oocytes. Blue native PAGE analysis of metabolically labeled hP2X6 receptors and examination of their glycosylation state revealed that hP2X6 subunits form tetramers and aggregates that are not exported to the plasma membrane of Xenopus oocytes.
In the present work, the Heidelberg electron beam ion trap (EBIT) at the Max-Planck-Institut für Kernphysik (MPIK) has been used to produce and trap highly charged argon ions and to study their magnetic dipole (M1) forbidden transitions. These transitions are of relativistic origin and hence provide unique possibilities for precise studies of relativistic effects in many-electron systems. In this way, the transition energies of the 1s²2s²2p ²P3/2 - ²P1/2 transition in Ar13+ and the 1s²2s2p ³P1 - ³P2 transition in Ar14+ were compared for the 36Ar and 40Ar isotopes. The observed isotopic effect confirms the relativistic nuclear recoil corrections due to the finite nuclear mass in a recent calculation by Tupitsyn [TSC03], in which major inconsistencies of earlier theoretical methods were corrected for the first time. The finite-mass, or recoil, effect, composed of the normal mass shift (NMS) and the specific mass shift (SMS), was corrected for the relativistic contributions RNMS and RSMS. The present experimental results show that recoil effects at the Breit level are indeed very important, as are the effects of the correlated relativistic dynamics in a many-electron ion.
We calculate thermal photon and neutral pion spectra in ultrarelativistic heavy-ion collisions in the framework of three-fluid hydrodynamics. Both spectra are quite sensitive to the equation of state used. In particular, within our model, recent data for S + Au at 200 AGeV can only be understood if a scenario with a phase transition (possibly to a quark-gluon plasma) is assumed. Results for Au + Au at 11 AGeV and Pb + Pb at 160 AGeV are also presented.
Different numerical approaches and algorithms arising in the context of modelling cellular tissue evolution are discussed in this thesis. The numerical tool of three-dimensional weighted kinetic and dynamic Delaunay triangulations, which is particularly suited to off-lattice agent-based models, is introduced and its applicability to adjacency detection discussed. As no existing code incorporates all features necessary for tissue modelling, algorithms for the incremental insertion and deletion of points in Delaunay triangulations and for the restoration of the Delaunay property for triangulations of moving point sets are introduced. In addition, the numerical solution of reaction-diffusion equations and their connection to agent-based cell-tissue simulations is discussed. To demonstrate the applicability of the numerical algorithms, biological problems are studied for different model systems: for multicellular tumour spheroids, the weighted Delaunay triangulation provides a great advantage for adjacency detection, but owing to the large cell numbers the model used for the cell-cell interaction has to be simplified to allow a numerical solution. The agent-based model reproduces macroscopic experimental signatures, but some parameters cannot be fixed with the available data. A much simpler continuum model based on reaction-diffusion equations, analogous in its key properties, is likewise capable of reproducing the experimental data. The two modelling approaches make differing predictions for experimental signatures that have not yet been quantified. In the case of the epidermis, a smaller system is considered, which enables a more complete treatment of the equations of motion. In particular, a control mechanism of cell proliferation is analysed; simple assumptions suffice to explain the flow equilibrium observed in the epidermis. In addition, the effect of adhesion on the survival chances of cancerous cells is studied.
For some regions in parameter space, stochastic effects may completely alter the outcome. The findings stress the need to establish a defined experimental model in order to fix the unknown model parameters and to rule out competing models.
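The empty-circumcircle criterion behind Delaunay-based adjacency detection can be illustrated with a brute-force 2-D sketch. This is illustrative only: the thesis uses incremental, weighted, three-dimensional algorithms, whereas this O(n^4) version merely shows why Delaunay edges make good cell-neighbour candidates.

```python
from itertools import combinations

def orient(a, b, c):
    """Twice the signed area of triangle abc (> 0 if counter-clockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circumcircle(a, b, c, d):
    """Incircle determinant: > 0 iff d lies inside the circumcircle of a
    counter-clockwise triangle abc."""
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    return (ax * (by * cw - bw * cy)
            - ay * (bx * cw - bw * cx)
            + aw * (bx * cy - by * cx))

def delaunay_edges(points):
    """Adjacency detection: points i and j are neighbours if some triangle
    (i, j, k) has a circumcircle empty of all other points."""
    n = len(points)
    edges = set()
    for i, j in combinations(range(n), 2):
        for k in range(n):
            if k in (i, j):
                continue
            o = orient(points[i], points[j], points[k])
            if o == 0:
                continue  # degenerate (collinear) triangle
            # multiplying by o makes the test orientation-independent;
            # <= 0 means "outside or on the circle", i.e. circle is empty
            if all(l in (i, j, k) or
                   o * in_circumcircle(points[i], points[j],
                                       points[k], points[l]) <= 0
                   for l in range(n)):
                edges.add((i, j))
                break
    return edges
```

In a tissue simulation, each returned edge links two cells whose interaction forces are then evaluated; the incremental algorithms in the thesis maintain this edge set under point motion instead of recomputing it from scratch.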
Mobile telephony and the mobile internet are driving a new application paradigm: location-based services (LBS). Based on a person’s location and context, personalized applications can be deployed; internet-based systems will thus continuously collect and process the location of an identified customer in relation to a personal context. One of the challenges in designing LBS infrastructures is the concurrent design of economic infrastructures and the preservation of the privacy of the subjects whose location is tracked. This presentation explains typical LBS scenarios, the resulting new privacy challenges and user requirements, and raises economic questions about privacy design. These topics are connected to “mobile identity” in order to derive the particular identity-management issues that can be found in LBS.
In this paper, I examine the potential of mobile alerting services to empower investors to react quickly to critical market events. To this end, an analysis of short-term (intraday) price effects is performed. I find abnormal returns following company announcements that are completed within a timeframe of minutes. To make use of these findings, these price effects are predicted using pre-defined external metrics and different estimation methodologies. Extending previous research, the results support the view that artificial neural networks and multiple linear regression are good estimation models for forecasting price effects on an intraday basis as well. As most of the price-effect magnitude and effect delay can be estimated correctly, it is demonstrated how a suitable mobile alerting service combining a low level of user intrusiveness with timely information supply can be designed.
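The multiple-linear-regression estimation mentioned above can be sketched as follows. This is a toy illustration with made-up announcement metrics; the paper's actual feature set and data are not reproduced here.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations.

    X: list of feature rows (an intercept column is added),
    y: targets (e.g. observed abnormal returns).
    Returns the coefficient vector [intercept, b1, b2, ...].
    """
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # normal equations  A b = c  with  A = X'X and c = X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for j in range(col, k):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    # back substitution
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b
```

With hypothetical announcement metrics as features and the observed intraday abnormal return as target, the fitted coefficients provide the kind of price-effect forecast an alerting service could act on.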
My graduate thesis is on the "Structural studies of membrane transport proteins". Transporters are membrane proteins with multiple membrane-spanning α-helices. They are dynamic and diverse proteins, undergoing large conformational changes and transporting a wide range of substrates. Based on their energy source, they can be classified into primary and secondary transport systems: primary transport systems are driven by chemical (ATP) or light energy, while secondary transporters utilize ion gradients to transport substrates. I began my PhD dissertation on secondary transporters with two-dimensional crystallization and electron crystallographic analysis, and recently my focus has also shifted towards 3D crystallization. The following projects constitute my PhD thesis: 1) 2D crystallization of MjNhaP1 and pH-induced structural change: MjNhaP1, a Na+/H+ antiporter regulated by pH, has been implicated in the homeostasis of H+ and Na+ in Methanococcus jannaschii, a hyperthermophilic archaeon that grows optimally at 85°C. MjNhaP1 was cloned and expressed in E. coli. Two-dimensional crystals were obtained from the purified protein at pH 4, and electron cryo-microscopy yielded an 8 Å projection map. The map of MjNhaP1 shows elongated densities in the centre of the dimer and a cluster of density peaks on either side of the dimer core, indicative of a bundle of 4-6 membrane-spanning helices. The effect of pH on the structure of MjNhaP1 was studied in situ in 2D crystals, revealing a major change in density within the helix bundle relative to the dimer interface. This change occurred at pH 6 and above. The two conformations at low and high pH most likely represent the closed and open states of the antiporter, respectively. This is the first instance in which a conformational change associated with the regulation of a secondary transporter has been mapped structurally.
Reconstruction of a 3D map and a high-resolution structure by X-ray crystallography will be necessary to understand the mechanism of ion transport and its regulation by pH. 2) 2D crystallization of the proline transporter: the proline transporter (PutP) from E. coli belongs to the sodium-solute symporter family, which includes the disease-related sodium-dependent glucose and iodide transporters in humans. Sodium and proline are co-transported with a stoichiometry of 1:1. Purified PutP was reconstituted to yield 2D crystals that were hexagonal in nature. The 2D crystals had a tendency to stack, indicating a propensity to form 3D crystals. A projection map of PutP from negatively stained crystals showed a trimeric arrangement of the protein, whereas other members of the SSF family have been shown to be monomers. My analysis of the oligomeric state of PutP in detergent by blue native gel indicates a monomer in detergent solution. It is likely that PutP can function as a monomer, but at higher concentrations and in lipid bilayers it tends to form trimers. 3) Oligomeric state and crystallization of the carnitine transporter from E. coli: the E. coli carnitine transporter (CaiT) belongs to the BCCT (betaine, carnitine and choline transporter) superfamily, which transports molecules with quaternary amine groups. CaiT is predicted to span the membrane 12 times and acts as an L-carnitine/γ-butyrobetaine exchanger. Unlike other members of this transporter family, it does not require an ion gradient and does not respond to osmotic stress. Over-expression of the protein yielded ~2 mg of protein per litre of culture. The structure and oligomeric state of the protein were analyzed in detergent and in lipid bilayers. Blue native gel electrophoresis indicated that CaiT is a trimer in detergent solution, and gel filtration and cross-linking studies further support this. Reconstitution of CaiT into lipid bilayers resulted in 2D crystals, and analysis of negatively stained 2D crystals confirmed that CaiT is a trimer in the membrane.
Initial 3D crystallization trials have been successful; the crystals currently diffract to 6 Å and are being improved. 4) The monomeric porin OmpG: OmpG is a bacterial outer-membrane β-barrel protein. It is monomeric, and its size (33 kDa) makes it a prime candidate for structure determination using the recently developed method of solid-state NMR (work in collaboration with Prof. Hartmut Oskinat, FMP, Berlin). A long-term aim would be to study porins as templates for designing nanopores for DNA sequencing and identification. I have expressed OmpG in inclusion bodies and refolded it with an efficiency of >90% into a functional form using detergent. OmpG was then crystallized in 2D, yielding an 8 Å projection map whose structure was similar to that of the native protein. In addition, these crystals were used for structure determination by solid-state NMR: an initial spectrum of isotopically labeled OmpG has allowed the identification of specific amino acid residues, including threonine and proline. I also obtained 3D crystals in detergent that diffract to 5.5 Å and are being improved.
Protein-protein interactions within the plane of cellular membranes play a key role in many biological processes, in particular transmembrane signaling. A prominent example is the ligand-induced crosslinking of cytokine receptors, where 3-dimensional cytokine binding followed by 2-dimensional interaction between the receptor subunits has been recognized as important for regulating signaling specificity. The fundamental importance of such coupled interactions for cell-surface receptor activation has stimulated numerous theoretical studies, which have hardly been confirmed experimentally. An experimental approach was developed to measure the interactions and real-time kinetics of the type I interferon (IFN)-induced assembly of the interferon receptor subunits ifnar2 and ifnar1 on membranes, and determinants of the 2-dimensional interactions, such as dimensionality, size, valency, orientation, membrane fluidity and receptor density, were quantitatively addressed. The C-terminal decahistidine-tagged extracellular domains (EC) of ifnar1 and ifnar2 were site-specifically tethered onto a solid-supported fluid lipid membrane carrying covalently attached chelator bis-nitrilotriacetic acid (bis-NTA) groups. Interactions on the lipid bilayer were detected with a novel solid-phase detection technique, which allows simultaneous detection of ligand binding to membrane-anchored receptors and of the lateral interaction between them in real time. This was achieved by combining two optical techniques: label-free reflectance interferometry (RIf) and total internal reflection fluorescence spectroscopy (TIRFS). Fluorescence signals on the order of 10 fluorophores/µm² were detected without substantial photobleaching, and the sensitivity of the label-free interferometric detection was in the range of 10 pg/mm². Crosstalk between the two signals was eliminated by means of spectral separation.
Fluorescence was detected in the visible region and RIf was performed at 800 nm in the near infrared. Flow-through conditions made it possible to automate experiments and to measure binding events as fast as ~5 s⁻¹. Using this technique we dissected the interactions involved in IFN-induced ifnar crosslinking. The 2-dimensional association and dissociation rate constants were independently determined by tethering a high stoichiometric excess of one of the receptor subunits and comparing the dissociation of the labelled ligand away from the membrane in the absence and presence of the non-labelled high-affinity competitor. Dissociation traces were fitted with a two-step dissociation model, the first step being the 2-dimensional separation of the ternary complex, followed by the 3-dimensional dissociation of the ligand into solution. Label-free RIf detection allowed absolute parameterization of the 2-dimensional concentrations of the ifnar subunits on the membrane, while the TIRFS signal provided high sensitivity for the ligand dissociation and was correlated against the RIf signal before fitting. These features of the detection system allowed us to parameterize the model, so that the 2-dimensional association or dissociation rate constants were the only variables during fitting. A further, FRET-based binding assay was developed to determine the 2-dimensional dissociation rate constant using a pulse-chase approach. The donor fluorescence from ifnar2-EC was quenched upon ternary complex formation with the acceptor-labelled IFN and the non-labelled ifnar1-EC. The equilibrium was perturbed by rapidly tethering a substantial excess of non-labelled ifnar2-EC onto the membrane. The exchange of the labelled ifnar2-EC with the non-labelled one was monitored as a decrease in the FRET signal, with the 2-dimensional dissociation of ifnar2-EC from the ternary complex being the rate-limiting step.
Based on several mutants and variants of the interacting proteins, the effect of different rate constants and of receptor orientation on the 2-dimensional crosslinking dynamics was studied. We identified several critical features of 2-dimensional interactions on membranes that cannot readily be concluded from solution binding assays. The restricted rotation and the increased lifetime of the encounter complex due to the high membrane viscosity are the main determinants of 2-dimensional association. Tethering ifnar1-EC to the membrane via an N-terminal decahistidine tag decreased the 2-dimensional association rate constant 4-5-fold. Electrostatic attraction and steering, important mechanisms for enhancing association rate constants between soluble proteins, are not pronounced for interactions on the membrane. Protein orientation due to membrane anchoring dominates over electrostatic effects and, together with the increased lifetime of the encounter complex, has the consequence that the 2-dimensional association rate constants are quite similar and do not correlate with the association rate constants in solution. The 2-dimensional dissociation rate constants were generally 2-5-fold lower than the corresponding 3-dimensional dissociation rate constants in solution. Possible explanations are that the long lifetime of the encounter complex stabilizes the ternary complex or that membrane tethering affects the interaction diagram. In conclusion, combined TIRFS-RIf detection turned out to be a powerful and versatile technique for characterizing protein-protein interactions on membranes.
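The two-step dissociation model used for fitting the traces can be written out explicitly. This is a sketch in my own notation under simplifying assumptions (all complexes start ternary, no rebinding); the actual analysis also correlates the TIRFS trace against the RIf signal.

```python
import math

def bound_fraction(t, k_2d, k_3d):
    """Fraction of labelled ligand still membrane-bound at time t under a
    sequential two-step model: the ternary complex separates in the
    membrane plane with rate k_2d, then the ligand dissociates from the
    binary complex into solution with rate k_3d.
    """
    ternary = math.exp(-k_2d * t)
    # standard solution of dB/dt = k_2d*T - k_3d*B with B(0) = 0
    binary = k_2d / (k_3d - k_2d) * (math.exp(-k_2d * t) - math.exp(-k_3d * t))
    return ternary + binary
```

Fitting k_2d to a dissociation trace with k_3d fixed from a solution measurement mirrors the strategy in the abstract, where the 2-dimensional rate constants are the only free parameters.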
This paper makes a case for the future development of European corporate law through regulatory competition rather than EC legislation. For the first time, it is becoming legally possible for firms within the EU to select the national company law that they wish to govern their activities. A significant number of firms can be expected to exercise this freedom, and national legislatures can be expected to respond by seeking to make their company laws more attractive to firms. Whilst the UK is likely to be the single most successful jurisdiction in attracting firms, the presence of different models of corporate governance within Europe makes it quite possible that competition will result in specialisation rather than convergence, and that no Member State will come to dominate as Delaware has done in the US. Procedural safeguards in the legal framework will direct the selection of laws which increase social welfare, as opposed simply to the welfare of those making the choice. Given that European legislators cannot be sure of the ‘optimal’ model for company law, the future of European company law-making is better left with the Member States than cast in the form of harmonized legislation.
Virtual screening of potential bioactive substances using the support vector machine approach
(2005)
This dissertation is a cumulative work comprising eight scientific publications (five published, two submitted and one in preparation). In this research project, applications of machine learning to the virtual screening of molecular databases were carried out. The primary goal was the introduction and validation of the support vector machine (SVM) approach for virtual screening for potential drug candidates. The introduction of the thesis describes the role of virtual screening in drug design. Virtual screening methods can be applied in almost every area of pharmaceutical research: machine learning can be employed from the selection of the initial molecules, through lead-structure optimization, to the prediction of ADMET (absorption, distribution, metabolism, toxicity) properties. Section 4.2 presents methods that can be used to describe chemical structures in order to bring them into a format (descriptors) that can serve as input for machine-learning methods such as neural networks or SVMs. The focus is on the methods used in the present work. Most methods compute descriptors based only on the two-dimensional (2D) structure; standard examples are physicochemical properties, atom and bond counts, etc. (Section 4.2.1). CATS descriptors, a topological pharmacophore concept, are also 2D-based (Section 4.2.2). Another type of descriptor captures properties derived from a three-dimensional (3D) molecular model. The success of such a description depends strongly on how representative the 3D conformation used to compute the descriptor is.
A further description we employed was fingerprints. In our case the fingerprints used were unsuitable for training neural networks, as the fingerprint vector had too many dimensions (~10^5). In contrast, training SVMs with fingerprints worked: compared with other methods, SVMs have the advantage of classifying well in very high-dimensional spaces. This combination of SVMs and fingerprints was a novelty, introduced into chemoinformatics by us for the first time. In Section 4.3 I focus on the SVM method, which was used for almost all classification tasks in this work and was a central topic of the dissertation. For reasons of space, a detailed description of SVMs was omitted from the attached publications; Section 4.3 therefore gives a complete introduction to SVMs, including a full discussion of SVM theory: the optimal hyperplane, the soft-margin hyperplane, and quadratic programming as the technique for finding this optimal hyperplane. Section 4.3 also discusses kernel functions, which determine the exact form of the optimal hyperplane. Section 4.4 introduces the various methods we used for descriptor selection, working out the difference between "filter"- and "wrapper"-based descriptor selection. In Publication 3 (Section 7.3) we compared the advantages and disadvantages of filter- and wrapper-based methods in virtual screening. Section 7 consists of the publications containing our research results. Our first publication (Publication 1) was a review article (Section 7.1).
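The point that SVMs cope with ~10^5-dimensional fingerprint vectors can be sketched with a minimal linear SVM trained by stochastic subgradient descent. This is a Pegasos-style illustration on hypothetical sparse fingerprints, not the software used in the thesis.

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM (hinge loss + L2 penalty) trained with
    Pegasos-style stochastic subgradient steps.

    data: list of (fingerprint, label) pairs, where a fingerprint is a
    tuple of set-bit indices and the label is +1 or -1.  The weight
    vector is a dict, so update cost scales with the number of set bits
    rather than with the full fingerprint dimensionality.
    """
    rng = random.Random(seed)
    w, t = {}, 0
    for _ in range(epochs):
        for fp, label in rng.sample(data, len(data)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = label * sum(w.get(i, 0.0) for i in fp)
            for i in list(w):            # L2 shrinkage of all weights
                w[i] *= 1.0 - eta * lam
            if margin < 1.0:             # hinge-loss subgradient step
                for i in fp:
                    w[i] = w.get(i, 0.0) + eta * label
    return w

def predict(w, fp):
    return 1 if sum(w.get(i, 0.0) for i in fp) >= 0.0 else -1
```

Because both data and weights stay sparse, the nominal dimensionality of the fingerprint space never has to be materialized, which is one practical reason the fingerprint/SVM combination works where dense neural-network training did not.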
In this article we gave an overview of the applications of SVMs in bio- and chemoinformatics. We discuss applications of SVMs to gene-chip analysis, DNA sequence analysis, and the prediction of protein structures and protein interactions. We also describe examples where SVMs were used to predict the subcellular localization of proteins. It becomes clear that SVMs were not yet widespread in the field of virtual screening. To justify the use of SVMs as the main method of our research, in our next publication (Publication 2, Section 7.2) we carried out a detailed comparison between SVMs and various neural networks, which had established themselves as a standard method in virtual screening. The comparison concerned the separation of drug-like from non-drug-like molecules ("drug-likeness" prediction). The SVM classified 82% of all molecules correctly, and the classification was more robust than with three-layer feedforward ANNs using various numbers of hidden neurons. In this project we computed several descriptors to characterize the molecules: Ghose-Crippen fragment descriptors [86], physicochemical properties [9] and topological pharmacophores (CATS) [10]. The development of further methods building on the SVM concept is described in the publications in Sections 7.3 and 7.8. Publication 3 presents the development of a new SVM-based method for selecting the descriptors relevant to a given activity. The same descriptors as in the project described above were used. As characteristic groups of molecules we selected various subsets of the COBRA database: 195 thrombin inhibitors, 226 kinase inhibitors and 227 factor Xa inhibitors.
We succeeded in reducing the number of descriptors from originally 407 to about 50 without a significant loss in classification accuracy. We compared our method with a standard method for this application, the Kolmogorov-Smirnov statistic. The SVM-based method proved better than the comparison methods in every case considered, in terms of prediction accuracy at the same number of descriptors. A detailed description is given in Section 4.4, where various "wrappers" for descriptor selection are also described. Publication 8 describes the application of active learning with SVMs. The idea of active learning is to select molecules for the learning procedure from the border region between the molecule classes to be distinguished; in this way the local classification can be improved. The following groups of molecules were used: ACE (angiotensin-converting enzyme), COX2 (cyclooxygenase 2), CRF (corticotropin-releasing factor) antagonists, DPP (dipeptidyl peptidase) IV, HIV (human immunodeficiency virus) protease, nuclear receptors, NK (neurokinin) receptors, PPAR (peroxisome proliferator-activated receptor), thrombin, GPCR and matrix metalloproteinases. As this retrospective study showed, active learning could improve the performance of virtual screening. It remains to be seen whether the approach will become established, since despite the gain in prediction accuracy it is computationally expensive owing to the repeated SVM training. The publications in Sections 7.5, 7.6 and 7.7 (Publications 5-7) show practical applications of our SVM methods in drug design, in combination with other techniques such as similarity searching and neural networks for property prediction. In two cases this approach led us to novel ligands for COX-2 (cyclooxygenase 2) and for dopamine D3/D2 receptors.
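The border-region selection on which active learning rests can be sketched in a few lines. This is illustrative only, assuming a linear model held as a sparse weight dict and fingerprints given as tuples of set-bit indices (hypothetical names, not the thesis code).

```python
def query_by_margin(w, pool, n=2):
    """Active-learning query step: rank unlabelled fingerprints by their
    distance to the decision boundary, |w . x|, and return the n closest.
    These border-region molecules are the most informative ones to label
    and add to the training set before retraining the SVM.
    """
    return sorted(pool, key=lambda fp: abs(sum(w.get(i, 0.0) for i in fp)))[:n]
```

Each query-label-retrain cycle requires a fresh SVM fit, which is the computational cost the summary above refers to.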
Wir konnten somit klar zeigen, dass SVM-Methoden für das virtuelle Screening von Substanzdatensammlungen sinnvoll eingesetzt werden können. Es wurde im Rahmen der Arbeit auch ein schnelles Verfahren zur Erzeugung großer kombinatorischer Molekülbibliotheken entwickelt, welches auf der SMILES Notation aufbaut. Im frühen Stadium des Wirstoffdesigns ist es wichtig, eine möglichst „diverse“ Gruppe von Molekülen zu testen. Es gibt verschiedene etablierte Methoden, die eine solche Untermenge auswählen können. Wir haben eine neue Methode entwickelt, die genauer als die bekannte MaxMin-Methode sein sollte. Als erster Schritt wurde die „Probability Density Estimation“ (PDE) für die verfügbaren Moleküle berechnet. [78] Dafür haben wir jedes Molekül mit Deskriptoren beschrieben und die PDE im N-dimensionalen Deskriptorraum berechnet. Die Moleküle wurde mit dem Metropolis Algorithmus ausgewählt. [87] Die Idee liegt darin, wenige Moleküle aus den Bereichen mit hoher Dichte auszuwählen und mehr Moleküle aus den Bereichen mit niedriger Dichte. Die erhaltenen Ergebnisse wiesen jedoch auf zwei Nachteile hin. Erstens wurden Moleküle mit unrealistischen Deskriptorwerten ausgewählt und zweitens war unser Algorithmus zu langsam. Dieser Aspekt der Arbeit wurde daher nicht weiter verfolgt. In Veröffentlichung 6 (Abschnitt 7.6) haben wir in Zusammenarbeit mit der Molecular-Modeling Gruppe von Aventis-Pharma Deutschland (Frankfurt) einen SVM-basierten ADME Filter zur Früherkennung von CYP 2C9 Liganden entwickelt. Dieser nichtlineare SVM-Filter erreichte eine signifikant höhere Vorhersagegenauigkeit (q2 = 0.48) als ein auf den gleichen Daten entwickelten PLS-Modell (q2 = 0.34). Es wurden hierbei Dreipunkt-Pharmakophordeskriptoren eingesetzt, die auf einem dreidimensionalen Molekülmodell aufbauen. Eines der wichtigen Probleme im computerbasierten Wirkstoffdesign ist die Auswahl einer geeigneten Konformation für ein Molekül. Wir haben versucht, SVM auf dieses Problem anzuwenden. 
Der Trainingdatensatz wurde dazu mit jeweils mehreren Konformationen pro Molekül angereichert und ein SVM Modell gerechnet. Es wurden anschließend die Konformationen mit den am schlechtesten vorhergesagten IC50 Wert aussortiert. Die verbliebenen gemäß dem SVM-Modell bevorzugten Konformationen waren jedoch unrealistisch. Dieses Ergebnis zeigt Grenzen des SVM-Ansatzes auf. Wir glauben jedoch, dass weitere Forschung auf diesem Gebiet zu besseren Ergebnissen führen kann.
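The density-inverted Metropolis selection sketched in the diversity paragraph above can be written down in a few lines. The kernel density estimate and all parameters below are illustrative stand-ins for the PDE of [78] and the Metropolis scheme of [87]; the sketch also makes the reported drawbacks visible, since the O(N^2) density step is exactly the kind of computation that made the original algorithm slow.

```python
import numpy as np

def gaussian_pde(points, h=0.5):
    # Naive fixed-bandwidth Gaussian kernel density estimate at each point;
    # the O(N^2) pairwise term is the costly step mentioned in the text.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).sum(axis=1)

rng = np.random.default_rng(1)
# Toy 2-D descriptor space: one dense cluster plus a sparse fringe.
mols = np.vstack([rng.normal(0.0, 0.3, size=(300, 2)),
                  rng.normal(3.0, 1.5, size=(30, 2))])

weight = 1.0 / gaussian_pde(mols)      # favor molecules in low-density regions

current, picked = int(rng.integers(len(mols))), []
for _ in range(2000):                  # Metropolis walk over candidate indices
    proposal = int(rng.integers(len(mols)))
    if rng.random() < min(1.0, weight[proposal] / weight[current]):
        current = proposal
    picked.append(current)

counts = np.bincount(picked, minlength=len(mols))
sparse_share = counts[300:].sum() / len(picked)
print(f"time spent on the sparse 9% of molecules: {sparse_share:.0%}")
```

The walk spends most of its time on the sparse fringe, which is the intended oversampling of low-density regions; it is also why outliers with unrealistic descriptor values get picked, the first drawback reported above.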
After a brief introduction to QCD and effective models in the first chapter, I analyze the dependence of the QCD transition temperature on the quark (or pion) mass in the second chapter. I find that a linear sigma model, which links the transition to chiral symmetry restoration, predicts a much stronger dependence of T_c on m_pi than seen in present lattice data for m_pi >~ 0.4 GeV. On the other hand, an effective Lagrangian for the Polyakov loop requires only small explicit symmetry breaking to describe T_c(m_pi) in the above mass range. In the third and fourth chapters, I study the linear sigma model with O(N) symmetry at nonzero temperature in the framework of the Cornwall-Jackiw-Tomboulis formalism. Extending the set of two-particle irreducible diagrams by adding sunset diagrams to the usual Hartree-Fock (or Hartree) contributions, I derive a new approximation scheme which extends the standard Hartree-Fock (or Hartree) approximation by the inclusion of nonzero decay widths.
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5' by 5' grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 of the 116 countries with information on artificial drainage areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
We find that on average consumers chose the contract that ex post minimized their net costs. A substantial fraction of consumers (about 40%) still chose the ex post sub-optimal contract, with some incurring hundreds of dollars of avoidable interest costs. Nonetheless, the probability of choosing the sub-optimal contract declines with the dollar magnitude of the potential error, and consumers with larger errors were more likely to subsequently switch to the optimal contract. Thus most of the errors appear not to have been very costly, with the exception that a small minority of consumers persists in holding substantially sub-optimal contracts without switching. Classification: G11, G21, E21, E51
Using a set of regional inflation rates, we examine the dynamics of inflation dispersion within the U.S.A., Japan and across U.S. and Canadian regions. We find that inflation rate dispersion is significant throughout the sample period in all three samples. Based on methods applied in the empirical growth literature, we provide evidence in favor of significant mean reversion (β-convergence) in inflation rates in all considered samples. The evidence on σ-convergence is mixed, however. Observed declines in dispersion are usually associated with decreasing overall inflation levels, which indicates a positive relationship between mean inflation and overall inflation rate dispersion. Our findings for the within-distribution dynamics of regional inflation rates show that dynamics are largest for Japanese prefectures, followed by U.S. metropolitan areas. For the combined U.S.-Canadian sample, we find a pattern of within-distribution dynamics that is comparable to that found for regions within the European Monetary Union (EMU). In line with findings in the so-called 'border literature', these results suggest that frictions across European markets are at least as large as those across, e.g., North American markets. Classification: E31, E52, E58
Using a unique data set of regional inflation rates, we examine the extent and dynamics of inflation dispersion in major EMU countries before and after the introduction of the euro. For both periods, we find strong evidence in favor of mean reversion (β-convergence) in inflation rates. However, half-lives to convergence are considerable and seem to have increased after 1999. The results indicate that the convergence process is nonlinear in the sense that its speed decreases the further convergence has proceeded. An examination of the dynamics of overall inflation dispersion (σ-convergence) shows that there was a decline in dispersion in the first half of the 1990s. For the second half of the 1990s, no further decline can be observed. At the end of the sample period, dispersion even increased. The existence of large persistence in European inflation rates is confirmed when distribution dynamics methodology is applied. At the end of the paper we present evidence for the sustainability of the ECB's inflation target of an EMU-wide average inflation rate of less than but close to 2%. Classification: E31, E52, E58
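The β-convergence test referred to in the two inflation-dispersion abstracts above is, in essence, a regression of the change in a region's inflation rate on its lagged level: a negative slope indicates mean reversion, and the half-life to convergence follows from the implied AR(1) coefficient. The sketch below uses purely synthetic panel data and a pooled OLS slope; no figures from the papers are reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
T, R = 120, 10                          # periods x regions (synthetic panel)
beta_true = -0.2                        # mean reversion toward the common level
pi = np.zeros((T, R))
for t in range(1, T):
    pi[t] = (1 + beta_true) * pi[t - 1] + rng.normal(0, 0.5, R)

d_pi = (pi[1:] - pi[:-1]).ravel()       # dependent: change in regional inflation
lag = pi[:-1].ravel()                   # regressor: lagged inflation level
beta_hat = np.polyfit(lag, d_pi, 1)[0]  # pooled OLS slope

# half-life: number of periods until half of an initial deviation has decayed
half_life = np.log(0.5) / np.log(1 + beta_hat)
print(f"beta = {beta_hat:.2f}, half-life = {half_life:.1f} periods")
```

With β = -0.2 the implied half-life is about three periods; the papers' finding of "considerable" half-lives corresponds to estimates of β much closer to zero.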
The paper documents lack of awareness of financial assets in the 1995 and 1998 Bank of Italy Surveys of Household Income and Wealth. It then explores the determinants of awareness, and finds that the probability that survey respondents are aware of stocks, mutual funds and investment accounts is positively correlated with education, household resources, long-term bank relations and proxies for social interaction. Lack of financial awareness has important implications for understanding the stockholding puzzle and for estimating stock market participation costs. Classification: E2, D8, G1
The theory of intertemporal consumption choice makes sharp predictions about the evolution of the entire distribution of household consumption, not just about its conditional mean. In the paper, we study the empirical transition matrix of consumption using a panel drawn from the Bank of Italy Survey of Household Income and Wealth. We estimate the parameters that minimize the distance between the empirical and the theoretical transition matrix of the consumption distribution. The transition matrix generated by our estimates matches the empirical matrix remarkably well, both in the aggregate and in samples stratified by education. Our estimates strongly reject the consumption insurance model and suggest that households smooth income shocks to a lesser extent than implied by the permanent income hypothesis. Classification: D52, D91, I30
Trusting the stock market
(2005)
We provide a new explanation for the limited stock market participation puzzle. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function not only of the objective characteristics of the stock, but also of the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less. The calibration of the model shows that this problem is sufficiently severe to account for the lack of participation of some of the richest investors in the United States as well as for differences in the rate of participation across countries. We also find evidence consistent with these propositions in Dutch and Italian micro data, as well as in cross-country data. Classification: D1, D8
Credit card debt puzzles
(2005)
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully specified accountant-shopper model, to show that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers. Classification: E210, G110
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when they face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare. It can lead to contagion between the two sectors and increase the risk of crises. Classification: G21, G22
How do markets spread risk when events are unknown or unknowable and were not anticipated in an insurance contract? While the policyholder can "hold up" the insurer for extra-contractual payments, the continuing gains from trade on a single contract are often too small to yield useful coverage. By acting as a repository of the reputations of the parties, we show that brokers provide a coordinating mechanism to leverage the collective hold-up power of policyholders. This extends the degree of both implicit and explicit coverage. The role is reflected in the terms of broker engagement, specifically in the ownership by the broker of the renewal rights. Finally, we argue that brokers can be motivated to play this role when they receive commissions that are contingent on insurer profits. This last feature questions a recent, well-publicized attack on broker compensation by New York attorney general Eliot Spitzer. Classification: G22, G24, L14
Biophysical investigation of the ligand-induced assembling of the human type I interferon receptor
(2005)
Type I interferons (IFNs) elicit antiviral, antiproliferative and immunomodulatory responses through binding to a shared receptor consisting of the transmembrane proteins ifnar1 and ifnar2. Differential signaling by different interferons – in particular IFNalphas and IFNbeta – suggests different modes of receptor engagement. In this work, both single ligand-receptor interactions and the formation of the extracellular part of a signaling complex were investigated with respect to thermodynamics, kinetics, stoichiometry and structural organization. Initially, an expression and purification strategy for the extracellular domain of ifnar1 (ifnar1-EC) using Sf9 insect cells, yielding mg amounts of glycosylated protein, was established. Using reflectometric interference spectroscopy (RIfS), the interactions between IFNalpha2/beta and ifnar1-EC and ifnar2-EC were studied in order to understand the individual energetic contributions within the ternary complex. For IFNalpha2, a Kd of 5 µM for the interaction with ifnar1-EC was determined. Substantially tighter binding of IFNbeta to both ifnar2-EC and ifnar1-EC compared to IFNalpha2 was observed. For neither IFNalpha2 nor IFNbeta was stabilization of the complex with ifnar1-EC in the presence of soluble ifnar2-EC detectable. In addition, no direct interaction between ifnar2 and ifnar1 could be shown. Thus, stem-stem interactions between the extracellular domains of ifnar1 and ifnar2 do not seem to play a role in ternary complex formation. Furthermore, ligand-induced cross-talk between ifnar1-EC and ifnar2-EC tethered onto solid-supported, fluid lipid bilayers was investigated by RIfS and total internal reflection fluorescence spectroscopy. Very stable binding of IFNalpha2 at high receptor surface concentrations was observed, with an apparent kd approximately 200 times lower than for ifnar2-EC alone.
This apparent kd was strongly dependent on the surface concentration of the receptor components, suggesting kinetic rather than static stabilization, which was corroborated by competition experiments. These results indicate that signaling is activated by transient cross-talk between ifnar1 and ifnar2, which is engaged several orders of magnitude more efficiently by IFNbeta than by IFNalpha2. With respect to differential recognition of different IFNs, ifnar1-EC was dissected into sub-fragments containing different subsets of the four Ig-like domains. The appropriate folding and glycosylation of these proteins, which were also purified in mg amounts, were confirmed by SDS-PAGE, size exclusion chromatography and CD spectroscopy. Surprisingly, only the construct containing all three N-terminal Ig-like domains was active in terms of ligand binding, indicating that all three of these domains are required. Competitive binding of IFNalpha2 and IFNbeta to both this fragment and ifnar1-EC was demonstrated. Cellular binding assays with different fragments, however, highlight the key role of the membrane-proximal Ig-like domain for the formation of an in situ IFN-receptor complex and the ensuing signal activation. Even substitution with Ig-like domains from homologous cytokine receptors did not restore high-affinity ligand binding. Receptor assembly analysis on supported lipid bilayers revealed that appropriate orientation of the receptor is required, which is controlled by the membrane-proximal Ig-like domain. All results indicate that differential signaling is encoded in the efficiency of signaling complex formation, which is controlled by the binding affinity of the IFNs to the extracellular domains of ifnar1 and ifnar2.
Here I analyse 23 populations of D. galeata, a large-lake cladoceran, distributed mainly across the Palaearctic. I detected high levels of clonal diversity and population differentiation using variation at six microsatellite loci across Europe. Most populations were characterised by deviations from Hardy-Weinberg equilibrium and significant heterozygote deficiencies. The observed heterozygote deficiencies might be a consequence of simultaneous hatching of individuals produced during different times of the year or of the coexistence of ecologically and genetically differentiated subpopulations. Significant isolation by distance was only found over large geographic distances (> 700 km). This pattern is mainly due to the high genetic differentiation among neighbouring populations. My results suggest that historic populations of Daphnia were once interconnected by gene flow, but current populations are now largely isolated. Thus, local ecological conditions, which determine the level of biparental sexual reproduction and local adaptation, are the main factors mediating the population structure of D. galeata. The population genetic structure and diversity of D. galeata were investigated at a European scale using six microsatellite loci and 12S rDNA sequence data to infer and compare historical and contemporary patterns of gene flow. D. galeata has the potential for long-distance dispersal via ephippial resting eggs by wind and other dispersal vectors (waterfowl), but in general shows strong population differentiation even among neighbouring populations. A total of 427 individuals were analysed for microsatellite and 85 individuals for mitochondrial (mtDNA) sequence data from 12 populations across Europe. I detected genetic differentiation among populations across Europe and among locations within sampling regions for both genetic marker systems (average values: mtDNA FST = 0.574; microsatellite FST = 0.389), resulting in a lack of isolation by distance.
Furthermore, several microsatellite alleles and one haplotype were shared across populations. Partitioning of molecular variance was inconsistent between the two marker systems. Microsatellite variation was higher within than among populations, whereas the mtDNA data yielded the inverse pattern. Relatively high levels of nuclear DNA diversity were found across Europe. The amount of mitochondrial diversity was low in Spain, Hungary and Denmark. Gene flow analysis at a European scale did not reveal the typical pattern of population recolonization expected under postglacial colonization hypotheses. Populations that recently experienced an expansion or a population bottleneck were observed in both middle and northern Europe. Since these populations revealed high genetic diversity in both marker systems, I suggest that these areas represent postglacial zones of secondary contact among divergent lineages of D. galeata. In order to reveal the relationship between the population genetic structure of D. galeata and the relative contribution of environmental factors, I used a statistical framework based on canonical correspondence analysis. Although I detected no single ecological gradient mediating the genetic differentiation in either lake region, it is noteworthy that the same ecological factors were significantly correlated with intra- and interspecific genetic variation of D. galeata. For example, I found a relationship between genetic variation of D. galeata and differentiation with higher and lower trophic levels (phytoplankton, submerged macrophytes and fish) and a relationship between clonal variation and species diversity within Cladocera. Variance partitioning attributed only a minor contribution of each environmental category (abiotic, biomass/density and diversity) to the genetic diversity of D. galeata, while the largest proportion of variation was explained by shared components.
My work illustrates the important role of ecological differentiation and adaptation in structuring genetic variation, and it highlights the need for approaches incorporating a landscape context for population divergence.
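The FST values reported in the Daphnia study above measure how much of the total genetic variance lies between populations. As a toy single-locus illustration (not the multi-locus estimator actually used for the microsatellite data), FST at a biallelic locus can be computed from allele frequencies as (H_T - H_S)/H_T:

```python
import numpy as np

def fst(freqs):
    # freqs: per-population frequency of one allele at a biallelic locus
    p = np.asarray(freqs, dtype=float)
    hs = np.mean(2 * p * (1 - p))       # mean within-population heterozygosity
    pbar = p.mean()                     # pooled allele frequency
    ht = 2 * pbar * (1 - pbar)          # total expected heterozygosity
    return (ht - hs) / ht

print(round(fst([0.9, 0.1]), 2))   # strongly differentiated pair -> 0.64
print(round(fst([0.5, 0.5]), 2))   # identical populations -> 0.0
```

Values near the study's microsatellite average of 0.389 thus indicate allele frequencies that differ markedly even between neighbouring lakes.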
This thesis deals with the characterization of the ALTRO chip (ALICE TPC Readout), an integral and important component of the readout chain of the TPC (Time Projection Chamber) detector of ALICE (A Large Ion Collider Experiment). ALICE is an experiment at the LHC (Large Hadron Collider), still under construction at CERN, whose central aim is the study of heavy-ion collisions. These are of particular interest because they provide experimental access to the QGP (Quark Gluon Plasma), the only phase transition predicted by the Standard Model that can be reached under laboratory conditions. In 2004, measurements were carried out at a test beam at the CERN PS (Proton Synchrotron). The prototype was fully equipped with FECs, corresponding to 5400 channels, and filled with a different gas mixture (Ne/N2/CO2 90%/5%/5%). For optimal performance of the ALICE TPC, the digital processor in the ALTRO, consisting of four processing units, must be configured with suitable values. The data flow begins with the BCS1 (Baseline Correction and Subtraction 1) module, which removes systematic perturbations and the baseline. Since the ALTRO continuously samples the incoming signal, it automatically removes slow baseline drifts, which can arise, for example, from temperature changes. It is followed by the TCF (Tail Cancellation Filter), which removes the tail of the slowly falling signal generated by the PASA. To remove non-systematic baseline perturbations, the BCS2 (Baseline Correction and Subtraction 2) follows, based on a moving-average calculation that excludes detector signals above a double threshold. The final signal-processing unit is the ZSU (Zero Suppression Unit), which removes samples below a defined threshold. This thesis describes the procedure for extracting the TCF and BCS1 parameters from existing detector data.
During the analysis of cosmic-ray data, an additional structure in the signal tail was noticed for signals with high amplitude (>700 ADC). The monitor was therefore extended with a moving-average filter, whereupon this structure also appeared in smaller signals (>200 ADC). This signal is produced by ions drifting to the cathode or to the pads; so far, however, neither the spread of the electron avalanche at the anode nor the variability of the generated electron avalanches had been understood or measured. A successful measurement and characterization is described in this thesis. In the summer of 2005, the installation of the TPC gas chambers in ALICE begins; the electronics follow at the end of the year. In parallel, the TPC prototype has been recommissioned, and in spring a complete sector will be equipped with the detector electronics. With these two setups, the ALTRO characterization will be continued, refined, and completed.
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions are studied within the HSD and UrQMD transport models. The scaled variances of negative, positive, and all charged hadrons in Pb+Pb at 158 AGeV are analyzed in comparison to the data from the NA49 Collaboration. We find a dominant role of the fluctuations in the nucleon participant number for the final hadron multiplicity fluctuations. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the final multiplicity fluctuations as a function of collision centrality. The analysis reveals surprising effects in the recent NA49 data which indicate a rather strong mixing of the projectile and target hadron production sources even in peripheral collisions. PACS numbers: 25.75.-q,25.75.Gz,24.60.-k
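The scaled variance used in the abstract above is omega = Var(N)/<N>, which equals 1 for a Poisson multiplicity distribution; fluctuations in the participant number inflate it, as a toy two-stage sampling sketch shows (synthetic numbers only, not HSD/UrQMD output):

```python
import numpy as np

def scaled_variance(mult):
    # omega = Var(N) / <N>; equals 1 for a Poisson multiplicity distribution
    mult = np.asarray(mult, dtype=float)
    return mult.var() / mult.mean()

rng = np.random.default_rng(3)
events = 100_000
poisson_n = rng.poisson(lam=50, size=events)   # fixed mean multiplicity

# Participant-number fluctuations: let the mean itself fluctuate event by
# event, which adds Var(lam) to Var(N) and pushes omega above 1.
n_part = rng.poisson(lam=50, size=events)
mixed_n = rng.poisson(lam=n_part)

print(round(scaled_variance(poisson_n), 2))    # close to 1.0
print(round(scaled_variance(mixed_n), 2))      # close to 2.0
```

This is the mechanism behind the "dominant role of the fluctuations in the nucleon participant number": by the law of total variance, any event-by-event spread in the number of sources adds directly to the scaled variance of the final multiplicity.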
Mitochondrial NADH:ubiquinone oxidoreductase (complex I), the largest multiprotein enzyme of the respiratory chain, catalyses the transfer of two electrons from NADH to ubiquinone, coupled to the translocation of four protons across the membrane. In addition to the 14 strictly conserved central subunits, it contains a variable number of accessory subunits. At present, the best characterized enzyme is complex I from bovine heart, with a molecular mass of about 980 kDa and 32 accessory proteins. In this study, the subunit composition of mitochondrial complex I from the aerobic yeast Y. lipolytica was analysed by a combination of proteomic and genomic approaches. The sequences of 37 complex I subunits were identified. The sum of their individual molecular masses (about 930 kDa) was consistent with the native molecular weight of approximately 900 kDa for Y. lipolytica complex I obtained by BN-PAGE. A genomic analysis of the Y. lipolytica and other eukaryotic databases in search of homologues of complex I subunits revealed 31 proteins conserved among the examined species. A novel protein named "X" was found in purified Y. lipolytica complex I by MALDI-MS. This protein exhibits homology to the thiosulfate sulfurtransferase enzyme referred to as rhodanese. The finding of a rhodanese-like protein in isolated complex I of Y. lipolytica suggests a special regulatory mechanism of complex I activity through control of the status of its iron-sulfur clusters. The second part of this study was aimed at investigating the possible role of one of these extra subunits, the 39 kDa (NUEM) subunit, which is related to the SDR enzyme family. The members of this family function in different redox and isomerization reactions and contain a conserved NAD(P)H-binding site. It was proposed that the 39 kDa subunit may be involved in a biosynthetic pathway, but the role of this subunit in complex I is unknown.
In contrast to the situation in N. crassa, deletion of the gene encoding the 39 kDa subunit in Y. lipolytica led to the absence of fully assembled complex I. This result might indicate different pathways of complex I assembly in the two organisms. Several site-directed mutations were generated in the nucleotide binding motif. These had either no effect on enzyme activity and NADPH binding, or prevented complex I assembly. Mutations of arginine-65, which is located at the end of the second β-strand and is responsible for the selective interaction with the 2'-phosphate group of NADPH, retained complex I activity in mitochondrial membranes, but the affinity for the cofactor was markedly decreased. Purification of complex I from these mutants resulted in a decrease or loss of ubiquinone reductase activity. It is very likely that replacement of R65 not only led to a decrease in affinity for NADPH but also caused instability of the enzyme due to steric changes in the 39 kDa subunit. These data indicate that NADPH bound to the 39 kDa subunit (NUEM) is not essential for complex I activity, but is probably involved in complex I assembly in Y. lipolytica.
The thesis entitled "Investigations on the significance of nucleo-cytoplasmic transport for the biological function of cellular proteins" aimed to unravel molecular mechanisms in order to improve our understanding of the impact of nucleo-cytoplasmic transport on cellular functions. Within the scope of this work, it could be shown that regulated nucleo-cytoplasmic transport of a subfamily of homeobox transcription factors controls their intra- and intercellular transport, and thereby also influences their transcriptional activity. This study describes a novel regulatory mechanism, which could in general play an important role in the ordered differentiation of complex organisms. Besides cis-active transport signals, post-translational modifications can also influence the localization and biological activity of proteins in trans. In addition to the known impact of phosphorylation on the transport and activity of STAT1, experimental evidence was provided demonstrating that acetylation affects the interaction of STAT1 with NF-kB p65, and subsequently modulates the expression of apoptosis-inducing NF-kB target genes. The impact of nucleo-cytoplasmic transport on the regulation of apoptosis was underlined by showing that the evolutionary conservation of an NES within the anti-apoptotic protein survivin plays an essential role for its dual function in the inhibition of apoptosis and in ordered cell division. Since survivin is considered a bona fide cancer therapy target, these results strongly encourage future work to identify molecular decoys that specifically inhibit the nuclear export of survivin as novel therapeutics. In order to further dissect the regulation of nuclear transport and to efficiently identify transport inhibitors, cell-based assays are urgently required.
Therefore, the cellular assay systems developed in this work may not only serve to identify synthetic nuclear export and import inhibitors but may also be applied in systematic RNAi-screening approaches to identify novel components of the transport machinery. In addition, the translocation-based protease and protein-interaction biosensors can be applied in various biological systems, in particular to identify protein-protein interaction inhibitors of cancer-relevant proteins. In summary, this work not only underlines the general significance of nucleo-cytoplasmic transport for cell biology, but also demonstrates its potential for the development of novel therapies against diseases such as cancer and viral infections.
Plural semantics for natural language understanding : a computational proof-theoretic approach
(2005)
The semantics of natural language plurals poses a number of intricate problems – both from a formal and from a computational perspective. In this thesis I investigate problems of representing, disambiguating and reasoning with plurals from a computational perspective. The work defines a computationally suitable representation for important plural constructions, proposes a tractable resolution algorithm for semantic plural ambiguities, and integrates an automatic reasoning component for plurals. My solution combines insights from formal semantics, computational linguistics and automated theorem proving and is based on the following main ideas. Whereas many existing approaches to plural semantics work on a model-theoretic basis using higher-order representation languages, I propose a proof-theoretic approach to plural semantics based on a flat first-order semantic representation language, thus showing that a trade-off between expressive power and logical tractability can be found. The problem of automatic disambiguation of plurals is tackled by a deliberate decision to drastically reduce recourse to contextual knowledge for disambiguation and to rely instead on structurally available and thus computationally manageable information. A further central aspect of the solution lies in carefully drawing the borderline between real ambiguity and mere indeterminacy in the interpretation of plural noun phrases. As a practical result of my computational proof-theoretic approach to plural semantics, I can use my methods to perform automated reasoning with plurals by applying advanced first-order theorem provers and model generators available off the shelf. The results are prototypically implemented within the two logic-oriented natural language understanding applications DRoPs and Attempto. DRoPs provides an automatic plural disambiguation component for uncontrolled natural language, whereas Attempto works with a constructive disambiguation strategy for controlled natural language.
Both systems provide tools for the automated analysis of technical texts, allowing users, for example, to automatically detect inconsistencies, to perform question answering, to check whether a conjecture follows from a text, or to find equivalences and redundancies.
Molecular dynamics (MD) simulation serves as an important and widely used computational tool for studying molecular systems at atomic resolution. No experimental technique is capable of generating a complete description of the dynamical structure of biomolecules in their native solution environment. MD simulations allow us to study the dynamics and structure of the system and, moreover, help in the interpretation of experimental observations. MD simulation was first introduced and applied by Alder and Wainwright in 1957 \cite{Alder57}. However, the first MD simulation of a macromolecule of biological interest was published 28 years ago \cite{McCammon77}. The simulation was concerned with the bovine pancreatic trypsin inhibitor (BPTI) protein, which has served as the "hydrogen molecule" of protein dynamics because of its small size, high stability, and the relatively accurate X-ray structure available in 1977 \cite{Deisenhofer75}. This method is now widely used to tackle larger and more complex biological systems \cite{Groot01,Roux02} and has been facilitated by the development of fast and efficient methods for treating the long-range electrostatic interactions \cite{Essmann95}, the availability of faster parallel computers, and the continuous development of empirical molecular mechanical force fields \cite{Langley98,Cheatham99,Foloppe00}. It took several years until the first MD simulations of nucleic acid systems were performed \cite{Levitt83,Tidor83,Prabhakaran83,Nilsson86}. These investigations, which were also performed in vacuo, clearly demonstrated the importance of proper handling of electrostatics in a highly charged nucleic acid system, and different approaches, such as reduction of the phosphate charges and addition of hydrated counterions, have been applied to remedy this shortcoming and to maintain stable DNA structures.
A few years later, the first MD simulation of a DNA molecule including explicit water molecules and counterions was published \cite{Seibel85}. Various MD simulations of fully solvated RNA molecules with explicit inclusion of mobile ions indicated the importance of a proper treatment of the environment of highly charged nucleic acids \cite{Lee95,Zichi95,Auffinger97,Auffinger99}. Given the central roles of RNA in the life of cells, it is important to understand the mechanism by which RNA forms three-dimensional structures endowed with properties such as catalysis, ligand binding, and recognition of proteins. Furthermore, the increasing awareness of the essential role of RNA in controlling viral replication and in bacterial protein synthesis emphasizes the potential of ribonucleic acids as targets for developing new antibacterial and antiviral drugs. Driven by fruitful collaborations in the Sonderforschungsbereich "RNA-Ligand interactions", the model RNA systems in this study include various RNA tetraloops and HIV-1 TAR RNA. For the latter system, the binding sites of heteroaromatic compounds have been studied employing automated docking calculations \cite{Goodsell90}. The results show that it is possible to use this tool to dock small rigid ligands to an RNA molecule, while large and flexible molecules are clearly problematic. The main part of this work is focused on MD simulations of RNA tetraloops.
The present analysis investigates the employment effects of placement vouchers (Vermittlungsgutscheine) and personnel service agencies (Personal-Service-Agenturen) by means of a macroeconometric evaluation. In contrast to a microeconometric evaluation, which examines effects at the individual level, a macroeconometric analysis can make statements about the aggregate effects of the measures. Structural multiplier effects within the macroeconomic circular flow are, however, not taken into account. The econometric model used to analyse the two measures is based on a matching function that describes the search process of firms and of workers for an employment relationship. The empirical analyses are carried out separately for Eastern and Western Germany as well as for the strategy types of the Federal Employment Agency (Bundesagentur für Arbeit). They show that the issuing of placement vouchers has a significantly positive effect on the search process only in "predominantly metropolitan districts in Western Germany with high unemployment" (strategy type II). For the personnel service agencies, significantly positive effects are found for both Eastern and Western Germany. However, owing to the relatively small number of participants, a comparison with microeconometric analyses is still needed for a final assessment of the results for the personnel service agencies.
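A stylized version of an augmented matching function of the kind described (my notation and specification, not necessarily the authors' exact model) is:

```latex
% Log-linearized Cobb-Douglas matching function, augmented with the
% policy variable (illustrative specification, not the authors' own)
\ln M_{rt} = \alpha \ln U_{rt} + \beta \ln V_{rt} + \gamma \ln P_{rt} + \mu_r + \varepsilon_{rt}
% M_{rt}: matches (outflows from unemployment into employment),
% U_{rt}: unemployed, V_{rt}: vacancies, P_{rt}: participants in the
% measure (vouchers or PSA), r: district, t: time, \mu_r: district effect.
% A significantly positive \gamma indicates that the measure improves
% the search process.
```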
Serial correlation in dynamic panel data models with weakly exogenous regressors and fixed effects
(2005)
This paper presents and compares two estimation methodologies for dynamic panel data models in the presence of serially correlated errors and weakly exogenous regressors. The first is the first-difference GMM estimator proposed by Arellano and Bond (1991); the second is the transformed Maximum Likelihood Estimator proposed by Hsiao, Pesaran, and Tahmiscioglu (2002). Throughout, we consider the fixed-effects case with weakly exogenous regressors. The finite-sample properties of both estimation methodologies are analysed within a simulation experiment. Furthermore, we present an empirical example to assess the performance of both estimators with real data. JEL Classification: C23, J64
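As background, the model class and the moment conditions exploited by the first-difference GMM estimator can be sketched in standard textbook notation (not taken from the paper itself):

```latex
% Dynamic panel model with fixed effects (illustrative notation):
y_{it} = \gamma\, y_{i,t-1} + \beta\, x_{it} + \eta_i + \varepsilon_{it}
% First differencing removes the fixed effect \eta_i:
\Delta y_{it} = \gamma\, \Delta y_{i,t-1} + \beta\, \Delta x_{it} + \Delta\varepsilon_{it}
% With serially uncorrelated errors, lagged levels are valid instruments:
E[\, y_{i,t-s}\, \Delta\varepsilon_{it}\,] = 0 \quad \text{for } s \ge 2
% Serial correlation in \varepsilon_{it} invalidates the closest lags,
% which is precisely the complication the two estimators must handle.
```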
In this paper we evaluate the employment effects of job creation schemes on the participating individuals in Germany. Job creation schemes are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcome of participants and non-participants for nearly three years after programme start and can therefore analyse mid- and long-term effects. Third, we test the sensitivity of the results with respect to various decisions which have to be made during implementation of the matching estimator, e.g. choosing the matching algorithm or estimating the propensity score. Finally, we check whether a possible occurrence of 'unobserved heterogeneity' distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One notable exception is the group of long-term unemployed individuals, who benefit from participation. Hence, one policy implication is to target programmes more tightly at this problem group. JEL Classification: J68, H43, C13
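As a minimal illustration of the matching idea (a sketch with synthetic numbers, not the estimator actually implemented in the paper), one-to-one nearest-neighbour matching on the propensity score produces an estimate of the average treatment effect on the treated (ATT) as follows:

```python
# Hedged sketch of one-to-one nearest-neighbour propensity-score matching
# with replacement.  All data below are synthetic illustrations; the paper's
# actual implementation (score estimation, algorithm choice) is richer.

def att_nn_matching(treated, controls):
    """treated/controls: lists of (propensity_score, outcome) tuples.
    Returns the ATT estimate from matching with replacement."""
    diffs = []
    for p_t, y_t in treated:
        # nearest control by propensity score (matching with replacement)
        p_c, y_c = min(controls, key=lambda c: abs(c[0] - p_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

treated = [(0.8, 5.0), (0.6, 4.0)]
controls = [(0.79, 3.0), (0.61, 3.5), (0.2, 1.0)]
print(att_nn_matching(treated, controls))  # (5.0-3.0 + 4.0-3.5)/2 = 1.25
```

Sensitivity checks of the kind the paper describes amount to varying this matching step, e.g. matching on several neighbours or within a caliper, and comparing the resulting ATT estimates.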
Job creation schemes (JCS) have been one important programme of active labour market policy in Germany, aiming at the re-integration of hard-to-place unemployed individuals into regular employment. In contrast to earlier evaluation studies of these programmes based on survey data, we use administrative data containing more than 11,000 participants for our analysis and can hence take effect heterogeneity explicitly into account. We focus on effect heterogeneity caused by differences in the implementation of programmes (economic sector, types of support and implementing institutions). The results are rather discouraging and show that, in general, JCS are unable to improve the re-integration chances of participants into regular employment.
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory situation of the labour market has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Administration for the period between October 1999 and December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs the duration of unemployment in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
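A typical mixed proportional hazards specification in this timing-of-events tradition looks as follows (a sketch in standard notation, not necessarily the authors' exact model):

```latex
% Mixed proportional hazard for unemployment exit, with a treatment
% shift after programme entry at time t_p (illustrative specification)
\theta_u(t \mid x, v_u) = \lambda_u(t)\, \exp\!\big(x'\beta + \delta\, \mathbf{1}\{t > t_p\} + v_u\big)
% \lambda_u: baseline hazard, x: observed covariates, v_u: unobserved
% heterogeneity.  A second hazard for the timing of programme entry shares
% correlated unobservables with v_u; this joint ("bivariate") structure and
% the timing information identify the treatment effect \delta.  \delta < 0
% during participation corresponds to a locking-in effect.
```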
The effects of vocational training programmes on the duration of unemployment in Eastern Germany
(2005)
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory situation of the labour market has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Administration for the period between October 1999 and December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs the duration of unemployment in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying individual characteristics that are responsible for the effect heterogeneity and using this information for a better allocation of individuals therefore bears some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most of the groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL Classification: C13, J68, H43
Innovations are a key factor in ensuring the competitiveness of establishments as well as in enhancing the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. As a response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect, and that there are considerable differences with regard to size class and between West and East German establishments.
In recent methodological work the well-known ACD approach, originally introduced by Engle and Russell (1998), has been supplemented by an unobservable stochastic process which accompanies the underlying duration process via a discrete mixture of distributions. The Mixture ACD model, emanating from the specialized proposal of De Luca and Gallo (2004), has proved to be an adequate tool for the description of financial duration data. The use of one and the same family of ordinary distributions has been common practice until now. Our contribution advocates the use of a richly parameterized, comprehensive family of distributions, which allows different distributional idiosyncrasies to interact. JEL classification: C41, C22, C25, C51, G14.
We propose a new framework for modelling the time dependence of duration processes on financial markets. The pioneering ACD model introduced by Engle and Russell (1998) is extended in such a manner that the duration process is accompanied by an unobservable stochastic process. The Discrete Mixture ACD framework provides a general methodology which puts this idea into practice. It is established by introducing a discrete-valued latent regime variable, which can be justified in the light of recent market microstructure theories. The empirical application demonstrates its ability to capture specific characteristics of intraday transaction durations where alternative approaches fail. JEL classification: C41, C22, C25, C51, G14.
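For reference, the basic ACD(1,1) recursion of Engle and Russell (1998), which the Discrete Mixture framework extends with a latent regime variable (the form of the extension below is my sketch, not the paper's exact specification):

```latex
% Basic ACD(1,1): durations x_i with conditional expectation \psi_i
x_i = \psi_i\, \epsilon_i, \qquad \psi_i = \omega + \alpha\, x_{i-1} + \beta\, \psi_{i-1}
% Discrete mixture extension (sketch): a latent regime R_i \in \{1,\dots,K\}
% selects the innovation distribution, x_i = \psi_i\, \epsilon_i^{(R_i)},
% so the marginal duration density is a K-component mixture, capturing
% features that a single innovation distribution cannot.
```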
We show that hadron-induced atmospheric air showers from ultra-high energy cosmic rays are sensitive to QCD interactions at very small momentum fractions x, where nonlinear effects should become important. The leading partons from the projectile acquire large random transverse momenta as they pass through the strong field of the target nucleus, which breaks up their coherence. This leads to a steeper x_F-distribution of leading hadrons as compared to low-energy collisions, which in turn reduces the position of the shower maximum Xmax. We argue that high-energy hadronic interaction models should account for this effect, caused by the approach to the black-body limit, which may shift fits of the composition of the cosmic ray spectrum near the GZK cutoff towards lighter elements. We further show that present data on Xmax(E) exclude the possibility that the rapid ~ 1/x^0.3 growth of the saturation boundary (which is compatible with RHIC and HERA data) persists up to GZK cutoff energies. Measurements of pA collisions at LHC could further test the small-x regime and significantly advance our understanding of high-density QCD.
Sharing of substructures like subterms and subcontexts is a common method for the space-efficient representation of terms; it allows, for example, exponentially large terms to be represented in polynomial space, or terms with iterated substructures to be represented in a compact form. We present singleton tree grammars (STGs) as a general formalism for the treatment of sharing in terms. STGs are recursion-free context-free tree grammars without alternatives for nonterminals and with at most unary second-order nonterminals. They generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals in an STG generate the same term can be done in polynomial time. This implies that the equality test for terms with shared subterms and contexts, where composition of contexts is permitted, can be done in time polynomial in the size of the representation, which allows polynomial-time algorithms for terms exploiting sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
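A minimal sketch of the compression idea (my illustration, restricted to first-order nonterminals without contexts, so far short of the full STG formalism): every nonterminal has exactly one production, so the grammar denotes a single term, and properties of the huge generated term can be computed on the small grammar:

```python
# Hedged sketch: a grammar where each nonterminal has exactly one production
# compresses one fixed term.  Names and encoding here are illustrative.

def term_size(grammar, start):
    """Number of symbol occurrences in the generated term, computed in time
    linear in the grammar size, without ever expanding the term."""
    memo = {}
    def size(nt):
        if nt in memo:
            return memo[nt]
        head, args = grammar[nt]  # exactly one production per nonterminal
        memo[nt] = 1 + sum(size(a) if a in grammar else 1 for a in args)
        return memo[nt]
    return size(start)

# A0 -> a ; A{i+1} -> f(Ai, Ai): n+1 rules generate a term of 2^(n+1)-1 symbols.
n = 30
g = {"A0": ("a", [])}
for i in range(n):
    g[f"A{i+1}"] = ("f", [f"A{i}", f"A{i}"])

print(term_size(g, f"A{n}"))  # 2**31 - 1 = 2147483647
```

The polynomial equality test of the paper plays the analogous role for two different nonterminals: it decides whether they generate the same (possibly exponential) term while working only on the grammar.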
Plenary lecture, World Congress of Philosophy of Law and Social Philosophy, Granada, 24-29 May 2005. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch 'private' transnationale Akteure". Spanish version: "Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'". In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 547-562, and in Anales de la Cátedra Francisco Suárez 2005. See also Teubner, Gunther: Globalized Justice - Fragmented Justice. Human Rights Violations by "Private" Transnational Actors.
Charmonium production and suppression in heavy-ion collisions at relativistic energies is investigated within different models, i.e. the comover absorption model, the threshold suppression model, the statistical coalescence model and the HSD transport approach. In HSD the charmonium dissociation cross sections with mesons are described by a simple phase-space parametrization including an effective coupling strength |M_i|^2 for the charmonium states i = chi_c, J/psi, psi'. This allows the backward channels for charmonium regeneration via D-Dbar channels, which are missed in the comover absorption and threshold suppression models, to be included by detailed balance without introducing any new parameters. It is found that all approaches yield a reasonable description of J/psi suppression in S+U and Pb+Pb collisions at SPS energies. However, they differ significantly in the psi'/J/psi ratio versus centrality at SPS and especially at RHIC energies. These pronounced differences can be exploited in future measurements at RHIC to distinguish the hadronic rescattering scenarios from quark coalescence close to the QGP phase boundary.
The quinol:fumarate reductase (QFR) is the terminal reductase of anaerobic fumarate respiration, the most commonly occurring type of anaerobic respiration. This membrane protein complex couples the oxidation of menaquinol to menaquinone to the reduction of fumarate to succinate. The three-dimensional crystal structure of the QFR from Wolinella succinogenes has previously been solved at 2.2 Å resolution. Although the diheme-containing QFR from W. succinogenes is known to catalyze an electroneutral process, structural and functional characterization of parental and variant enzymes has revealed active site locations which indicate electrogenic catalysis across the membrane. A solution to this apparent contradiction was proposed with the so-called "E-pathway hypothesis". According to this hypothesis, transmembrane electron transfer via the heme groups is strictly coupled to a parallel, compensatory transfer of protons via a transiently established pathway, which is inactive in the oxidized state of the enzyme. Proposed constituents of the E-pathway are the side chain of Glu C180 and the ring C propionate of the distal heme. Previous experimental evidence strongly supports such a role for the former constituent. One aim of this thesis is to investigate, by a combination of specific 13C heme propionate labeling and FTIR difference spectroscopy, whether the ring C propionate of the distal heme is involved in redox-coupled proton transfer in the QFR from W. succinogenes. In addition to W. succinogenes, the primary structures of the QFR enzymes of two other ε-proteobacteria are known. These are Campylobacter jejuni and Helicobacter pylori, which, unlike W. succinogenes, are human pathogens. The QFR from H. pylori has previously been established to be a potential drug target, and the same is likely for the QFR from C. jejuni. The two pathogenic species colonize mucosal surfaces, causing several diseases.
The possibility of studying the QFRs from these bacteria and of developing more efficient drugs specifically active against this enzyme depends substantially on the availability of large amounts of high-quality protein. Furthermore, biochemical and structural studies of QFR enzymes from ε-proteobacterial species other than W. succinogenes can be valuable for illuminating new aspects of, or corroborating, the current understanding of this class of membrane proteins.
We study the collective flow of open charm mesons and charmonia in Au + Au collisions at sqrt(s_NN) = 200 GeV within the hadron-string-dynamics (HSD) transport approach. The detailed studies show that the coupling of D and Dbar mesons to the light hadrons leads to directed and elliptic flow comparable to that of the light mesons. This also holds approximately for J/psi mesons, since more than 50% of the final charmonia for central and midcentral collisions stem from D + Dbar induced reactions in the transport calculations. The transverse momentum spectra of the D, Dbar mesons and J/psi's are only very moderately changed by the (pre-)hadronic interactions in HSD, which can be traced back to the collective flow generated by elastic interactions with the light hadrons. PACS: 25.75.-q, 13.60.Le, 14.40.Lb, 14.65.Dw
The study of hidden charm production is an important part of the heavy-ion program. The standard approach to this problem [1] assumes that c-cbar bound states are created only at the initial stage of the reaction and are then partially destroyed at later stages due to interactions with the medium [2, 3, 4].
Nuclear collisions at intermediate, relativistic, and ultra-relativistic energies offer unique opportunities to study in detail manifold fragmentation and clustering phenomena in dense nuclear matter. At intermediate energies, the well-known processes of nuclear multifragmentation -- the disintegration of bulk nuclear matter into clusters of a wide range of sizes and masses -- allow the study of the critical point of the equation of state of nuclear matter. At very high energies, ultra-relativistic heavy-ion collisions offer a glimpse of the substructure of hadronic matter by crossing the phase boundary to the quark-gluon plasma. The hadronization of the quark-gluon plasma created in the fireball of an ultra-relativistic heavy-ion collision can again be considered a clustering process. We will present two models which allow the simulation of nuclear multifragmentation and of hadronization via the formation of clusters in an interacting gas of quarks, and will discuss the importance of clustering for our understanding of hadronization in ultra-relativistic heavy-ion collisions.