The title compound, C15H25N5, is an aminalization product between 2,6-diacetylpyridine and 1,3-diaminopropane. It crystallizes with two independent molecules in the asymmetric unit with different conformations. In the first molecule, the methyl groups are cis oriented with respect to the pyridine ring [N—C—C—C torsion angles = 72.5 (1) and 80.3 (1)°], while they are trans oriented in the second molecule [N—C—C—C torsion angles = 82.6 (1) and -90.8 (1)°]. Each of the two molecules forms centrosymmetric dimers held together by N—H⋯N hydrogen bonds, thus forming R₂²(16) rings. The two dimers are interlinked by additional N—H⋯N bonds into R₄⁴(14) rings, building chains along the a axis. These patterns influence the orientation (either equatorial or axial) of the N—H bonds.
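The cis/trans assignment above rests on N—C—C—C torsion angles. As an illustration of how such a signed dihedral angle is computed from four atomic positions (a generic sketch with made-up coordinates, not the deposited structure), the standard atan2 formulation can be coded directly:

```python
import math

def _sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def torsion_deg(p0, p1, p2, p3):
    """Signed torsion angle (degrees) defined by four atom positions p0-p1-p2-p3."""
    b1, b2, b3 = _sub(p1, p0), _sub(p2, p1), _sub(p3, p2)
    n1, n2 = _cross(b1, b2), _cross(b2, b3)   # normals of the two planes
    norm_b2 = math.sqrt(_dot(b2, b2))
    m1 = _cross(n1, tuple(c / norm_b2 for c in b2))
    # atan2 keeps the sign, distinguishing e.g. +90 from -90 degrees
    return math.degrees(math.atan2(_dot(m1, n2), _dot(n1, n2)))
```

For instance, four atoms arranged at a right angle about the central bond give ±90°, and a planar trans arrangement gives 180°.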
Neanderthal diets are reported to be based mainly on the consumption of large and medium-sized herbivores, while the exploitation of other food types, including plants, has also been demonstrated. Though some studies conclude that early Homo sapiens were active hunters, the analyses of faunal assemblages, stone tool technologies and stable isotopic studies indicate that they exploited broader dietary resources than Neanderthals. Whereas previous studies assume taxon-specific dietary specializations, we suggest here that the diet of both Neanderthals and early Homo sapiens was determined by ecological conditions. We analyzed molar wear patterns using occlusal fingerprint analysis derived from optical 3D topometry. Molar macrowear accumulates during the lifespan of an individual and thus reflects diet over long periods. Neanderthal and early Homo sapiens maxillary molar macrowear indicates strong eco-geographic dietary variation independent of taxonomic affinities. Based on comparisons with modern hunter-gatherer populations with known diets, Neanderthals as well as early Homo sapiens show high dietary variability in Mediterranean evergreen habitats but a more restricted diet in upper-latitude steppe/coniferous forest environments, suggesting a significant consumption of high-protein meat resources.
Carbon-13 and oxygen-18 abundances were measured in large mammal skeletal remains (tooth enamel, dentine and bone) from the Chiwondo Beds in Malawi, which were dated by biostratigraphic correlation to ca. 2.5 million years ago. The biologic isotopic patterns, in particular the difference in carbon-13 abundances between grazers and browsers and the difference in oxygen-18 abundances between semi-aquatic and terrestrial herbivores, were preserved in enamel, but not in dentine and bone. The isotopic results obtained from the skeletal remains from the Chiwondo Beds indicate a dominance of savannah habitats with some trees and shrubs. This environment was more arid than the contemporaneous Ndolanya Beds in Tanzania. The present study confirms that robust australopithecines were able to live in relatively arid environments and were not confined to more mesic environments elsewhere in southern Africa.
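Isotope abundances of this kind are conventionally reported in delta notation: the per-mil deviation of a sample's isotope ratio from a reference standard. A minimal sketch of that conversion (the VPDB 13C/12C ratio below is the commonly cited literature value, not a number from this study):

```python
# 13C/12C ratio of the VPDB reference standard (commonly cited literature value)
R_VPDB = 0.0111802

def delta13C_permil(r_sample, r_standard=R_VPDB):
    """Delta notation: per-mil deviation of the sample's 13C/12C ratio
    from the reference standard. Positive = enriched in 13C."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample with exactly the standard's ratio gives 0 per mil; C4 grazers plot at markedly higher delta-13C than C3 browsers, which is the contrast preserved in the enamel described above.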
Communication in the Web 2.0 context works mainly through images. The online video platform YouTube uses this form of visual communication and makes art forms of Western societies visible through its online videos. YouTube, as a cultural reservoir and visual archive of moving images, accommodates the whole range of visualising creative processes – from artistic finger exercises to fine art. A general characteristic of YouTube is the publishing of small everyday gestures of the 'big ones' (politicians, stars), such as small incidents and their clumsiness in everyday actions, e.g. Beyoncé's fall from the stage or Tom Cruise's demonic pro-Scientology interview. Through their viral distribution on different platforms, these incidents can never be covered up or made to disappear from public view. At the same time, big gestures and star images are replicated and sometimes reinterpreted by the 'small people', who present themselves in the poses and attitudes of the stars. Generally, a coexistence of different perspectives is possible: YouTube allows polysemic and polyvalent views of everyday and media phenomena. This article relies on YouTube research that started in 2006 at the New Media Department of the Goethe University of Frankfurt. The research has already identified representative forms and basic patterns – that is, categories for the clips appearing there. These kinds of clips, recurring throughout the observation period, shape the basic representation of art or artistic expression within moving images on this platform. Methodologically, the focus is on a 'media-adequate' investigation – one adequate to the specifics of the medium – of new visual structures and forms which can create, consciously or unconsciously, an art form.
After focusing on the media structures, it will be discussed whether any – and, if so, which – 'authentic' new forms were developed solely on YouTube, and whether these forms are innovative and can be characterised as avant-garde. This article takes a first small step in evaluating how to get from general communication through visuality in Web 2.0 – an often endless, chatty, cheesy visual noise – to the special quality of a consciously created aesthetic. From where do innovative aesthetic forms emerge, in relation to their media structures? Are they the products of 'media amateurs', or do we need new specifications and descriptions for the producers? The definition of a 'media amateur' describes technically interested private individuals who acquire and develop technology before commercial use of that technology is even recognisable. Just as artists develop their own techniques, according to Dieter Daniels, media amateurs are autodidacts who invent techniques rather than merely acquire knowledge about them (see, for example, the demoscene, machinima, brickfilm producers, and many areas of computer gaming in general). The media amateur intervenes directly in the production processes of the medium rather than simply using it. What is fascinating is the media amateur's process of self-education – not the result – and the direct impact on the internal structure and the control of the medium. Media amateurs open up a previously culturally unformed space of experience. This only partially applies to most of the YouTube clips in the realm of the visual arts; here it is most important to look at the visual content. This article discusses all these concepts and introduces new descriptions for the different forms of production: the technically oriented media master, the do-it-yourselfer, the tinkerer, the amateur handicraftsman and the inventor.
It outlines a basic research project on 'visual media culture' (a triangulation of research on media structure and iconography) of the online video platform presented here. It is a product of the analysis of clips focusing on the media structure, examining the creative handling of images and the deviations from and differences between pre-set media formats and stereotypes.
The Video Vortex Reader is the first collection of critical texts to deal with the rapidly emerging world of online video – from its explosive rise in 2005 with YouTube, to its future as a significant form of personal media. After years of talk about digital convergence and cross-media platforms, we now witness the merger of the Internet and television at a pace no one predicted. These contributions from scholars, artists and curators evolved from the first two Video Vortex conferences in Brussels and Amsterdam in 2007, which focused on responses to YouTube, and address key issues around the independent production and distribution of online video content. What does this new distribution platform mean for artists and activists? What are the alternatives?
A thick Middle and Late Pleistocene loess/palaeosol sequence is exposed at the gravel quarry Gaul, located east of Weilbach in the southern foreland of the Taunus Mountains. The loess/palaeosol sequence correlates with the last three glacial cycles. Seven samples were dated by luminescence methods using an elevated-temperature IRSL (post-IR IRSL) protocol for polymineral fine grains to determine the deposition age of the sediment and to set up a more reliable chronological framework for these deposits. The fading-corrected IR50 and the pIRIR225 age estimates show good agreement for almost all samples. The fading-corrected IRSL ages range from 23.7 ± 1.6 ka to >350 ka, indicating that the oldest loess was deposited during marine isotope stage (MIS) 10 or earlier and that the humic-rich horizon (Weilbacher Humuszone) developed during the late phase of MIS 7. Loess taken from above the fCc horizon most likely accumulated during MIS 6, indicating that the remains of the palaeosol do not belong to the last interglacial soil. The two uppermost samples indicate that the youngest loess accumulated during MIS 2 (Upper Würmian). Age estimates for the loess-palaeosol sequence of the gravel quarry Gaul/Weilbach could be obtained up to ~350 ka using the pIRIR225 signal from feldspar. Keywords: loess, luminescence dating, IRSL, fading, Weilbach, chronostratigraphy
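At its core, a luminescence age is the equivalent dose the grains have absorbed since deposition divided by the environmental dose rate. A minimal sketch of that arithmetic with simple quadrature error propagation (the values used below are illustrative, not the Weilbach data):

```python
def luminescence_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Burial age in ka: absorbed dose (Gy) divided by annual dose rate (Gy/ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

def age_uncertainty_ka(age_ka, rel_err_dose, rel_err_rate):
    """Combine independent relative uncertainties of dose and dose rate
    in quadrature and scale by the age."""
    return age_ka * (rel_err_dose ** 2 + rel_err_rate ** 2) ** 0.5
```

For example, an equivalent dose of 71.1 Gy at a dose rate of 3.0 Gy/ka corresponds to 23.7 ka; fading correction of the IR50 signal then adjusts the equivalent dose before this division.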
* Editorial – Jürgen Stark: The ECB's Chief Economist on inflation targeting, liquidity support and the sovereign debt crisis
* Research Finance – Yulia Plyakha, Raman Uppal, Grigory Vilkov: "Why Does the Equally Weighted Portfolio Outperform the Value and Price Weighted Portfolios?"
* Research Law – Manfred Wandt: "Legal Objectives of the Solvency II Framework Directive"
* Research E-Finance – Roman Beck, Timm Pintner, Martin Wolf: "Individual Mindfulness to Mitigate Information Overload within Financial Organizations"
* Policy Platform – Peter Gomber, Björn Arndt, Marco Lutat, Tim Uhle: "Regulation of High-Frequency Trading – A European Perspective"
* Interview – Norbert Walter: "The Risk of Compromising on Price Stability Must Not Be Taken"
Effort estimates are of utmost economic importance in software development projects. Estimates bridge the gap between managers and the invisible and almost artistic domain of developers, and they give managers a means to track and control projects. Consequently, numerous estimation approaches have been developed over the past decades, starting with Allan Albrecht's Function Point Analysis in the late 1970s. However, this work neither tries to develop just another estimation approach, nor does it focus on improving the accuracy of existing techniques. Instead of characterizing software development as a technological problem, this work understands software development as a sociological challenge. Consequently, this work focuses on the question of what happens when developers are confronted with estimates representing the major instrument of management control. Do estimates influence developers, or are developers unaffected? Is it irrational to expect that developers start to communicate and discuss estimates, conform to them, work strategically, or hide progress or delay? This study shows that it is inappropriate to assume independence of estimated and actual development effort. A theory is developed and tested that explains how developers and managers influence the relationship between estimated and actual development effort. The theory thereby elaborates the phenomenon of estimation fulfillment.
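Albrecht's Function Point Analysis, mentioned above, sizes a system by counting five component types and weighting each count by complexity. A sketch using the textbook IFPUG weights (standard literature values, not part of this thesis):

```python
# Textbook IFPUG/Albrecht complexity weights (illustrative literature values)
FP_WEIGHTS = {
    "external_input":     {"simple": 3, "average": 4,  "complex": 6},
    "external_output":    {"simple": 4, "average": 5,  "complex": 7},
    "external_inquiry":   {"simple": 3, "average": 4,  "complex": 6},
    "internal_file":      {"simple": 7, "average": 10, "complex": 15},
    "external_interface": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_function_points(counts):
    """counts: {component_type: {complexity: number_of_components}}.
    Returns the weighted sum over all counted components."""
    return sum(
        FP_WEIGHTS[ctype][cplx] * n
        for ctype, by_cplx in counts.items()
        for cplx, n in by_cplx.items()
    )
```

In full FPA, the unadjusted count is then scaled by a value adjustment factor derived from general system characteristics; the sketch stops at the unadjusted count.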
In the present work I focused on Mediterranean invertebrate species which, as a consequence of their way of life, disperse only poorly. Nevertheless, freshwater crabs of the genus Potamon and land snails of the genus Tudorella have managed to colonize large areas that are today separated by the Mediterranean Sea. For both groups it has been speculated that humans were involved in their dispersal. My aim was to analyse the biogeographic patterns of these two genera and to assess whether humans were indeed vectors of their dispersal. My analyses were conducted on three levels: taxonomy, genus and species.
The analysis of biomolecular macrocomplexes requires certain preconditions to be fulfilled. The preparation of biomolecular samples usually results in low yields; given this constraint, any method should provide sufficient sensitivity to cope with typical sample amounts. Biomolecules also often show reduced stability, i.e. a propensity for fragmentation upon ionisation, which requires reasonably soft methods of investigation. Furthermore, macromolecular complexes are usually held together by non-covalent interactions, placing additional demands on softness. This holds true for specific complexes such as protein–ligand or DNA double-strand binding. For the formation of non-covalent, specific complexes, the biomolecules' native structure and environment are a basic prerequisite and hence crucial. It is therefore desirable to keep the biomolecules in a native environment during analysis to preserve their structure and weak interactions. One suitable method for analysing biomolecules is mass spectrometry. Mass spectrometry is capable of high-throughput screening as well as determining masses with high accuracy and high sensitivity. Especially since the availability of MALDI-MS and ESI-MS, mass spectrometry has evolved into a versatile tool to investigate biomolecular complexes. Both MALDI- and ESI-MS are sufficiently soft methods to observe fragile biomolecules, yet both methods have their advantages and disadvantages. In recent years an alternative mass spectrometric approach has been developed in our group, termed LILBID-MS (Laser Induced Liquid Bead Ionisation/Desorption). In LILBID, microdroplets of aqueous solution containing buffer, salt and further additives along with the analyte molecules are injected into vacuum and irradiated one by one by mid-IR laser pulses. The absorption of the energy by the water leads to a rapid ablation of the preformed analyte ions.
LILBID is highly tolerant of added salts and detergents, allowing biomolecular complexes to be studied in a native environment. As LILBID-MS is soft enough to avoid fragmentation, specific non-covalent complexes can be analysed directly from their native environment by this method. In addition, dissociation can be induced on demand by increasing the laser intensity, which allows the study of subunit compositions. A further prominent property of LILBID is the possibility of studying hydrophobic membrane proteins, thanks to the tolerated use of detergents. During the course of this work, several instrumental improvements, mostly concerning ion focussing and beam steering, were introduced. Together with refinements of different modes of measurement, the result is a significantly improved signal-to-noise ratio as well as a further improvement in sensitivity. In addition, the accessible m/z range for a given flight time has been vastly increased. The new possibilities that LILBID now offers for the study of biomolecular complexes were investigated. The ability to detect specific binding in LILBID-MS was investigated by means of nucleic acids and their interaction with proteins. It could be shown that the stability of a 16 bp dsDNA corresponds to that in the solution phase with regard to the dependency on concentration and type of the salts used. In addition, a competitive experiment with the well-known transcription factor p50 was used to demonstrate the detection of sequence-specific binding with LILBID. The improved sensitivity allowed the detection of single-stranded DNA at nanomolar concentrations, and even the 2686 bp plasmid pUC19 could be easily detected without fragmentation at a concentration of only 80 nM. In the case of the transcription factor p63, the mass spectrometric analysis helped to identify a new model of activation and inhibition.
For the first time, known quaternary structures of membrane proteins such as the light-driven proton pump bacteriorhodopsin and the potassium channel KcsA could be detected with mass spectrometry. For the light-driven proton pump proteorhodopsin, the type and the concentration of the detergents used significantly influenced the stability of this protein as well as the preferred quaternary structure.
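The remark above that the accessible m/z range is tied to the flight time follows from the basic time-of-flight relation t = L·sqrt(m/(2zeU)): flight time grows with the square root of m/z. A sketch for an idealised linear TOF analyser (the geometry and voltage here are hypothetical, not the LILBID instrument's parameters):

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
DALTON   = 1.66053906660e-27  # unified atomic mass unit, kg

def tof_flight_time(mz, accel_voltage, drift_length_m, charge=1):
    """Flight time (s) of an ion of given m/z (Da per charge) accelerated
    through accel_voltage (V) and drifting over drift_length_m (m),
    for an idealised linear TOF analyser with no initial velocity spread."""
    mass_kg = mz * charge * DALTON
    # kinetic energy z*e*U = (1/2) m v^2  =>  v = sqrt(2 z e U / m)
    velocity = math.sqrt(2 * charge * E_CHARGE * accel_voltage / mass_kg)
    return drift_length_m / velocity
```

Because t scales as sqrt(m/z), quadrupling the m/z doubles the flight time, which is why extending the recorded flight window extends the accessible mass range.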
In nature, society and technology, many disordered systems exist that show emergent behaviour, where the interactions of numerous microscopic agents result in macroscopic, systemic properties that may not be present on the microscopic scale. Examples include phase transitions in magnetism and percolation, for example in porous unordered media, and in biological and social systems. Technological systems that are explicitly designed to function without central control instances, like their prime example, the Internet, or virtual networks, like the World Wide Web, which is defined by the hyperlinks from one web page to another, also exhibit emergent properties. The study of the common network characteristics found in previously seemingly unrelated fields of science, and the urge to explain their emergence, form a scientific field in its own right: the science of complex networks. In this field, methodologies from physics, leading to simplification and generalization by abstraction, help to shift the focus from the implementation details on the microscopic level to the macroscopic, coarse-grained system level. By describing the macroscopic properties that emerge from microscopic interactions, statistical physics, in particular stochastic and computational methods, has proven to be a valuable tool in the investigation of such systems. The mathematical framework for the description of networks is graph theory, in hindsight founded by Euler in 1736 and an active area of research since then. In recent years, applied graph theory has flourished through the advent of large-scale data sets made accessible by the use of computers. A paradigm for microscopic interactions among entities that locally optimize their behaviour to increase their own benefit is game theory, the mathematical framework of decision making. With first applications in economics, e.g. Neumann (1944), game theory is an established field of mathematics.
However, game-theoretic behaviour is also found in natural systems, e.g. populations of the bacterium Escherichia coli, as described by Kerr (2002). In the present work, a combination of graph theory and game theory is used to model the interactions of selfish agents that form networks. Following brief introductions to graph theory and game theory, the present work approaches the interplay of local self-organizing rules with network properties and topology from three perspectives. To investigate the dynamics of topology reshaping, a coupling of the so-called iterated prisoners' dilemma (IPD) to the network structure is proposed and studied in Chapter 4. Depending on a free parameter in the payoff matrix, the reorganization dynamics result in various emergent network structures. The resulting topologies exhibit an increase in performance, measured by a variance of closeness, of a factor of 1.2 to 1.9, depending on the chosen free parameter. Presented in Chapter 5, the second approach puts the focus on a static network structure and studies the cooperativity of the system, measured by the fixation probability. Heterogeneous strategies to distribute incentives for cooperation among the players are proposed. These strategies make it possible to enhance the cooperative behaviour while requiring fewer total investments. Putting the emphasis on communication networks in Chapters 6 and 7, the third approach investigates the use of routing metrics to increase the performance of data packet transport networks. Algorithms for the iterative determination of such metrics are demonstrated and investigated. The most successful of these algorithms, the hybrid metric, is able to increase the throughput capacity of a network by a factor of 7. During the investigation of the iterative weight assignments, a simple, static weight assignment, the so-called logKiKj metric, is found. In contrast to the algorithmic metrics, it incurs vanishing computational costs, yet it is able to increase the performance by a factor of 5.
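A static metric of the logKiKj kind can be sketched as an edge weight w_ij = log(k_i · k_j), over which ordinary shortest-path routing is run so that paths through high-degree hubs become expensive. A minimal sketch (the toy topology and code are illustrative, not taken from the thesis):

```python
import heapq
import math

def log_kikj_weights(adj):
    """Assign each edge (i, j) the static weight log(k_i * k_j),
    penalising edges attached to high-degree hubs."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {(u, v): math.log(deg[u] * deg[v]) for u in adj for v in adj[u]}

def shortest_path(adj, weights, src, dst):
    """Plain Dijkstra over the weighted graph; returns the node sequence."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Hypothetical topology: hub h serves c, d, e; from a to b there are two
# 2-hop routes, through the hub (a-h-b) or along the periphery (a-x-b).
adj = {
    "h": ["a", "b", "c", "d", "e"],
    "a": ["h", "x"], "b": ["h", "x"], "x": ["a", "b"],
    "c": ["h"], "d": ["h"], "e": ["h"],
}
w = log_kikj_weights(adj)
```

On this toy graph both routes from a to b have two hops, but the weighted route avoids the congestion-prone hub h, which is the intended effect of degree-penalising metrics.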
Suicide genes have been broadly used in gene therapy. They can serve as safety tools for the conditional elimination of infused cells or for directed tumor therapy. To date, the Herpes simplex virus thymidine kinase/ganciclovir (HSVtk/GCV) system is the most prominent and the most widely used suicide-gene/prodrug combination. Despite its promising performance, the system displays limitations, which include relatively slow killing kinetics and toxicity of the prodrug GCV. Consequently, several groups have either developed new suicide-gene/prodrug combinations or attempted to improve the established HSVtk/GCV suicide system. The present study also aimed at the optimization of the HSVtk/GCV system. To do so, a novel, codon-optimized point mutant (A168H) of HSVtk was developed. The novel mutant was named TK.007. It was extensively tested for its efficiency in two relevant settings: (1) control of severe graft-versus-host disease (GvHD) after adoptive immunotherapy with T lymphocytes, and (2) direct elimination of targeted tumor cells. TK.007 was compared to the broadly used wild-type, splice-corrected scHSVtk and to a codon-optimized HSVtk (coHSVtk) not bearing the above point mutation. (1) For experiments related to the adoptive immunotherapy approach, HSVtk variants were expressed from a γ-retroviral MP71 vector as a fusion construct with the selection and marker gene tCD34. Expression levels for TK.007 in transduced lymphoid and myeloid cell lines were significantly higher at initial transduction and over a 12-week period compared to the commonly used scHSVtk and coHSVtk, indicating reduced toxicity of TK.007. Killing kinetics of transduced cell lines (PM1 and K562) and primary human T cells were significantly faster for TK.007 in comparison to scHSVtk and coHSVtk in vitro. In vivo functionality of TK.007 was assessed in an allogeneic transplantation model. T cells derived from C57BL/6J.Ly5.1 donor mice were transduced with MP71 vectors expressing scHSVtk or TK.007.
Transduced cells were selected and transplanted into Balb/c Rag2-/- γ-/- immune-deficient recipient mice. Acute, severe GvHD occurred and was effectively abrogated in all mice transplanted with TK.007-transduced T cells, and in five out of six mice transplanted with scHSVtk-transduced cells. In a slightly modified quantitative allogeneic transplantation mouse model, significantly faster and more efficient in vivo killing was demonstrated for TK.007 as compared to scHSVtk, especially at low doses of GCV. (2) In order to assess TK.007 functionality in cells derived from solid tumors, HSVtk variants were expressed from lentiviral gene ontology (LeGO) vectors in combination with an eGFP/neo-opt selection cassette. Transduced and selected tumor cell lines derived from several tissues were eliminated at significantly lower GCV doses and to higher extents when transduced with TK.007 compared to scHSVtk. Moreover, a significantly stronger bystander effect of TK.007 was demonstrated. The superior in vitro efficiency of TK.007 was confirmed in an in vivo subcutaneous xenograft mouse model for glioblastoma in NOD/SCID mice. Mice transplanted with TK.007-transduced cells stayed tumor-free after treatment with different GCV doses. In contrast, mice of the scHSVtk group either demonstrated only transiently reduced tumor growth in the low-dose GCV group (10 mg/kg) compared to the control groups, or suffered relatively fast relapses after initial tumor shrinking in the standard-dose (50 mg/kg) GCV group. As a result, all mice in the scHSVtk group died from vigorous tumor growth. In summary, in two different applications for suicide gene therapy, the present study has demonstrated superior functional performance of the novel suicide gene TK.007 as compared to the broadly used wild-type scHSVtk. Differences became particularly pronounced at low doses of GCV.
It can be concluded that the new TK.007 gene represents a promising alternative to the commonly used scHSVtk for gene therapeutic applications.
The aim of this work is to develop an effective equation of state (EoS) for QCD, having the correct asymptotic degrees of freedom, to be used as input for dynamical studies of heavy ion collisions. We present an approach for modeling an EoS that respects the symmetries underlying QCD and includes the correct asymptotic degrees of freedom, i.e. quarks and gluons at high temperature and hadrons in the low-temperature limit. We achieve this by including quark degrees of freedom and the thermal contribution of the Polyakov loop in a hadronic chiral sigma-omega model. The hadronic part of the model is a nonlinear realization of a sigma-omega model. As the fundamental symmetries of QCD should also be present in its hadronic states, such an approach is widely used to describe hadron properties below and around Tc. The quarks are introduced as thermal quasi-particles coupling to the Polyakov loop, while the dynamics of the Polyakov loop are controlled by a potential term which is fitted to reproduce pure gauge lattice data. In this model, the sigma field serves as the order parameter for chiral restoration and the Polyakov loop as the order parameter for deconfinement. The hadrons are suppressed at high densities by excluded volume corrections. As a next step, we introduce our new HQ model equation of state in a microscopic+macroscopic hybrid approach to heavy ion collisions. This hybrid approach is based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The present implementation allows the comparison of pure microscopic transport calculations with hydrodynamic calculations using exactly the same initial conditions and freeze-out procedure. The effects of the change in the underlying dynamics – ideal fluid dynamics vs. non-equilibrium transport theory – are explored.
The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic and directed flow are shown to be insensitive to changes in the EoS, while the smaller mean free path in the hydrodynamic evolution is reflected directly in higher flow results, which are consistent with the experimental data. This finding indicates qualitatively that physical mechanisms like viscosity and other non-equilibrium effects play a substantially more important role than the EoS when bulk observables like flow are investigated. In the last chapter, results for the thermal production of MEMOs in nucleus-nucleus collisions from a combined micro+macro approach are presented. Multiplicities, rapidity and transverse momentum spectra are predicted for Pb+Pb interactions at different beam energies. The presented excitation functions for various MEMO multiplicities show a clear maximum at the upper FAIR energy regime, making this facility the ideal place to study the production of these exotic forms of multistrange objects.
In human neuroscientific research, there has been increasing interest in how the brain computes the value of an anticipated outcome. However, evidence is still lacking as to which valuation-related brain regions are modulated by the proximity to an expected goal and by the effort previously invested to reach that goal. The aim of this dissertation is to investigate the effects of goal proximity and invested effort on valuation-related regions in the human brain. We addressed this question in two fMRI studies by integrating a commonly used reward anticipation task into different versions of a Multitrial Reward Schedule Paradigm. In both experiments, subjects had to perform consecutive reward anticipation tasks under two different reward contingencies: in the delayed condition, participants received a monetary reward only after successful completion of multiple consecutive trials; in the immediate condition, money was earned after every successful trial. In the first study, we demonstrated that the rostral cingulate zone of the posterior medial frontal cortex signals action value contingent on goal proximity, thereby replicating neurophysiological findings about goal proximity signals in a homologous region in non-human primates. The findings of the second study imply that brain regions associated with general cognitive control processes are modulated by previous effort investment. Furthermore, we found the posterior lateral prefrontal cortex and the orbitofrontal cortex to be involved in coding for the effort-based context of a situation. In sum, these results extend the role of the human rostral cingulate zone in outcome evaluation to the continuous updating of action values over a course of action steps based on the proximity to the expected reward.
Furthermore, we tentatively suggest that previous effort investment invokes processes under the control of the executive system, and that the posterior lateral prefrontal cortex and the orbitofrontal cortex are involved in an effort-based context representation that can be used for outcome evaluation dependent on the characteristics of the current situation.
The focus of the discussion at the conference on September 23, 2004 was on the long-term impact on capital markets and pension systems. The speakers tried to identify the direction and magnitude of potential changes as well as the likelihood of an eventual asset meltdown. The conference's objective was to combine insights from academia with those from the financial community in order to provide a more comprehensive outlook on capital market developments. Conference Reader Nr. 2005/01
Conference reader for the conference jointly organized by Athanasios Orphanides (Federal Reserve Board, Washington D.C.), John C. Williams (Federal Reserve Bank of San Francisco), Heinz Hermann (Deutsche Bundesbank) and Volker Wieland (Center for Financial Studies and Goethe University Frankfurt), held in Eltville on 30–31 August 2003. Table of contents: * Volker Wieland (Director, Center for Financial Studies): Foreword * Hans Georg Fabritius (Member of the Executive Board of the Deutsche Bundesbank): Opening Remarks * Charles Goodhart (Norman Sosnow Professor of Banking and Finance at the London School of Economics and External Member of the Bank of England's Monetary Policy Committee): After Dinner Speech * Paper Abstracts * List of Participants
Over the last years there has been increasing interest in the involvement of the MVA pathway and of members of the small GTPases in the development and progression of AD. Earlier investigations mainly focused on the role of cholesterol in disease pathology. This research was supported by retrospective cohort studies initially showing beneficial effects of the long-term intake of cholesterol-lowering statins on the incidence of sporadic AD. In more recent literature, however, increasing attention has been paid to the isoprenoids FPP and GGPP, due to their crucial role in the post-translational modification of members of the superfamily of small GTPases. In AD, these proteins were shown, among other things, to be involved in mechanisms affecting APP processing, ROS generation and synaptic plasticity. A major factor impeding the clarification of the role of the MVA-pathway intermediates in these mechanisms was the lack of a sensitive and accurate method to determine FPP and GGPP levels in brain tissue. Hence, a state-of-the-art HPLC-FLD method for the quantification of the isoprenoids FPP and GGPP in brain tissue was successfully developed. After the introduction of a double clean-up step for complex brain matrix samples and the synthesis of an appropriate internal standard (DNP), the method was fully validated according to the latest FDA guideline for bioanalytical method validation. Furthermore, this method was transferred to a faster and more sensitive, state-of-the-art UHPLC-MS/MS application. Additionally, the method was shown to be applicable to mouse brain tissue, and data were generated from an in vivo mouse simvastatin study and from different mouse models. According to the aims of the thesis, the current work describes for the first time absolute isoprenoid concentrations in human frontal cortex white and grey matter. Furthermore, this is the first report of isoprenoid levels in the frontal cortex of human AD brains.
Further results were obtained from mouse brains originating from different mouse models, including the Thy-1 APP mouse model, which mimics AD pathology in terms of Aβ formation, and C57Bl/6 mice at different ages. AD prevalence correlates clearly with increasing age; therefore, three different age groups of mice were investigated. The study demonstrated constant isoprenoid and cholesterol levels in the first half of life, followed by a significant increase of FPP and GGPP in the second half (between 12 and 24 months of age). Cholesterol levels were also elevated in the aged group, but again the effect was less pronounced than for the isoprenoids. These results lead to the tentative conclusion that cerebral isoprenoid levels are elevated during aging and that this accumulation is amplified in AD, leading to accelerated neuronal dysfunction. In a further mouse study, using C57Bl/6 mice, in vivo drug intervention with the HMG-CoA reductase inhibitor simvastatin revealed strong inhibition of the rate-limiting step of the mevalonate/isoprenoid/cholesterol pathway and yielded the first report of significantly reduced FPP and GGPP levels in brain tissue of statin-treated mice. These results open for the first time the possibility of monitoring drug effects on cerebral isoprenoid levels and correlating these data with a modulation of APP processing, as shown by our group in previous studies. Interestingly, apart from the isoprenoid reduction following statin treatment, the reduction of brain cholesterol was also significant, albeit to a lesser extent. These findings support the notion that isoprenoid levels are more susceptible to statin treatment than cholesterol levels. Furthermore, this suggests a strong cellular dependence on FPP and GGPP, as the pool appears to be easily depleted, which could ultimately lead to cell death.
The first investigations of farnesylated Ras and geranylgeranylated Rac protein levels by means of immunoblotting substantiated the notion of a decreased abundance of prenylated small GTPases under statin influence as a consequence of reduced isoprenoid levels. These findings demonstrate for the first time a correlation of FPP and GGPP levels with the abundance of small GTPases. Together with the results from the AD study, they indicate that isoprenoid levels are not strictly subject to the same regulation as cholesterol levels. To further elucidate the physiological regulation in the cell, in vitro experiments with different inhibitors of the mevalonate/isoprenoid/cholesterol pathway were conducted. These confirmed the isoprenoid- and cholesterol-reducing effects of statin treatment observed in the aforementioned in vivo mouse study. Interestingly, inhibition of cholesterol synthesis downstream of the FPP branch point led to significantly elevated FPP levels. FTase inhibition led to significantly reduced FPP levels, whereas inhibition of GGTase I did not produce a significant change in either isoprenoid level.
In the work presented herein, the microscopic transport model BAMPS (Boltzmann Approach to Multi-Parton Scatterings) is applied to simulate the time evolution of the hot partonic medium created in Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC) and in Pb+Pb collisions at the recently started Large Hadron Collider (LHC). The study focuses in particular on the nuclear modification factor R_{AA}, which quantifies the suppression of particle yields at large transverse momentum with respect to a scaled proton+proton reference, and on the simultaneous description of the collective properties of the medium in terms of the elliptic flow v_{2} within a common framework.
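For reference, the two observables named in this abstract have conventional definitions in heavy-ion physics; the formulas below are the standard textbook ones, not reproduced from the thesis itself:

```latex
R_{AA}(p_T) \;=\; \frac{\mathrm{d}^2 N^{AA}/\mathrm{d}p_T\,\mathrm{d}y}
{\langle N_{\mathrm{coll}}\rangle \,\, \mathrm{d}^2 N^{pp}/\mathrm{d}p_T\,\mathrm{d}y},
\qquad
v_2 \;=\; \bigl\langle \cos\!\left[\,2\,(\varphi - \Psi_{\mathrm{RP}})\,\right] \bigr\rangle
```

Here \langle N_{\mathrm{coll}}\rangle is the average number of binary nucleon-nucleon collisions used to scale the proton+proton reference spectrum, \varphi is the azimuthal angle of an emitted particle, and \Psi_{\mathrm{RP}} is the reaction-plane angle; R_{AA} < 1 at high p_T signals suppression, while a nonzero v_2 signals collective elliptic flow.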