University Publications
The asymmetric unit of the title compound, [K(C5HF6N2)(H2O)2]n, is composed of two 3,5-bis(trifluoromethyl)pyrazolide anions, two potassium cations and four water molecules. The water molecules and 3,5-bis(trifluoromethyl)pyrazolide anions act as bridges between the potassium cations. Each potassium cation is surrounded by four O atoms [K—O = 2.705 (3)–2.767 (3) Å] and four F atoms [K—F = 2.870 (7)–3.215 (13) Å]. The water molecules and the 3,5-bis(trifluoromethyl)pyrazolide anions are connected by O—H⋯N hydrogen bonds, forming layers in the ab plane. All –CF3 groups show rotational disorder between two orientations each.
The two rings in the title compound, C11H12N2O4S, are roughly coplanar [dihedral angle = 6.77 (8)°]. Whereas the two outer methyl groups of the three methoxy groups are almost coplanar with the aromatic ring to which they are attached [C—C—O—C torsion angles = 8.5 (3) and -8.3 (3)°], the methyl group of the central methoxy substituent is not [C—C—C—C = -78.4 (3)°]. The crystal packing is stabilized by N—H⋯O hydrogen bonding.
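The torsion and dihedral angles quoted in these abstracts follow from four atomic positions via the standard Newman-projection convention; a minimal pure-Python sketch (the test geometry below is hypothetical, not taken from any of the reported structures):

```python
import math

def torsion_angle(p1, p2, p3, p4):
    """Signed torsion angle (degrees) defined by four points: the angle
    between the planes (p1, p2, p3) and (p2, p3, p4)."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    def dot(a, b): return sum(x*y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    b1, b2, b3 = sub(p2, p1), sub(p3, p2), sub(p4, p3)
    n1, n2 = cross(b1, b2), cross(b2, b3)          # plane normals
    m1 = cross(n1, [x / norm(b2) for x in b2])     # frame vector
    x, y = dot(n1, n2), dot(m1, n2)
    return math.degrees(math.atan2(y, x))

# Hypothetical planar zig-zag: anti-periplanar atoms give 180 degrees
print(round(torsion_angle((1, 1, 0), (1, 0, 0), (0, 0, 0), (-1, -1, 0)), 1))  # 180.0
# Syn-periplanar (cis) arrangement gives 0 degrees
print(round(torsion_angle((1, 1, 0), (1, 0, 0), (0, 0, 0), (-1, 1, 0)), 1))   # 0.0
```

The `atan2` form avoids the sign ambiguity of the plain arccos formula, which is why crystallographic software can report signed torsion angles such as -78.4°.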
In the title compound, C11H11N3O2, the dihedral angle between the central ethanone fragment and the 4-methoxyphenyl group is 2.9 (2)°, while that between the ethanone fragment and the triazole ring is 83.4 (2)°. The dihedral angle between the planes of the triazole and benzene rings is 81.7 (1)°. The 4-methoxyphenyl group is cis with respect to the ethanone fragment O atom across the exocyclic C—C bond. In the crystal, molecules are linked by C—H⋯N interactions into C(9) chains along [001].
The central structural element of the title compound, C24H29NO2, is a carbazole unit substituted with two acetyl residues and an octyl chain. The acetyl residues are nearly coplanar [dihedral angles = 5.37 (14) and 1.0 (3)°] with the carbazole unit, which is essentially planar (r.m.s. deviation for all non-H atoms = 0.025 Å). The octyl chain adopts an all-trans conformation. The crystal packing is stabilized by C—H⋯O hydrogen bonds.
17-Acetoxymulinic acid
(2010)
The title compound [systematic name: 5a-acetoxymethyl-3-isopropyl-8-methyl-1,2,3,3a,4,5,5a,6,7,10,10a,10b-dodecahydro-7,10-endo-epidioxycyclohepta[e]indene-3a-carboxylic acid], C22H32O6, (I), is closely related to methyl 5a-acetoxymethyl-3-isopropyl-8-methyl-1,2,3,3a,4,5,5a,6,7,10,10a,10b-dodecahydro-7,10-endo-epidioxycyclohepta[e]indene-3a-carboxylate, (II) [Brito et al. (2008). Acta Cryst. E64, o1209]. There are two molecules in the asymmetric unit, which are linked by two strong intermolecular O—H⋯O hydrogen bonds with graph-set motif R₂²(8). In both (I) and (II), the conformations of the three fused rings are almost identical: the five-membered ring has an envelope conformation, the six-membered ring a chair conformation and the seven-membered ring a boat conformation. The most obvious difference between the two compounds is the disorder of the acetoxymethyl fragments observed in both molecules of the asymmetric unit of (I); this disorder is not observed in (II). The crystal structure and the molecular conformation are stabilized by intermolecular C—H⋯O hydrogen bonds. The ability to form hydrogen bonds differs between the two compounds. The crystal studied was a non-merohedral twin, the ratio of the twin components being 0.28 (1):0.72 (1).
In the title compound, C4H7N3O·C2H6OS, creatinine [2-amino-1-methyl-1H-imidazol-4(5H)-one] exists in the amine form. The ring is planar (r.m.s. deviation for all non-H atoms = 0.017 Å). In the crystal, two creatinine molecules form centrosymmetric hydrogen-bonded dimers linked by pairs of N—H⋯N hydrogen bonds. In addition, creatinine is linked to a dimethyl sulfoxide molecule by an N—H⋯O interaction. The packing shows layers parallel to (120).
The title compound, [Li3(C4F9O)3(C3H6O)3], features an open Li/O cube with an Li ion missing at one corner. Three of the four bridging O atoms of the cube carry a fluorinated tert-butyl residue, whereas the fourth is part of an acetone molecule. Two of the Li atoms are further bonded to a non-bridging acetone molecule. Two of the lithium ion coordination geometries are very distorted LiO4 tetrahedra; the third could be described as a very distorted LiO3 T-shape with two distant F-atom neighbours. The Li⋯Li contact distances for the three-coordinate Li+ ion [2.608 (14) and 2.631 (12) Å] are much shorter than the contact distance [2.940 (13) Å] between the tetrahedrally coordinated species.
The title compound, [Tl4(C4H9O)4], featuring a (Tl—O)4 cube, crystallizes with a quarter-molecule and a half-molecule, each located on a special position (the half-molecule on a site of symmetry 23), in the asymmetric unit. The Tl—O bond distances range from 2.463 (12) to 2.506 (12) Å. All O—Tl—O bond angles are smaller than 90°, whereas the Tl—O—Tl angles are wider than 90°.
In the crystal of the title compound, C8H8ClN3S, molecules are connected by N—H⋯S hydrogen bonds into strips parallel to the (112) planes and running along [110]. One of the amino H atoms is not involved in a classical hydrogen bond. In addition, there is a rather short intermolecular Cl⋯S distance of 3.3814 (5) Å.
In the title compound, C15H14N2O4, (I), the molecule lies on a twofold rotation axis which passes through the central C atom of the aliphatic chain, giving one half-molecule per asymmetric unit. The structure is a monoclinic polymorph of the triclinic structure previously reported [Brito, Vallejos, Bolte & López-Rodríguez (2010). Acta Cryst. E66, o792], (II). The most obvious difference between them is the O—C—C—C torsion angle [58.2 (7)° in (I) and 173.4 (3)/70.2 (3)° in (II) for the GG and TG conformations, respectively]. Another important difference is observed in the dihedral angle between the planes of the aromatic rings [86.49 (7)° for (I) and 76.4 (3)° for (II)]. The crystal structure features a weak π–π interaction [centroid–centroid distance = 4.1397 (10) Å]; this kind of interaction is not evident in the triclinic polymorph.
The title compound, C15H25N5, is an aminalization product of 2,6-diacetylpyridine and 1,3-diaminopropane. It crystallizes with two independent molecules with different conformations in the asymmetric unit. In the first molecule, the methyl groups are cis-oriented with respect to the pyridine ring [N—C—C—C torsion angles = 72.5 (1) and 80.3 (1)°], while they are trans-oriented in the second molecule [N—C—C—C torsion angles = 82.6 (1) and -90.8 (1)°]. Each of the two molecules forms centrosymmetric dimers held together by N—H⋯N hydrogen bonds, thus forming R₂²(16) rings. The two dimers are interlinked by additional N—H⋯N bonds into R₄⁴(14) rings, building chains along the a axis. These patterns influence the orientation (either equatorial or axial) of the N—H bonds.
Neanderthal diets are reported to be based mainly on the consumption of large and medium sized herbivores, while the exploitation of other food types including plants has also been demonstrated. Though some studies conclude that early Homo sapiens were active hunters, the analyses of faunal assemblages, stone tool technologies and stable isotopic studies indicate that they exploited broader dietary resources than Neanderthals. Whereas previous studies assume taxon-specific dietary specializations, we suggest here that the diet of both Neanderthals and early Homo sapiens is determined by ecological conditions. We analyzed molar wear patterns using occlusal fingerprint analysis derived from optical 3D topometry. Molar macrowear accumulates during the lifespan of an individual and thus reflects diet over long periods. Neanderthal and early Homo sapiens maxillary molar macrowear indicates strong eco-geographic dietary variation independent of taxonomic affinities. Based on comparisons with modern hunter-gatherer populations with known diets, Neanderthals as well as early Homo sapiens show high dietary variability in Mediterranean evergreen habitats but a more restricted diet in upper latitude steppe/coniferous forest environments, suggesting a significant consumption of high protein meat resources.
Carbon-13 and oxygen-18 abundances were measured in large mammal skeletal remains (tooth enamel, dentine and bone) from the Chiwondo Beds in Malawi, which were dated by biostratigraphic correlation to ca. 2.5 million years ago. The biologic isotopic patterns, in particular the difference in carbon-13 abundances between grazers and browsers and the difference in oxygen-18 abundances between semi-aquatic and terrestrial herbivores, were preserved in enamel, but not in dentine and bone. The isotopic results obtained from the skeletal remains from the Chiwondo Beds indicate a dominance of savannah habitats with some trees and shrubs. This environment was more arid than the contemporaneous Ndolanya Beds in Tanzania. The present study confirms that robust australopithecines were able to live in relatively arid environments and were not confined to more mesic environments elsewhere in southern Africa.
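Carbon-13 abundances of this kind are conventionally reported in delta notation, the per-mil deviation of the sample's isotope ratio from a standard (VPDB for carbon). A minimal sketch; the ratios used here are illustrative values, not measurements from the study:

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil,
# where R is the 13C/12C isotope ratio.
R_VPDB = 0.0112372  # approximate 13C/12C ratio of the VPDB standard

def delta13C(r_sample, r_standard=R_VPDB):
    """Per-mil deviation of a sample's 13C/12C ratio from the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative: a sample whose ratio is 1.2% below the standard
# (roughly the range of C3 browser enamel) gives -12 per mil.
print(round(delta13C(R_VPDB * (1 - 0.012)), 1))  # -12.0
```

The grazer/browser separation mentioned above rests on this scale: C4 grasses and C3 browse differ by roughly 14 per mil in δ13C, a difference large enough to survive in tooth enamel.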
Communication in the Web 2.0 context works mainly through images. The online video platform YouTube uses this form of visual communication and makes art forms of Western societies visible through its online videos. YouTube, as a cultural reservoir and visual archive of moving images, accommodates the whole range of visualising creative processes – from artistic finger exercises to fine arts. A general characteristic of YouTube is the publishing of small everyday gestures of the ‘big ones’ (politicians, stars), like small incidents and their clumsiness in everyday actions, e.g. Beyoncé's fall from the stage or Tom Cruise's demonic pro-Scientology interview. Through their viral distribution on different platforms, these incidents will never be covered up or disappear from public view. At the same time, big gestures and star images are replicated and sometimes reinterpreted by the ‘small people’ who present themselves in the poses and attitudes of the stars. Generally, a coexistence of different perspectives is possible. YouTube allows polysemic and polyvalent views on everyday and media phenomena. This article relies on YouTube research that started in 2006 at the New Media Department of the Goethe University of Frankfurt. The results of the research have already presented representative forms and basic patterns, that is to say, categories for the clips appearing there. These kinds of clips, recurring in the observation period, have an impact on the basic representation of art or artistic expression within moving images on this platform. Methodologically, the focus leads to the investigation (which has to be adequate to the specifics of the medium, or ‘media adequate’) of new visual structures and forms which can create – consciously or unconsciously – an art form.
After focusing on the media structures, it will be discussed whether any and, if so, which ‘authentic’ new forms were developed solely on YouTube and whether these forms are innovative and can be characterised as avant-garde. This article first takes a small step in evaluating how to get from a general communication through means of visuality in Web 2.0 – an often endless, chatty, cheesy visual noise – to the special quality of a consciously created aesthetic. From where do innovative aesthetic forms emerge, related to their media structures? Are they the products of ‘media amateurs’, or do we have to find new specifications and descriptions for the producers? The definition of a ‘media amateur’ describes technically interested private individuals who acquire and develop technology before commercial use of the technology is even recognisable. Just as artists develop their own techniques, according to Dieter Daniels, media amateurs are autodidacts who invent techniques rather than just acquire knowledge about them (see for example the demo scene, machinima and brickfilm producers, as well as many areas of computer gaming in general). The media amateur directly intervenes in the production processes of the medium and does not simply use the medium. What is fascinating is the media amateur’s process of self-education – not the result – and the direct impact on the internal structure and the control of the medium. Media amateurs open a previously culturally unformed space of experience. This only partially applies to most of the YouTube clips in the realms of the visual arts; here it is most important to look at the visual content. This article discusses all these concepts and introduces new descriptions for the different forms of production: the technically oriented media master, the do-it-yourselfer, the tinkerer, the amateur handicraftsman and the inventor.
It outlines a basic research project on ‘visual media culture’ (a triangulation of research on media structure and iconography) of the presented online video platform. It is a product of the analysis of clips focusing on the media structure, analysing the creative handling of images and the deviations from and differences between pre-set media formats and stereotypes.
The Video Vortex Reader is the first collection of critical texts to deal with the rapidly emerging world of online video – from its explosive rise in 2005 with YouTube, to its future as a significant form of personal media. After years of talk about digital convergence and crossmedia platforms, we now witness the merger of the Internet and television at a pace no one predicted. These contributions from scholars, artists and curators evolved from the first two Video Vortex conferences in Brussels and Amsterdam in 2007, which focused on responses to YouTube, and address key issues around independent production and distribution of online video content. What does this new distribution platform mean for artists and activists? What are the alternatives?
A thick Middle and Late Pleistocene loess/palaeosol sequence is exposed at the gravel quarry Gaul, located east of Weilbach in the southern foreland of the Taunus Mountains. The loess/palaeosol sequence correlates with the last three glacial cycles. Seven samples were dated by luminescence methods using an elevated-temperature IRSL (post-IR IRSL) protocol for polymineral fine grains to determine the deposition age of the sediment and to establish a more reliable chronological framework for these deposits. The fading-corrected IR50 and the pIRIR225 age estimates show good agreement for almost all samples. The fading-corrected IRSL ages range from 23.7 ± 1.6 ka to >350 ka, indicating that the oldest loess was deposited during marine isotope stage (MIS) 10 or earlier and that the humic-rich horizon (Weilbacher Humuszone) developed during the late phase of MIS 7. Loess taken from above the fCc horizon most likely accumulated during MIS 6, indicating that the remains of the palaeosol do not belong to the last interglacial soil. The two uppermost samples indicate that the youngest loess accumulated during MIS 2 (Upper Würmian). Age estimates for the loess/palaeosol sequence of the gravel quarry Gaul/Weilbach could be obtained up to ~350 ka using the pIRIR225 signal from feldspar.
Keywords: loess, luminescence dating, IRSL, fading, Weilbach, chronostratigraphy
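A luminescence age is the equivalent dose divided by the environmental dose rate, and the IR50 ages mentioned above additionally require a fading correction; a sketch using the Huntley & Lamothe (2001) fixed-point correction. All numbers here (dose, dose rate, g-value, tc) are illustrative assumptions, not values from the paper:

```python
import math

def luminescence_age(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Apparent age (ka) = equivalent dose (Gy) / dose rate (Gy/ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

def fading_corrected_age(apparent_age_ka, g_percent_per_decade, tc_ka=2.7e-6):
    """Huntley & Lamothe (2001) anomalous-fading correction, solved by
    fixed-point iteration of  T_f / T = 1 - kappa * (ln(T / tc) - 1),
    with kappa = g / (100 * ln 10) and tc the measurement delay
    (here assumed ~1 day, expressed in ka)."""
    kappa = g_percent_per_decade / 100.0 / math.log(10)
    t = apparent_age_ka
    for _ in range(100):
        t = apparent_age_ka / (1.0 - kappa * (math.log(t / tc_ka) - 1.0))
    return t

# Illustrative: De = 75 Gy, dose rate = 3.3 Gy/ka, g = 3 %/decade
apparent = luminescence_age(75.0, 3.3)   # ~22.7 ka apparent age
print(round(fading_corrected_age(apparent, 3.0), 1))
```

A g-value of a few percent per decade inflates Late Pleistocene ages by roughly 20–30%, which is why uncorrected IR50 ages systematically underestimate the pIRIR225 results.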
Editorial: Jürgen Stark: The ECB's Chief Economist about inflation targeting, liquidity support and the sovereign debt crisis
Research Finance: Yulia Plyakha, Raman Uppal, Grigory Vilkov: "Why Does the Equally Weighted Portfolio Outperform the Value and Price Weighted Portfolios?"
Research Law: Manfred Wandt: "Legal Objectives of the Solvency II Framework Directive"
Research E-Finance: Roman Beck, Timm Pintner, Martin Wolf: "Individual Mindfulness to Mitigate Information Overload within Financial Organizations"
Policy Platform: Peter Gomber, Björn Arndt, Marco Lutat, Tim Uhle: "Regulation of High-Frequency Trading – A European Perspective"
Interview: Norbert Walter: "The Risk of Compromising on Price Stability Must Not Be Taken"
Effort estimates are of utmost economic importance in software development projects. Estimates bridge the gap between managers and the invisible and almost artistic domain of developers; they give managers a means to track and control projects. Consequently, numerous estimation approaches have been developed over the past decades, starting with Allan Albrecht's Function Point Analysis in the late 1970s. However, this work neither tries to develop just another estimation approach, nor does it focus on improving the accuracy of existing techniques. Instead of characterizing software development as a technological problem, this work understands software development as a sociological challenge. Consequently, it focuses on the question of what happens when developers are confronted with estimates that represent the major instrument of management control. Do estimates influence developers, or do they leave them unaffected? Is it irrational to expect that developers start to communicate and discuss estimates, conform to them, work strategically, or hide progress or delay? This study shows that it is inappropriate to assume independence between estimated and actual development effort. A theory is developed and tested that explains how developers and managers influence the relationship between estimated and actual development effort. The theory thereby elaborates the phenomenon of estimation fulfillment.
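Albrecht's Function Point Analysis, mentioned above, sizes a system by counting five function types at three complexity levels and summing standard weights. A minimal sketch of the unadjusted count; the weights are the standard IFPUG/Albrecht values, while the example system is hypothetical:

```python
# Standard IFPUG/Albrecht weights per (low, average, high) complexity
WEIGHTS = {
    "external_input":     (3, 4, 6),
    "external_output":    (4, 5, 7),
    "external_inquiry":   (3, 4, 6),
    "internal_file":      (7, 10, 15),
    "external_interface": (5, 7, 10),
}

def unadjusted_function_points(counts):
    """counts: {function_type: (n_low, n_avg, n_high)} -> unadjusted FP."""
    return sum(
        n * w
        for ftype, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[ftype])
    )

# Hypothetical small system: 5 simple inputs, 3 average outputs, 2 simple files
demo = {
    "external_input":  (5, 0, 0),
    "external_output": (0, 3, 0),
    "internal_file":   (2, 0, 0),
}
print(unadjusted_function_points(demo))  # 5*3 + 3*5 + 2*7 = 44
```

The full method then multiplies this count by a value adjustment factor derived from 14 general system characteristics; the unadjusted count above is the core of the size measure that effort estimates are built on.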
In the present work I focused on Mediterranean invertebrate species which, as a consequence of their way of life, are poor dispersers. Nevertheless, freshwater crabs of the genus Potamon and land snails of the genus Tudorella have managed to colonize large areas that are today separated by the Mediterranean Sea. For both groups it has been speculated that humans were involved in their dispersal. My aim was to analyse the biogeographic patterns of these two genera and to assess whether humans were indeed vectors of their dispersal. My analyses were carried out on three levels: taxonomy, genus and species.
The analysis of biomolecular macrocomplexes requires certain preconditions to be fulfilled. The preparation of biomolecular samples usually results in low yields; because of this constraint, any method should provide sufficient sensitivity to cope with typical sample amounts. Biomolecules also often show reduced stability, i.e. a propensity for fragmentation upon ionisation, which requires reasonably soft methods for the investigation. Furthermore, macromolecular complexes are usually held together by non-covalent interactions, placing additional demands on the softness of the method. This holds true for specific complexes such as protein–ligand complexes or DNA double-strand binding. For the formation of non-covalent, specific complexes, the biomolecules' native structure and environment are a basic prerequisite and hence crucial. It is therefore desirable to keep the biomolecules in a native environment during analysis to preserve their structure and weak interactions. One suitable method for analysing biomolecules is mass spectrometry, which is capable of high-throughput screening as well as determining masses with high accuracy and high sensitivity. Especially since the advent of MALDI-MS and ESI-MS, mass spectrometry has evolved into a versatile tool for investigating biomolecular complexes. Both MALDI- and ESI-MS are sufficiently soft methods to observe fragile biomolecules, yet both have their advantages and disadvantages. In recent years an alternative mass spectrometric approach, termed LILBID-MS (Laser Induced Liquid Bead Ionisation/Desorption), has been developed in our group. In LILBID, microdroplets of aqueous solution containing buffer, salt and further additives in addition to the analyte molecules are injected into vacuum and irradiated one by one by mid-IR laser pulses. The absorption of the energy by the water leads to a rapid ablation of the preformed analyte ions.
LILBID is highly tolerant of added salts and detergents, allowing biomolecular complexes to be studied in a native environment. As LILBID-MS is soft enough to avoid fragmentation, specific non-covalent complexes can be analysed directly from their native environment by this method. In addition, dissociation can be induced on demand by increasing the laser intensity, which allows the study of subunit compositions. A further prominent property of LILBID is the possibility to study hydrophobic membrane proteins, owing to the tolerated use of detergents. During the course of this work, several instrumental improvements, mostly concerning ion focussing and beam steering, were introduced. Together with refinements of different modes of measurement, the result is a significantly improved signal-to-noise ratio as well as a further improvement in sensitivity. In addition, the accessible m/z range for a given flight time has been vastly increased. The new possibilities that LILBID now offers for the study of biomolecular complexes were investigated. The ability to detect specific binding in LILBID-MS was investigated by means of nucleic acids and their interaction with proteins. It could be shown that the stability of a 16 bp dsDNA corresponds to that in the solution phase regarding the dependency on concentration and type of the salts used. In addition, a competitive experiment with the well-known transcription factor p50 was used to demonstrate the detection of sequence-specific binding with LILBID. The improved sensitivity allowed the detection of single-stranded DNA at nanomolar concentrations, and even the 2686 bp plasmid pUC19 could be easily detected without fragmentation at a concentration of only 80 nM. In the case of the transcription factor p63, the mass spectrometric analysis helped to identify a new model of activation and inhibition.
For the first time, known quaternary structures of membrane proteins such as the light-driven proton pump bacteriorhodopsin and the potassium channel KcsA could be detected by mass spectrometry. For the light-driven proton pump proteorhodopsin, the type and concentration of the detergents used significantly influenced the stability of this protein as well as the preferred quaternary structure.
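The link between flight time and m/z mentioned above can be sketched for an idealized linear time-of-flight analyser (single-stage acceleration, field-free drift; drift length and voltage below are illustrative, not the instrument's actual parameters):

```python
import math

E = 1.602176634e-19       # elementary charge (C)
U_DA = 1.66053906660e-27  # unified atomic mass unit (kg)

def mz_from_flight_time(t_s, drift_length_m, accel_voltage_v):
    """Ideal linear TOF: from  t = L * sqrt(m / (2 z e U))  it follows that
    m/z = 2 e U t^2 / L^2; returned in Da per elementary charge."""
    m_over_z_kg = 2.0 * E * accel_voltage_v * t_s**2 / drift_length_m**2
    return m_over_z_kg / U_DA

def flight_time(mz_da, drift_length_m, accel_voltage_v):
    """Inverse relation: flight time (s) for a given m/z in Da."""
    m_kg = mz_da * U_DA
    return drift_length_m * math.sqrt(m_kg / (2.0 * E * accel_voltage_v))

# Illustrative: 1 m drift tube at 10 kV; round-trip consistency check
t = flight_time(10000.0, 1.0, 10000.0)  # a singly charged 10 kDa ion
print(round(mz_from_flight_time(t, 1.0, 10000.0)))  # 10000
```

Since t grows only with the square root of m/z, extending the recorded flight-time window quadratically extends the accessible mass range, which is why widening the m/z range for a given flight time matters for large complexes.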
In nature, society and technology, many disordered systems exist that show emergent behaviour: the interactions of numerous microscopic agents result in macroscopic, systemic properties that may not be present on the microscopic scale. Examples include phase transitions in magnetism and percolation (for example in porous unordered media), as well as biological and social systems. Technological systems that are explicitly designed to function without central control instances – their prime example being the Internet – and virtual networks like the World Wide Web, which is defined by the hyperlinks from one web page to another, also exhibit emergent properties. The study of the common network characteristics found in previously seemingly unrelated fields of science, and the urge to explain their emergence, form a scientific field in its own right: the science of complex networks. In this field, methodologies from physics, leading to simplification and generalization by abstraction, help to shift the focus from the implementation's details on the microscopic level to the macroscopic, coarse-grained system level. By describing the macroscopic properties that emerge from microscopic interactions, statistical physics, in particular stochastic and computational methods, has proven to be a valuable tool in the investigation of such systems. The mathematical framework for the description of networks is graph theory, in hindsight founded by Euler in 1736 and an active area of research since then. In recent years, applied graph theory has flourished through the advent of large-scale data sets, made accessible by the use of computers. A paradigm for microscopic interactions among entities that locally optimize their behaviour to increase their own benefit is game theory, the mathematical framework of decision making. With first applications in economics, e.g. von Neumann (1944), game theory is an established field of mathematics.
However, game-theoretic behaviour is also found in natural systems, e.g. populations of the bacterium Escherichia coli, as described by Kerr (2002). In the present work, a combination of graph theory and game theory is used to model the interactions of selfish agents that form networks. Following brief introductions to graph theory and game theory, the present work approaches the interplay of local self-organizing rules with network properties and topology from three perspectives. To investigate the dynamics of topology reshaping, a coupling of the so-called iterated prisoners' dilemma (IPD) to the network structure is proposed and studied in Chapter 4. Depending on a free parameter in the payoff matrix, the reorganization dynamics result in various emergent network structures. The resulting topologies exhibit an increase in performance, measured by the variance of closeness, of a factor of 1.2 to 1.9, depending on the chosen free parameter. Presented in Chapter 5, the second approach puts the focus on a static network structure and studies the cooperativity of the system, measured by the fixation probability. Heterogeneous strategies to distribute incentives for cooperation among the players are proposed. These strategies make it possible to enhance cooperative behaviour while requiring fewer total investments. Putting the emphasis on communication networks in Chapters 6 and 7, the third approach investigates the use of routing metrics to increase the performance of data packet transport networks. Algorithms for the iterative determination of such metrics are demonstrated and investigated. The most successful of these algorithms, the hybrid metric, is able to increase the throughput capacity of a network by a factor of 7. During the investigation of the iterative weight assignments, a simple, static weight assignment, the so-called logKiKj metric, is found.
In contrast to the algorithmic metrics, it results in vanishing computational costs, yet it is able to increase the performance by a factor of 5.
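A static degree-based metric of the logKiKj kind can be sketched in a few lines: weight each edge by the logarithm of the product of its endpoint degrees, so that shortest-path routing steers packets away from high-degree hubs. The sketch below (pure-Python Dijkstra on a hypothetical toy graph) illustrates the idea, not the thesis's exact implementation:

```python
import heapq, math
from collections import defaultdict

def degree_weighted_paths(edges, source):
    """Dijkstra shortest paths with edge weight w_ij = log(k_i * k_j),
    where k is the node degree -- routing that avoids hubs."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    deg = {n: len(nbrs) for n, nbrs in adj.items()}
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in adj[u]:
            w = math.log(deg[u] * deg[v])  # the degree-product metric
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Toy graph: hub 0 connects to all of 1..4, which also form a ring
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]
d = degree_weighted_paths(edges, 1)
# The direct ring edge 1-2 (two degree-3 nodes) is cheaper than even the
# single edge 1-0 to the degree-4 hub, so traffic is pushed off the hub.
print(d[2] < d[0])  # True
```

With hop-count routing every pair would route through the hub; the logarithmic degree weight redistributes load onto peripheral paths, which is the mechanism behind the throughput gains quoted above.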
Suicide genes have been broadly used in gene therapy. They can serve as safety tools for the conditional elimination of infused cells or for directed tumor therapy. To date, the Herpes simplex virus thymidine kinase/ganciclovir (HSVtk/GCV) system is the most prominent and most widely used suicide-gene/prodrug combination. Despite its promising performance, the system has limitations, which include relatively slow killing kinetics and the toxicity of the prodrug GCV. Consequently, several groups have either developed new suicide-gene/prodrug combinations or attempted to improve the established HSVtk/GCV suicide system. The present study also aimed at optimization of the HSVtk/GCV system. To this end, a novel, codon-optimized point mutant (A168H) of HSVtk was developed and named TK.007. It was extensively tested for its efficiency in two relevant settings: (1) control of severe graft-versus-host disease (GvHD) after adoptive immunotherapy with T lymphocytes, and (2) direct elimination of targeted tumor cells. TK.007 was compared to the broadly used wild-type, splice-corrected scHSVtk and to a codon-optimized HSVtk (coHSVtk) not bearing the above point mutation. (1) For experiments related to the adoptive immunotherapy approach, HSVtk variants were expressed from a γ-retroviral MP71 vector as a fusion construct with the selection and marker gene tCD34. Expression levels for TK.007 in transduced lymphoid and myeloid cell lines were significantly higher at initial transduction and over a 12-week period compared to the commonly used scHSVtk and coHSVtk, indicating reduced toxicity of TK.007. Killing kinetics of transduced cell lines (PM1 and K562) and primary human T cells were significantly faster for TK.007 in comparison to scHSVtk and coHSVtk in vitro. The in vivo functionality of TK.007 was assessed in an allogeneic transplantation model. T cells derived from C57BL/6J.Ly5.1 donor mice were transduced with MP71 vectors expressing scHSVtk or TK.007.
Transduced cells were selected and transplanted into Balb/c Rag2-/- γ-/- immune-deficient recipient mice. Acute, severe GvHD occurred and was effectively abrogated in all mice transplanted with TK.007-transduced T cells, and in five out of six mice transplanted with scHSVtk-transduced cells. In a slightly modified quantitative allogeneic transplantation mouse model, significantly faster and more efficient in vivo killing was demonstrated for TK.007 as compared to scHSVtk, especially at low doses of GCV. (2) In order to assess TK.007 functionality in cells derived from solid tumors, HSVtk variants were expressed from lentiviral gene ontology (LeGO) vectors in combination with an eGFP/neo-opt selection cassette. Transduced and selected tumor cell lines derived from several tissues were eliminated at significantly lower GCV doses and to a higher extent when transduced with TK.007 compared to scHSVtk. Moreover, a significantly stronger bystander effect of TK.007 was demonstrated. The superior in vitro efficiency of TK.007 was confirmed in an in vivo subcutaneous xenograft mouse model for glioblastoma in NOD/SCID mice. Mice transplanted with TK.007-transduced cells stayed tumor-free after treatment with different GCV doses. In contrast, mice of the scHSVtk group either showed only transiently reduced tumor growth in the low-dose GCV group (10 mg/kg) compared to the control groups or suffered relatively fast relapses after initial tumor shrinkage in the standard-dose (50 mg/kg) GCV group. As a result, all mice in the scHSVtk group died from vigorous tumor growth. In summary, in two different applications of suicide gene therapy, the present study has demonstrated superior functional performance of the novel suicide gene TK.007 as compared to the broadly used wild-type scHSVtk. Differences became particularly pronounced at low doses of GCV.
It can be concluded that the new TK.007-gene represents a promising alternative to the commonly used scHSVtk for gene therapeutic applications.
The aim of this work is to develop an effective equation of state (EoS) for QCD with the correct asymptotic degrees of freedom, to be used as input for dynamical studies of heavy ion collisions. We present an approach for modeling an EoS that respects the symmetries underlying QCD and includes the correct asymptotic degrees of freedom, i.e. quarks and gluons at high temperature and hadrons in the low-temperature limit. We achieve this by including quark degrees of freedom and the thermal contribution of the Polyakov loop in a hadronic chiral sigma-omega model. The hadronic part of the model is a nonlinear realization of a sigma-omega model. As the fundamental symmetries of QCD should also be present in its hadronic states, such an approach is widely used to describe hadron properties below and around Tc. The quarks are introduced as thermal quasiparticles coupling to the Polyakov loop, while the dynamics of the Polyakov loop are controlled by a potential term which is fitted to reproduce pure gauge lattice data. In this model the sigma field serves as the order parameter for chiral restoration and the Polyakov loop as the order parameter for deconfinement. The hadrons are suppressed at high densities by excluded-volume corrections. As a next step, we introduce our new HQ model equation of state in a microscopic+macroscopic hybrid approach to heavy ion collisions. This hybrid approach is based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The present implementation allows the comparison of pure microscopic transport calculations with hydrodynamic calculations using exactly the same initial conditions and freeze-out procedure. The effects of the change in the underlying dynamics – ideal fluid dynamics vs. non-equilibrium transport theory – are explored.
The final pion and proton multiplicities are lower in the hybrid-model calculation due to the isentropic hydrodynamic expansion, while the yields of strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic and directed flow are shown to be insensitive to changes in the EoS, while the smaller mean free path in the hydrodynamic evolution is directly reflected in higher flow results, which are consistent with the experimental data. This finding indicates qualitatively that physical mechanisms such as viscosity and other non-equilibrium effects play a considerably more important role than the EoS when bulk observables like flow are investigated. In the last chapter, results for the thermal production of MEMOs in nucleus-nucleus collisions from a combined micro+macro approach are presented. Multiplicities, rapidity and transverse-momentum spectra are predicted for Pb+Pb interactions at different beam energies. The presented excitation functions for various MEMO multiplicities show a clear maximum in the upper FAIR energy regime, making this facility the ideal place to study the production of these exotic forms of multi-strange objects.
In human neuroscientific research, there has been increasing interest in how the brain computes the value of an anticipated outcome. However, evidence is still missing about which valuation-related brain regions are modulated by the proximity to an expected goal and by the effort previously invested to reach it. The aim of this dissertation is to investigate the effects of goal proximity and invested effort on valuation-related regions in the human brain. We addressed this question in two fMRI studies by integrating a commonly used reward-anticipation task into different versions of a Multitrial Reward Schedule Paradigm. In both experiments, subjects had to perform consecutive reward-anticipation tasks under two different reward contingencies: in the delayed condition, participants received a monetary reward only after the successful completion of multiple consecutive trials; in the immediate condition, money was earned after every successful trial. In the first study, we demonstrated that the rostral cingulate zone of the posterior medial frontal cortex signals action value contingent on goal proximity, thereby replicating neurophysiological findings on goal-proximity signals in a homologous region in non-human primates. The findings of the second study imply that brain regions associated with general cognitive-control processes are modulated by previous effort investment. Furthermore, we found the posterior lateral prefrontal cortex and the orbitofrontal cortex to be involved in coding the effort-based context of a situation. In sum, these results extend the role of the human rostral cingulate zone in outcome evaluation to the continuous updating of action values over a course of action steps, based on the proximity to the expected reward.
Furthermore, we tentatively suggest that previous effort investment invokes processes under the control of the executive system, and that the posterior lateral prefrontal cortex and the orbitofrontal cortex are involved in an effort-based context representation that can be used for outcome evaluation dependent on the characteristics of the current situation.
The focus of the discussion at the conference on September 23, 2004 was on the long-term impact on capital markets and pension systems. The speakers tried to identify the direction and magnitude of potential changes as well as the likelihood of an eventual asset meltdown. The conference's objective was to combine insights from academia with those from the financial community in order to provide a more comprehensive outlook on capital market developments. Conference Reader Nr. 2005/01
Conference reader for the conference jointly organized by Athanasios Orphanides (Federal Reserve Board, Washington, D.C.), John C. Williams (Federal Reserve Bank of San Francisco), Heinz Hermann (Deutsche Bundesbank) and Volker Wieland (Center for Financial Studies and Goethe University Frankfurt), held on August 30-31, 2003 in Eltville. Table of contents: * Volker Wieland (Director, Center for Financial Studies): Foreword * Hans Georg Fabritius (Member of the Executive Board of the Deutsche Bundesbank): Opening Remarks * Charles Goodhart (Norman Sosnow Professor of Banking and Finance at the London School of Economics and External Member of the Bank of England's Monetary Policy Committee): After-Dinner Speech * Paper Abstracts * List of Participants
In recent years there has been increasing interest in the involvement of the MVA pathway and of members of the small-GTPase superfamily in the development and progression of AD. Earlier investigations mainly focused on the role of cholesterol in the disease pathology. This research was supported by retrospective cohort studies, which initially showed beneficial effects of the long-term intake of cholesterol-lowering statins on the incidence of sporadic AD. However, more recent literature has paid increasing attention to the isoprenoids FPP and GGPP, due to their crucial role in the post-translational modification of members of the superfamily of small GTPases. In AD, these proteins have been shown, among others, to be involved in mechanisms affecting APP processing, ROS generation and synaptic plasticity. A major factor impeding the clarification of the role of MVA-pathway intermediates in these mechanisms was the lack of a sensitive and accurate method to determine FPP and GGPP levels in brain tissue. Hence, a state-of-the-art HPLC-FLD method for the quantification of the isoprenoids FPP and GGPP in brain tissue was successfully developed. After the introduction of a double clean-up step for complex brain-matrix samples and the synthesis of an appropriate internal standard (DNP), the method was fully validated according to the latest FDA guideline for bioanalytical method validation. Furthermore, the method was transferred to a faster and more sensitive, state-of-the-art UHPLC-MS/MS application. Additionally, the method was shown to be applicable to mouse brain tissue, and data were generated from an in vivo mouse simvastatin study and for different mouse models. In line with the aims of the thesis, the current work describes for the first time absolute isoprenoid concentrations in human frontal-cortex white and grey matter. Furthermore, this is the first report of isoprenoid levels in the frontal cortex of human AD brains.
Further results were obtained from mouse brains originating from different mouse models, including the Thy-1 APP mouse model, which mimics AD pathology in terms of Aβ formation, and C57Bl/6 mice at different ages. AD prevalence clearly correlates with increasing age; therefore, three different generations of mice were investigated. The study demonstrated constant isoprenoid and cholesterol levels in the first half of life, followed by a significant increase of FPP and GGPP in the second half (between 12 and 24 months of age). Cholesterol levels were also elevated in the aged group, but the effect was less pronounced than for the isoprenoids. These results lead to the tentative conclusion that cerebral isoprenoid levels are elevated during aging and that this accumulation is amplified in AD, leading to accelerated neuronal dysfunction. In a different mouse study using C57Bl/6 mice, in vivo drug intervention with the HMG-CoA reductase inhibitor simvastatin revealed strong inhibition of the rate-limiting step of the mevalonate/isoprenoid/cholesterol pathway and resulted in the first report of significantly reduced FPP and GGPP levels in the brain tissue of statin-treated mice. These results open for the first time the possibility to monitor drug effects on cerebral isoprenoid levels and to correlate these data with a modulation of APP processing, which was shown by our group in previous studies. Interestingly, apart from the isoprenoid reduction following statin treatment, the reduction of brain cholesterol was also significant, but to a lesser extent. These findings support the notion that isoprenoid levels are more susceptible to statin treatment than cholesterol levels. Furthermore, this suggests a strong cellular dependence on FPP and GGPP, as the pool seems to be easily depleted, which could ultimately lead to cell death.
The first investigations of farnesylated Ras and geranylgeranylated Rac protein levels by means of immunoblotting substantiated the notion of a decreased abundance of prenylated small GTPases under statin influence, as a consequence of reduced isoprenoid levels. These findings demonstrate for the first time a correlation of FPP and GGPP levels with the abundance of small GTPases. Together with the results from the AD study, they show that isoprenoid levels are not strictly subject to the same regulation as cholesterol levels. To further understand the physiological regulation in the cell, in vitro experiments with different inhibitors of the mevalonate/isoprenoid/cholesterol pathway were conducted. The results confirmed the isoprenoid- and cholesterol-reducing effects of statin treatment observed in the aforementioned in vivo mouse study. Interestingly, inhibition of cholesterol synthesis downstream of the FPP branch point led to significantly elevated FPP levels. FTase inhibition led to significantly reduced FPP levels, whereas inhibition of GGTase I did not produce a significant change in either isoprenoid level.
In the work presented herein, the microscopic transport model BAMPS (Boltzmann Approach to Multi-Parton Scatterings) is applied to simulate the time evolution of the hot partonic medium that is created in Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC) and in Pb+Pb collisions at the recently started Large Hadron Collider (LHC). The study focuses especially on the investigation of the nuclear modification factor R_{AA}, which quantifies the suppression of particle yields at large transverse momentum with respect to a scaled proton+proton reference, and on the simultaneous description of the collective properties of the medium in terms of the elliptic flow v_{2} within a common framework.
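For reference, the nuclear modification factor described above is conventionally defined (standard definition, not quoted from the thesis) as

```latex
R_{AA}(p_T) \;=\; \frac{dN^{AA}/dp_T}{\langle N_{\mathrm{coll}} \rangle \; dN^{pp}/dp_T}
```

where ⟨N_coll⟩ is the average number of binary nucleon-nucleon collisions. R_{AA} = 1 corresponds to binary-collision scaling of the proton+proton reference, while values below 1 at large p_T signal the suppression the work investigates.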
One of the key functions of blood vessels is to transport nutrients and oxygen to distant tissues and organs in the body. When blood supply is insufficient, new vessels form to meet the metabolic tissue demands and to re-establish cellular homeostasis. Expansion of the vascular network through sprouting angiogenesis requires the specification of ECs into leading (sprouting) tip and following (non-sprouting) stalk cells. Attracted by guidance cues, tip cells dynamically extend and retract filopodia to navigate the nascent vessel sprout, whereas trailing stalk cells proliferate to form the extending vascular tube. All of these processes are under the control of environmental signals (e.g. hypoxia, metabolism) and numerous cytokines and peptide growth factors. The Dll4/Notch pathway coordinates several critical steps of angiogenic blood vessel growth. Even subtle alterations in Notch activity can profoundly influence endothelial cell behavior and blood vessel formation, yet little is known about the intrinsic regulation and dynamics of Notch signaling in endothelial cells. In addition, it remains an open question how different growth factor signals impinging on sprouting ECs are coordinated with local environmental cues originating from nutrient-deprived, hypoxic tissue to achieve a balanced endothelial cell response. Acetylation of lysines is a critical posttranslational modification of histones, which acts as an important regulatory mechanism to control chromatin structure and gene transcription. In addition to histones, several non-histone proteins are targeted for acetylation, and reversible acetylation is emerging as a fundamental regulatory mechanism to control protein function, interaction and stability. Previous studies from our group identified the NAD+-dependent deacetylase SIRT1 as a key regulator of blood vessel growth controlling endothelial angiogenic responses.
These studies revealed that SIRT1 is highly expressed in the vascular endothelium during blood vessel development, where it controls the angiogenic activity of endothelial cells. Moreover, SIRT1 has been shown to control the activity of key regulators of cardiovascular homeostasis such as eNOS, Foxo1 and p53. The present study describes how SIRT1 antagonizes Notch signaling by deacetylating the Notch intracellular domain (NICD). We showed that loss of SIRT1 enhances DLL4-induced endothelial Notch responses, as assessed with different luciferase reporter elements as well as by transcriptional analysis of endogenous Notch target gene activation. Conversely, SIRT1 gain of function, by overexpression or pharmacological activation, decreases the induction of Notch targets in response to DLL4 stimulation. We also showed that the NICD can be directly acetylated by PCAF and p300 and that SIRT1 promotes deacetylation of the NICD. We identified 14 lysines that are targeted for acetylation; their mutation abolishes the effects of SIRT1 on Notch responses. Furthermore, overexpression or activation of SIRT1 significantly reduces the level of NICD protein. Moreover, SIRT1-mediated NICD degradation can be reversed by blockade of the proteasome, suggesting a mechanism based on ubiquitin-mediated proteolysis. Indeed, we showed that SIRT1 knockdown or pharmacological inhibition decreased NICD ubiquitination. We propose a novel molecular mechanism modulating the amplitude and duration of Notch responses, in which acetylation increases NICD stability and therefore its permanence at promoters, while SIRT1, by inducing NICD degradation through deacetylation, shortens Notch responses. To evaluate the physiological relevance of our findings, we used different models in which the functions of Notch during blood vessel formation have been extensively characterized.
First, retinal angiogenesis in mice lacking SIRT1 activity shows decreased branching and reduced endothelial proliferation, similar to what is observed after Notch gain-of-function mutations. ECs from these mice exhibit increased expression of Notch target genes. Second, these results were reproduced during intersomitic vessel growth in sirt1-deficient zebrafish. In both models, the defects could be partially rescued by inhibition of Notch activation. Third, we used an in vitro model of vessel sprouting from differentiating embryoid bodies in response to VEGF in a collagen matrix. Our results showed that Sirt1-deficient cells display impaired sprouting, which correlated with increased NICD levels. In addition, when in competition with wild-type cells in this assay, Sirt1-deficient cells are more prone to occupy the stalk cell position. Taken together, our study identifies reversible acetylation of the NICD as a novel molecular mechanism to adapt the dynamics of Notch signaling and suggests that SIRT1 acts as a rheostat to fine-tune endothelial Notch responses. The NAD+-dependent nature of SIRT1 activity possibly links endothelial Notch responses to environmental cues and metabolic changes during nutrient deprivation in ischemic environments or upon other cellular stresses.
This dissertation contains three essays on monetary policy, the dynamics of interest rates, and spillovers across economies. In the first essay I examine the effects of monetary policy and its interaction with financial regulation within a micro-founded macroeconometric framework for a closed economy with a heterogeneous banking system facing a period of low interest rates. I analyse the interplay between monetary policy and banking regulation and study the role of agents' expectations for the effectiveness of unconventional monetary policy tools. In the next essay, I argue that openness is crucial for understanding the dynamics of the term structure. In an empirical application, I show that my model of the term structure fits the yield curve well in-sample and has a sound ability to forecast interest rates out-of-sample. The model accounts for the expectations hypothesis, replicates the forward premium anomaly and reconciles the uncovered interest rate parity implications. The last essay is concerned with the dynamics of co-movement among macroeconomic aggregates and the degree of convergence or decoupling among economies. The model includes measures of financial and trade-based interdependencies and incorporates feedback between macroeconomic variables and time-varying weights. The findings point to the importance of asset price movements and financial linkages.
Until the middle of the 20th century, organic materials attracted no particular attention with regard to their electronic properties. Greater interest in these materials arose only with the discovery of an unusually high electrical conductivity in the organic perylene-bromine charge-transfer complex by Inokuchi et al. in 1954. This new class of materials typically consists of donor and acceptor molecules bound to one another in a specific stoichiometry. Electric charge is transferred between the donor and acceptor molecules. To describe this process, Robert Mulliken developed a theoretical framework in the 1960s. Depending on the arrangement of the molecules and the amount of transferred charge, a charge-transfer complex (or salt) can be an insulator, a semiconductor, a metal or even a superconductor. Charge-transfer materials received even more attention with the discovery of the first quasi-one-dimensional organic metal, TTF-TCNQ (tetrathiafulvalene-tetracyanoquinodimethane), in 1973. ...
This dissertation connects two independent fields of theoretical neuroscience: on the one hand, the self-organization of topographic connectivity patterns, and on the other, invariant object recognition, i.e. the recognition of objects independently of their various possible retinal representations (for example due to translations or scalings). In the presented approach, the topographic representation is used as a coordinate system, which then allows for the implementation of invariance transformations. This study thus shows that the brain may self-organize before birth such that it is able to invariantly recognize objects immediately after birth. Besides the core hypothesis linking prenatal self-organization with object recognition, advancements in both fields themselves are also presented. At the beginning of the thesis, a novel, analytically solvable probabilistic generative model for topographic maps is introduced. At the end of the thesis, a model that integrates classical feature-based ideas with the normalization-based approach is presented. This bilinear model makes use of sparseness as well as slowness to implement "optimal" topographic representations. It is therefore a good candidate for hierarchical processing in the brain and for future research.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From these, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent removal of over 5 ppb in late December between 500 and 550 K, and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 7.8x10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced.
In the investigated winter 2002/2003, denitrification has only a moderate impact (≤14%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
This paper discusses the effect of capital regulation on the risk-taking behavior of commercial banks. We first show theoretically that capital regulation works differently in different banking-market structures. In markets with low concentration, capital regulation is effective in mitigating risk-taking behavior because banks' franchise values are low and banks have incentives to pursue risky strategies in order to increase their franchise values. If franchise values are high, on the other hand, the effect of capital regulation on bank risk taking is ambiguous, as banks lack those incentives. We then test the model predictions on a cross-country sample of 421 commercial banks from 61 countries. We find that capital regulation is effective in mitigating risk taking only in markets with a low degree of concentration. The results remain robust after accounting for financial sector development, legal system efficiency, and other country- and bank-specific characteristics. Keywords: Banks, market structure, risk shifting, franchise value, capital regulation
Background: The aim of this study was to develop a child-specific classification system for long bone fractures and to examine its reliability and validity on the basis of a prospective multicentre study. Methods: Using the sequentially developed classification system, three samples of between 30 and 185 paediatric limb fractures, drawn from a pool of 2308 fractures documented in two multicentre studies, were analysed in a blinded fashion by eight orthopaedic surgeons on a total of five occasions. Intra- and interobserver reliability and accuracy were calculated. Results: The reliability improved with successive simplification of the classification. The final version resulted in an overall interobserver agreement of kappa=0.71, with no significant difference between experienced and less experienced raters. Conclusions: The evaluation of the newly proposed classification system resulted in a reliable and routinely applicable system, for which training in its proper use may further improve the reliability. It can be recommended as a useful tool for clinical practice and offers the option of developing treatment recommendations and outcome predictions in the future.
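The interobserver agreement reported above is Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. A minimal sketch of the computation (the fracture-type labels below are hypothetical illustration, not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical classifications."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases rated identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical fracture-type labels from two raters
a = ["A", "B", "A", "C", "B", "A", "C", "B"]
b = ["A", "B", "A", "C", "A", "A", "C", "C"]
print(round(cohens_kappa(a, b), 3))
```

Values around 0.71, as reported for the final version of the classification, are conventionally interpreted as substantial agreement.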
The dynamics of relativistic heavy-ion collisions is investigated on the basis of a simple (1+1)-dimensional hydrodynamical model in light-cone coordinates. The main emphasis is placed on studying the sensitivity of the dynamics and observables to the equation of state and the initial conditions. A low sensitivity of pion rapidity spectra to the presence of the phase transition is demonstrated, and some inconsistencies of the equilibrium scenario are pointed out. Possible non-equilibrium effects are discussed, in particular the possibility of an explosive disintegration of the deconfined phase into quark-gluon droplets. Simple estimates show that the characteristic droplet size should decrease with increasing collective expansion rate. These droplets will hadronize individually by emitting hadrons from the surface. This scenario should reveal itself through strong non-statistical fluctuations of observables. Critical Point and Onset of Deconfinement, 4th International Workshop, July 9-13, 2007, GSI Darmstadt, Germany
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions from low SPS up to RHIC energies have been studied within the HSD transport approach. Fluctuations of baryon number and electric charge have also been explored for Pb+Pb collisions at SPS energies in comparison with the experimental data from NA49. We find a dominant role of fluctuations in the number of nucleon participants for the final hadron multiplicity fluctuations, and a strong influence of the experimental acceptance on the final results. Critical Point and Onset of Deconfinement, 4th International Workshop, July 9-13, 2007, Darmstadt, Germany
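Event-by-event multiplicity fluctuations of this kind are commonly quantified by the scaled variance ω = Var(N)/⟨N⟩, a standard observable (the sketch below is a generic illustration, not HSD output). For an independent-emission (Poisson) baseline, ω ≈ 1, so deviations from unity signal dynamical sources such as participant-number fluctuations:

```python
import numpy as np

# Sample event-by-event multiplicities from a Poisson baseline
rng = np.random.default_rng(0)
multiplicities = rng.poisson(lam=50.0, size=100_000)

# Scaled variance: omega = Var(N) / <N>
# Poisson emission gives omega = 1; dynamical fluctuations push it above 1
omega = multiplicities.var() / multiplicities.mean()
print(round(omega, 3))
```

In a transport study, the same ratio would be computed from the simulated event ensemble within the experimental acceptance.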
Statistical physics of power flows on networks with a high share of fluctuating renewable generation
(2010)
Renewable energy sources will play an important role in the future generation of electrical energy. This is due to the fact that fossil fuel reserves are limited and because of the waste caused by conventional electricity generation. The most important sources of renewable energy, wind and solar irradiation, exhibit strong temporal fluctuations. This poses new problems for the security of supply. Furthermore, the power flows acquire a stochastic character, so that new methods are required to predict flows within an electrical grid. The main focus of this work is the description of power flows in an electrical transmission network with a high share of renewable generation of electrical energy. To define an appropriate model, it is important to understand the general set-up of a stable system with fluctuating generation. Therefore, generation time series of solar and wind power are compared with load time series for the whole of Europe, and the required balancing or storage capacities are analyzed. With these insights, a simple model is proposed to study the power flows. An approximation to the full power flow equations is used and evaluated with Monte Carlo simulations. Furthermore, approximations to the distributions of power flows along the links are derived analytically. Finally, the results are compared with the power flows calculated from the generation and load data.
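A common linearization of the full power flow equations is the DC power-flow approximation, in which line flows follow from bus voltage angles obtained by solving a linear system built from line reactances. A minimal sketch on a hypothetical three-bus network (the network data are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

# Hypothetical 3-bus network: lines given as (from_bus, to_bus, reactance)
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]
n = 3
# Net power injections per bus (generation minus load); must sum to zero
injections = np.array([1.0, -0.5, -0.5])

# Build the nodal susceptance matrix B (Laplacian-like, weights 1/x)
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# Fix bus 0 as the slack bus (theta = 0) and solve the reduced system
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

# Line flows from angle differences: f_ij = (theta_i - theta_j) / x_ij
flows = [(theta[i] - theta[j]) / x for i, j, x in lines]
print(flows)  # approximately [0.625, 0.125, 0.375]
```

In a stochastic setting, the injections would be drawn from wind, solar and load distributions and the resulting flow distributions along the links studied, e.g. by Monte Carlo sampling as described above.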
Nanggroe Aceh Darussalam is a multicultural province within a multicultural state. Hence, its political leaders not only face the need to integrate ethnic and cultural diversity into a regional framework, but also have to define Aceh's role within the Indonesian nation. During its violent past, which was characterized by exploitation and military oppression, there were good reasons to emphasize sameness over diversity and to build up the consciousness of a unified Acehnese identity. From both an emic and an etic perspective, it is today widely accepted that there is such a thing as a homogeneous Acehnese culture, rooted in a glorious, though troublesome, history of repression and rebellion and shaped by a strong Islamic piety. Even if it is true that Acehnese history has created a strong regional identity, it must not be forgotten that people living in this area belong to various ethnic and cultural groups and that they represent a rich variety of different cultures rather than simply a single homogeneous culture. As a matter of fact, the practices and discourses of Islam here also vary depending on the cultural background of the people. As elsewhere in Indonesia and beyond, world religions have to adapt to local customs, have to be appropriated by the local people, and have to be indigenized. This is the reason why adat still continues to play a role in every local context, even if it has been treated with suspicion in many parts of Indonesia since the Dutch colonial administration began using it as a counterforce against Islam in order to implement their divide-and-rule strategy. With this article, I wish to shed some light on the complexities of Acehnese culture, as it encompasses numerous very distinct local cultures, and to reflect on the general significance of culture for the construction and reconstruction of post-tsunami Aceh.