A new era in experimental nuclear physics has begun with the start-up of the Large Hadron Collider at CERN and its dedicated heavy-ion detector system ALICE. Measuring the highest energy density ever produced in nucleus-nucleus collisions, the detector has been designed to study the properties of the created hot and dense medium, assumed to be a Quark-Gluon Plasma.
Composed of 18 high-granularity sub-detectors, ALICE delivers data from a few million electronic channels for proton-proton and heavy-ion collisions.
The produced data volume can reach up to 26 GByte/s for central Pb–Pb collisions at the design luminosity of L = 10²⁷ cm⁻² s⁻¹, challenging not only the data storage but also the physics analysis. A High-Level Trigger (HLT) has been built and commissioned to reduce this amount of data to a storable value prior to archiving, by means of data filtering and compression without loss of physics information. Implemented as a large high-performance compute cluster, the HLT is able to perform a full reconstruction of all events at the time of data-taking, which allows triggering based on the information of a complete event. Rare physics probes with high transverse momentum can be identified and selected to enhance the overall physics reach of the experiment.
The commissioning of the HLT is at the center of this thesis. Deeply embedded in the ALICE data path and therefore interfacing with all other ALICE subsystems, this commissioning posed not only a major technical challenge but also required a massive coordination effort, which was completed with the first proton-proton collisions reconstructed by the HLT. This thesis concludes with the study and implementation of on-line high transverse momentum triggers.
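The event-selection principle behind such an on-line trigger can be sketched in a few lines. This is an illustrative toy, not ALICE HLT code; the `Track` class and the 5 GeV/c threshold are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class Track:
    pt: float  # transverse momentum in GeV/c (hypothetical reconstructed track)

def high_pt_trigger(events, threshold=5.0):
    """Keep only events containing at least one track whose transverse
    momentum exceeds the threshold (in GeV/c)."""
    return [ev for ev in events if any(t.pt > threshold for t in ev)]

# Two toy events, each a list of reconstructed tracks:
events = [
    [Track(0.8), Track(1.2)],   # soft event, rejected
    [Track(0.5), Track(7.3)],   # contains a hard probe, accepted
]
selected = high_pt_trigger(events)
```

Because the HLT reconstructs the full event before deciding, such a selection can use complete-event information rather than raw detector signals.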
Tumor-associated macrophages (TAM) are a major supportive component within neoplasms and, through their plasticity, promote all phases of tumor development. The mechanisms of macrophage (MΦ) attraction and differentiation to a tumor-promoting phenotype, defined among other things by distinct cytokine patterns such as pronounced production of the immunosuppressive interleukin 10 (IL-10), are largely unknown. However, a high apoptosis index within tumors combined with strong MΦ infiltration correlates with poor prognosis. I therefore aimed at identifying signaling pathways contributing to the generation of TAM-like MΦ, using supernatant of apoptotic cancer cells (ACM) as stimulus.
To identify novel factors involved in generating TAM-like MΦ, I used an adenoviral RNAi-based approach. The primary read-out was production of IL-10; mediators modulating IL-10 were, however, re-validated for their impact on the regulation of the cytokines IL-6, IL-8 and IL-12. Following assay development, optimization and down-scaling to a 384-well format, primary human MΦ were transduced with 8495 constructs of the adenoviral shRNA SilenceSelect® library of Galapagos BV, followed by activation to a TAM-like phenotype using ACM. I identified 96 genes involved in IL-10 production in response to ACM and observed a pronounced cluster of 22 targets regulating both IL-10 and IL-6. Principal validation of five targets of the IL-10/IL-6 cluster was performed using siRNA or pharmacological inhibitors. Among those, IL-4 receptor-alpha and cannabinoid receptor 2 were confirmed as regulators of IL-10 and IL-6 secretion.
One protein identified in the screen, the nerve growth factor (NGF) receptor TRKA, was chosen for in-depth validation, based on its involvement in IL-10, IL-6 and IL-12 secretion from ACM-stimulated human MΦ. TRKA plays a cardinal role in neuronal development, but compelling evidence is emerging that TRKA also participates in cancer development. First experiments using pharmacological inhibitors confirmed in principle the involvement of TRKA in IL-10 secretion by ACM-stimulated MΦ and revealed PI3K/AKT and, to a lesser extent, MAPK p38 as important signaling molecules downstream of TRKA activation. Signaling through TRKA required the presence of its ligand NGF, as indicated by NGF neutralization experiments. NGF was not induced by or present in ACM, but was constitutively secreted by MΦ. Interestingly, MΦ responded to authentic NGF with neither AKT and p38 phosphorylation nor IL-10 production. TRKA is well known to be transactivated by other receptors, and in neurons its cellular localization is decisive for its function. Inhibitors of common transactivation partners did not influence IL-10 production by human MΦ. Rather, ACM treatment provoked pronounced translocation of TRKA to the plasma membrane within 10 minutes, as observed by immunofluorescence staining. Consequently, I set out to clarify the mechanisms of TRKA trafficking in response to ACM.
The bioactive lipid sphingosine-1-phosphate (S1P) had previously been identified as an important apoptotic cell-derived mediator involved in TAM-like MΦ polarization. Indeed, I observed S1P and src kinase involvement in ACM-mediated IL-10 induction. Furthermore, inhibition of S1P receptor (S1PR) signaling or src kinase activity prevented TRKA translocation, whereas a TRKA inhibitor or anti-NGF did not block TRKA trafficking to the plasma membrane in response to ACM. Thus, autocrine secreted NGF activated TRKA to promote IL-10 secretion, which required prior S1PR/src-dependent translocation of TRKA to the plasma membrane. Following the detailed analysis of IL-10 regulation, I investigated whether other TAM phenotype markers were influenced by ACM and whether their expression was regulated through TRKA-dependent signaling. Five of six markers were up-regulated at the mRNA level by ACM, and secretion of IL-6, IL-8 and TNF-alpha was triggered. S1PR signaling was essential for the induction of all but one marker, whereas TRKA signaling was required only for cytokine secretion. Interestingly, none of the investigated TAM markers was regulated identically to IL-10, emphasizing the tight and exclusive regulatory machinery of this potent immunosuppressive cytokine.
Finally, I aimed to validate in vivo the findings obtained in vitro with human ACM-stimulated MΦ. To this end, I isolated murine TAM as well as other major mononuclear phagocyte populations from primary oncogene-induced breast cancer tissue. Indeed, TRKA-dependent signaling was required for spontaneous cytokine production selectively by primary murine TAM. Besides IL-10, the TRKA pathway was decisive for the secretion of IL-6, TNF-alpha and monocyte chemotactic protein-1, indicating its relevance in cancer-associated inflammation.
In summary, my findings highlight a fine-tuned regulatory system of S1P-dependent TRKA trafficking and autocrine NGF signaling in TAM biology. Both factors, S1P as well as NGF, might be interesting targets for future cancer therapy.
To escape recognition by the body's own immune system, tumors display modifications of their microenvironment. These include, among others, altered oxygen concentrations in the tumor core and the release of biochemical factors from tumor cells, which influence the function of tumor-associated phagocytes such as dendritic cells (DC). DC are professional antigen-presenting cells that specialize into different functional subtypes. Myeloid DC (mDC) are particularly efficient at antigen presentation, whereas plasmacytoid DC (pDC) exert regulatory effects on the immune system. Both subtypes play an important role in carcinogenesis.
While human mDC can be generated ex vivo from monocytes for therapeutic use, this has so far not been possible for human pDC. A first aim of this thesis was therefore to develop a protocol for the generation of human pDC from human monocytes. Monocytes were differentiated into pDC equivalents, termed monocyte-derived pDC (mo-pDC), using the growth factor Fms-related tyrosine kinase 3 ligand (Flt3-L). Indeed, mo-pDC displayed a surface marker profile characteristic of human pDC and, compared with mDC, showed a low capacity to induce proliferation of autologous T cells and to phagocytose apoptotic cells. During their differentiation from monocytes, mo-pDC acquired continuously increasing expression of the pDC-specific transcription factor E2-2 and its specific target genes. The most important functional parameter of pDC is the production of large amounts of interferon-α (IFN-α). Mo-pDC likewise secreted large amounts of IFN-α, either after prior activation with tumor necrosis factor-α (TNF-α) or when vitamin D3 or all-trans retinoic acid was used in addition to Flt3-L during their differentiation. When mo-pDC were generated under hypoxia, a prominent factor of the tumor microenvironment, expression of the specific transcription factor E2-2 and release of IFN-α were strongly reduced. These data first showed that mo-pDC can be used to study the differentiation and function of human pDC.
Furthermore, they provided evidence of altered differentiation of human pDC under hypoxia. In a next step, it was therefore investigated whether hypoxia also influences the differentiation of pDC from their physiological precursors. When murine bone marrow cells were cultured with Flt3-L under normoxia or hypoxia, pDC differentiation was indeed suppressed under hypoxia. This depended on the hypoxia-induced activity of hypoxia-inducible factor 1 (HIF-1), since Flt3-L-induced differentiation of murine bone marrow cells in which HIF-1 expression had been knocked out in pDC precursor cells proceeded normally under hypoxia.
In summary, hypoxia suppresses the differentiation and function of pDC through activation of HIF-1. This mechanism could contribute to their described dysfunction in human tumors.
Besides hypoxia, many other factors contribute to immunosuppression in tumors.
One component of the tumor microenvironment is the presence of apoptotic tumor cells. In contrast to the general view of tumors as apoptosis-resistant entities, apoptosis of tumor cells also occurs abundantly in untreated tumors. Under physiological conditions, apoptotic cells of the body suppress the immune system. The release of apoptotic material or the secretion of factors from dying tumor cells could therefore strongly influence the function of tumor-associated DC and the associated activation of tumoricidal lymphocytes. A study of this question was the second aim of the present work. For this purpose, human mDC were activated with supernatants of viable, apoptotic or necrotic human breast cancer cells and subsequently co-cultured with autologous T cells. The cytotoxic potential of the co-cultured T cells was then analyzed. Interestingly, activation with supernatants of apoptotic tumor cells suppressed the DC-mediated generation of tumoricidal T cells through the emergence of a population of regulatory T cells (Treg) characterized by simultaneous expression of the surface molecules CD39 and CD69. The emergence of the CD39- and CD69-expressing Treg population depended on the release of the bioactive lipid sphingosine-1-phosphate (S1P) from apoptotic cells, which, via S1P receptor 4, led to the release of the immunoregulatory cytokine IL-27 from mDC.
Neutralization of IL-27 in AC-activated co-cultures of mDC and T cells blocked the generation of CD39- and CD69-expressing Treg cells and consequently resulted in the activation of cytotoxic T cells. Furthermore, the formation of adenosine in the co-cultures was required for the suppression of cytotoxic T cells. Initial experiments indicated a direct interaction of CD69- and CD39-expressing Treg cells with CD73-expressing cytotoxic T cells. CD39 and CD73 are required for the generation of adenosine from ATP, so the interaction of Treg cells and cytotoxic T cells could promote adenosine production.
In summary, the findings presented here show how factors of the tumor microenvironment can influence the function of human DC subtypes. An understanding of the underlying mechanisms can provide valuable information for the choice of effective immunotherapies or chemotherapies and thus support the therapy of human tumors.
With the increasing heterogeneity of modern hardware, different requirements for 3d applications arise. Although real-time rendering of photo-realistic images is possible using today's graphics cards, it still requires large computational effort. Furthermore, smart-phones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, the detail of a scene is usually reduced so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts. These artifacts may be distracting for a human spectator gazing at the display and thus reduce the visual quality of the presented scene. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods utilize geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels Of Detail (LODs), which are made available to the rendering system. The system reduces the detail of an object using the precalculated LODs, e.g. when it is moved into the back of the scene. The appropriate LOD is selected using a metric, and it replaces the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition. Otherwise, the exchange introduces discontinuities, which are easily discovered by a human spectator. After completion of the transition, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human.
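The metric-based selection of a discrete LOD can be sketched as follows; a minimal example assuming a simple distance metric, with hypothetical switch distances:

```python
def select_lod(distance, lod_distances):
    """Return the index of the LOD to display for an object at the given
    camera distance: level 0 (full detail) up close, coarser levels as the
    object moves past each switch distance."""
    level = 0
    for i, switch in enumerate(lod_distances):
        if distance >= switch:
            level = i + 1
    return level

# Hypothetical switch distances: beyond 10 units use LOD 1, beyond 50 use LOD 2.
lod_distances = [10.0, 50.0]
```

Whenever the returned level differs from the currently displayed one, the smooth transition described above must blend both versions before the old one is discarded.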
Humans are limited in their vision. These limitations range from being unable to distinguish colors under varying illumination scenarios to being able to focus on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of an applied compression. Popular methods of vision-based compression include MPEG and JPEG. For example, JPEG compression exploits the reduced sensitivity of humans regarding color and therefore encodes colors with a lower resolution. Other fields, such as auditive perception, also allow the exploitation of human limitations. MP3 compression, for example, reduces the quality of stored frequencies if they are masked by other frequencies. Various computer models exist for the representation of perception. In our rendering scenario, a model that cannot be influenced by a human spectator is advantageous, such as the visual salience or saliency.
Saliency is a notion from psycho-physics that determines how an object “pops out” of its surroundings. These outstanding objects (or features) are important for human vision and are directly evaluated by our Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows an identification of regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering methods. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted if only small or unperceivable errors are expected to occur. Yet saliency is commonly applied to 2d images, and an extension towards 3d objects has only partially been presented. Some issues need to be addressed to accomplish a complete transfer.
In this work, we present a smart rendering system that not only utilizes a 3d visual salience model but also applies the reduction in detail directly during rendering. As opposed to normal LOD methods, this detail reduction is not limited to a predefined set of levels; rather, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3d object is presented. The definition of this function allows precalculating and storing object-related visual salience information. This stored data is then applicable in any illumination scenario and allows identifying regions of interest on the surface of a 3d object. Unlike preprocessed methods, which generate a view-independent LOD, this identification includes information of the scene as well. Thus, we are able to define a perception-based, view-specific LOD. Performance measures of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have proven the validity of the reduction.
The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations, which define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail, while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, a cut is stored, and re-traversal of the tree during rendering is avoided. Due to the explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes a node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed locally, requiring only direct successor or predecessor information. Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. They evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of a cut while being restricted in some way, e.g. in size. The latter evolves the cut to match a certain distribution and is applied in cases where a prioritization of nodes is not applicable. Both evaluation strategies operate with linear time complexity with respect to the size of the current TreeCut.
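The two core cut operations can be illustrated with a minimal sketch; the `Node` class and list-based cut below are assumptions for illustration, not the thesis's actual implementation:

```python
class Node:
    """A node in a multi-resolution hierarchy; leaves hold full detail."""
    def __init__(self, level, children=None):
        self.level = level
        self.children = children or []
        self.parent = None
        for c in self.children:
            c.parent = self

def refine(cut, node):
    """Increase detail: replace a cut node with its children."""
    if not node.children:
        return cut  # leaf node: already at maximum detail
    i = cut.index(node)
    return cut[:i] + node.children + cut[i + 1:]

def coarse(cut, node):
    """Decrease detail: replace the node together with its siblings by
    their parent; assumes all siblings are currently on the cut."""
    parent = node.parent
    if parent is None:
        return cut  # root node: already at minimum detail
    new_cut, inserted = [], False
    for n in cut:
        if n.parent is parent:
            if not inserted:       # first sibling encountered:
                new_cut.append(parent)  # substitute the parent once
                inserted = True
        else:
            new_cut.append(n)
    return new_cut

# A tiny two-level hierarchy:
a1, a2 = Node(2), Node(2)
a, b = Node(1, [a1, a2]), Node(1)
root = Node(0, [a, b])

cut = [root]
cut = refine(cut, root)   # cut is now [a, b]
cut = refine(cut, a)      # cut is now [a1, a2, b]
cut = coarse(cut, a1)     # back to [a, b]
```

Note that both operations touch only a node and its direct successors or predecessor, matching the locality claim above.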
The data layout separates rendering data and hierarchy to enable multi-threaded evaluation and display. The object is adapted over multiple frames while the rendering is not interrupted by the evaluation strategy in use. Due to this design, the overhead imposed on the TreeCut data structure does not influence rendering performance, and a linear time complexity for rendering is retained. The TreeCut is not limited to altering the geometrical detail of an object. It has successfully been applied to create a non-photo-realistic stippling display, which draws the object with equally sized points in varying density. In this case the bucket-based evaluation strategy is utilized, which determines the distribution of the cut based on local illumination information. As an alternative, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon. A combination of external priorities is used to derive the appropriate icon version. An application for this mechanism is a messaging system that accounts for the current user situation.
When optimizing an object or scene, perceptual methods allow accounting for or exploiting human limitations. To this end, visual salience approaches derive a saliency map, which encodes regions of interest in a 2d map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. abort a recursion when the current location is unsalient. The visual salience depends on multiple factors, including the view and the illumination of the scene. We extend the existing definition of 2d saliency and propose a universal function for 3d visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting the saliency from a 2d image and approximated 3d information, we compute this information directly from the 3d data. We derive a list of equivalent features for the 3d scenario and add them to the BSWDF. As the BSWDF is universal, it also covers 2d images, and the calculation of the important regions within images remains possible.
To extract the individual features that contribute to visual salience, the capabilities of modern graphics cards are utilized in combination with an accumulation method for rendering. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and compared with their surround to determine whether they “pop out”. These operations are performed with a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3d data. This increases processing speed because no transfer of the data is required. After computation, the object-specific features can be combined to derive a saliency map for the object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored on disk. We define a sampling scheme to determine the views that need to be evaluated for each object. With these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. The sampling schemes compose a set of images in the form of a lookup table, similar to existing rendering techniques that extract illumination information from a lookup. The size of the lookup table increases only with the number of samples or the image size used for creation, as the images are of equal size. Thus, the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information of the object. Once the data is available, the saliency values are computed using a specialized illumination model, and a priority for each primitive is extracted.
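The center-surround “pop out” test performed per surfel can be illustrated on the CPU. This 1d NumPy sketch stands in for the GPU shader; the curvature values are invented, and a real implementation would operate on 2d feature buffers:

```python
import numpy as np

def center_surround(feature, k=3):
    """Per-surfel pop-out score: absolute difference between a feature
    value and the mean of its k-wide neighbourhood (1d for brevity)."""
    pad = k // 2
    padded = np.pad(feature, pad, mode='edge')        # replicate borders
    surround = np.convolve(padded, np.ones(k) / k, mode='valid')
    return np.abs(feature - surround)                 # contrast to surround

# One outlier surfel among otherwise uniform curvature values:
curvature = np.array([0.1, 0.1, 0.9, 0.1, 0.1])
saliency = center_surround(curvature)                 # peaks at the outlier
```

A full saliency map would combine several such feature contrasts (color, curvature, illumination) as described above.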
If the GPU is used, the calculated data has to be transferred from the graphics card. We therefore use the “transform feedback” capabilities, which allow high transfer rates and preserve the order of processed primitives. In this way, an identification of regions of interest based on the currently used primitives is achieved. The TreeCut evaluation strategies are then able to optimize the representation in a perception-based manner.
As the adaptation utilizes information of the current scene, each change to an object can result in new visual salience information. This defines a self-optimizing system: the Feedback System. The output generated by this system converges towards a perception-optimized solution. To prove the usefulness of the saliency information, user tests have been performed with the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression to a purely geometrical approach, common for LOD generation. One result of the tests is that saliency information allows increasing compression even further than is possible with the purely geometrical methods. The participants were not able to distinguish between objects even when the saliency-based compression had only 60% of the size of the geometrically reduced object. For greater size ratios, saliency-based compression is rated, on average, with a higher score, and these results are highly significant according to statistical tests. The Feedback System extends a 3d object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation, which utilizes a Software Development Kit (SDK) for physical simulations. This was chosen, on the one hand, to show the universal applicability of the proposed system and, on the other hand, to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design. In this case, the TreeCut operations alter not only geometrical but also simulation detail. This increases calculation performance because both the rendering and the SDK operate on less data after the reduction has been completed.
The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connection. An example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited for point-based simulations. Any of these simulations scales with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in static parts of the simulation. Detail of the simulation is thus preserved while fewer nodes are simulated.
The incorporation of perception in real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived. These models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented to change the LOD of an object in a dynamic and continuous manner. No definition of the individual levels in advance is required, and the transitions are performed locally. Furthermore, in combination with an identification of important regions by the BSWDF, a perceptual evaluation of a 3d object is achieved. As opposed to existing methods, which approximate data from 2d images, the perceptual information is acquired directly from 3d data. Some of this data can be preprocessed if necessary, to defer additional computations away from rendering time. The Feedback System, created by the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved with modern hardware, and we have proven the validity of the reductions by performing several user tests. However, the presented system focuses only on specific aspects, and more research is required to capture even more capabilities that a perception-based rendering system can provide.
Global climate change and land use change will not only alter entire ecosystems and biodiversity patterns, but also the supply of ecosystem services. A better understanding of the consequences is particularly needed in under-investigated regions, such as West Africa. The projected environmental changes suggest negative impacts on nature, thus representing a threat to human well-being. However, many effects caused by climate and land use change are poorly understood so far. Thus, the main objective of this thesis was to investigate the impact of climate and land use change on vegetation patterns, plant diversity and important provisioning ecosystem services in West Africa. The three different aspects are explored separately and form the chapters of this thesis. The findings help to improve our understanding of the effects of environmental change on ecosystems and human well-being. In the first study, the main objectives were to model trends and the extent of future biome shifts in West Africa that may occur by 2050. I also modelled the trend in West African tree cover change while accounting for human impact. Additionally, uncertainty in future climate projections was evaluated to identify regions with reliable trends and regions where the impacts remain uncertain. The potential future spatial distributions of desert, grassland, savanna, deciduous and evergreen forest in West Africa were modelled using six bioclimatic models. Future tree cover change was analysed with generalized additive models (GAMs). I used climate data from 17 general circulation models (GCMs) and included human population density and fire intensity to model tree cover. Consensus projections were derived via weighted averages to: 1) reduce inter-model variability, and 2) describe trends extracted from different GCM projections.
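A weighted-average consensus over GCM projections can be sketched in a few lines; the model weights and tree-cover values below are invented for illustration, not data from the study:

```python
import numpy as np

def consensus(projections, weights):
    """Weighted average over GCM projections to damp inter-model variability.
    projections: (n_models, n_cells) array; weights: per-model weights."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                  # normalize to sum to 1
    return np.tensordot(w, np.asarray(projections, dtype=float), axes=1)

# Three hypothetical GCM tree-cover projections (%) for four grid cells,
# the third model weighted twice as strongly:
proj = [[10, 20, 30, 40],
        [12, 18, 33, 41],
        [ 8, 22, 27, 39]]
mean = consensus(proj, [1, 1, 2])
```

Weighting lets more skilful models dominate the consensus, while averaging suppresses the spread between individual GCM projections.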
The strongest predicted effect of climate change was on desert and grasslands, where the bioclimatic envelope of grassland is projected to expand into the Sahara desert by an area of 2 million km². While savannas are predicted to contract in the south (by (54 ± 22) × 10⁴ km²), deciduous and evergreen forest biomes are expected to expand ((64 ± 13) × 10⁴ km² and (77 ± 26) × 10⁴ km², respectively). However, uncertainty due to different GCMs was particularly high for the grassland and evergreen forest biome shifts. Increasing tree cover (1–10%) was projected for large parts of Benin, Burkina Faso, Côte d’Ivoire, Ghana and Togo, but a decrease was projected for coastal areas (1–20%). Furthermore, human impact negatively affected tree cover and partly changed the direction of the projected climate-driven tendency from increase to decrease. Considering climate change alone, the model results of potential vegetation (biomes) showed a ‘greening’ trend by 2050. However, the modelled effects of human impact suggest future forest degradation. Thus, it is essential to consider both climate change and human impact in order to generate realistic future projections of woody cover. The second study focused on the impact and the interplay of future (2050) climate and land use change on the plant diversity of the West African country Burkina Faso. Synergistic forecasts for this country are lacking to date. Burkina Faso covers a broad bioclimatic gradient, which causes a similar gradient in plant diversity. Thus, the impact of climate and land use change can be investigated in regions with different levels of species richness. The LandSHIFT model from the Center for Environmental Systems Research (CESR; Kassel, Germany) was adapted for this study to derive novel regional, spatially explicit future (2050) land use simulations for Burkina Faso. Additionally, the simulations include different assumptions on technological developments in the agricultural sector.
One-class support vector machines (SVMs), a machine learning method, were applied to these land use simulations together with current and future (2050) climate projections at a 0.1° resolution (cell: ~10 × 10 km). The modelling results showed that the flora of Burkina Faso will be primarily negatively impacted by future climate and land use changes. Species richness will be significantly reduced by 2050 (P < 0.001, paired Wilcoxon signed-rank test). However, contrasting latitudinal patterns were found. Although climate change is predicted to cause species loss in the more humid regions of southern Burkina Faso (~200 species per cell), the model projects an increase of species richness in the Sahel. However, land use change is expected to suppress this increase back to the current species diversity level, depending on the technological developments. Climate change is a more important threat to plant diversity than land use change under the assumption of technological stagnation in the agricultural sector. Overall, the study highlights the impact and interplay of future climate and land use change on plant diversity along a broad bioclimatic gradient in West Africa. Furthermore, the results suggest that plant diversity in dry and humid regions of the tropics might generally respond differently to climate and land use change. This pattern has not been detected by global studies so far. Several of the plant species in West Africa contribute significantly to the livelihoods of the population. These plants provide so-called non-timber forest products (NTFPs), which are important provisioning ecosystem services. However, these services are also threatened by environmental change. Thus, the third study aimed at developing a novel approach to assess the impacts of climate and land use change on the economic benefits derived from NTFPs. This project was carried out in cooperation with Katja Heubach (BiK-F), who provided data on household economics.
These data comprise 60 interviews conducted in northern Benin on annual quantities and revenues of NTFPs collected from the three most important savanna tree species: Adansonia digitata, Parkia biglobosa and Vitellaria paradoxa. The current market prices of the NTFPs were derived from the respective local markets. To assess current and future (2050) occurrence probabilities of the three species, I calibrated niche-based models with climate data (from Miroc3.2medres) and land use data (LandSHIFT) at a 0.1° resolution (cell: ~10 × 10 km). The land use simulations were taken from the previous study on plant diversity. Three different niche-based models were used: 1) generalized additive models (a regression method), 2) generalized boosting models (a machine learning method), and 3) flexible discriminant analysis (a classification method). The three model simulations were averaged (ensemble forecasting) to increase the robustness of the predictions. To assess future economic gains and losses, the modelled occurrence probabilities of the species were linked with the spatially assigned monetary values. The highest current annual benefits are obtained from V. paradoxa (54,111 ± 28,126 US$/cell), followed by P. biglobosa (32,246 ± 16,526 US$/cell) and A. digitata (9,514 ± 6,243 US$/cell). In the projections, however, large areas will lose up to 50% of their current economic value by 2050. Vitellaria paradoxa and Parkia biglobosa, which currently yield the highest economic benefits, are heavily affected. Adansonia digitata is less strongly affected by environmental change and might regionally even supply increasing economic benefits, in particular in the west and east of the study area. We conclude that adaptive strategies are needed to create alternative income opportunities, in particular for the women who are responsible for collecting the NTFPs.
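The ensemble-forecasting step and the economic linkage amount to averaging the three models' occurrence probabilities per grid cell and weighting the assigned monetary value by that average. A minimal sketch with hypothetical probabilities and cell values (the real study used fitted GAM, GBM and FDA outputs):

```python
import numpy as np

# Hypothetical occurrence probabilities for five grid cells from the three
# model families named above (GAM, GBM, FDA); values are illustrative only.
gam = np.array([0.82, 0.40, 0.15, 0.66, 0.90])
gbm = np.array([0.78, 0.35, 0.22, 0.70, 0.88])
fda = np.array([0.80, 0.45, 0.18, 0.60, 0.93])

# Unweighted ensemble mean per cell; averaging reduces the influence of any
# single model family's idiosyncrasies.
ensemble = np.mean([gam, gbm, fda], axis=0)

# Linking occurrence probability to a spatially assigned monetary value
# (hypothetical US$/cell) yields an expected economic benefit per cell.
value_per_cell = np.array([54000, 32000, 9500, 41000, 60000])
expected_benefit = ensemble * value_per_cell
print(ensemble.round(3))  # per-cell ensemble probabilities
```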
The findings provide a benchmark for local policy-makers to compare different land use options economically and to adjust existing management strategies for the near future. Overall, this thesis improves our understanding of the impacts of climate and land use changes on West African vegetation patterns, plant diversity and provisioning ecosystem services. Climate change had spatially varying impacts (positive and negative effects) on vegetation cover and plant diversity, while human pressure had predominantly negative effects. Regionally contrasting impacts of environmental change were also found for the provisioning ecosystem services.
This dissertation is concerned with the role of prosody and, specifically, linguistic rhythm for the syntactic processing of written text. My aim is to put forward, provide evidence for, and defend the following claims:
1. While processing written sentences, readers make use of their phonological knowledge and generate a mental prosodic-phonological representation of the printed text.
2. The mental prosodic representation is constructed in accordance with a syntactic description of the written string. Constraints at the interface of syntax and phonology provide for the compatibility of the syntactic analysis and the (mental) prosodic rendition of the sentence.
3. The implicit prosodic structure readers impose on the written string entails phonological phrasing and accentuation, but also lower level prosodic features such as linguistic rhythm which emerges from the pattern of stressed and unstressed syllables.
4. Phonological well-formedness conditions accompany and influence the process of syntactic parsing in reading from the very beginning, i.e. already at the level of recognizing lexical categories. At points of underspecified syntactic structure, syntactic parsing decisions may be made on the basis of phonological constraints alone.
5. In reading, the implicit local lexical-prosodic information may be more readily available to the processing mechanism than higher-level discourse structural representations and consequently may have more immediate influence on sentence processing.
6. The process of sentence comprehension in reading is conditioned by factors that are geared towards sentence production.
7. The interplay of syntactic and phonological processes in reading can be explained with recourse to a performance-compatible competence grammar.
The evidence from three reading experiments supports these points and suggests a model of grammatical competence in which constraints from various domains (syntax, semantics, pragmatics, discourse structure, and phonology) interact in providing the possible structural, i.e. grammatical, descriptions.
The importance of RNA in molecular and cell biology has long been underestimated. Besides transmitting genetic information, RNA has in recent years been shown to perform crucial tasks, especially in gene regulation. Riboswitches, natural RNA-based genetic switches, have been known for only ten years. They directly sense small-molecule metabolites and in response regulate the expression of the corresponding metabolic genes. In recent years, artificial riboswitches have been developed that operate according to user-defined demands. Hence, they represent powerful tools for synthetic biology.
This study focused on the development of engineered catalytic riboswitches for conditional gene expression in eukaryotes. A self-cleaving hammerhead ribozyme was linked to a tetracycline-binding aptamer in order to regulate ribozyme cleavage allosterically with tetracycline. By integrating such a hybrid molecule into a gene of interest, mRNA cleavage, and thereby gene expression, becomes controllable in a ligand-dependent manner. The linking domain between ribozyme and aptamer was randomised. Tetracycline-inducible ribozymes were isolated after eleven cycles of in vitro selection (SELEX). 80% of the analysed ribozymes show cleavage that strongly depends on tetracycline. In the presence of 1 μM tetracycline, their cleavage rates are comparable to that of the parental hammerhead ribozyme; in the absence of tetracycline, cleavage is inhibited up to 333-fold. The allosteric ribozymes bind tetracycline with similar affinity and specificity as the parental aptamer. Ribozyme cleavage is fully induced within minutes after the addition of tetracycline. Interestingly, the isolated linker domains exhibit structural consensus motifs rather than consensus sequences.
When transferred to yeast, three switches reduced reporter gene expression by 30–60% in the presence of tetracycline; none of them controlled gene expression in mammalian cells. In vitro selected molecules do not necessarily retain their characteristics when applied in a cellular context. Therefore, high-throughput screening and selection systems were developed in mammalian cells. The screening system is based on two fluorescent reporter proteins (GFP and mCherry). 1152 individual constructs of the selected ribozyme pool were tested, but none of them reduced reporter gene expression significantly in the presence of tetracycline. The selection system employs a fusion peptide encoding two selection markers (hygromycin B phosphotransferase and HSV thymidine kinase), facilitating both negative and positive selection. 6.5 × 10⁴ individual constructs of the selected ribozyme pool are currently under investigation.
Nuclear Magnetic Resonance ("NMR") is a powerful and versatile technique relying on nuclei that possess a spin. Since its discovery more than six decades ago, NMR and related techniques have become tools with innumerable applications throughout the fields of physics, chemistry, biology and medicine. Numerous Nobel Prizes have been awarded for work in the field, and a multi-billion-dollar industry has developed on its basis.
One of NMR's major shortcomings is its inherent lack of sensitivity. Because it relies on the Boltzmann population difference of spin states separated by a minuscule Zeeman splitting, this is particularly true for room-temperature experiments.
As a result, in an enormous technological effort to enlarge the Zeeman splitting, NMR magnets have been moving to higher and higher magnetic fields. However, even for proton spins, which possess the largest magnetic moment of all nuclei, the degree of polarization that can be achieved in the strongest spectroscopic magnets available today (~24 T) at room temperature is merely ~8 × 10⁻⁵. In other words, this low polarization theoretically allows a sensitivity enhancement of 10⁴ towards full polarization.
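The quoted figure follows directly from the thermal-equilibrium polarization of a spin-1/2 ensemble, P = tanh(γħB / 2k_BT); a quick numerical check using CODATA constants:

```python
import math

# Physical constants (SI, CODATA values)
hbar = 1.054571817e-34       # reduced Planck constant, J s
k_B = 1.380649e-23           # Boltzmann constant, J/K
gamma_1H = 2.6752218744e8    # proton gyromagnetic ratio, rad s^-1 T^-1

def spin_half_polarization(gamma, B, T):
    """Thermal-equilibrium polarization of spin-1/2 nuclei:
    P = tanh(gamma * hbar * B / (2 * k_B * T))."""
    return math.tanh(gamma * hbar * B / (2 * k_B * T))

P = spin_half_polarization(gamma_1H, B=24.0, T=298.0)
print(f"1H polarization at 24 T, 298 K: {P:.2e}")  # ~8e-5, as quoted above
```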
Since Magnetic Resonance Imaging ("MRI") is based on the same principle, it shares this problem with NMR. Furthermore, for technical and physiological reasons, full-body MRI tomographs do not reach the magnetic field strengths of spectroscopic NMR magnets, making this even more of an issue for MRI.
In consequence, MRI is chiefly restricted to detecting protons, while both MRI and NMR detection of 13C (or other low-γ nuclei) under physiological conditions, i.e. low natural abundance of 13C and a low concentration of the respective substance, suffer from the long acquisition times that are necessary to obtain adequate signal-to-noise ratios ("SNR").
However, this drawback of NMR can be overcome. The enormous potential sensitivity increase of four orders of magnitude can, at least partially, be exploited by several hyperpolarization techniques, creating entirely new applications and fields of research.
These hyperpolarization techniques comprise chemical approaches like Parahydrogen Induced Polarization ("PHIP") or Photochemically Induced Dynamic Nuclear Polarization ("photo-CIDNP"), as well as physical techniques like optical pumping of (noble) gases [13, 14] or Dynamic Nuclear Polarization ("DNP"), which will be the focus of this work. A hyperpolarized substance renders a larger signal without being physically or chemically altered in any other way. It is therefore "marked" without any marker, making it an agent-free contrast agent for MRI.
DNP is a technique in which hyperpolarization of nuclear spins is achieved by microwave ("MW") irradiation of unpaired electron spins in radicals that are coupled to these nuclei, e.g. 1H, 13C or 15N. The electron spin population is perturbed if the microwave irradiation is resonant with the electron spin transition, which affects the polarization of nearby hyperfine-coupled nuclei. For large microwave power (i.e. saturating the electron spin transition), the orders-of-magnitude larger thermal electron spin polarization is effectively transferred to these nuclear spins in the sample. For proton spins the maximum polarization gain amounts to 660, whereas for 13C the sensitivity gain can be as large as 2600. In contrast to, e.g., PHIP, which is restricted to specific reaction precursors, DNP is not limited to specific nuclei or hyperpolarization target molecules, making it a very versatile technique. DNP was first proposed by Overhauser in 1953 [15] and experimentally observed shortly thereafter in metals [16] and liquids [17], both being systems with mobile electrons. In the 1960s and 70s, DNP was used as a spectroscopic tool in liquids, thoroughly mapping the effect in the low-field regime. In addition, several other transfer mechanisms were discovered that are active in the solid state with localized electrons, namely the solid effect, the cross effect and thermal mixing. The theory for all three of these mechanisms predicts reduced transfer efficiencies at higher magnetic fields. This fact, together with the lack of high-frequency microwave sources to excite electron spins at magnetic field strengths above 1 T, effectively relegated DNP to the position of an interesting scientific curiosity.
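The maximum gains quoted above correspond to the ratio of the electron to the nuclear gyromagnetic ratio, |γe/γn|, which sets the ceiling for transferring electron polarization to nuclei; a quick check with CODATA constants:

```python
# Maximum DNP polarization gain is bounded by |gamma_e / gamma_n|;
# verify the figures of ~660 (1H) and ~2600 (13C) quoted in the text.
gamma_e = 1.76085963e11    # free-electron gyromagnetic ratio, rad s^-1 T^-1
gamma_1H = 2.67522187e8    # proton
gamma_13C = 6.728284e7     # carbon-13

print(round(gamma_e / gamma_1H))   # ~658, quoted as 660 for 1H
print(round(gamma_e / gamma_13C))  # ~2617, quoted as 2600 for 13C
```

The smaller γ of 13C is exactly why its attainable gain is roughly four times that of protons.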
In the early 1990s, DNP experienced a renaissance, when it was performed at high field in solid-state magic angle spinning ("MAS") experiments using high-power gyrotron microwave sources. This pioneering work sparked a surge of new developments and applications.
This success also triggered attempts to investigate the potential of DNP in the liquid state at high magnetic fields, e.g. at 3.4 T [35–38] and 9.2 T. To date, DNP can be considered one of the "hot topics" in the field of magnetic resonance, bringing about special issues of magnetic resonance journals and DNP sessions at magnetic resonance conferences.
This thesis deals with the development of an in-bore liquid-state DNP polarizer for MRI applications operating in flow-through mode at a magnetic field strength of 1.5 T. Following this introductory chapter, the theoretical background necessary to understand and interpret the experimental results is explained in chapter 2. Subsequently, chapter 3 deals with the issue of performing liquid-state DNP at high magnetic fields and its challenges. The chapter comprises a quick overview of the necessary hardware, the experimental findings for various samples and the interpretation of these findings, along with the ramifications for the aim of this work. Chapter 4 deals with the issue of increasing sensitivity and contrast in MRI, in particular by means of DNP. The chapter illustrates the development of our polarizer by presenting the hardware that was developed and demonstrating its performance under various conditions. In addition, several alternative approaches are introduced and compared to our approach. Finally, chapter 5 summarizes the findings and gives an outlook on further developments.
The main purpose of the Transition Radiation Detector (TRD) located in the central barrel of ALICE (A Large Ion Collider Experiment) is electron identification for separation from pions at momenta pt > 1 GeV/c, since in this momentum range the measurement of the specific energy loss (dE/dx) in the Time Projection Chamber (TPC) is no longer sufficient. Furthermore, it provides a fast trigger for charged particles with high transverse momentum (pt > 3 GeV/c) and makes a significant contribution to the optimization of the tracking of reaction products in heavy-ion collisions. The full setup comprises 18 supermodules, of which 13 are presently operational and mounted cylindrically around the beam axis of the Large Hadron Collider (LHC). A supermodule contains either 30 or 24 chambers, each consisting of a radiator for the creation of transition radiation, a drift and an amplification region, followed by the read-out electronics. In total, the TRD is an array of 522 chambers operated with about 28 m³ of a Xe-CO2 [85-15%] gas mixture. During the work for this thesis, the testing, commissioning, operation and maintenance of detector parts, the gas system and its online quality monitor, improvements to the detector control user interface and studies of a new pre-trigger module for data read-out were accomplished. The TRD gas system mixes, distributes and circulates the operational gas mixture through the detector. Its overall optimization has been achieved by minimizing gas leakage; by surveying, controlling, maintaining and continuously improving the system; and by designing and carrying out upgrades. Gas quality monitors of the type "GOOFIE" (Gas prOportional cOunter For drIfting Electrons) can be used in gaseous detectors as online monitors of the electron drift velocity, gain and gas properties. One of these devices has been implemented within the TRD gas system, while another one surveys the gas of the TPC.
Both devices had to be adapted to the specific needs of the detectors, were under constant surveillance and control, and needed to be further developed on both the hardware and software side. To improve the operation of the TRD, modifications to its DCS (Detector Control System) software, used for monitoring, controlling, operating, regulating and configuring hardware and computing devices, have been carried out. The DCS is designed to enable an operator to interact with equipment through user interfaces that display the information from the system. The main focus of this work was the optimization of the usability and design of the user interface. The front-end electronics of the TRD require an early start signal ("pre-trigger") from the fast forward detectors or the Time-Of-Flight detector during the running periods. The realization of a new hardware concept for the read-out of the TRD pre-trigger system has been studied and first tests were performed. This new module, called PIMDDL (Pre-trigger Interface Module Detector Data Link), is meant to acquire all data necessary to simulate and predict the full pre-trigger functionality and to verify its proper operation. Furthermore, it shall provide all functionalities of the so-called Control Box Bottom as well as keep the functionalities of the already existing PIM (Pre-trigger Interface Module), in order to combine and replace these two modules in the future.
Grave visitation and concepts of life after death : a comparative study in Frankfurt and Hong Kong
(2012)
Grave visitation is a tradition common to many cultures. Yet this sensitive topic is rarely addressed in cross-cultural comparisons. Why do people visit the graves of their parents? What do they do in the cemetery? Could there be a similar set of intentions behind the diverse customs? By examining the visiting patterns in Frankfurt and Hong Kong, this research aims to compare the concepts of life after death that underlie the practice. Phenomenologically oriented, this is an exploratory study based on qualitative interviews. Using in-depth semi-structured interviewing and thematic analysis, the project covered twelve cases in each city. Research participants were purposefully selected. Data analysis was conducted according to the analytical framework approach. After identifying and clustering the themes, three central and interlocking issues were found: 1. the grave as a new home that connects the living and the dead; 2. death and the interpretation of hope; and 3. intergenerational reciprocity and continuing bonds. Though the images of life after death were ambiguously depicted, grave tending reflected shared expectations of the world beyond. Most significantly, visits to the graves strengthened the ties between the living and the dead, revealing a longing for a continued bond regardless of the form of burial. Finally, this research illustrated not only the meanings of death but also the notion of religiosity by evaluating the secularisation thesis. Emphasising the dynamics of tradition and personal experience, this contextual reading of current death rituals serves as an original source for religious dialogue and education.
The Bromeliaceae comprise more than 3,100 almost exclusively Neotropical species. Known for their exceptional ecological versatility, bromeliads have successfully spread into terrestrial and epiphytic habitats.
A comprehensive assessment of the threat status of all bromeliad species of Panama and Costa Rica has been lacking to date and is particularly warranted in view of the great wealth of habitats that distinguishes both countries and the manifold changes they are undergoing.
As part of the present work, 54 excursions in western Panama were carried out during a total of about eight months of fieldwork, and specimens of 61% (126 species) of the species known from Panama were collected.
On the basis of the fieldwork and of studies conducted in various herbaria (verification and digitization of > 8,000 collections), the diversity, endemism, ranges and spatial patterns of species richness of the Bromeliaceae in Panama and Costa Rica were recorded, documented and analyzed.
Only three of the currently known eight subfamilies of the Bromeliaceae occur in Panama, and four in Costa Rica. Twenty species are reported here for Panama for the first time. Six bromeliad species previously reported for Panama were identified as erroneous records. The bromeliad flora now comprises 16 genera and 206 species in Panama and 18 genera and 199 species in Costa Rica.
33 species are endemic to Panama, 32 to Costa Rica, and 36 species are restricted to the combined territory of both countries. The genus Werauhia has its center of diversity in Panama (47 of a total of 87 species) and Costa Rica (59/87 species) and is at the same time the most species-rich genus in both countries.
In Panama, 113 species (54.9%) occur between 1,000 and 2,000 m elevation. The species with the lowest elevational limit is Pitcairnia halophila; the species encountered at the highest elevation is Werauhia ororiensis.
A distribution map was produced for each of the 259 bromeliad species reported for Panama and Costa Rica; in addition, the potential distribution was modeled for the 191 species occurring in both countries.
In Panama, premontane rainforest, with 138 species (including 25 of the 33 endemic species), is the Holdridge life zone with the highest number of bromeliads. In Costa Rica, lower montane rainforest has a particularly high proportion of endemic bromeliads (13 of a total of 32 species).
In both Panama and Costa Rica, mid-elevations harbor the greatest species richness of the Bromeliaceae, with maximum values of about 125 species in eastern Costa Rica and in western Panama. Some regions of Panama have no designated protected areas yet show high bromeliad species richness (e.g. parts of western Panama, El Valle de Antón and adjacent areas, and the Serranía de Cañazas).
In the threat classification presented here according to the IUCN guidelines, 32 species in Panama are classified as Critically Endangered (CR), 36 as Endangered (EN) and 36 as Vulnerable (VU). In Costa Rica, Aechmea aquilega is assessed as Extinct (EX); four species are classified as Critically Endangered (CR), 30 as Endangered (EN) and 39 as Vulnerable (VU).
In Panama, 184 species (89% of the total of 206 species) were recorded in protected areas; 122 species (59%) were recorded both inside and outside, and 19 species (9%) only outside, of protected areas. In Costa Rica, 182 bromeliad species (91% of the total of 199 species) occur in protected areas; 168 species (84%) were recorded both inside and outside, and 14 species (7%) only outside, of protected areas.
The estimates indicate that the expected total number of bromeliad species lies between 224 and 250 for Panama and between 207 and 221 for Costa Rica. According to the modeling results, occurrence in Panama is predicted with considerable probability for a number of species so far reported only from Costa Rica (e.g. Guzmania blassi, Werauhia ampla), and, conversely, occurrence in Costa Rica is predicted for species so far known only from Panama (e.g. Aechmea strobilina, Pitcairnia kressii).
Maintaining the existing protected areas should be a priority. In addition, it is desirable to expand some of these areas and to designate new protected areas in order to safeguard highly biodiverse regions with a high proportion of endemic species.
1 Purpose of the Study:
The purpose of this retrospective study was to assess the volumetric changes of pediatric neuroblastomas treated at our institution in response to various therapeutic protocols.
2 Materials and Methods:
A retrospective study was conducted on children with neuroblastoma arising from different anatomical locations, including suprarenal, paraspinal, pelvic, mediastinal and cervical primaries. These children underwent tumor-stage-based therapeutic protocols at Johann Wolfgang Goethe University Hospital, Frankfurt am Main, Germany, between January 1996 and July 2008. The study included 72 patients (44 males and 28 females). Patient demographics (age and gender), disease-related symptoms, laboratory results (tumor biomarkers including ferritin, neuron-specific enolase, and urine catecholamines) and histopathological reports were collected from the electronic medical archiving system and subsequently analyzed.
Patients were classified according to the anatomical origin of the primary neuroblastoma into the following groups:
1) Suprarenal neuroblastoma group: patients with neuroblastoma arising from the suprarenal gland; 54 patients with a male-to-female ratio of 32:22.
2) Paravertebral neuroblastoma group: 6 male patients.
3) Mediastinal neuroblastoma group: patients with mediastinal neuroblastoma; 3 patients (1 male and 2 females).
4) Pelvic neuroblastoma group: patients with pelvic neuroblastoma; 6 patients (3 males and 3 females).
5) Cervical neuroblastoma group: patients with cervical neuroblastoma; 2 male patients.
3 Results:
The mean pre-therapy volume of the entire suprarenal neuroblastoma group was 176.62 cm³ (SD: 234.15; range: 239.4–968.9 cm³). The mean initial volume of the suprarenal neuroblastoma patients who underwent the observation protocol was 86.04 cm³ (SD: 114.44; range: 5.2–347.94 cm³). Volumetric evaluation of suprarenal neuroblastoma following the observation ("wait and see") protocol revealed a continuous, statistically significant reduction of tumor volumes during the follow-up periods up to 12 months (p < 0.05). The volumetric changes thereafter were statistically insignificant.
The mean initial volume of the suprarenal neuroblastoma patients who underwent the primary surgery protocol was 42.4 cm³ (SD: 28.5; range: 7.5–90 cm³). Complete surgical resection of the tumor was not feasible in all lesions due to local tumor extension and/or infiltration, with the associated risk of injury to nearby organs or structures. However, statistical analysis of the volumetric changes in the successive follow-up periods did not reveal statistical significance.
Volumetric estimation of the tumor in the subsequent follow-up periods revealed significant changes within the first period (3–9 months); the changes thereafter were statistically non-significant. On the other hand, the mean initial volume of the suprarenal neuroblastoma patients who underwent the combined chemotherapy and stem cell transplantation protocol without surgical intervention was 99.98 cm³ (SD: 46.2; range: 48.48–160.48 cm³). In this group the volumetric changes were variable, and the differences in volume during follow-up were statistically non-significant.
The mean initial volume of the abdominal paravertebral neuroblastoma group was 249.20 cm³ (SD: 249.63; range: 9.6–934 cm³). The mean initial volume of the pelvic neuroblastoma group was 118.88 cm³ (SD: 50.61; range: 73.4–173.4 cm³). The mean initial volume of the mediastinal neuroblastoma group was 189.7 cm³ (SD: 139.057; range: 10.7–415 cm³). The mean initial volume of the cervical neuroblastoma group was 189.7 cm³ (SD: 139.057; range: 10.7–415 cm³). The volumetric measurements in the corresponding follow-up periods, according to the respective therapeutic protocol, for the abdominal paravertebral, pelvic, mediastinal and cervical neuroblastoma groups revealed a significant change in tumor volume within the first 3–6 months from the initial therapy, while subsequent volumetric changes were statistically non-significant.
4 Conclusion:
In conclusion, the role of MRI volumetry in the evaluation of tumor response depends on the risk-adapted treatment concept of neuroblastoma, the combination of different imaging modalities, and the therapeutic protocol. MRI volumetry, together with newer approaches such as whole-body imaging and 3D visualization techniques, is gaining importance and acceptance.
Quarkonia are very promising probes for studying the quark-gluon plasma. The essential baseline for measurements in heavy-ion collisions is high-precision data from proton-proton interactions. However, the basic mechanisms of quarkonium hadroproduction are still being debated. The most common models, the Color-Singlet Model, the non-relativistic QCD approach and the Color-Evaporation Model, are able to describe most of the available cross-section data despite their conceptual differences. New observables, such as the polarization, and data in a new energy regime are crucial to test the competing models. Another issue is a possible interplay between the production process of a quarkonium state and the surrounding pp event. Current Monte Carlo event generators treat the hard scattering independently from the rest of the so-called underlying event. The investigation of possible correlations with the pp event might be very valuable for a detailed understanding of the production processes. ALICE is the dedicated heavy-ion experiment at the LHC. Its design has been optimized for high-precision measurements in very high track densities and down to low transverse momenta. ALICE is composed of various detectors at forward and central rapidities. The most important detectors for this study are the Inner Tracking System and the Time Projection Chamber, which allow the reconstruction and identification of electron candidate tracks within |eta| < 0.9. The Transition Radiation Detector has not been utilized at this stage of the analysis; however, it will strongly improve the particle identification and provide a dedicated trigger in the upcoming beam periods. ...
The present study focuses on specific aspects of the organization of teaching religion in Indonesia. It analyses the position of religion within the Indonesian Basic Law, consequential legislation, and educational policies. How does this framework translate into national and regional policies pertaining to the emergence, institutionalization, and organization of the Hindu religion class and the Hindu education system in Bali from 1945 to 2008?
Muslim-majority Indonesia constitutes an interesting laboratory for fundamental research on religious plurality and transformations of religion. The model of organizing the religion class in Indonesia is rooted in a specific historical, socio-cultural, political, and legal context, which is fundamentally different from European models of religious education. In addition, in contrast to classical Islam and modern Islamic states, Indonesia recognizes Asian religions as equal in status to the religions of the book. Besides Islam and Christianity, Hindu Dharma and Buddhism were recognized as state-funded religions in 1965. This recognition had important consequences for the Indonesian model of organizing five confessional religion classes and faith-based education systems.
The Balinese are a rare case of a group that is a religious and ethnic minority nationally while forming an ethnic and religious majority in its own province. Therefore, the Balinese provide an outstanding case for analyzing how Indonesia's religious and educational policies deal with this particular ethnic and religious minority. In addition, how do the Balinese themselves use the constitutional and legal framework to establish the Hindu religion class in public schools and a private Hindu education system from the level of pre-school to higher education?
A qualitative examination was conducted based on a combination of theoretical and empirical investigations. The province of Bali and three educational institutions were chosen because the Balinese were the reformers of Indonesian Hindu Dharma and the inventors of the Hindu education system. As the study focuses on the constitutional and legal contexts of the Hindu religion class and the Hindu education system, teachers' professional education, and the composition of curricula and textbooks, a qualitative approach was applied, combining ethnographic fieldwork and case study research. In consequence, the subject positions the study within the academic disciplines of Religious Studies and Area Studies. Data were collected through bibliographical surveys and fieldwork.
The amended 1945 Basic Law and consequential legislation grant the same rights to state-sanctioned religions. The state is based on "One Supreme Lordship", prescribing national monotheism or monism. Indonesia's spirited statehood is based on a religious, but not confessional, interpretation. In addition, the strategy for managing religious plurality is authoritarian, as positive freedom of religion is limited to six state-funded religions, whereas negative religious freedom is not provided for. Despite the equal status of the six state-funded religions, discriminatory practices prevail with regard to the funding of the Asian religions. Notwithstanding, the Muslim-majority Pancasila state can serve as a model for countries with illiberal politics in the Muslim world.
The first objective of strategic and educational policies is to mould a citizen who has faith in God, follows the commands of God, and has morals. The dimension of spiritual intelligence in education is a particularly Indonesian dimension, which Indonesian educational planners added to the UNESCO standards of student-centered lifelong learning. Indonesia organizes the religion class and faith-based education systems in a confessional but pluralistic style. Citizens are required to attend the religion class in the religion they adhere to, instructed by a teacher of the same belief, from elementary to higher education. In addition, the religion mark is a compulsory item in the school report, and whether a pupil or student is held back or promoted to the next level depends, amongst other factors, on how the religion teacher grades the student.
Unlike the Muslim- or Christian-based education systems, the Hindu education system is still marginal and minuscule, and its funding is discriminatory. Funding and expansion are linked to national policies, and the personal networks of Hindu agents are given the mandate to organize the Hindu administration and education system.
The intriguing effects of electroweak induced parity violation (PV) in molecules have yet to be observed, but experiments on molecular PV promise to provide fascinating insights. They potentially offer a novel testing ground for the low energy sector of the standard model and, in addition, a successful measurement of PV differences between the two enantiomers of a chiral molecule could promote a deeper understanding of molecular chirality, by essentially establishing a new link between particle physics and biochemistry. A key challenge in the design of such experiments is the identification of suitable molecules, which in turn requires widely applicable computational schemes for the prediction of PV experimental signals. To this end, a quasirelativistic density functional theory approach to the calculation of PV effects in nuclear magnetic resonance (NMR) spectra of chiral molecules has been developed and implemented during the course of this thesis. It includes relativistic as well as electron-correlation effects and has been used extensively in the screening of molecules possibly suited for a first observation of molecular PV. Some relevant compound classes have been identified, but none of their selected representatives are predicted to exhibit PV NMR frequency shifts that can be detected under current experimental restrictions. In order to advance the design of molecules which exhibit particularly large PV signals in experiments, systematic effects on PV NMR frequency splittings, such as scaling with nuclear charge, conformational dependence, and the impact of atomic substitution around the NMR active nucleus, have been studied. Previously predicted scaling laws were confirmed, and it was determined that the environment of the NMR active nucleus, both in terms of conformation and atomic composition, can be tuned to increase PV frequency shifts by several orders of magnitude.
In addition to molecules suited for NMR experiments, a fascinating chiral actinide compound was studied with regard to PV frequency shifts in vibrational spectra. This compound displays the largest such shift ever predicted for an existing molecule, which lies well within the attainable experimental resolution. The challenge now lies in making it compatible with current experimental setups.
The long-sought molecular function of membrane raft-associated flotillin proteins is slowly being resolved, partially owing to the increasing knowledge about their interaction partners. Being ubiquitously expressed and evolutionarily highly conserved, flotillins carry out important cellular functions, one of which is the regulation of signal transduction pathways. This study shows that the signaling adaptor protein fibroblast growth factor receptor substrate 2 (FRS2) directly interacts both in vivo and in vitro with flotillin-1 (flot-1). FRS2 is an important docking protein of many receptor tyrosine kinases. It regulates downstream signaling by forming molecular complexes with other adaptor proteins and tyrosine phosphatases, and appears to be a critical mediator of sustained extracellular signal regulated kinase (ERK) activity. Flot-1 has also been implicated in the regulation of ERK activity upon EGF and FGF stimuli. Furthermore, flot-1 forms signalosomes with EGFR and the downstream components of the MAP kinase pathway. The newly discovered interaction between FRS2 and flot-1 was shown to be mediated by the phosphotyrosine binding (PTB) domain and, to a lesser extent, the C-terminus (CT) of FRS2 and by the C-terminus of flot-1. Flot-1 coprecipitated together with FRS2 from murine tissues and cell lysates, demonstrating that this interaction also takes place in vivo. Interestingly, flot-2, which shows a high homology to flot-1 and forms stable oligomeric complexes with it, does not appear to directly interact with FRS2. Novel insights into the functional role of the interaction between flot-1 and FRS2 were provided by the results showing that depletion of flot-1 affects the cellular localization of FRS2. In hepatocytes stably depleted of flot-1, FRS2 appeared to be more soluble.
Furthermore, upon pervanadate stimulation of the cells, a small fraction of FRS2 was recruited into detergent resistant membranes, but the recruitment did not take place in the absence of flot-1. Triggered by the same stimulus, a fraction of FRS2 was translocated to the nucleus independently of flot-1. Overexpression of FRS2 has previously been shown to result in increased ERK activation. However, in cells depleted of flot-1, FRS2 was not able to compensate for the compromised ERK activation after EGF or FGF stimulation. This might imply that FRS2 and flot-1 are functionally interconnected and that FRS2 resides upstream of flot-1. Taken together, the results presented here indicate that this complex may be involved in the control of signaling downstream of receptor tyrosine kinases and is important for ensuring a proper signaling response. In the absence of flot-1, increased Tyr phosphorylation of FRS2 was observed. It is known that Tyr and Thr phosphorylation of FRS2 are reciprocally regulated. Since ERK is a known executor of the FRS2 Thr phosphorylation, and ERK activity was shown to be severely diminished upon flot-1 depletion, the increased Tyr phosphorylation of FRS2 was in agreement with this and might be a direct consequence of decreased ERK activity upon flot-1 depletion. FRS2 owes its name to the major and first described function of this protein as a substrate for FGFR. The PTB domain of FRS2 has been reported to bind constitutively to the juxtamembrane domain of FGFR. In this study, the PTB domain was mapped as being involved in the constitutive interaction with flot-1, and flot-1 and FGFR1 were shown to compete for binding to FRS2. Another novel interaction partner of FRS2 was discovered in the present study. Cbl-associated protein (CAP) is an adaptor protein with three SH3 domains that plays a role in insulin signaling by recruiting the signaling complex to lipid rafts.
CAP was previously shown to interact with flot-1 via the SoHo domain, and this interaction was found to be crucial for the lipid raft recruitment of other signaling components. Both the PTB domain and CT of FRS2 were found to mediate the interaction with CAP, whereas in CAP, the SoHo domain, together with the third SH3 domain, seems to bind to FRS2. SH3 domains mediate the assembly of specific protein complexes by binding to proline-rich sequences, several of which are present in FRS2. Due to overlapping interaction domains, FRS2 and flot-1 competed for binding to CAP. However, the interaction with neither CAP nor flot-1 was necessary for the observed nuclear translocation of FRS2. Since CAP is expressed as several tissue- and developmental stage-specific isoforms, a further aim of this study was to analyze the expression of its isoforms in mouse embryonic fibroblasts (MEFs). Many new isoforms were discovered here which have not been described in the literature so far. They all contain the SoHo domain and three SH3 domains, but differ among themselves by the presence and length of a proline-rich region that precedes the SoHo domain and by a novel 20-amino acid (AA) stretch between the second and the third SH3 domain. The length of the proline-rich region turned out to be an important factor determining the strength of the interaction with FRS2: the interaction was found to weaken with increasing length of this region. The new isoforms possessing the 20-AA stretch are specifically expressed in murine muscular tissues, with the highest level in the heart. During adipogenesis, we observed a shift in the abundance of the isoforms, in that only the isoforms without the insertion were shown to be upregulated at the mRNA level. During myogenesis, however, the preferentially expressed isoforms were those with the insertion.
The collected data suggest that isoforms with the 20-AA insertion might be more ubiquitous in non-differentiated/embryonic cells and that the observed "isoform switch" might depend on cell fate and differentiation state.
In modern solid-state physics, strongly electronically correlated systems, with their complex many-body behaviour, play a central role. In particular, the interplay between thermal and quantum fluctuations in the charge and spin degrees of freedom gives rise to a wide variety of novel ground states.
This dissertation, "Ultrasonic and Magnetic Investigations in Frustrated Low-Dimensional Spin Systems", deals with the particular physical properties of low-dimensional spin systems. This class of materials, which also belongs to the strongly correlated systems, has been studied intensively for many years, both experimentally and theoretically. From the theoretical side, low-dimensional spin systems are particularly interesting because, as model systems, they permit an exact description of the ground state and the excitation spectrum. From the experimental side, it has become possible over recent decades to synthesise a wide variety of material classes of low-dimensional spin systems.
This thesis discusses the fundamental theories and physical concepts of low-dimensional spin systems, in particular the spin-phonon interaction of these materials, which is responsible for the elastic anomalies observed here. Furthermore, the elastic behaviour at magnetic phase transitions is described.
Since the ultrasonic experiments form a focus of this work, the experimental setup for the phase-sensitive detection of sound velocity and ultrasonic attenuation is described in detail. This measurement method is ideally suited to investigating the spin-phonon interaction.
Conceptual design of an ALICE Tier-2 centre integrated into a multi-purpose computing facility
(2012)
This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. The focus is a Tier-2 centre in the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on the performance of the user workload. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.
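The sequential-versus-random distinction at the heart of such I/O access-pattern analysis can be illustrated with a minimal benchmark sketch. This is not the benchmarking setup used in the thesis; the file size, block size, and helper names are arbitrary illustrative choices.

```python
import os
import random
import tempfile
import time

BLOCK = 64 * 1024   # 64 KiB per read (illustrative)
NBLOCKS = 256       # 16 MiB test file (illustrative)

def make_test_file():
    """Create a temporary file filled with random bytes."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(BLOCK * NBLOCKS))
    return path

def read_throughput(path, offsets):
    """Read one block at each offset and return throughput in MiB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(BLOCK))
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)

path = make_test_file()
seq = [i * BLOCK for i in range(NBLOCKS)]      # sequential access pattern
rnd = seq[:]
random.shuffle(rnd)                            # random access pattern
print("sequential MiB/s:", round(read_throughput(path, seq), 1))
print("random     MiB/s:", round(read_throughput(path, rnd), 1))
os.remove(path)
```

On a small, cached file the two numbers converge; on spinning disks with working sets larger than the page cache, random reads fall far behind sequential ones, which is exactly the kind of effect the thesis's benchmarking quantifies.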
In this thesis, we have investigated strongly correlated bosonic gases in an optical lattice, mostly based on a bosonic version of dynamical mean field theory and its real-space extension. Emphasis is put on possible novel quantum phenomena of these many-body systems and their underlying physics, including quantum magnetism, pair superfluidity, thermodynamics, many-body cooling, new quantum phases in the presence of long-range interactions, and excitational properties. Our motivation is to simulate many-body phenomena relevant to strongly correlated materials with ultracold lattice gases, which provide an excellent playground for investigating quantum systems with an unprecedented level of precision and controllability. Due to this high controllability, ultracold gases can be regarded as a quantum simulator of many-body systems in solid-state physics, high energy astrophysics, and quantum optics. In this thesis, specifically, we have explored possible novel quantum phases, thermodynamic properties, many-body cooling schemes, and the spectroscopy of strongly correlated many-body quantum systems. The results presented in this thesis provide theoretical benchmarks for exploring quantum magnetism in upcoming experiments, and an important step towards studying quantum phenomena of ultracold gases in the presence of long-range interactions.
Many hominin species are best physically represented and understood by the sum of their dental morphologies. Generally, taxonomic affinities and evolutionary trends in development (ontogeny) and morphology (phylogeny) can be deduced from dental analyses. More specifically, the study of dental remains can yield a wealth of information on many facets of hominin evolution, life history, physiology and ecological adaptation; in short, the organism's paleobiomics. Functionally, teeth present information about dietary preferences, that is, the dietary niche in ecological context and, in turn, masticatory function. As the amount and types of information that can be gleaned from 2-dimensional tooth measurement exhaust themselves, 3-dimensional microscopic modeling and analysis presents fertile ground for reexamination and reinterpretation of dental characteristics (Bromage et al., 2005). As such, a novel, non-destructive approach has been developed which combines two established technologies (confocal microscopy and 3D modeling) adapted specifically for the purpose of mineralized tissue imaging. Through this method, 3D functional masticatory, and therefore occlusal, molar microwear can be visualized, quantified and comparatively analyzed to assess dietary preference in Javanese Homo erectus. This method differs from other microwear investigative techniques (defining 'pits' vs. 'scratches', microtexture analysis, etc.) in that it defines a molar's masticatory microwear functional interactions in 3 dimensions as its baseline dataset for further interpretations and analyses. Due to poor specimen collection techniques employed during the first half of the 20th century, the very complex geologic nature of the Sangiran Dome and disagreements over its chronostratigraphy, only very few scientific works have addressed the Sangiran 7 (S7) Homo erectus molar collection (n=25) (e.g. Grine and Franzen, 1994; Kaifu, 2006).
Grine and Franzen's (1994) work was a predominantly qualitative initial assessment of the specimens and identified five specimens that might better be ascribed to a fossil pongid rather than H. erectus. They also noted several molars for which tooth position (M1 or M2) could not be determined (Grine and Franzen, 1994). Kaifu (2006) comparatively examined crown sizes in several S7 molars.
The Sangiran 7 collection originates from two distinct geologic horizons: ten from the older Sangiran Formation (S7a, ~1.7 to 1.0 mya) and fifteen from the younger, overlying Bapang Formation (S7b, ~1.0 to 0.7 mya). During this million-year period, Java was connected to the mainland during various glacio-eustatic low-stands in sea level. These mainland connections varied in size, extent and climatic condition, and therefore in faunal and floral composition. As the S7 sample may be representative of the earliest Homo erectus migrants into Java and spans long durations of occupation, its investigation yields potential to understand the various influences climatic and ecogeographic fluctuations had on these populations. Since the sample consists only of teeth, an ecodietary approach has been deemed the most logical and appropriate investigative approach. Questions regarding the intra- and inter-S7 sample relationships will also be addressed.
By comparing various aspects of the H. erectus dentition against those of hunter-gatherers (H/G) whose diets are known, functional dietary similarity can be directly correlated. Thus a comparative molar sample consisting of the following historic hunter-gatherers (n=63) has been included in order to assess the diet of H. erectus in ecological context: Inuit (n=9), Pacific Northwest Tribes (n=11), Fuegians (n=11), Australian Aborigines (n=12) and Bushmen (n=20). Methodologically, this approach produces a 3D facet microwear vector (fmv) signature for each molar, which can then be compared for statistical similarity.
Microwear (and, as such, the fmv signatures) was defined by the regular, parallel striations found on specific cusp facets known to arise from patterned, directional masticatory movements. This differs significantly from post-mortem or taphonomic microwear, which produces striations at irregular angles on multiple, non-masticatory surfaces (Puech et al., 1985; Teaford, 1988). A 'match value' is produced to determine the similarity of two molars' fmvs. The match values are ranked (high to low) and these rankings are used to statistically analyze and infer dietary preference: between Sangiran 7 (as an entire sample) and the historic hunter-gatherer H. sapiens whose diet and ecogeography are known; within S7a and S7b and then among the S7 sample (e.g. S7a vs. S7b); whether the purported Pongo molars actually affiliate well with H. erectus or the hunter-gatherers, or whether they demonstrate distinctly different fmv signatures altogether; and whether fmv signatures are useful in distinguishing molars whose tooth position is in doubt (e.g. M1 or M2).
When compared against individual H/G molars, the results show that Sangiran 7 H. erectus most closely correlates with Bushmen across all areas of fmv signature analysis. However, within broader dietary categories (yearly reliant on proteinaceous foods; seasonally reliant on proteinaceous foods; not reliant on proteinaceous foods), it was found that H. erectus most closely allied with the two hunter-gatherer subpopulations in the 'seasonally reliant on proteinaceous foods' category (Australian Aborigines and Pacific Northwest Tribes). There was also evidence for dietary change or specialization over time. As the environment changed during occupation from the earlier Sangiran to the later Bapang individuals, the dietary preference shifted from a focus on vegetative foods to a diet much more inclusive of proteinaceous resources.
These results are considered logical within the larger ecogeographic and chronostratigraphic context of the Sangiran Dome during the Pleistocene, although a larger sample would be needed to confirm them. While general dietary preferences can be drawn from this method, it is not possible at present to define specific foods consumed on a daily basis (e.g. tubers or tortoise meat).
Of the five specimens possibly allied with Pongo, S7-14 matched at the 'high' designation with a hunter-gatherer, S7-62 matched 'moderately', and S7-20 matched 'low', while the remaining two could not be matched with any other teeth for various reasons. Although designation to Pongo cannot be resolved at this time using this method, the results demonstrate that at least two of the teeth correlate well with various hunter-gatherers, who do not share dietary similarity with Pongo. This suggests their designation as Pongo should be more closely reevaluated. As for the four specimens whose tooth position was uncertain, S7-14 matched 'highly' with 1st molars, S7-62 and S7-78 matched 'moderately' with 2nd and 1st molars respectively, while S7-20 only matched at the 'low' designation. Although this approach is still exploratory, it adds another analytical tool for use in defining tooth position.
In sum, this method has demonstrated its usefulness in defining and functionally analyzing a novel 3D molar microwear dataset to interpret dietary preference. Future work would include a pan-H. erectus molar sample in order to illuminate broader populational, taxonomic and dietary correlations within and among all H. erectus specimens. A larger, more heterogeneous historic H/G sample would also be included in order to provide a wider dietary comparative population. This method can be further extended to include and compare any hominin, as well as any organism that produces microwear upon its molars. Also, the data obtained and resultant fmv signature diagrams have the potential to be incorporated into 3D VR reconstructions of mandibular movement, thus recreating mastication in extinct organisms and leading to more robust anatomical and physiological investigations, especially when viewed in the context of larger environmental conditions or changes.
Synaptic plasticity is the basis for information storage, learning and memory and is achieved by modulation of the synaptic transmission. The amount of active AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazol-propionic acid) receptors at the synapse determines the transmission properties, therefore the regulation of AMPA receptor trafficking affects the synaptic strength. The protein GRIP (glutamate receptor interacting protein) binds to AMPA receptors and is one of the important regulators of AMPA receptor stability at the synapse (Dong et al., 1997; Osten et al., 2000). Previous studies have shown that the ablation of ephrinB2 or ephrinB3 in the nervous system leads to severe defects in hippocampal LTP (long term potentiation) and LTD (long term depression) (Grunwald et al., 2004). We found that ephrinB2 ligands play an important role in the stabilization of AMPA receptors at the cellular membrane (Essmann et al., 2008). Treating cultured hippocampal neurons with AMPA resulted in a robust AMPA receptor internalization, which could be inhibited by simultaneous ephrinB2 activation with soluble EphB4-Fc fusion proteins. Conditional hippocampal ephrinB2 knock-out (KO) neurons showed enhanced constitutive internalization of AMPA receptors. Interaction and interference experiments revealed that ephrinB ligands and AMPA receptors are bridged by GRIP. This interaction is regulated by phosphorylation of a single serine residue in close proximity to the C-terminal PDZ protein target site in ephrinB ligands (Essmann et al., 2008). To investigate the in vivo relevance of this previously undescribed feature of ephrinB reverse signaling, we generated ephrinB2 S-9>A knock-in mice, where the serine at position -9 was replaced by an alanine to prevent phosphorylation. The mutated ephrinB2 of this mouse line was expressed and able to form clusters following stimulation with the preclustered receptor EphB4-Fc. 
Surface ephrinB2 cluster size and cluster number were slightly smaller in comparison to wild type (WT) mice. Analyzing AMPA receptor internalization, we observed an increased basal GluR2 endocytosis in cultured hippocampal neurons of ephrinB2 S-9>A mice. Dendrite and spine morphology was similar in pyramidal CA1 neurons of brain slices from adult ephrinB2 S-9>A and WT mice, suggesting a redundancy between the different ephrinB family members.
Apart from regulating AMPA receptor stability at the synapse, GRIP1 also has an important role in the secretory pathway to deliver cargo proteins along microtubules to dendrites and synapses (Setou et al., 2002). Proteins involved in synaptic transmission and plasticity, as well as lipids required for the outgrowth and remodeling of dendrites and axons, have to be transported. We showed in our laboratory, with a directed proteomic analysis using the tandem affinity purification-mass spectrometry methodology (Angrand et al., 2006) and with immunoprecipitation assays with brain lysates, that the small regulatory protein 14-3-3 interacts with GRIP1. Further immunoprecipitation assays with lysates from HeLa cells transfected with various parts and sequence mutants of GRIP1 revealed that threonine 956 in the linker region L2 between PDZ6 and PDZ7 of GRIP1 is necessary for the interaction with 14-3-3. GRIP1 has been postulated to influence dendritic arborization and maintenance in hippocampal neurons in culture due to defective kinesin-dependent transport along microtubules (Hoogenraad et al., 2005). In order to address the role of the association of GRIP1 and 14-3-3 in dendritogenesis, we transfected rat hippocampal neurons with GRIP1-WT and GRIP1 mutants and performed Sholl analysis to evaluate dendritic arborization defects. We observed strikingly increased formation and growth of dendrites in developing neurons as well as in mature neurons overexpressing GRIP1-WT. However, overexpression of GRIP1-T956A, where threonine 956 was replaced by an alanine to prevent phosphorylation, did not show enhanced dendritogenesis, indicating a role for threonine 956 phosphorylation in dendrite branching. To investigate the importance of the interaction between GRIP1 and 14-3-3 in vivo, we generated transgenic mouse lines with a GRIP1-T956A transgene or a GRIP1-WT transgene as control.
These mice were crossed with heterozygous GRIP1 mice, and through further breeding we obtained some surviving mice carrying either the wild type or the mutated GRIP1 transgene in the usually embryonically lethal GRIP1-KO background (Bladt et al., 2002; Takamiya et al., 2004). In embryonic day (E) 14.5 cultured hippocampal GRIP1-KO neurons we observed reduced dendritic growth. We also showed reduced GluR2 staining on the dendritic surface in cultured hippocampal neurons from GRIP1-KO and GRIP1-KO neurons containing the GRIP1-T956A transgene. GRIP1-KO neurons containing the GRIP1-WT transgene showed a similar surface GluR2 signal intensity as WT neurons. Reduced surface GluR2 staining in GRIP1-KO neurons and GRIP1-KO neurons with the GRIP1-T956A transgene might be a consequence of defective kinesin-dependent transport of GluR2 to dendrites, indicating an important role of threonine 956 phosphorylation of GRIP1 for GluR2 trafficking.
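The Sholl analysis mentioned above quantifies dendritic arborization by counting how many times a neuron's dendritic tree crosses concentric circles of increasing radius centred on the soma. A minimal sketch of that counting step follows; the segment geometry and function name are invented for illustration and do not reflect the actual analysis pipeline used in the study.

```python
import math

def sholl_profile(segments, radii):
    """Count dendrite-circle intersections for each radius.

    segments: list of ((x1, y1), (x2, y2)) dendritic segments,
              with the soma assumed at the origin.
    radii:    circle radii centred on the soma.
    Returns a dict mapping radius -> number of segments whose
    endpoints lie on opposite sides of that circle.
    """
    profile = {}
    for r in radii:
        crossings = 0
        for (x1, y1), (x2, y2) in segments:
            d1 = math.hypot(x1, y1)  # endpoint distances from the soma
            d2 = math.hypot(x2, y2)
            # Opposite signs mean the segment crosses the circle of radius r.
            if (d1 - r) * (d2 - r) < 0:
                crossings += 1
        profile[r] = crossings
    return profile

# Toy dendrite: a trunk leaving the soma that bifurcates at (3, 0)
segments = [((0, 0), (3, 0)), ((3, 0), (6, 0)), ((3, 0), (3, 4))]
print(sholl_profile(segments, [2.0, 4.0, 5.5]))  # prints: {2.0: 1, 4.0: 2, 5.5: 1}
```

An arborization defect such as the one reported for GRIP1-KO neurons would show up in this profile as systematically lower crossing counts at larger radii.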
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that, I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results on various forms of tasks and setups, which show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretical optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform similarly to the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced, as well as different algorithmic implementations. Secondly, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors.
I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also cope with more complex environments in which it has to deal with multiple possible underlying generative models (i.e., perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations to the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach tries to address the neuronal substrates of the learning process for cue integration.
I again use a reward based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section to discuss recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented, together with an analysis of the influence of different plasticity mechanisms on it. Again, benefits, shortcomings and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
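The reliability-weighted averaging used as the comparison baseline weights each cue by its inverse variance, which is also the Bayes-optimal maximum-likelihood combination for independent Gaussian cues. A minimal sketch, with invented example numbers, illustrates the rule:

```python
def integrate_cues(estimates, variances):
    """Inverse-variance (reliability-weighted) cue combination.

    For independent Gaussian cues this is the Bayes-optimal
    maximum-likelihood estimate: each cue is weighted by its
    reliability 1/sigma^2, and the combined variance is never
    worse than that of the most reliable single cue.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    var = 1.0 / total
    return mean, var

# Toy audio-visual localisation: vision is the more reliable cue
visual_pos, visual_var = 10.0, 1.0
audio_pos, audio_var = 14.0, 4.0
mean, var = integrate_cues([visual_pos, audio_pos], [visual_var, audio_var])
print(round(mean, 2), round(var, 2))  # prints: 10.8 0.8
```

The combined estimate is pulled towards the more reliable (visual) cue, and its variance (0.8) is below that of either cue alone; the assumptions behind this rule, such as independent Gaussian noise and a single common cause, are exactly the ones that can fail for real-world robotic data.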
Plastids are complex organelles that fulfil numerous essential cellular functions, such as photosynthesis and amino acid and fatty acid synthesis. The majority of proteins required for these functions are encoded in the nuclear genome and synthesised on cytosolic ribosomes as precursors, which are posttranslationally transported to and imported into the organelle by the concerted action of translocons in the outer and inner chloroplast membranes. For most preproteins, targeting to the organelle is ensured by a specific import signal, a so-called transit peptide, which is specifically recognised by receptors at the chloroplast's surface. A transit peptide is generally defined as essential and sufficient for precursor targeting to and translocation into chloroplasts. However, an analysis of the ability of transit peptides to drive translocation of a tightly folded passenger domain revealed that the transit peptide is not always sufficient for the translocation event. A critical length requirement for the signal, measured in amino acids, has been determined in vivo and in vitro. In the case of a shorter transit peptide, the succeeding portion of the mature domain provides an extension of the unfolded polypeptide stretch required for successful translocation. The analysis of the unfolding mode of a folded model passenger during translocation links the observed transit peptide length requirement to the action of an energising unit present in the intermembrane space of chloroplasts.
The likely candidate for this energising unit is the putative imsHsp70, previously hypothesised to function in the translocation of precursor proteins across the outer membrane. However, as the identity of this protein has so far remained unknown, its existence has been a matter of debate. The present study focuses on the isolation and characterisation of imsHsp70 at the molecular level. Mass spectrometry analyses and in vivo localisation studies demonstrate that while no specific imsHsp70 exists, multiple cytosolic Hsp70 isoforms are targeted to the intermembrane space, but not to the stroma, of chloroplasts. Thus, a so far unrecognised mode of dual targeting to chloroplasts and cytosol most likely ensures the allocation of Hsp70s into the intermembrane space.
Within the last twenty years, the contraction method has turned out to be a fruitful approach to the distributional convergence of sequences of random variables that obey additive recurrences. It was mainly invented for applications in the real-valued framework; in recent years, however, more complex state spaces such as Hilbert spaces have come under consideration. Building upon the family of Zolotarev metrics, introduced in the late seventies, we develop the method in the context of Banach spaces and work it out in detail for the spaces of continuous and of càdlàg functions on the unit interval. We formulate sufficient conditions on both the sequence under consideration and its possible limit, which satisfies a stochastic fixed-point equation, that allow one to deduce functional limit theorems in applications. As a first application we present a new and notably short proof of Donsker's classical invariance principle, based on a recursive decomposition. Moreover, we apply the method to the analysis of the complexity of partial match queries in two-dimensional search trees such as quadtrees and 2-d trees. These important data structures have been under heavy investigation since their invention in the seventies. Our results answer problems left open in the pioneering work of Flajolet et al. in the eighties and nineties. We expect that the functional contraction method will contribute significantly to solutions of similar problems involving additive recursions in the coming years.
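Donsker's invariance principle, mentioned above, states that the rescaled random walk S_{[nt]}/√n converges in distribution on C[0,1] to Brownian motion, so continuous path functionals converge as well. A small simulation sketch (parameters illustrative, not from the thesis) checks this for the running maximum via the reflection principle:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Rescaled simple random walk: S_{[nt]} / sqrt(n), i.i.d. +-1 steps
n, n_paths = 500, 4_000
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
walks = np.cumsum(steps, axis=1) / np.sqrt(n)

# A continuous functional of the path: the running maximum.
# Reflection principle: P(max_{t<=1} B_t <= x) = 2*Phi(x) - 1 = erf(x/sqrt(2))
x = 1.0
empirical = float(np.mean(walks.max(axis=1) <= x))
theoretical = erf(x / sqrt(2))
```

For large n the empirical probability approaches the Brownian value (about 0.68 for x = 1), up to discretization and sampling error.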
The geographic range of species is a fundamental structuring feature of the biological world. Why species are distributed the way they are has long been one of the central questions in ecology, biogeography and evolution. At present, largely as an unintended by-product of human economic activity and population dynamics, the geographic ranges of species are changing, with decisive consequences for agriculture and forestry, for disease vectors, and for the biological systems that provide ecosystem functions. It is therefore urgent that we improve our understanding of the dynamics from which the geographic distributions of species arise. With this doctoral thesis I aim to contribute, in three studies on the range dynamics of songbirds, to our developing understanding of the multiple factors that influence species ranges.
1) Towards a more mechanistic understanding of species traits and range sizes: A major unsolved problem in macroecology is to understand the immense interspecific variation in the size of geographic ranges. While species traits such as fecundity and body size are assumed to affect range size, a general understanding of how range sizes are jointly influenced by multiple traits is lacking. Here we assess the effect of life-history traits (fecundity, dispersal ability), ecological traits (habitat niche, diet niche, migratory behaviour, flexibility in migratory behaviour) and morphological traits (body size) on the global range sizes of 165 European songbirds. We identify hypotheses on the relationship between species traits and range size from the literature and test them using path analysis. The global geographic range sizes of European songbirds were influenced by life-history traits (fecundity and dispersal ability), ecological traits (habitat niche breadth, diet niche position and migratory behaviour) and by body size. Species traits influenced range sizes through both direct and indirect pathways. The influence of body size in particular was complex, with positive and negative effects via different paths. Range size is very likely to depend on factors other than species traits as well. We show that disentangling the direct and indirect influence of a multitude of traits is necessary to elucidate the mechanisms that generate macroecological relationships.
2) Competition and dispersal ability interact in determining the geographic distribution of birds: Understanding the factors that shape the geographic distribution of species remains a challenge for ecology and evolutionary biology. We examine how competition, dispersal ability, taxon age and habitat shifts since the Last Glacial Maximum influence the extent to which species of the bird genus Sylvia occur in all areas with suitable environmental conditions (i.e. range filling).
We quantified range filling in the bird genus Sylvia (warblers) using boosted regression trees and ridge regression. Using multiple regression, we tested for effects of intrageneric competition, dispersal ability, taxon age and habitat shift since the Last Glacial Maximum on range filling.
Warblers with high dispersal ability showed greater range filling, but only when competition was low in areas with less suitable habitat within their potential range. Taxon age and habitat shift since the Last Glacial Maximum had no consistent effect. We show that the ranges of warblers are most likely shaped by the simultaneous, interactive effect of competition and dispersal ability. If biotic interactions such as competition generally influence the ability of species to colonise new areas at the continental scale, predicting the effect of climate change on biodiversity will be challenging.
3) Niche availability in time and space: migration in Sylvia warblers: In the context of recent advances in ecological niche modelling, both the environment and the ecological niche of a species have been treated and quantified as static entities. In reality, however, the environment and the niche requirements of a species are dynamic on a multitude of scales. We propose a conceptual framework that considers how the realised niche and geographic distribution of species are shaped by the decoupled spatiotemporal availability of different environmental conditions and by changes in niche requirements over the lifetime of an organism. Testing predictions derived from the framework on the example of migration in Sylvia warblers yielded new insights: tracking of the climatic niche in geographic space was most likely not the driving force behind migration in the genus, and it potentially conflicts with tracking of the land-use niche. The niches of the warblers were narrower during the breeding season, showing that niche requirements can be temporally dynamic. We suggest that accounting for dynamic environments and niche requirements will decisively improve our understanding of the factors driving the movement of organisms in space and the dynamics of their niches and ranges.
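The "range filling" quantity used above is simply the occupied fraction of the environmentally suitable area. A toy numeric sketch with synthetic data, where a simple threshold stands in for the boosted-regression-tree suitability model actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic landscape: one climate variable per grid cell
n_cells = 5_000
temperature = rng.uniform(0.0, 30.0, n_cells)

# Toy suitability model: the species tolerates 10-25 degrees
suitable = (temperature > 10.0) & (temperature < 25.0)

# The species occupies only part of its suitable area, e.g. because
# dispersal limitation or competition keeps it out of the rest
occupied = suitable & (rng.random(n_cells) < 0.6)

# Range filling = realized range / potential range
range_filling = occupied.sum() / suitable.sum()
```

Values well below 1 then invite exactly the questions asked in study 2: is the shortfall explained by dispersal ability, competition, taxon age, or postglacial habitat shifts?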
This thesis aims at a better understanding of the spectacular X-ray bursts. The most likely astrophysical site is a very dense neutron star that accretes H/He-rich matter from a close companion. While falling towards the neutron star, the matter is heated and a thermonuclear runaway is ignited. The exact description of this process is dominated by the properties of a few proton-rich radioactive isotopes, which have a low interaction probability and hence a high abundance.
The topic of this thesis was therefore an investigation of the short-lived, proton-rich isotopes 31Cl and 32Ar. The Coulomb dissociation method is the modern technique of choice: excitations with energies up to 20 MeV can be induced by the Lorentz-contracted Coulomb field of a lead target. At the GSI Helmholtzzentrum für Schwerionenforschung GmbH in Darmstadt, Germany, an Ar beam was accelerated to an energy of 825 AMeV and fragmented in a beryllium target. The fragment separator was used to select the desired isotopes with a remaining energy of 650 AMeV. They were subsequently directed onto a 208Pb target in the ALADIN/LAND setup. The measurement was performed in inverse kinematics. All reaction products were detected, and inclusive and exclusive measurements of the respective Coulomb dissociation cross sections were possible.
During the analysis of the experiment, it was possible to extract the energy-differential excitation spectrum of 31Cl, and to constrain astrophysically important parameters for the time-reversed 30S(p,γ)31Cl reaction. A single resonance at 0.443(37) MeV dominates the stellar reaction rate, which was also deduced and compared to previous calculations.
The integrated Coulomb dissociation cross section of this resonance was determined to be 15(6) mb. The astrophysically important one- and two-proton emission channels were analyzed for 32Ar, and energy-differential excitation spectra could be derived. The integrated Coulomb dissociation cross section for two-proton emission was determined with two different techniques. The inclusive measurement yields a cross section of 214(29stat)(20sys) mb, whereas the exclusive reconstruction results in 226(14stat)(23sys) mb. Both results are in very good agreement. The Coulomb dissociation cross section for the one-proton emission channel is extracted solely from the exclusive measurement and amounts to 54(8stat)(6sys) mb.
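For a single narrow resonance such as the 0.443 MeV state discussed above, the stellar rate of 30S(p,γ)31Cl follows the standard narrow-resonance formula. The sketch below uses a purely illustrative resonance strength; the thesis derives the actual strength from the measured Coulomb dissociation cross section:

```python
import numpy as np

def narrow_resonance_rate(T9, E_r, omega_gamma, mu):
    """N_A<sigma*v> in cm^3 s^-1 mol^-1 for a single narrow resonance.

    T9          : temperature in GK
    E_r         : resonance energy in MeV
    omega_gamma : resonance strength in MeV
    mu          : reduced mass in amu
    """
    return (1.5399e11 / (mu * T9) ** 1.5
            * omega_gamma * np.exp(-11.605 * E_r / T9))

# Reduced mass of p + 30S in amu
mu = 1.0078 * 29.9747 / (1.0078 + 29.9747)

E_r = 0.443            # MeV, resonance energy from the analysis
omega_gamma = 1.0e-7   # MeV, placeholder value for illustration only

T9 = np.array([0.5, 1.0, 2.0])   # typical X-ray burst temperatures in GK
rates = narrow_resonance_rate(T9, E_r, omega_gamma, mu)
```

The exponential Boltzmann factor makes the rate rise steeply with temperature, which is why the position of a single low-lying resonance can dominate the burst nucleosynthesis path.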
Furthermore, the development of the Low Energy Neutron detector Array (LENA) for the upcoming R3B setup is described. The detector will be used in charge-exchange reactions to detect the low-energy recoil neutrons from (p,n)-type reactions. Such reaction studies are of particular importance in the astrophysical context and can be used to constrain half-lives under stellar conditions. Within the scope of this work, prototypes of the detector were built and successfully commissioned at several international laboratories.
The analysis was supported by detailed simulations of the detection characteristics.
In this thesis I use effective models to investigate the properties of QCD-like theories at nonzero temperature and baryon chemical potential. First I construct a PNJL model using a lattice spin model with nearest-neighbor interactions for the gauge sector and four-fermion interactions for the quarks in (pseudo)real representations of the gauge group. Calculating the phase diagram in the plane of temperature and quark chemical potential for QCD with adjoint quarks, it is qualitatively confirmed that the critical temperature of the chiral phase transition is much higher than the deconfinement transition temperature. At a chemical potential equal to half the diquark mass in the vacuum, a diquark Bose–Einstein condensation (BEC) phase transition occurs. In the two-color case, a Ginzburg–Landau expansion is used to study the tetracritical behavior around the intersection point of the deconfinement and BEC transition lines, which are both of second order. A compact expression for the expectation value of the Polyakov loop in an arbitrary representation of the gauge group is obtained for any number of colors, which allows us to study Casimir scaling at both nonzero temperature and chemical potential. Subsequently I study the thermodynamics of two-color QCD (QC2D) at high temperature and/or density using ZQCD, a dimensionally reduced superrenormalizable effective theory formulated in terms of a coarse-grained Wilson line. In the absence of quarks, the theory is required to respect the Z2 center symmetry, while the effects of quarks of arbitrary masses and chemical potentials are introduced via soft Z2-breaking operators. Perturbative matching of the effective theory parameters to the full theory is carried out explicitly, and it is argued how the new theory can be used to explore the phase diagram of two-color QCD.
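Casimir scaling, as referred to above, is the standard hypothesis (not specific to this thesis's derivation) that the free energy of a static color source, and hence the logarithm of its Polyakov loop, scales with the quadratic Casimir of its representation:

```latex
% Casimir scaling hypothesis for Polyakov loops, with F the
% fundamental representation and C_2 the quadratic Casimir:
\[
  \frac{\ln \langle L_R \rangle}{\ln \langle L_F \rangle}
  = \frac{F_R}{F_F}
  = \frac{C_2(R)}{C_2(F)} .
\]
% Example for SU(3): C_2(adjoint)/C_2(fundamental) = 3 / (4/3) = 9/4.
```

The compact expression for ⟨L_R⟩ mentioned above allows this ratio to be tested at nonzero temperature and chemical potential.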
Human activities, notably the conversion of tropical forests into farmland, have profound impacts on biological diversity and ecosystem functions (Millennium Ecosystem Assessment 2005). It is widely debated to what extent human-modified landscapes can maintain tropical biodiversity and ecosystem functionality (e.g. Waltert et al. 2004, Sekercioglu et al. 2007). In this thesis, I used a large, temporally replicated dataset to assess the value of habitat types differing in land-use intensity for bird communities in tropical East Africa. I investigated bird abundance and species richness along a forest-farmland habitat gradient and assessed spatial and temporal fluctuations of bird assemblages and their food resources.
I could show that forest and farmland habitats harbor distinct bird communities. Moreover, the protection of natural forests merits the highest priority for conserving the high diversity of forest-dependent bird species. My study, however, also shows that farmland habitats in the proximity of natural forest can support a high bird diversity. High bird diversity in tropical farmlands depends on a high structural complexity, such as in small-scale subsistence farmlands. From my findings, I conclude that the conversion of forest to farmland leads to substantial losses in bird diversity, in particular in specialized feeding guilds such as insectivores, while the conversion of structurally heterogeneous subsistence farmlands to sugarcane plantation causes erosion of bird diversity in agricultural ecosystems. Both findings are important for conservation planning in times when tropical forests and agroecosystems are under constantly high pressure due to increasing human population numbers and global demands for biofuel crops (Gibbs et al. 2008). From an ecosystem function perspective, my study demonstrates the potential of agroecosystems in supporting important ecosystem functions, such as seed dispersal by frugivorous birds and pest control by insectivorous birds. I could show that bird abundances in both frugivorous and insectivorous guilds were strongly predicted by their respective food resources, implying that seasonal shifts in fruit and invertebrate abundance at Kakamega forest and surrounding farmlands affect community dynamics and appear to influence local movement patterns of birds. The most interesting finding of this study was that feeding guilds responded idiosyncratically to resource fluctuations. Frugivore richness fluctuated asynchronously in forest and farmland habitats, suggesting foraging movements and fruit tracking across habitat borders. 
In contrast, I found that insectivores fluctuated synchronously in the two habitat types, suggesting a lack of inter-habitat movements. I therefore predict that insectivorous bird communities in this forest-farmland landscape may be more susceptible to the combined effects of land-use and climate change, due to their narrow habitat niche and limited capacity to track their resources.
The fact that a number of bird species regularly moved across the landscape mosaic in my study system implies that birds are able to provide long-distance seed dispersal across habitat borders. Thus, birds may enhance forest regeneration in human-modified landscapes, such as those in most parts of tropical Africa, provided that forest remnants are protected within an agricultural habitat matrix. In order to conserve tropical biodiversity effectively within forest-farmland mosaics, this study advocates conservation strategies that go beyond forest protection and explicitly integrate farmlands into forest management plans and policies. These should emphasize the retention of keystone habitat elements within tropical farmland landscapes, such as indigenous trees, forest galleries and hedgerows, whose presence enhances species diversity. Such grassroots-level approaches can be operationalized, for instance, through incentives for farmers to maintain their traditional subsistence land-use practices and through community-based livelihood projects aimed at enhancing local habitat heterogeneity and inter-habitat connectivity.
Thermal expansion measurements provide a sensitive tool for exploring a material's thermodynamic properties in condensed matter physics, as they provide useful information on the electronic, magnetic and lattice properties of a material. In this thesis, thermal expansion measurements have been carried out both at ambient pressure and under hydrostatic pressure conditions. From the materials point of view, the spin-liquid candidate κ-(BEDT-TTF)2Cu2(CN)3 has been studied extensively as a function of temperature and magnetic field. Azurite, Cu3(CO3)2(OH)2, a realization of a one-dimensional distorted Heisenberg chain, is also studied both at ambient and hydrostatic pressure to demonstrate the proper functioning of the newly built setup for thermal expansion under pressure. ...
Since the beginning of solid-state physics, the question of why some materials are metallic while others are insulating has been of central importance. A first explanation was given by band theory [23, 44]. The electrons are subject to the periodic potential of the ion cores, which produces an energy spectrum consisting of bands, and the filling of these bands determines the conduction properties of the solid. ...
For millennia, rural West African communities living in or adjacent to savanna ecosystems have collected components of local plant species (e.g. fruits, leaves, bark) to fulfil essential household subsistence needs (alimentation, medical care, energy demand, etc.), to generate cash income and to overcome times of (financial) crisis. These non-timber forest products (NTFPs) thus make a considerable contribution to the well-being of local households. However, climate and land-use change severely impact West African savanna ecosystems and, consequently, the safeguarding of dependent rural livelihoods. The conversion of savanna into cultivated land for subsistence farming, owing to ongoing population growth, as well as the progressive promotion of cash crops (e.g. cotton), is ever-increasing. As a consequence, present land-use management in West Africa has to cope with serious trade-offs. Within this decision-making, NTFPs have been consistently undervalued for lack of appropriate economic figures to use within common cost-benefit analyses, and have thus frequently been outcompeted by seemingly more profitable land-use options. It is therefore crucial to provide appropriate economic data for NTFPs in order to create positive incentives for both decision-makers and NTFP beneficiaries to conserve NTFP-providing trees. The key finding of this analysis is that income from NTFPs accounts on average for 39 % of annual total household income in Northern Benin, representing the second-largest income share next to crop income and showing the respective households to be economically heavily dependent on NTFPs. Socio-economic characteristics of NTFP users strongly shape their preferences for woody species; ethnicity in particular has a major impact on the species used and the economic return obtained from them.
Moreover, the study investigated the impacts of climate and land-use change in 2050 on the economic benefits derived from the three economically most important tree species in the region, Vitellaria paradoxa, Parkia biglobosa and Adansonia digitata: environmental changes will have primarily negative effects on the economic returns from all three species. Overall, the study underpins the economic relevance of NTFPs for rural communities in West African savannas and, consequently, the necessity to sustain them appropriately in order to safeguard local livelihoods. Providing key figures on the current and future economic benefits obtained from NTFPs can augment common cost-benefit analyses, and by delivering detailed information about people's use preferences for local species, this study contributes to improving the basis of decision-making with reference to local land-use policies.
Interacting ultracold gases in optical lattices: non-equilibrium dynamics and effects of disorder
(2012)
This dissertation aims at a theoretical description of various applications of ultracold gases. A particular focus is placed on the dynamical evolution of bosonic condensates out of equilibrium by means of the time-dependent Gutzwiller method. Ground-state properties of strongly interacting fermionic atoms in box- and speckle-disordered lattices are investigated via real-space dynamical mean-field theory. ...
The first measurement of the fluctuation of the kaon-to-proton ratio in relativistic heavy-ion collisions is presented. This thesis details the analysis procedure for identifying kaons and protons using the NA49 experiment at CERN-SPS and discusses the results in the context of the current state of the field.
Diatoms contribute largely to the total primary production of the ecosphere and are key players in global biogeochemical cycles. Their chloroplasts are surrounded by four membranes owing to their secondary endosymbiotic origin. Their thylakoids are arranged into three parallel bands and differentiation of thylakoid membranes into grana or stroma is not observed. The fucoxanthin chlorophyll a/c binding proteins act as the light harvesting proteins and play a role in photoprotection during excess light as well. The diatom genome encodes three different families of antenna proteins. Family I are the classical light harvesting proteins called "Lhcf". Family II are the red algae related Lhca-R1/2 proteins called "Lhcr" and family III are the photoprotective LI818 related proteins called "Lhcx".
All known Fcps have molecular weights in the range of 17-23 kDa. They are membrane proteins with shorter loops and termini compared to the LHCs of higher plants and are therefore extremely hydrophobic. This makes the isolation of single specific Fcps using routine protein purification techniques difficult.
The purification of a specific Fcp-containing complex has not been achieved so far, and until it is, several questions concerning the light-harvesting antenna systems of diatoms cannot be answered. For example: which proteins interact specifically? Are the various Fcps differently pigmented? Which pigments interact with each other, and how? Which proteins contribute to photosystem-specific antenna systems? Can pure Fcps be reconstituted into crystals like LHCII proteins? In order to answer these questions, specific Fcp-containing complexes have to be purified. ...
Human activities affect almost all areas of life on Earth (MEA 2005a; UNEP 2007). The destruction and modification of natural habitats have been identified as the main cause of worldwide biodiversity loss (Harrison and Bruna 1999; Dale et al. 2000; Foley et al. 2005; MEA 2005a). Together with climate change, land-use change is therefore regarded as the most influential aspect of anthropogenic global change (MEA 2005a). Land-use change includes both the conversion of natural habitats into agricultural land or settlements and the intensification of land use in already cultivated landscapes. These changes have far-reaching consequences for species diversity and frequently result in the loss of species with increasing land-use intensity (Scholes and Biggs 2005).
Biodiversity and ecosystems provide many different functions, such as the production of oxygen, the purification of water and the pollination of crops.
Some of these functions are helpful, others important, and still others essential for human well-being (MEA 2005b; UNEP 2007). Ecosystem functions and the many benefits they provide have meanwhile become a central topic of interdisciplinary research in the social and natural sciences (Barkmann et al. 2008 and references therein). As a result, some confusion has arisen regarding the use of the terms "ecosystem function" and "ecosystem service" (deGroot et al. 2002). Since the focus of my work is on basic functions of ecosystems, I use the term ecosystem function in the following.
For many ecosystem functions it is still insufficiently known how they are affected by external disturbances (Kremen and Ostfeld 2005; Balvanera et al. 2006). Ecosystem functions are rarely maintained by a single species, but usually by a whole range of different taxonomic groups, each with its own requirements. These species, as well as their intra- and interspecific interactions, may respond quite differently to the same source or intensity of disturbance. This can make predictions about the behaviour of ecosystem functions extremely difficult. ...
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (hold limit orders with only finitely many different limit prices and rebalance at most finitely often) can be extended in a suitable
way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton’s portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian
motion, a proportional bid-ask spread, and Poisson execution times for the limit orders of the small investor, we show that the optimal strategy consists in using market orders to keep the
proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
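For orientation, the frictionless benchmark behind this result is Merton's constant fraction. The band logic can be sketched as follows; the numbers and the band half-width are purely illustrative, not the thesis's actual solution:

```python
# Frictionless Merton problem with CRRA utility: the optimal fraction
# of wealth in the risky asset is the constant
#     pi* = (mu - r) / (gamma * sigma^2)
def merton_fraction(mu, r, sigma, gamma):
    return (mu - r) / (gamma * sigma ** 2)

pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=2.0)

# With a bid-ask spread, the optimal strategy keeps the fraction inside
# a no-trade band around pi*; the half-width below is hypothetical.
band_lo, band_hi = pi_star - 0.05, pi_star + 0.05

def clip_with_market_order(pi):
    """Market orders only pull the fraction back to the band boundary;
    inside the band, limit orders are left to profit from the spread."""
    return min(max(pi, band_lo), band_hi)
```

The qualitative picture matches the proportional-transaction-cost case: market orders act only at the band edges, while inside the band the investor trades passively.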
At the end of the 1970s, five years after the introduction of the first commercial medical computed tomography scanner, tomography was applied for the first time to the diagnostics of particle beams at the Los Alamos Scientific Laboratory. In tomography, a two-dimensional image of the density distribution (a slice) is approximated from one-dimensional projections, so-called profiles, which are recorded at as many angles around an object as possible. This is made possible by the Fourier slice theorem, going back to work introduced by Johann Radon as early as 1917. In theory, the two-dimensional density distribution can be determined exactly if projections with infinitely fine resolution over infinitely many angles around the object enter the reconstruction. By reconstructing many slices, a three-dimensional image of the density distribution in an object, in this case an ion beam, can be computed, provided the object is not optically dense.
The profiles in non-invasive beam diagnostics are obtained from CCD camera images of beam-induced fluorescence, which is produced by letting residual gas into the beamline. Profiles obtained by other methods (e.g. grid measurements) are also conceivable. At locations with high beam energy, however, a non-invasive form of profile acquisition is indispensable, both for the quality of the beam and for the protection of the measurement devices.
Over the last 40 years, many important advances have been made in the field of beam tomography:
1. Initially, only very few profiles were available, so the method of filtered back projection (FBP), which derives directly from the Fourier slice theorem and is also used in medicine, could not be applied. To solve this problem, iterative methods such as the Algebraic Reconstruction Technique (ART) and the Maximum Entropy Method (MEM) were adopted for beam tomography, so that a back transformation became possible even with a very small number of profiles.
2. In addition to real-space tomography, phase-space tomography was developed, so that by now a reconstruction of the six-dimensional phase space, which describes an ion beam in its entirety, is possible.
3. For a long time, the projections were obtained from several fixed ports (multi-port technique), which severely limits the number of possible projections. Later, a method was developed that rotates the beam using quadrupoles (quad-scan technique), so that many projections could be measured from a single port and even FBP could be applied.
4. Most efforts aimed at using tomography as a non-invasive emittance measurement method, which remains an important problem today because of the large and still increasing energies in modern accelerators. To use tomography for emittance measurement, one performs a reconstruction of the phase space. The problem is that this requires a priori knowledge of the beam transport matrix, while the calculated transport matrix does not match the actual beam transport, since at high energies the transport is altered nonlinearly by space charge. Good progress has been made in estimating the actual transport matrix, so that phase-space tomography can nevertheless be carried out with sufficiently good results.
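The core of the algebraic reconstruction technique (ART) mentioned in point 1 is the Kaczmarz iteration: the reconstruction is posed as a linear system in which each row relates the unknown pixel densities to one measured profile bin, and the current estimate is projected onto each row's hyperplane in turn. A minimal sketch on a toy 2x2 system (real tomography systems are large and sparse; all names here are illustrative):

```python
def kaczmarz(A, b, sweeps=200):
    """ART core: cyclically project the estimate onto each measurement row."""
    n = len(A[0])
    x = [0.0] * n                       # start from an empty image
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xj for a, xj in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            scale = (b_i - dot) / norm2  # residual of this measurement
            x = [xj + scale * a for xj, a in zip(x, a_i)]
    return x

# consistent toy system with exact solution x = (1, 1):
# row i says "pixels weighted by A[i] must sum to the measured value b[i]"
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]
x = kaczmarz(A, b)
```

Because each update uses only one measurement row, the method works with arbitrarily few profiles, which is exactly why ART suits beam tomography setups where FBP's angular-coverage requirement cannot be met.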
Despite all these advances and developments, tomography is still not a widespread method in beam diagnostics. The reason is that setting up a tomography system requires a complex sequence of decisions and broad knowledge from many different fields, and this considerable extra effort must be justified by a significant benefit. To this day, however, the great value of tomography for beam diagnostics and for the study of beam dynamics has gone largely unrecognized, remaining reduced to the development of a non-invasive emittance measurement method. A second obstacle has been the trade-off between accuracy and space requirements: high accuracy from many projections with the quad-scan technique over several meters, or low accuracy from few projections with the multi-port technique in less than one meter. Tomography can contribute greatly both to the online monitoring of important machine parameters during beam operation and to detailed beam-dynamics analyses (modeling), far beyond the implementation of a non-invasive emittance measurement method.
Ensuring this requires two things. First, the trade-off between accuracy and space requirements must be resolved. To this end, a rotatable vacuum chamber was developed in this work which, modeled on medical tomographs, can travel around the beam in more than 5000 angular steps while maintaining a vacuum of at least 10⁻⁷ mbar and occupying less than 400 mm of the beamline. Second, the implementation of tomography must be simplified by specifying schematic steps and decisions. A beam tomography must always be implemented for its particular purpose, since individual elements such as the measurement setup (and hence the number of profiles), the tomography algorithm, and the parameters to be determined may differ from one application to the next. The necessary decisions, however, can be arranged in a scheme that simplifies and accelerates the implementation. To this end, this work introduces a diagnostics pipeline and a decision scheme, demonstrates an implementation following this scheme using the example of a beam tomography for the Frankfurt Neutron Source (FRANZ), and discusses the corresponding questions and decisions. It is shown how the standard beam parameters required for monitoring can be obtained from the measured data via the data processing performed by the tomography. In addition, a layer model is introduced through which non-standard or newly modeled beam parameters can be developed for detailed beam-dynamics analyses beyond the standard parameters. This work is intended to provide a basic concept for the routine implementation of tomography in beam diagnostics.
For monitoring during beam operation, the time needed to determine the standard parameters must still be reduced substantially. Phase-space tomography still lacks a way to reconcile the arctangent-shaped progression of the calculated phase-space rotation angles with the FBP requirement of equidistant projection angles.
This thesis deals with continuous-time portfolio optimization as well as with topics from the field of credit risk. The goal of portfolio optimization is to find, for a given initial capital, the best possible consumption and investment strategies. This work primarily investigates the influence of income on these decisions. Since, on the one hand, the future income stream is random and, on the other hand, no financial products exist that can replicate it, incorporating income into portfolio optimization poses a major problem: the assumptions of a complete market no longer hold, so the standard solution methods cannot be applied. This thesis analyzes several variants of this problem and discusses various solution approaches. Furthermore, this study examines the influence of a firm's credit risk on its stock return. Particular attention is paid to an anomaly that has already been discussed extensively in the literature: firms with high default probabilities earn lower returns than firms with smaller default probabilities. A further question within the field of credit risk is to what extent models are capable of pricing and hedging structured products. This thesis attempts to provide answers to these questions.
The objective of this work is twofold. First, we explore the performance of density functional theory (DFT) when it is applied to solids with strong electronic correlations, such as transition metal compounds. Along this direction, particular effort is put into the refinement and development of parameterization techniques for deriving effective models on the basis of DFT calculations. Second, within the framework of DFT, we address a number of questions related to the physics of Mott insulators, such as magnetic frustration and electron-phonon coupling (Cs2CuCl4 and Cs2CuBr4), high-temperature superconductivity (BSCCO), and doping of Mott insulators (TiOCl). In the frustrated antiferromagnets Cs2CuCl4 and Cs2CuBr4, we investigate the interplay between strong electronic correlations and magnetism on the one hand and electron-lattice coupling on the other, as well as the effect of this interplay on the microscopic model parameters. Another object of our investigations is the oxygen-doped cuprate superconductor BSCCO, where nano-scale electronic inhomogeneities have been observed in scanning tunneling spectroscopy experiments. By means of DFT and many-body calculations, we analyze the connection between the structural and electronic inhomogeneities and the superconducting properties of BSCCO. We use DFT and molecular dynamics simulations to explain the microscopic origin of the Mott insulating state that persists under doping in the layered compound TiOCl.