A new era in experimental nuclear physics has begun with the start-up of the Large Hadron Collider at CERN and its dedicated heavy-ion detector system ALICE. Measuring the highest energy density ever produced in nucleus-nucleus collisions, the detector has been designed to study the properties of the created hot and dense medium, assumed to be a Quark-Gluon Plasma.
Composed of 18 high-granularity sub-detectors, ALICE delivers data from a few million electronic channels for proton-proton and heavy-ion collisions.
The produced data volume can reach up to 26 GByte/s for central Pb–Pb collisions at the design luminosity of L = 10^27 cm^-2 s^-1, challenging not only the data storage but also the physics analysis. A High-Level Trigger (HLT) has been built and commissioned to reduce that amount of data to a storable rate prior to archiving, by means of data filtering and compression without loss of physics information. Implemented as a large high-performance compute cluster, the HLT is able to perform a full reconstruction of all events at the time of data-taking, which allows triggering based on the information of a complete event. Rare physics probes with high transverse momentum can be identified and selected to enhance the overall physics reach of the experiment.
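As an illustration of such an online selection, the following minimal sketch accepts an event when any reconstructed track exceeds a transverse-momentum threshold. The event and track structures, the threshold value, and all names are hypothetical placeholders, not ALICE HLT code.

```python
# Illustrative high-pT software trigger in the spirit of the HLT selection
# described above. All structures and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    pt: float  # transverse momentum in GeV/c

@dataclass
class Event:
    tracks: list

def high_pt_trigger(event: Event, pt_threshold: float = 3.0) -> bool:
    """Accept the event if any reconstructed track exceeds the pT threshold."""
    return any(track.pt > pt_threshold for track in event.tracks)

# Keep only triggered events prior to archiving.
events = [Event(tracks=[Track(0.8), Track(4.2)]), Event(tracks=[Track(1.1)])]
selected = [ev for ev in events if high_pt_trigger(ev)]
print(f"{len(selected)} of {len(events)} events accepted")
```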
The commissioning of the HLT is at the center of this thesis. Deeply embedded in the ALICE data path and therefore interfacing with all other ALICE subsystems, the HLT posed not only a major commissioning challenge but also demanded a massive coordination effort, which culminated in the first proton-proton collisions reconstructed by the HLT. The thesis concludes with the study and implementation of online high-transverse-momentum triggers.
Tumor-associated macrophages (TAM) are a major supportive component within neoplasms and, through their plasticity, promote all phases of tumor development. The mechanisms of macrophage (MΦ) attraction and differentiation to a tumor-promoting phenotype, defined among others by distinct cytokine patterns such as pronounced production of the immunosuppressive interleukin 10 (IL-10), are largely unknown. However, a high apoptosis index within tumors and strong MΦ infiltration correlate with poor prognosis. Thus, I aimed at identifying signaling pathways that contribute to the generation of TAM-like MΦ, using the supernatant of apoptotic cancer cells (ACM) as stimulus.
To identify novel factors involved in generating TAM-like MΦ, I used an adenoviral RNAi-based approach. The primary read-out was production of IL-10; mediators modulating IL-10 were re-validated for their impact on the regulation of the cytokines IL-6, IL-8 and IL-12. Following assay development, optimization and down-scaling to a 384-well format, primary human MΦ were transduced with 8495 constructs of the adenoviral shRNA SilenceSelect® library of Galapagos BV, followed by activation to a TAM-like phenotype using ACM. I identified 96 genes involved in IL-10 production in response to ACM and observed a pronounced cluster of 22 targets regulating both IL-10 and IL-6. Principal validation of five targets of the IL-10/IL-6 cluster was performed using siRNA or pharmacological inhibitors. Among those, IL-4 receptor-alpha and cannabinoid receptor 2 were confirmed as regulators of IL-10 and IL-6 secretion.
One protein identified in the screen, the nerve growth factor (NGF) receptor TRKA, was chosen for in-depth validation based on its involvement in IL-10, IL-6 and IL-12 secretion from ACM-stimulated human MΦ. TRKA plays a cardinal role in neuronal development, but compelling evidence is emerging that TRKA also participates in cancer development. First experiments using pharmacological inhibitors confirmed the involvement of TRKA in IL-10 secretion by ACM-stimulated MΦ and revealed PI3K/AKT and, to a lesser extent, MAPK p38 as important signaling molecules downstream of TRKA activation. Signaling through TRKA required the presence of its ligand NGF, as indicated by NGF neutralization experiments. NGF was not induced by or present in ACM, but was constitutively secreted by MΦ. Interestingly, MΦ responded to authentic NGF with neither AKT and p38 phosphorylation nor IL-10 production. TRKA is well known to be transactivated by other receptors, and in neurons its cellular localization is decisive for its function. Inhibitors of common transactivation partners did not influence IL-10 production by human MΦ. Rather, ACM treatment provoked a pronounced translocation of TRKA to the plasma membrane within 10 minutes, as observed by immunofluorescence staining. Consequently, I set out to clarify the mechanisms of TRKA trafficking in response to ACM.
The bioactive lipid sphingosine-1-phosphate (S1P) had previously been identified as an important apoptotic cell-derived mediator involved in TAM-like MΦ polarization. Indeed, I observed S1P and src kinase involvement in ACM-mediated IL-10 induction. Furthermore, inhibition of S1P receptor (S1PR) signaling or src kinase activity prevented TRKA translocation, whereas a TRKA inhibitor or anti-NGF did not block TRKA trafficking to the plasma membrane in response to ACM. Thus, autocrine secreted NGF activated TRKA to promote IL-10 secretion, which required prior S1PR/src-dependent translocation of TRKA to the plasma membrane. Following the detailed analysis of IL-10 regulation, I investigated whether other TAM phenotype markers were influenced by ACM and whether their expression was regulated through TRKA-dependent signaling. Five of six markers were up-regulated at the mRNA level by ACM, and secretion of IL-6, IL-8 and TNF-alpha was triggered. S1PR signaling was essential for the induction of all but one marker, whereas TRKA signaling was only required for cytokine secretion. Interestingly, none of the investigated TAM markers was regulated identically to IL-10, emphasizing the tight and exclusive regulatory machinery of this potent immunosuppressive cytokine.
Finally, I aimed to validate in vivo the findings obtained in vitro with human ACM-stimulated MΦ. To this end, I isolated murine TAM as well as other major mononuclear phagocyte populations from primary oncogene-induced breast cancer tissue. Indeed, TRKA-dependent signaling was required for spontaneous cytokine production selectively by primary murine TAM. Besides IL-10, the TRKA pathway was decisive for the secretion of IL-6, TNF-alpha and monocyte chemotactic protein-1, indicating its relevance in cancer-associated inflammation.
In summary, my findings highlight a fine-tuned regulatory system of S1P-dependent TRKA trafficking and autocrine NGF signaling in TAM biology. Both S1P and NGF might be interesting targets for future cancer therapy.
To escape recognition by the body's own immune system, tumors exhibit modifications of their microenvironment. These include, among others, altered oxygen concentrations in the tumor core and the release of biochemical factors from tumor cells, which influence the function of tumor-associated phagocytes such as dendritic cells (DC). DC are professional antigen-presenting cells that specialize into distinct functional subtypes. Myeloid DC (mDC) are particularly efficient at antigen presentation, whereas plasmacytoid DC (pDC) exert regulatory effects on the immune system. Both subtypes play an important role in carcinogenesis.
While human mDC can be generated ex vivo from monocytes for therapeutic use, this has so far not been possible for human pDC. A first aim of this thesis was therefore to develop a protocol for the generation of human pDC from human monocytes. Monocytes were differentiated by means of the growth factor Fms-related tyrosine kinase 3 ligand (Flt3-L) into pDC equivalents, termed monocyte-derived pDC (mo-pDC). Indeed, mo-pDC displayed a surface marker profile characteristic of human pDC and, compared with mDC, showed a low capacity to induce proliferation of autologous T cells and to phagocytose apoptotic cells. During their differentiation from monocytes, mo-pDC acquired continuously increasing expression of the pDC-specific transcription factor E2-2 and its specific target genes. The most important functional parameter of pDC is the production of large amounts of interferon-α (IFN-α). Mo-pDC likewise secreted large amounts of IFN-α after prior activation with tumor necrosis factor-α (TNF-α), or when vitamin D3 or all-trans retinoic acid was used alongside Flt3-L during their differentiation. When mo-pDC were generated under hypoxia, a prominent factor of the tumor microenvironment, expression of the specific transcription factor E2-2 and release of IFN-α were strongly reduced. These data first showed that mo-pDC can be used to study the differentiation and function of human pDC.
Furthermore, they provided evidence for an altered differentiation of human pDC under hypoxia. In a next step, it was therefore investigated whether hypoxia also influences the differentiation of pDC from their physiological precursors. When mouse bone marrow cells were cultured with Flt3-L under normoxia or hypoxia, differentiation into pDC was indeed suppressed under hypoxia. This depended on the hypoxia-induced activity of hypoxia-inducible factor 1 (HIF-1), since Flt3-L-induced differentiation of murine bone marrow cells in which HIF-1 expression had been knocked out in pDC precursor cells proceeded normally under hypoxia.
In summary, hypoxia suppresses the differentiation and function of pDC through activation of HIF-1. This mechanism could contribute to their described dysfunction in human tumors.
Besides hypoxia, many other factors contribute to immunosuppression in tumors.
One component of the tumor microenvironment is the presence of apoptotic tumor cells. In contrast to the general view of tumors as apoptosis-resistant entities, apoptosis of tumor cells occurs abundantly even in untreated tumors. Under physiological conditions, apoptotic cells of the body suppress the immune system. The release of apoptotic material or the secretion of factors from dying tumor cells could therefore strongly influence the function of tumor-associated DC and the associated activation of tumoricidal lymphocytes. A study of this question was the second aim of the present thesis. To this end, human mDC were activated with supernatants of live, apoptotic or necrotic human breast cancer cells and subsequently co-cultured with autologous T cells. The cytotoxic potential of the co-cultured T cells was then analyzed. Interestingly, activation with supernatants of apoptotic tumor cells suppressed the DC-mediated generation of tumoricidal T cells through the emergence of a population of regulatory T cells (Treg) characterized by the simultaneous expression of the surface molecules CD39 and CD69. The emergence of the CD39- and CD69-expressing Treg population depended on the release of the bioactive lipid sphingosine-1-phosphate (S1P) from apoptotic cells, which acted via S1P receptor 4 to trigger the release of the immunoregulatory cytokine IL-27 from mDC.
Neutralization of IL-27 in AC-activated co-cultures of mDC and T cells blocked the generation of CD39- and CD69-expressing Treg cells and consequently resulted in the activation of cytotoxic T cells. Furthermore, the formation of adenosine in the co-cultures was required for the suppression of cytotoxic T cells. First experiments provided evidence for a direct interaction of CD69- and CD39-expressing Treg cells with CD73-expressing cytotoxic T cells. CD39 and CD73 are required for the generation of adenosine from ATP, so the interaction of Treg cells and cytotoxic T cells could promote adenosine production.
In summary, the findings presented here show how factors of the tumor microenvironment can influence the function of human DC subtypes. An understanding of the underlying mechanisms can provide valuable information for the choice of effective immunotherapies or chemotherapies and thus support the treatment of human tumors.
With the increasing heterogeneity of modern hardware, different requirements for 3d applications arise. Although real-time rendering of photo-realistic images is possible on today's graphics cards, it still requires large computational effort. Furthermore, smartphones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, the detail of a scene is usually reduced so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts, which may be distracting for a human spectator gazing at the display. Thus, the visual quality of the presented scene is reduced. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods utilize geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels of Detail (LODs), which are made available to the rendering system. The system then reduces the detail of an object using the precalculated LODs, e.g. when it is moved to the back of the scene. The appropriate LOD is selected using a metric and exchanged with the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition; otherwise, the exchange introduces discontinuities, which are easily discovered by a human spectator. After completion of the transition, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human.
Humans are limited in their vision, ranging from the inability to distinguish colors under varying illumination to the ability to focus on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of an applied compression. Popular methods of vision-based compression include MPEG and JPEG. A JPEG compression, for example, exploits the reduced sensitivity of humans to color and therefore encodes colors at a lower resolution, as sketched below. Other fields, such as auditory perception, also allow the exploitation of human limitations: MP3 compression, for example, reduces the quality of stored frequencies if other frequencies mask them. Various computer models exist to represent perception. In our rendering scenario, a model is advantageous that cannot be influenced by a human spectator, such as visual salience, or saliency.
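To make the JPEG example concrete, the following minimal sketch downsamples the two chroma channels while keeping luma at full resolution, which is precisely the "lower resolution for color" trade-off described above. The RGB-to-YCbCr coefficients are the standard ITU-R BT.601 values; everything else is an illustrative toy, not the JPEG standard itself.

```python
# Minimal sketch of perception-based compression via chroma subsampling.
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB image to YCbCr using BT.601 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def subsample_chroma(ycbcr: np.ndarray, factor: int = 2) -> np.ndarray:
    """Store chroma at reduced resolution (4:2:0-style); keep luma full."""
    out = ycbcr.copy()
    for c in (1, 2):  # Cb and Cr channels
        coarse = ycbcr[::factor, ::factor, c]
        # Upsample back by repetition; the viewer barely notices the loss.
        up = coarse.repeat(factor, axis=0).repeat(factor, axis=1)
        out[..., c] = up[: ycbcr.shape[0], : ycbcr.shape[1]]
    return out

img = np.random.rand(64, 64, 3) * 255.0  # stand-in for a real image
compressed = subsample_chroma(rgb_to_ycbcr(img))
```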
Saliency is a notion from psychophysics that determines how an object “pops out” of its surroundings. These outstanding objects (or features) are important for human vision and are directly evaluated by our Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows an identification of regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering methods. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted if only small or imperceptible errors are expected to occur. Yet, saliency is commonly applied to 2d images, and an extension towards 3d objects has only partially been presented. Some issues need to be addressed to accomplish a complete transfer.
In this work, we present a smart rendering system that not only utilizes a 3d visual salience model but also applies the reduction in detail directly during rendering. As opposed to normal LOD methods, this detail reduction is not limited to a predefined set of levels; rather, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3d object is presented. The definition of this function allows object-related visual salience information to be precalculated and stored. This stored data is then applicable in any illumination scenario and allows regions of interest on the surface of a 3d object to be identified. Unlike preprocessed methods, which generate a view-independent LOD, this identification includes information of the scene as well. Thus, we are able to define a perception-based, view-specific LOD. Performance measurements of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have proven the validity of the reduction.
The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations, which define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, a cut is stored, and re-traversal of the tree during rendering is avoided. Due to the explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes a node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed locally, requiring only direct successor or predecessor information. Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. These evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of a cut while being restricted in some way, e.g. in size. The latter evolves the cut to match a certain distribution and is applied in cases where a prioritization of nodes is not applicable. Both evaluation strategies operate with linear time complexity with respect to the size of the current TreeCut.
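A minimal sketch may help to fix the idea of an explicit cut altered only by refine and coarse. The class layout and names below are illustrative assumptions, not the thesis implementation; only the two operations themselves follow the description above.

```python
# Minimal sketch of the TreeCut idea: an explicit cut through a
# multi-resolution hierarchy, altered only by refine and coarse.
class Node:
    def __init__(self, level, children=None):
        self.level = level
        self.children = children or []
        self.parent = None
        for child in self.children:
            child.parent = self

class TreeCut:
    def __init__(self, root):
        self.cut = {root}  # start at the coarsest representation

    def refine(self, node):
        """Replace a cut node with its children (more detail)."""
        if node in self.cut and node.children:
            self.cut.remove(node)
            self.cut.update(node.children)

    def coarse(self, node):
        """Replace a node and its siblings with their parent (less detail).
        Only legal if all siblings are currently on the cut."""
        parent = node.parent
        if parent and all(c in self.cut for c in parent.children):
            self.cut.difference_update(parent.children)
            self.cut.add(parent)

# Usage: both operations are local, touching only parent/child links.
root = Node(0, children=[Node(1), Node(1)])
cut = TreeCut(root)
cut.refine(root)               # cut now holds the two children
cut.coarse(root.children[0])   # back to the root
```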
The data layout is chosen to separate rendering data and hierarchy, enabling multi-threaded evaluation and display. The object is adapted over multiple frames, while the rendering is not interrupted by the evaluation strategy in use. Therefore, we separate the representation of the hierarchy from the rendering data. Due to this design, the overhead imposed on the TreeCut data structure does not influence rendering performance, and a linear time complexity for rendering is retained. The TreeCut is not limited to altering the geometrical detail of an object: it has successfully been applied to create a non-photo-realistic stippling display, which draws the object with equally sized points in varying density. In this case the bucket-based evaluation strategy is utilized, which determines the distribution of the cut based on local illumination information. As an alternative, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon. A combination of external priorities is used to derive the appropriate icon version. An application for this mechanism is a messaging system that accounts for the current user situation.
When optimizing an object or scene, perceptual methods make it possible to account for or exploit human limitations. To this end, visual salience approaches derive a saliency map, which encodes regions of interest in a 2d map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. aborting a recursion when the current location is unsalient. Visual salience depends on multiple factors, including the view and the illumination of the scene. We extend the existing definition of 2d saliency and propose a universal function for 3d visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting the saliency from a 2d image and approximated 3d information, we compute this information directly from the 3d data. We derive a list of equivalent features for the 3d scenario and add them to the BSWDF. As the BSWDF is universal, 2d images are covered as well, and the calculation of the important regions within images is possible.
To extract the individual features that contribute to visual salience, the capabilities of modern graphics cards in combination with an accumulation rendering method are utilized. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and compared with their surround to determine whether they “pop out” (this center-surround comparison is sketched below). These operations are performed with a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3d data. This increases processing speed because no transfer of the data is required. After computation, these object-specific features can be combined to derive a saliency map for the object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored to disk. We define a sampling scheme to determine the views that need to be evaluated for each object. With these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. The sampling schemes compose a set of images in the form of a lookup table, similar to existing rendering techniques that extract illumination information from a lookup. The size of the lookup table increases only with the number of samples or the image size used for creation, as the images are of equal size. Thus, the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information of the object. Once the data is available, the saliency values are computed using a specialized illumination model, and a priority for each primitive is extracted. If the GPU is used, the calculated data has to be transferred from the graphics card. We therefore use the “transform feedback” capabilities, which allow high transfer rates and preserve the order of processed primitives. Thus, an identification of regions of interest based on the currently used primitives is achieved, and the TreeCut evaluation strategies are able to optimize the representation in a perception-based manner.
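The center-surround comparison can be sketched on the CPU as follows. The scalar feature, the neighbourhood radius, and the normalization are illustrative assumptions; the system described above performs this accumulation in a GPU shader rather than in a Python loop.

```python
# Sketch of a center-surround test deciding whether a surfel "pops out":
# each surfel's feature is compared against the mean of its neighbourhood.
import numpy as np

def surfel_saliency(positions: np.ndarray, features: np.ndarray,
                    radius: float = 0.1) -> np.ndarray:
    """positions: (N, 3) surfel centers; features: (N,) scalar feature
    (e.g. curvature or luminance). Returns per-surfel saliency in [0, 1]."""
    saliency = np.zeros(len(positions))
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        surround = features[(dists > 0.0) & (dists < radius)]
        if surround.size:
            # Large deviation from the local surround = high saliency.
            saliency[i] = abs(features[i] - surround.mean())
    m = saliency.max()
    return saliency / m if m > 0 else saliency
```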
As the adaptation utilizes information of the current scene, each change to an object can result in new visual salience information. Thus, a self-optimizing system is defined: the Feedback System. The output generated by this system converges towards a perception-optimized solution. To prove the saliency information useful, user tests have been performed with the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression to a purely geometrical approach, common for LOD generation. One result of the tests is that saliency information allows compression to be increased even further than is possible with purely geometrical methods. The participants were not able to distinguish between objects even when the saliency-based compression had only 60% of the size of the geometrically reduced object. If the size ratio is greater, saliency-based compression is rated with a higher score on average, and statistical tests show these results to be highly significant. The Feedback System extends a 3d object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation that utilizes a Software Development Kit (SDK) for physical simulations. This was chosen, on the one hand, to show the universal applicability of the proposed system and, on the other hand, to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design. In this case, the TreeCut operations alter not only geometrical but also simulation detail. This increases calculation performance because both the rendering and the SDK operate on less data after the reduction has been completed.
The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connection. An example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited for point-based simulations. Any of these simulations scales with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in static parts of the simulation. Thus, the detail of the simulation is preserved while fewer nodes are simulated.
The incorporation of perception in real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived. These models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented to change the LOD of an object in a dynamic and continuous manner. No definition of the individual levels is required in advance, and the transitions are performed locally. Furthermore, in combination with an identification of important regions by the BSWDF, a perceptual evaluation of a 3d object is achieved. As opposed to existing methods, which approximate data from 2d images, the perceptual information is acquired directly from the 3d data. Some of this data can be preprocessed if necessary, deferring additional computations away from rendering. The Feedback System, created by the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved with modern hardware, and we have proven the validity of the reductions through several user tests. However, the presented system only focuses on specific aspects, and more research is required to capture even more capabilities that a perception-based rendering system can provide.
Global climate change and land use change will not only alter entire ecosystems and biodiversity patterns, but also the supply of ecosystem services. A better understanding of the consequences is particularly needed in under-investigated regions such as West Africa. The projected environmental changes suggest negative impacts on nature, thus representing a threat to human well-being. However, many effects caused by climate and land use change are poorly understood so far. Thus, the main objective of this thesis was to investigate the impact of climate and land use change on vegetation patterns, plant diversity and important provisioning ecosystem services in West Africa. The three different aspects are explored separately and form the chapters of this thesis. The findings help to improve our understanding of the effects of environmental change on ecosystems and human well-being. In the first study, the main objectives were to model trends and the extent of future biome shifts in West Africa that may occur by 2050. I also modelled a trend in West African tree cover change while accounting for human impact. Additionally, uncertainty in future climate projections was evaluated to identify regions with reliable trends and regions where the impacts remain uncertain. The potential future spatial distributions of desert, grassland, savanna, deciduous and evergreen forest in West Africa were modelled using six bioclimatic models. Future tree cover change was analysed with generalized additive models (GAMs). I used climate data from 17 general circulation models (GCMs) and included human population density and fire intensity to model tree cover. Consensus projections were derived via weighted averages to 1) reduce inter-model variability and 2) describe trends extracted from different GCM projections. The strongest predicted effect of climate change was on desert and grasslands, where the bioclimatic envelope of grassland is projected to expand into the Sahara desert by an area of 2 million km². While savannas are predicted to contract in the south (by 54 ± 22 × 10⁴ km²), deciduous and evergreen forest biomes are expected to expand (by 64 ± 13 × 10⁴ km² and 77 ± 26 × 10⁴ km², respectively). However, uncertainty due to different GCMs was particularly high for the grassland and evergreen forest biome shifts. Increasing tree cover (1–10%) was projected for large parts of Benin, Burkina Faso, Côte d’Ivoire, Ghana and Togo, but a decrease was projected for coastal areas (1–20%). Furthermore, human impact negatively affected tree cover and partly changed the direction of the projected climate-driven tendency from increase to decrease. Considering climate change alone, the model results of potential vegetation (biomes) showed a ‘greening’ trend by 2050. However, the modelled effects of human impact suggest future forest degradation. Thus, it is essential to consider both climate change and human impact in order to generate realistic future projections of woody cover. The second study focused on the impact and interplay of future (2050) climate and land use change on the plant diversity of the West African country Burkina Faso, for which synergistic forecasts have been lacking to date. Burkina Faso covers a broad bioclimatic gradient, which causes a similar gradient in plant diversity. Thus, the impact of climate and land use change can be investigated in regions with different levels of species richness.
The LandSHIFT model from the Centre for Environmental Systems Research (CESR, Kassel, Germany) was adapted for this study to derive novel regional, spatially explicit future (2050) land use simulations for Burkina Faso. Additionally, the simulations include different assumptions on technological developments in the agricultural sector. One-class support vector machines (SVMs), a machine learning method, were applied to these land use simulations together with current and future (2050) climate projections at a 0.1° resolution (cell: ~10 × 10 km). The modelling results showed that the flora of Burkina Faso will be primarily negatively impacted by future climate and land use changes. Species richness will be significantly reduced by 2050 (P < 0.001, paired Wilcoxon signed-rank test). However, contrasting latitudinal patterns were found. Although climate change is predicted to cause species loss in the more humid regions of southern Burkina Faso (~200 species per cell), the model projects an increase of species richness in the Sahel. However, land use change is expected to suppress this increase to the current species diversity level, depending on the technological developments. Under the assumption of technological stagnation in the agricultural sector, climate change is a more important threat to plant diversity than land use change. Overall, the study highlights the impact and interplay of future climate and land use change on plant diversity along a broad bioclimatic gradient in West Africa. Furthermore, the results suggest that plant diversity in dry and humid regions of the tropics might generally respond differently to climate and land use change, a pattern that has not been detected by global studies so far. Several of the plant species in West Africa contribute significantly to the livelihoods of the population. The plants provide so-called non-timber forest products (NTFPs), which are important provisioning ecosystem services. However, these services are also threatened by environmental change. Thus, the third study aimed at developing a novel approach to assess the impacts of climate and land use change on the economic benefits derived from NTFPs. This project was carried out in cooperation with Katja Heubach (BiK-F), who provided data on household economics. These data comprise 60 interviews conducted in northern Benin on annual quantities and revenues of collected NTFPs from the three most important savanna tree species: Adansonia digitata, Parkia biglobosa and Vitellaria paradoxa. The current market prices of the NTFPs were derived from the respective local markets. To assess current and future (2050) occurrence probabilities of the three species, I calibrated niche-based models with climate data (from Miroc3.2medres) and land use data (LandSHIFT) at a 0.1° resolution (cell: ~10 × 10 km). Land use simulations were taken from the previous study on plant diversity. Three different niche-based models were used: 1) generalized additive models (a regression method), 2) generalized boosting models (a machine learning method), and 3) flexible discriminant analysis (a classification method). The three model simulations were averaged (ensemble forecasting, sketched below) to increase the robustness of the predictions. To assess future economic gains and losses, the modelled species’ occurrence probabilities were linked with the spatially assigned monetary values. The highest current annual benefits are obtained from V. paradoxa (54,111 ± 28,126 US$/cell), followed by P. biglobosa (32,246 ± 16,526 US$/cell) and A. digitata (9,514 ± 6,243 US$/cell). However, in the prediction, large areas will lose up to 50% of their current economic value by 2050. Vitellaria paradoxa and Parkia biglobosa, which currently provide the highest economic benefits, are heavily affected. Adansonia digitata is less strongly affected by environmental change and might regionally even supply increasing economic benefits, in particular in the west and east of the investigation area. We conclude that adaptive strategies are needed to create alternative income opportunities, in particular for the women who are responsible for collecting the NTFPs. The findings provide a benchmark for local policy-makers to economically compare different land use options and adjust existing management strategies for the near future. Overall, this thesis improves our understanding of the impacts of climate and land use changes on West African vegetation patterns, plant diversity and provisioning ecosystem services. Climate change had spatially varying impacts (positive and negative effects) on vegetation cover and plant diversity, while predominantly negative effects resulted from human pressure. Regionally contrasting impacts of environmental change were also found for the provisioning ecosystem services.
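The ensemble forecasting step referenced above can be sketched as a weighted average of per-cell occurrence probabilities. The model names mirror the text, but the grids and skill weights (e.g. AUC-style evaluation scores) are hypothetical toy values, not results from the thesis.

```python
# Illustrative sketch of consensus/ensemble forecasting: occurrence
# probabilities from several niche models combined by a weighted average.
import numpy as np

def consensus_projection(projections: dict, weights: dict) -> np.ndarray:
    """Weighted average of per-cell occurrence probabilities; weights
    (e.g. model evaluation scores) are normalized to sum to one."""
    total = sum(weights.values())
    return sum(weights[m] * projections[m] for m in projections) / total

# Per-cell probabilities from three niche-based models on a toy 2x2 grid.
projections = {
    "GAM": np.array([[0.8, 0.4], [0.2, 0.6]]),
    "GBM": np.array([[0.7, 0.5], [0.3, 0.5]]),
    "FDA": np.array([[0.9, 0.3], [0.1, 0.7]]),
}
weights = {"GAM": 0.85, "GBM": 0.90, "FDA": 0.80}  # hypothetical skill scores
print(consensus_projection(projections, weights))
```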
This dissertation is concerned with the role of prosody and, specifically, linguistic rhythm for the syntactic processing of written text. My aim is to put forward, provide evidence for, and defend the following claims:
1. While processing written sentences, readers make use of their phonological knowledge and generate a mental prosodic-phonological representation of the printed text.
2. The mental prosodic representation is constructed in accordance with a syntactic description of the written string. Constraints at the interface of syntax and phonology provide for the compatibility of the syntactic analysis and the (mental) prosodic rendition of the sentence.
3. The implicit prosodic structure readers impose on the written string entails phonological phrasing and accentuation, but also lower level prosodic features such as linguistic rhythm which emerges from the pattern of stressed and unstressed syllables.
4. Phonological well-formedness conditions accompany and influence the process of syntactic parsing in reading from the very beginning, i.e. already at the level of recognizing lexical categories. At points of underspecified syntactic structure, syntactic parsing decisions may be made on the basis of phonological constraints alone.
5. In reading, the implicit local lexical-prosodic information may be more readily available to the processing mechanism than higher-level discourse structural representations and consequently may have more immediate influence on sentence processing.
6. The process of sentence comprehension in reading is conditioned by factors that are geared towards sentence production.
7. The interplay of syntactic and phonological processes in reading can be explained with recourse to a performance-compatible competence grammar.
The evidence from three reading experiments supports these points and suggests a model of grammatical competence in which constraints from various domains (syntax, semantics, pragmatics, discourse structure, and phonology) interact in providing the possible structural, i.e. grammatical descriptions.
The importance of RNA in molecular and cell biology has long been underestimated. Besides transmitting genetic information, RNA has in recent years been revealed to perform crucial tasks, especially in gene regulation. Riboswitches are natural RNA-based genetic switches that have been known for only ten years. They directly sense small-molecule metabolites and, in response, regulate the expression of the corresponding metabolic genes. Within recent years, artificial riboswitches have been developed that operate according to user-defined demands. Hence, they represent powerful tools for synthetic biology.
This study focused on the development of engineered catalytic riboswitches for conditional gene expression in eukaryotes. A self-cleaving hammerhead ribozyme was linked to a tetracycline-binding aptamer in order to regulate ribozyme cleavage allosterically with tetracycline. By integrating such a hybrid molecule into a gene of interest, mRNA cleavage, and thereby gene expression, becomes controllable in a ligand-dependent manner. The linking domain between ribozyme and aptamer was randomised. Tetracycline-inducible ribozymes were isolated after eleven cycles of in vitro selection (SELEX). 80% of the analysed ribozymes show cleavage that strongly depends on tetracycline. In the presence of 1 μM tetracycline, their cleavage rates are comparable to that of the parental hammerhead ribozyme; in its absence, cleavage rates are reduced by up to 333-fold. The allosteric ribozymes bind tetracycline with similar affinity and specificity as the parental aptamer. Ribozyme cleavage is fully induced within minutes after the addition of tetracycline. Interestingly, the isolated linker domains exhibit structural consensus motifs rather than consensus sequences.
When transferred to yeast, three switches reduced reporter gene expression by 30–60% in the presence of tetracycline; none of them controlled gene expression in mammalian cells. In vitro selected molecules do not necessarily retain their characteristics when applied in a cellular context. Therefore, high-throughput screening and selection systems were developed in mammalian cells. The screening system is based on two fluorescent reporter proteins (GFP and mCherry). 1152 individual constructs of the selected ribozyme pool were tested, but none of them reduced reporter gene expression significantly in the presence of tetracycline. The selection system employs a fusion peptide encoding two selection markers (hygromycin B phosphotransferase and HSV thymidine kinase), facilitating both negative and positive selection. 6.5 × 10⁴ individual constructs of the selected ribozyme pool are currently under investigation.
Nuclear Magnetic Resonance ("NMR") is a powerful and versatile technique relying on nuclei that possess a spin. Since its discovery more than six decades ago, NMR and related techniques have become tools with innumerable applications throughout the fields of physics, chemistry, biology and medicine. Numerous Nobel Prizes have been awarded for work in the field, and a multi-billion-dollar industry has developed on its basis.
One of NMR's major shortcomings is its inherent lack of sensitivity. Because the technique relies on the Boltzmann populations of spin states separated by a minuscule Zeeman splitting, this is particularly true for room-temperature experiments.
As a result, in an enormous technological effort to enlarge the Zeeman splitting, NMR magnets have been moving to higher and higher magnetic fields. However, even for proton spins, which possess the largest magnetic moment of all nuclei, the degree of polarization that can be achieved in the strongest spectroscopic magnets available today (~24 T) at room temperature is merely ~8 × 10⁻⁵. In other words, this low polarization theoretically allows a sensitivity enhancement of 10⁴ towards full polarization, as the following estimate shows.
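The quoted figure follows directly from the thermal (Boltzmann) polarization of a spin-1/2 ensemble; a worked check of the numbers:

```latex
% Thermal polarization of a spin-1/2 ensemble in the high-temperature limit:
P = \tanh\!\left(\frac{\gamma \hbar B_0}{2 k_B T}\right)
  \approx \frac{\gamma \hbar B_0}{2 k_B T}
% For protons (\gamma/2\pi = 42.58~\mathrm{MHz/T}) at B_0 = 24~\mathrm{T}
% and T = 298~\mathrm{K}:
P \approx \frac{h \cdot 42.58~\mathrm{MHz/T} \cdot 24~\mathrm{T}}
               {2 k_B \cdot 298~\mathrm{K}}
  \approx 8 \times 10^{-5}
```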
Since Magnetic Resonance Imaging ("MRI") is based on the same principle, it shares this problem with NMR. Furthermore, for technical and physiological reasons, full-body MRI tomographs do not reach the magnetic field strengths of spectroscopic NMR magnets, making this even more of an issue for MRI.
In consequence, MRI is chiefly restricted to detecting protons, while both MRI and NMR detection of 13C (or other low-γ nuclei) under physiological conditions, i.e. low natural abundance of 13C and a low concentration of the respective substance, suffer from the long acquisition times that are necessary to obtain adequate signal-to-noise ratios ("SNR").
However, this drawback of NMR can be overcome. The enormous potential sensitivity increase of four orders of magnitude can, at least partially, be exploited by several hyperpolarization techniques, creating entirely new applications and fields of research.
These hyperpolarization techniques comprise chemical approaches like Parahydrogen Induced Polarization ("PHIP") or Photochemically Induced Dynamic Nuclear Polarization ("Photo-CIDNP"), as well as physical techniques like optically pumped (noble) gases [13, 14] or Dynamic Nuclear Polarization ("DNP"), which will be the focus of this work. A hyperpolarized substance renders a larger signal without being physically or chemically altered in any other way. It is therefore "marked" without any marker, making it a marker-free contrast agent for MRI.
DNP is a technique in which hyperpolarization of nuclear spins is achieved by microwave ("MW") irradiation of unpaired electron spins in radicals that are coupled to these nuclei, e.g. 1H, 13C or 15N. The electron spin population is perturbed if the microwave irradiation is resonant with the electron spin transition, which affects the polarization of nearby hyperfine-coupled nuclei. For large microwave power (i.e. saturating the electron spin transition), the thermal electron spin polarization, which is orders of magnitude larger, is effectively transferred to these nuclear spins in the sample. For proton spins the maximum polarization gain amounts to 660, whereas for 13C the sensitivity gain can be as large as 2600, as the ratio of gyromagnetic ratios below illustrates. In contrast to e.g. PHIP, which is restricted to specific reaction precursors, DNP is not limited to specific nuclei or hyperpolarization target molecules, making it a very versatile technique. DNP was first proposed by Overhauser in 1953 [15] and experimentally observed shortly thereafter in metals [16] and liquids [17], both systems with mobile electrons. In the 1960s and 70s, DNP was used as a spectroscopic tool in liquids, thoroughly mapping the effect in the low-field regime. In addition, several other transfer mechanisms were discovered that are active in the solid state with localized electrons, namely the solid effect, the cross effect and thermal mixing. The theory for all three of these mechanisms predicts reduced transfer efficiencies at higher magnetic fields. This fact, together with the lack of high-frequency microwave sources to excite electron spins at magnetic field strengths above 1 T, effectively relegated DNP to the position of an interesting scientific curiosity.
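The maximum gains quoted above are simply the ratio of the electron to the nuclear gyromagnetic ratio:

```latex
% Maximum DNP enhancement: ratio of electron and nuclear gyromagnetic ratios.
\varepsilon_{\max} = \frac{\gamma_e}{\gamma_n}
% With \gamma_e/2\pi = 28.02~\mathrm{GHz/T}:
\frac{\gamma_e}{\gamma_{^{1}\mathrm{H}}}
  = \frac{28.02~\mathrm{GHz/T}}{42.58~\mathrm{MHz/T}} \approx 660,
\qquad
\frac{\gamma_e}{\gamma_{^{13}\mathrm{C}}}
  = \frac{28.02~\mathrm{GHz/T}}{10.71~\mathrm{MHz/T}} \approx 2600
```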
In the early 1990s, DNP experienced a renaissance when it was performed at high field in solid-state magic angle spinning ("MAS") experiments using high-power gyrotron microwave sources. This pioneering work sparked a surge of new developments and applications.
This success also triggered attempts to investigate the potential of DNP in the liquid state at high magnetic fields, e.g. at 3.4 T [35–38] and 9.2 T. To date, DNP can be considered one of the "hot topics" in the field of magnetic resonance, bringing about special issues in magnetic resonance journals and DNP sections at magnetic resonance conferences.
This thesis deals with the development of an in-bore liquid state DNP polarizer for MRI applications operating in flow-through mode at a magnetic field strength of 1.5 T. Following this introductory chapter, the theoretical background necessary to understand and interpret the experimental results is explained in chapter 2. Subsequently, chapter 3 deals with the issue of performing liquid state DNP at high magnetic fields and its challenges. The chapter comprises a quick overview of the necessary hardware, the experimental findings for various samples and the interpretation of these findings, along with the ramifications for the aim of this work. Chapter 4 deals with the issue of increasing sensitivity and contrast in MRI, in particular by means of DNP. The chapter illustrates the development of our polarizer by presenting the hardware that was developed and demonstrating its performance under various conditions. In addition, several alternative approaches are introduced and compared to our approach. Finally, chapter 5 summarizes the findings and gives an outlook on further developments.
The main purpose of the Transition Radiation Detector (TRD) located in the central barrel of ALICE (A Large Ion Collider Experiment) is electron identification for separation from pions at momenta pt > 1 GeV/c, since in this momentum range the measurement of the specific energy loss (dE/dx) in the Time Projection Chamber (TPC) is no longer sufficient. Furthermore, it provides a fast trigger for charged particles with high transverse momentum (pt > 3 GeV/c) and contributes significantly to the optimization of the tracking of reaction products in heavy-ion collisions. Its full setup comprises 18 supermodules, of which 13 are presently operational and mounted cylindrically around the beam axis of the Large Hadron Collider (LHC). A supermodule contains either 30 or 24 chambers, each consisting of a radiator for the creation of transition radiation, a drift and an amplification region, followed by the read-out electronics. In total, the TRD is an array of 522 chambers operated with about 28 m³ of a Xe-CO2 [85-15%] gas mixture.

During the work of this thesis, the testing, commissioning, operation and maintenance of detector parts, the gas system and its online quality monitor, improvements of the detector control user interface, and studies of a new pre-trigger module for data read-out were accomplished. The TRD gas system mixes, distributes and circulates the operational gas mixture through the detector. Its overall optimization was achieved by minimizing gas leakage; by surveying, controlling, maintaining and continuously improving the system; and by designing and carrying out upgrades. Gas quality monitors of the type "GOOFIE" (Gas prOportional cOunter For drIfting Electrons) can be used in gaseous detectors as online monitors of the electron drift velocity, gain and gas properties. One of these devices has been implemented within the TRD gas system, while another surveys the gas of the TPC. Both devices had to be adapted to the specific needs of the detectors, were under constant surveillance and control, and needed to be developed further on both the hardware and the software side.

To improve the operation of the TRD, modifications of its DCS (Detector Control System) software, used for monitoring, controlling, operating, regulating and configuring hardware and computing devices, were carried out. The DCS is designed to enable an operator to interact with equipment through user interfaces that display the information from the system. The main focus of this work was the optimization of the usability and design of the user interface.

The front-end electronics of the TRD require an early start signal ("pre-trigger") from the fast forward detectors or the Time-Of-Flight detector during the running periods. The realization of a new hardware concept for the read-out of the TRD pre-trigger system was studied, and first tests were performed. This new module, called PIMDDL (Pre-trigger Interface Module Detector Data Link), is meant to acquire all data necessary to simulate and predict the full pre-trigger functionality and to verify its proper operation. Furthermore, it shall provide all functionalities of the so-called Control Box Bottom as well as keep the functionalities of the already existing PIM (Pre-trigger Interface Module), in order to combine and replace these two modules in the future.
Grave visitation and concepts of life after death: a comparative study in Frankfurt and Hong Kong (2012)
Grave visitation is a tradition common to many cultures. Yet this sensitive topic is rarely addressed in cross-cultural comparisons. Why do people visit the graves of their parents? What do they do in the cemetery? Could there be a similar set of intentions behind the diverse customs? By examining the visiting patterns in Frankfurt and Hong Kong, this research aims to compare the concepts of life after death that underlie the practice. Phenomenologically oriented, this is an exploratory study based on qualitative interviews. Using in-depth semi-structured interviewing and thematic analysis, the project covered twelve cases in each city. Research participants were purposefully selected. Data analysis was conducted according to the analytical framework approach. After identifying and clustering themes, three central and interlocking issues were found: 1. the grave as a new home that connects the living and the dead; 2. death and the interpretation of hope; and 3. intergenerational reciprocity and continuing bonds. Though the images of life after death were ambiguously depicted, grave tending reflected shared expectations of the world beyond. Most significantly, visits to the graves strengthened the ties between the living and the dead, revealing a longing for a continued bond regardless of the form of burial. In the end, this research illustrated not only the meanings of death but also the notion of religiosity through an evaluation of the secularisation thesis. Emphasising the dynamics of tradition and personal experience, this contextual reading of current death rituals serves as an original source for religious dialogue and education.