Doctoral theses, 2017 (182 results, full text available)
In this work, the flexibility requirements of a highly renewable European electricity network, which has to cover fluctuations of wind and solar power generation on different temporal and spatial scales, are studied. Cost-optimal ways to provide this flexibility are analysed, including the optimal distribution of the infrastructure, large-scale transmission, storage, and dispatchable generators. To examine these issues, a model of increasing sophistication is built: first, different flexibility classes of conventional generation are considered, then storage is added, and finally transmission, so that the effect of each can be isolated.
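As an illustration only (not the thesis model, and with invented numbers), the core trade-off between renewable fluctuations, storage, and backup capacity can be sketched as a toy dispatch calculation:

```python
# Toy sketch: how the required backup capacity (peak residual load) falls
# when an ideal storage may shift renewable surpluses in time.
# All time series and the storage size are made-up demonstration values.

def required_backup(load, renewables, storage_energy=0.0):
    """Return the peak residual load (i.e. needed backup capacity) for a
    time series, letting an idealised storage cover part of the shortfalls."""
    store = 0.0   # current storage filling level
    peak = 0.0    # largest uncovered shortfall seen so far
    for l, r in zip(load, renewables):
        residual = l - r
        if residual < 0:                      # surplus hour: charge storage
            store = min(storage_energy, store - residual)
        else:                                 # shortfall: discharge first
            covered = min(store, residual)
            store -= covered
            peak = max(peak, residual - covered)
    return peak

load       = [1.0, 1.0, 1.0, 1.0]
renewables = [1.5, 0.2, 1.4, 0.3]

print(required_backup(load, renewables))                   # without storage
print(required_backup(load, renewables, storage_energy=0.5))  # with storage
```

In this invented example the storage absorbs surplus hours and discharges during shortfalls, reducing the peak residual load that conventional backup must cover.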
To conclude, this work showed that slowly flexible base-load generators can only be used in energy systems with renewable shares below 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a Europe-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies allows the required conventional backup capacity to be reduced further. This highlights the importance of including additional flexibility-providing technologies in the energy system to balance the fluctuations caused by the renewable energy sources, for example advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. It is therefore clear that increasing pan-European network connectivity enables a cost-efficient integration of renewable energies, which is strongly needed to reach current climate change prevention goals.
It was also shown that a similarly cost-efficient, highly renewable European electricity system can be achieved even when a wide range of additional policy constraints and plausible changes of economic parameters are taken into account.
Echolocation allows bats to orient in darkness without using visual information. Bats emit spatially directed high-frequency calls and infer spatial information from the echoes reflected off objects (Simmons 2012; Moss and Surlykke 2001, 2010). The echoes provide momentary snapshots, which have to be integrated to create an acoustic image of the surroundings. The spatial resolution of the computed image increases with the number of received echoes. Thus, a high call rate is required for a detailed representation of the surroundings.
One important parameter that bats extract from the echoes is an object's distance. The distance is inferred from the echo delay, i.e. the time between call emission and echo arrival (Kössl et al. 2014). The echo delay decreases with decreasing distance, and delay-tuned neurons have been characterized along the ascending auditory pathway, from the inferior colliculus (Wenstrup et al. 2012; Macías et al. 2016; Wenstrup and Portfors 2011; Dear and Suga 1995) to the auditory cortex (Hagemann et al. 2010; Suga and O'Neill 1979; O'Neill and Suga 1982).
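The delay-to-distance relation described above can be made concrete with a back-of-the-envelope sketch; the assumed speed of sound (~343 m/s in air at room temperature) and the example delay are illustrative values, not taken from the thesis:

```python
# The echo travels to the object and back, so the target distance is half
# the call-to-echo delay times the speed of sound.

SPEED_OF_SOUND = 343.0  # m/s, assumed value for air at ~20 °C

def target_distance(echo_delay_s):
    """Distance to a reflecting object given the call-to-echo delay in seconds."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A 5 ms echo delay corresponds to an object roughly 0.86 m away.
print(target_distance(0.005))
```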
Electrophysiological studies usually characterize neuronal processing by using artificial and simplified versions of the echolocation signals as stimuli (Hagemann et al. 2010; Hagemann et al. 2011; Hechavarría and Kössl 2014; Hechavarría et al. 2013). The high controllability of artificial stimuli simplifies the inference of the neuronal mechanisms underlying distance processing. However, it remains largely unexplored how neurons process delay information from echolocation sequences. The main purpose of the thesis is to investigate how natural echolocation sequences are processed in the brain of the bat Carollia perspicillata. Bats actively control the sensory information they gather during echolocation. This allows experimenters to easily identify and record the acoustic stimuli that are behaviorally relevant for orientation. For recording echolocation sequences, a bat was placed on the mass of a swinging pendulum (Kobler et al. 1985; Beetz et al. 2016b). During the swing the bat emitted echolocation calls that were reflected off surrounding objects. An ultrasound-sensitive microphone travelling with the bat, positioned above the bat's head, recorded the echolocation sequence. The echolocation sequence carried the delay information of an approach flight and was used as a stimulus for neuronal recordings from the auditory cortex and inferior colliculus of the bats.
Presenting stimuli at high rates suppresses cortical neuron activity in other species, such as rats and guinea pigs (Wehr and Zador 2005; Creutzfeldt et al. 1980). Therefore, I tested whether neurons of bats are suppressed when stimulated with the high acoustic rates present in echolocation sequences (sequence situation). Additionally, the bats were stimulated with randomized call-echo elements of the sequence at an interstimulus interval of 400 ms (element situation). To quantify the neuronal suppression induced by the sequence, I compared the response pattern in the sequence situation with the concatenated response patterns from the element situation. Surprisingly, although bats should be adapted to processing high acoustic rates, their cortical neurons are strongly suppressed in the sequence situation (Beetz et al. 2016b). However, instead of being completely suppressed throughout the sequence, the neurons partially recover from suppression at a unit-specific call-echo element. Multi-electrode recordings from the cortex allowed assessment of the representation of echo delays along the cortical surface. At the cortical level, delay-tuned neurons are topographically organized. Cortical suppression sharpens neuronal tuning and decreases the blurriness of the topographic map. With neuronal recordings from the inferior colliculus, I tested whether the echolocation sequence also induced neuronal suppression at the subcortical level. The sequence-induced suppression was weaker in the inferior colliculus than in the cortex. The collicular response enables the neurons to track the acoustic events in the echolocation sequence, and collicular suppression mainly improves the signal-to-noise ratio. In conclusion, the results demonstrate that cortical suppression is not necessarily a shortcoming for temporal processing of rapidly occurring stimuli, as it has previously been interpreted.
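The sequence-versus-element comparison described above can be sketched as a simple suppression index; both the index definition and the spike counts below are hypothetical illustrations, not the actual analysis of the thesis:

```python
# Hypothetical suppression index: contrast a unit's spike counts during the
# full echolocation sequence with the summed counts to the same call-echo
# elements presented in isolation. All counts are invented.

def suppression_index(seq_spikes, element_spikes):
    """1.0 = complete suppression, 0.0 = no suppression relative to
    the isolated-element responses."""
    return 1.0 - sum(seq_spikes) / sum(element_spikes)

# Per-element spike counts when the elements are presented in isolation ...
element_spikes = [12, 10, 9, 11, 8]
# ... and aligned counts for the same elements during the full sequence.
seq_spikes = [2, 1, 0, 6, 1]

print(suppression_index(seq_spikes, element_spikes))  # → 0.8
```

In this toy example the unit fires only 20% of the spikes it produces to isolated elements, i.e. it is 80% suppressed, while the relatively large count at the fourth element mimics the partial recovery at a unit-specific call-echo element.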
Natural environments are usually composed of multiple objects. Each echolocation call therefore reflects off multiple objects, resulting in multiple echoes following each call. At present, it is largely unexplored how neurons process echolocation sequences containing echo information from more than one object (multi-object sequences). Therefore, I stimulated bats with a multi-object sequence containing echo information from three objects located at different distances from one another. I tested the influence of each object on the neuronal tuning by stimulating the bats with different sequences created by filtering object-specific echoes out of the multi-object sequence. The cortex most reliably processes echo information from the nearest object, whereas echo information from distant objects is not processed, due to neuronal suppression. Collicular neurons are less selective for echo information from particular objects and respond to each echo.
For proper echolocation, bats have to distinguish their own biosonar signals from those of conspecifics. This can be quite challenging when many bats echolocate close to each other. In behavioral experiments, the echolocation performance of C. perspicillata was tested in the presence of potentially interfering sounds. In the presence of acoustic noise, the bats increase their sensory acquisition rate, which may increase the update rate of sensory processing. Neuronal recordings from the auditory cortex and inferior colliculus lend support to this hypothesis: although there were signs of acoustic interference or jamming at the neuronal level, the neurons were not completely suppressed and responded to the rest of the echolocation sequence.
The composition of cellular membranes is extremely complex, and the mechanisms underlying their homeostasis are poorly understood. Organelles within a eukaryotic cell require a non-random distribution of membrane lipids, and tight regulation of the membrane lipid composition is a prerequisite for the maintenance of specific organellar functions. Physical membrane properties such as bilayer thickness, lipid packing density, and surface charge are governed by the lipid composition and change gradually from the early to the late secretory pathway. As the endoplasmic reticulum (ER) is situated at the beginning of the cell's secretory pathway, it has to accept and accommodate a great variety and quantity of secretory and transmembrane proteins, which enter the ER on their way to their final cellular destination. Secretory proteins can be translocated into the lumen of the ER co- or posttranslationally, and membrane proteins are inserted into the ER membrane. In the oxidative milieu of the ER lumen, supported by a variety of chaperones, proteins can fold into their native form.
If the folding capacity of the ER lumen is exceeded, mis- or unfolded proteins accumulate in the lumen of the ER, triggering the unfolded protein response (UPR). This highly conserved program activates a widespread transcriptional response to restore protein folding homeostasis. In fact, 7–8% of all genes in the yeast Saccharomyces cerevisiae (S. cerevisiae) are regulated by the UPR. The mechanism underlying the activation of the UPR by protein folding stress has been investigated thoroughly over the last decades, and many of its mechanistic details have been elucidated. Recently, it became evident that aberrant lipid compositions of the ER membrane, collectively referred to as lipid bilayer stress, are equally potent in activating the UPR. The molecular mechanism underlying this membrane-activated UPR, however, remained unclear.
This study focuses on the UPR in S. cerevisiae and characterizes the inositol-requiring enzyme 1 (Ire1) as the sole UPR sensor in this organism. Active Ire1 forms oligomers and, together with the tRNA ligase Rlg1, splices the immature mRNA of the transcription factor HAC1. The resulting mature HAC1 mRNA gives rise to the active Hac1 protein, which binds to UPR elements in the nucleus and activates the expression of UPR target genes. Here, a combination of in vivo and in vitro experiments, supplemented by molecular dynamics (MD) simulations performed by Roberto Covino and Gerhard Hummer (MPI for Biophysics, Frankfurt), is used to identify the molecular mechanism of Ire1 activation by lipid bilayer stress. The analysis focuses on the juxta- and transmembrane region of Ire1. Bioinformatic analyses revealed a putative ER-lumenal amphipathic helix (AH) N-terminal to, and partially overlapping with, the transmembrane helix (TMH). This predicted AH contains a large hydrophobic face, which inserts into the ER membrane, forcing the TMH into a tilted orientation within the membrane. The resulting unusual architecture of Ire1's AH and TMH constitutes a unique structural element required for the activation of Ire1 by lipid bilayer stress.
To investigate the function of the AH in a physiological context, different variants of Ire1 were expressed from their endogenous locus under the control of their endogenous promoter. The functional role of the AH was tested by disrupting its amphipathic character through the introduction of charged residues into its hydrophobic face. The role of a conserved negative residue between the TMH and the AH (E540 in S. cerevisiae) was tested by substituting it with a unipolar, polar, or positively charged residue. These variants were extensively characterized using a series of assays.
This thesis provides evidence that the AH is crucial for the function of Ire1: mutant variants with a disrupted (F531R, V535R) or otherwise modified AH (E540A) exhibited a lower degree of oligomerization and failed to catalyze the splicing of the HAC1 mRNA as efficiently as the wild-type control. Likewise, the induction of PDI1, a target gene of the UPR, was greatly reduced in mutants with a disrupted or defective AH. These data reveal an important functional role of the AH for normal Ire1 function.
An in vitro system was established to analyze the membrane-mediated oligomerization of Ire1. This system enabled an isolated functional analysis of the AH and TMH during Ire1 activation by lipid bilayer stress. A fusion construct was produced encoding the maltose binding protein (MBP) from Escherichia coli (E. coli) fused N-terminally to the AH and TMH of Ire1. The heterologous production in E. coli, purification, and reconstitution of this minimal Ire1 sensor in liposomes were established as part of this study. To analyze the oligomeric status of the minimal sensor in different lipid environments, continuous-wave electron paramagnetic resonance (cwEPR) spectroscopy experiments were performed. These experiments revealed that the molecular packing density of the lipids had a significant influence on the oligomerization of the spin-labeled membrane sensor: increasing packing densities resulted in sensor oligomerization. The AH-disruptive F531R mutant, in which the amphipathic character of the AH was destroyed, showed no membrane-sensitive changes in its oligomerization status.
Thus, the activation of Ire1 by lipid bilayer stress is achieved by a membrane-based mechanism. According to the current model, the AH induces a local membrane compression by inserting its large hydrophobic face into the membrane. As membrane thickness and acyl chain order are interconnected, this compression simultaneously results in an increased local disordering of the lipid acyl chains. Supporting MD simulations performed by Roberto Covino and Gerhard Hummer revealed that the bilayer compression is significantly more pronounced in a densely packed lipid environment than in one of lower lipid packing density. Hence, the energetic cost of the local compression increases with the packing density of the membrane but is compensated for by the oligomerization of Ire1. This minimization of the energetic cost of membrane deformation forms the basis for the activation of Ire1 by lipid bilayer stress.
The African continent is regularly portrayed as an indolent space with a well-known reputation as a chaotic continent. Viewed as lacking vision, means, and capacities, Africa is perceived at best as a place marked by a permanent status quo and stagnation or, in worst-case scenarios, as a declining continent. References to the continent are frequently synonymous with famine, poverty, war, and the like. Such portrayals are all the more intriguing given that the continent is known for its abundant natural resources, such as timber, oil, natural gas, and minerals, whose reserves, moreover, are not well known either to the African people or to their leaders. As a result, there is still much progress to be made in tapping these resources in order to improve the daily lives of African citizens.
In such a context, dominated by infantile carelessness throughout the continent, the interventions of actors from outside the continent are the only hope of bringing some vitality to a continent cloaked in "la grande nuit – the great darkness" (Mbembé 2013). Thus, during the main sequences of recent history, representing different forms of Western penetration and activity on the African continent (slavery, imperialism, colonization), all the Western world's contributions have obviously not sufficed to boost Africa and take it out of its never-ending childhood. It has remained just as passive and apathetic today as it was yesterday.
The attraction of Asian actors to the continent is even more recent. Consistent with its above-mentioned indolence, Africa is seen as easy and defenceless prey for Korean, Japanese, Indian, Malaysian, or Chinese conquerors. In the latter case, the insatiable appetite for natural resources whose reserves are being rapidly depleted is the cornerstone of their foreign aid policy. This led China to colonize the continent, showing a preference for pariah regimes which held no appeal for the West, by sending an army of workers to extract those resources (Lum et al. 2009), in defiance of all national and international regulations and on the basis of completely opaque contracts.
Although the concept of African Agency was rapidly developed with respect to several African countries, the aim of this study is more specific to Cameroon's mining sector, in which different entrepreneurs from abroad have become involved over time. The thesis investigates whether indigenous citizens took part in any way in the development of mining projects in the country. It assesses and analyses actions and reactions initiated and undertaken by local people, in the context of China's presence within Cameroon's mining sector, to promote and advance their interests over those of foreign investors. To the author's knowledge, no other study has investigated African Agency in Cameroon's mining sector as a whole.
In conducting this study, a multi-method research framework was developed, comprising a series of methods used to collect data and analyse the concepts of African Agency and associated Political Ecology as they developed within Cameroon's mining sector. The quantitative part followed a positivist and empirical approach, deducing evidence from statistical data collected by means of 167 questionnaire surveys administered to local inhabitants and workers randomly selected at mining sites and in riparian communities. The questionnaires helped to capture Cameroonians' perceptions of the recent, gradual but significant influx of international actors, and more precisely Chinese players, into the mining sector; in addition, observational data were collected across the GVC as developed in the Betare-Oya region. Complementing these techniques, qualitative methods helped to study and deepen the understanding of human behaviour and the social world from a holistic perspective, through individual interviews, focus groups, and direct observations on the ground. Furthermore, a spatial analysis method based on a land use classification technique served to detect changes to land use/land cover brought on by the mechanised mining activities undertaken in this region. The sequencing of the collected data and their processing from a grounded theory perspective led to the formulation and specification of Cameroon's Ecological Agency theory.
One of the earliest steps of this work consisted in a literature review and in placing the African Agency concept in a broader context. This led to the state of the art, a specification of the research content of the work, and the main theories undergirding this thesis. Before examining developments that emerged during the last decade, a historical perspective was provided in order to show how African societies started mining operations and how they dealt with foreign partners interested in their mining resources. The aim was to show that while Western imperialism presented a challenge for the sector, it did not erase local participation, despite the constraints associated with such involvement.
...
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental, and stochastic factors. Although many theories have been established during the last decades, none of them is able to fully describe the complex mechanisms that lead to ageing. Generally, biological processes and environmental factors cause molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems, including the repair, remodelling, and degradation of damaged or impaired components, work against this accumulation. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes that lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To achieve a general overview of the different biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was performed, aiming to reveal genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Sophisticated and comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and to characterise the autophagy pathway, its associated single components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified that correspond to the different sub-pathways of autophagy. Owing to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, several proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
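As a rough illustration of this kind of topological check (not Path2PPI itself, and with an invented toy network), one can compare a network's mean clustering coefficient with that of random graphs having the same numbers of nodes and edges:

```python
# Compare a toy "interaction network" against degree-agnostic random graphs
# with identical node and edge counts. All data are invented.
import itertools
import random

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph given as
    a dict mapping each node to the set of its neighbours."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count edges among the neighbours of v
        links = sum(1 for a, b in itertools.combinations(nbrs, 2)
                    if b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def random_graph(nodes, n_edges, rng):
    """Uniform random graph with the same node and edge counts."""
    adj = {v: set() for v in nodes}
    for a, b in rng.sample(list(itertools.combinations(nodes, 2)), n_edges):
        adj[a].add(b)
        adj[b].add(a)
    return adj

# Toy "modules": two triangles joined by a bridge edge.
net = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
       "D": {"C", "E", "F"}, "E": {"D", "F"}, "F": {"D", "E"}}

rng = random.Random(42)
n_edges = sum(len(n) for n in net.values()) // 2
random_mean = sum(clustering(random_graph(list(net), n_edges, rng))
                  for _ in range(100)) / 100

print(clustering(net), random_mean)
```

The modular toy network is far more clustered than the average of its randomized counterparts, which is the kind of deviation from random expectation that supports biological (i.e. non-random) structure.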
The presented biological network provides a systems-biological view of autophagy and enables further studies aiming to analyse the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential interventions into the ageing process that could extend the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
Heat stress transcription factors (Hsfs) play an essential role in the heat stress response and thermotolerance by controlling the transcriptional activation of heat stress response (HSR) genes, including molecular chaperones. Plant Hsf families show a striking multiplicity, with more than 20 members in many plant species. Among the Hsfs, HsfA1s act as the master regulators of the heat stress (HS) response, and HsfA2 becomes one of the most abundant Hsfs during HS. Using transgenic plants with suppressed expression of HsfA2, we have shown that this Hsf is involved in the acquired thermotolerance of S. lycopersicum cv. Moneymaker, as HsfA2 is required for the high expression and maintenance of increased levels of Hsps during repeated cycles of HS treatment.
Interestingly, HsfA2 undergoes temperature-dependent alternative splicing (AS), which results in the generation of seven transcript variants. Three of these transcripts (HsfA2-Iα-γ), generated by alternative splicing of a second, newly identified intron, encode the full-length protein involved in acquired thermotolerance. Another three transcripts (HsfA2-IIIα-γ) are generated by alternative splicing in intron 1, leading in all cases to a premature termination codon and targeting of these transcripts for degradation via the nonsense-mediated mRNA decay (NMD) mechanism.
Interestingly, excision of intron 2 results in the generation of a second, previously unreported protein isoform, annotated as HsfA2-II. HsfA2-II shows transcriptional activity similar to that of the full-length protein HsfA2-I in the presence of HsfA1a, but lacks the nuclear export signal (NES) required for nucleocytoplasmic shuttling, which allows efficient nuclear retention and stimulation of transcription of HS-induced genes. Furthermore, stability assays showed that HsfA2-II exhibits lower protein stability than HsfA2-I.
We identified the presence of a second intron and the generation of a second protein isoform in other Solanaceae species as well. Remarkably, we observed major differences in the splicing efficiency of HsfA2 intron 2 among different tomato species. Several wild tomato accessions exhibit a higher splicing efficiency that favors the generation of HsfA2-II, while in these species the splice variant HsfA2-Iγ is absent. This natural variation in splicing efficiency, occurring specifically at temperatures around 37.5 °C, is associated with the presence of three intronic polymorphisms. In the wild species, these polymorphisms seemingly restrict the binding of RS2Z36, identified as a putative splicing silencer for HsfA2 intron 2.
Tomato accessions with the polymorphic “wild” HsfA2 show enhanced thermotolerance against a direct severe heat stress incident due to the stronger induction of Hsps and other stress-induced genes. Introgression of the “wild” S. pennellii HsfA2 locus into the cultivar M82 resulted in enhanced seedling thermotolerance, highlighting the potential use of the polymorphic HsfA2 for breeding.
We conclude that alterations in the splicing efficiency of HsfA2 have contributed to the adaptation of tomato species to different environments, and that these differences might be directly related to natural variation in their thermotolerance.
The ALICE High-Level Trigger (HLT) is a large-scale computing farm designed and constructed for the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables real-time verification of the data quality.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to transferring raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time Projection Chamber (TPC). The TPC is the main tracking detector of ALICE and, at up to 16 GB/s, contributes over 90% of the overall data volume. The FCF processor implements the first of two steps in the TPC data reconstruction: it calculates space points and their properties from the charge clouds (clusters) created by charged particles traversing the TPC's gas volume. These space points are not only the basis for the tracking algorithm but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
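As a simplified sketch of the cluster-finder step (not the actual FCF firmware algorithm), a space point can be estimated as the charge-weighted centroid of a cluster's pad/time samples; the sample values below are invented:

```python
# Estimate a space point from a charge cloud: the centroid of the samples,
# weighted by the deposited charge in each pad/time bin.

def cluster_centroid(samples):
    """samples: iterable of (pad, time_bin, charge) triples of one cluster.
    Returns (pad centroid, time centroid, total charge)."""
    total = sum(q for _, _, q in samples)
    pad = sum(p * q for p, _, q in samples) / total
    time = sum(t * q for _, t, q in samples) / total
    return pad, time, total

# A small symmetric charge cloud centred on pad 10, time bin 5.
cloud = [(9, 5, 2), (10, 4, 3), (10, 5, 8), (10, 6, 3), (11, 5, 2)]
print(cluster_centroid(cloud))  # → (10.0, 5.0, 18)
```

Only the centroid and summary properties such as the total charge need to be stored per cluster, which hints at why cluster recording (plus entropy coding) compresses so much better than the raw samples.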
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure towards the detector readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run 1 campaign. A comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run 2 campaign, yields a speedup factor of 8.5. In total, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the aforementioned types. The performance of the reconstruction with respect to the physics analysis is equivalent to or better than that of the official ALICE Offline clusterizer. Therefore, ALICE data taking was switched in 2011 to recording only the FCF clusters in compressed form, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run 3 campaign, starting in 2020, the FCF forms the foundation of the ALICE data taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before they enter the online computing farms is an essential and crucial part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. The work covers the conceptual design, optimisation, and implementation, as well as the verification, and is completed by performance benchmarks and experience from real data taking.
The mainstream law and economics approach has dominated positive analysis and normative design of economic regulations. This approach represents a form of applied neoclassical and new institutional economics. Neoclassical and/or new institutional economic theories, models, and analytical concepts are applied automatically to economic regulatory problems.
This automatic application of neoclassical economics to economic regulatory problems loses sight of the valid insights of non-neoclassical schools of economic thought and theories, which may illuminate important aspects of the regulatory problems. This thesis, therefore, advocates an integrated law and economics approach to economic regulations. This approach identifies the relevant insights of neoclassical and non-neoclassical schools of thought and theories and refines them through a process of cross-criticism. In this process, the insights of each school of thought are subjected to the critiques of other schools of thought. The resulting refined insights, which are more likely to be valid, are then integrated consistently through various techniques of integration.
Not only does neoclassical (micro and macro) law and economics overlook the valid insights of non-neoclassical schools of thought, it is also highly reductionist. It ignores the interdependencies of legal institutions, highlighted mainly by the comparative capitalism literature, and the structural interlinkages among socio-economic actors, highlighted by economic sociology and complexity economics. Rather, it takes rational individuals and their interactions subject to the constraint of isolated institution(s) as its unit of analysis. In place of this reductionist perspective, the thesis argues for a systemic approach to economic regulations. This systemic perspective replaces the reductionist unit of neoclassical regulatory analysis with a systemic unit of analysis that consists of the least non-decomposable actors’ network and its associated least non-decomposable institutional network. Then, the thesis develops an operationalized and replicable systemic framework for systemic analysis and design of institutional networks.
Both the systemic and integrated approaches are theoretically consistent and complementary. The systemic approach is in essence a way of thinking that requires a broad and rich informational basis, which can be secured by using the integrated approach. Due to their complementarity, they give rise to what I call “the integrated and systemic law and economics approach.” The thesis operationalizes this approach by setting out well-defined replicable steps and applying them to concrete regulatory problems, namely, the choice of a corporate governance model for developing countries and the development of a normative theory of economic regulations. These concrete applications demonstrate the critical bite of the integrated and systemic approach, which reveals significant shortcomings of mainstream law and economics’ answers to these regulatory questions. They also show the constructive potential of the integrated and systemic approach in overcoming the critiques advanced against the neoclassical regulatory conclusions.
The operationalized integrated and systemic approach is both a law and economics approach and a law and development approach. It not only provides an alternative to mainstream law and economics analysis and design of economic regulations; it also fills a significant analytical lacuna in the law and development literature, which lacks an analytical framework for the analysis and design of context-specific legal institutions that can promote economic development in developing economies.
The East African Rift System (EARS) was initiated in the Eocene epoch between 50 and 21 Ma, probably under the influence of mantle plumes that caused volcanism, flood basalts and extensional rifting in Ethiopia and the Afar region. As a result of magmatic intrusions and adiabatic decompression melting within the lithosphere, caused by the impact of the Kenya plume, the EARS propagated southward from Ethiopia to Kenya between about 30 and 15 Ma, coinciding with the occurrence of volcanism. The EARS developed further towards the south along the margins of the Tanzania Craton between 15 and 8 Ma. Previous findings of low-velocity anomalies within the upper mantle and the mantle transition zone indicate an upwelling of hot mantle material in the vicinity of the Afar region and the East African Rift. This study analyses P- and S-receiver functions in order to determine further impacts on the lithosphere from below. The aim was to determine the topographic undulations of deeper boundary layers and to identify their variability owing to the rifting processes and the formation of the EARS. The study area comprises the Tanzania Craton and the surrounding rift branches of the East African Rift System.
The region of the Rwenzori Mountains could be analysed in detail thanks to the large dataset of the RiftLink project. The P-receiver function technique and the H-K stacking method made it possible to determine different vP/vS ratios depending on the tectonic setting in the Rwenzori region: rift shoulders (vP/vS = 1.74), Albert Rift segment (vP/vS = 1.80), Edward Rift segment (vP/vS = 1.87) and Rwenzori Mountains (vP/vS = 1.86). To determine the topography of the Moho, it is necessary to take into account the thickness of the sedimentary layer, the surface topography, the azimuthal variations in crustal thickness and the impact of local anomalies. After correcting the Moho depths for these effects, significant variations in Moho topography could be determined. The Moho depths range from 29 to 39 km beneath the rift shoulders of the Albertine Rift. Within the rift valley, the crustal thickness varies between 25 and 31 km in the Edward Rift segment and between 22 and 30 km in the Albert Rift segment. An average crustal thickness of about 26 km within the rift valley indicates the lack of a crustal root beneath the Rwenzoris. Similar variations in crustal thickness were determined using an automatic procedure for analysing S-receiver functions that was developed in this study.
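The H-K stacking mentioned above is a standard grid search (after Zhu & Kanamori): for each trial crustal thickness H and vP/vS ratio K, the predicted arrival times of the Ps conversion and its crustal multiples are computed and the receiver-function amplitudes at those times are stacked; the (H, K) pair with the largest stack wins. A minimal sketch, with illustrative values for vP, the ray parameter and the phase weights (none of these are the thesis's actual parameters):

```python
import numpy as np

def hk_stack(rf, dt, p, vp=6.5, weights=(0.6, 0.3, 0.1)):
    """Grid search over crustal thickness H (km) and vP/vS ratio K.

    rf : P-receiver function sampled at dt (s), time zero = direct P arrival
    p  : ray parameter in s/km; vp and weights are illustrative defaults.
    """
    best_s, best_h, best_k = -np.inf, None, None
    for h in np.arange(20.0, 50.0, 0.5):
        for k in np.arange(1.60, 2.00, 0.01):
            qs = np.sqrt((k / vp) ** 2 - p ** 2)   # vertical S slowness (vs = vp/k)
            qp = np.sqrt(vp ** -2 - p ** 2)        # vertical P slowness
            times = (h * (qs - qp),                # Ps conversion
                     h * (qs + qp),                # PpPs multiple
                     2 * h * qs)                   # PpSs + PsPs multiple
            signs = (1.0, 1.0, -1.0)               # last multiple has negative polarity
            s = 0.0
            for w, t, sg in zip(weights, times, signs):
                i = int(round(t / dt))
                if 0 <= i < len(rf):
                    s += sg * w * rf[i]
            if s > best_s:
                best_s, best_h, best_k = s, h, k
    return best_h, best_k
```

On a synthetic receiver function with pulses at the predicted times, the search recovers the input thickness and vP/vS ratio; on real data the stack is evaluated on the interpolated traces of many events.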
The S-receiver functions are created by applying a rotation criterion in order to rotate the Z, N and E components into the L, Q and T components. Trial rotations with different incidence and azimuth angles are performed to determine the correct rotation angles. The latter are identified by means of the rotation criterion, namely the amplitude ratio of the converted Moho signal to the direct S/SKS-wave signal: the L component is rotated correctly, in the direction of the incident shear wave, when this amplitude ratio is at its maximum. After analysing the frequency content of the receiver functions in order to sort out harmonic and long-period traces, the individual Moho signals are checked for consistency in order to remove atypical signals. To increase the signal-to-noise ratio, the S-receiver functions are stacked. For this purpose, the signals of the direct shear waves must originate from similar epicentres; owing to the similar ray paths, the receiver functions then show comparable waveforms and converted signals. To perform the stacking, it is necessary to merge the datasets of adjacent stations in order to obtain a sufficient number of receiver functions. This is based on the assumption that, for similar propagation paths, the incident seismic waves arriving at adjacent stations sample the same subsurface structures to some extent. The approach accounts for the fact that the converted signals do not result exclusively from the piercing points at the boundary layers; further signals originate from conversions at the boundary layer within the Fresnel zone. The piercing points are derived from the significant signals in the receiver functions. Depending on the order in which the converted phases arrive on the traces, the signals are attributed to the theoretical discontinuities DIS1, DIS2, DIS3 and DIS4.
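The trial-rotation step described above can be sketched as a grid search over back-azimuth and incidence angle that maximises the amplitude ratio used as the rotation criterion. The rotation matrix below follows one common ZNE-to-LQT sign convention (conventions differ between processing packages), and the criterion implementation is a simplified assumption: the converted and direct-S amplitudes are read off the L trace at known sample indices.

```python
import numpy as np

def rotate_zne_to_lqt(z, n, e, baz_deg, inc_deg):
    """Rotate Z/N/E traces to L/Q/T for a given back-azimuth and incidence
    angle (one common sign convention; conventions differ between packages)."""
    baz, inc = np.radians(baz_deg), np.radians(inc_deg)
    r = -n * np.cos(baz) - e * np.sin(baz)      # radial, pointing away from the event
    t =  n * np.sin(baz) - e * np.cos(baz)      # transverse
    l =  z * np.cos(inc) + r * np.sin(inc)      # along the incident ray
    q = -z * np.sin(inc) + r * np.cos(inc)      # SV plane, perpendicular to the ray
    return l, q, t

def trial_rotation(z, n, e, i_conv, i_s, baz_grid, inc_grid):
    """Trial rotations: pick the (back-azimuth, incidence) pair that maximises
    |converted Moho signal| / |direct S signal| on the L trace."""
    best = (-np.inf, None, None)
    for baz in baz_grid:
        for inc in inc_grid:
            l, _, _ = rotate_zne_to_lqt(z, n, e, baz, inc)
            ratio = abs(l[i_conv]) / (abs(l[i_s]) + 1e-9)
            if ratio > best[0]:
                best = (ratio, baz, inc)
    return best[1], best[2]
```

When the rotation is correct, the direct shear wave (polarised perpendicular to the ray) nearly vanishes on L, so the ratio peaks at the true angles.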
However, partly due to the low signal-to-noise ratios on the traces, it is difficult to identify the real conversions and to ensure that the converted signals are attributed to the correct boundary layers. For this reason, it is necessary to check the conversion depths for mutual consistency. In the case of inconsistent conversion depths, the corresponding signals are either reassigned to another seismic boundary layer or removed from the dataset. To verify the functionality of the automatic procedure and to determine its ability to resolve two boundary layers, several models are tested, including horizontal and dipping discontinuities. To resolve distinct discontinuities, their depths must differ by at least 60 km; otherwise, owing to the similar depth ranges of the different boundary layers, the converted signals cannot be separated from each other, and signals originating from different discontinuities are attributed to a single one. Further tests including break-off edges of seismic discontinuities are performed to check the attribution of the converted signals to the discontinuities. Owing to the varying number of boundary layers, the converted signals cannot simply be attributed to the discontinuities in the order of their arrival on the traces; their attribution to the seismic discontinuities must be corrected in order to resolve the boundary layers.
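The 60 km resolvability limit can be made plausible with a short differential-delay calculation: the Sp precursor of a converter arrives earlier than the direct S by a time proportional to the converter depth, and two pulses merge when their delay difference is smaller than the dominant period of teleseismic S waves (several seconds). The velocities and slowness below are illustrative upper-mantle values, not the thesis's velocity model:

```python
import numpy as np

def sp_delay(depth_km, vp=8.0, vs=4.5, p=0.08):
    """Delay (s) of the Sp conversion from a flat discontinuity at depth_km,
    measured relative to the direct S arrival. vp/vs in km/s, p in s/km;
    all values are illustrative, not the thesis's velocity model."""
    qs = np.sqrt(vs ** -2 - p ** 2)   # vertical S slowness
    qp = np.sqrt(vp ** -2 - p ** 2)   # vertical P slowness
    return depth_km * (qs - qp)

# Two discontinuities 60 km apart differ by only ~6-7 s in Sp delay,
# comparable to the dominant period of teleseismic S -- closer layers merge.
print(round(sp_delay(60.0), 1))  # → 6.7
```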
The crust-mantle boundary and further discontinuities within the lithospheric mantle are investigated by applying this automatic procedure. Depending on the tectonic setting, the conversion depths of the Moho range from about 30 – 45 km beneath the western rift shoulder, through 20 – 35 km within the rift valley, to 30 – 40 km beneath the eastern rift shoulder. The long wavelengths of the shear waves hamper the correct identification of the converted phases in the S-receiver functions. With respect to the relative differences in conversion depth, the topographic undulations of the crust-mantle boundary are consistent with the Moho depths derived from the P-receiver functions. In contrast to the Rwenzori region, it is difficult to fully resolve the trend of the Moho in the remaining area of the East African Rift due to the small dataset provided by IRIS. The results exhibit an increase in crustal thickness to up to 45 km in the regions of the Cenozoic volcanics such as Virunga, Kivu, Rungwe and Kenya. The greatest Moho depths of more than 50 km are located near Mount Kilimanjaro. In addition to the Moho, the analysis of the S-receiver functions revealed two further boundary layers at depths of 60 – 140 km and 110 – 260 km, which are associated with a mid-lithospheric discontinuity and the lithosphere-asthenosphere boundary (LAB), respectively. The shallowest conversion depths of the LAB are confined to small-scale regions within the rift branches, namely the northern Albertine Rift, the Chyulu Hills and the Mozambique Belt, which are located around the Tanzania Craton. The larger thickness of the lithosphere beneath the cratonic terrain indicates that the Tanzania Craton is not significantly eroded. However, there are indications that the lithosphere beneath the craton and the rift branches is penetrated by ascending asthenospheric melts to depths of up to 140 km and 60 km, respectively.
The top of the ascending melts is associated with the occurrence of the mid-lithospheric discontinuity. The shallowest conversion depths of this boundary layer (60 – 90 km) are related to the rifted areas of the EARS and the Cenozoic volcanic provinces, which are located along the Albertine Rift, the Kenya Rift and the Rukwa-Malawi rift zones. The deepest conversion depths of up to 140 km are related to the Rwenzori Belt, the Ugandan Basement Complex and the interior of the Tanzania Craton.
Since implantology has become an integral part of modern dentistry, the evaluation of jaw bone is becoming increasingly relevant. The high importance of the jaw bone stems from the fact that osseous healing through osseointegration is the basic prerequisite for the long-term success of an implant. Moreover, different bone qualities of the jaw require different implant diameters and drilling protocols. This alone shows how relevant extensive knowledge of the jaw bone is.
In implant research and development, too, it is of great importance to be able to evaluate bone, and thus to calibrate and categorise it, in order to generate comparable experimental values. The current literature contains numerous studies on the density, quality and quantity of jaw bone, but methods for evaluating jaw bone are rare. For this reason, this work deals with the development of a new method for evaluating bone.
To this end, the bone adjacent to an inserted implant dummy was assessed with a bone evaluation tool, and it was examined whether a correlation exists between the insertion torque of the implant dummy, the cortical bone thickness and the insertion torque of the subsequently inserted bone evaluation tool. An existing correlation would mean that this evaluation tool is able to assess and calibrate bone with respect to its quality.
The experiments were carried out on the distal end of bovine rib segments as well as on segments of the bovine femoral head; both were intended to simulate the jaw bone of the human mandible. Two drilling protocols differing in diameter were applied, designated "Hard Bone Small" (HBS) and "Hard Bone Large" (HBL). First, a pilot hole was drilled in each case (ø HBS: 3.3 mm; ø HBL: 4.0 mm), followed by the insertion of the implant dummy (ø HBS: 3.5 mm; ø HBL: 4.2 mm). Next, the thread impressions generated by the implant dummy were removed by drilling (ø HBS: 3.8 mm; ø HBL: 4.5 mm). Subsequently, the bone evaluation tool was inserted (ø HBS: 4.0 mm; ø HBL: 4.7 mm). Finally, the rib segments were split open through the centre of the insertion site, the cortical thickness was measured at the median of the insertion site on both halves, and the values were averaged.
The results showed that both drilling protocols (HBS and HBL) can be used to evaluate bovine rib bone, since a statistically significant correlation between the insertion torque of the implant dummy, the insertion torque of the bone evaluation tool and the cortical thickness was demonstrated (p < 0.001). Follow-up studies will examine whether these drilling protocols can also be transferred to human cadaver bone.