Refine
Year of publication
- 2017 (102)
Document Type
- Doctoral Thesis (102)
Language
- English (102)
Has Fulltext
- yes (102)
Is part of the Bibliography
- no (102)
Keywords
- ALICE (2)
- FPGA (2)
- QCD (2)
- Africa (1)
- African Agency (1)
- Ageing (1)
- Animal Behavior (1)
- Aufreinigung (1)
- Autophagy (1)
- C. elegans (1)
Floodplains and other wetlands depend on seasonal river flooding and play an important role in the terrestrial water cycle. They influence evapotranspiration, water storage, and river discharge dynamics, and they provide habitat for a large number of animal and plant species. Thus, to assess the Earth system and its changes, a robust understanding of the dynamics of floodplain wetlands, including inundated areas, water storages, and water flows, is required.
This PhD thesis aims to improve the modeling of large floodplains and wetlands within the global-scale hydrological model WaterGAP, in order to better estimate water flows and water storage variations in the different storage compartments. Within the scope of this thesis, I have developed a new approach to simulate dynamic floodplain inundation at the global scale. The approach introduces a new algorithm into WaterGAP, which operates globally at a spatial resolution of 0.5 degrees (in both longitude and latitude). It uses subgrid-scale topography, derived from high-resolution digital elevation models, to describe the floodplain elevation profile within each grid cell by means of a hypsographic curve. The approach comprises the modeling of a two-way river-floodplain interaction, the separate downstream water transport within the river and the floodplain, each with flow velocities that vary in time and space, and the floodplain-groundwater interactions. The WaterGAP version that includes the floodplain algorithm, WaterGAP 2.2b_fpl, estimates floodplain and river water storage, inundated area, and water table elevation, and also simulates backwater effects.
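The hypsographic-curve idea described above can be sketched numerically: given a cumulative area-below-elevation relation for a grid cell, the water storage can be inverted into a water table elevation and an inundated area. The numbers and the convex profile below are made up for illustration and are not WaterGAP's actual parameterization.

```python
import numpy as np

# Hypothetical subgrid hypsographic curve for a single 0.5-degree grid cell:
# cumulative area (km^2) lying below a given elevation (m).
area_km2 = np.linspace(0.0, 2500.0, 51)           # 0 .. full cell area
elev_m = 90.0 + 25.0 * (area_km2 / 2500.0) ** 2   # assumed elevation profile

# Cumulative water volume (km^3) stored below each elevation, obtained by
# trapezoidal integration of area over elevation (m * km^2 / 1000 = km^3).
vol_km3 = np.concatenate(([0.0], np.cumsum(
    0.5 * (area_km2[1:] + area_km2[:-1]) * np.diff(elev_m) / 1000.0)))

def inundation_state(storage_km3):
    """Invert the curve: water storage -> (water table elevation, flooded area)."""
    level_m = np.interp(storage_km3, vol_km3, elev_m)
    flooded_km2 = np.interp(storage_km3, vol_km3, area_km2)
    return level_m, flooded_km2
```

With such a curve, more storage always implies both a higher water table and a larger inundated area, which is the monotonic behaviour the floodplain algorithm exploits.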
WaterGAP 2.2b_fpl was applied to model river discharge, river flow velocity, water storages, water heights, and surface water extent at the global scale. Model results were comprehensively validated against ground observations and remote sensing data. Overall, the modeled and observed data are in agreement. Compared to the former version, WaterGAP 2.2b, the model performance has improved significantly, most remarkably in the Amazon River basin. However, the seasonal variations of surface water extent and of total water storage anomalies are still too low in many regions of the globe when compared to observations. A detailed analysis of the simulated results suggests that, in the Amazon River basin, the introduction of backwater effects is important for realistically simulating water storages and surface water extent. Future efforts should focus on the simulation of water levels in order to better model the flow routing according to the water surface slope. To further improve the model performance in specific regions, I recommend adjusting, at the basin or subbasin scale, the globally constant model parameters that affect inundation initiation, river-floodplain interaction, DEM correction for vegetation, and backwater extent.
Pulsed dipolar EPR spectroscopy is a valuable method for measuring distances of 1.5 to 10 nm between two spin labels. This information can aid structure determination where traditional methods such as X-ray crystallography and NMR cannot be applied. In addition, changes in conformation and flexibility can be followed. Stable nitroxide radicals have become established as spin labels for such studies; they are covalently attached at specific sites to the biomolecule under investigation by site-directed spin labelling (SDSL). In recent years, further spin labels for EPR distance measurements have been developed. Of particular interest are triarylmethyl radicals (abbreviated below as trityl) and paramagnetic metal centres.
Compared to nitroxide radicals, the trityl radical has several advantages: higher stability in a reducing environment such as the cell interior, longer electron-spin relaxation times at room temperature, and a narrower EPR spectrum. This organic radical is therefore an alternative spin label that is particularly well suited to studying biomolecules in a native environment under physiological conditions. Paramagnetic metal centres are also less sensitive to reduction than nitroxide radicals. In addition, these spin labels are of interest for biological questions: numerous enzymes, for example, carry paramagnetic manganese centres as cofactors. Moreover, magnesium, an essential cofactor in enzymes, nucleic acids, and the nucleotide-binding domains of G proteins and membrane proteins, can often be replaced by paramagnetic manganese. To perform distance measurements on biomolecules that possess only a single metal centre, additional spin labels in the form of a nitroxide radical, a trityl radical, or another paramagnetic metal complex can be covalently attached using the SDSL method.
Nitroxide radicals, trityl radicals, and metal centres have markedly different EPR-spectroscopic properties and are therefore often referred to as orthogonal spin labels. Such labels are useful for studying the different subunits of macromolecular complexes: both the intramolecular distances within one subunit and the intermolecular distances between different subunits can be determined from a single sample. In addition, orthogonal labels can be used very effectively to precisely localize metal centres in biomolecules via a trilateration strategy.
This doctoral thesis deals with the use of these new spin labels for distance measurements. Such labels are still largely unexplored, although they could play an important role in biological applications.
The first goal of this thesis was a study of trityl radicals by dipolar EPR spectroscopy. For this purpose, double quantum coherence (DQC) and single frequency technique for refocusing dipolar couplings (SIFTER) experiments, as well as high-frequency pulsed electron-electron double resonance (PELDOR) experiments, were carried out on a trityl model system. The characteristics of the different dipolar spectroscopy methods with this spin label were examined in order to optimize the sensitivity and robustness of the distance measurements.
The second goal was a study of the influence of the high-spin multiplicity of manganese on distance determinations. To this end, a model system containing an orthogonal Mn2+ ion and a nitroxide radical was first investigated by PELDOR spectroscopy. A second model system with two Mn2+ ions was then studied in order to optimize PELDOR and relaxation-induced dipolar modulation enhancement (RIDME) experiments with respect to their sensitivity, robustness, and accuracy of data analysis.
The trityl model system was synthesized in the group of Prof. Sigurdsson. The EPR measurements were performed at two different microwave frequencies (34 and 180 GHz). It was shown that the choice of the optimal method depends on the EPR-spectroscopic properties of the system at the respective microwave frequency. At 34 GHz, the EPR spectrum of trityl is so narrow that the whole spectrum can be excited by a standard microwave pulse; in this case, the DQC and SIFTER experiments are best suited. The distance of 4.9 nm determined with these methods is in good agreement with literature values. SIFTER was found to be more sensitive than DQC, with a signal-to-noise ratio larger by a factor of four. Furthermore, the SIFTER method is experimentally less demanding, since a considerably shorter phase cycle is required for the microwave pulses. ...
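The distance scale quoted above follows from the point-dipole relation between the dipolar coupling frequency and the interspin distance, nu_dd [MHz] ≈ 52.04 / r³ with r in nm (the standard value for two electron spins with g ≈ 2). A minimal sketch of the conversion in both directions:

```python
# Point-dipole approximation for two g ~ 2 electron spins:
# nu_dd [MHz] ~= 52.04 / r^3, with r in nm.
DIPOLAR_CONST_MHZ_NM3 = 52.04

def dipolar_frequency_mhz(r_nm):
    """Dipolar coupling frequency for two electron spins a distance r_nm apart."""
    return DIPOLAR_CONST_MHZ_NM3 / r_nm ** 3

def distance_nm(nu_mhz):
    """Invert the relation: measured dipolar coupling -> interspin distance."""
    return (DIPOLAR_CONST_MHZ_NM3 / nu_mhz) ** (1.0 / 3.0)

# For the 4.9 nm trityl-trityl distance, the expected coupling is ~0.44 MHz.
nu = dipolar_frequency_mhz(4.9)
```

The cubic dependence is why pulsed dipolar methods resolve the 1.5 to 10 nm range so well: a small change in distance produces a large change in the measured frequency.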
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on measurements of very rare probes, which require the experiment to operate at extreme interaction rates of up to 10 MHz. Due to the high multiplicity of charged particles in heavy-ion collisions, this leads to data rates of up to 1 TB/s. To match currently achievable archival rates, this data flow has to be reduced online by more than two orders of magnitude.
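The required reduction factor follows from simple arithmetic. The 1 TB/s raw rate is taken from the text; the archival bandwidth of a few GB/s is an assumed, illustrative figure, not a number from this thesis.

```python
# Back-of-the-envelope check of the required online data reduction.
raw_rate_gb_s = 1000.0      # ~1 TB/s raw data flow out of the detector (from text)
archival_rate_gb_s = 5.0    # assumed achievable archival rate (illustrative)
reduction_factor = raw_rate_gb_s / archival_rate_gb_s
```

Any plausible archival bandwidth on this scale yields a factor well above 100, i.e. "more than two orders of magnitude".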
The rare observables feature complicated trigger signatures and require full event-topology reconstruction to be performed online. The huge data rates, together with the absence of simple hardware triggers, make the traditional latency-limited trigger architectures typical of conventional experiments inapplicable to CBM. Instead, CBM will employ a novel data-acquisition concept with autonomous, self-triggered front-end electronics.
While in conventional experiments with event-by-event processing the association of detector hits with the corresponding physical event is known a priori, this is not true for the CBM experiment, where the reconstruction algorithms must be modified to process data that are not associated with events. At the highest interaction rates, the time differences between hits belonging to the same collision will be larger than the average time difference between two consecutive collisions, so events will overlap in time. Due to this possible overlap, one needs to analyze time-slices rather than isolated events.
The time-stamped data will be shipped and collected into a readout buffer in the form of time-slices of a certain length. The time-slice data will then be delivered to a large computer farm, where the archival decision is made after online reconstruction. The association of hit information with physical events must therefore be performed in software and requires full online event reconstruction not only in space but also in time, so-called four-dimensional (4D) track reconstruction.
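The time-slice building described above can be sketched in a few lines: hits carry only timestamps, not event labels, and the slice index is derived purely from the timestamp. The slice length and hit times below are toy values for illustration.

```python
from collections import defaultdict

def build_time_slices(hit_times_ns, slice_len_ns):
    """Group free-streaming, timestamped hits into fixed-length time-slices.

    Hits are not associated with events a priori; the slice a hit belongs to
    is determined solely by its timestamp.
    """
    slices = defaultdict(list)
    for t in hit_times_ns:
        slices[int(t // slice_len_ns)].append(t)
    return dict(slices)

# Hits from collisions that overlap in time end up interleaved in one stream;
# the time-slice is the unit that is shipped to the farm for 4D reconstruction.
hits = [0, 40, 95, 110, 130, 210]
ts = build_time_slices(hits, slice_len_ns=100)
```

In a real system the slices would also need overlap handling at their borders, since a collision's hits can straddle a slice boundary; that is omitted here for brevity.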
Within the scope of this work, a 4D track finder algorithm for online reconstruction has been developed. The 4D Cellular Automaton (CA) track finder reproduces the performance and speed of the traditional event-based algorithm. It is both vectorized (using SIMD instructions) and parallelized (between CPU cores), and it shows strong scalability on many-core systems: a speed-up factor of 10.1 has been achieved on a CPU with 10 hyper-threaded physical cores.
The 4D CA track finder algorithm is ready for the time-slice-based reconstruction in the CBM experiment.
The mainstream law and economics approach has dominated positive analysis and normative design of economic regulations. This approach represents a form of applied neoclassical and new institutional economics. Neoclassical and/or new institutional economic theories, models, and analytical concepts are applied automatically to economic regulatory problems.
This automatic application of neoclassical economics to economic regulatory problems loses sight of the valid insights of non-neoclassical schools of economic thought and theories, which may illuminate important aspects of the regulatory problems. This thesis, therefore, advocates an integrated law and economics approach to economic regulations. This approach identifies the relevant insights of neoclassical and non-neoclassical schools of thought and theories and refines them through a process of cross-criticism. In this process, the insights of each school of thought are subjected to the critiques of other schools of thought. The resulting refined insights, which are more likely to be valid, are then integrated consistently through various techniques of integration.
Not only does neoclassical (micro and macro) law and economics overlook the valid insights of non-neoclassical schools of thought, it is also highly reductionist. It ignores the interdependencies of legal institutions, highlighted mainly by the comparative capitalism literature, and the structural interlinkages among socio-economic actors, highlighted by economic sociology and complexity economics. Rather, it takes rational individuals and their interactions subject to the constraint of isolated institution(s) as its unit of analysis. In place of this reductionist perspective, the thesis argues for a systemic approach to economic regulations. This systemic perspective replaces the reductionist unit of neoclassical regulatory analysis with a systemic unit of analysis that consists of the least non-decomposable actors’ network and its associated least non-decomposable institutional network. Then, the thesis develops an operationalized and replicable systemic framework for systemic analysis and design of institutional networks.
Both the systemic and integrated approaches are theoretically consistent and complementary. The systemic approach is in essence a way of thinking that requires a broad and rich informational basis, which can be secured by using the integrated approach. Due to their complementarity, they give rise to what I call "the integrated and systemic law and economics approach." The thesis operationalizes this approach by setting out well-defined, replicable steps and applying them to concrete regulatory problems, namely, the choice of a corporate governance model for developing countries and the development of a normative theory of economic regulations. These concrete applications demonstrate the critical bite of the integrated and systemic approach, which reveals significant shortcomings in mainstream law and economics' answers to these regulatory questions. They also show the constructive potential of the integrated and systemic approach in overcoming the critiques advanced against the neoclassical regulatory conclusions.
The operationalized integrated and systemic approach is both a law and economics and a law and development approach. It not only provides an alternative to mainstream law and economics analysis and design of economic regulations; it also fills a significant analytical lacuna in the law and development literature, which lacks an analytical framework for the analysis and design of context-specific legal institutions that can promote economic development in developing economies.
The ALICE High-Level Trigger (HLT) is a large-scale computing farm designed and constructed for the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions inside ALICE at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables real-time verification of the data quality.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to transferring raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time Projection Chamber (TPC). The TPC is the main tracking detector of ALICE and, with up to 16 GB/s, contributes over 90% of the overall data volume. The FCF processor implements the first of two steps in the data reconstruction of the TPC: it calculates space points and their properties from the charge clouds (clusters) created by charged particles traversing the TPC's gas volume. These space points are not only the basis for the tracking algorithm, but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
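The core of a cluster-finder step like the one above is a charge-weighted centre of gravity. The toy function below illustrates the principle in one coordinate only; the actual FCF works on two-dimensional pad/time data in hardware, so this is a conceptual sketch rather than the FCF algorithm itself.

```python
def cluster_centroid(samples):
    """Charge-weighted centre of gravity of a cluster.

    `samples` is a list of (pad_position, charge) pairs, a simplified
    stand-in for the charge cloud seen on the TPC pad plane.
    Returns the centroid position and the total cluster charge.
    """
    total_charge = sum(q for _, q in samples)
    centroid = sum(pos * q for pos, q in samples) / total_charge
    return centroid, total_charge

# A symmetric three-pad cluster peaks on the middle pad:
pos, q = cluster_centroid([(10, 2.0), (11, 6.0), (12, 2.0)])
```

Reducing each multi-sample charge cloud to a few numbers (position, total charge, widths) is also what makes the subsequent Huffman-based compression so effective.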
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure on the detector's readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run 1 campaign. A comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run 2 campaign, results in a speedup factor of 8.5. In total, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the previously mentioned types. The reconstruction performance with respect to physics analysis is equivalent to or better than that of the official ALICE Offline clusterizer. Therefore, ALICE data taking was switched in 2011 to recording and compressing FCF clusters only, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run 3 campaign, starting in 2020, the FCF forms the foundation of the ALICE data-taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before they enter the online computing farms is an essential and crucial part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. The work covers the conceptual design, the optimisation and implementation, as well as the verification, and is completed by performance benchmarks and experience from real data taking.
In this thesis we study strongly correlated electron systems using Density Functional Theory (DFT) in combination with Dynamical Mean-Field Theory (DMFT).
First, we give an introduction to the theoretical methods and then apply them to the study of realistic materials. We present results on the hole-doped 122 family of the iron-based superconductors and on the transition-metal oxide SrVO3. Our investigations show that a proper treatment of strong electronic correlations is necessary to describe the experimental observations.
Development of the timing system for the Bunch-to-Bucket transfer between the FAIR accelerators (2017)
The FAIR project aims to provide high-energy beams of ions of all elements from hydrogen to uranium, as well as antiprotons and rare isotopes, with high intensities. The existing GSI accelerator facility and the future FAIR facility employ a variety of circular accelerators, such as the heavy-ion synchrotrons (SIS18 and SIS100) and storage rings (ESR, CRYRING, CR and HESR), for the preparation of secondary beams and experiments. Bunches must be transferred into rf buckets between the GSI and FAIR ring accelerators for different purposes. Without a proper transfer, the beam is subject to various kinds of beam-quality deterioration and even to beam losses. Hence, the proper bunch-to-bucket (B2B) transfer between two rings is of great importance for FAIR and is the topic investigated in this thesis.
The circular accelerators of GSI and FAIR have different circumference ratios. For example, the circumference ratio between SIS100 and SIS18 is an integer, between SIS18 and ESR it is close to an integer, and between CR and HESR it is far from an integer. The ring accelerators are connected via a complicated system of beam transfer lines, targets for secondary-particle production, and high-energy separators. For FAIR, not only the primary beams must be transferred from one ring to another, but also the secondary beams, e.g. the antiproton or rare-isotope beams produced by the antiproton (pbar) target, the fragment separator (FRS) or the superconducting fragment separator (Super-FRS). An important topic for this system of accelerators is the proper transfer of beam between the different circular accelerators. Bunches of one ring must be transferred into buckets of another ring within an upper time bound (e.g. 10 ms for most FAIR use cases) and with an acceptable B2B injection center mismatch (±1 degree for most FAIR use cases). Hence, a flexible FAIR B2B transfer system is required to realize the different complex B2B transfers between the FAIR rings in the future. The focus of the system development and of this thesis is the transfer from SIS18 to SIS100, which can be tested at GSI on the transfers from SIS18 to ESR and from ESR to CRYRING. The system builds on the existing technical basis at GSI, the low-level radio frequency (LLRF) system and the FAIR control system. It coordinates with the Machine Protection System (MPS), which protects SIS100 and the subsequent accelerators and experiments from damage caused by high-intensity primary beams in case of malfunction. In addition, it indicates the beam status and the actual beam injection time for beam instrumentation and diagnostics.
The conceptual realization of the FAIR B2B transfer system is introduced in this thesis for the first time. It achieves most FAIR B2B transfers with a tolerable B2B injection center mismatch (e.g. ±1 degree) and within an upper time bound (e.g. 10 ms). It supports two synchronization methods, phase shifting and frequency beating. It is flexible enough to support beam transfer between two rings with different circumference ratios, as well as several B2B transfers running at the same time, e.g. the B2B transfer from SIS18 to SIS100 and, simultaneously, the B2B transfer from ESR to CRYRING. It is capable of transferring beams of different ion species from one machine cycle to another and of transferring beams between two rings via the FRS, the pbar target or the Super-FRS. It allows various complex bucket-filling patterns. In addition, it coordinates with the MPS, which protects SIS100 and the subsequent accelerators or experiments from beam-induced damage.
A list of criteria for the preservation of beam quality during the rf frequency modulation of the phase shift method was analyzed. As an example, the beam response to three different rf frequency modulations was analyzed for SIS18 beams. According to the beam-dynamics analysis, there is a maximum allowed value for the rf frequency modulation: its first derivative must be continuous and small enough, and its second derivative must also be small enough.
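The smoothness criteria above can be illustrated with a toy frequency ramp whose first derivative is continuous (and zero at both ends) and whose second derivative stays bounded. The ramp shape and the numbers below are illustrative assumptions, not actual SIS18 parameters.

```python
import numpy as np

def smooth_ramp(f0, f1, T, t):
    """rf frequency f(t) ramping from f0 to f1 over duration T.

    The shape s - sin(2*pi*s)/(2*pi) has zero slope at both ends, so the
    first derivative of f(t) is continuous across the start and end of
    the modulation, as the beam-dynamics criteria require.
    """
    s = np.clip(t / T, 0.0, 1.0)
    return f0 + (f1 - f0) * (s - np.sin(2.0 * np.pi * s) / (2.0 * np.pi))

T = 0.010                                   # 10 ms, the B2B upper time bound
t = np.linspace(0.0, T, 10001)
f = smooth_ramp(1.000e6, 1.002e6, T, t)     # illustrative 2 kHz frequency shift
df = np.gradient(f, t)                      # first derivative of f(t)
d2f = np.gradient(df, t)                    # second derivative of f(t)
```

A linear ramp, by contrast, has a discontinuous first derivative at its endpoints, which is exactly the kind of modulation the criteria rule out.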
In addition to the analysis from the viewpoint of beam dynamics, two test setups were built. The first test setup was used to characterize the FAIR timing network, a White Rabbit network, for the B2B transfer. In the second test setup, the firmware of the FAIR B2B transfer system was evaluated, running on the soft CPU (LatticeMico32) of the Scalable Control Unit, the FAIR standard front-end controller. In addition, the boundary conditions of the different trigger scenarios of the SIS18 extraction and SIS100 injection kicker magnets were investigated. Finally, the application of the FAIR B2B transfer system to all FAIR use cases was demonstrated.
The dissertation plays an important role in the realization of the FAIR B2B transfer system and in the further practical application of the system to all FAIR use cases.
Terahertz (THz) physics is an emerging field of research dealing with electromagnetic radiation in the far-infrared to microwave region. Only in the recent past has the development of innovative technologies for the generation and detection of THz radiation led to a tremendous rise in both fundamental research and the investigation of possible fields of application for THz radiation. For a long time, the most prominent obstacle was the scarce accessibility of the THz region of the electromagnetic spectrum - commonly and loosely located between 0.1 and 30 THz - to broad research; its use was mostly limited to astronomy and high-energy physics facilities. Over recent years, numerous novel concepts on both the source and the detector side have been proposed and successfully implemented to overcome this so-called THz gap. New technology has become available and has paved the way for widespread experimental laboratory work and accompanying theoretical investigations. First application studies have emerged, and in some cases even commercial development in the field of THz physics is on the rise. Despite this enormous progress, a continuing demand for more efficient THz detectors still drives current technological research. Relatively low source powers are often a major limiting factor, and the need for new detection concepts, their understanding and implementation, as well as their optimization on a device basis, remains in place. One of these concepts is the use of field-effect transistors (FETs) high above their conventional cut-off frequencies as electronic THz detectors. The concept was proposed in a number of theoretical publications by M. Dyakonov and M. Shur in the early 1990s, who showed that, under certain boundary conditions, non-linear collective excitations of the charge-carrier system of a two-dimensional electron gas (2DEG) by incident THz radiation can exhibit rectifying behaviour - a detection principle which has become known as plasma-wave or plasmonic mixing. To this day, the concept has been successfully implemented in many device realizations - most advanced in established silicon CMOS technology - and is on the verge of becoming commercially available on a large scale. The main direction of the work presented in this thesis was the modeling and experimental characterization of antenna-coupled FETs for THz detection - termed TeraFETs in this and the author's previous works - implemented in different material systems, namely AlGaN/GaN HEMTs and graphene FETs. In a number of scientific collaborations, TeraFETs were designed on the basis of a hydrodynamic transport model, fabricated in the respective materials, and characterized mainly in the lower THz frequency region from 0.2 to 1.2 THz. The theoretical description of the plasma-wave mixing mechanism in TeraFETs, as initiated by Dyakonov and Shur, was based on a fluid-dynamic transport model for charge carriers in the transistor channel: the THz radiation induces propagating charge-density oscillations (plasma waves) in the 2DEG, which cause rectification of the incident THz signals via non-linear self-mixing. Over the course of this work, it became evident from the on-going detector characterization experiments that this original theoretical model of the detection process, widely applied in the literature, does not suffice to describe some of the experimental findings in TeraFET detection signals.
Thorough measurements showed signal contributions which are identified in this work as being of thermoelectric origin, arising from an inherent asymmetric local heating of charge carriers in the devices. Depending on the material, these contributions constituted a mere side effect of plasmonic detection (AlGaN/GaN) or even reached a comparable magnitude (graphene FETs). To include these effects in the detector model, the original reduced fluid-dynamic description was extended to a hydrodynamic transport model, which at the current stage yields reasonable qualitative agreement with the measured THz detection signals. This thesis presents the formulation of a hydrodynamic charge-carrier transport model and its specific implementation in a circuit simulation tool. A second modeling aspect is that the transport equations cover only the intrinsic plasmonic detection process in the active gated part of the TeraFET's transistor channel. To model and simulate the behavior of real devices, extrinsic detector parts such as ungated channel regions, parasitic resistances and capacitances, integrated antenna impedance, and others must be considered. The implemented detector model allows the simulation of THz detection in real devices with the above influences included. Besides the presentation of the detector model, the experimental THz characterization of the fabricated TeraFETs is presented in this work. Careful device design yielded record detection performance for detectors in both investigated materials. The respective results are shown, and the experimental observations of the thermoelectric effect in TeraFETs are compared to modeling results. It is the goal of this work to provide a framework for further theoretical and experimental studies of the plasmonic and thermoelectric effects in TeraFETs, which could eventually lead to a new type of THz detector that particularly exploits the thermoelectric effect to enhance the sensitivity of today's plasmonic TeraFETs.
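For reference, the simplest textbook form of the Dyakonov-Shur plasmonic self-mixing response (the non-resonant, long-channel limit, quoted from the general literature rather than from this thesis) reads:

```latex
% Non-resonant, long-channel limit of plasmonic self-mixing:
\Delta u \;\approx\; \frac{u_a^{2}}{4\,U_0}\,, \qquad U_0 = U_g - U_{\mathrm{th}}\,,
```

where $u_a$ is the THz-induced gate-source voltage amplitude and $U_0$ the gate-voltage swing above threshold. The quadratic dependence on $u_a$ is the rectifying behaviour referred to above; the thermoelectric contributions identified in this work add to this plasmonic term.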
Echolocation allows bats to orientate in darkness without using visual information. Bats emit spatially directed high-frequency calls and infer spatial information from the echoes returning as the calls reflect off objects (Simmons 2012; Moss and Surlykke 2001, 2010). The echoes provide momentary snapshots, which have to be integrated to create an acoustic image of the surroundings. The spatial resolution of the computed image increases with the number of received echoes; thus, a high call rate is required for a detailed representation of the surroundings.
One important parameter that the bats extract from the echoes is an object’s distance. The distance is inferred from the echo delay, which represents the duration between call emission and echo arrival (Kössl et al. 2014). The echo delay decreases with decreasing distance and delay-tuned neurons have been characterized in the ascending auditory pathway, which runs from the inferior colliculus (Wenstrup et al. 2012; Macías et al. 2016; Wenstrup and Portfors 2011; Dear and Suga 1995) to the auditory cortex (Hagemann et al. 2010; Suga and O'Neill 1979; O'Neill and Suga 1982).
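The delay-to-distance relation is simple round-trip acoustics: the echo travels to the object and back, so distance = (speed of sound × delay) / 2. The speed-of-sound value below is a standard assumption (dry air at roughly 20 °C), not a number from the text.

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 deg C; an assumed standard value

def target_distance_m(echo_delay_s):
    """Object distance inferred from the round-trip echo delay."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

def echo_delay_s(distance_m):
    """Round-trip echo delay for an object at the given distance."""
    return 2.0 * distance_m / SPEED_OF_SOUND_M_S

# An object 1 m away returns its echo after roughly 5.8 ms.
delay = echo_delay_s(1.0)
```

This is why echo delay shrinks as the bat closes in on an object, and why delay-tuned neurons effectively encode target distance.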
Electrophysiological studies usually characterize neuronal processing by using artificial and simplified versions of the echolocation signals as stimuli (Hagemann et al. 2010; Hagemann et al. 2011; Hechavarría and Kössl 2014; Hechavarría et al. 2013). The high controllability of artificial stimuli simplifies the inference of the neuronal mechanisms underlying distance processing. However, it remains largely unexplored how neurons process delay information from echolocation sequences. The main purpose of this thesis is to investigate how natural echolocation sequences are processed in the brain of the bat Carollia perspicillata. Bats actively control the sensory information that they gather during echolocation, which allows experimenters to easily identify and record the acoustic stimuli that are behaviorally relevant for orientation. To record echolocation sequences, a bat was placed at the mass of a swinging pendulum (Kobler et al. 1985; Beetz et al. 2016b). During the swing the bat emitted echolocation calls that were reflected off surrounding objects. An ultrasound-sensitive microphone traveling with the bat, positioned above the bat's head, recorded the echolocation sequence. The echolocation sequence carried the delay information of an approach flight and was used as a stimulus for neuronal recordings from the auditory cortex and inferior colliculus of the bats.
Presenting stimuli at high rates to other species, such as rats and guinea pigs, suppresses cortical neuron activity (Wehr and Zador 2005; Creutzfeldt et al. 1980). Therefore, I tested whether neurons of bats are suppressed when stimulated with the high acoustic rates present in echolocation sequences (sequence situation). Additionally, the bats were stimulated with randomized call-echo elements of the sequence at an interstimulus interval of 400 ms (element situation). To quantify the neuronal suppression induced by the sequence, I compared the response pattern in the sequence situation with the concatenated response patterns from the element situation. Surprisingly, although bats should be adapted to processing high acoustic rates, their cortical neurons are vastly suppressed in the sequence situation (Beetz et al. 2016b). However, instead of being completely suppressed, the neurons partially recover from suppression at a unit-specific call-echo element. Multi-electrode recordings from the cortex allow an assessment of the representation of echo delays along the cortical surface: at the cortical level, delay-tuned neurons are topographically organized. Cortical suppression sharpens neuronal tuning and decreases the blurriness of the topographic map. With neuronal recordings from the inferior colliculus, I tested whether the echolocation sequence also induces neuronal suppression at the subcortical level. The sequence-induced suppression was weaker in the inferior colliculus than in the cortex; the collicular response allows the neurons to track the acoustic events in the echolocation sequence, and collicular suppression mainly improves the signal-to-noise ratio. In conclusion, the results demonstrate that cortical suppression is not necessarily a shortcoming for the temporal processing of rapidly occurring stimuli, as it had previously been interpreted.
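The comparison between the two situations can be condensed into a single suppression metric. The index below is a hypothetical illustration of the idea (summed sequence response versus summed isolated-element responses); the thesis' exact measure may differ.

```python
def suppression_index(seq_spike_counts, element_spike_counts):
    """Fractional response suppression in the sequence situation.

    Compares the summed response to the natural sequence with the summed
    responses to the same call-echo elements presented in isolation.
    0 means no suppression; values near 1 mean nearly complete suppression.
    (Hypothetical metric for illustration only.)
    """
    seq = sum(seq_spike_counts)
    iso = sum(element_spike_counts)
    return 1.0 - seq / iso

# A unit that responds to only one element of the sequence but to every
# isolated element would score high on this index:
s = suppression_index([2, 0, 1, 0], [10, 8, 9, 7])
```

A unit that partially recovers at its preferred call-echo element, as described above, would show a high overall index but a non-zero response at that element.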
Natural environments are usually composed of multiple objects. Each echolocation call therefore reflects off multiple objects, resulting in multiple echoes following each call. At present, it is largely unexplored how neurons process echolocation sequences containing echo information from more than one object (multi-object sequences). Therefore, I stimulated bats with a multi-object sequence containing echo information from three objects located at different distances from one another. I tested the influence of each object on the neuronal tuning by stimulating the bats with different sequences created by filtering object-specific echoes out of the multi-object sequence. The cortex most reliably processes echo information from the nearest object, whereas echo information from distant objects is not processed, due to neuronal suppression. Collicular neurons process echo information less selectively and respond to each echo.
For proper echolocation, bats have to distinguish their own biosonar signals from the signals of conspecifics. This can be quite challenging when many bats echolocate close to each other. In behavioral experiments, the echolocation performance of C. perspicillata was tested in the presence of potentially interfering sounds. In the presence of acoustic noise, the bats increased their sensory acquisition rate, which may increase the update rate of sensory processing. Neuronal recordings from the auditory cortex and inferior colliculus strengthen this hypothesis: although there were signs of acoustic interference or jamming at the neuronal level, the neurons were not completely suppressed and responded to the rest of the echolocation sequence.