The Late Cretaceous was characterized predominantly by warm climate, interrupted by a number of transient cooling events. Reconstructing the paleoclimatic conditions during a period of high atmospheric CO2 concentration is of great importance for the creation of future climate models. We applied the recently developed TEX86 proxy (TetraEther indeX of tetraethers consisting of 86 carbon atoms) to reconstruct sea surface temperatures (SST).
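As a sketch of how the TEX86 proxy works: the index is a ratio of certain isoprenoid glycerol dialkyl glycerol tetraethers (GDGTs), which is then mapped to SST via an empirical calibration. The function names and the linear calibration coefficients below are illustrative placeholders, not necessarily the calibration used in this thesis.

```python
def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
    """TEX86 index from relative abundances of isoprenoid GDGTs
    (GDGT-1..3 and the crenarchaeol regioisomer)."""
    return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

def sst_linear(tex, a=56.2, b=-10.78):
    """Map TEX86 to SST (deg C) via a linear calibration SST = a*TEX86 + b.
    The coefficients here are illustrative placeholders."""
    return a * tex + b

# Invented relative abundances, for illustration only
ratio = tex86(gdgt1=0.30, gdgt2=0.35, gdgt3=0.15, cren_prime=0.20)
print(round(ratio, 3), round(sst_linear(ratio), 1))
```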
The sample material for the present study was obtained from the tropical Late Cretaceous southern Tethys upwelling system (Negev/Israel), which lasted from the Late Santonian to the Early Maastrichtian (~85 to 68 Ma). Various bulk geochemical and biomarker studies were performed in this thesis on core samples from the Shefela basin, representing the outer belt of the upwelling system, and on the outcrop profile from the Mishor Rotem open-pit mine (Efe Syncline), representing the inner belt.
Derived from the TEX86 data, a significant long-term SST cooling trend from 36.0 to 29.3 °C is recognized for the Late Santonian and Early Campanian on the southern Tethys margin. This is consistent with the opening and deepening of the Equatorial Atlantic Gateway (EAG) and the intrusion of cooler deep water from the southern Atlantic Ocean, influencing global SSTs as well as the Tethys Ocean. Furthermore, the cooler nearshore SST typically found in modern upwelling systems could be verified for the ancient upwelling system investigated here: the calculated mean SST in the inner belt (Efe Syncline; 27.7 °C) was 1.5 °C cooler than in the more seaward outer belt (Shefela basin).
Moreover, geochemical and biomarker analyses were used to identify both the accumulation of high amounts of phosphate in the PM and the good preservation of organic matter (OM) in the lower part of the OSM section. Total organic carbon (TOC) contents are highly variable over the whole profile, ranging from 0.6 % in the MM to 24.5 % in the OSM. Total iron (TFe) varies from 0.1 % in the PM to 3.3 % in the OSM, and total sulfur (TS) varies between 0.1 % in the MM and 3.4 % in the OSM. Different correlations of TS, TOC, and TFe were used to identify the conditions during the deposition of the different facies types. Natural sulfurization was found to play a key role in the preservation of the OM, particularly in the lower part of the OSM. Samples from the OSM and the PM were deposited under dysoxic to anoxic conditions, and iron limitation prevailed during the deposition of the OSM and the PM, which affected the incorporation of sulfur into the OM.
Phosphorus is strongly enriched in the sediments of the PM, with a mean proportion of 11.5 % total phosphorus (TP), which drops to a mean value of 0.9 % in the OSM and the MM. From the correlation of the bulk geochemical parameters TOC/TOCOR ratio and TP, a major contribution of sulfate-reducing bacteria to the phosphate deposition is concluded. This interrelation has previously been investigated in modern coastal upwelling systems off Peru, Chile, California, and Namibia. It was further supported by the analysis of branched and monounsaturated fatty acids, which indicate the occurrence of sulfate-reducing and sulfide-oxidizing bacteria during deposition.
According to the results of the analysis of n-alkanes and C27- to C29-steranes, up to 95 % of the OM was of marine origin.
Organic sulfur compounds (OSC) were a major compound class in the aromatic hydrocarbon fraction; n-alkyl and isoprenoid thiophenes were the most abundant, with the highest amounts found for 2-methyl-5-tridecyl-thiophene (28 µg/g TOC). The relatively high abundance of ββ-C35 hopanoid thiophenes and epithiosteranes points to an incorporation of sulfur during the early stages of diagenesis.
Moreover, the geochemical parameters δ13Corg, δ15Norg, C/N, and the pristane/phytane (Pr/Ph) ratio were used to reconstruct the seafloor and water-column depositional environments. The high C/N ratio, along with relatively low values of δ15Norg (4 ‰ to 6 ‰) and δ13Corg (−29 ‰ to −28 ‰), is consistent with a significant preferential loss of nitrogen-rich organic compounds during diagenesis. Oxygen-depleted conditions prevailed during the deposition of the PM and the lowermost OSM, reflected by low Pr/Ph ratios of 0.11–0.7. In the upper part of the OSM and the MM, the conditions changed from anoxic to dysoxic or oxic. This environmental trend is consistent with the co-occurring foraminiferal assemblages in the studied succession and implies that the benthic species in the Negev sequence were adapted to persistently minimal oxygen conditions by performing complete denitrification, as recently found in many modern benthic foraminifera.
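The Pr/Ph interpretation above can be sketched as a simple classifier. The threshold values (below 1 anoxic, 1–3 dysoxic, above 3 oxic) are common rules of thumb rather than universal boundaries, and the function name is hypothetical:

```python
def redox_from_pr_ph(pr_ph):
    """Rough paleo-redox interpretation from the pristane/phytane ratio.
    Thresholds are widely used rules of thumb, not universal limits."""
    if pr_ph < 1.0:
        return "anoxic"
    elif pr_ph <= 3.0:
        return "dysoxic"
    return "oxic"

# Values like those reported for the PM / lowermost OSM (0.11-0.7)
for r in (0.11, 0.7, 1.5, 4.0):
    print(r, redox_from_pr_ph(r))
```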
Furthermore, the anammox process could have influenced the nitrogen composition of the sediments. In this anaerobic process, nitrite and ammonium are converted to molecular nitrogen.
The von der Tann family descends from the old enfeoffed nobility of Fulda and was involved from the beginning in the formation of the Franconian imperial knighthood. During the Reformation era, the brothers Eberhard and Alexander von der Tann in particular distinguished themselves, serving as princely counselors in Electoral Saxony and Hesse, respectively. Eberhard von der Tann above all contributed in many ways to the realization of Luther's ideas of reform. At the Diet of Augsburg in 1555, for instance, he was, as Protestant chief negotiator, decisively involved in bringing about the Religious Peace, a fact that has received little attention in the relevant scholarship to this day.
The subject of this thesis is the experimental investigation of the neutron-capture cross sections of the neutron-rich, short-lived boron isotopes 13B and 14B, as they are thought to influence rapid neutron-capture process (r-process) nucleosynthesis in a neutrino-driven wind scenario.
The 13,14B(n,γ)14,15B reactions were studied in inverse kinematics via Coulomb dissociation at the LAND/R3B setup (Reactions with Relativistic Radioactive Beams). A radioactive beam of 14,15B was produced via in-flight fragmentation and directed onto a lead target at about 500 AMeV. The neutron breakup of the projectile within the electromagnetic field of the target nucleus was investigated in a kinematically complete measurement. All outgoing reaction products were detected and analyzed in order to reconstruct the excitation energy.
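In a kinematically complete measurement, the excitation energy above the breakup threshold is typically reconstructed with the invariant-mass method: sum the measured four-momenta of fragment and neutron and subtract the rest masses. A minimal sketch, with placeholder masses and momenta (only the neutron mass is a real PDG-style value):

```python
import math

def invariant_mass(p4s):
    """Invariant mass of summed four-momenta (E, px, py, pz); units MeV, c = 1."""
    E  = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(E * E - px * px - py * py - pz * pz)

def relative_energy(fragment, neutron, m_fragment, m_neutron):
    """Energy above the breakup threshold: E_rel = M_inv - (m_f + m_n)."""
    return invariant_mass([fragment, neutron]) - (m_fragment + m_neutron)

def on_shell(m, px, py, pz):
    """Build a four-momentum with the correct energy for a given mass."""
    return (math.sqrt(m * m + px * px + py * py + pz * pz), px, py, pz)

M_N = 939.565      # neutron mass in MeV
M_F = 13000.0      # placeholder fragment mass in MeV (illustrative)

frag = on_shell(M_F, 0.0, 0.0, 14000.0)   # fast forward-going fragment
neut = on_shell(M_N, 20.0, 0.0, 1010.0)   # neutron with a small transverse kick
print(round(relative_energy(frag, neut, M_F, M_N), 3), "MeV")
```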
The differential Coulomb dissociation cross sections as a function of the excitation energy were obtained, and first experimental constraints on the photoabsorption and the neutron-capture cross sections were deduced. The results were compared to theoretical approximations of the cross sections in question. The Coulomb dissociation cross section of 15B into 14B(g.s.) + n was determined to be σCD(15B → 14B(g.s.) + n) = 81(8)stat(10)syst mb, while the Coulomb dissociation cross section of 14B into a neutron and 13B in its ground state was found to be σCD(14B → 13B(g.s.) + n) = 281(25)stat(43)syst mb. Furthermore, new information on the nuclear structure of 14B was obtained, as the spectral shape of the differential Coulomb dissociation cross section indicates a halo-like structure of the nucleus.
Additionally, the Coulomb dissociation of 11Be was investigated and compared to previous measurements in order to verify the present analysis. The corresponding Coulomb dissociation cross section of 11Be into 10Be(g.s.) + n was found to be 450(40)stat(54)syst mb, which is in good agreement with the results of Palit et al.
The study "Between Cooperation and Rivalry: Interactions and the Media-Based Organization of Creativity, Using the Example of the Coopetitive Idea Network jovoto" examines internet-based co-creativity, i.e., joint idea development in a media network. In contrast to previous studies, which dealt with the motives behind the frequently unpaid co-creative activities and with the innovation potential of this form of organization, this study focuses on the communication among the actors during co-creation. The design idea platform jovoto, which fosters creativity among its participants on the basis of coopetition (the simultaneity of cooperation and competition), was chosen as a case study. The idea authors in the jovoto network develop creative solutions in the fields of product design, campaigns, innovation, and architecture. Participants compete against each other with their ideas; at the same time, they comment on and rate each other's work during the idea-development process. The winners of the advertised prize money emerge from the community's ratings. This simultaneity of competition and cooperation gives rise to the research question of this study: How is the relationship between cooperation and competition determined in the co-creation network jovoto, and how does this affect creativity? To answer this question, I examine the interactions documented on the platform (comment threads) between idea developers and other community members using qualitative and quantitative methods, and I analyze twenty semi-structured guided interviews I conducted with idea authors on jovoto. For the theoretical framing of the observed phenomena, I draw both on cultural and communication theories of the radical-constructivist model of cognition and on the cultural theories of play of Johan Huizinga and Roger Caillois.
I also include approaches that describe co-creativity as a form of cultural productivity. A further point of reference is provided by studies on the productive relationship between cooperation and competition. These findings are complemented, for instance, by the thesis of the wisdom of the crowd in heterogeneous groups of decision-makers and by studies on the positive influence of differences and conflicts on group creativity. I bring these preliminary works together into a model: in it, the coopetitive context of action links a game of cooperation with a game of competition by means of creativity, which allows the focus of the game to shift from emphasizing and enlarging what the participants have in common to emphasizing their differences. From this I derive hypotheses that I test empirically: during a six-month observation of platform activity, I collected data on 135 competitions on jovoto. The analysis of more than 2,400 comments shows that the two guiding categories of "affirmation" and "challenge" characterize co-creative communication. Qualitative and quantitative analyses of 54 discussion threads based on these categories show that ideas that rank successfully are accompanied by more intense discussions than less successful ones. Remarkably, they receive not only a larger number of affirming comments but also more challenges. The highest average score is associated with a ratio of roughly eight affirmations per challenge. This result confirms the initial hypothesis that the idea competitions constitute a communication game with cooperative, competitive, and creative components: in the interactions around idea contributions, especially the successful ones, an alternation between affirmations and challenges prevails.
The statements of the idea authors in the interviews reveal a central conflict: the activity involves great effort and little prospect of winning. On balance, however, it appears worthwhile, since the actors can gain important learning experiences in the network and learn to assess their own abilities. The fact that the creative contributions of other competitors are actively discussed attests to the success of the co-creation model of organization. This runs counter to the predictions of conventional economic theory, which assumes purely self-interested actors, and points to the relevance of theories that treat mutual feedback and its gratifications as central factors in network-based production.
Cognitive flexibility and cognitive stability: neural and behavioral correlates in men and mice
(2014)
The ability to flexibly adjust behavior according to a changing environment is crucial to ensure a species' survival. However, the successful pursuit of goals also requires the stable maintenance of behavior in the face of potential distractors. Thus, cognitive flexibility and cognitive stability are important processes for the cognitive control of behavior. There is a large body of behavioral and neuroimaging research concerning cognitive control in general, but also specifically on cognitive flexibility and cognitive stability, albeit most often assessed in separate task paradigms. Nevertheless, whether cognitive flexibility and cognitive stability depend upon separate or shared neuronal bases is still a matter of debate. Complementing empirical research, computational models have become an important strategy in neuroscientific research, as they have the potential of providing mechanistic explanations of empirical observations, for example by allowing for the direct manipulation of molecular parameters in simulated neural networks. The computational model underlying the so-called Dual-State Theory contains specific hypotheses with respect to cognitive flexibility and cognitive stability. The neural networks simulated by this model exhibit multiple stable firing states; i.e., a network can maintain a high firing state even without continuing external input, due to an architecture of recurrently connected neurons. Transitions between such network states, also called attractor states, can be induced by external input, and represent working memory contents or active task rules. Simulations showed that the stability of these attractor states, and thus of task rule representations, depends on the dopamine state of the system and can consequently vary between persons.
The Dual-State Theory predicts an antagonistic relationship between cognitive flexibility and cognitive stability, as robust attractor states would facilitate the inhibition of distractors, but impair efficient task switching, while rather unstable attractor states would promote efficient transitions between representations but would also come at the cost of increased distractibility.
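The predicted stability-flexibility tradeoff can be caricatured by an overdamped particle in a double-well potential: deeper wells (more robust attractor states) yield fewer noise-driven transitions between the two states. This toy model and all its parameter values are illustrative, not the actual network model of the Dual-State Theory:

```python
import math
import random

def spontaneous_switch_rate(barrier, noise=0.8, steps=20000, dt=0.01, seed=1):
    """Toy bistable attractor: overdamped dynamics in the double well
    V(x) = barrier * (x^2 - 1)^2, driven by Gaussian noise.
    Counts transitions between the two wells ('spontaneous switches')."""
    rng = random.Random(seed)
    x, side, switches = 1.0, 1, 0
    for _ in range(steps):
        drift = -4.0 * barrier * x * (x * x - 1.0)   # -dV/dx
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x * side < 0:            # crossed into the other well
            switches += 1
            side = -side
    return switches / (steps * dt)  # switches per unit time

# Shallow wells (unstable attractors) should switch more often than deep ones
print(spontaneous_switch_rate(0.5), spontaneous_switch_rate(2.0))
```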
Based on the Dual-State Theory, a task paradigm was designed allowing for the simultaneous assessment of cognitive flexibility, in the sense of rule-based task switching, and cognitive stability, in the sense of inhibiting irrelevant distractors. Furthermore, a behavioral measure, the spontaneous switching rate (SSR), was developed to assess individual attractor state stability. In the first study of this work, this paradigm was tested in a sample of healthy human subjects using functional magnetic resonance imaging (fMRI). An overlapping fronto-parietal network was activated for both cognitive flexibility and cognitive stability. Furthermore, behavioral as well as neuroimaging results favor an antagonistic relationship between cognitive flexibility and cognitive stability. A specific prefrontal region, the inferior frontal junction (IFJ), was suggested to contain the relevant neural networks mediating the transitions between attractor states, i.e., task rule representations, as its activity was modulated by the SSR: persons with rather unstable attractor states activated it less during task switching while showing better performance. Most importantly, functional connectivity of the IFJ was antagonistically modulated by the SSR: more flexible persons connected it less to another prefrontal area during task switching, while showing higher functional connectivity during distractor inhibition.
In a second study, a larger human sample was assessed and further hypotheses derived from the Dual-State Theory on variability of neural processing were tested: we hypothesized that persons with high brain signal variability should have less stable network states and thus benefit on tasks requiring cognitive flexibility but suffer from it when the task requires a higher degree of cognitive stability. Furthermore, recent fMRI-research on brain signal variability revealed beneficial effects of higher brain signal variability on cognitive performance in general. Using a novel customized analysis pipeline to measure trial-to-trial fMRI-signal variability, we indeed found differential effects of brain signal variability: higher levels of brain signal variability were found to be beneficial for effectiveness, i.e., performance in terms of error rates, for both cognitive flexibility and stability. However, brain signal variability impaired the efficiency in terms of response times of inhibiting distractors, i.e., cognitive stability.
Because the Dual-State Theory also makes predictions concerning schizophrenia and the dopaminergic system, it was considered valuable to pursue a translational approach, allowing for the employment of animal models of psychiatric diseases. Consequently, in a first step the human paradigm was translated for a murine population using an innovative touchscreen approach. Results showed behavioral effects in wildtype mice analogous to those previously found in healthy humans: the antagonistic relation between cognitive flexibility and cognitive stability was replicated, and for mice, too, a behavioral measure of individual attractor stability, the individual spontaneous switching score, was established and validated.
To conclude, we established a novel paradigm assessing both cognitive flexibility and stability simultaneously, showing an antagonistic relationship between these two cognitive functions on the behavioral level in two different species, which supports predictions of the Dual-State Theory. This was further underlined by evidence at the level of activation, functional connectivity, and signal variability in the human brain.
Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and, in the future, the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD at high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus, despite the complexity of LQCD applications, it led to the development of several specialized compute platforms and influenced the development of others. In recent years, however, General-Purpose computation on Graphics Processing Units (GPGPU) has emerged as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision Dslash kernel for a single GPU, achieving 120 GFLOPS. Dslash, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system.
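The 120 GFLOPS figure is plausible from a simple roofline argument. Assuming the commonly quoted ~1320 flops per lattice site for Wilson Dslash and an estimated ~2880 bytes of memory traffic per site in double precision (8 gauge links of 18 reals each, 8 neighbor spinors read plus 1 written at 24 reals each, with no caching or compression), the kernel's arithmetic intensity is well below the machine balance, so it is bandwidth-bound. A back-of-the-envelope sketch using HD 7970 datasheet numbers:

```python
def dslash_roofline(flops_per_site=1320, bytes_per_site=2880,
                    mem_bw_gbs=264.0, peak_gflops=947.0):
    """Roofline estimate for a double-precision Wilson Dslash kernel.
    Flop/byte counts per site are standard textbook estimates; the
    bandwidth and peak are AMD Radeon HD 7970 datasheet values."""
    intensity = flops_per_site / bytes_per_site       # flop/byte
    return min(peak_gflops, intensity * mem_bw_gbs)   # attainable GFLOPS

print(round(dslash_roofline(), 1), "GFLOPS attainable")
```

The estimate lands at about 121 GFLOPS, close to the measured 120 GFLOPS, which supports the claim that the kernel runs at the bandwidth limit.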
LQCD calculations require a sampling of the phase space, which the hybrid Monte Carlo (HMC) algorithm performs. For this task, a single AMD Radeon HD 7970 GPU provides four times the performance of two AMD Opteron 6220 CPUs running an optimized reference code. The same advantage is achieved in terms of energy efficiency. In terms of normalized total cost of acquisition (TCA), GPU-based clusters match conventional large-scale LQCD systems; unlike those, however, they can be scaled up from a single node. Examples of large GPU-based systems are LOEWE-CSC and SANAM, on both of which CL2QCD has already been used in production for LQCD studies.
The present work deals with the integration of the variable renewable energy sources wind and solar into the European and US power grids. In contrast to other networks, such as the gas supply grid, the electricity network is practically unable to store energy. Generation and consumption must therefore be balanced at all times. Currently, the load curve is viewed as a rigid boundary condition that the generation system must follow. The basic idea of the approach followed here is that weather-dependent generation shifts the focus of the electricity supply. At high shares of wind and solar generation, the role of the rigid boundary condition falls to the residual load, that is, the load remaining after subtraction of renewable generation. The goal is to include the weather dependence as well as the load curve in the design of the future electricity supply.
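The residual-load idea can be stated in a few lines: subtract the weather-dependent generation from the load, and whatever remains must be covered by dispatchable plants (positive residual) or appears as surplus (negative residual). The toy time series below are invented purely for illustration:

```python
import numpy as np

hours = 24
load  = 70 + 20 * np.sin(np.linspace(0, 2 * np.pi, hours))            # GW, stylized
wind  = np.random.default_rng(0).uniform(5, 35, hours)                # GW, synthetic
solar = np.clip(40 * np.sin(np.linspace(-np.pi/2, 3*np.pi/2, hours)), 0, None)

residual = load - wind - solar                 # what dispatchable plants must cover
backup   = np.clip(residual, 0, None).sum()    # dispatchable energy needed (GWh)
surplus  = np.clip(-residual, 0, None).sum()   # excess renewable energy (GWh)
print(round(float(backup), 1), round(float(surplus), 1))
```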
After a brief introduction, the present work first turns to the underlying weather, generation, and load data, which form the starting point of the analysis. In addition, some basic concepts of energy economics that are needed in the following are discussed.
In the main part of the thesis, several algorithms are developed to determine the load flow in a network with a high share of wind and solar energy and, at the same time, the backup supply needed. Minimization of the energy needed from controllable power plants, of the controllable capacity, and of the storage capacity serves as the guiding principle. In addition, the optimization problem of grid extensions is considered; it is shown that it can be formulated as a convex optimization problem. It turns out that with an optimized international transmission network of about four times the currently available transmission capacity, much of the potential savings in backup energy (about 40%) in Europe can be realized. In contrast, a twelvefold increase in transmission capacity would be necessary to realize all possible savings in dispatchable power plants.
The reduction of the dispatchable generation capacity and storage capacity, however, presents a greater challenge. Due to correlations between the generation time series of individual countries, it can be reduced only with difficulty, and by only about 30%.
In the following, the influence of the relative share of wind and solar energy is examined, together with its interplay with transmission line capacity. A stronger transmission network tends to allow a higher proportion of wind energy to be integrated. With increasing line capacity, the optimal mix in Europe therefore shifts from about 70% to 80% wind. Similar analyses are carried out for the US, with comparable results.
In addition, the cost of the overall system can be reduced. Interestingly, the advantages for network integration may outweigh the higher production costs of individual technologies, so that from the viewpoint of the entire system it can be more favourable to use the more expensive technologies.
Finally, attention is given to the flexibility of the dispatchable power plants. Starting from a Fourier-like decomposition of the load curve as it was a few years ago, when hardly any renewable generation capacity was present, capacities for different flexibility classes of dispatchable power plants are calculated. For this purpose, it is assumed that the power plant fleet is able to follow the load curve without significant surpluses or deficits. From this examination, the minimum capacity that must be available is derived without resorting to a detailed database of existing power plants.
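A stylized version of such a decomposition: split the load curve into a slow and a fast component in Fourier space and read off the capacity each flexibility class must supply. The single 24-hour cutoff and the synthetic load curve are illustrative simplifications, not the thesis' actual classification:

```python
import numpy as np

def flexibility_capacities(load, cutoff_period_h=24.0, dt_h=1.0):
    """Split a load curve into slow and fast components via FFT and report
    the capacity each flexibility class must provide (peak of its component).
    A single frequency cutoff is an illustrative simplification."""
    freqs = np.fft.rfftfreq(len(load), d=dt_h)        # cycles per hour
    spec = np.fft.rfft(load)
    slow = spec.copy()
    slow[freqs > 1.0 / cutoff_period_h] = 0.0         # keep mean + slow variations
    slow_t = np.fft.irfft(slow, n=len(load))
    fast_t = load - slow_t                            # what fast plants must follow
    return float(slow_t.max()), float(np.abs(fast_t).max())

t = np.arange(0, 24 * 7, 1.0)                         # one week, hourly resolution
load = (60 + 10 * np.sin(2 * np.pi * t / 168)         # weekly swing (GW, stylized)
           + 8 * np.sin(2 * np.pi * t / 24)           # daily swing
           + 3 * np.sin(2 * np.pi * t / 6))           # fast intra-day swing
base_cap, flex_cap = flexibility_capacities(load)
print(round(base_cap, 1), round(flex_cap, 1))
```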
Assuming strong European cooperation, with a stronger international transmission network, the dispatchable power capacity can be significantly reduced while maintaining security of supply and generating relatively small surpluses in dispatchable power plants.
The present work is the distillate of extensive investigations of selected organic and organometallic compounds with regard to their polymorphism. Not only the practical work, such as syntheses, polymorph screenings (following a purpose-developed procedure), and the chemical-physical characterization of new polymorphic forms, but also the crystal structure determinations from X-ray diffraction data were carried out. The investigations focused on pigment precursors, pigments, active pharmaceutical ingredients, and further organic and organometallic compounds.
Among the pigment precursors studied are 2-ammoniobenzenesulfonates, which are classical precursors of laked hydrazone pigments. The previously unknown tautomeric form and crystal structure of CLT acid as a zwitterion were successfully determined from the interplay of IR spectroscopy, solid-state NMR spectroscopy, and X-ray powder diffraction [1]. By means of a polymorph screening, not only two new pseudopolymorphs but also the ansolvate of CLT acid itself could be crystallized and their structures determined from single-crystal X-ray data. In doing so, the tautomer of CLT acid proposed on the basis of the structure determination from X-ray powder data was verified [2]. Thermal and X-ray investigations of three derivatives of CLT acid revealed the influence of the substitution pattern (chlorine and methyl substituents) on the crystal packing [3]. A polymorph screening of iso-CLT acid revealed reactions between iso-CLT acid and the solvents used for the screening, which were elucidated by thermal and X-ray results. In two cases a deprotonation and in a third case a desulfonation was observed. Heating the deprotonated and desulfonated compound(s) allows iso-CLT acid to be recovered [4]. A polymorph screening of Pigment Red 53 yielded twenty-four new phases, which were identified and characterized. Furthermore, the crystal structures of nine phases were determined (eight from single-crystal and one from powder X-ray data). These structures revealed, in part, the functionalization of solvent molecules in Pigment Red 53. Various relationships between pseudopolymorphic forms could be established on the basis of the insights gained [5].
Using the results of the Pigment Red 53 investigations, the known α-phase of Pigment Red 53:2 could be obtained by modifying the known synthesis route. In addition to ten known polymorphs, fifteen new ones were identified and largely characterized. The crystal structures of five known and two new phases were determined (six from single-crystal and one from powder X-ray data [6]). Based on the results of the chemical-physical characterizations, individual relationships between the phases were observed. Our own synthesis of Pigment Red 57:1 yielded the already known α-phase. A polymorph screening of the α-phase yielded, besides the already known β-phase, eleven new modifications. Finally, with the data collected on the α-, β-, and γ-phases, a connection between de-/rehydration and the accompanying color change could be established and verified [7]. The previously unknown protonation state and crystal structure of nimustine hydrochloride were successfully determined from the combination of solid-state NMR and X-ray powder diffraction of the commercial product [8]. Likewise, the previously unknown crystal structure of 5′-deoxy-5-fluorouridine was successfully determined from X-ray powder data of the commercial product [9]. After an extensive polymorph screening, single crystals of tizanidine hydrochloride were obtained and their structure was determined from single-crystal X-ray data. The previously assumed tautomerism of tizanidine in the solid state and in the liquid phase was corrected by means of single-crystal X-ray diffraction and 1H NMR [10]. During thermal investigations, two previously undescribed polymorphs (a high-temperature and a room-temperature polymorph) were found.
Finally, the crystal structure of the second polymorph stable at room temperature was determined from X-ray powder data.
The crystal structures of 4,5,9,10-tetramethoxypyrene [11], of a new 2:1 co-crystal of quinoline and fumaric acid [12], and of a biradical azo compound [13] were determined from X-ray powder data. In addition to these compounds, thirteen new phases were obtained from analytical and X-ray investigations of selected stereoisomers of inositol. Moreover, several melting points were corrected by DSC measurements or identified as phase transitions or decomposition points. Eight structures of ordered phases were determined from X-ray powder data. Additionally, five of the thirteen phases were identified as rotator phases and their unit cells determined [14]. Based on a new synthesis route, a cobalt(II) and a zinc(II) fumarate anhydrate were obtained, which are more water-soluble than the previously known hydrates. These anhydrates were used for crystallization and yielded all previously known crystal phases plus three new ones (two new cobalt(II) fumarate hydrates and one new zinc(II) fumarate hydrate). The crystal structures of the three new phases were determined from single-crystal X-ray data [15, 16].
[1] S. L. Bekö, S. D. Thoms, J. Brüning, E. Alig, J. van de Streek, A. Lakatos, C. Glaubitz & M. U. Schmidt (2010), Z. Kristallogr. 225, 382–387;
[2] S. L. Bekö, J. W. Bats & M. U. Schmidt (2012), Acta Cryst. C 68, o45–o50;
[3] S. L. Bekö, C. Czech, M. A. Neumann & M. U. Schmidt, "Doubly substituted 2-ammonio-benzenesulfonates: Substituent influence on the packing pattern", submitted;
[4] S. L. Bekö, J. W. Bats, E. Alig & M. U. Schmidt (2013), J. Chem. Cryst. 43, 655–663;
[5] S. L. Bekö, E. Alig, J. W. Bats, M. Bolte & M. U. Schmidt, "Polymorphism of C.I. Pigment Red 53", submitted;
[6] T. Gorelik, M. U. Schmidt, J. Brüning, S. Bekö & U. Kolb (2009), Cryst. Growth Des. 9, 3898–3903;
[7] S. L. Bekö, S. M. Hammer & M. U. Schmidt (2012), Angew. Chem. Int. Ed. 51, 4735–4738 and Angew. Chem. 124, 4814–4818;
[8] S. L. Bekö, D. Urmann, A. Lakatos, C. Glaubitz & M. U. Schmidt (2012), Acta Cryst. C 68, o144–o148;
[9] S. L. Bekö, D. Urmann & M. U. Schmidt (2012), J. Chem. Cryst. 42, 933–940;
[10] S. L. Bekö, S. D. Thoms, M. U. Schmidt & M. Bolte (2012), Acta Cryst. C 68, o28–o32;
[11] M. Rudloff, S. L. Bekö, D. Chercka, R. Sachser, M. U. Schmidt, K. Müllen & M. Huth, "Structural and electronic properties of the organic charge transfer system 4,5,9,10-tetramethoxypyrene - 7,7,8,8-tetracyanoquinodimethane", submitted;
[12] S. L. Bekö, M. U. Schmidt & A. D. Bond (2012), CrystEngComm 14, 1967–1971;
[13] S. L. Bekö, S. D. Thoms & M. U. Schmidt (2013), Acta Cryst. C 43, 1513–1515;
[14] S. L. Bekö, E. Alig, M. U. Schmidt & J. van de Streek (2014), IUCrJ 1, 61–73;
[15] S. L. Bekö, J. W. Bats & M. U. Schmidt (2009), Acta Cryst. C 65, m347–m351;
[16] S. L. Bekö, J. W. Bats & M. U. Schmidt, "One-dimensional zinc(II) fumarate coordination polymers", accepted.
The laser-driven acceleration of protons from thin foils irradiated by hollow high-intensity laser beams in the regime of target normal sheath acceleration (TNSA) is reported for the first time. The use of hollow beams aims at reducing the initial emission solid angle of the TNSA source by flattening the electron sheath at the target rear side. The experiments were conducted at the PHELIX laser facility at the GSI Helmholtzzentrum für Schwerionenforschung GmbH with laser intensities in the range from 10^18 to 10^20 W/cm^2. We observed an average reduction of the half opening angle by (3.07±0.42)° or (13.2±2)% for targets with thicknesses between 12 and 14 μm. In addition, the highest proton energies were achieved with the hollow laser beam in comparison to the typical Gaussian focal spot.
Fast nuclei are ionizing radiation that can cause deleterious effects in irradiated cells. Modelling the interactions of such ions with matter and the related effects is very important to physics, radiobiology, medicine, and space science and technology. A powerful method for studying the interactions of ionizing radiation with biological systems was developed in the field of microdosimetry. Microdosimetry spectra characterize the energy deposition in objects of cellular size, i.e., a few micrometers.
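Such spectra are conventionally summarized by the frequency-mean and dose-mean lineal energy, the first moment and the ratio of second to first moments of the single-event lineal-energy distribution. A minimal sketch of these standard estimators (the event sample below is invented for illustration; the thesis's actual spectra come from measurements and MCHIT simulations):

```python
# Frequency-mean (y_F) and dose-mean (y_D) lineal energy from a list of
# single-event lineal energies y_i (keV/um), following the standard
# microdosimetry definitions: y_F = <y>, y_D = <y^2> / <y>.

def mean_lineal_energies(events):
    """Return (y_F, y_D) for a list of event lineal energies."""
    n = len(events)
    y_f = sum(events) / n                           # first moment
    y_d = sum(y * y for y in events) / sum(events)  # second / first moment
    return y_f, y_d

# Hypothetical event sample (keV/um), illustrative only.
sample = [1.0, 2.0, 2.0, 5.0, 10.0]
y_f, y_d = mean_lineal_energies(sample)
print(y_f, y_d)  # y_D is weighted toward the rare large events
```

Because y_D weights each event by its own size, densely ionizing (high-LET) radiation shifts it upward far more strongly than y_F, which is why dose-mean quantities enter RBE models such as the MKM.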
In the present thesis the interaction of ions with tissue-like media was investigated using the Monte Carlo model for Heavy-Ion Therapy (MCHIT) developed at the Frankfurt Institute for Advanced Studies. MCHIT is a Geant4-based application intended to benchmark the physical models of Geant4 and investigate the physical properties of therapeutic ion beams. We have implemented new features in MCHIT in order to calculate microdosimetric quantities characterizing the radiation fields of accelerated nucleons and nuclei. The results of our Monte Carlo simulations were compared with recent experimental microdosimetry data.
In addition to microdosimetry calculations with MCHIT, we also investigated the biological properties of ion beams, e.g. their relative biological effectiveness (RBE), by means of the modified Microdosimetric-Kinetic model (MKM). The MKM uses microdosimetry spectra in describing cell response to radiation. MCHIT+MKM allowed us to study the physical and biological properties of ion beams. The main results of the thesis are as follows:
- MCHIT is able to describe the spatial distribution of the physical dose in tissue-like media and microdosimetry spectra for ions with energies relevant to space research and ion-beam cancer therapy.
- MCHIT+MKM predicts a reduction of the biological effectiveness of ions propagating in an extended medium due to nuclear fragmentation reactions.
- Favourable biological dose-depth profiles, similar to that of a carbon beam, were predicted for monoenergetic helium and lithium beams. Well-adjusted biological dose distributions for H-1, He-4, C-12 and O-16 with a very flat spread-out Bragg peak (SOBP) plateau were calculated with MCHIT+MKM.
- MCHIT+MKM predicts less damage to healthy tissues in the entrance channel for SOBP He-4 and C-12 beams than for H-1 and O-16 ones. No definitive advantage of oxygen ions over carbon was found.
Biopharmaceuticals are nowadays an important part of the pharmaceutical market. Their complex structure and microheterogeneity require precise structural characterization at the different levels of the molecules, and the application of new methods is explicitly encouraged by the relevant guidelines. Mass spectrometry is already firmly established as an analytical method in this field. A wide variety of mass spectrometric investigations can be performed on intact biopharmaceuticals as well as on larger and smaller fragments thereof. Nevertheless, analysts usually fall back on a few long-established protocols that often involve lengthy sample preparation. For glycosylation analysis, chromatographic separation followed by UV or fluorescence detection is still preferred.
This work set out to examine the possibilities of mass spectrometry for the analysis of biopharmaceuticals in more detail. This includes fully exploiting the high information content of the customary chromatographic separation of peptides from a proteolytic digest. It was shown that manual evaluation of the analysis yields additional results, and that post-translational and process-related modifications can be analyzed simultaneously. In addition, digestion with the protease trypsin was optimized for the respective biopharmaceutical and for the aim of the analysis. Since complete sequence coverage could not be achieved with trypsin, several less specific proteases were also applied. All of the less specific proteases investigated (elastase, chymotrypsin, and thermolysin) were well suited for such an analysis. The complementarity of MALDI- and ESI-MS analyses was fully exploited by combining them. Furthermore, additional methods for increasing sequence coverage, such as derivatization of the peptides with TMTzero, were presented.
For the analysis of intact biopharmaceuticals, both MALDI- and ESI-MS analyses were used alongside size-exclusion chromatography and gel-electrophoretic separations. Splitting large protein molecules into smaller subunits considerably facilitated the mass spectrometric analysis. Fragmentation of the biopharmaceuticals by MALDI-ISD was very well suited for determining the protein N- and C-termini.
Glycosylation analysis was performed on free N-glycans from a PNGase F digest as well as on glycopeptides from a pronase digest. The free N-glycans could moreover be derivatized for MALDI-MS analysis with the MALDI matrix 3-aminoquinoline directly on the target plate. Derivatization and measurement of the N-glycans were first optimized on various standard oligosaccharides, human milk oligosaccharides, and N-glycans from standard glycoproteins. Fragmentation of the N-glycans allowed them to be sequenced and isomeric structures to be distinguished.
In a pronase digest, proteins were digested until only single amino acids or di- and tripeptides remained. Only the glycosylation sites were protected from digestion by the bulky glycan structures and retained a short peptide sequence, which sufficed to identify the glycosylation site. The N- and O-glycopeptides could thus be analyzed by MALDI-MS directly from the digestion mixtures, without purification and without interference from non-glycosylated peptides. The digestion protocol was first optimized on several standard N- and O-glycoproteins and then applied to the biopharmaceuticals under investigation. N- and O-glycopeptides could even be analyzed side by side. The high mass accuracy of the MALDI LTQ Orbitrap mass spectrometer used allowed unambiguous identification of the glycopeptides with the aid of a program developed for this purpose. Identification was further supported by fragmentation of the glycopeptides.
Thus, in this work, various mass spectrometric analyses of biopharmaceuticals were newly developed, optimized, or simplified. For each structural level (intact molecule, larger and smaller fragments), approaches with both MALDI-MS and ESI-MS were pursued. Several methods already in use in proteomics research were successfully transferred to biopharmaceuticals. The work shows that mass spectrometry holds great potential for the analysis of biopharmaceuticals that has not yet been fully exploited. Choosing the right methods and suitable instrumentation enables complete structural characterization.
HIV vaccine preclinical testing is difficult because HIV’s only relevant hosts are humans and no correlates of protection are known. To address this, we are working on humanizing different mouse strains with human peripheral blood mononuclear cells (PBMCs) as well as human hematopoietic stem cells (HSCs) to generate a useful small-animal model.
We generated immunodeficient mice (NOD Scid IL2gc-/- / NOD Rag1-/- IL2gc-/-) expressing human MHC class II (HLA-DQ8) on a mouse class II-deficient background (Ab-/-). The human HLA-DQ8 is expected to interact with the matching T-cell receptors of transferred HLA-matched human PBMCs and thus to support the functionality of the transferred human CD4+ cells in the mice.
Mice adoptively transferred with human HLA-DQ8 PBMCs showed engraftment of CD3+ T cells only. Surprisingly, the presence of HLA class II did not significantly change the repopulation rates in the mice, nor did it advance B-cell engraftment, so that humoral immune responses remained undetectable. However, the overall survival of DQ8-expressing mice was significantly prolonged compared to mice expressing mouse MHC class II molecules, and correlated with a longer time to onset of graft-versus-host disease (GvHD).
To avoid GvHD and to raise and maintain the level of human cell reconstitution over a long period of time, the same mouse strains were reconstituted with human HSCs. Compared to PBMC-repopulated mice, HSC-reconstituted mice developed almost all subpopulations of the human immune system, detectable by week 12 after HSC transfer. These mice mounted adaptive immune responses after tetanus toxoid (TT) immunizations. In addition, we are testing the susceptibility of these humanized mice to different HIV strains, with a detailed look at immune responses.
This work addresses the topic of informal learning by way of a qualitative empirical study. Its focus is the investigation of the various phenomena of informal learning: the learning contents, forms, and modalities, as well as the significance of context for this form of learning. Beyond that, an engagement with the discourse on informal learning is pursued at a theoretical level. In addition to a literature review, whose result is that informal learning is underexposed both theoretically and empirically, a systematic understanding of learning in informal contexts is sought. The focus of the work, however, lies on the empirical study of the phenomenon of informal learning. To this end, two-stage interviews and participant observations were conducted and analyzed in the context of a municipal wildlife management project. On the basis of these empirical data, different learning outcomes and forms of informal learning as well as different learning-relevant (context) factors could be reconstructed. In addition, specific characteristics and particularities of learning in informal contexts were worked out.
During the last decade of the 20th century, the field of mass spectrometry saw a revolutionary change in its application and scope. The introduction of soft ionization methods for the analysis of biological molecules expanded the reach of mass spectrometry from its early roots in the analysis of inorganic and organic species into the fields of biology and medicine.
Today, mass spectrometry is used in a wide range of applications in the biotechnology and pharmaceutical industries and in geological, environmental and clinical research. In biochemistry, its principles are broadly applied to accurate molecular weight determination, reaction monitoring, amino acid sequencing, oligonucleotide sequencing and protein structure analysis.
In order to carry out their biological activities, proteins most often interact with one another and form transient or stable complexes. In addition, some proteins interact specifically with non-protein molecules such as DNA, RNA or metabolites, and these interactions are critical for their function. Hence, defining the composition of protein complexes, as well as understanding how protein complexes are assembled and regulated, yields invaluable insights into protein function. Coupled with an isolation technique to purify a specific protein complex of interest, mass spectrometry can rapidly and reliably identify the components of complexes. In addition, quantitative MS techniques offer the possibility of studying dynamically regulated interactions...
Terrestrial climate and ecosystem evolution during the ‘Greenhouse Earth’ phases of the early Paleogene remains incompletely known. In particular, paleobotanical records from the high southern latitudes give only limited insight into the Paleocene and early Eocene vegetation of the region. Hence, data from continuous, well-calibrated sequences are required to make progress with the reconstruction of terrestrial climate and ecosystem dynamics at southern high latitudes during the early Paleogene.
In order to elucidate the terrestrial conditions at the high southern latitudes during the early Paleogene, terrestrial palynology was applied in the present study to two well-dated deep-marine sediment cores from the Australo-Antarctic region: (i) IODP Site U1356 (Wilkes Land margin, East Antarctica) and (ii) ODP Site 1172 (East Tasman Plateau, southwest Pacific Ocean). The studied sequence from IODP Site U1356 comprises mid-shelfal sediments of early to middle Eocene age (53.9 – 46 million years ago [Ma]). At ODP Site 1172, the studied succession comprises sediments deposited in shallow-marine environments from the middle Paleocene to the early Eocene (60.7 – 54.2 Ma).
Based on the pollen and spore (sporomorph) results obtained from the studied sequences of Site U1356 and Site 1172, this study aims to: (1) decipher the terrestrial climate conditions along the Australo-Antarctic region from the middle Paleocene to the middle Eocene; (2) evaluate the structure, diversity and compositional patterns of the forests that thrived in the Australo-Antarctic region during the early Paleogene; (3) understand the response of forests at the high southern latitudes to the climate dynamics of the early Paleogene; (4) link the generated terrestrial palynomorph data to published sea surface temperatures (SSTs) from the same cores.
To decipher the terrestrial climatic conditions of the Australo-Antarctic region, this study relies on the nearest living relative (NLR) concept, which assumes that fossil taxa had climate requirements similar to those of their modern counterparts. This approach was applied to the sporomorph results of Site U1356 and Site 1172, mainly following the bioclimatic analysis. With regard to the structure and diversity patterns of the vegetation of the same region, the present study combines qualitative analyses (i.e., reconstruction of the vegetation based mainly on the habitats of the known living relatives) and quantitative analyses (i.e., application of ordination techniques, rarefaction and diversity indices) of the fossil sporomorph results.
The overall results of the paleoclimatic and vegetation reconstruction approaches applied in the present study indicate that, during the early Paleogene, temperate and paratropical forests thrived under different climatic conditions on the Wilkes Land margin and on Tasmania, at paleolatitudes of ∼70°S and ∼65°S, respectively.
Specifically, the sporomorph results from Site U1356 suggest that a highly diverse forest, similar to present-day forests of New Caledonia, was thriving on Antarctica during the early Eocene (53.9 – 51.9 Ma). These forests were characterized by the presence of thermophilous taxa that are today restricted to tropical and subtropical settings, notably Bombacoideae, Strasburgeria, Beauprea, Spathiphyllum, Anacolosa and Lygodium. In combination with MBT/CBT paleotemperature results, they provide strong evidence for near-tropical warmth, at least in the coastal lowlands along the Wilkes Land margin. The coeval presence of frost-tolerant taxa such as Nothofagus, Araucariaceae and Podocarpaceae in the same early Eocene record suggests that paratropical forests were thriving along the Wilkes Land margin. The presence of this kind of vegetation suggests that forests in this region were subject to a climatic gradient related to differences in elevation and/or proximity to the coastline.
By the middle Eocene, the paratropical forests that had characterized the early Eocene vegetation of the Wilkes Land margin were replaced by low-diversity temperate forests dominated by Nothofagus, similar to present-day cool-temperate forests of New Zealand. The dominance of these forests and the absence of thermophilous elements, together with the lower temperatures indicated by both the MBT/CBT- and the sporomorph-based estimates, point to consistently cooler conditions during this time interval.
With regard to the sporomorph results of Site 1172, this study suggests that three vegetation types thrived on Tasmania from the middle Paleocene to the early Eocene under different climatic conditions. During the middle to late Paleocene, warm-temperate forests dominated by Podocarpaceae and Araucariaceae prevailed on Tasmania. Their dominance was interrupted by a transient predominance of cool-temperate forests dominated by Nothofagus and Araucariaceae across the middle/late Paleocene transition interval (~59.5 to ~59.0 Ma). This cool-temperate forest lacked frost-sensitive elements (i.e., palms and cycads), indicating cooler conditions with harsher winters on Tasmania during this time interval. By the early Eocene, and linked with the Paleocene-Eocene Thermal Maximum (PETM), the gymnosperm-dominated Paleocene temperate forests were replaced by paratropical rainforests, with the remarkable presence of the tropical mangrove palm Nypa during the PETM and the earliest Eocene. The overall results from Site U1356 and Site 1172 provide a new assessment of the terrestrial climatic conditions in the Australo-Antarctic region for validating climate models and for understanding the response of high-latitude terrestrial ecosystems to the climate dynamics of the early Paleogene at southern latitudes.
The climatic conditions at the higher latitudes during the early Paleogene were further unravelled by comparing the terrestrial and marine results. The integration of the sporomorph data with previously published TEX86-based SSTs from Site 1172 documents that the vegetation dynamics were closely linked to the temperature evolution of the Australo-Antarctic region. Moreover, the comparison of TEX86-based SSTs and sporomorph-based climatic estimates from Site 1172 suggests a warm-season bias of both calibrations of TEX86 (i.e., TEX86H and TEX86L) when this proxy is applied to high-southern-latitude records of the early Paleogene.
The work presented in this thesis is devoted to two classes of models from mathematical population genetics, namely the Kingman coalescent and the Beta coalescents. Chapters 2, 3 and 4 of the thesis present results on the first model, whereas Chapter 5 presents contributions to the second class of models.
The objective of the present doctoral thesis was to investigate the occurrence, distribution, and behaviour of six hydrophilic ethers in surface water, wastewater, groundwater, and drinking water samples: ethyl tert-butyl ether (ETBE), 1,4-dioxane, ethylene glycol dimethyl ether (monoglyme), diethylene glycol dimethyl ether (diglyme), triethylene glycol dimethyl ether (triglyme), and tetraethylene glycol dimethyl ether (tetraglyme). Solid phase extraction and gas chromatography/mass spectrometry were used to analyze the six hydrophilic ethers. Altogether, more than 150 surface water samples, almost 100 each of groundwater and wastewater samples, and 10 raw and drinking water samples were analyzed during the research project.
Initially, the method was validated for the simultaneous determination of the analytes of interest in various aquatic matrices. A solid phase extraction method using coconut charcoal (Resprep® activated coconut charcoal, Restek) or carbon molecular sieve material (Supelclean™ Envi-Carb™ Plus, Supelco) for analyte adsorption was found suitable for determining ETBE, 1,4-dioxane, and the glymes in surface, drinking, ground- and wastewater samples. Precision and accuracy were demonstrated for all analytes of interest with both sorbents. The recovery of target compounds from ultrapure water spiked at 1.0 µg L−1 was between 86.8 % and 98.2 %, with relative standard deviations below 6 %. Samples spiked at 10.0 µg L−1 gave slightly higher recoveries of 90.6 % to 112.2 %, with relative standard deviations below 3.4 % for each analyte. Detection and quantification limits in ultrapure water and surface waters were also established. The limit of quantitation (LOQ) in ultrapure water ranged from 0.024 µg L−1 to 0.057 µg L−1 using the Restek cartridges and from 0.030 µg L−1 to 0.069 µg L−1 using the Supelco cartridges. In surface water samples, the calculated LOQ was 0.032 µg L−1 to 0.067 µg L−1 using the coconut charcoal material and 0.032 µg L−1 to 0.052 µg L−1 using the carbon molecular sieve material. Moreover, the stability of unpreserved and preserved water samples as well as of the extracts was determined. Preservation of the samples with sodium bisulfate (1 g L−1) resulted in much better stability of the ethers in water samples. Subsequently, 27 samples from seven surface water bodies in Germany (the Rhine, Lippe, Main, Oder, Rur and Schwarzbach Rivers and the Wesel-Datteln Canal) were analyzed for the six hydrophilic ethers. ETBE was present in only two surface waters (the Rhine River and the Wesel-Datteln Canal), at concentrations close to the LOQ (up to 0.065 µg L−1). 1,4-Dioxane was detected in all of the water samples at concentrations reaching 1.93 µg L−1.
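Recovery and relative standard deviation figures of the kind quoted above follow directly from replicate measurements of spiked samples; a minimal sketch (the replicate values below are hypothetical, not the study's data):

```python
import statistics

def recovery_and_rsd(measured, spiked):
    """Mean recovery (%) and relative standard deviation (%) from
    replicate measurements of a sample spiked at `spiked` ug/L."""
    mean = statistics.mean(measured)
    rsd = statistics.stdev(measured) / mean * 100  # RSD = s / mean * 100
    recovery = mean / spiked * 100                 # recovery relative to spike
    return recovery, rsd

# Hypothetical replicates for a 1.0 ug/L spike, illustrative only.
replicates = [0.93, 0.95, 0.90, 0.94, 0.92]
rec, rsd = recovery_and_rsd(replicates, spiked=1.0)
print(rec, rsd)
```

A recovery near 100 % with a small RSD across replicates is what qualifies a sorbent/analyte pair as suitable in a validation of this kind.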
Monoglyme was identified only in the Main and Rhine Rivers, at maximum concentrations of 0.114 µg L−1 and 0.427 µg L−1, respectively. Very high concentrations (up to 1.73 µg L−1) of diglyme, triglyme, and tetraglyme were detected in the samples from the Oder River. These glymes were also detected in the Rhine River; however, the concentrations did not exceed 0.200 µg L−1. Furthermore, tetraglyme was detected in the Main River at an average concentration of 0.409 µg L−1 (n = 6) and in one sample from the Rur River at 0.192 µg L−1.
Four sampling campaigns were conducted at the Oderbruch polder between October 2009 and May 2012 in order to study the behavior of the hydrophilic ethers and organophosphates during riverbank filtration and in the anoxic aquifer. Moreover, the suitability of these target compounds for use as organic groundwater tracers was assessed. At the time of each sampling campaign, concentrations of triglyme and tetraglyme in the Oder River were 20–185 ng L−1 (n = 4) and 273–1576 ng L−1 (n = 4), respectively. Monoglyme, diglyme, and 1,4-dioxane were analyzed only during the last two sampling campaigns; at that time, the concentration of diglyme in the Oder River was 65–94 ng L−1 (n = 2) and that of 1,4-dioxane 1610–3290 ng L−1 (n = 2). In the drainage ditch, following bank filtration, concentrations ranged between 1090 ng L−1 and 1467 ng L−1 for 1,4-dioxane, 23 ng L−1 and 41 ng L−1 for diglyme, 37 ng L−1 and 149 ng L−1 for triglyme, and 496 ng L−1 and 1403 ng L−1 for tetraglyme. In the anoxic aquifer, 1,4-dioxane showed the greatest persistence during the groundwater passage: at a distance of 1150 m from the river and an estimated groundwater age of 41.9 years, a concentration above 200 ng L−1 was still detected. A positive correlation was found between the inorganic tracer chloride (Cl−) and both 1,4-dioxane and tetraglyme. The similar behavior of Cl− and these organic compounds suggests that 1,4-dioxane and tetraglyme are controlled by the same hydraulic processes and can therefore be used as additional tracers to study the dynamics of the groundwater system. These results show that high concentrations of the ethers are present in the surface water and are not removed during bank filtration. Moreover, the hydrophilic ethers persist in the anoxic aquifer, and little or no degradation is expected, supporting their possible application as organic tracers.
A separate sampling project focused primarily on the fate of 1,4-dioxane in the aquatic environment. This study provided missing information on the extent of water pollution with 1,4-dioxane in Germany. Numerous wastewater, surface water, groundwater, and drinking water samples were collected in order to determine the persistence of 1,4-dioxane in the aquatic environment. The occurrence of 1,4-dioxane was determined in wastewater samples from four municipal sewage treatment plants (STPs). Influent and effluent samples were collected during weekly campaigns. The average influent concentrations in the four plants ranged from 262 ± 32 ng L−1 to 834 ± 480 ng L−1, whereas the average effluent concentrations were between 267 ± 35 ng L−1 and 62,260 ± 36,000 ng L−1. The elevated 1,4-dioxane concentrations in one of the effluents were traced to impurities in the methanol used in the post-anoxic denitrification process. The spatial and temporal distribution of 1,4-dioxane in the Main, Rhine, and Oder Rivers was also examined. Concentrations reached 2,200 ng L−1 in the Oder River and 860 ng L−1 in both the Main and Rhine Rivers. The average load during the sampling was estimated at 6.5 kg d−1 in the Main, 34.1 kg d−1 in the Oder, and 134.5 kg d−1 in the Rhine River. In all of the sampled rivers, concentrations of 1,4-dioxane increased with distance from the mouth of the river and correlated negatively with the river discharge. To determine whether 1,4-dioxane can reach drinking water supplies, samples from a Rhine River bank filtration site and potable water from two drinking water production facilities were analyzed for 1,4-dioxane in the raw water and the finished potable water. The raw water (following bank filtration) contained 650 ng L−1 to 670 ng L−1 of 1,4-dioxane, whereas the concentrations in the finished drinking water fell only to 600 ng L−1 and 490 ng L−1, respectively.
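River loads of the kind quoted above are, in effect, concentration times discharge with unit conversions; a minimal sketch (the concentration and discharge values used in the example are hypothetical stand-ins, not the thesis's measured figures):

```python
def daily_load_kg(concentration_ng_per_l, discharge_m3_per_s):
    """Daily mass load (kg/d) of a solute from its concentration (ng/L)
    and the river discharge (m3/s).

    kg/d = ng/L * 1e-12 kg/ng * m3/s * 1000 L/m3 * 86400 s/d
    """
    return concentration_ng_per_l * 1e-12 * discharge_m3_per_s * 1000 * 86400

# E.g. 500 ng/L at a discharge of 2000 m3/s (hypothetical values):
load = daily_load_kg(500, 2000)
print(round(load, 1))  # 86.4 kg/d
```

The negative correlation between concentration and discharge noted in the text is consistent with this relation: a roughly constant mass input diluted by a varying discharge yields higher concentrations at low flow.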
During the final project, the sources of the high glyme concentrations in the Oder River were investigated. During four sampling campaigns between January 2012 and April 2013, 50 samples from the Oder River in the Oderbruch region and in Poland were collected. During the first two samplings in the Oderbruch polder, glymes were detected in the Oder River at concentrations reaching 0.07 µg L−1 (diglyme), 0.54 µg L−1 (triglyme) and 1.73 µg L−1 (tetraglyme). The extensive sampling campaign along the Oder River (about 500 km) in Poland helped to identify the area where glymes probably enter the river. During that sampling, the maximum concentrations of triglyme and tetraglyme were 0.46 µg L−1 and 2.21 µg L−1, respectively. A closer investigation of the identified area of pollution helped to determine the possible sources of glymes in the Oder River. Hence, the final sampling focused on the Kaczawa River, a left tributary of the Oder, and on the Czarna Woda, a left tributary of the Kaczawa. Moreover, samples from an industrial wastewater treatment plant were collected. Samples from the Czarna Woda stream and the Kaczawa River contained even higher concentrations of diglyme, triglyme, and tetraglyme, reaching 5.18 µg L−1, 12.87 µg L−1 and 80.81 µg L−1, respectively. Finally, three water samples from a wastewater treatment plant receiving influents from a copper smelter were analyzed. Diglyme, triglyme, and tetraglyme were present in the wastewater at average concentrations of 569 µg L−1, 4,300 µg L−1, and 65,900 µg L−1, respectively. Further research identified the source of the glymes in the wastewater: the Solinox gas desulfurization process implemented in the nearby copper smelter uses glymes as a physical absorption medium for sulfur dioxide.
The results of this doctoral research provide important information about the occurrence, distribution, and behavior of the hydrophilic ethers 1,4-dioxane, monoglyme, diglyme, triglyme, and tetraglyme in the aquatic environment. A method capable of analyzing a wide range of ether compounds, from the volatile ETBE to the high-molecular-weight tetraglyme, was validated. 1,4-Dioxane and tetraglyme were found to be applicable as organic tracers, since they are not easily attenuated during bank filtration and the anoxic groundwater passage. The extent of water pollution with 1,4-dioxane was demonstrated in wastewater, surface water, groundwater, and drinking water. One source of extremely high concentrations of 1,4-dioxane in a municipal sewage treatment plant applying post-anoxic denitrification was identified; however, more information is needed on the entry of 1,4-dioxane into surface waters. Moreover, 1,4-dioxane was present in drinking water samples from river bank filtration, which demonstrates its persistence in the aquatic environment and its low degradation potential during bank filtration and subsequent water treatment. Furthermore, this was the first study to focus primarily on identifying sources of glymes in surface waters. Glymes are widely used across industrial sectors, so establishing their origin in surface water is difficult (as with 1,4-dioxane). In this work, a gas desulfurization process was identified as the dominant source of glyme pollution in the Oder River.
In Part A of this work, the development of an instrument, a standardized written intensive interview for measuring attitudes toward law and statute, is preceded by theoretical considerations. The concept of "law", central and contested in jurisprudence, is taken up only in its current and general meaning, a general meaning as grasped by laypersons. The concept of "law" is further narrowed to an understanding of norms from the perspective of criminal law.
Everyday situations from different areas of criminal law (cases) are to form the "items" on which adolescents and young adults from different strata of the population and with different levels of education write down their views. Concepts such as law, norm, morality, attitude, opinion, stereotype and prejudice, as well as considerations of understanding law and statute as a "value", are taken up and discussed in connection with the development of a new research method.
Five hypotheses are formulated concerning attitudes toward law and statute and the value orientation of individuals.
A wide range of considerations concerns the development of the items (the cases) of the standardized written intensive interview and of the graded answers to be presented to the respondents. The graded answers given for each item are intended to reflect norm orientation, that is, attitudes toward law and statute, to varying degrees. The content of at least one graded answer per item corresponds to the norm, and the content of one answer is clearly not norm-oriented. The additionally formulated alternative graded answers between an answer with clear norm orientation and an answer lacking norm orientation are "more or less norm-oriented"; they take aspects of norm orientation into account. These are presented and discussed. Particular attention is paid to the point scoring of the drafted graded answers. These points ultimately form the individual's attitude score, the "measured value", which provides information about the individual's norm orientation, his or her attitude toward criminal law and statute.
It is determined in advance which "sum scores" represent a "positive" attitude toward law and statute, which represent only a "neutral" one, and which represent a "negative" one.
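The scoring logic described above, summing the point values of the graded answers a respondent chose and mapping the sum to a predefined attitude category, can be sketched as follows. The point values and the two cut-off thresholds here are illustrative assumptions only; the thesis fixes its own values in advance.

```python
# Minimal sketch of the sum-score classification described above.
# Per-item point values and the category thresholds are hypothetical;
# the instrument defines its own cut-offs before data collection.

def attitude_score(answers):
    """Sum the point values of the graded answers a respondent chose."""
    return sum(answers)

def classify(score, neutral_min=20, positive_min=30):
    """Map a sum score to an attitude category (thresholds assumed)."""
    if score >= positive_min:
        return "positive"
    if score >= neutral_min:
        return "neutral"
    return "negative"

# Example: one respondent's point values for ten items (0-4 points each)
answers = [4, 3, 4, 2, 3, 4, 3, 2, 4, 3]
score = attitude_score(answers)
print(score, classify(score))  # 32 positive
```

Fixing the thresholds before any data are collected, as the thesis does, prevents the category boundaries from being fitted post hoc to the observed score distribution.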
Preliminary studies on the comprehension of the items (the cases), a review of the item wording, and studies on the comprehension of the test instructions and on the administration of the procedure, based on surveys of about 100 adolescents and adults from different social strata and with different levels of education, finally led in 2006 to the final version of the standardized written intensive interview. It is presented together with the additional questions designed to gather information about the study participants.
Part B of the thesis describes and discusses the first empirical surveys with the intensive interview of 13 convenience samples, conducted between 2006 and 2010 with 100 adolescents and young adults: students of a university or college, ninth-grade pupils of a Hauptschule, and eleventh-grade pupils of two Gymnasien.
The individual samples are characterized by their personal attributes.
In each case, it is examined whether the results obtained tend to support the formulated hypotheses or whether the hypotheses cannot be substantiated by the collected data.
Validation efforts for the procedure in samples 1-5 also concern the answering of individual items. They examine which items the respondents answer relatively similarly and on which items the test persons take differing positions.
For the 13 convenience samples, the following questions are asked: Can the new procedure describe differences in attitudes toward law and statute between respondents? Do female adolescents, late adolescents, and young adults have statistically significantly more positive attitudes toward law and statute than males? Can statistically significant gender-specific differences be demonstrated in reported conflicts with the law? Are there statistically significant differences in attitudes toward law and statute between ninth-grade Hauptschule pupils, eleventh-grade Gymnasium pupils, and a homogeneous sample of university or college students?
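A group comparison of the kind asked above, e.g. female versus male sum scores, is commonly tested with a two-sample t statistic. The sketch below computes Welch's t statistic, which does not assume equal group variances; the scores and group sizes are invented for illustration and do not reproduce the thesis's data or its actual test choice.

```python
import statistics
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    return (m1 - m2) / sqrt(v1 / len(a) + v2 / len(b))

# Hypothetical attitude sum scores for two groups (invented values)
female = [32, 30, 34, 29, 33, 31]
male = [27, 29, 25, 30, 26, 28]
t = welch_t(female, male)
print(round(t, 2))
```

A large positive t would indicate that the first group's mean score exceeds the second's by more than sampling noise alone suggests; judging significance additionally requires comparing t against the t distribution with the appropriate degrees of freedom.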
In addition, the religious and political orientation of the participants is recorded. These orientations are examined and compared in their relationship to attitudes toward law and statute.
Regarding the participants' value attitudes, a procedure designed for this purpose examines what position the value domain "freedom, legal certainty, equality before the law" receives in comparison to nine other value domains across the three status groups.