Mantle convection is the process by which heat from the Earth’s core is transferred upwards to the surface, and it is widely accepted as the mechanism driving the dynamics of the Earth’s interior. On geological time-scales, mantle material flows like a viscous fluid as a consequence of the buoyancy forces arising from thermal expansion. Indeed, mantle convection provides a framework that links together major disciplines such as seismology, mineral physics, geochemistry, tectonics and geology. Numerical models have been applied to understand the dynamics, structure and evolution of the Earth and other terrestrial planets, and investigations continue to explore different aspects of mantle convection.
Two complementary approaches are possible to model this phenomenon. On the one hand, one can solve the equations of thermal convection self-consistently, including parameters and physical relationships derived from mineral physics. Our understanding of mantle convection depends ultimately upon the success of such fully self-consistent dynamic models in explaining observable features of the flow. Although these models are presently unable to predict the actual convection pattern of the Earth, they are extremely useful for investigating general characteristics of given physical systems. On the other hand, to permit comparison with specific observables associated with the flow, one can consider a more restricted problem. Instead of focusing on the time evolution of mantle flow, if we know a priori the temperature (and hence presumably the density) anomalies that drive the convection, we can try to build a snapshot of the present-day flow pattern, consistent with those anomalies, that successfully predicts the observables. The aim of this study is to investigate both approaches against the main geophysical constraints on mantle structure: the geoid anomalies, the dynamic surface and core-mantle boundary topography, and tectonic plate motions.
The most appropriate mathematical basis functions for describing a bounded and continuous function on a spherical surface are spherical harmonics. We may therefore expand the geodynamic observables in terms of spherical harmonics. We have investigated two methods of global spherical harmonic analysis, with specific attention to the dynamic geoid computation of the geodynamic models. The first is the quadrature method, whose major drawback is the loss of orthogonality of the Legendre functions in the transition from the continuous to the discrete case. In particular, we showed that in the absence of the tesseral harmonics, the quadrature formulation yields inaccurate results. The second is the least-squares method, which can be considered the best linear unbiased estimator and provides exact results. We showed that even with low-resolution grid data it is possible to reconstruct the data and achieve an accurate result using this method, which is extremely valuable in three-dimensional global convection studies. However, special care has to be taken, since there are sources of error that might influence the efficiency of this method.
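The least-squares idea described above can be illustrated with a small sketch (illustrative only: the grid sizes, the truncation degree and the synthetic single-harmonic field are assumptions, not the thesis setup). Spherical harmonic coefficients of a band-limited field are recovered from a coarse grid by solving an ordinary linear least-squares problem.

```python
import numpy as np
from scipy.special import sph_harm

lmax = 4
nlat, nlon = 10, 20  # deliberately coarse grid
theta = np.linspace(0.05, np.pi - 0.05, nlat)      # colatitude
phi = np.linspace(0, 2 * np.pi, nlon, endpoint=False)  # longitude
P, T = np.meshgrid(phi, theta)

def design_matrix(theta_g, phi_g, lmax):
    """Columns are the complex spherical harmonics Y_lm on the grid points."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuth, colatitude)
            cols.append(sph_harm(m, l, phi_g.ravel(), theta_g.ravel()))
    return np.column_stack(cols)

A = design_matrix(T, P, lmax)

# Synthetic "observed" field: a single (l=3, m=2) harmonic with amplitude 1.5.
c_true = np.zeros(A.shape[1], dtype=complex)
idx = sum(2 * l + 1 for l in range(3)) + (2 + 3)   # index of (l=3, m=2)
c_true[idx] = 1.5
d = A @ c_true

# Least-squares estimate of all coefficients up to lmax.
c_est, *_ = np.linalg.lstsq(A, d, rcond=None)
print(abs(c_est[idx]))
```

Because the synthetic data lie exactly in the column space of the design matrix, the least-squares solution reproduces the coefficients despite the coarse sampling, which is the property the abstract highlights.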
In general, to better understand the properties of the mantle, it is useful to assess observable characteristics of plumes, including geoid, topography and heat flow anomalies. However, only a few studies exist on geoid and topography for axi-symmetric convection, and their models were restricted to an isoviscous (or stratified) mantle and low Rayleigh numbers. We studied a fully coupled depth- and temperature-dependent Arrhenius type of viscosity in axi-symmetric spherical shell geometry in order to investigate the shape of geoid anomalies and dynamic topography above a plume. Indeed, the topography and geoid anomalies produced by plumes are sensitive to the rheology of the mantle and of the plume; both affect the shape and amplitude of the geoid anomalies. As a result, we are able to define different classes of plumes by their geoid signals.
Models with mainly depth-dependent viscosity show a geoid with negative sign above the plume, which can turn positive when the viscosity contrast is decreased. This can be considered a transition between the strongly depth-dependent and the constant-viscosity case. Our results basically support the idea of Morgan [1965] and McKenzie [1977], who showed that the magnitude and even the sign of the total gravity anomaly depend on the spatial variation in effective viscosity. In addition, Hager [1984] concluded that the total gravity field depends on the radial distribution of effective viscosity, and that a small change in viscosity contrast can change the sign of the response function.
In the case of temperature-dependent viscosity, the formation of an immobile lithosphere is a natural result, and the flow as well as the total geoid becomes strongly time dependent. When we increase the activation energy, all geoids associated with the first-arriving plumes are bell-shaped, whereas for typical plumes, after reaching a statistical steady state, bell-shaped geoids with decreasing amplitude as well as linear flank-shaped geoids are observed. It is surprising that, in spite of large differences in laterally and depth-varying viscosities, the shapes of the geoid anomalies remain rather similar. We also identified different behaviors in the combined model with temperature- and pressure-dependent viscosity. In fact, in spite of the strongly different rheology, the geoid anomalies in all cases were surprisingly similar. Furthermore, we propose a scaling law for the geoid, which makes our results directly applicable to other planets. Moreover, we can apply the results of our calculations to find relations between different rheologies and sub-lid temperature, since we know that the mantle temperature can change significantly with variations in pressure- and temperature-dependent viscosity. It is also possible to define a range of stagnant-lid thickness related to the amplitude of the geoid, which can be useful for studying the lid thickness of Venus or Mars.
Nevertheless, in this series of models, we simplified a number of complexities within the Earth. One of the most important of these simplifications is the Boussinesq approximation. This approximation is valid if the temperature scale height (i.e., the depth over which temperature increases by a factor of e due to adiabatic compression) is much greater than the convection depth. However, the temperature scale height in the Earth’s mantle is at best only slightly greater than the mantle depth. Hence, the Boussinesq approximation could mask some very important stratification and compressibility effects that influence both the spatial and temporal structure of the convection. Therefore, in more advanced models we considered compressibility in our mantle convection models, assuming that density varies both radially and laterally, determined as a function of pressure and temperature through an appropriate equation of state. Moreover, the thermodynamic properties were assumed to be functions of depth.
We examined the details of the structure of the spherical axi-symmetric anelastic liquid approximation (ALA) model, with special attention to the Arrhenius rheology, and compared it to cases of compressible convection without depth-dependent thermodynamic properties and to cases of the extended Boussinesq approximation. At the same time, we studied the effects of the interaction between temperature- and pressure-dependent viscosity and thermodynamic parameters in compressible mantle convection on the geoid and topography. We showed that assuming compressible convection with depth-dependent thermodynamic properties strongly influences the geoid undulations. Using compressible convection with constant thermodynamic properties is physically inconsistent and may lead to spurious results for the geoid and the convection pattern. Indeed, through a systematic study of different treatments of compressibility in spherical shell convection for different Arrhenius viscosity laws, we showed that only in the unrealistic case of zero activation energy do the different compressibility modes result in comparable convection and geoid patterns. In all other rheological cases, large differences were obtained, stressing the important role of consistent compressible thermodynamic properties for mantle convection.
In addition, we examined the impact of compressibility as well as different rheologies on the power-law relation that connects the Nusselt number to the Rayleigh number. We found that the power-law index of the relationship is controlled by the rheology, independent of which approximation is used. The prefactor of this relation, by contrast, is controlled by a combination of the approximation and the rheology.
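The power-law index in such a Nu-Ra relation is typically estimated by linear regression in log-log space. The sketch below uses made-up numbers (the exponent 0.30 and prefactor 0.2 are illustrative assumptions, not results from the thesis):

```python
import numpy as np

# Simulated (Ra, Nu) pairs obeying Nu = a * Ra**beta exactly.
Ra = np.array([1e5, 3e5, 1e6, 3e6, 1e7])
a_true, beta_true = 0.2, 0.30          # assumed values for the demo
Nu = a_true * Ra**beta_true

# log Nu = log a + beta * log Ra  ->  ordinary least squares on the logs.
slope, intercept = np.polyfit(np.log(Ra), np.log(Nu), 1)
beta_est = slope
a_est = np.exp(intercept)
print(beta_est, a_est)
```

Fitting in log-log space makes the index the slope of a straight line, so changes in the index (rheology-controlled, per the abstract) and changes in the prefactor (approximation-dependent) separate cleanly.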
Next, instead of focusing on the time evolution of mantle flow, we carried out three-dimensional spherical shell models of mantle circulation to investigate the effects of joint radial and lateral viscosity variations on the Earth’s non-hydrostatic geoid and on surface and core-mantle boundary topographies. These models include realistic lateral viscosity variations (LVV) in the lithosphere, upper mantle and lower mantle, in combination with different stratified viscosity structures. We demonstrated that the contradictory results concerning the effects of LVV can be traced to the most straightforward problem in geoid modeling, namely the rather poorly known stratified viscosity structure. We explored three classes of dynamic geoid models with lateral viscosity variations. In the first class, the LVV strongly improve the fit to the observed geoid: when the viscosity contrast between lower and upper mantle is not large enough to produce a good fit, the LVV adjust the amplitudes so that the geoid becomes comparable with observations. In the second class, introducing the LVV moderately improves the fit: when the geoid induced by a stratified viscosity structure already correlates well with observations, the LVV further improve its amplitude. In the last class, if the viscosity contrast between upper and lower mantle is high enough, introducing LVV deteriorates the fit to the observed geoid. Depending on the stratified viscosity, introducing the LVV thus places a model in one of these categories.
We also quantified the effects of LVV in the mantle and in the lithosphere individually. We found that the presence of LVV in the mantle (upper and lower) improves the fit to the observed geoid regardless of the stratified viscosity, while LVV in the lithosphere is a crucial parameter that, depending on the stratified viscosity, may increase or decrease the geoid fit. In fact, when the lower mantle is considered viscous enough, it supports the negative buoyancy of subducting slabs; it thus transmits some of the stress back to the top boundary and causes a weak coupling between slab and surface. Therefore, by including low-viscosity plate boundaries in this model, the slabs and overriding plates decouple and the fit to the observed geoid degrades. In contrast, when the lower mantle viscosity is not sufficiently stiff, the presence of low-viscosity plate boundaries helps to weaken the strong mechanical coupling between slab and surface. Hence, a better fit is achieved.
In the absence of apparent mutations, alteration of gene expression patterns represents the key mechanism by which normal cells evolve into cancer cells.
Gene expression is tightly regulated by posttranscriptional processes. Within this context, RNA-binding proteins (RBPs) represent fundamental factors, since they control mechanisms such as mRNA stabilization, translation and degradation. Human antigen R (HuR) was among the first RBPs to be directly associated with carcinogenesis. HuR modulates the stability and translation of mRNAs encoding proteins that facilitate various ‘hallmarks of cancer’, namely proliferation, evasion of growth suppression, angiogenesis, cell death resistance, invasion and metastasis. Furthermore, it is well established that tumor-promoting inflammation contributes to tumorigenesis. In this process, monocytes are attracted to the site of the tumor and educated towards a tumor-promoting macrophage phenotype. While HuR has been extensively studied in various tumor cell types, little is known about HuR in hepatocellular carcinoma (HCC). Thus, the aim of my work was to characterize the contribution of HuR to the development of cancer characteristics in HCC. I was particularly interested in investigating whether HuR facilitates tumor-promoting inflammation, since a role for HuR has not been described in this context. To this end, I depleted HuR in HepG2 cells (HuR k/d) and used a co-culture model of HepG2 tumor spheroids and infiltrating monocytes to study the impact of HuR on the tumor microenvironment. I could show that depletion of HuR resulted in reduced cell numbers. Additionally, the expression of the proliferation marker Ki-67 and the proto-oncogene c-Myc was reduced, supporting a proliferative role of HuR. Furthermore, exposure to cytotoxic staurosporine elevated apoptosis in HuR k/d cells compared to control cells. Concomitantly, the expression of the anti-apoptotic mediator B-cell lymphoma protein 2 (Bcl-2) was markedly reduced in the HuR k/d cells, pointing to an involvement of HuR in cell survival processes.
Accordingly, a pro-survival function of HuR was also observed in tumor spheroids, since HuR k/d spheroids exhibited a larger necrotic core region at earlier time points and showed elevated numbers of dead cells compared to control (Ctr.) spheroids. Interestingly, HuR k/d spheroids displayed reduced numbers of infiltrated macrophages, suggesting that HuR contributes to a tumor-promoting, inflammatory microenvironment by recruiting monocytes/macrophages to the tumor site. Aiming at identifying HuR-regulated factors responsible for the recruitment of monocytes, I found reduced levels of the chemokine interleukin 8 (IL-8) in supernatants of HuR k/d spheroids, supporting a critical involvement of HuR in the chemoattraction of monocytes. Analyzing supernatants of co-cultures of macrophages and HuR k/d or Ctr. spheroids revealed additional differences in chemokine secretion patterns. Interestingly, protein levels of many chemokines were elevated in co-cultures of HuR k/d spheroids compared to control co-cultures. Although enhanced chemokine secretion was observed, fewer monocytes were recruited into HuR k/d spheroids, further underlining the necessity of HuR in cancer-related monocyte/macrophage attraction and infiltration. Differences between the chemokine profiles of mono- and co-cultured spheroids could be attributable to changes in spheroid-derived chemokines as a result of the crosstalk with the immune cells. Provided the chemokines originate from monocytes/macrophages, the different secretion patterns suggest that HuR contributes to the modulation of the functional phenotype of infiltrated macrophages, since the tumor microenvironment is critically involved in the shaping of macrophage phenotypes. Regions of low oxygen (hypoxia) represent another critical feature of tumors. Therefore, I next analyzed the impact of HuR on the hypoxic response.
Loss of HuR attenuated hypoxia-inducible factor (HIF) 2α expression after exposure to hypoxia, while HIF-1α protein levels remained unaltered. Considering previous results of our group, showing that HIF-2α depletion (HIF-2α k/d) resulted in the enhanced expression of HIF-1α protein, I aimed to determine the involvement of HuR in the compensatory upregulation of HIF-1α protein in HIF-2α k/d cells. I could demonstrate that not only total HuR protein levels, but specifically cytoplasmic HuR was elevated in HIF-2α depleted cells, pointing to enhanced HuR activity. Silencing HuR in HIF-2α deficient cells attenuated the enhanced HIF-1α protein expression, thus confirming a direct role of HuR in the compensatory upregulation of HIF-1α. This was also reflected in HIF-1α target gene expression. I further investigated the mechanism underlying the compensatory HIF-1α expression in HIF-2α deficient cells. Analyzing HIF-1α mRNA expression, I excluded enhanced HIF-1α transcription and stability as accounting for the elevated HIF-1α expression in HIF-2α k/d cells. HIF-1α promoter activity assays confirmed the mRNA data. Furthermore, HIF-1α protein half-life was not elevated in HIF-2α k/d cells compared to control cells, indicating that HIF-1α protein stability is not altered in HIF-2α k/d cells. Analysis of the association of HIF-1α with the translational machinery using polysomal fractionation finally revealed an increased distribution of HIF-1α mRNA in the heavier polysomal fractions in HIF-2α k/d cells compared to control cells. Since augmented ribosome occupancy is an indicator of more efficient translation, I propose enhanced HIF-1α translation as the underlying principle of the compensatory increase in HIF-1α protein levels in HIF-2α k/d cells. In summary, my results demonstrate that HuR is critical for the development of cancer characteristics in HCC.
Future work analyzing the impact of HuR on tumor-promoting inflammation, specifically macrophage attraction and activation, could provide new strategies to inhibit macrophage-driven tumor progression. Furthermore, I provide evidence that HuR contributes to the hypoxic response by regulating the expression of HIF-1α and HIF-2α. Targeting single HIF isoforms for tumor therapy should be carefully considered because of their compensatory regulation when one α-subunit is depleted. Thus, therapeutic strategies targeting factors such as HuR, which control both α-subunits and at the same time prevent compensation, might be more promising.
The spider genus Eusparassus Simon, 1903 (Araneae: Sparassidae: Eusparassinae; stone huntsman spiders) is revised worldwide to include 30 valid species distributed exclusively in Africa and Eurasia. The type species E. dufouri Simon, 1932 is redescribed and a neotype is designated from Portugal. An extended diagnosis for the genus is presented. Eight new species are described: Eusparassus arabicus Moradmand, 2013 (male, female) from the Arabian Peninsula, E. educatus Moradmand, 2013 (male, female) from Namibia, E. reverentia Moradmand, 2013 (male, female) from Burkina Faso and Nigeria, E. jaegeri Moradmand, 2013 (male, female) from South Africa and Botswana, E. jocquei Moradmand, 2013 (male, female) from Zimbabwe, E. borakalalo Moradmand, 2013 (female) from South Africa, E. schoemanae Moradmand, 2013 (male, female) from South Africa and Namibia, and E. mesopotamicus Moradmand and Jäger, 2012 (male, female) from Iraq, Iran and Turkey. 22 species are redescribed; six of them are transferred from the genus Olios Walckenaer, 1837. Six species-groups are proposed: the dufouri-group [8 species: E. dufouri, E. levantinus Urones, 2006, E. barbarus (Lucas, 1846), E. atlanticus Simon, 1909, E. syrticus Simon, 1909, E. oraniensis (Lucas, 1846), E. letourneuxi (Simon, 1874), E. fritschi (Koch, 1873); Iberian Peninsula to parts of north-western Africa], the walckenaeri-group [3 species: E. walckenaeri (Audouin, 1826), E. laevatus (Simon, 1897), E. arabicus; eastern Mediterranean to Arabia and parts of north-eastern Africa], the doriae-group [7 species: E. doriae (Simon, 1874), E. kronebergi Denis, 1958, E. maynardi (Pocock, 1901), E. potanini (Simon, 1895), E. fuscimanus Denis, 1958, E. oculatus (Kroneberg, 1846) and E. mesopotamicus; Middle East to Central and South Asia], the vestigator-group [3 species: E. vestigator (Simon, 1897), E. reverentia, E. pearsoni (Pocock, 1901); central to eastern Africa and an isolated area in NW India], the jaegeri-group [4 species: E. jaegeri, E. jocquei, E. borakalalo, E. schoemanae; southern and south-eastern Africa], and the tuckeri-group [2 species: E. tuckeri (Lawrence, 1927), E. educatus; south-western Africa]. Two species, E. pontii Caporiacco, 1935 and E. xerxes (Pocock, 1901), cannot be placed in any of the above groups. Two species are transferred from Eusparassus to Olios: O. flavovittatus (Caporiacco, 1935) and O. quesitio Moradmand, 2013. 14 species are recognized as misplaced in Eusparassus; thus nearly half of the species described prior to this revision were placed mistakenly in this genus. Neotypes are designated for E. walckenaeri from Egypt and for E. barbarus, E. oraniensis and E. letourneuxi (all three from Algeria) to establish their identity. The male and female of Cercetius perezi Simon, 1902, previously known only from the immature holotype, are described for the first time. The monotypic and little-used generic name Cercetius Simon, 1902 is recognized as a synonym of the widely used name Eusparassus. Case 3596 (proposing conservation of the name Eusparassus) is under consideration by the ICZN.
The first comprehensive molecular phylogeny of the family Sparassidae, with focus on the genus Eusparassus, is investigated using four molecular markers (mitochondrial COI and 16S; nuclear H3 and 28S). The monophyly of Eusparassus and of the dufouri, walckenaeri and doriae species-groups is recovered, with the latter two groups more closely related. The monophyly of the tuckeri-group is not supported, and the position of E. jaegeri, the only available member of the jaegeri-group, is not resolved within the Eusparassus clade. DNA samples of the vestigator-group were not accessible for this study. Molecular clock analyses estimate the origin of the genus Eusparassus at around 70 million years ago (Ma). Combining this result with biogeographic and geological data, the Namib Desert is proposed as the place of ancestral origin of Eusparassus and the putative Eusparassinae genera.
Further analyses address the phylogenetic relationships of Sparassidae and its subfamilies. The Eusparassinae are not confirmed as monophyletic: the two original genera Eusparassus and Pseudomicrommata fall in separate clades, and only the latter clusters with most of the other assumed Eusparassinae, here termed the "African clade". Monophyly of the subfamilies Sparianthinae, Heteropodinae sensu stricto, Palystinae and Deleninae is recovered. The Sparianthinae are supported as the most basal clade, diverging considerably early (143 Ma) from all other Sparassidae. The Sparassinae and the genus Olios are found to be polyphyletic. The Sparassidae are confirmed as monophyletic and as the most basal group within the RTA-clade. The divergence time of Sparassidae from the rest of the RTA-clade is estimated at 186 Ma, in the Jurassic. No affiliation of Sparassidae with other members of the "Laterigradae" (Philodromidae, Selenopidae and Thomisidae) is observed; thus the crab-like posture of this group is proposed to be a result of convergent evolution. Only the families Philodromidae and Selenopidae are found to be members of a supported clade. With a considerable number of RTA-clade representatives included, the higher-level clade Dionycha is not supported as monophyletic, but monophyly of the RTA-clade itself is.
A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed with regard to special firing patterns such as the presence or absence of oscillatory activity and clusters (so-called bursts). These bursts do not have a clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their change under certain experimental conditions such as pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection criteria without giving reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed, termed 'Gaussian Locking to a free Oscillator' (GLO). The model was developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. As a first stage, the GLO model uses an unobservable oscillatory background rhythm, represented by a stationary random walk whose increments are normally distributed. Two model types describe single-spike firing or clusters of spikes: the random number of spikes per beat follows a Bernoulli distribution in the single-spike case and a Poisson distribution in the cluster case. In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times represent the observed point process, which has five easily interpretable parameters describing the regularity and burstiness of the firing patterns.
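A minimal generative sketch of the two-stage construction described above may help make it concrete. Parameter names and values here are illustrative assumptions; the actual GLO parametrization is defined in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glo(T, mu_beat, sigma_beat, sigma_spike, mode="burst",
                 p=0.8, mu_spikes=3.0):
    """Sketch of the GLO generative model.

    Stage 1: hidden beats form a random walk with normally distributed
             increments (mean inter-beat interval mu_beat, s.d. sigma_beat).
    Stage 2: each beat emits a random number of spikes, Bernoulli(p) in the
             single-spike mode or Poisson(mu_spikes) in the burst mode, and
             each spike is jittered around its beat by N(0, sigma_spike**2).
    """
    beats = []
    t = rng.normal(mu_beat, sigma_beat)
    while t < T:
        beats.append(t)
        t += rng.normal(mu_beat, sigma_beat)
    spikes = []
    for b in beats:
        n = rng.poisson(mu_spikes) if mode == "burst" else rng.binomial(1, p)
        spikes.extend(rng.normal(b, sigma_spike, size=n))
    return np.sort(np.array(spikes))

# Roughly 100 beats over 10 s (in ms), ~3 spikes per beat in burst mode.
train = simulate_glo(T=10_000.0, mu_beat=100.0, sigma_beat=10.0,
                     sigma_spike=5.0, mode="burst", mu_spikes=3.0)
print(len(train) / 10_000.0)  # mean rate, roughly mu_spikes / mu_beat
```

The five interpretable quantities the abstract mentions map naturally onto such a parametrization: mean and variability of the beat interval, the spike jitter around a beat, and the spikes-per-beat law.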
It turns out that the point process is stationary, simple and ergodic. It can be characterized as a cluster process and, in the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process is derived, which is also called the autocorrelation function (ACF) in the neuroscience literature. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least-squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
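The fitting step itself is ordinary nonlinear least squares. The sketch below fits a damped-cosine curve, a common parametric shape for oscillatory autocorrelograms, to a simulated histogram; the functional form and all numbers are illustrative assumptions, not the closed-form GLO ACF from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def acf_model(t, rate, amp, freq, decay):
    # baseline rate plus an exponentially damped oscillation
    return rate + amp * np.cos(2 * np.pi * freq * t) * np.exp(-t / decay)

t = np.linspace(0, 500, 251)                  # lag axis (ms)
true = (0.03, 0.02, 0.004, 150.0)             # rate, amplitude, freq (1/ms), decay (ms)
rng = np.random.default_rng(1)
ach = acf_model(t, *true) + rng.normal(0, 5e-4, t.size)   # noisy simulated "ACH"

# Nonlinear least squares from a rough initial guess.
popt, pcov = curve_fit(acf_model, t, ach, p0=(0.025, 0.015, 0.0045, 120.0))
print(popt)
```

The diagonal of `pcov` gives variance estimates for the fitted parameters, which is one simple route to the confidence statements the abstract mentions (the thesis uses Monte Carlo and bootstrap methods for this).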
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement the usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
The topic of this work was the investigation of the natural variations of the two primordial uranium isotopes (238U and 235U), with a focus on samples that (1) represent the continental crust and its weathering products (i.e., granites, shales and river water), (2) reflect products of hydrothermal alteration at mid-ocean ridges (i.e., altered basalts, carbonate veins and hydrothermal water), and (3) originate from restricted euxinic basins (i.e., samples from the water column and the associated sediments). The overall aim was to improve our understanding of the conditions and mechanisms under which fractionation of the two most abundant uranium isotopes (238U and 235U) occurs in nature.
The major and minor rivers investigated differ both in their uranium concentration (c(U)) and in their uranium isotope composition (δ238U): the minor rivers show lower uranium concentrations (0.87 nmol/kg to 3.08 nmol/kg) and a heavier uranium isotope composition (-0.29 ‰ to +0.01 ‰ in δ238U) compared to the major rivers (c(U) = 5.19 nmol/kg to 11.69 nmol/kg and δ238U = -0.31 ‰ to +0.13 ‰). The rock samples investigated all fall into a rather narrow range of δ238U, between -0.45 ‰ and -0.21 ‰, with an average value of -0.30 ‰ ± 0.04 ‰ (two standard deviations). Their uranium isotope variations are independent of uranium concentration (11.8 µg/g to 1.3 µg/g), age (3.80 Ga to 328 Ma), sample locality and degree of differentiation. Based on the results for the major rivers, which represent the main source of uranium to the ocean, we propose a new value of δ238U = -0.23 ‰ as the best estimate for the uranium source to the ocean in future uranium mass-balance calculations.
The products of hydrothermal alteration, altered basalts and calcium carbonate veins, showed somewhat stronger isotope variations (δ238U between -0.63 ‰ and +0.27 ‰) than expected, and the hydrothermal fluids exhibited a slightly lighter uranium isotope composition than seawater ((-0.43 ± 0.25) ‰ vs. (-0.37 ± 0.03) ‰). These results are consistent with a model that assumes the observed isotope fractionation to be mainly a result of redox processes, i.e., (1) the partial reduction of dissolved U(VI) from seawater during hydrothermal alteration, which leads to an enrichment of the heavy uranium isotope in the reduced uranium species (U(IV)), and (2) the preferential removal of U(IV) from the hydrothermal fluid and its incorporation into the altered oceanic crust. Through this process the hydrothermal fluid becomes depleted in the heavy uranium isotope, and the altered basalts and carbonates would accordingly also show a low δ238U if they had been in contact with the isotopically light hydrothermal fluid.
The investigation of the uranium and Mo isotope compositions of water and sediment samples from the Baltic Sea and the anoxic Kyllaren fjord (Norway) showed that the uranium isotope composition of the sediments depends on (1) the extent of uranium removal from the water column (in a similar manner as for the molybdenum isotopes) and (2) the sedimentation rate, i.e., the fraction of authigenic relative to detrital uranium in the sediments. Owing to the high sedimentation rate, the sediments from the Kyllaren fjord show only a moderate authigenic uranium enrichment and a lighter uranium isotope composition than sediments from the Black Sea. In the anoxic basins of the Baltic Sea, in contrast, a strong Mo and weak U isotope fractionation occurs between water and sediment. The regularly occurring flushing events with oxygen-rich water have presumably altered the original anoxic Mo and U isotope signatures of the sediments. Consequently, the sediments must be exposed to continuously anoxic conditions in order to record a Mo and U isotope signature of the redox conditions during deposition.
The comparison between molybdenum and uranium isotopes in the Baltic Sea and the anoxic Kyllaren fjord showed that uranium and molybdenum isotopes behave in opposite ways in strongly euxinic water columns (c(H2S) > 11 µmol/L). Accordingly, the two isotope systems complement each other and can be used to investigate the depositional conditions in restricted basins and the redox evolution of the paleo-ocean.
Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and comprise a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieving failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The utilization of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version.
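The core idea of replacing the table lookup with polynomial arithmetic can be illustrated with a minimal sketch. This is not the thesis's SIMD implementation; it merely shows, in plain Python, what "polynomial" multiplication in a finite field means: multiply the operands as binary polynomials (additions become XOR, no carries) and reduce modulo an irreducible polynomial, here the common 0x11B for GF(2^8).

```python
def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Multiply two elements of GF(2^8) without lookup tables.

    The operands are treated as polynomials over GF(2); addition is XOR,
    and the product is reduced modulo the irreducible polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B).
    """
    result = 0
    while b:
        if b & 1:            # add (XOR) a shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # reduce as soon as the degree reaches 8
            a ^= poly
    return result

# Sanity checks: 1 is the multiplicative identity, and x * x^7 wraps
# around to x^4 + x^3 + x + 1 (0x1B).
assert gf256_mul(1, 0x53) == 0x53
assert gf256_mul(2, 0x80) == 0x1B
```

Because the loop uses only shifts and XORs, it vectorizes naturally, which is why this formulation maps well onto wide SIMD units, whereas a 256-entry multiplication table puts pressure on the cache hierarchy.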
This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, and it also displays significant benefits when used with modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtaining the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
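How a systematic generator matrix yields erasure correction can be sketched with a toy code over GF(2^8). This is an illustrative Reed-Solomon-style example, not the thesis's optimized scheme: the field polynomial (0x11B), the Vandermonde parity rows, and the parameters (k = 3 data blocks, m = 2 parity blocks) are assumptions chosen for brevity.

```python
P = 0x11B  # irreducible polynomial x^8 + x^4 + x^3 + x + 1

def gmul(a, b):
    """Table-free polynomial multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= P
    return r

def gpow(a, e):
    r = 1
    for _ in range(e):
        r = gmul(r, a)
    return r

def ginv(a):
    return gpow(a, 254)  # a^(2^8 - 2) is the multiplicative inverse

def matvec(M, v):
    out = []
    for row in M:
        acc = 0
        for mij, vj in zip(row, v):
            acc ^= gmul(mij, vj)  # addition in GF(2^8) is XOR
        out.append(acc)
    return out

def solve(M, y):
    """Solve M x = y over GF(2^8) by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [yi] for row, yi in zip(M, y)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c])
        A[c], A[p] = A[p], A[c]
        inv = ginv(A[c][c])
        A[c] = [gmul(inv, x) for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [x ^ gmul(f, pc) for x, pc in zip(A[r], A[c])]
    return [A[i][n] for i in range(n)]

k, m = 3, 2
identity = [[1 if i == j else 0 for j in range(k)] for i in range(k)]
parity = [[gpow(a, j) for j in range(k)] for a in (1, 2)]
G = identity + parity  # systematic: data blocks pass through unchanged

data = [0x12, 0x34, 0x56]
stored = matvec(G, data)     # 3 data symbols followed by 2 parity symbols
assert stored[:k] == data    # systematic property

# Erase two symbols; any k surviving rows of G remain invertible here,
# so Gaussian elimination over GF(2^8) recovers the original data.
lost = {0, 3}
alive = [i for i in range(k + m) if i not in lost][:k]
assert solve([G[i] for i in alive], [stored[i] for i in alive]) == data
```

In the thesis's setting the interesting question is which generator matrices make the `gmul` calls cheap on SIMD hardware; the toy above only shows why a systematic generator matrix tolerates up to m erasures.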
This dissertation presents the systematic inclusion of gauge corrections in the theory of thermal leptogenesis, which provides an explanation for the origin of matter in our universe.
Starting from the widely accepted Big Bang model, matter and antimatter should have been created in equal parts. Due to annihilation processes, all matter should therefore have been radiated away, leaving an empty universe behind. Since this is not the case, the question arises how the imbalance between matter and antimatter could have developed. The value of the asymmetry can be determined very precisely by experiments. For a systematic theoretical description of this problem, A. Sakharov formulated three conditions: (1) violation of baryon number, (2) violation of invariance under charge conjugation C and under the combination of charge conjugation and parity CP, and (3) a departure from thermal equilibrium.
Since the Big Bang model and the Standard Model of particle physics are unable to describe this asymmetry, this dissertation is concerned with the theory of thermal leptogenesis, which assumes a lepton asymmetry instead of a primordial baryon asymmetry. At a later time, this asymmetry is converted into a baryon asymmetry by sphaleron processes, which violate baryon number. To this end, new particles are added to the Standard Model: heavy Majorana neutrinos. These decay out of thermal equilibrium, in a CP-violating manner, into the familiar Standard Model leptons and Higgs particles.
In this work, a hierarchical ordering of the three heavy neutrino masses is considered. As a consequence, two of the three Majorana neutrinos can be integrated out and an effective theory can be formulated. This model, known as vanilla leptogenesis, is used in the following.
The dissertation is structured as follows. The introductory considerations are the subject of Chapters 1 and 2, where other models for solving the problem of the baryon asymmetry are also briefly presented. Thermal leptogenesis is introduced, and the see-saw mechanism and the CP asymmetry are described in more detail. At the end of the chapter, the classical approach to leptogenesis via Boltzmann equations is presented.
In Chapter 3, the foundations of non-equilibrium quantum field theory are introduced. The most important definitions for the case of thermal equilibrium are given, followed by the generalization to non-equilibrium states. The equations of motion, the so-called Kadanoff-Baym equations, are then solved both for scalar particles and for fermions.
Chapter 4 establishes the necessity of including gauge corrections in the context of thermal leptogenesis. By defining a lepton number matrix, the asymmetry can be rewritten in terms of the Kadanoff-Baym equation for leptons. Since the comparison of Boltzmann and Kadanoff-Baym equations in the last part of this chapter reveals differences in the time behavior, thermal Standard Model widths of the Higgs field and the leptons are introduced by hand in the Kadanoff-Baym approach. With this naive extension, the lepton number matrix shows the same behavior, locally in time, as the solution of the Boltzmann equation. A systematic inclusion of Standard Model corrections to thermal leptogenesis is therefore indispensable, which is why, in this dissertation, gauge corrections to the diagrams contributing to the asymmetry are taken into account from first principles.
The four scale regimes relevant to this work require two resummation schemes, Hard Thermal Loop (HTL) and Collinear Thermal Loop (CTL), which are presented in Chapter 5. This finally leads to two differential equations for the calculation of the thermal production rate of the Majorana neutrino, which are evaluated numerically in Chapter 6.
In Chapter 7, a naive calculation of all gauge-corrected three-loop diagrams belonging to the two diagrams responsible for the asymmetry is carried out first. Since a simple calculation of the three-loop diagrams is not sufficient, a new, cylindrical diagram is introduced at this point, which contains all important contributions, in particular the HTL- and CTL-resummed ones. At the end of the chapter, the first closed expression for the gauge-corrected lepton number matrix at leading order in all couplings is given.
Finally, Chapter 8 gives a brief summary and an outlook. This dissertation presents, for the first time, a systematic approach to including all gauge interactions in the theory of thermal leptogenesis, and a closed expression for the gauge-corrected lepton asymmetry is given.
The aim of this work was to evaluate the response of biological tissue samples to sparsely and densely ionizing radiation. To this end, the tissue samples were exposed to conventional X-rays as well as to a spread-out 12C-ion Bragg peak. For the irradiation of the biological samples with 12C, a depth-dose profile of a spread-out Bragg peak was generated with GSI's in-house simulation program TRiP98. A further aim of this work was to reproduce this depth-dose profile with three other simulation programs (ATIMA, MCHIT, TRIM) and to compare the results.
ATIMA and TRIM are general programs for the energy loss of ions in matter. They can reproduce the depth-dose profile calculated by TRiP98 only insufficiently: because they do not model fragmentation, they calculate a linearly increasing depth-dose profile. The Monte Carlo program MCHIT, which was developed specifically for the interaction of ions with matter in medical applications, shows the best agreement with the TRiP98 reference curve. Apart from a slightly higher average dose of about 0.1 Gy, the depth-dose profile could be reproduced almost exactly.
The biological samples consisted of slice cultures of healthy mouse livers and explant cultures of healthy mouse pancreata, in order to assess side effects of ionizing radiation. In addition, the response to 12C irradiation was determined in neoplastic liver tissue of transgenic c-myc/TGF-α mice with inducible liver tumors. To investigate a possible time-of-day dependence of the tissue response to irradiation, the slice and explant cultures were prepared at two different times of day: at the middle of the subjective day and at the middle of the subjective night.
The preparations were cultured for several days on a membrane at a liquid-air interface. Liver and pancreas cultures of healthy C3H wild-type mice were irradiated with a dose of 2 Gy, 5 Gy or 10 Gy of X-rays. Liver and pancreas cultures of transgenic mice were irradiated with spread-out C-ion Bragg peaks of the same doses. Unirradiated samples served as controls. All samples were fixed 1 h or 24 h after irradiation and examined immunohistochemically for markers of proliferation (Ki67), apoptosis (Caspase3) and DNA double-strand breaks (γH2AX).
While the pancreas preparations unfortunately yielded no evaluable results with respect to the parameters examined, the same parameters in healthy liver tissue showed clear day-night differences: the proliferation rate was significantly higher at the middle of the subjective day than at the middle of the subjective night. Conversely, the rates of DNA double-strand breaks were significantly elevated at the middle of the subjective night. These day-night differences could not be detected in neoplastic liver tissue. Regardless of type and dose, irradiation had no influence on the parameters examined in healthy liver tissue. In neoplastic liver tissue, by contrast, irradiation increased the rate of DNA double-strand breaks in a dose-dependent manner.
The effects of ionizing radiation on the circadian clockwork were examined in tissue samples of transgenic Per2luc mice. Per2luc mice express the enzyme luciferase under the control of the promoter of Per2, an important component of the circadian clockwork. The analysis of these animals therefore allows the circadian rhythm of the molecular clockwork in the liver and other tissues to be recorded in real time by measuring luciferase activity. As could be shown in liver and adrenal gland cultures of these animals, ionizing radiation led to a dose-dependent phase advance of the circadian clockwork.
The results allow the conclusion that ionizing radiation shifts the circadian clockwork but has hardly any influence on proliferation and apoptosis in healthy liver tissue.
In our daily life, we carry out many tasks, such as typing, playing tennis, and playing the piano, without even noticing that sequence learning is involved. No matter how simple or complex they are, these tasks require the sequential planning and execution of a series of movements. As an ability of primary importance in one's life, and one that everyone manages to learn, action-sequence learning has been studied by researchers from different fields: psychologists, neurophysiologists, and roboticists. Within the field of sequence learning, perceptual and motor learning as well as implicit and explicit learning have been studied and discussed independently.
We are interested in infancy research because infants, with underdeveloped brain functions and limited motor abilities, have little experience with the world and have not yet built internal models of how to interpret it. A series of infant experiments in the 1980s provided evidence that infants can rapidly develop anticipatory eye movements for visual events. Even when infants have no control over those spatio-temporal patterns, they can respond prior to the onset of the visual event, which is referred to as "anticipation".
In this work, we applied a gaze-contingent paradigm using real-time eye tracking to put 6- and 8-month-old infants in direct control of their visual surroundings. This paradigm allows the infant to change an image on a screen by looking at a peripheral red disc, which functions as a switch. We found that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions at an early stage of the experiment.
The shift of attention from learning one stimulus to the next novel stimulus is important in sequence learning. For the test phase of infant visual habituation with two objects, we propose a new theory explaining the familiarity-to-novelty shift. In our view, an infant's interest in a stimulus is related to its learning progress, i.e. the improvement of performance. As a consequence, infants prefer the stimulus for which their current learning progress is maximal, naturally giving rise to a familiarity-to-novelty shift in certain situations. Our network model predicts that the familiarity-to-novelty shift only emerges for complex stimuli that produce bell-shaped learning curves after brief familiarization, but not for simple stimuli that produce exponentially decreasing learning curves or for long familiarization times, which is consistent with experimental results. The familiarity-to-novelty shift is thus dynamic, depending on both the infant's learning efficiency and the task complexity.
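The learning-progress account can be illustrated numerically. The following is a hypothetical sketch, not the thesis's network model: the particular curve shapes and parameters are our assumptions. Error on a complex stimulus is modeled as a sigmoid-shaped learning curve and error on a simple stimulus as an exponential decay; "interest" is identified with learning progress, the drop in error per step.

```python
import math

def progress(errors):
    """Learning progress = improvement in performance per step."""
    return [e0 - e1 for e0, e1 in zip(errors, errors[1:])]

T = range(40)
# Complex stimulus: sigmoid-shaped error decay -> bell-shaped progress
complex_err = [1 / (1 + math.exp(0.4 * (t - 15))) for t in T]
# Simple stimulus: exponential error decay -> monotonically falling progress
simple_err = [math.exp(-0.3 * t) for t in T]

p_complex = progress(complex_err)
p_simple = progress(simple_err)

# For the complex stimulus, progress first rises and then falls (bell shape):
peak = p_complex.index(max(p_complex))
assert 0 < peak < len(p_complex) - 1 and max(p_complex) > p_complex[0]
# For the simple stimulus, progress only decreases from the first step on:
assert all(a >= b for a, b in zip(p_simple, p_simple[1:]))
```

After a brief familiarization (cut off before the peak), progress on the familiar complex stimulus is still rising, so it keeps winning the comparison with a novel stimulus until the bell-shaped curve passes its maximum; only then does the preference shift to novelty. The simple stimulus never has a rising phase, so no such shift emerges.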
We know that, for both infants and adults, performance on certain motor-sequence tasks can be improved through practice. However, adults usually have to perform complex tasks in complicated environments, where learning multiple tasks is unavoidable. In existing research, multiple-task learning has shown puzzling and seemingly contradictory results. On the one hand, a wide variety of proactive and retroactive interference effects have been observed when multiple tasks have to be learned. On the other hand, some studies have reported facilitation and transfer of learning between different tasks.
In order to investigate the interactions in multiple-task learning, and to find an optimal training schedule, we use a recurrent neural network to model a series of experiments on movement sequence learning. The network model learns to carry out the correct movement sequences through training and reproduces differences between training schedules, such as blocked versus random training, observed in psychophysics experiments. The network model also shows striking similarity to human performance and makes predictions about task similarity and different training schedules.
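The blocked-versus-interleaved contrast can be illustrated with a deliberately minimal toy that is much simpler than the thesis's recurrent network: a single set of delta-rule weights shared by two conflicting stimulus-response tasks, with strict alternation standing in for random interleaving. The learning rate, trial counts, and task definitions are illustrative assumptions.

```python
def train(schedule, lr=0.2):
    """Delta-rule learning of a linear mapping shared by all tasks."""
    w = [0.0, 0.0]
    for x, target in schedule:
        y = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]
    return w

def error(w, items):
    """Mean squared error of the mapping w on a task's trials."""
    return sum((t - sum(wi * xi for wi, xi in zip(w, x))) ** 2
               for x, t in items) / len(items)

task_a = [([1, 0], 1.0), ([0, 1], 0.0)]   # stimulus 1 -> 1, stimulus 2 -> 0
task_b = [([1, 0], 0.0), ([0, 1], 1.0)]   # the reversed mapping

blocked = task_a * 50 + task_b * 50       # all of A, then all of B
interleaved = (task_a[:1] + task_b[:1] + task_a[1:] + task_b[1:]) * 50

w_blocked = train(blocked)
w_mixed = train(interleaved)

# Blocked training ends having overwritten task A (retroactive interference),
# while interleaving settles on a compromise with moderate error on both.
assert error(w_blocked, task_b) < 0.01 and error(w_blocked, task_a) > 0.9
assert error(w_mixed, task_a) < 0.5 and error(w_mixed, task_b) < 0.5
```

The toy captures only the interference side of the phenomenon; reproducing facilitation and transfer between similar tasks, as discussed above, requires the richer shared representations of a recurrent network.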
In conclusion, this thesis examines the learning of action sequences in infants and in recurrent neural networks. We carried out a gaze-contingent experiment to study infants' rapid anticipation of their own action outcomes, and we constructed two recurrent neural network models: one explaining infant attention shifts in visual habituation, and the other addressing task similarity and training schedules in motor sequence control in adults.