### Refine

#### Year of publication

#### Document Type

- Doctoral Thesis (3590)

#### Keywords

- Deutschland (9)
- Gentherapie (8)
- HIV (8)
- Membranproteine (8)
- NMR-Spektroskopie (8)
- Schwerionenphysik (8)
- Alzheimer-Krankheit (7)
- Molekulardynamik (7)
- Nanopartikel (7)
- RNS (7)

#### Institute

- Medizin (1013)
- Biochemie und Chemie (600)
- Biowissenschaften (483)
- Physik (365)
- Pharmazie (284)
- Geowissenschaften (95)
- Psychologie (80)
- Gesellschaftswissenschaften (74)
- Kulturwissenschaften (56)
- Informatik (55)

- Analyses on the diversity of drought tolerance in grasses of the genus Panicum (2014)
- Drought stress is one of the major abiotic factors diminishing crop productivity worldwide. In the course of climate change, regions that already experience dry seasons will suffer from prolonged drought periods and water shortage. These climatic changes will have an impact not only on the regional flora and fauna but also on the people inhabiting these areas. It is therefore of great importance to understand the reactions of plants to drought stress, in order to support breeding and biotechnological approaches towards new, robust cereal cultivars growing under low water regimes. In this dissertation, four grasses of the genus Panicum, P. bisulcatum (C3), P. laetum, P. miliaceum and P. turgidum (all C4 NAD-ME), were subjected to drought stress. The plants' diverse reactions were investigated on a physiological as well as on a molecular level to deepen the understanding of drought stress responses. Drought stress was imposed for a species-specific period until a relative leaf water content (RWC) of ~50 % was reached in each grass. Physiological measurements were conducted on leaves with a RWC of ~50 %, investigating chlorophyll a fluorescence parameters with a Plant Efficiency Analyzer (PEA) and gas exchange parameters such as the photosynthesis rate and stomatal conductance with a Gas Fluorescence Chamber (GFS-3000). Subsequent molecular analyses were conducted on leaf samples taken at RWC = 50 %, analysing different proteins and the transcriptome of the Panicum species. The physiological measurements revealed a higher photosynthesis rate for the C4 grasses under drought stress, with no significant differences between the C4 species. The water use efficiency was also significantly higher in the C4 species than in the C3 species, independently of the water regime, supporting results from the literature. The chlorophyll a measurements revealed the strongest adaptation to water shortage in the C4 species P. turgidum, followed by the C3 species P. bisulcatum.
It has been shown before (GHANNOUM 2009) that the C4 photosynthesis apparatus is more prone to drought stress than the C3 apparatus, despite the higher water use efficiency. The results also suggested that the strong adaptation of P. turgidum to drought stress arose from its ability to recover from it (all JIP test parameters showed no significant differences between control and recovery samples). The additional down-regulation of PS II, but not of PS I, under drought stress also helped the plant to endure times of water shortage and facilitated the recovery when water was available again. Protein analyses of the content of PEPC, OEC and RubisCO (LSU and SSU) revealed no changes. Dehydrin 1, in contrast, was strongly up-regulated under drought stress and recovery in all four Panicum species. The stable content of the OEC protein was therefore not the cause of the rising K peaks measured by chlorophyll a fluorescence, and a reduced OEC activity was assumed instead. Transcriptomic analyses revealed a myriad of differentially regulated tags. Because the genomes are unsequenced, tags could only partially be annotated to their specific genes (at most 8 % for P. turgidum). Diverse methods were therefore used to annotate the most highly regulated tags to their genes and their products. Special emphasis was put on the regulation of five gene products, confirming the regulation schemata from the HT-SuperSAGE analyses. Interestingly, one protein, NCED1, was down-regulated under stress conditions, in contrast to results from the literature. It is therefore of great importance to investigate longer-lasting drought to understand the full range of drought stress adaptation. Future genome sequencing projects might also include the Panicum species investigated in this dissertation, and important gene candidates with no hits (possibly completely new to the research community) might help breeding and biotechnology approaches to produce more drought-resistant crop species.

- Atomistic molecular dynamics approach for channeling of charged particles in oriented crystals (2015)

- Development of the readout controller for the CBM Micro-Vertex Detector (2015)
- The upcoming CBM Experiment at FAIR aims at exploring the region of highest net baryonic densities reproducible in energetic heavy ion collisions. Due to the very high beam intensities expected at FAIR, unprecedented data regarding rare observables such as charm quarks and hyperons will be accessible. Open charm mesons are particularly interesting, since they support the reconstruction of the total charm cross-section in order to search for exotic phenomena, e.g. a phase transition towards the quark-gluon plasma, which is predicted by several theoretical models. Open charm studies will be performed via secondary vertex reconstruction with a suitable Micro-Vertex Detector (MVD). The CBM-MVD is currently in the development and prototyping phase, with primary design goals concentrating on spatial resolution, radiation hardness, material budget, and readout performance. CMOS Monolithic Active Pixel Sensors (MAPS) provide an excellent spatial resolution for the MVD on the order of a few µm, in combination with a low material budget (50 µm thickness) and high radiation hardness. The active volume of the devices is formed from the epitaxial layer of standard CMOS wafers. This allows for the integration of pixels together with analogue and digital data processing circuits on a single chip. This option was explored with the MIMOSA-26 prototype, which integrates functionalities like pedestal correction, correlated double sampling, discrimination and data sparsification based on zero suppression, combined with a small and dense pixel matrix. The pixel array, composed of 576 lines of 1152 pixels, is read out in a column-parallel rolling shutter mode. One discriminator per column and the digital data processing circuits are located on the same chip in a 3 mm wide area beneath the pixel matrix, allowing for binary hit encoding. This area also contains the circuits for pedestal correction and the configuration memory, which is programmed via JTAG.
The preprocessed digital data is read out via two 80 Mbit/s LVDS links per sensor, which stream their data continuously based on a low-level protocol. Within the scope of this thesis, a readout concept for the CBM-MVD is proposed and studied based on the current MIMOSA sensor generation. The backbone of the system is formed by the Readout Controller boards (ROCs) featuring FPGA microchips and optical links. Several ROC prototypes are considered, exploiting the synergy with the HADES Experiment. Finally, the TRB3 board is selected as a possible candidate for the initial FAIR experiments. Furthermore, a highly scalable, hardware-independent FPGA firmware is implemented in order to steer and read out multiple MIMOSA-26 sensors. The reconfigurable firmware is also designed to support future MIMOSA sensor generations. The free-streaming sensor data is deserialized and error-checked prior to its transmission over a suitable network interface. In order to demonstrate the validity of the concept, a readout network similar to the HADES Data Acquisition (DAQ) system is developed. The ROC is tested on the HADES TRB2 boards, and data is acquired using suitable MAPS add-on boards and the TrbNet protocol. In the context of the CBM-MVD prototype project, a readout network with 12 MIMOSA-26 sensors has been prepared for an in-beam test at the CERN SPS facility. A comprehensive control system is designed, comprising customized software tools. The subsequent in-beam test is used to validate the design choices. As a result, the system could be operated synchronously and dead-time free for several days. The readout network behavior in a realistic operating environment has been carefully studied, with the outcome that the TrbNet-based approach handles the MVD prototype setup without any difficulties. A procedure to keep the sensors synchronous even in case of a data overflow has been pioneered as well.
After the beam test, improvements and conceptual changes to the readout system are addressed which allow an integration into the global CBM DAQ system.
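The raw-bandwidth arithmetic implied by the figures above can be checked with a short sketch. Only the 1152 × 576 pixel matrix and the two 80 Mbit/s LVDS links come from the abstract; the rolling-shutter line period of 200 ns is an illustrative assumption, not a value from the thesis.

```python
# Back-of-the-envelope readout arithmetic for a MIMOSA-26-like sensor.
# Figures from the abstract: 1152 columns x 576 rows, two 80 Mbit/s links.
# The per-line readout time (200 ns) is an assumed, illustrative value.

N_COLS, N_ROWS = 1152, 576
LINK_RATE_BPS = 80e6          # one LVDS link, bits per second
N_LINKS = 2
T_LINE_S = 200e-9             # assumed rolling-shutter line period

def frame_time_s(rows=N_ROWS, t_line=T_LINE_S):
    """Rolling shutter: the matrix is read out one row at a time."""
    return rows * t_line

def bits_per_frame(rate_bps=LINK_RATE_BPS, links=N_LINKS):
    """Raw output budget of the two links during one frame period."""
    return rate_bps * links * frame_time_s()

if __name__ == "__main__":
    print(f"frame time: {frame_time_s() * 1e6:.1f} us")
    print(f"link budget: {bits_per_frame() / 1e3:.2f} kbit per frame")
```

Under these assumptions a frame takes 115.2 µs, and the zero-suppressed hit data for one frame must fit into roughly 18 kbit of link budget, which is why the on-chip data sparsification mentioned above is essential.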

- Capital gains taxes : modeling in continuous time and impacts on investment decisions (2015)
- In the first part of the thesis, we show that the payment flow of a linear tax on trading gains from a security with a semimartingale price process can be constructed for all càglàd and adapted trading strategies. It is characterized as the unique continuous extension of the tax payments for elementary strategies w.r.t. the convergence "uniformly in probability". In this framework, we prove that, under quite mild assumptions, dividend payoffs almost surely have a negative effect on the investor's after-tax wealth if the riskless interest rate is always positive. In addition, we give an example of tax-efficient strategies for which the tax payment flow can be computed explicitly. In the second part of the thesis, we investigate the impact of capital gains taxes on optimal investment decisions in a quite simple model. Namely, we consider a risk-neutral investor who owns one risky stock that she assumes to have a lower expected return than the riskless bank account, and we determine the optimal stopping time at which she sells the stock to invest the proceeds in the bank account up to the maturity date. In the case of linear taxes and a positive riskless interest rate, the problem is nontrivial because at the selling time the investor has to realize book profits, which triggers tax payments. We derive a boundary that is continuous and increasing in time, and decreasing in the volatility of the stock, such that the investor sells the stock at the first time its price is less than or equal to this boundary.
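A schematic way to write down the second-part stopping problem, assuming a geometric Brownian motion price, a constant interest rate r > 0 and a linear tax rate α on the realized gain; the exact objective in the thesis may differ in detail:

```latex
% Risk-neutral investor holding one stock S with purchase price S_0,
% believed drift mu below the riskless rate r, maturity T, and a
% linear tax alpha on the book profit realized at the selling time tau:
\[
  \sup_{\tau \le T}\;
  \mathbb{E}\!\left[ e^{\,r (T - \tau)}
    \bigl( S_\tau - \alpha\, (S_\tau - S_0)^{+} \bigr) \right],
  \qquad
  dS_t = \mu S_t\, dt + \sigma S_t\, dW_t,\ \ \mu < r .
\]
```

The tension is visible directly: selling early earns the higher riskless rate on the proceeds, but immediately triggers the tax on the positive part of the gain, which is what makes the optimal selling boundary nontrivial.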

- On the contribution of autoionizing states to XUV radiation-induced double ionization of nitrous oxide (N2O) (2015)
- The implementation of pump-probe experiments with ultrashort laser pulses enables the study of dynamical processes in atoms or molecules, which may provide a deeper insight into their physical origin. The application of this method to systems such as nitrous oxide, which is not only a simple example of a polyatomic molecule but also plays a crucial role in the greenhouse effect, promises interesting and beneficial findings. This thesis presents, on the one hand, the technical extension of an existing experimental setup for high-harmonic generation (HHG) and ultra-fast laser physics by an extreme ultraviolet (XUV) spectrometer for the in-situ observation of the harmonic spectrum during ongoing measurements. The present setup enables the production of short laser pulse trains in the XUV spectral range with durations of a few hundred attoseconds (1 as = 10^−18 s) via HHG and makes it possible to perform XUV-IR pump-probe experiments using the infrared (IR) driving field with durations of a few femtoseconds. Moreover, a reaction microscope is implemented, which enables the coincident detection of several charged particles emerging from an ionization or dissociation process and the reconstruction of their full 3-D momentum vectors. With this technique it is possible to perform time-resolved momentum spectroscopy of few-particle quantum systems. Here, the design and the calibration of the XUV spectrometer are presented, as well as a first application to the analysis of experimental data by providing information on the produced photon energies. On the other hand, the results of an XUV-pump IR-probe measurement on nitrous oxide (N2O) are discussed. With the broad harmonic spectrum (∼ 17 − 45 eV) it is possible to address several states of the singly and doubly ionized cation. One reaction channel is the single ionization into a stable state of N2O+.
Here, the photoelectron energies measured in coincidence allow the observation of sidebands, which served to estimate the pulse durations of the involved XUV pulse trains as well as of the fundamental IR pulses. Additionally, single ionization of nitrous oxide can lead to dissociation into a charged and a neutral fragment. The four respective dissociation channels are compared by presenting their branching ratios, kinetic energy release (KER) distributions and their dependencies on the time delay between pump and probe pulse. In the production of the dication, there are two competing processes: direct double ionization for photon energies above the double-ionization threshold, and autoionization of singly ionized and excited molecules in the case of photon energies near the double-ionization threshold. In both cases, the ionization leads to a Coulomb explosion into two charged fragments, where either the N − N bond or the N − O bond may break. The influence of the IR-probe field on the ionization yield and the KER was investigated and compared for both dissociation channels. In addition, the corresponding photoelectron energy spectra are presented, which show indications of autoionizing states being involved, and their dependence on the delay and the KER of the respective ions is analyzed.
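As a small numerical aside, the quoted ∼17−45 eV harmonic window is consistent with odd harmonics of a near-infrared driver. The sketch below assumes an 800 nm Ti:sapphire driving wavelength, a common choice that the abstract does not state:

```python
# Photon energies of high harmonics, assuming an 800 nm IR driver
# (a typical Ti:sapphire wavelength; the abstract does not specify it).
# Gas-phase HHG produces odd multiples of the driver photon energy.

HC_EV_NM = 1239.842   # h*c in eV*nm
DRIVER_NM = 800.0     # assumed driving wavelength

def photon_energy_ev(harmonic_order, driver_nm=DRIVER_NM):
    """Energy of the q-th harmonic of the driving field, in eV."""
    return harmonic_order * HC_EV_NM / driver_nm

# Odd harmonic orders falling inside the ~17-45 eV window quoted above
window = [q for q in range(1, 40, 2) if 17 <= photon_energy_ev(q) <= 45]
```

With these assumptions the window corresponds to harmonic orders 11 through 29, i.e. roughly ten odd harmonics spanning the singly and doubly ionized states addressed in the measurement.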

- Positivstellensatz certificates for containment of polyhedra and spectrahedra (2015)
- Containment problems belong to the classical problems of (convex) geometry. In the proper sense, a containment problem is the task of deciding the set-theoretic inclusion of two given sets, which is hard from both the theoretical and the practical perspective. In a broader sense, this includes, e.g., radii or packing problems, which are even harder. For some classes of convex sets there has been strong interest in containment problems. This includes containment problems of polyhedra and balls, and containment of polyhedra, which were studied in the late 20th century because of their inherent relevance in linear programming and combinatorics. Since then, there has only been limited progress in understanding containment problems of that type. In recent years, containment problems for spectrahedra, which naturally generalize the class of polyhedra, have seen great interest. This interest is particularly driven by the intrinsic relevance of spectrahedra and their projections in polynomial optimization and convex algebraic geometry. Except for the treatment of special classes or situations, however, there has been no overall treatment of that kind of problem. In this thesis, we provide a comprehensive treatment of containment problems concerning polyhedra, spectrahedra, and their projections from the viewpoint of low-degree semialgebraic problems, and study algebraic certificates for containment. This leads to a new and systematic approach to studying containment problems of (projections of) polyhedra and spectrahedra, and provides several new and partially unexpected results. The main idea, which is by now common in polynomial optimization but whose particular potential for low-degree geometric problems is still far from fully understood, can be explained as follows. One point of view towards linear programming is as an application of Farkas' Lemma, which characterizes the (non-)solvability of a system of linear inequalities.
The affine form of Farkas' Lemma characterizes linear polynomials which are nonnegative on a given polyhedron. By omitting the linearity condition, one gets a polynomial nonnegativity question on a semialgebraic set, leading to so-called Positivstellensaetze (or, more precisely, Nichtnegativstellensaetze). A Positivstellensatz provides a certificate for the positivity of a polynomial function in terms of a polynomial identity. As in the linear case, these Positivstellensaetze are the foundation of polynomial optimization and relaxation methods. The transition from positivity to nonnegativity is still a major challenge in real algebraic geometry and polynomial optimization. With this in mind, several principal questions arise in the context of containment problems: Can the particular containment problem be formulated as a polynomial nonnegativity (or feasibility) problem in a suitable way? If so, how are positivity and nonnegativity related to the containment question in the sense of their geometric meaning? Is there a suitable Positivstellensatz for the particular situation, yielding certificates for containment? Concerning the degree of the semialgebraic certificates, which degree is necessary and which degree is sufficient to decide containment? Indeed, (almost) all containment problems studied in this thesis can be formulated as polynomial nonnegativity problems, allowing the application of semialgebraic relaxations. Beyond this general result, the answer to all the other questions depends highly on the specific containment problem, particularly with regard to its underlying geometry. An important point is whether the hierarchies coming from increasing the degree in the polynomial relaxations always decide containment in finitely many steps. We focus on the containment problem of an H-polytope in a V-polytope and of a spectrahedron in a spectrahedron. Moreover, we address containment problems concerning projections of H-polyhedra and spectrahedra.
This selection is justified by the fact that the mentioned containment problems are computationally hard and their geometry is not well understood.
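The transition sketched in this abstract, from the affine Farkas lemma to Positivstellensatz certificates, can be written out explicitly; the following are the standard statements (Putinar-type form shown for the nonlinear case), not results specific to this thesis:

```latex
% Affine Farkas lemma: a linear polynomial l is nonnegative on the
% nonempty polyhedron P = { x : g_1(x) >= 0, ..., g_m(x) >= 0 }
% (g_i affine-linear) if and only if it admits a certificate
\[
  \ell \;=\; \lambda_0 + \sum_{i=1}^{m} \lambda_i\, g_i ,
  \qquad \lambda_0, \dots, \lambda_m \ge 0 .
\]
% Dropping the linearity condition leads to Positivstellensatz
% certificates, where sums of squares sigma_i replace the
% nonnegative scalars:
\[
  f \;=\; \sigma_0 + \sum_{i=1}^{m} \sigma_i\, g_i ,
  \qquad \sigma_0, \dots, \sigma_m \in \Sigma[x] ,
\]
% and bounding the degrees deg(sigma_i g_i) gives the hierarchy of
% semidefinite relaxations referred to in the abstract.
```

Fixing a degree bound turns the search for such a certificate into a semidefinite feasibility problem, which is exactly the sense in which containment questions become low-degree semialgebraic problems.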

- Holocene sedimentary development and event sedimentation of a mid-ocean atoll lagoon, Maldives, Indian Ocean (2014)
- This study describes the Holocene sedimentary depositional history of a lagoon, including event sedimentation and benthic foraminiferal analyses, from about 10 kyrs BP until today. It is the first study describing the sedimentation of a Maldivian atoll lagoon in such detail. Thirty-nine sediment cores have been recovered from the deep Rasdhoo Atoll lagoon of the Maldives (4°N/73°E). Seventeen sediment cores were opened and described, and 296 sediment samples have been collected and analyzed. Different methods have been used to evaluate the coarse- and fine-grained carbonate components, and a total of fifty-eight samples have been dated radiometrically by Beta Analytic Inc., Miami, Florida. In general, the Rasdhoo Atoll lagoon sediments can be divided into (1) a Late Pleistocene soil, (2) an early Holocene peat layer composed of mangrove deposits, which marks the beginning inundation of the atoll lagoon by the rising Holocene sea level at 10,320 ± 100 yrs BP, and (3) carbonate sediments starting to fill up the lagoon from 7850 ± 140 yrs BP until today. The transition from peat to carbonate is characterized by a considerable hiatus. Six different carbonate sediment facies are classified by statistical analyses, listed in decreasing abundance: (1) mollusk-coral-algal floatstone to rudstone (30 %), (2) mollusk-coral-red algae rudstone (23 %), (3) mollusk-coral-algal wackestone to floatstone (23 %), (4) mollusk-coral wackestone (13 %), (5) mollusk-coral mudstone to wackestone (9 %), and (6) mollusk mudstone (2 %). Based on grain sizes in combination with coral identification, the facies represent both lagoonal background sedimentation (mostly fine-grained sediments, matrix >50 %) and event sedimentation (coarse-grained sediment layers comprising reefal components).
Six coarser-grained layers in the muddy background sediments of the Rasdhoo Atoll lagoon were interpreted as Holocene tsunami events, based on the increase of allochthonous skeletal material with shallow-water reef affinity in these layers, such as fragments of shallow-water coral species, coralline red algae, and reef-dwelling foraminifera, as well as on AMS dating: Event 1: 420-890 yrs BP (655 yrs BP); Event 2: 890-1560 yrs BP (1225 yrs BP); Event 3: 2040-2340 yrs BP (2190 yrs BP); Event 4: 2420-3380 yrs BP (2900 yrs BP); Event 5: 3890-4330 yrs BP (4110 yrs BP); Event 6: 5480-5760 yrs BP (5620 yrs BP). Five of the six layers may be correlated to previously published tsunami events at adjacent coastal research sites. The mid-to-late Holocene atoll lagoon archive is nevertheless incomplete, based on the assumption that major earthquakes at the Indonesian subduction zone generated more than six major tsunamis during the past 6.5 kyrs. According to Gischler (2006), the sediments of the Rasdhoo Atoll lagoon can be divided into two areas: (1) a central to marginal deep lagoon with a lateral west-to-east gradient of sediment facies distribution, visible in sections <4 kyrs BP, with sedimentary facies of mudstone to wackestone in the western part (e.g., cores 16, 18, and 34) and coarse-grained coral- and algal-rich sediments in the eastern part of the lagoon (e.g., cores 30 and 31); and (2) a northern, enclosed and shallow area between the sand apron and the sand spit, accumulating "sandy" sediments of wackestone facies (cores 2, 19, 25, and 26). Comparing the sediment accumulation data of the lagoon with two reconstructed local sea-level curves, three different sequence-stratigraphic systems tracts are visible: (1) a lowstand systems tract (LST), >10 kyrs BP: Pleistocene brownish soil superposing subaerially exposed Pleistocene reef limestone. (2) A transgressive systems tract (TST), 10-6.5 kyrs BP.
A peat layer marks the beginning of the inundation, and the carbonate sedimentation starts with very low sedimentation rates of 0.02 m/kyr. (3) A highstand systems tract (HST), 6.5-0 kyrs BP, further divided into three stages (6.5-3, 3-1, and 1-0 kyrs BP): the sea-level rise slowed down, sedimentation rates increased continuously up to a maximum of 1.4 m/kyr, the sand spit developed some 4 kyrs BP, the lagoonal circulation became restricted, and the lateral west-to-east gradient of grain-size accumulation started. From 1-0 kyrs BP the sedimentation rates slowed down to modern mean sedimentation rates of 0.6 m/kyr. Two cores, one from the center of the lagoon (core 16) and one from the northern margin of the lagoon (core 19), have been analyzed at high resolution for diversity and assemblages of benthic foraminifera. The transition from Ammonia spp. to a more even and diverse fauna marks a significant environmental change at 7.0 kyrs BP in core 16 (onset of a stable environment in the deep lagoon after the sea-level rise slowed down at HST stage 1) and at 4.0 kyrs BP in core 19. A continuing environmental change after 1.4 kyrs BP in core 16 caused the fauna to become more even, with a recovery of diversity and a permanent decline of the foraminiferal accumulation rate. The changes in the faunas at 4.0 kyrs BP and at 1.4 kyrs BP could be explained by the sand spit formation in the northwestern and western lagoon. The sand spit has apparently acted as an obstacle in lagoonal circulation and might have caused unstable environmental conditions due to a more rapid circulation at the shallow marine site of core 19 and a slowdown of bottom-water circulation in the main lagoon (core 16), leading to higher residence times and to lower oxygen and higher nutrient concentrations.

- Derivation and characterization of a new filter for nonlinear high-dimensional data assimilation (2015)
- Data assimilation (DA) combines model forecasts with real-world observations to achieve an optimal estimate of the state of a dynamical system. The quality of predictions in nonlinear and chaotic systems such as atmospheric or oceanic circulation is strongly sensitive to the initial conditions. Therefore, beyond the consistent reconstruction of past states, a primary role of advanced DA methods is the proper initialization of the model. The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational DA schemes. They are applied in a wide range of research and operations. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the mean and covariance of the analysis ensemble are biased and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) relies on Bayes' theorem without further assumptions, which guarantees an exact asymptotic behavior. However, it is exposed to weight collapse, particularly in higher-dimensional settings, known as the curse of dimensionality. This work presents a new method to obtain an analysis ensemble whose mean and covariance exactly match the corresponding Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, followed by a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF).
A limitation with respect to fully nonlinear filtering is that the NETF only considers the mean and covariance of the Bayesian analysis density, neglecting higher-order moments. The properties and performance of the proposed algorithm are investigated via a set of experiments. The results indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. They also confirm that localization enhances the applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel filter is coupled to a large-scale ocean general circulation model with a realistic observation scenario. The NETF remains stable with a small ensemble size and shows a consistent behavior. Additionally, its analyses exhibit low estimation errors, as revealed by a comparison with a free ensemble integration and the ETKF. The results confirm that, in principle, the filter can be applied successfully, and as simply as the ETKF, in high-dimensional problems. No further modifications are needed, even though the algorithm is only based on the particle weights. Thus, it is able to overcome the curse of dimensionality, even in deterministic systems. This shows that the NETF constitutes a promising and user-friendly method for nonlinear high-dimensional DA.
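The analysis step described above can be sketched in a few lines of NumPy. This is a minimal illustration of the moment-matching square root transform under a Gaussian observation likelihood; the stabilizing mean-preserving random rotation and the localization discussed in the abstract are omitted, and all function and variable names are illustrative, not the author's.

```python
import numpy as np

def netf_analysis(X, y, H, R):
    """One moment-matching analysis step (sketch of the NETF idea).

    X: (n, m) forecast ensemble (m members of dimension n)
    y: (p,)   observation vector
    H: (p, n) linear observation operator
    R: (p, p) observation-error covariance
    """
    n, m = X.shape
    # Particle-filter weights from the Gaussian observation likelihood
    innov = y[:, None] - H @ X                       # (p, m) innovations
    Rinv = np.linalg.inv(R)
    logw = -0.5 * np.einsum('pm,pq,qm->m', innov, Rinv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                     # normalized weights
    # Bayesian analysis mean and the matrix A = diag(w) - w w^T,
    # whose square root carries the ensemble onto the Bayesian covariance
    xa_mean = X @ w
    A = np.diag(w) - np.outer(w, w)
    vals, vecs = np.linalg.eigh(A)                   # A is symmetric PSD
    T = np.sqrt(m) * (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    # Analysis ensemble: exact Bayesian mean and covariance
    # (covariance with 1/m normalization); the random rotation used
    # for stability in the thesis would multiply T from the right.
    Xa = xa_mean[:, None] + X @ T
    return Xa, w
```

Because A annihilates the vector of ones, the transform leaves the ensemble mean untouched, and T T^T = m·A reproduces the weighted Bayesian covariance exactly, which is the moment-matching property the abstract describes.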

- Verrechtlichung von Geschichte : Parlamentarische Debatten um die gesetzlichen Bestimmungen gegen Holocaustleugnung in der Bundesrepublik Deutschland und in Österreich (2015)
- The dissertation entitled „Verrechtlichung von Geschichte. Parlamentarische Debatten um die gesetzlichen Bestimmungen gegen Holocaustleugnung in Österreich und Deutschland" examines the criminal-law provisions against Holocaust denial in Germany and Austria. The central question is how a historical event could be framed and codified with the help of political and legal terminology in such a way that its denial can since be punished by means of the law. To this end, the study analyses above all the parliamentary proceedings and debates that preceded the passage of the laws. The evaluation of these sources also helps to explain why, and in what form, the logic of these laws has been adopted by other states over the past twenty years and extended to other historical events. In addition to this extensive empirical part, which addresses the respective historical specifics, the dissertation contains a more analytically oriented concluding section that attempts, through thesis-like observations, to trace the phenomenon and the intellectual-historical genesis of Holocaust-denial laws. These observations cover, among other things, the fields of the politics of history, legal policy, language policy, and science policy, and approach from different angles the question of how the laws have been, and continue to be, justified, legitimized, and criticized.

- Wege in die Arbeitswelt aus der Sicht erfolgreicher Absolventinnen und Absolventen einer Berufsausbildung im dualen System - eine qualitative Studie (2014)
- Using problem-centred interviews, the present study traces the steps of young vocational-training graduates - here, participants in one branch of support programmes for disadvantaged trainees, the training-accompanying assistance measures (ausbildungsbegleitende Hilfen) - from general-education school into employment. The study is situated in the fields of research on disadvantaged trainees and research on school-to-work transitions. Part 2 of the study reports the occupational paths, surveyed with a short questionnaire, of 79 selected participants from the final-examination cohorts 2000 to 2004, from the end of school until the survey's reference date. The main questions concern paths into vocational training, the course of training, examination outcomes, the retention behaviour of the training firms, periods of unemployment, and the occupational status, as of the reference date, of apprenticeship graduates perceived as disadvantaged. Part 3 of the study describes how entry into working life after successful completion of the apprenticeship appears from the participants' point of view. Assessments of favourable and obstructive factors in the career entry are available from 62 participants through problem-centred interviews. From these assessments, Part 4 of the study identifies, as its result, so-called winners and losers of the career entry.