620 Engineering and allied operations
Understanding the physics of strongly correlated electronic systems has been a central issue in condensed matter physics for decades. In transition metal oxides, strong correlations characteristic of narrow d bands are at the origin of remarkable properties such as the opening of a Mott gap, enhanced effective mass, and anomalous vibronic coupling, to mention a few. SrVO3, with V4+ in a 3d1 electronic configuration, is the simplest example of a 3D correlated metallic electronic system. Here, the authors focus on the observation of a (roughly) quadratic temperature dependence of the inverse electron mobility of this seemingly simple system, which is an intriguing property shared by other metallic oxides. The systematic analysis of electronic transport in SrVO3 thin films discloses the limitations of the simplest picture of e–e correlations in a Fermi liquid (FL); instead, it is shown that the quasi-2D topology of the Fermi surface (FS) and a strong electron–phonon coupling, which contributes to dress carriers with a phonon cloud, play a pivotal role in the reported electron spectroscopic, optical, thermodynamic, and transport data. The picture that emerges is not restricted to SrVO3 but can be shared with other 3d and 4d metallic oxides.
High shares of intermittent renewable power generation in a European electricity system will require flexible backup power generation on the dominant diurnal, synoptic, and seasonal weather timescales. The same three timescales are already covered by today’s dispatchable electricity generation facilities, which are able to follow the typical load variations on the intra-day, intra-week, and seasonal timescales. This work aims to quantify the changing demand for those three backup flexibility classes in emerging large-scale electricity systems, as they transform from low to high shares of variable renewable power generation. A weather-driven model is used, which aggregates eight years of wind and solar power generation data as well as load data over Germany and Europe, and splits the backup system required to cover the residual load into three flexibility classes distinguished by their respective maximum rates of change of power output. This modelling shows that the slowly flexible backup system is dominant at low renewable shares, but its optimized capacity decreases and drops close to zero once the average renewable power generation exceeds 50% of the mean load. The medium flexible backup capacities increase for modest renewable shares, peak at around a 40% renewable share, and then continuously decrease to almost zero once the average renewable power generation becomes larger than 100% of the mean load. The dispatch capacity of the highly flexible backup system becomes dominant for renewable shares beyond 50% and reaches its maximum around a 70% renewable share. For renewable shares above 70% the highly flexible backup capacity in Germany remains at its maximum, whereas it decreases again for Europe. This indicates that for highly renewable large-scale electricity systems the total required backup capacity can only be reduced if countries share their excess generation and backup power.
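A minimal sketch of the kind of timescale separation described above: the residual load is decomposed into slow (seasonal), medium (synoptic), and fast (diurnal) components via nested running means. The window lengths and the running-mean decomposition are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical decomposition of an hourly residual load series into three
# backup flexibility classes via nested running means.

def running_mean(series, window):
    """Centred running mean with edge truncation."""
    n = len(series)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def split_flexibility_classes(residual_load):
    """Return (slow, medium, fast) components that sum to residual_load."""
    slow = running_mean(residual_load, 24 * 30)      # ~monthly -> seasonal
    synoptic = running_mean(residual_load, 24 * 3)   # ~3 days  -> synoptic
    medium = [s - sl for s, sl in zip(synoptic, slow)]
    fast = [r - s for r, s in zip(residual_load, synoptic)]
    return slow, medium, fast
```

By construction the three components add back up to the residual load, so each class can be sized and ramp-rated independently.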
The transition to a future electricity system based primarily on wind and solar PV is examined for all regions in the contiguous US. We present optimized pathways for the build-up of wind and solar power for least backup energy needs as well as for least cost, obtained with a simplified, lightweight model based on long-term, high-resolution, weather-determined generation data. In the absence of storage, the pathway which achieves the best match of generation and load, thus resulting in the least backup energy requirements, generally favors a combination of both technologies, with a wind/solar PV (photovoltaics) energy mix of about 80/20 in a fully renewable scenario. The least-cost development starts with 100% of the technology with the lowest average generation costs, but with increasing renewable installations, economically unfavorable excess generation pushes it toward the minimal-backup pathway. Surplus generation and the entailed costs can be reduced significantly by combining wind and solar power, by absorbing excess generation, for example with storage or transmission, or by coupling the electricity system to other energy sectors.
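The least-backup-energy criterion can be illustrated with a toy scan over the wind share, assuming (as in a fully renewable scenario) that average generation is scaled to equal average load; profile data and the grid of shares are invented here, not the paper's model.

```python
# Illustrative sketch: find the wind share that minimises backup energy
# when renewables cover the load on average.

def backup_energy(load, wind, solar, wind_share):
    """Backup energy needed for a given wind/solar capacity-factor mix."""
    mean_load = sum(load) / len(load)
    mix = [wind_share * w + (1 - wind_share) * s for w, s in zip(wind, solar)]
    mean_cf = sum(mix) / len(mix)
    scale = mean_load / mean_cf          # scale capacity so <gen> = <load>
    return sum(max(l - scale * g, 0.0) for l, g in zip(load, mix))

def least_backup_share(load, wind, solar, steps=101):
    shares = [i / (steps - 1) for i in range(steps)]
    return min(shares, key=lambda a: backup_energy(load, wind, solar, a))
```

With real multi-year profiles, the same scan produces the roughly 80/20 wind/solar optimum reported above.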
Background: In general, the prevalence of work-related musculoskeletal disorders (WMSD) in dentistry is high, and dental assistants (DA) are even more affected than dentists (D). Furthermore, differentiations between the fields of dental specialization (e.g., general dentistry, endodontology, oral and maxillofacial surgery, or orthodontics) are rare. Therefore, this study aims to investigate the ergonomic risk of the aforementioned four fields of dental specialization for D and DA on the one hand, and to compare the ergonomic risk of D and DA within each individual field of dental specialization on the other. Methods: In total, 60 dentists (33 male/27 female) and 60 dental assistants (11 male/49 female) volunteered in this study. The sample was composed of 15 dentists and 15 dental assistants from each dental field, in order to represent the fields of dental specialization. In a laboratory setting, all tasks were recorded using an inertial motion capture system. The kinematic data were applied to an automated version of the Rapid Upper Limb Assessment (RULA). Results: The results revealed significantly reduced ergonomic risks in endodontology and orthodontics compared to oral and maxillofacial surgery and general dentistry in DAs, while orthodontics showed a significantly reduced ergonomic risk compared to general dentistry in Ds. Further differences between the fields of dental specialization were found in the right wrist, right lower arm, and left lower arm in DAs, and in the neck, right wrist, right lower arm, and left wrist in Ds. The differences between Ds and DAs within a specialist discipline were rather small. Discussion: Independent of whether one works as a D or DA, the percentage of time spent working in higher risk scores is reduced in endodontologists, and especially in orthodontists, compared to general dentists or oral and maxillofacial surgeons. In order to counteract the development of WMSD, interventions should be made early.
Consequently, ergonomic training or strength training is recommended.
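The "percentage of time spent working in higher risk scores" can be sketched as a small post-processing step: a per-frame RULA score series (1-7) is binned into the standard RULA action levels and time shares are reported. The score series here is toy data; the paper's exact aggregation may differ.

```python
# Bin a per-frame RULA grand-score series into the standard RULA action
# levels and report the fraction of recording time spent in each level.

RULA_ACTION_LEVELS = {
    1: range(1, 3),   # scores 1-2: acceptable posture
    2: range(3, 5),   # scores 3-4: further investigation needed
    3: range(5, 7),   # scores 5-6: investigate and change soon
    4: range(7, 8),   # score  7:   investigate and change immediately
}

def action_level_shares(scores):
    """Fraction of frames falling into each RULA action level."""
    counts = {level: 0 for level in RULA_ACTION_LEVELS}
    for s in scores:
        for level, band in RULA_ACTION_LEVELS.items():
            if s in band:
                counts[level] += 1
    total = len(scores)
    return {level: c / total for level, c in counts.items()}
```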
AttendAffectNet: emotion prediction of movie viewers using multimodal fusion with self-attention
(2021)
In this paper, we tackle the problem of predicting the affective responses of movie viewers based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, they typically ignore both the correlation between multiple modality inputs and the correlation between temporal inputs (i.e., sequential features). To explore these correlations, we propose a neural network architecture, namely AttendAffectNet (AAN), that uses the self-attention mechanism for predicting the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions, expressed in terms of valence and arousal. We analyze three variants of our proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism on the features extracted from the different modalities (including video, audio, and movie subtitles) of a whole movie, thereby capturing the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account. In the Temporal AAN, self-attention is applied on the concatenated (multimodal) feature vectors representing different subsequent movie segments. In the Mixed AAN, we combine the strong points of the Feature AAN and the Temporal AAN by applying self-attention first on vectors of features obtained from different modalities in each movie segment and then on the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated our proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset.
Our experiments demonstrate that audio features play a more influential role than those extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
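The core operation the Feature AAN applies across modality feature vectors can be sketched as plain scaled dot-product self-attention. Learned query/key/value projections are omitted here (identity projections are an illustrative assumption), so this is only the attention step, not the full architecture.

```python
# Scaled dot-product self-attention over a set of modality feature vectors.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(features):
    """features: list of equally sized vectors, one per modality."""
    d = len(features[0])
    out = []
    for q in features:                      # each modality attends to all
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in features]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, features))
                    for j in range(d)])
    return out
```

Each output vector is a convex combination of all modality vectors, which is what lets the model treat the features of one modality as context for the others.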
Solar photovoltaic (PV) panels in combination with batteries are often proposed as a solution to provide stable power supply in rural areas. PV generation is mostly dominated by the solar diurnal cycle and has, in some countries, already started to influence the daily price distribution on the electricity market.
In this work, we study the performance and optimisation of rural PV-battery hybrid systems in a future renewable Polish power system. We use data on generation potentials to study PV and battery deployment. Together with a power system optimisation and dispatch model for the Polish power system, we study market values when selling at the national market for different CO2 price scenarios. We show that optimal orientations with respect to tilt/azimuth are subject to change as the PV share grows and that the benefit from batteries grows for higher shares of renewables.
High-temperature-tolerant enzymes offer multiple advantages over enzymes from mesophilic organisms for the industrial production of sustainable chemicals due to high specific activities and stabilities towards fluctuations in pH, heat, and organic solvents. The production of molecular hydrogen (H2) is of particular interest because of the multiple uses of hydrogen in energy and chemical applications, and the ability of hydrogenase enzymes to reduce protons to H2 at a cathode. We examined the activity of Hydrogen-Dependent CO2 Reductase (HDCR) from the thermophilic bacterium Thermoanaerobacter kivui when immobilized in a redox polymer, cobaltocene-functionalized polyallylamine (Cc-PAA), on a cathode for enzyme-mediated H2 formation from electricity. The presence of Cc-PAA increased the reductive current density 340-fold when used on an electrode with HDCR at 40 °C, reaching unprecedented current densities of up to 3 mA·cm−2 with minimal overpotential and high faradaic efficiency. In contrast to other hydrogenases, T. kivui HDCR showed substantial reversibility of CO-dependent inactivation, revealing an opportunity for usage in gas mixtures containing CO, such as syngas. This study highlights the potential of combining redox polymers with novel enzymes from thermophiles for enhanced electrosynthesis.
Sample-based longitudinal discrete choice experiments: preferences for electric vehicles over time
(2021)
Discrete choice experiments have emerged as the state-of-the-art method for measuring preferences, but they are mostly used in cross-sectional studies. In seeking to make them applicable for longitudinal studies, our study addresses two common challenges: working with different respondents and handling altering attributes. We propose a sample-based longitudinal discrete choice experiment in combination with a covariate-extended hierarchical Bayes logit estimator that allows one to test the statistical significance of changes. We showcase this method’s use in studies about preferences for electric vehicles over six years and empirically observe that preferences develop in an unpredictable, non-monotonous way. We also find that inspecting only the absolute differences in preferences between samples may result in misleading inferences. Moreover, surveying a new sample produced similar results as asking the same sample of respondents over time. Finally, we experimentally test how adding or removing an attribute affects preferences for the other attributes.
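The logit core underlying such a discrete choice experiment can be sketched in a few lines: part-worth utilities are summed per alternative and converted to choice probabilities. The attribute names and coefficients below are hypothetical, not estimates from the study.

```python
# Multinomial logit choice probabilities from part-worth utilities.
import math

def choice_probabilities(utilities):
    """P_j = exp(V_j) / sum_k exp(V_k), computed stably."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical part-worths for two electric-vehicle profiles and an opt-out:
beta = {"range_100km": 0.8, "price_10k": -0.5, "fast_charging": 0.6}
alternatives = [
    {"range_100km": 3, "price_10k": 4, "fast_charging": 1},  # EV A
    {"range_100km": 2, "price_10k": 3, "fast_charging": 0},  # EV B
    {},                                                      # opt-out, V = 0
]
utilities = [sum(beta[a] * v for a, v in alt.items()) for alt in alternatives]
probs = choice_probabilities(utilities)
```

In the covariate-extended hierarchical Bayes setting described above, the betas additionally vary across respondents and sampling waves, which is what allows testing whether preference changes over time are statistically significant.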
In this paper, we introduce an approach for future frame prediction based on a single input image. Our method is able to generate an entire video sequence based on the information contained in the input frame. We adopt an autoregressive approach in our generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use “one shot” generation, our method is able to preserve much more detail from the input image, while also capturing the critical pixel-level changes between the frames. We overcome the problem of generation quality degradation by introducing a “complementary mask” module in our architecture, and we show that this allows the model to focus only on the generation of the pixels that need to be changed, and to reuse those that should remain static from the previous frame. We empirically validate our method against various video prediction models on the UT Dallas Dataset, and show that our approach is able to generate high-quality, realistic video sequences from one static input image. In addition, we validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model tested on a human action dataset: the Weizmann Action database.
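The complementary-mask idea reduces to a per-pixel blend: the network outputs a mask m and a generated frame g, and static pixels are copied from the previous frame. The pure-Python composition below is a stand-in for the tensor operation; the function name and shapes are assumptions.

```python
# Compose the next frame from the previous frame, the generated frame, and
# a per-pixel mask: next = mask * generated + (1 - mask) * previous.

def compose_frame(prev_frame, generated, mask):
    """All arguments are 2-D lists of floats with identical shapes."""
    return [[m * g + (1.0 - m) * p
             for p, g, m in zip(prow, grow, mrow)]
            for prow, grow, mrow in zip(prev_frame, generated, mask)]
```

Where the mask is 0 the previous frame passes through unchanged, which is what preserves fine detail across the autoregressive rollout.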
The future of work has become a pressing matter of concern: researchers, business consultancies, and industrial companies are intensively studying how new work models could best be implemented to increase workplace flexibility and creativity. In particular, the agile model has become one of the “must-have” elements for re-organizing work practices, especially for technology development work. However, the implementation of agile work often comes together with strong presumptions: it is regarded as an inevitable tool that can be universally integrated into different workplaces while having the same outcome of flexibility, transparency, and flattened hierarchies everywhere. This paper challenges such essentializing assumptions by turning agile work into a “matter of care.” We argue that care work occurs in contexts other than feminized reproductive work, namely, technology development. Drawing on concepts from feminist Science and Technology Studies and ethnographic research at agile technology development workplaces in Germany and Kenya, we examine what work it takes to actually keep up with the imperative of agile work. The analysis brings to the fore the often invisibilized care practices of human and nonhuman actors that are necessary to enact and stabilize the agile promises of flexibilization, co-working, and rapid prototyping. Revealing the caring sociotechnical relationships that are vital for working agile, we discuss the emergence of power asymmetries characterized by hierarchies of skills that are differently acknowledged in the daily work of technology development. The paper ends by speculating on the emancipatory potential of a care perspective, by which we seek to inspire careful Emancipatory Technology Studies.
'THIS ISN'T ME!': the role of age-related self- and user images for robot acceptance by elders
(2020)
Although companion-type robots are already commercially available, little interest has been taken in identifying reasons for inter-individual differences in their acceptance. Elders’ age-related perceptions of both their own self (self-image) and of the general older robot user (user image) could play a relevant role in this context. Since little is known to date about elders’ companion-type robot user image, it is one aim of this study to investigate its age-related facets, concentrating on possibly stigmatizing perceptions of elder robot users. The study also addresses the association between elders’ age-related self-image and robot acceptance: Is the association independent of the user image or not? To investigate these research questions, N = 28 adults aged 63 years and older were introduced to the companion-type robot Pleo. Afterwards, several markers of robot acceptance were assessed. Actual and ideal self- and subjective robot user image were assessed by a study-specific semantic differential on the stereotype dimensions of warmth and competence. Results show that participants tended to stigmatize elder robot users. The self-images were not directly related to robot acceptance, but affected it in the context of the user image. A higher fit between self- and user image was associated with higher perceived usefulness, social acceptance, and intention to use the robot. To conclude, elders’ subjective interpretations of new technologies play a relevant role for their acceptance. Together with elders’ individual self-images, they need to be considered in both robot development and implementation. Future research should consider that associations between user characteristics and robot acceptance by elders can be complex and easily overlooked.
This work aims at radar sensors in the frequency band from 57 to 64 GHz that can be embedded in wind turbine blades during manufacturing, enabling non-destructive quality inspection directly after production and structural health monitoring (SHM) during the complete service life of the blade. In this paper, we show the fundamental damage detection capability of this sensor technology during fatigue testing of typical rotor blade materials. Therefore, a frequency modulated continuous wave (FMCW) radar sensor is used for damage diagnostics, and the results are validated by simultaneous camera recordings. Here, we focus on the failure modes delamination, fiber waviness (ondulation), and inter-fiber failure. For each failure mode, three samples have been designed and experimentally investigated during fatigue testing. A damage index has been proposed based on residual, that is, differential, signals exploiting measurements from pristine structural conditions. This study shows that the proposed innovative radar approach is able to detect continuous structural degradation for all failure modes by means of gradual signal changes.
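A residual-based damage index of the kind described above can be sketched as follows: the current radar signal is compared against a baseline recorded in the pristine state, and the residual energy is normalised. The specific normalisation is an illustrative choice, not necessarily the paper's.

```python
# Damage index from residual (differential) signals: normalised energy of
# the difference between the current signal and a pristine-state baseline.
import math

def damage_index(baseline, current):
    """0 for an unchanged signal; grows with structural degradation."""
    residual_energy = sum((c - b) ** 2 for b, c in zip(baseline, current))
    baseline_energy = sum(b ** 2 for b in baseline)
    return math.sqrt(residual_energy / baseline_energy)
```

A gradually rising index over fatigue cycles is what signals continuous structural degradation for the monitored failure modes.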
This study presents an ultra-wideband, elliptical slot, planar monopole antenna for early breast cancer microwave imaging. The on-body antenna's operation is optimised by direct contact with the patient's skin. With a compact size of 9 × 7 mm, the antenna covers a wide bandwidth from 16 to 24 GHz for reflection coefficients lower than –10 dB. Besides, it also features an electrode for electrical impedance tomography applications. Verification on a volunteer's breast gives an excellent agreement with the simulation for the defined bandwidth. Furthermore, as the first stage of the system's characterisation, pork fat is also used to demonstrate the possibility to enhance the transmission between the antennas within the high loss environment. Those results propose the feasibility of implementing a high-frequency radar system for breast cancer detection.
Consequences of minimal length discretization on line element, metric tensor, and geodesic equation
(2021)
When minimal length uncertainty emerging from a generalized uncertainty principle (GUP) is thoughtfully implemented, it is of great interest to consider its impacts on gravitational Einstein field equations (gEFEs) and to try to assess consequential modifications in metric manifesting properties of quantum geometry due to quantum gravity. GUP takes into account the gravitational impacts on the noncommutation relations of length (distance) and momentum operators or time and energy operators and so on. On the other hand, gEFE relates classical geometry or general relativity gravity to the energy–momentum tensors, that is, proposing quantum equations of state. Despite the technical difficulties, we intend to insert GUP into the metric tensor so that the line element and the geodesic equation in flat and curved space are accordingly modified. The latter apparently encompasses acceleration, jerk, and snap (jounce) of a particle in the quasi-quantized gravitational field. Finite higher orders of acceleration apparently manifest phenomena such as accelerating expansion and transitions between different radii of curvature and so on.
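As a concrete reference point, the quadratic GUP commonly referenced in this context deforms the canonical commutator and uncertainty relation as follows (the paper's specific metric deformation is not reproduced here; this is the standard textbook form with deformation parameter $\beta$):

```latex
[\hat{x}, \hat{p}] = i\hbar\left(1 + \beta \hat{p}^{2}\right),
\qquad
\Delta x\,\Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2}\right),
\qquad
\Delta x_{\min} = \hbar\sqrt{\beta}.
```

The minimal length $\Delta x_{\min}$ is what the abstract refers to as the minimal length uncertainty inserted into the metric tensor.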
Objectives: Four-dimensional ultrasound (4D-US) enables imaging of the aortic segment and simultaneous determination of the wall expansion. The method shows a high spatial and temporal resolution, but its in vivo reliability for small measured values is so far unknown. The present study determines the intraobserver repeatability and interobserver reproducibility of 4D-US in the atherosclerotic and non-atherosclerotic infrarenal aorta. Methods: In all, 22 patients with non-aneurysmal aortae were examined by an experienced examiner and a medical student. After registration of 4D images, both examiners marked the aortic wall manually before the commercially implemented speckle-tracking algorithm was applied. The cyclic changes of the aortic diameter and the circumferential strain were determined with the help of custom-made software. The reliability of 4D-US was tested by the intraclass correlation coefficient (ICC). Results: The 4D-US measurements showed very good reliability for the maximum aortic diameter and the circumferential strain for all patients and for the non-atherosclerotic aortae (ICC >0.7), but low reliability for circumferential strain in calcified aortae (ICC = 0.29). The observer- and masking-related variances for both maximum diameter and circumferential strain were close to zero. Conclusions: Despite the small measured values, the high spatial and temporal resolution of 4D-US enables a reliable evaluation of cyclic diameter changes and circumferential strain in non-aneurysmal aortae independent of observer experience, but with some limitations for calcified aortae. 4D-US opens up a new perspective with regard to noninvasive, in vivo assessment of the kinematic properties of the vessel wall in the abdominal aorta.
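For illustration, the simplest one-way ICC, ICC(1,1), can be computed directly from repeated measurements per subject; the study's exact ICC model (e.g. two-way, absolute agreement) is not stated here, so this is only a sketch of the statistic's structure.

```python
# One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k - 1) * MSW).

def icc_oneway(ratings):
    """ratings: list of subjects, each a list of k repeated measurements."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subject_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subject_means)
              for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1 mean between-subject variation dominates measurement noise, which is the sense in which ICC > 0.7 indicates good reliability.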
This paper explores the many interesting implications for oscillator design, with optimized phase-noise performance, deriving from a newly proposed model based on the concept of oscillator conjugacy. For the case of 2-D (planar) oscillators, the model prominently predicts that only circuits producing a perfectly symmetric steady state can have zero amplitude-to-phase (AM-PM) noise conversion, a so-called zero-state. Simulations on standard industry oscillator circuits verify all model predictions, but also show that these circuit classes cannot attain zero-states except in special limit cases which are not practically relevant. Guided by the newly acquired design rules, we describe the synthesis of a novel 2-D reduced-order LC oscillator circuit which achieves several zero-states while operating at realistic output power levels. The potential future application of this theoretical framework for the implementation of numerical algorithms aimed at optimizing oscillator phase-noise performance is briefly discussed.
Nano-granular metals are materials that fall into the general class of granular electronic systems in which the interplay of electronic correlations, disorder, and finite-size effects can be studied. The charge transport in nano-granular metals is dominated by thermally assisted, sequential, and correlated tunneling over a temperature-dependent number of metallic grains. Here we study the frequency-dependent conductivity (AC conductivity) of nano-granular platinum with Pt nano-grains embedded into amorphous carbon (C). We focus on the transport regime on the insulating side of the insulator-metal transition, reflected by a set of samples covering a range of tunnel-coupling strengths. In this transport regime, polarization contributions to the AC conductivity are small and correlation effects in the transport of free charges are expected to be particularly pronounced. We find a universal behavior in the frequency dependence that can be traced back to the temperature-dependent zero-frequency conductivity (DC conductivity) of Pt/C within a simple lumped-circuit analysis. Our results are in contradistinction to previous work on nano-granular Pd/ZrO2 in the very weak coupling regime, where polarization contributions to the AC conductivity dominated. We describe possible future applications of nano-granular metals in proximity impedance spectroscopy of dielectric materials.
In this roadmap article, we focus on the most recent advances in terahertz (THz) imaging, with particular attention paid to the optimization and miniaturization of THz imaging systems. Such systems entail enhanced functionality, reduced power consumption, and increased convenience, thus being geared toward the implementation of THz imaging in real operational conditions. The article touches upon advanced solid-state-based THz imaging systems, including room-temperature THz sensors and arrays, as well as their on-chip integration with diffractive THz optical components. We cover the current state of compact room-temperature THz emission sources, both optoelectronic and electrically driven; particular emphasis is attributed to the beam-forming role in THz imaging, THz holography and spatial filtering, THz nano-imaging, and computational imaging. A number of advanced THz techniques, such as light-field THz imaging, homodyne spectroscopy and phase-sensitive spectrometry, THz modulated continuous-wave imaging, room-temperature THz frequency combs, and passive THz imaging, as well as the use of artificial intelligence in THz data processing and optics development, are reviewed. This roadmap presents a structured snapshot of current advances in THz imaging as of 2021 and provides an opinion on contemporary scientific and technological challenges in this field, as well as extrapolations of possible further evolution in THz imaging.
Organ-on-a-chip technology has the potential to accelerate pharmaceutical drug development, improve the clinical translation of basic research, and provide personalized intervention strategies. In the last decade, big pharma has engaged in many academic research cooperations to develop organ-on-a-chip systems for future drug discoveries. Although most organ-on-a-chip systems present proof-of-concept studies, miniaturized organ systems still need to demonstrate translational relevance and predictive power in clinical and pharmaceutical settings. This review explores whether microfluidic technology succeeded in paving the way for developing physiologically relevant human in vitro models for pharmacology and toxicology in biomedical research within the last decade. Individual organ-on-a-chip systems are discussed, focusing on relevant applications and highlighting their ability to tackle current challenges in pharmacological research.
Collaboration is an important 21st-century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of these works have used the audio modality to detect the quality of CC. The CC quality can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, such as synchrony in the rise and fall of the average pitch. Most studies in the past focused on “how group members talk” (i.e., spectral and temporal features of audio, such as pitch) and not “what they talk about”. The “what” of the conversations is more overt, in contrast to the “how”. Very few studies examined “what” group members talk about, and these studies were lab-based, showing a representative overview of specific words as topic clusters instead of analysing the richness of the content of the conversations by understanding the linkage between these words. To overcome this, we made a starting step in this technical paper, based on field trials, to prototype a tool that moves towards automatic collaboration analytics. We designed a technical setup to collect, process, and visualize audio data automatically. The data collection took place while a board game was played among university staff with pre-assigned roles, to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations, but also analysed the richness of these conversations by visualizing the strength of the linkage between these words and phrases interactively. In this visualization, we used a network graph to visualize the turn-taking exchange between different roles, along with the word-level and phrase-level analysis. We also used centrality measures to understand the network graph further, based on how much hold certain words have over the network of words and how influential those words are.
Finally, we found that this approach had certain limitations in terms of automation in speaker diarization (i.e., who spoke when) and text data pre-processing. Therefore, we concluded that even though the technical setup was partially automated, it is a way forward to understand the richness of the conversations between different roles and makes a significant step towards automatic collaboration analytics.
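The word-network step can be sketched as building a co-occurrence graph from utterances and ranking words by degree centrality; the utterances below are invented toy data, and the real pipeline would additionally weight edges and use further centrality measures.

```python
# Build a word co-occurrence graph from utterances and compute degree
# centrality (degree divided by the maximum possible degree, n - 1).
from collections import defaultdict
from itertools import combinations

def degree_centrality(utterances):
    edges = defaultdict(set)
    for words in utterances:
        for a, b in combinations(sorted(set(words)), 2):
            edges[a].add(b)
            edges[b].add(a)
    n = len(edges)
    return {w: len(nbrs) / (n - 1) for w, nbrs in edges.items()}

utterances = [
    ["learning", "design"],
    ["learning", "analytics"],
    ["learning", "data"],
]
centrality = degree_centrality(utterances)
```

Words with high centrality are those that link many parts of the conversation, which is the sense of "hold over the network" used above.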
Surface plasmon polaritons on (silver) nanowires are promising components for future photonic technologies. Here, we study near-field patterns on silver nanowires with a scattering-type scanning near-field optical microscope that enables the direct mapping of surface waves. We analyze the spatial pattern of the plasmon signatures for different excitation geometries and polarization and observe a plasmon wave pattern that is canted relative to the nanowire axis, which we show is due to a superposition of two different plasmon modes, as supported by electromagnetic simulations including the influence of the substrate. These findings yield new insights into the excitation and propagation of plasmon polaritons for applications in nanoplasmonic devices.
In power systems, flow allocation (FA) methods enable the allocation of the usage and costs of the transmission grid to each single market participant. Based on predefined assumptions, the power flow is split into isolated generator- or consumer-specific sub-flows. Two prominent FA methods, Marginal Participation (MP) and Equivalent Bilateral Exchanges (EBE), build upon the linearized power flow and thus on the Power Transfer Distribution Factors (PTDFs). Despite their intuitive and computationally efficient concepts, they are restricted to networks with passive transmission elements only. As soon as a significant number of controllable transmission elements, such as high-voltage direct current (HVDC) lines, operate in the system, they lose their applicability. This work reformulates the two methods in terms of Virtual Injection Patterns (VIPs), which allows one to efficiently introduce a shift parameter q to tune the contributions of net sources and net sinks in the network. Major properties and differences of the methods are pointed out, and it is shown how the MP and EBE algorithms can be applied to generic meshed AC-DC electricity grids: by introducing a pseudo-impedance ω̄, which reflects the operational state of controllable elements and allows one to extend the PTDF matrix under the assumption of knowing the current flow in the system. Basic properties from graph theory are used to solve for the pseudo-impedance in dependence of the position within the network. This directly enables, e.g., HVDC lines to be considered in the MP and EBE algorithms. The extended methods are applied to a low-carbon European network model (PyPSA-EUR) with a spatial resolution of 181 nodes and an 18% transmission expansion compared to today’s total transmission capacity volume. The allocations of MP and EBE show that countries with high wind potentials profit most from the transmission grid expansion.
Based on the average usage of the transmission system expansion, a method for distributing operational and capital expenditures is proposed. In addition, it is shown how injections from renewable resources strongly drive country-to-country allocations and thus cross-border electricity flows.
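The PTDF-based allocation underlying Marginal Participation can be sketched on a toy network. This is an illustrative three-bus example with made-up susceptances and injections, not the paper's implementation; it only shows the core identity that summing the nodal sub-flows recovers the physical line flow.

```python
import numpy as np

# Toy 3-bus triangle network (illustrative values, not from the paper)
# lines: 0-1, 1-2, 0-2, all with unit susceptance
K = np.array([[1, -1,  0],    # incidence matrix: lines x buses
              [0,  1, -1],
              [1,  0, -1]])
b = np.ones(3)                # line susceptances

# Linearized power flow: PTDF = diag(b) K L^+ with Laplacian L = K^T diag(b) K
L = K.T @ np.diag(b) @ K
PTDF = np.diag(b) @ K @ np.linalg.pinv(L)

p = np.array([2.0, -0.5, -1.5])   # net injections (sum to zero)
flows = PTDF @ p                  # physical line flows

# Marginal Participation: allocate each line flow to the nodes,
# F[l, n] = PTDF[l, n] * p[n]; summing over n recovers the flow
F = PTDF * p
assert np.allclose(F.sum(axis=1), flows)
```

The shift parameter q of the paper would reweight how these nodal contributions are split between net sources and net sinks; that refinement is omitted here.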
Python for Power System Analysis (PyPSA) is a free software toolbox for simulating and optimising modern electrical power systems over multiple periods. PyPSA includes models for conventional generators with unit commitment, variable renewable generation, storage units, coupling to other energy sectors, and mixed alternating and direct current networks. It is designed to be easily extensible and to scale well with large networks and long time series. In this paper the basic functionality of PyPSA is described, including the formulation of the full power flow equations and the multi-period optimisation of operation and investment with linear power flow equations. PyPSA is positioned in the existing free software landscape as a bridge between traditional power flow analysis tools for steady-state analysis and full multi-period energy system models. The functionality is demonstrated on two open datasets of the transmission system in Germany (based on SciGRID) and Europe (based on GridKit).
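The joint optimisation of investment and operation that PyPSA performs can be illustrated by a minimal single-bus linear program. This is a sketch using scipy rather than PyPSA itself, with made-up costs and loads; it only demonstrates the structure of co-optimising capacity and dispatch.

```python
import numpy as np
from scipy.optimize import linprog

# Joint investment + operation LP in the spirit of PyPSA's LOPF
# (single bus, no grid; all numbers are illustrative assumptions)
load = np.array([60.0, 100.0, 80.0])   # demand per snapshot [MW]
c_cap, c_op = 500.0, 10.0              # annualised capital / marginal cost

T = len(load)
# decision vector x = [p_nom, g_1 .. g_T]
c = np.r_[c_cap, np.full(T, c_op)]
# dispatch limited by built capacity: g_t - p_nom <= 0
A_ub = np.hstack([-np.ones((T, 1)), np.eye(T)])
b_ub = np.zeros(T)
# energy balance per snapshot: g_t = load_t
A_eq = np.hstack([np.zeros((T, 1)), np.eye(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=load, method="highs")
p_nom_opt = res.x[0]   # optimal capacity equals the peak load here
```

PyPSA adds to this skeleton the network constraints (linear power flow), unit commitment, storage, and sector coupling described in the abstract.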
Variable renewable energy sources (VRES), such as solar photovoltaic (PV) and wind turbines (WT), are starting to play a significant role in several energy systems around the globe. To overcome the problem of their non-dispatchable and stochastic nature, several approaches have been proposed so far. This paper describes a novel mathematical model for scheduling the operation of a wind-powered pumped-storage hydroelectricity (PSH) hybrid for 25 to 48 h ahead. The model is based on mathematical programming and wind speed forecasts for the next 1 to 24 h, along with predicted upper reservoir occupancy for the 24th hour ahead. The results indicate that by coupling a 2-MW conventional wind turbine with a PSH of energy storage capacity equal to 54 MWh it is possible to significantly reduce the intraday energy generation coefficient of variation, from 31% for the pure wind turbine to 1.15% for the wind-powered PSH. The scheduling errors calculated based on mean absolute percentage error (MAPE) are significantly smaller for such a coupling than those seen for wind generation forecasts, at 2.39% and 27%, respectively. This is emphasized even more strongly by the fact that those for wind generation were calculated for forecasts made for the next 1 to 24 h, while those for scheduled generation were calculated for forecasts made for the next 25 to 48 h. The results clearly show that the proposed scheduling approach ensures the high reliability of the WT–PSH energy source.
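The two headline metrics of the study, the coefficient of variation of generation and the MAPE of a schedule, are straightforward to compute. A short sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

# Coefficient of variation (CV): dispersion of a generation series
# relative to its mean, in percent
gen = np.array([1.2, 0.8, 1.5, 0.5, 1.0])   # hypothetical MW series
cv = gen.std() / gen.mean() * 100

# Mean absolute percentage error (MAPE) of a schedule vs. actuals
actual   = np.array([1.0, 2.0, 4.0])
forecast = np.array([1.1, 1.8, 4.2])
mape = np.mean(np.abs(actual - forecast) / actual) * 100
```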
The use of decentralised, sustainable urban drainage systems (SUDS) for the treatment of stormwater runoff is becoming increasingly prevalent in Germany. Decentralised SUDS can offer a viable and attractive alternative to end-of-pipe treatment systems for stormwater runoff from urban areas. However, there is still some uncertainty regarding the long-term performance of SUDS and the general legislative requirements for SUDS approval and testing. Whilst the allowable pollution level in stormwater runoff that infiltrates into the ground and/or groundwater is regulated across Germany by the Federal Soil Protection Law, there is presently no federal law addressing the discharge requirements for surface water runoff. The lack of clear guidance can make it difficult for planners and designers to implement these innovative and sustainable stormwater treatment systems. This study clarifies the current understanding of urban stormwater treatment requirements and the new technical approval guidelines for decentralised SUDS devices in Germany. The study findings should assist researchers, designers and asset managers to better anticipate and understand the performance, effective life-spans, and the planning and maintenance requirements of decentralised SUDS. This should help promote even greater use of these systems in the future.
Building on a literature review, the current state of technical development in the recovery of phosphate and nitrogen compounds from domestic wastewater is outlined: besides (chemical) recovery from wastewater, the use of anaerobic processes, and recovery from sewage sludge, irrigation with wastewater, composting, and the fractionation of wastewater ("yellow water") are further options for making better use of its nutrient content. The resulting overview of the current state of nutrient recovery served to identify possible development tasks that appear urgent on the one hand (especially for solving global problems, e.g. ending resource scarcity) and whose solution, on the other hand, requires particularly innovative contributions. The development tasks were condensed into theses so that they could subsequently be examined in a Delphi survey.
Building on a literature review, the current state of technical development in greywater recycling is outlined. Besides mechanical-biological plants, membrane filtration plants and also "low-tech" systems appear in isolated cases. The overview helped to identify possible development tasks that appear urgent on the one hand (especially for solving future water-quantity problems) and whose solution, on the other hand, requires particularly innovative contributions. The development tasks were condensed into theses so that they could subsequently be examined in a Delphi survey.
Building on a literature review, the current state of technical development in energy recovery from municipal wastewater is outlined. Besides heat recovery, which is possible both in the sewer network and decentrally in buildings, biogas production both at aerobic treatment plants and in anaerobic plants and the subsequent upgrading of the sewage gases to natural-gas quality were discussed, as was the use of sludge as fuel. The presentation of the current state of development helped to identify possible development tasks that, on the one hand, could urgently allow wastewater to be regarded as an energy resource in the future and whose solution, on the other hand, requires particularly innovative contributions. The development tasks were condensed into theses so that they could subsequently be examined in a Delphi survey.
Nanotechnology is regarded as one of the key technologies of the future: reducing particle size to the nanoscale leads to novel physical and chemical material properties that promise innovation potential in a wide range of application fields. Particularly over the last two decades, nanotechnology has gained economic importance, as more and more nanotechnological developments are being commercialised. Owing to the broad range of applications and the large number of different materials, neither a transparent account of its actual economic importance nor an adequate assessment of the potential health and environmental risks that could arise from the novel nanoscale properties has so far been possible.
The paper provides an up-to-date overview of the state of knowledge on nanotechnology, with particular focus on risk, toxicology and ecotoxicology as well as risk perception and communication. The results of the literature study are intended to help assess what contribution a social-ecological research approach can make to the sustainable development and use of nanotechnology.
Unique publication on the worldwide distribution of industrial robots, based on company reports: about 40 country reports for 2007-2012, broken down by application area, industrial branch, type of robot, and other technical and economic variables. Data on production, exports and imports; trends in robot densities, i.e. the number of robots per 10,000 persons employed in the relevant sectors; forecast for 2013-2016. Special features: case studies on the profitability of robot investments.
Perchlorinated polysilanes were synthesized by polymerization of tetrachlorosilane under cold plasma conditions with hydrogen as a reducing agent. Subsequent selective cleavage of the resulting polymer yielded oligochlorosilanes SinCl2n+2 (n = 2, 3), from which the octachlorotrisilane (n = 3, Cl8Si3, OCTS) was used as a novel precursor for the synthesis of single-crystalline Si nanowires (NWs) by the well-established vapor–liquid–solid (VLS) mechanism. By adding doping agents, specifically BBr3 and PCl3, we achieved highly p- and n-type doped Si-NWs by means of atmospheric-pressure chemical vapor deposition (APCVD). These as-grown NWs were investigated by means of scanning electron microscopy (SEM) and transmission electron microscopy (TEM), as well as electrical measurements of the NWs integrated in four-terminal and back-gated MOSFET modules. The intrinsic NWs appeared to be highly crystalline, with a preferred growth direction of [111] and a specific resistivity of ρ = 6 kΩ·cm. The doped NWs appeared to be [112] oriented, with a specific resistivity of ρ = 198 mΩ·cm for p-type Si-NWs and ρ = 2.7 mΩ·cm for n-doped Si-NWs, revealing excellent dopant activation.
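Specific resistivities like those quoted above follow from a four-terminal resistance via the standard relation ρ = R·A/L for a cylindrical wire. A small sketch with hypothetical nanowire dimensions (not the measured values of this work):

```python
import math

# Convert a four-terminal resistance to specific resistivity via
# rho = R * A / L for a cylindrical nanowire (hypothetical numbers)
R = 1.0e6     # measured resistance [ohm]
d = 100e-9    # nanowire diameter [m]
L = 2.0e-6    # probed length between voltage contacts [m]

A = math.pi * (d / 2) ** 2      # cross-sectional area [m^2]
rho_ohm_m = R * A / L           # resistivity [ohm * m]
rho_ohm_cm = rho_ohm_m * 100    # conventional unit [ohm * cm]
```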
We present experimental results and theoretical simulations of the adsorption behavior of the metal–organic precursor Co2(CO)8 on SiO2 surfaces after application of two different pretreatment steps, namely by air plasma cleaning or a focused electron beam pre-irradiation. We observe a spontaneous dissociation of the precursor molecules as well as autodeposition of cobalt on the pretreated SiO2 surfaces. We also find that the differences in metal content and relative stability of these deposits depend on the pretreatment conditions of the substrate. Transport measurements of these deposits are also presented. We are led to assume that the degree of passivation of the SiO2 surface by hydroxyl groups is an important controlling factor in the dissociation process. Our calculations of various slab settings, using dispersion-corrected density functional theory, support this assumption. We observe physisorption of the precursor molecule on a fully hydroxylated SiO2 surface (untreated surface) and chemisorption on a partially hydroxylated SiO2 surface (pretreated surface) with a spontaneous dissociation of the precursor molecule. In view of these calculations, we discuss the origin of this dissociation and the subsequent autocatalysis.
The biological effects of energetic heavy ions are attracting increasing interest for their applications in cancer therapy and protection against space radiation. The cascade of events leading to cell death or late effects starts from stochastic energy deposition on the nanometer scale and the corresponding lesions in biological molecules, primarily DNA. We have developed experimental techniques to visualize DNA nanolesions induced by heavy ions. Nanolesions appear in cells as “streaks” which can be visualized by using different DNA repair markers. We have studied the kinetics of repair of these “streaks” also with respect to the chromatin conformation. Initial steps in the modeling of the energy deposition patterns at the micrometer and nanometer scale were made with MCHIT and TRAX models, respectively.
Development of chromium(VI)-free defect etching solutions for application on silicon substrates
(2008)
The following pages, an academic inaugural lecture in expanded form, are intended first of all to give younger engineers an overview of the fields of hydraulic engineering, to enable them not to lose sight of the whole while studying the individual subjects. Perhaps this small work can also contribute, in wider circles, to somewhat deepening the usually rather superficial knowledge of technical questions in the field of hydraulic engineering.
Rapport fait a l'Académie des Sciences, sur la machine aérostatique, inventée par MM. de Montgolfier
(1784)
Über Luftschifffahrt
(1894)
To explore the depot function of swellable clay minerals for organic environmental chemicals and the possible displacement of these chemicals by biogenic surfactants, kinetic investigations were carried out using batch experiments. First, the adsorption and desorption behaviour of selected environmental chemicals on mineral solid phases was examined, followed by the displacement of these chemicals by biogenic surfactants. The environmental chemicals used in the experiments were di-(n-butyl) phthalate (DBP) and di-(2-ethylhexyl) phthalate (DEHP), which are used on an industrial scale mainly as plasticisers in plastics, and five selected polycyclic aromatic hydrocarbons (PAHs), which are formed in pyrolytic processes and during the incomplete combustion of organic material. In the test series, a smectite-rich bentonite, quartz sand, mixtures of these two materials with different weight fractions of the bentonite and sand phases, and sea sand served as adsorbents for the environmental chemicals. These variations were intended to illustrate the different behaviour of the various solid phases with respect to the three processes investigated (adsorption, desorption and exchange). Investigations of the bentonite used showed that its main constituent was a calcium montmorillonite. Montmorillonite is a swellable, dioctahedral clay mineral of the smectite group. The swellability of this smectite was established in swelling tests with ethylene glycol and glycerol by X-ray diffraction. The chemical composition of the mineral was analysed by X-ray fluorescence measurements. Using the Greene-Kelly test, montmorillonite was identified as the smectitic fraction of the bentonite. In each test series, three processes were investigated in the laboratory with each sample, one after the other:
1. Adsorption of the environmental chemicals (phthalates and PAHs) onto sand samples with different clay contents and onto pure clay samples. 2. Desorption of the adsorbed environmental chemicals from the sand/clay mixtures and clay samples in four steps. 3. Exchange of these chemicals from the sand/clay mixtures and clay samples against biogenic surfactants. In the first step of the batch experiments, the two phthalates or the PAHs (naphthalene, acenaphthene, fluorene, phenanthrene and fluoranthene) were adsorbed from aqueous solution onto the mineral solid phases. The phthalates were used in a 1:1 ratio, the five PAHs either as a mixture or individually. For PAH adsorption, a water-acetone mixture was also used, since this considerably improved their solubility and made the kinetic series much more uniform with respect to the attainment of equilibrium. The samples were shaken in an overhead mixer for 20 hours until equilibrium was reached. The solid phases were then separated from the aqueous phases and used further to determine the desorption equilibrium. The aqueous phases were extracted with organic solvents and the content of environmental chemicals was quantified by gas chromatography. The remaining solid phases were each shaken four times with fresh distilled water for 20 hours to determine the desorption equilibrium; after separation of the aqueous phases, these were analysed for their organic content as described above. These four desorption steps were followed by the displacement experiment of a test series: saponified, long-chain biogenic surfactants (alcoholates and carboxylic acid salts with an even number of carbon atoms) were added to each sample, and each solid phase was shaken again with fresh water in the overhead mixer.
This step was intended to test whether the phthalates and PAHs remaining in the solid phases would, after the addition of biogenic surfactants, be recovered in the aqueous phase to a greater extent than expected from the respective desorption equilibrium. From the results, adsorption isotherms (for the phthalates only) could be recorded, and statements could be made about the attainment of the desorption equilibrium or its disturbance after the exchange experiments. Evaluation of the adsorption experiments showed that solid phases containing bentonite are able to adsorb a higher proportion of phthalates and PAHs than pure sand samples. At low phthalate concentrations, DEHP was adsorbed better than DBP because of its stronger affinity to the solid phase. As the phthalate additions increased, DBP was adsorbed to a greater extent than DEHP. This was made possible by the better incorporation of the DBP molecules into the intracrystalline interlayers of the montmorillonite mineral (intercalation). X-ray diffraction showed a clearly increased interlayer spacing in the montmorillonite compared with its original state (up to 18 Å versus 15.3 Å). The desorption isotherms frequently showed irregular behaviour for solid phases containing quartz sand: an unexpectedly large amount of phthalates was often found in the aqueous solution in the second and third desorption steps. Pure bentonite samples, in contrast, showed a uniform decrease in phthalate concentration after each desorption step. The bentonite used was able to retain phthalates against desorption more strongly than quartz sand, and the desorption equilibrium was reached faster with pure bentonite than with sand samples or sand-bentonite mixtures. In exchange experiments in which the initial amount of phthalates was below 1 mg, no displacement processes were observed. As the phthalate amounts increased (up to about 200 mg), the greater surface coverage of the montmorillonite led to displacement of the phthalates by biogenic surfactants. After the exchange experiment, extraction of the aqueous solution yielded a larger amount of phthalates than had been expected from the desorption experiments. Overall, more DBP than DEHP was found in the aqueous solution after the exchange experiments. Since DBP was incorporated into the montmorillonite interlayers better than DEHP, this finding could likewise be explained by biogenic surfactants displacing the phthalates from the intracrystalline interlayers. For the PAHs, displacement processes were observed only in the case of phenanthrene. For the other PAHs used in the experiments (mainly naphthalene, acenaphthene and fluorene), the vapour pressure was evidently so high that not enough organic material remained adsorbed in the soil sample before the exchange experiment. In parallel experiments with pure quartz sand and with sea sand as solid phase, by contrast, no substantial disturbance of the desorption equilibrium of the order of that in the bentonite-containing samples was found after the displacement experiment, for either phthalates or PAHs. This indicates that displacement processes take place preferentially on the surfaces of clay minerals. Overall, this work showed that equilibria of environmental chemicals on clay minerals can be disturbed by biogenic surfactants: under their action, desorption of the environmental chemicals from the clay minerals is enhanced.
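Sorption data of the kind described above are commonly parameterised with an isotherm model. A minimal sketch fitting the Freundlich model q = K_F · c^(1/n) to hypothetical data (the actual isotherm model and data of the thesis are not given here):

```python
import numpy as np

# Hypothetical equilibrium data: solution concentration c vs. sorbed
# amount q; the Freundlich model q = K_F * c**(1/n) is one common way
# to describe phthalate sorption isotherms on clays.
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # mg/L in solution
q = np.array([1.1, 1.5, 2.1, 2.9, 4.0])   # mg/g on the solid phase

# Linearised fit: log q = log K_F + (1/n) * log c
slope, intercept = np.polyfit(np.log10(c), np.log10(q), 1)
K_F = 10 ** intercept
n = 1 / slope
```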
The focus of this study was Celtic gold coins excavated from the Martberg, a Celtic oppidum and sanctuary occupied in the first century B.C. by a Celtic tribe known as the Treveri. These coins, and a number of associated coinages, were characterised in terms of their alloy compositions and their geochemical and isotopic signatures so as to answer archaeological and numismatic questions of coinage development and metal sources. This required the development of analytical methods involving electron microprobe analysis (EPMA), laser ablation ICP-MS, solution multicollector ICP-MS (MC-ICP-MS) and LA-MC-ICP-MS. The alloy compositions (Au-Ag-Cu-Sn) were determined by EPMA on a small polished area on the edge of the coins. A large beam size, 50 µm in diameter, was used to overcome the extreme heterogeneity of these alloys. These analyses were shown to be representative of the bulk composition of the coins. The metallurgical development of the coinages was defined and showed that the earlier coinages followed a debasement trend, which was superseded by a trend of increasing copper at the expense of silver while gold compositions remained stable. This change occurred with the appearance of the inscribed "POTTINA" coinage, Scheers 30/V. Two typologically different coinages, Scheers 16 and 18 ("Armorican types"), were found to have markedly different compositions which do not fit into the trends described above. A flan (blank) for a gold coin, which may indicate the presence of a mint at the Martberg, was found to have an identical weight and composition to the Scheers 30/I coins, which preceded the majority of the coins found at the Martberg in the coin development chronology. The trace element analyses were made by laser ablation ICP-MS, using an Aridus desolvating nebuliser to introduce matrix-matched solution standards to calibrate the measurements, which were then normalised to 100%.
Quantitative results were obtained for the following elements: Sc, Ti, Cr, Mn, Co, Ni, Cu, Zn, Se, Ru, Rh, Pd, Ag, Sb, Te, W, Ir, Pt, Pb, Bi. The remaining elements remain problematic, as they produced incorrect standardisations mainly due to chemical effects in solution such as adsorption onto the beaker walls or oxidation: V, Fe, Ga, Ge, As, Mo, Sn, Re, Os, Hg. Changes in the sources of Au, Ag and Cu were observed during the development of the coinages through the variation of trace elements, which correlate positively with the major components of the coin alloys. Changes in the Pt/Au ratios show that the Scheers 23 coins contain distinctly different gold from the later coinages and that the Scheers 18 gold source was also different. Te/Ag was used to show that the Sch. 23 coins also contained different silver, and some subgroups were observed in the Sch. 30/V coins. A major change in copper source is indicated by the sudden increase of Sb and Ni with the introduction of the Sch. 30/V coins (POTTINA), which can be linked to a similar change in copper observed in the contemporary silver coinage, Sch. 55 (with a ring). Lead isotopic analyses were made by solution and laser ablation MC-ICP-MS. The laser technique proved to be in good agreement with the solution analyses, with precisions between 1 and 0.1‰ (per mil). The development of the laser method opens the way for easy and virtually non-destructive Pb isotopic determinations of ancient gold coins. The results showed that Sch. 23 is very different from the following coinages; Sch. 16 and 18 are also different, forming their own group; and all the later "Eye" staters (Sch. 30/I-VI) lie on a mixing line controlled by the addition of copper from a Mediterranean source, probably Sardinia or Spain. An indication of gold and silver sources should be possible with further analyses of the Sch. 23 and Rainbow Cup gold coins and the Sch. 54 and 55 silver coinages.
Copper isotopic analyses were made by solution and laser ablation MC-ICP-MS. Both techniques require further development to produce more reproducible results. The results show that there appears to be a trend to more positive δ65Cu values for the later coinages, and that the link between the copper used in the Sch. 30/V (POTTINA) coins and the silver Sch. 55 (with a ring) coins is also shown by similarly positive δ65Cu values. The full suite of analyses was also made on samples of gold from the region. They were mostly composed of placer gold, i.e. alluvial gold found in rivers. It was found that when a study is restricted to a limited number of deposits or areas, it is possible to distinguish between deposits based on the concentrations of those elements which are least affected by transport-related alteration processes. These elements include the PGEs, due to their refractory nature, and those elements which are usually present in high enough concentrations to remain relatively unaffected, e.g. Cu, Pb and Sb. Due to the nature of the coin alloy it is not possible to link the gold used in the coins studied here with gold deposits, as the large amounts of Ag and Cu added to the coin alloys have masked the Au signature. However, further Pb isotopic analyses of gold deposits should prove useful in determining from which regions Celtic gold was derived.
This work investigated to what extent motion estimation from monocular image sequences of road traffic scenes, and obstacle detection built on it, can be realised with statistical or neural methods. The underlying mathematical model assumes that the environment in which a vehicle moves is essentially planar, which holds to a good approximation for traffic sequences. The first part of this work presented and discussed a statistical method for motion estimation. Its first step is the generation of a so-called salience image in which object edges and object corners are visually emphasised. For the resulting list of salient image regions, correspondences in the temporally following image are then determined using a so-called displacement-vector estimation. Starting from the resulting displacement vector field, the next step of the method determines the motion parameters, i.e. the rotation matrix and the translation vector of the vehicle or camera. Finally, to realise obstacle detection, the image data are motion-compensated using the motion parameters: with the estimated motion parameters and the model of the moving plane, each image pixel is transformed back, so that when a difference image is formed between the motion-compensated image and the actually recorded image, three-dimensional structures, which violate the plane model, stand out clearly and thus indicate potential obstacles. It turned out that erroneous measurements of the displacement vectors, which can arise for example from periodic structures on the plane, severely disturb the motion estimation and the obstacle detection.
These statistical outliers mean that, despite the use of robust estimation methods, stable obstacle detection can only be achieved by incorporating prior knowledge about the type of vehicle motion. Furthermore, the complexity of the method and the associated high demands on the computing power of the hardware used mean that the real-time capability so important for practical applicability could so far only be achieved for input images of low resolution. For image processing in particular, the new paradigm of cellular neural networks (CNN) has proven extraordinarily powerful. Besides the extremely high processing speed of CNN-based circuit realisations, they are distinguished by high robustness to permuted or erroneous input data. A suitable CNN has so far been determined for almost every current image processing problem. CNN programs have also already been implemented and realised in hardware for complex image processing tasks such as texture classification, lane tracking or the extraction of depth information. Accordingly, the second part of this work showed that the individual steps of obstacle detection from monocular image sequences can likewise be realised using CNN. It was demonstrated that a standard CNN with a linear coupling function and neighbourhood r=1 can already be used to generate a salience image. The computationally expensive statistical procedure for computing the salience image can thus be carried out in a single CNN processing step.
Furthermore, this work showed that the next computationally intensive step of the statistical obstacle detection method, namely the displacement-vector estimation, can also be realised with CNN. This requires CNN with polynomial coupling functions and neighbourhood r=1. The investigations showed that the CNN-based processing steps are clearly superior to the statistical methods in terms of robustness and processing speed. Finally, this work showed that CNN even allow direct obstacle detection from monocular image sequences, without the detour via the determination of the displacement vectors and the motion parameters. In the proposed method, after two preprocessing steps, obstacle detection is carried out in a single step using a CNN with polynomial cell coupling weights of degree D=3 and neighbourhood r=2. The proposed method leads to a substantial simplification of obstacle detection in monocular image sequences, since the motion estimation of the statistical method is no longer necessary. Circumventing explicit motion estimation has the further advantage that the computational effort is greatly reduced, and by eliminating the displacement-vector estimation and its associated outlier problem, the presented CNN-based method is also very robust. The first results, obtained with synthetic and natural image sequences, are highly promising and show that CNN are excellently suited to processing video sequences.
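The difference-image idea at the heart of the statistical pipeline can be sketched in a few lines. In this toy numpy example the plane-induced image motion is reduced to a pure 2-pixel shift, a simplifying assumption standing in for the full planar homography induced by the estimated rotation and translation; everything below is synthetic and illustrative only.

```python
import numpy as np

# Simulate a textured ground plane seen from a moving camera
rng = np.random.default_rng(0)
plane = rng.random((40, 40))
frame0 = plane
frame1 = np.roll(plane, 2, axis=1)      # ego-motion shifts the plane image
frame1[10:14, 20:24] = 5.0              # an obstacle violates the plane model

# Motion compensation: warp the previous frame with the (known) plane motion
compensated = np.roll(frame0, 2, axis=1)

# Difference image: residuals flag 3-D structure, i.e. potential obstacles
residual = np.abs(frame1 - compensated)
obstacle_mask = residual > 0.5
```

On real sequences the plane motion must of course be estimated (rotation matrix and translation vector), which is exactly the step the CNN-based direct method of this work avoids.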