Poster presentation: Our work deals with the self-organization [1] of a memory structure that comprises multiple hierarchical levels with massive recurrent communication within and between them. Such a structure must provide a representational basis for the relevant objects to be stored and recalled rapidly and efficiently. Assuming that the object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the formation of this structure and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits that receive common afferents and are bound by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, with all types of synapses (bottom-up, lateral and top-down) being plastic. During iterative learning, the lower layer of the system is exposed to Gabor filter banks extracted from local points on the face images. Although it faces an unsupervised learning problem, the system is able to develop a synaptic structure capturing local features and their relations at the lower level, as well as the global identity of the person at the higher level of processing, gradually improving its recognition performance with learning time. ...
Poster presentation: Introduction We study the problem of object recognition invariant to transformations such as translation, rotation and scale. A system is underdetermined if its degrees of freedom (the number of possible transformations and potential objects) exceed the available information (the image size). Regularization theory solves this problem by adding constraints [1], but it is unclear which constraints biological systems use. We suggest that, rather than seeking constraints, an underdetermined system can make decisions based on the available information by grouping its variables. We propose a dynamical system as a minimal system for invariant recognition to demonstrate this strategy. ...
Poster presentation: Introduction Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regularly firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (such as pharmacological treatment or genetic manipulation) on these neuronal discharge patterns may be subtle. Applying a stochastic model provides a quantitative approach to revealing such changes. ...
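The abstract above does not specify the stochastic model used, but the general idea of quantifying discharge regularity can be sketched with a moment-matched Gamma renewal model of interspike intervals (ISIs); the spike trains and parameters below are simulated placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def isi_stats(spike_times):
    """Summarize a spike train by its interspike-interval statistics and a
    moment-matched Gamma renewal model (shape k, scale theta)."""
    isi = np.diff(np.sort(spike_times))
    mean = isi.mean()
    cv = isi.std() / mean      # coefficient of variation: ~0 pacemaker, ~1 Poisson-like
    k = 1.0 / cv**2            # Gamma shape from moment matching
    theta = mean / k           # Gamma scale
    return mean, cv, k, theta

# A regular "pacemaker" train versus an irregular Poisson-like train
regular = np.cumsum(rng.gamma(shape=20.0, scale=0.005, size=500))
irregular = np.cumsum(rng.exponential(scale=0.1, size=500))
```

Comparing the fitted shape parameter or CV across experimental conditions is one quantitative way to detect subtle shifts between pacemaker-like and irregular firing.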
NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events
(2009)
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including the changes in the rate responses and the history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data, we compare the performance of NeuroXidence with the UE-method [2,3] and cross-correlation analysis. An application of NeuroXidence to 42 single-units (SU) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexities, involving firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexities showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and therefore support the temporal coding hypothesis in visual cortex. ...
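The NeuroXidence algorithm itself is not reproduced here, but the generic logic of surrogate-based synchrony testing (compare the observed coincidence count against surrogates in which one train is jittered, destroying fine-timescale correlations while keeping the slow rate structure) can be sketched as follows; all spike trains and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def coincidences(a, b, width):
    """Count spikes in train a that have a partner in train b within +/- width."""
    return sum(np.any(np.abs(b - t) <= width) for t in a)

def jitter_test(a, b, width=0.005, jitter=0.02, n_surr=200):
    """Compare the observed coincidence count with a jitter-surrogate
    distribution and return the count plus a one-sided p-value."""
    obs = coincidences(a, b, width)
    surr = [coincidences(a, b + rng.uniform(-jitter, jitter, b.size), width)
            for _ in range(n_surr)]
    p = (1 + sum(s >= obs for s in surr)) / (1 + n_surr)
    return obs, p

# Hypothetical 10 s recordings: two units share injected near-synchronous
# events on top of independent background spiking
common = np.sort(rng.uniform(0, 10, 40))
a = np.sort(np.concatenate([common, rng.uniform(0, 10, 60)]))
b = np.sort(np.concatenate([common + rng.normal(0, 0.001, common.size),
                            rng.uniform(0, 10, 60)]))
```

Unlike this minimal sketch, the actual method additionally handles trial-by-trial rate variability and higher-order (more than pairwise) joint-spike patterns.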
Poster presentation: Introduction We focus here on constructing a hierarchical neural system for position-invariant recognition, one of the most fundamental forms of invariant recognition achieved in visual processing [1,2]. Invariant recognition has been hypothesized to be accomplished by matching the sensory image of a particular object projected onto the retina to the most suitable representation stored in memory in higher visual cortical areas. A general problem arises: in such visual processing, the position of the object image on the retina is initially uncertain. Furthermore, the retinal activities carrying the sensory information differ greatly from those in the higher areas, where part of the sensory object information is lost. Nevertheless, despite this recognition ambiguity, a particular object can be recognized effortlessly and easily. Our aim in this work is to resolve this general recognition problem. ...
Poster presentation: Introduction We address here the problem of integrating information about multiple objects and their positions in a visual scene. The primate visual system has little difficulty in rapidly achieving such integration, given only a few objects; unfortunately, computer vision still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism for handling information about objects and their positions in parallel. Elaborating on this idea, we propose a neurally plausible mechanism for linking local decisions about "what" and "where" information to global multi-object recognition. ...
We model the dynamics of ask and bid curves in a limit order book market using a dynamic semiparametric factor model. The shape of the curves is captured by a factor structure which is estimated nonparametrically. Corresponding factor loadings are assumed to follow multivariate dynamics and are modelled using a vector autoregressive model. Applying the framework to four stocks traded at the Australian Stock Exchange (ASX) in 2002, we show that the suggested model captures the spatial and temporal dependencies of the limit order book. Relating the shape of the curves to variables reflecting the current state of the market, we show that the recent liquidity demand has the strongest impact. In an extensive forecasting analysis we show that the model is successful in forecasting the liquidity supply over various time horizons during a trading day. Moreover, it is shown that the model’s forecasting power can be used to improve optimal order execution strategies.
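The two-step structure described in this abstract (nonparametric estimation of factor shapes, then a vector autoregression for the factor loadings) can be sketched on simulated data. In the sketch below, PCA stands in for the nonparametric estimation step, and all dimensions, dynamics, and noise levels are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated order-book depth curves: T snapshots over a grid of price levels
T, grid, k = 300, 25, 2
x = np.linspace(0, 1, grid)
true_factors = np.vstack([np.ones(grid), x])        # level and slope shapes
load = np.zeros((T, k))
for t in range(1, T):                               # persistent loading dynamics
    load[t] = 0.9 * load[t - 1] + rng.normal(0, 0.1, k)
curves = load @ true_factors + rng.normal(0, 0.01, (T, grid))

# Step 1: estimate the factor shapes (PCA stands in for the nonparametric step)
mean_curve = curves.mean(axis=0)
u, s, vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
factors = vt[:k]                                    # estimated curve shapes
scores = (curves - mean_curve) @ factors.T          # estimated factor loadings

# Step 2: VAR(1) for the loadings, fitted by least squares
Y, X = scores[1:], scores[:-1]
A = np.linalg.lstsq(X, Y, rcond=None)[0].T          # transition matrix
forecast = scores[-1] @ A.T                         # one-step-ahead loadings
next_curve = mean_curve + forecast @ factors        # forecast depth curve
```

Forecasting the low-dimensional loadings and mapping them back through the factor shapes is what makes curve-valued liquidity supply tractable over various horizons.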
A generic drug product (World Health Organization (WHO) terminology: multisource product) is usually marketed and manufactured after the expiry date of the innovator’s patent. Generic drugs are less expensive than the innovator products because generic manufacturers do not have to amortize the investment costs of research, development, marketing, and promotion. Multisource products must contain the same active pharmaceutical ingredients (APIs) as the original formulation and have to be shown to be interchangeable with it. Multisource products have to be shown bioequivalent to the innovator counterpart with respect to pharmacokinetic and pharmacodynamic properties; they are therefore identical in dose, strength, route of administration, safety, efficacy, and intended use. Bioequivalence can be demonstrated by in vitro dissolution, pharmacokinetic, pharmacodynamic or clinical studies. Since 2000, the U.S. Food and Drug Administration (FDA) has allowed the approval of certain multisource products solely on the basis of in vitro studies, i.e. by waiving in vivo studies in humans (“Biowaiver”), based on the Biopharmaceutics Classification Scheme (BCS). The BCS characterizes APIs by their solubility and permeability in the gastrointestinal tract (GIT). The four BCS classes I-IV (Class I: high solubility, high permeability; Class II: low solubility, high permeability; Class III: high solubility, low permeability; Class IV: low solubility, low permeability) result from all possible combinations of high and low solubility with high and low permeability. Since the adoption of the BCS by the FDA in 1995, the BCS criteria have been under continuous development. In 2006, the WHO released the most recent bioequivalence guidance, including relaxed criteria for bioequivalence studies based on modified BCS criteria.
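The mapping from the two BCS criteria to the four classes is mechanical and can be stated as a small lookup (the drug examples in the comments follow the classifications reported in this work):

```python
def bcs_class(high_solubility: bool, high_permeability: bool) -> str:
    """Map the two BCS criteria onto classes I-IV."""
    table = {
        (True, True): "I",     # high solubility, high permeability
        (False, True): "II",   # low solubility, high permeability (e.g. rifampicin)
        (True, False): "III",  # high solubility, low permeability (e.g. pyrazinamide)
        (False, False): "IV",  # low solubility, low permeability
    }
    return table[(high_solubility, high_permeability)]
```

Borderline cases such as ethambutol dihydrochloride and isoniazid (class I/III in this work) show that the binary criteria are thresholds applied to continuous measurements, so the classification itself can be uncertain near the cut-offs.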
According to this guidance, APIs belonging to BCS class I, and under defined conditions classes II and III, are eligible for a biowaiver-based approval. The principal objective of this work was to characterize the first-line antituberculosis APIs isoniazid, pyrazinamide, ethambutol dihydrochloride and rifampicin according to their physicochemical, biopharmaceutical, pharmacokinetic and pharmacological properties and to classify them according to the BCS. Ethambutol dihydrochloride and isoniazid were classified as borderline BCS class I/III APIs, pyrazinamide as a BCS class III API and rifampicin as a BCS class II API. Based on the BCS classification and the additional criteria defined in the WHO bioequivalence guidance, the possibility of biowaiver-based approval for immediate release solid oral dosage forms containing the first-line antituberculosis drugs was evaluated. A biowaiver-based approval with defined constraints was recommended for immediate release solid oral dosage forms containing isoniazid (interaction with reducing sugars), pyrazinamide and ethambutol dihydrochloride (relatively narrow therapeutic index). Since rifampicin was classified as a BCS class II API, it was concluded that rifampicin-containing solid oral immediate release drug products, as well as Scale-Up and Post-Approval Changes (SUPAC), should not be approved by a biowaiver, on the following basis: (i) its solubility and dissolution are highly variable due to polymorphism and instability; (ii) concomitant intake of food and antacids reduces its absorption and bioavailability; (iii) no in vitro dissolution test has been found that is predictive of in vivo absorption; and (iv) several publications reporting cases of non-bioequivalent rifampicin products have been located in the literature.
Thus, it is recommended that bioequivalence of rifampicin-containing solid oral immediate release drug products be established by in vivo pharmacokinetic studies in humans. This risk-benefit assessment of a biowaiver-based approval was presented as a poster at the American Association of Pharmaceutical Scientists (AAPS) meeting in 2005 and subsequently published as “Biowaiver Monographs” in the Journal of Pharmaceutical Sciences. Based on the assessment of the dissolution properties of the antituberculosis drugs for a biowaiver approval, quality control dissolution methodologies for the International Pharmacopoeia (Pharm. Int.) were developed, presented at the WHO expert meeting and adopted in the Pharm. Int. (http://www.who.int/medicines/publications/pharmprep/OMS_TRS_948.pdf). Additionally, preliminary biowaiver recommendations were developed for four first-line antimalarial drugs listed on the WHO Essential Medicines List (EML). Quinine, as both the hydrochloride and the sulphate, and proguanil hydrochloride were classified as borderline BCS class I/III APIs. Since quinine is a narrow therapeutic index drug and many cases of non-bioequivalence have been reported in the literature, a biowaiver-based approval was not recommended. For solid oral immediate release dosage forms containing proguanil, a biowaiver-based approval was recommended under the condition that they dissolve very rapidly. Primaquine phosphate was classified as a BCS class I API; a biowaiver-based approval was therefore recommended for immediate release solid oral dosage forms containing it. Mefloquine hydrochloride was classified as a basic BCS class IV/II API, making it ineligible for the biowaiver; additionally, reports of non-bioequivalence and a narrow therapeutic index were found in the scientific literature.
Consequently, bioequivalence of solid oral immediate release dosage forms containing mefloquine hydrochloride should be established by in vivo pharmacokinetic studies. The results for quinine hydrochloride and sulphate, proguanil hydrochloride, primaquine diphosphate and mefloquine hydrochloride were presented as a poster at the Pharmaceutical Sciences World Congress (PSWC) 2007 and published as a WHO Collaborating Center Report in June 2006. The aim of this project was to collect, evaluate, generate and publish relevant information for a biowaiver-based approval of essential medicines in order to provide a summary to local regulatory authorities. This information complements the list of essential medicines by providing information about the biopharmaceutical properties and pharmaceutical quality of solid oral immediate release dosage forms containing these APIs. The aim of the biowaiver project, inspired by the WHO and brought to life by the International Pharmaceutical Federation (FIP), is to enable access to essential medicines of standardized quality at an affordable price. This work makes a significant contribution to that aim in the form of four biowaiver monographs for the antituberculosis drugs and several reports on the antimalarials.
The aim of this work was to examine how children construct mental representations when reading texts. The starting point for its conception was Kintsch's construction-integration model, one of the most widely received models of text comprehension. A central aspect of this model is the assumption that text material is stored simultaneously on three hierarchically distinct levels of mental representation: a surface representation, which captures the exact wording and structure of a text; a propositional representation, which conveys the meaning contained in the text; and finally the deepest level of processing, the situation model, in which the text information is linked with relevant world knowledge. Despite the model's broad acceptance and its importance in text comprehension research, including research in school contexts, statements about differential effects are available only to a very limited extent. Initial evidence of developmental differences, as well as of differences depending on characteristics of the reader or of the text itself, exists but requires extension and renewed testing in order to arrive at a stable and coherent picture of interindividual differences. The present work investigated three questions. The first concerned a developmental change in the relative use of the individual levels. The second comprised assumed effects of the passage of time on the strength of the representations, as well as the possibility of influencing these changes through retention-promoting instruction. The third concerned the effect of a selection of person-related variables on the strength of the representation levels.
Overall, the questions were tested with two different text types, a narrative text and an expository text, in order to uncover differences arising from the processing of different genres. The questions were examined in a main study; two preliminary studies (pre-study 1: N = 56; pre-study 2: N = 133) served to develop the materials and to explore initial relationships. A total of 418 pupils in the third, fourth and fifth grades took part in the main study. The results showed an overall preference for the situational representation, with only minor age-related changes. A surface representation could be inferred from the results only for a subsample of the fourth-graders. As expected, the pupils generally found it easier to build a situation model for the narrative text than for the expository text. This advantage remained stable over time intervals of 20 minutes and three days, whereas an expected change within the levels did not emerge. In the short term, the children benefited from rereading when building up all levels while working on the expository text. Besides text type, the children's vocabulary emerged as a predictor of the strength of the situational level. General reading comprehension showed positive associations with the propositional level of processing.
In the present study, a sensitive detection method for HPV in biopsies was first established. Detection from swabs is also possible, but care must be taken to obtain sufficient cell material; brush swabs are suitable for this purpose, whereas cotton swabs should be avoided because their cell yield is too low (see 4.1). Furthermore, the prevalence of HPV in tonsillitis, tonsillar carcinoma and clinically unremarkable tonsils was compared in our patient population. Owing to the small patient numbers, no statistical statement is possible, but HPV could be detected in each of the groups. Comparing the evaluable gel electrophoresis results, 33% of the tumor samples were HPV-positive, 60% of the clinically unremarkable tonsils, and almost 70% of the tonsillitis cases. These results show that HPV is detectable not only in tumors but already in clinically unremarkable tissue. They also suggest that HPV infection probably occurs as early as childhood.
The principal function of the human sebaceous gland is the secretion of sebum. Increased sebum flow combined with disturbed keratinization of the sebaceous duct can contribute to the clinical picture of acne vulgaris. Peroxisome proliferator-activated receptors (PPARs) are known as mediators of lipid metabolism in the human organism, and PPAR ligands are already in clinical use. Since PPARs are also expressed in human sebocytes and demonstrably influence lipogenesis, a possible use in acne therapy is conceivable. A peculiarity arising from the holocrine mode of secretion of the sebaceous gland is that sebocyte lipogenesis is linked to terminal differentiation and resembles apoptosis, programmed cell death, in many respects. In the present doctoral thesis, SZ95 sebocytes undergoing lipogenesis were visualized in vitro by staining with the lipid dye Nile Red. Furthermore, using a histone-directed ELISA for the detection of DNA fragments, it was shown for the first time that PPAR ligands are able to inhibit, in a concentration-dependent manner, both basal apoptosis and apoptosis induced by the apoptosis inducer staurosporine in SZ95 sebocytes; the PPAR-δ ligand L-165,041 was the most potent in this respect. In addition, Western blotting showed for the first time that the PPAR-δ ligand L-165,041 signals via the kinases Akt, ERK1/2 and p38 in SZ95 sebocytes. Inhibition of Akt and ERK1/2 attenuated the basal antiapoptotic effects of L-165,041, whereas inhibition of p38 enhanced them. Co-incubation of Akt or ERK1/2 inhibitors with the PPAR-δ ligand L-165,041 sensitized the SZ95 sebocytes to staurosporine-induced apoptosis.
These results support the assumption that PPAR ligands, in particular ligands of PPAR-δ, could have a therapeutically beneficial effect on acne vulgaris.
Background: Microarray analysis remains a powerful tool to identify new components of the transcriptome, and it has helped to increase our knowledge of targets triggered by stress conditions such as hypoxia and nitric oxide. However, the analysis of transcriptional regulatory events remains elusive because altered mRNA stability and changes in mRNA half-life contribute to gene expression patterns by influencing mRNA expression levels and turnover rates. To circumvent these problems, we focused on the analysis of newly transcribed (nascent) mRNAs by nuclear run-on (NRO), followed by microarray analysis. Results: We identified 188 genes that were significantly regulated by hypoxia, 81 genes affected by nitric oxide (NO) and 292 genes induced by the co-treatment of macrophages with both NO and hypoxia. Fourteen genes (Bnip3, Ddit4, Vegfa, Trib3, Atf3, Cdkn1a, Scd1, D4Ertd765e, Sesn2, Son, Nnt, Lst1, Hps6 and Fxyd5) were common to the hypoxia and/or nitric oxide treatments, but with different levels of expression. We observed that 166 transcripts were regulated only when cells were co-treated with hypoxia and NO, but not with either treatment alone, pointing to the importance of a crosstalk between hypoxia and NO. In addition, both array and proteomics data supported a consistent repression of hypoxia-regulated targets by NO. Conclusion: By eliminating the interference of steady-state mRNA in gene expression profiling, we increased the sensitivity of mRNA analysis and identified previously unknown hypoxia-induced targets. Gene expression profiling corroborated the interplay between NO- and hypoxia-induced signalling.
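The breakdown into treatment-specific versus shared targets reported above follows from simple set arithmetic over the regulated gene lists. The sketch below uses a few gene symbols from the abstract plus placeholder names (g1, g2, ...) purely for illustration; it mirrors the logic, not the actual gene lists:

```python
# Hypothetical regulated-gene sets (placeholders, not the study's full lists)
hypoxia = {"Bnip3", "Ddit4", "Vegfa", "g1", "g2"}
nitric_oxide = {"Atf3", "Trib3", "g2", "g3"}
co_treatment = {"Bnip3", "Atf3", "g4", "g5", "g6"}

# Genes regulated only under co-treatment, by neither single treatment alone
co_specific = co_treatment - (hypoxia | nitric_oxide)

# Genes shared between the co-treatment and at least one single treatment
shared = co_treatment & (hypoxia | nitric_oxide)
```

Applied to the study's lists, the first expression is the operation that yields the 166 co-treatment-specific transcripts.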
Photo-initiated processes, such as photo-excitation and de-excitation, internal conversion, excitation energy transfer and electron transfer, are important in many areas of physics, chemistry and biology. Detailed knowledge of the excitation energies, potential energy surfaces and excited-state properties of the molecules involved is an essential prerequisite for understanding such processes, and quantum chemical calculations are required to obtain this information. Several quantum chemical methods exist that allow for the calculation of excited states. Most of them are computationally so costly that they are applicable only to small molecules, whereas many biological systems in which photo-processes are of interest, such as the light-harvesting complexes of photosynthesis or the rhodopsin-based reception of light in the human eye, are quite large. For large systems, only a few theoretical methods remain applicable. The most widely used at present is time-dependent density functional theory (TD-DFT), which can treat systems of up to 200-300 atoms, with the excitation energies of some excited states exhibiting errors of less than 0.5 eV. Yet TD-DFT has several drawbacks. Its most severe failure is the incorrect description of charge transfer states, which is particularly problematic for larger systems, where it yields a multitude of artificially low-lying charge transfer states. Rydberg states and states with large double-excitation character are also not described correctly. Still, if these deficiencies are kept in mind when interpreting results, TD-DFT is a useful tool for the calculation of excited states. In my thesis, TD-DFT is applied in investigations of excitation energy and electron transfer processes in light-harvesting complexes.
Since light-harvesting complexes, which consist of thousands of atoms, are by far too large to be calculated as a whole, model complexes for the processes of interest are constructed from available crystal structures. These model complexes are used to calculate potential energy curves along meaningful reaction coordinates; artificial charge transfer states are corrected with the help of the so-called ΔDFT method, and the resulting potential energy curves are interpreted by comparison with experimental results. For the light-harvesting complex LH2 of purple bacteria, the experimentally observed formation of carotenoid radical cations is studied. It is shown that the carotenoid radical cation is most likely formed via the optically forbidden S1 state of the carotenoid. In the light-harvesting complex LHC-II of green plants, the fast component of the so-called non-photochemical quenching (NPQ) is investigated. Two of several hypotheses on the mechanism of NPQ that have been proposed recently are studied in detail. The first suggests that NPQ proceeds via the simple replacement of violaxanthin by zeaxanthin in the binding pocket of LHC-II. However, the calculated potential energy curves exhibit no difference between violaxanthin and zeaxanthin in the binding pocket. In combination with experimental results, it is thus shown that simple replacement alone does not mediate NPQ in LHC-II. The second hypothesis proposes conformational changes of LHC-II that lead to quenching at the central lutein and chlorophyll molecules during NPQ. My TD-DFT calculations demonstrate that if this mechanism is operative, only lutein 1, one of the two central luteins present in LHC-II, can take part in the quenching process. This is corroborated by recent experiments. Although several conclusions can be drawn from the investigations using TD-DFT, the interpretability of the results is limited by the deficiencies of the method and of the models.
To overcome the methodological deficiencies, more accurate methods have to be employed. Therefore, the so-called algebraic diagrammatic construction scheme (ADC) is implemented. ADC is a widely overlooked ab initio method for the calculation of excited states that is based on propagator theory. Its theoretical derivation proceeds via a perturbation expansion of the polarization propagator, which describes electronic excitations; this yields a separate scheme for every order of perturbation theory. The second-order scheme ADC(2), which is employed here, is the excited-state counterpart of the Møller-Plesset ground-state method MP2. It represents the computationally cheapest excited-state method that can correctly describe doubly excited states as well as Rydberg and charge transfer states. The quality of ADC(2) results is demonstrated in calculations on linear polyenes, which serve as model systems for the larger carotenoid molecules. The calculations show that ADC(2) describes the three lowest excited states of polyenes sufficiently well, in particular the optically forbidden S1 state, which is known to possess large double-excitation character. Yet the applicability of the method is limited compared to TD-DFT because of its much larger computational requirements. To facilitate the calculation of larger systems with ADC(2), a new variant of the method is developed and implemented. This variant exploits the short-range behavior of electron correlation to reduce the computational effort. As a first step, the working equations of ADC(2) are transformed into a basis of local orbitals. In this basis, negligible contributions to the equations arising from electron correlation can be identified based on the distances between local orbitals. A so-called “bumping” scheme is implemented that removes these negligible parts during a calculation. In this way, both the computation times and the disk space requirements are reduced.
The “bumping” scheme introduces several new parameters that regulate the amount of “bumping” and thereby the speed and accuracy of the computations. To determine useful values for these parameters, an evaluation is performed using the linear polyene octatetraene as a test molecule. From this evaluation an optimal set of parameter values is obtained, such that the computation times become minimal while the errors in the excitation energies due to the “bumping” do not exceed 0.15 eV. Further calculations on various molecules of different sizes test whether these parameter values are universal, i.e. whether they can be used for all molecules. The test calculations show that the errors in the excitation energies are below 0.15 eV for all test systems, and the magnitude of the errors shows no trend of depending on the system. In contrast, the fraction of disregarded contributions increases drastically with growing system size. Thus, the local variant of ADC(2) can be used in the future to reliably calculate excited states of systems that are not accessible with conventional ADC(2).
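The essence of the “bumping” idea (discarding correlation contributions between distant local orbitals and accepting a small, controlled error in exchange for large savings) can be illustrated on synthetic data. The decay constant, cutoff and geometry below are arbitrary choices for the illustration, not ADC(2) quantities:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical local-orbital centers and a pair quantity that decays with
# distance, mimicking the short-range character of electron correlation
n = 40
centers = rng.uniform(0, 10, (n, 3))
dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
pair = np.exp(-1.5 * dist) * rng.uniform(0.5, 1.0, (n, n))

def bumped_sum(pair, dist, cutoff):
    """'Bumping': discard pair contributions beyond a distance cutoff.
    Returns the truncated sum and the fraction of contributions dropped."""
    kept = dist <= cutoff
    return pair[kept].sum(), 1.0 - kept.mean()

full = pair.sum()                                  # untruncated reference
approx, dropped = bumped_sum(pair, dist, cutoff=4.0)
rel_err = abs(full - approx) / full
```

Because the contributions decay rapidly with distance, a large fraction of pairs can be dropped at a small relative error, and the savings grow with system size, just as reported for the local ADC(2) variant.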
I. INTRODUCTION
II. PROPOSAL OF THE COMMERCIAL LAW SECTION TO THE 67th DEUTSCHER JURISTENTAG: 1. Presentation and definition of terms 2. Rationale
III. SIGNIFICANCE OF OVER-THE-COUNTER TRADING IN GERMANY
IV. COMPARATIVE EXAMINATION OF STOCK CORPORATION AND CAPITAL MARKET LAW: 1. Germany a) Organization of the capital market b) Differentiation within stock corporation law 2. United Kingdom a) Organization of the capital market b) Differentiations in the “Companies Act 2006” 3. USA a) Sources of corporate and capital market law b) Organization of the capital market c) Corporate law
V. ASSESSMENT: 1. Linking the existing rules to capital market orientation 2. Blurring of the boundaries between stock corporation and capital market law 3. Risk of abuse through self-determined choice of statutory rigidity (Satzungsstrenge) 4. Previous reform approaches in the German literature 5. The turn away from differentiation within stock corporation law in the current reform debate 6. Economic analysis of stock corporation law (“opt-in model”)
VI. CONCLUSION: The deregulation approach that provides for a differentiation between listed and unlisted stock corporations is not to be endorsed. Against the background of the comparative examination of the United Kingdom and the USA, a capital-market-oriented differentiation of the investor protection provisions of stock corporation law appears preferable instead. Linking deregulation measures to the criterion of capital market orientation can already be found, in essence, in current German law: both stock corporation law and capital market law contain correspondingly differentiating rules. Moreover, current national legislative projects and developments in European company law also show tendencies toward a distinction based on the criterion of distance from, or openness to, the capital market.
The narrow scope of the mandatory investor protection rules of stock corporation law, which apply only to listed stock corporations, also harbors considerable risks of abuse: stock corporations could move into over-the-counter trading in order to benefit from deregulation and from lower transparency and investor protection requirements. Finally, the preference for a capital-market-oriented differentiation also follows from the current debate on reform approaches intended to increase the competitiveness of German company and capital market law. The abolition of statutory rigidity (Satzungsstrenge) demanded in this context, combined with the simultaneous codification of corresponding information and investor protection duties in capital market law, would make it possible to build on the existing differentiations in capital market law.
This paper explores the relationship between equity prices and the current account for 17 industrialized countries in the period 1980-2007. Based on a panel vector autoregression, I compare the effects of equity price shocks to those originating from monetary policy and exchange rates. While monetary policy shocks have a limited impact, shocks to equity prices have sizeable effects. The results suggest that equity prices impact on the current account through their effects on real activity and exchange rates. Furthermore, shocks to exchange rates play a key role as well. Keywords: current account fluctuations, equity prices, panel vector autoregression
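The estimation approach can be sketched with simulated data: a pooled panel VAR(1) with country fixed effects, estimated by least squares after a within transformation. The panel dimensions (17 countries, three variables standing in for equity prices, the current account and the exchange rate) mirror the paper's setup, but the data-generating process, coefficients and estimator details below are illustrative assumptions, not the author's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced panel: N countries, T periods, K variables.
N, T, K = 17, 28, 3
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.2, 0.4, 0.1],
                   [0.0, 0.1, 0.3]])   # assumed VAR(1) coefficient matrix

data = np.zeros((N, T, K))
for i in range(N):
    fe = rng.normal(size=K)            # country fixed effect
    for t in range(1, T):
        data[i, t] = fe + data[i, t - 1] @ A_true.T + rng.normal(scale=0.1, size=K)

# Pooled panel VAR(1): within-demean each country's series to absorb
# the fixed effects, then stack countries and estimate A by least squares.
Y, X = [], []
for i in range(N):
    d = data[i] - data[i].mean(axis=0)  # within transformation
    Y.append(d[1:])
    X.append(d[:-1])
Y, X = np.vstack(Y), np.vstack(X)

A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # K x K coefficient estimate
print(np.round(A_hat, 2))
```

Note that the within estimator carries a small Nickell bias with a lagged dependent variable; for T near 28 it is modest, and bias-corrected GMM estimators are the usual remedy in shorter panels.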
The risk of deflation
(2009)
This paper was prepared for the meeting on Financial Regulation and Macroeconomic Stability: Key issues for the G20, organised by the CEPR and the Reinventing Bretton Woods Committee, London, 31 January 2009. Introduction: The onset of financial instability in August 2007, which quickly spread across the world, raises a number of questions for policy makers. First, what are the roots of the crisis? Many factors have been emphasized in the debate, including the opacity of complex financial products; the excessive confidence in ratings; weak risk management by financial institutions; massive reliance on wholesale funding; and the presumption that markets would always be liquid. Furthermore, poorly understood incentive effects – arising from the originate-to-distribute model, remuneration policies and the period of low interest rates – are also widely seen as having played a role. Second, how can a repetition of the crisis be avoided? Much attention is being focused on regulation and supervision of financial intermediaries. The G-20, at its summit in November 2008, noted that measures need to be taken in five areas: (i) financial market transparency and disclosure by firms need to be strengthened; (ii) regulation needs to be enhanced to ensure that all financial markets, products and participants are regulated or subject to oversight, as appropriate; (iii) the integrity of financial markets should be improved by bolstering investor and consumer protection, avoiding conflicts of interest, and by promoting information sharing; (iv) international cooperation among regulators must be enhanced; and (v) international financial institutions must be reformed to better reflect changing economic weights in the world economy, in order to increase the legitimacy and effectiveness of these institutions. Third, how can the consequences for economic activity be minimized?
Many of the adverse developments in financial markets – in particular the collapse of term interbank markets – reflect deeply entrenched perceptions of counterparty risk. Prompt and far-reaching action to support the financial system, in particular the infusion of equity capital in financial institutions to reduce counterparty risk and get credit to flow again, is essential in order to restore market functioning. A particular risk at present is that the rapid decline in inflation in many countries in recent months will turn into deflation with highly adverse real economic developments. This background paper considers how large the risk of deflation may be and discusses what policy can do to reduce it. It is organized as follows. Section 2 defines deflation and discusses downward nominal wage rigidities and the zero lower bound on interest rates. While these factors are frequently seen as two reasons why deflation can be associated with very poor economic outcomes, they should not be overemphasized. Section 3 looks at the current situation. Inflation expectations and forecasts in the subset of economies we look at (the euro area, the UK and the US) are positive, indicating that deflation is not expected. This does not imply that the current concerns about deflation are unwarranted, only that the public expects the central bank to be successful in avoiding deflation. The section also looks at the evolution of headline and “core” inflation, focusing on data from the US and the euro area. Section 4 reviews how monetary and fiscal policy can be conducted to ensure that deflation is avoided. Section 5 briefly discusses special issues arising in emerging market economies. Finally, Section 6 offers some conclusions. An Appendix discusses deflation episodes in the period 1882-1939.
This paper examines the sustainability of the currency board arrangements in Argentina and Hong Kong. We employ a Markov switching model with two regimes to infer the exchange rate pressure due to economic fundamentals and market expectations. The empirical results suggest that economic fundamentals and expectations are key determinants of a currency board’s sustainability. We also show that the government’s credibility played a more important role in Argentina than in Hong Kong. The trade surplus, real exchange rate and inflation rate were more important drivers of the sustainability of the Hong Kong currency board.
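The regime-inference step of such a model can be sketched with a Hamilton filter, which recursively computes filtered regime probabilities from a transition matrix and regime-conditional densities. The two-regime parameters and the simulated "pressure" series below are assumed purely for illustration; the paper estimates its model from data rather than fixing parameters this way.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a two-regime series: a tranquil regime (low mean, low variance)
# and a crisis regime (high mean, high variance), with persistent states.
n = 400
P = np.array([[0.95, 0.05],       # row i: transition probs out of regime i
              [0.05, 0.95]])
mus, sigmas = np.array([0.0, 3.0]), np.array([0.5, 2.0])
state = np.zeros(n, dtype=int)
for t in range(1, n):
    state[t] = state[t - 1] if rng.random() < 0.95 else 1 - state[t - 1]
y = rng.normal(mus[state], sigmas[state])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def hamilton_filter(y, P, mus, sigmas):
    """Filtered regime probabilities for a Gaussian Markov switching model."""
    k = len(mus)
    probs = np.zeros((len(y), k))
    prior = np.full(k, 1.0 / k)           # flat initial regime distribution
    for t, yt in enumerate(y):
        pred = P.T @ prior                # one-step-ahead regime probabilities
        post = pred * normal_pdf(yt, mus, sigmas)
        probs[t] = post / post.sum()      # Bayes update with the new observation
        prior = probs[t]
    return probs

probs = hamilton_filter(y, P, mus, sigmas)
accuracy = ((probs[:, 1] > 0.5) == (state == 1)).mean()
print(round(accuracy, 3))
```

In practice the parameters would be estimated jointly, e.g. with statsmodels' `MarkovRegression`, which wraps this filter inside a maximum-likelihood routine.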
Contents: Prof. Dr. Helmut Siekmann: Statement for the public hearing of the Committee for Economics, Small and Medium-Sized Businesses and Energy and of the Budget and Finance Committee of the Landtag of North Rhine-Westphalia. No aid for banks without a new regulatory framework for the financial markets, Statement 14/2328. Motion of the parliamentary group Bündnis 90/Die Grünen: No aid for banks without a new regulatory framework for financial markets, Drucksache 14/7680. List of questions for the hearing of experts on 4 February 2009 on the motion of the parliamentary group Bündnis 90/Die Grünen. Tableau. Hearing of experts: 57th session of the Committee for Economics, Small and Medium-Sized Businesses and Energy, 85th session of the Budget and Finance Committee, Wednesday, 4 February 2009.
Over the last four decades, the literature on bond rating changes and their effects on security prices has grown significantly, yet almost no study controls for the reason behind a rating action. We therefore investigate the impact of rating events on the stock and credit default swap (CDS) markets, incorporating rating reviews and rating changes together with the reason stated by the rating agency. Our results for the general effects are in line with prior findings, but conditioning on the stated reason shows that the markets' anticipation of rating actions is largely driven by events due to changes in firms' operating performance. Furthermore, we provide empirical evidence for the hypothesis in the prior literature that a surprise downgrade need not be bad news for stockholders when wealth is transferred from bondholders, whereas negative rating actions are always bad news for bondholders. The results additionally reveal that rating announcement effects increase as firms' credit quality declines, for both rating reviews and rating changes. JEL Classification: D82, G14, G20. Keywords: Credit Default Swaps, Credit Ratings, Credit Rating Reasons, Event Study.
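The event-study machinery behind such results can be sketched minimally: a market model estimated over a clean pre-event window, abnormal returns (AR) over a short event window, and their cumulative sum (CAR). The firm, its beta, the event day and the size of the downgrade reaction below are all hypothetical stand-ins, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily returns for the market and one hypothetical firm
# with beta = 1.2 and a -2% abnormal reaction on the downgrade day.
T, event_day = 250, 200
mkt = rng.normal(0.0003, 0.01, T)
firm = 1.2 * mkt + rng.normal(0.0, 0.002, T)
firm[event_day] += -0.02                    # injected rating-event reaction

# Market model (alpha, beta) estimated on a pre-event window.
est = slice(0, 150)
beta, alpha = np.polyfit(mkt[est], firm[est], 1)

# Abnormal returns and cumulative abnormal return over a five-day
# event window [-2, +2] around the announcement.
window = slice(event_day - 2, event_day + 3)
ar = firm[window] - (alpha + beta * mkt[window])
car = ar.sum()
print(round(car, 4))
```

The same AR/CAR logic applies to CDS spread changes, with spread changes replacing returns and an index of spreads replacing the market return.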
Micropump effect at dynamically loaded implant-abutment connections : an in vitro study
(2009)
Implant-prosthetic therapy continues to evolve, and research into efficient implant-abutment connections (IAC) is steadily gaining importance. Several in vitro studies have shown that a microgap forms between the prosthetic platform of the implant and the base of the abutment, and a clear relationship has been established between the design of the implant-abutment connection and the size of this microgap. Microgap formation, together with a persistent inflammatory infiltrate at the level of the implant-abutment interface, is regarded as a trigger for crestal bone resorption; the exact mechanisms, however, have not yet been elucidated. This raises the question of whether micromobility, and thus a cyclically opening and closing microgap, allows saliva to be drawn into the interior of the implant. To address this question, the following hypothesis was formulated: opening of the gap under applied masticatory load changes the volume of the cavities inside the implant body. This increase in volume creates a negative pressure that sucks in the surrounding saliva. When the load is released, the gap closes and the intra-implant volume shrinks again, allowing fluid to flow back toward the peri-implant tissue. Renewed loading, and thus the repeated opening and closing of the gap, restarts the presumed mechanism. Because these processes resemble the operation of a pump, the term "micropump effect at the implant-abutment connection" was coined.
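The hypothesized suction mechanism can be made concrete with a back-of-the-envelope calculation. Treating the intra-implant cavity as air-filled and the volume change as isothermal, Boyle's law gives the transient negative pressure available to draw in saliva. The cavity volume and the volume change below are purely illustrative assumptions, not measurements from the study.

```python
# Illustrative Boyle's-law estimate of the suction ("micropump") pressure.
# All numbers are assumed for illustration, not taken from the study.
V1 = 1.0                    # mm^3: assumed cavity volume with the gap closed
dV = 0.05                   # mm^3: assumed volume increase when the gap opens
p1 = 101.325                # kPa: ambient pressure
p2 = p1 * V1 / (V1 + dV)    # isothermal expansion of the trapped air
dp = p1 - p2                # transient negative pressure driving saliva inflow
print(round(dp, 2), "kPa")
```

Even a few percent of volume change yields a pressure difference of several kilopascals, which is consistent with the idea that cyclic gap opening could pump fluid across the interface.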
To examine whether such an effect exists at the implant-abutment connection, six commercially available implant systems were included in the present study. The systems are classified by their connection geometry into conical connections and butt-joint connections. Tested were three conical implant-abutment connections (Ankylos®, Astra Tech®, Bego®, ITI Straumann® Bone Level) and three butt joints (Camlog®, SICace®, Xive® S plus). Five test specimens were fabricated per system, each simulating an implant-supported molar crown in the maxilla. The mucosa surrounding the implant-abutment connection was imitated with a polyether impression material. Inside this artificial mucosa, a fluid port was created and a saliva-like radiographic contrast medium, developed specifically for this purpose, was introduced. While the specimens were loaded in a two-dimensional chewing simulator, a constant, diverging X-ray beam passed through them. By converting the X-radiation into visible light, radiographic videos could be recorded with a high-speed digital camera; their evaluation was intended to reveal whether a micropump effect arises at the implant-abutment interface. The videos were assessed independently by three examiners with practical implant-prosthetic experience. It emerged that no micropump effect could be demonstrated at the implant-abutment complex for any of the conical connections tested, whereas such an effect became visible for all of the butt-joint connections included. Particularly notable was the interplay between the visible micropump effect and a microgap at the implant-abutment interface caused by micromobility.
A micropump effect could only be triggered when a visible microgap between the implant and the abutment had been detected beforehand; the micropump effect can therefore be regarded as a direct consequence of the microgap at the implant-abutment interface. A further factor in the occurrence of the described fluid flow at the implant-abutment connection was the applied force. While the SICace® system showed a microgap and a micropump effect from a load of 50 N onward, for the Camlog® implant system this could only be demonstrated at an applied force of 75 N, and the Xive® S plus system showed a micropump effect only from a load of 125 N. The design of the implant-abutment complex and the magnitude of the applied masticatory load therefore appear to play a decisive role in whether a micropump effect occurs or not. In light of the current state of scientific knowledge and the results of this in vitro study, the explanatory model in which the micropump effect is one cause of crestal bone resorption appears plausible. Changes in the peri-implant tissue, in particular bone loss, are a pathological sign that can progress to implant loss. Also with regard to restorations in esthetically critical regions of the jaw and to esthetically demanding patients, preserving the crestal bone as the basis for stable soft-tissue conditions increases the likelihood of a successful soft-tissue esthetic outcome. Findings from this and further basic studies should be duly taken into account in the design and further development of the implant-abutment complex.