This paper presents an LTAG analysis of reflexives like himself and reciprocals like each other. These items need to find a c-commanding antecedent from which they retrieve (part of) their own denotation and with which they syntactically agree. The relation between anaphoric item and antecedent must satisfy well-known locality conditions (Chomsky 1981).
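The c-command requirement on the antecedent can be sketched as a small tree check. This is a minimal illustration, not from the paper: the toy tree, node names, and the simplified definition used here (a c-commands b iff a's parent dominates b and a does not dominate b) are assumptions for the example.

```python
# Toy constituency tree for "John admires himself" (invented node names).
TREE = {
    "S": ["NP_John", "VP"],
    "VP": ["V_admires", "NP_himself"],
    "NP_John": [], "V_admires": [], "NP_himself": [],
}

def dominates(tree, a, b):
    """True iff node a properly dominates node b."""
    return any(c == b or dominates(tree, c, b) for c in tree[a])

def parent_of(tree, n):
    """Return the parent of n, or None for the root."""
    return next((p for p, cs in tree.items() if n in cs), None)

def c_commands(tree, a, b):
    """Simplified c-command: a's parent dominates b, but a itself does not."""
    p = parent_of(tree, a)
    return (p is not None and a != b
            and dominates(tree, p, b) and not dominates(tree, a, b))

# The subject c-commands the reflexive, but not vice versa.
print(c_commands(TREE, "NP_John", "NP_himself"))  # True
print(c_commands(TREE, "NP_himself", "NP_John"))  # False
```

On this definition the reflexive can take the subject as antecedent, while the reverse pairing is correctly ruled out.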
Relative quantifier scope in German, in contrast to English, depends heavily on word order. The scope possibilities of a quantifier are determined by its surface position, its base position and the type of the quantifier. In this paper we propose a multicomponent analysis for German quantifiers that computes the scope of a quantifier, in particular its minimal nuclear scope, depending on the syntactic configuration it occurs in.
This paper investigates the relation between TT-MCTAG, a formalism used in computational linguistics, and RCG. RCGs are known to describe exactly the class PTIME; simple RCGs have even been shown to be equivalent to linear context-free rewriting systems, i.e., to be mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, it is NP-complete. In this paper, we put an additional limitation on the derivations licensed in TT-MCTAG. We show that TT-MCTAG with this additional limitation can be transformed into equivalent simple RCGs. This result is interesting for theoretical reasons (since it shows that TT-MCTAG in this limited form is mildly context-sensitive) and, furthermore, for practical reasons: we use the proposed transformation from TT-MCTAG to RCG in an actual parser that we have implemented.
This paper sets up a framework for LTAG (Lexicalized Tree Adjoining Grammar) semantics that brings together ideas from different recent approaches addressing shortcomings of TAG semantics based on the derivation tree. Within this framework, several sample analyses are proposed, and it is shown that the framework makes it possible to analyze data that have been claimed to be problematic for derivation-tree-based LTAG semantics approaches.
LTAG semantics for questions
(2004)
This paper presents a compositional semantic analysis of interrogative clauses in LTAG (Lexicalized Tree Adjoining Grammar) that captures the scopal properties of wh- and non-wh-quantificational elements. It is shown that the present approach derives the correct semantics for examples claimed to be problematic for LTAG semantic approaches based on the derivation tree. The paper further provides an LTAG semantics for embedded interrogatives.
Our paper aims to capture the distribution of negative polarity items (NPIs) within Lexicalized Tree Adjoining Grammar (LTAG). The condition under which an NPI can occur in a sentence is that it be in the scope of a negation with no quantifier scopally intervening. We model this restriction within a recent framework for LTAG semantics based on semantic unification. The proposed analysis provides features that signal the presence of a negation in the semantics and that specify its scope. We extend our analysis to model the interaction of NPI licensing and neg-raising constructions.
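The licensing condition itself is easy to state procedurally. A minimal sketch on invented data, not the paper's unification-based mechanism: a reading is represented as a list of operators from widest to narrowest scope ("neg" for negation, "Q:..." for a quantifier, "npi" for the polarity item).

```python
def npi_licensed(reading):
    """Licensed iff, walking up from the NPI, a negation is found
    before any quantifier (no quantifier scopally intervenes)."""
    i = reading.index("npi")
    for op in reversed(reading[:i]):
        if op == "neg":
            return True        # closest outscoping operator is negation
        if op.startswith("Q:"):
            return False       # a quantifier intervenes: blocked
    return False               # no licensing negation at all

print(npi_licensed(["neg", "npi"]))             # True
print(npi_licensed(["neg", "Q:every", "npi"]))  # False: quantifier intervenes
print(npi_licensed(["Q:every", "neg", "npi"]))  # True: quantifier outscopes neg
```

A quantifier above the licensing negation is harmless; only one between negation and NPI blocks licensing, matching the condition stated in the abstract.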
This paper addresses the problem of constraints for relative quantifier scope, in particular in inverse linking readings where certain scope orders are excluded. We show how to account for such restrictions in the Tree Adjoining Grammar (TAG) framework by adopting a notion of flexible composition. In the semantics we use for TAG we introduce quantifier sets that group quantifiers that are "glued" together in the sense that no other quantifier can scopally intervene between them. The flexible composition approach allows us to obtain the desired quantifier sets and thereby the desired constraints for quantifier scope.
In this paper we will explore the similarities and differences between two feature logic-based approaches to the composition of semantic representations. The first approach is formulated for Lexicalized Tree Adjoining Grammar (LTAG, Joshi and Schabes 1997); the second is Lexical Resource Semantics (LRS, Richter and Sailer 2004) and was first defined in Head-driven Phrase Structure Grammar. The two frameworks have several common characteristics that make them easy to compare: 1. They use languages of two-sorted type theory for semantic representations. 2. They allow underspecification: LTAG uses scope constraints while LRS provides component-of constraints. 3. They use feature logics for computing semantic representations. 4. They are designed for computational applications. By comparing the two frameworks we will also point out some characteristics and advantages of feature logic-based semantic computation in general.
TT-MCTAG lets one abstract away from the relative order of co-complements in the final derived tree, which makes it more appropriate than classic TAG when dealing with the flexible word order of German. In this paper, we present analyses for sentential complements, i.e., wh-extraction, that-complementation and bridging, and we work out the crucial differences between these and the respective accounts in XTAG (for English) and V-TAG (for German).
In this paper we propose a compositional semantics for lexicalized tree-adjoining grammar (LTAG). Tree-local multicomponent derivations allow separation of the semantic contribution of a lexical item into one component contributing to the predicate argument structure and a second component contributing to scope semantics. Based on this idea a syntax-semantics interface is presented where the compositional semantics depends only on the derivation structure. It is shown that the derivation structure (and indirectly the locality of derivations) allows an appropriate amount of underspecification. This is illustrated by investigating underspecified representations for quantifier scope ambiguities and related phenomena such as adjunct scope and island constraints.
In this paper, we introduce an extension of the XMG system (eXtensible MetaGrammar) in order to allow for the description of Multi-Component Tree Adjoining Grammars. In particular, we introduce the XMG formalism and its implementation, and show how the latter makes it possible to extend the system relatively easily to different target formalisms, thus opening the way towards multi-formalism.
Developing linguistic resources, in particular grammars, is known to be a complex task in itself, because of (among other things) redundancy and consistency issues. Furthermore, some languages can prove hard to describe because of specific characteristics, e.g. the free word order of German. In this context, we present (i) a framework for describing tree-based grammars, and (ii) an actual fragment of a core multicomponent tree-adjoining grammar with tree tuples (TT-MCTAG) for German developed using this framework. This framework combines a metagrammar compiler and a parser based on range concatenation grammar (RCG) to check the consistency and the correctness of the grammar, respectively. The German grammar being developed within this framework already covers a wide range of scrambling and extraction phenomena.
The TUSNELDA standard: a corpus annotation standard in support of linguistic research
(2001)
The use of standards for the annotation of larger collections of electronic texts (corpora) is a prerequisite for any later reuse of these corpora. This article presents a corpus annotation standard that takes into account the requirements of investigating a wide range of linguistic phenomena. The standard was developed in SFB 441 at the University of Tübingen. It takes existing standards as its starting point, in particular CES and TEI, which prove to be partly too extensive and insufficiently restrictive, and partly not expressive enough, to meet the needs of corpus-based linguistic research.
This article investigates the relation between multicomponent tree-adjoining grammars with tree tuples (TT-MCTAG), a formalism used in computational linguistics, and range concatenation grammars (RCG). RCGs are known to describe exactly the class PTIME; it has moreover been shown that "simple" RCGs are even equivalent to linear context-free rewriting systems (LCFRS), in other words, they are mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, these languages are NP-complete. In this article, we define an additional constraint on the derivations licensed by the TT-MCTAG formalism. We then show how this restricted form of TT-MCTAG can be converted into an equivalent simple RCG. The result is interesting for theoretical reasons (since it shows that the restricted form of TT-MCTAG is mildly context-sensitive), but also for practical reasons (the transformation proposed here has been used to implement a parser for TT-MCTAG).
This paper compares two approaches to computational semantics, namely semantic unification in Lexicalized Tree Adjoining Grammars (LTAG) and Lexical Resource Semantics (LRS) in HPSG. There are striking similarities between the frameworks that make them comparable in many respects. We will exemplify the differences and similarities by looking at several phenomena. We will show, first of all, that many intuitions about the mechanisms of semantic computations can be implemented in similar ways in both frameworks. Secondly, we will identify some aspects in which the frameworks intrinsically differ due to more general differences between the approaches to formal grammar adopted by LTAG and HPSG.
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of (i) the string languages it generates (weak generative capacity, WGC) or (ii) the tree languages it generates (strong generative capacity, SGC). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic, since the sets of trees a grammar formalism for natural languages should be able to generate are difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate-argument dependencies as a criterion for adequacy for natural languages.
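The point of the string-plus-dependencies criterion can be illustrated with a toy example, not taken from the paper: the same surface string can carry different predicate-argument pairings (cross-serial as in Dutch/Swiss German verb clusters vs. nested as in German), and an adequate formalism must derive the right pairing, not merely the string.

```python
def argument_pairs(nouns, verbs, pattern):
    """Pair each noun with the verb it is an argument of,
    under two different dependency patterns over the same string."""
    if pattern == "cross-serial":   # noun_i is an argument of verb_i
        return list(zip(nouns, verbs))
    if pattern == "nested":         # noun_i is an argument of verb_(k+1-i)
        return list(zip(nouns, reversed(verbs)))
    raise ValueError(pattern)

# Same word order "n1 n2 v1 v2", two different dependency structures:
print(argument_pairs(["n1", "n2"], ["v1", "v2"], "cross-serial"))
# [('n1', 'v1'), ('n2', 'v2')]
print(argument_pairs(["n1", "n2"], ["v1", "v2"], "nested"))
# [('n1', 'v2'), ('n2', 'v1')]
```

A formalism that generates the string but only the nested pairing would pass a WGC test yet fail the proposed dependency-based criterion.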
In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.
Multicomponent Tree Adjoining Grammar (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. Looking only at the result of a derivation (i.e., the derived tree and the derivation tree), this simultaneity is no longer visible and therefore cannot be checked. That is, this way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. Therefore, in this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees the MCTAG licenses.
Multicomponent Tree Adjoining Grammar (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. This way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. In this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees (in the underlying TAG) the MCTAG licenses. This definition gives a better understanding of the formalism, allows a more systematic comparison of different types of MCTAG, and, furthermore, can be exploited for parsing.
Until well into the 1970s, the theory of language learning and teaching was a "master's doctrine" (Müller-Michaels 1980). Great role models of a people (e.g. Moses), heads of philosophical schools (e.g. Plato) or abbots of monasteries (e.g. Augustine), and finally state-certified senior school directors (e.g. Ulshöfer) described to their younger colleagues what had proven itself in language teaching over decades: how best to conduct language instruction (Müller 1922, Seidemann 1973, Ulshöfer 1968, Essen 1968). With the establishment of language didactics at the universities, the concept of "norm-setting action sciences" (Müller-Michaels 1980, Ivo 1975) was developed. The researcher (no longer credentialed as a master of practice) investigates the processes of language teaching and learning by collecting data in the "field" of the practitioner and subsequently subjecting the collected data to hypothesis testing. The school in particular is considered as the field of action. The research methods are predominantly "quasi-experimental". Following Chomsky's theory of language (Chomsky 1965), experimental approaches to the study of language acquisition, language acquisition disorders and the corresponding interventions were developed (de Villiers/de Villiers 1970, Hörmann 1978). The site of investigation is the laboratory. The design of this language didactics (or psycholinguistics, cognitive science, etc.) is experimental (e.g. Herrmann 2004). All three concepts stand in antagonistic opposition to one another in many respects. Keeping them apart, while also relating them to one another productively, is one of the basic skills of the "linguosomatic" professions and their underlying theory (for example, language teaching professions, phoniatrics, special education for speech and language impairment, psychosomatic speech therapies).
Therefore, the significant contrasts between the three concepts must be worked out and their conflicting consequences related to one another.
The present work reports two experiments on brain electric correlates of cognitive and emotional functions. (1) Studying paranormal belief, 35-channel resting EEG (10 believers and 13 skeptics) was analyzed with "Low Resolution Electromagnetic Tomography" (LORETA) in seven frequency bands. LORETA gravity centers of all bands shifted to the left in believers vs. skeptics, and showed that believers had stronger left fronto-temporo-parietal activity than skeptics. Self-rating of affective attitude showed believers to be less negative than skeptics. The observed EEG lateralization agreed with the 'valence hypothesis' that posits predominant left hemispheric processing for positive emotions. (2) Studying emotions, positive and negative emotion words were presented to 21 subjects while "Event-Related Potentials" (ERPs) were recorded. During word presentation (450 ms), 13 microstates (steps of information processing) were identified. Three microstates showed different potential maps for positive vs. negative words; LORETA functional imaging showed stronger activity in microstate #4 (106-122 ms) for positive words right anterior, for negative words left central; in #6 (138-166 ms) for positive words left anterior, for negative words left posterior; in #7 (166-198 ms) for positive words right anterior, for negative words right central. In conclusion: during word processing, the extraction of emotion content starts as early as 106 ms after stimulus onset; the brain identifies emotion content repeatedly in three separate, brief microstate epochs; and this processing of emotion content in the three microstates involves different brain mechanisms to represent the distinction positive vs. negative valence.
This paper examines the development of periphrastic constructions involving auxiliary "have" and "be" with a past participle in the history of English, on the basis of parsed electronic corpora. It is argued that the two constructions represented distinct syntactic and semantic structures: while the one with have developed into a true perfect in the course of Middle English, the one with be remained a stative resultative throughout its history. In this way, it is explained why the be construction was rarely or never used in a number of contexts, including past counterfactuals, iteratives, duratives, certain kinds of infinitives and various other utterance types that cannot be characterized as perfects of result. When the construction with have became a true perfect, it was used in such contexts, regardless of the identity of the main verb, leading to the appearance of have with verbs like come which had previously only taken be. Crucially, however, have was not spreading at the expense of be, as the be perfect had never been used in such contexts, but rather at the expense of the old simple past. At least until the end of the Early Modern English period, the shift in the relative frequency of have and be perfects is to be explained in terms of the expansion of the former into new contexts, while the latter remained stable. A formal analysis is proposed, taking as its starting point a comparison with German which shows that the older English be perfect indeed behaves more like the German stative passive than its haben and sein perfects.
In this paper, we will argue for a novel analysis of the auxiliary alternation in Early English, its development and subsequent loss which has broader consequences for the way that auxiliary selection is looked at cross-linguistically. We will present evidence that the choice of auxiliaries accompanying past participles in Early English differed in several significant respects from that in the familiar modern European languages. Specifically, while the construction with have became a full-fledged perfect by some time in the ME period, that with be was actually a stative resultative, which it remained until it was lost. We will show that this accounts for some otherwise surprising restrictions on the distribution of BE in Early English and allows a better understanding of the spread of HAVE through late ME and EModE. Perhaps more importantly, the Early English facts also provide insight into the genesis of the kind of auxiliary selection found in German, Dutch and Italian. Our analysis of them furthermore suggests a promising strategy for explaining cross-linguistic variation in auxiliary selection in terms of variation in the syntactico-semantic structure of the perfect. In this introductory section, we will first provide some background on the historical situation we will be discussing, then we will lay out the main claims for which we will be arguing in the paper.
In April 2002 the European Central Bank (ECB) and the Center for Financial Studies (CFS) launched the ECB-CFS Research Network to promote research on “Capital Markets and Financial Integration in Europe”. The ECB-CFS research network aims at stimulating top-level and policy-relevant research, significantly contributing to the understanding of the current and future structure and integration of the financial system in Europe and its international linkages with the United States and Japan. This report summarises the work done under the network after two years. Over time the network formed a coherent and growing group of researchers interested in the integration of European financial markets, while using light organisational structures and budgets. The members of this evolving group met repeatedly at the events organised by the network to present the latest results of their research and to share views on policy options. In this sense, the “network of people” intended at the start was created. Overall, the network aroused great interest, as leading academic researchers, researchers from the main policy institutions and high-level policy makers participated actively in it by presenting research results, through speeches and in policy panels. It also stimulated a new research field on securities settlement systems, an area of high policy relevance and interest to the ECB that had not attracted much interest in the research community beforehand. Also, the network seems to have triggered several related outside initiatives by international institutions, such as the IMF or the OECD. During its first two years the network was organised around three workshops and a final symposium on 10-11 May 2004. 
To focus research resources and to ensure medium-term policy relevance, a limited number of areas have been given top priority: bank competition and the geographical scope of banking; international portfolio choices and asset market linkages between Europe, the United States and Japan; European bond markets; European securities settlement systems; and the emergence and evolution of new markets in Europe (in particular start-up financing markets). In order to stimulate further research focused on the priority fields of the network, the ECB Lamfalussy research fellowships were established. These fellowships sponsor projects proposed by young researchers, both advanced doctoral students and younger professors. Five Lamfalussy fellowships were granted in 2003 and five more in 2004. The first papers from this program have already been issued in the ECB working paper series or are forthcoming. One of them won the prize for the best paper written by a Ph.D. student at the 2004 European Finance Association Meetings in Maastricht. Results of the network in the five top priority areas can be summarised as follows: Bank competition and the geographical scope of banking. First, integration does not appear to be very advanced in many retail banking markets. Second, some of the inherent characteristics of traditional loan and deposit business constrain the cross-border expansion of commercial banking, even in a common currency area. Hence, the implementation of some policies to foster cross-border integration in retail banking may be ineffective. Third, theoretical research suggests that supervisory structures may not be neutral towards further European banking integration. Finally, a stronger role of area-wide competition policies could be beneficial for further banking integration. This would also stimulate economic growth, as more competition in the banking sector induces financially dependent firms to grow more. European bond markets.
While the government bond market has integrated rapidly with the EMU convergence process, its full integration has not yet been achieved. The introduction of a common electronic trading platform reduced transaction costs substantially, but yield spreads of long-term sovereign bonds of the euro area are still heterogeneous. This is largely explained by different sensitivities to an international risk factor, whereas liquidity differentials only play a role in conjunction with this latter factor. Somewhat surprisingly in this context, the dynamically developing corporate bond market exhibits a relatively high level of integration. There is also increasing evidence that the introduction of the euro has contributed to a reduction in the cost of capital in the euro area, in particular through the reduction of corporate bond underwriting fees. As a result, firms may wish to increase bond financing relative to equity financing. The development of a larger corporate bond market is also important for monetary policy. For example, US evidence suggests that the rating of corporate bonds may contribute to the persistence of recessions, as rating agencies' policies affect firms asymmetrically in their access to the bond market over the business cycle. US evidence also suggests that liquidity conditions in stock and bond markets tend to be positively correlated. European securities settlement systems. European securities settlement infrastructures are highly fragmented and further integration and/or consolidation would exploit economies of scale that could greatly benefit investors. It is not clear, however, whether direct public intervention in favour of consolidation would lead to the highest level of efficiency, for example because of the existence of strong vertical integration between trading and securities platforms ("silos"). In contrast, promoting open access to clearing and settlement systems could lead to consolidation and the highest level of efficiency.
Finally, regarding concerns about unfair practices by Central Securities Depositories (CSDs) toward custodian banks, regulatory interventions favouring custodian banks should be discouraged, as long as CSDs are not allowed to price discriminate between custodian banks and investor banks. The emergence and evolution of new markets in Europe (in particular start-up financing markets). While fairly well integrated, “new markets” and start-up financing are less developed and integrated in Europe than in the United States. However, new markets and venture capitalists are the most important intermediaries for the financing of projects with high risk but with potentially very high return. The analysis carried out within the network reveals that European start-up financiers are mostly institutional investors, while US venture capitalists are mostly rich individuals. Also, new markets are essential for the development of start-up finance in Europe, as they provide an exit strategy for start-up financiers who can then sell new successful projects using initial public offerings. Finally, the legal framework affects the development of venture capital firms. For example, very strict personal bankruptcy laws constrain early stage entrepreneurs, reducing demand for venture capital finance. International portfolio choices and asset market linkages between Europe, the United States and Japan. At a global scale, asset market linkages have increased recently. For example, major economies such as the United States and the euro area have become more financially interdependent. This phenomenon can be observed in stock and bond markets as well as in money markets, where the main direction of spillovers has recently been from the US to the euro area. Country-specific shocks now play a smaller role in explaining stock return variations of firms whose sales are internationally diversified. 
Increases in firm-by-firm market linkages are a global phenomenon, but they are stronger within the euro area than in the rest of the world. Various other phenomena also increase market linkages and therefore the likelihood that financial shocks spread across countries. One example is the use of global bonds. Finally, the nowadays more direct access of unsophisticated investors to financial markets may increase volatility. Other areas. Financial integration affects financial structures, but it does not need to lead to their convergence across countries. Financial structures matter for growth, as market-oriented financial systems benefit all sectors and firms, whereas bank-based systems primarily benefit younger firms that depend on external finance. Moreover, good corporate governance increases firms' value. In particular, the dual board system, where the monitoring and advising roles of the board of directors are separated, is found to dominate the single board structure. Therefore, the further development of the European single market should strongly require good corporate governance. In general, well designed institutions foster entrepreneurial activity, partly by relaxing capital constraints. The results of the network clearly illustrated the substantial effects the introduction of the euro had on euro area financial markets. In addition to the effects on bond markets, stock markets and the cost of capital summarised above, research produced showed that the single currency had its strongest effects on money markets, whose unsecured segment is now completely integrated. Without any doubt the euro generally enhanced the liquidity and efficiency of euro area financial markets, and ongoing initiatives such as the European Union's Financial Services Action Plan will help to continue this process. In sum, in the first two years the network has established itself as the hub for the research debate on European financial integration.
Some of the best papers produced by the network, leading to the conclusions mentioned above, are currently being considered for publication in two special issues of academic journals. An issue of the Oxford Review of Economic Policy on "European financial integration" is published contemporaneously with this report, and an issue of the Review of Finance is planned for next year. The current policy context, the gradual progress of integration as well as the creation of other related non-ECB or non-CFS initiatives on financial integration suggest that this topic will remain high on the agendas of policy makers and academics for the years to come. Therefore, the ECB Executive Board and the CFS decided to continue the network, refocusing its priorities. Three priority areas have been added: 1) the relationship between financial integration and financial stability, 2) EU accession, financial development and financial integration, and 3) financial system modernisation and economic growth in Europe. These three areas have become particularly important at the current juncture, but have not received particularly strong attention in the first two years of the network. For example, the area of financial stability research was highlighted by the ECB research evaluators as an area deserving further development. Moreover, despite the results found in the first two years of the network, new developments remain to be further explored in the earlier priority areas. A three-year extension is envisaged, running from after the May 2004 symposium until 2007, with two events to be held per year. The three-year period is long enough to consider the first effects of the Financial Services Action Plan. It also constitutes a realistic horizon for the ambitious agenda implied by the three new priorities. The generally light organisational structure and working of the network will not be changed.
In addition, given the value of the Lamfalussy fellowship research program in stimulating further research in the network’s fields, the program has also been extended to cover all research topics in these areas.
Women and Halakha Shiur
(2008)
This essay examines the foreign policy discourse in contemporary Germany. Reviewing a growing body of publications by German academics and foreign policy analysts, it identifies five schools of thought based on different worldviews, assumptions about international politics, and policy recommendations. These schools of thought are then related, first, to actual preferences held by German policymakers and the public more generally and, second, to a small set of grand strategies that Germany could pursue in the future. The essay argues that the spectrum of likely choices is narrow, with the two most probable strategies, "Wider West" and "Carolingian Europe", continuing the multilateral and integrationist orientation of the old Federal Republic. These findings are contrasted with diverging assessments in the non-German professional literature. Finally, the essay sketches avenues for future research by suggesting ways of broadening the study of country-specific grand strategies, developing and testing inclusive typologies of more abstract foreign policy strategies, and refining the analytical tools for examining foreign policy discourses in general.
In this work, selected 5’- and 3’-untranslated regions (UTRs) of mRNAs from H. volcanii were determined. This data set was used to (1) characterise haloarchaeal UTRs, (2) verify consensus elements for transcription initiation and termination, and (3) investigate the influence of haloarchaeal UTRs on the initiation and regulation of translation. It was shown that all transcripts examined possess non-processed 3’-UTRs with an average length of 45 nucleotides. In addition, a putative transcription termination signal consisting of a penta-U motif preceded by a hairpin structure was identified. Analysis of the regions upstream of the experimentally determined transcription start sites led to the identification of three conserved promoter elements: the TATA box, the BRE element, and a novel element at position -10/-11. Surprisingly, the TATA box consisted of only four conserved nucleotides. Examination of the UTRs showed that the majority of haloarchaeal transcripts lack a 5’-UTR. Where a 5’-UTR is present, unexpectedly only 15% of the 5’-UTRs from H. volcanii contain a Shine-Dalgarno (SD) sequence. It could be shown, however, that various native and artificial 5’-UTRs without an SD sequence are translated very efficiently in vivo. Moreover, the secondary structure of the 5’-UTR and the position of structural elements evidently have a decisive influence on the translatability of transcripts. Insertion of structural elements close to the start codon led to complete repression of translation, whereas insertion of the motif proximal to the 5’ end of the 5’-UTR had no influence on translation efficiency.
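The SD-sequence statistics above rest on scanning 5’-UTRs for a match to the reverse complement of the 16S rRNA anti-SD tail. A minimal sketch of such a scan (the anti-SD tail and the example UTRs are illustrative, not H. volcanii data):

```python
# Hedged sketch: scanning 5'-UTRs for a putative Shine-Dalgarno (SD) motif,
# i.e. a short match to the reverse complement of the 16S rRNA anti-SD tail.
# The tail and example UTRs below are illustrative, not H. volcanii sequences.

ANTI_SD_TAIL = "CCUCC"  # 3' tail of the 16S rRNA (illustrative)

def revcomp_rna(seq: str) -> str:
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def has_sd_motif(utr: str, min_len: int = 4) -> bool:
    """True if the UTR contains >= min_len consecutive bases of the SD core."""
    sd_core = revcomp_rna(ANTI_SD_TAIL)  # "GGAGG"
    for k in range(len(sd_core), min_len - 1, -1):
        for i in range(len(sd_core) - k + 1):
            if sd_core[i:i + k] in utr:
                return True
    return False

utrs = {"with_sd": "AAUGGAGGUCAUC", "without_sd": "AAUCUCAUCUUAC"}
sd_fraction = sum(has_sd_motif(u) for u in utrs.values()) / len(utrs)
```

Applied to a genome-wide set of mapped 5’-UTRs, such a scan yields the fraction of SD-containing leaders reported above.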
In summary, both the eukaryotic scanning mechanism and the bacterial SD-mediated initiation of translation can be ruled out for haloarchaeal transcripts with a 5’-UTR lacking an SD sequence. The investigations carried out in this work provide the basis for further studies aimed at identifying a corresponding third mechanism of translation initiation in H. volcanii. A recent global analysis of translational regulation showed that the fraction of translationally regulated genes in H. volcanii is as high as in eukaryotes (Lange et al., 2007). To characterise the role of haloarchaeal UTRs in the regulation of translation, the UTRs of two selected translationally regulated genes were examined. It turned out that only the presence of both UTRs, 5’ and 3’, leads to growth-phase-dependent regulation of translation. The 3’-UTR alone has no influence on translation efficiency, whereas the 5’-UTR reduces translation efficiency in both growth phases. It was further shown that the 3’-UTR is responsible for the “direction” of the regulation at the translational level and that putative structural elements may be involved in the regulatory mechanism. Taken together, the following model of translational regulation in H. volcanii emerges: structured 5’-UTRs lower the constitutive translation efficiency, and this can be differentially compensated by regulatory factors that bind specific elements of the 3’-UTR. Both natural and artificial aptamers and allosteric ribozymes are effective tools for exogenously controlled gene expression. Therefore, the applicability of a tetracycline-inducible aptamer and of a constitutive hammerhead ribozyme in H. volcanii was investigated.
However, it turned out that the aptamer forms strongly inhibitory secondary structures even in the absence of tetracycline. As an alternative, reporter gene fusions with a self-cleaving hammerhead ribozyme were constructed. The self-cleaving activity of the hammerhead ribozyme in H. volcanii was successfully demonstrated in vivo, which provides the basis for the development of conditional expression systems based on the hammerhead ribozyme in H. volcanii.
Cellular metabolism can be visualised by fluorescence lifetime imaging of fluorophores sensitive to specific intracellular factors such as [H+], [Ca2+], [O2], membrane potential, temperature, polarity of the probe environment, and alterations in the conformation and interactions of macromolecules. Lifetime measurements of the probes allow the quantitative determination of these intracellular factors. Fluorescence microscopy taking advantage of time-correlated single photon counting is a novel method that outperforms all other techniques with its single-photon sensitivity and picosecond time resolution. In this work, a time- and space-correlated single photon counting system was established to investigate the behavior of 2-(4-(dimethylamino)styryl)-1-methylpyridinium iodide (DASPMI) in living cells. DASPMI is known to selectively stain mitochondria in living cells, and its uptake and fluorescence intensity in mitochondria are a dynamic measure of membrane potential. Hence, an endeavour was made to elucidate the mechanism of DASPMI fluorescence by obtaining spectrally resolved fluorescence decays in different solvents. A bi-exponential decay model was sufficient to globally describe the wavelength-dependent fluorescence in ethanol and chloroform, whereas in glycerol a three-exponential decay model was necessary for global analysis. In the polar, low-viscosity solvent water, a mono-exponential decay model fitted the decay data. The sensitivity of DASPMI fluorescence to solvent viscosity was analysed using various proportions of glycerol/ethanol mixtures; the lifetimes were found to increase with increasing solvent viscosity. The negative amplitudes of the short-lifetime component found in chloroform and glycerol at longer wavelengths validated the formation of a new excited-state species from the initially excited state. Time-resolved emission spectra in chloroform and glycerol showed a biphasic increase of spectral width and emission maxima.
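The multi-exponential decay analysis described above amounts to a least-squares fit of a sum of exponentials to the measured decay. A minimal sketch (the lifetimes, amplitudes and noise level are synthetic, not the measured DASPMI values, and convolution with the instrument response is omitted):

```python
# Hedged sketch: fitting a bi-exponential fluorescence decay
# I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
# to synthetic data; no instrument-response convolution is included.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)            # time axis in ns
true_params = (0.7, 0.5, 0.3, 3.0)          # a1, tau1 [ns], a2, tau2 [ns]
data = biexp(t, *true_params) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, data, p0=(0.5, 0.3, 0.5, 2.0))
a1, tau1, a2, tau2 = popt
if tau1 > tau2:  # order components by lifetime for reporting
    a1, tau1, a2, tau2 = a2, tau2, a1, tau1
```

Global analysis, as used in the work above, additionally ties the lifetimes across all emission wavelengths while letting the amplitudes vary per wavelength.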
The spectral width showed an initial fast increase within 150 ps and remained nearly constant thereafter. A two-state model based on solvation of the initially excited state and subsequent formation of a TICT state has been proposed to explain the excited-state kinetics and has been substantiated by the decomposition of the time-resolved spectra. The knowledge of DASPMI photophysics in a variety of solvents now provides the means of deducing complex physiological parameters of mitochondria from its behavior in living cells. Spatially resolved fluorescence decays from single mitochondria or only very few organelles of XTH2 cells exhibited the distinctive three-exponential decay kinetics of a viscous environment. Based on DASPMI photophysics in a variety of solvents, these lifetimes have been attributed to fluorescence from the locally excited (LE) state, the intramolecular charge transfer (ICT) state and the twisted intramolecular charge transfer (TICT) state. A considerable variation in lifetime among mitochondria of different morphology and within single cells was evident, corresponding to the high physiological variation within single cells. The considerable shortening of the short-lifetime component (τ1) under high-membrane-potential conditions, such as in the presence of ATP and/or substrate, was similar to the quenching and dramatic decrease of lifetime in polar solvents. Under these conditions τ2 and τ3 increased with decreasing contribution. Upon treatment with the ionophore nigericin, hyperpolarization of mitochondria resulted in a remarkable shortening of τ1 from 159 ps to 38 ps. Inhibiting respiration with cyanide resulted in a notable increase of the mean lifetime and a decrease of mitochondrial fluorescence. The increase of DASPMI fluorescence under conditions elevating the mitochondrial membrane potential has been attributed to uptake according to the Nernst distribution, to delocalisation of π electrons, quenching processes of the methylpyridinium moiety, and restricted torsional dynamics at the mitochondrial inner membrane.
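The Nernstian uptake invoked above can be made concrete with a short numerical sketch. Treating the dye as an ideal monovalent cation, its equilibrium accumulation ratio follows directly from the membrane potential (the numbers are illustrative):

```python
# Hedged sketch: Nernst accumulation of a monovalent cationic dye (such as
# DASPMI) into mitochondria as a function of the inner-membrane potential:
#   [dye]_in / [dye]_out = exp(-z * F * dPsi / (R * T)),  dPsi in volts, z = +1
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def nernst_ratio(dpsi_mV: float, T: float = 310.0, z: int = 1) -> float:
    """Equilibrium inside/outside concentration ratio for potential dpsi_mV."""
    return math.exp(-z * F * (dpsi_mV / 1000.0) / (R * T))

# At dPsi = -180 mV and 37 °C the dye accumulates several-hundred-fold:
ratio = nernst_ratio(-180.0)
```

This exponential dependence is why even modest potential changes produce large changes in mitochondrial dye content and fluorescence.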
Accordingly, determination of anisotropy in DASPMI-stained mitochondria in living XTH2 cells revealed a dependence of anisotropy on membrane potential. These changes in anisotropy, attributed to restriction of the torsional dynamics about the flexible single bonds neighboring the olefinic double bond, revealed the previously known sub-mitochondrial zones of higher membrane potential along the mitochondrial length. Membrane-potential-dependent changes in anisotropy were further demonstrated in senescent chick embryo fibroblasts. In conclusion, spectroscopic observations of the excited-state kinetics of DASPMI in solvents and of its behavior in living cells revealed for the first time its localisation, the mechanism of its voltage-sensitive fluorescence and its membrane-potential-dependent anisotropy in living cells. The simultaneous dependence of DASPMI photophysics on mitochondrial inner membrane viscosity and transmembrane potential has been highlighted.
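The anisotropy measurements rest on the standard steady-state relation between the parallel and perpendicular polarized intensity components. A minimal sketch (the intensities are illustrative and the G-factor handling is an assumption about the setup):

```python
# Hedged sketch: steady-state fluorescence anisotropy from polarized
# intensity components,
#   r = (I_par - G*I_perp) / (I_par + 2*G*I_perp),
# where G corrects for the polarization bias of the detection path.
def anisotropy(i_par: float, i_perp: float, g: float = 1.0) -> float:
    return (i_par - g * i_perp) / (i_par + 2.0 * g * i_perp)

# More restricted torsional motion (e.g. at a hyperpolarized inner membrane)
# depolarizes the emission less, giving a higher r:
r_free = anisotropy(100.0, 80.0)    # nearly free rotation (illustrative)
r_bound = anisotropy(100.0, 40.0)   # hindered rotation (illustrative)
```

Mapping r pixel by pixel along a mitochondrion is what reveals the sub-mitochondrial zones of differing membrane potential described above.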
The most striking feature of German foreign policy since 1990 is the continuity of its rhetoric of continuity. Helmut Kohl employed it after winning the Bundestag election in December 1990, just as Gerhard Schröder did after his victory in the autumn of 1998. However much the Republic might change internally, however dramatically its external environment might shift, the fundamental constants of German foreign policy were to remain the same. Politically, there were and are almost invariably good reasons for this rhetoric: given the unanimously acknowledged "success story" of the Federal Republic's foreign policy on the one hand and, on the other, clear concerns abroad that unification might put an end to it, everything argued for invoking a continuation of the old course even as much was changing. The talk of continuity in German foreign policy also found a grateful audience at home and abroad, for it told of a good old era of "tranquillity" and "modesty" of the old Federal Republic, which today, as the "Bonn Republic", is almost placed in the historical vicinity of the "Weimar Republic". ...
Hurra-Multilateralismus
(2001)
Is there such a thing as "conservative foreign policy"? The first answer that comes to mind was given by Joschka Fischer to a comparable question right after taking office as the new foreign minister. No, "there is no Green foreign policy, only a German one". According to this view, classical ideological convictions, which in domestic political competition are sorted into antonyms such as "conservative" and "progressive", cannot be transferred to the field of foreign policy. This was precisely the position taken by Kaiser Wilhelm when, shortly after the outbreak of the First World War, he proclaimed: "I no longer recognise parties, I recognise only Germans"...
The investigation of the properties of hadrons and their constituents (quarks and gluons) in hot and/or dense nuclear matter is one of the main goals of heavy-ion physics. A state of dense and hot matter can be created in the laboratory for a short time in the reaction zone of relativistic heavy-ion collisions. Dilepton experiments provide insight into the properties of the strong interaction and into the mass generation of hadrons, since leptons are not affected by the strong interaction. Independently of the beam energy, the invariant-mass spectra of dileptons in heavy-ion collisions show an excess in the invariant-mass range 0.2 - 0.6 GeV/c² compared with the superposition of the expected hadronic decays in vacuum. While at CERN-SPS energies this excess is associated with the in-medium modification of the spectral function of the rho meson, the large dilepton yield observed by the DLS collaboration in C + C and Ca + Ca at 1 GeV/u could not be explained satisfactorily until the HADES data appeared. The discrepancy between experimental data and transport calculations became known as the "DLS puzzle", and a controversial discussion about the validity of the DLS results ensued. The HADES detector system (High Acceptance Di-Electron Spectrometer), located at the heavy-ion synchrotron of the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, is currently the only experiment measuring dielectrons at projectile energies of 1 - 2 GeV/u and is thus the successor of the DLS experiment. Thanks to numerous technical improvements, including mass resolution and acceptance, HADES is a second-generation experiment compared with the DLS spectrometer.
First results of the 12C + 12C measurement at 2 GeV/u by the HADES collaboration confirm the general trend of an enhanced yield compared with the expected contributions from hadronic decays. The question arises how this observation evolves towards lower beam energies. In this work, the measurement of dielectron production in 12C + 12C collisions at a projectile energy of 1 GeV/u performed with the HADES detector system is analysed. The main objectives include verifying the DLS data and determining the excitation function of the excess. The analysis demonstrates that leptons are detected efficiently. The pair analysis presented shows that the combinatorial background can be successfully reduced while the sample of true dielectrons is largely retained. After subtraction of the combinatorial background, the efficiency-corrected and normalised invariant-mass, transverse-momentum and rapidity distributions of the dielectrons are examined. The results are compared with hadronic cocktails from different theoretical approaches, comprising the contributions of short- and long-lived dilepton sources from a thermal source (PLUTO) as well as microscopic transport calculations (HSD, IQMD). In the mass range 0.2 - 0.6 GeV/c², the measured excess relative to the predictions is confirmed. Together with the results of the 12C + 12C measurement at 2 GeV/u, it is found that the excess increases in relative terms with decreasing beam energy. A detailed analysis shows that, in the mass interval 0.15 - 0.5 GeV/c², the excess as a function of projectile energy scales with the number of produced neutral pions and not with the number of eta mesons. The direct comparison of the HADES and DLS results shows that the data of this work agree with the long-doubted DLS results.
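Combinatorial background in dilepton analyses is commonly estimated from the measured like-sign pairs; a minimal sketch of the like-sign geometric-mean estimator (the pair counts are invented, and whether this exact estimator is the one used in this analysis is an assumption):

```python
# Hedged sketch: estimating the combinatorial background (CB) of unlike-sign
# e+e- pairs from the like-sign yields measured in the same event sample,
#   CB = 2 * sqrt(N(++) * N(--)),   signal S = N(+-) - CB.
# All counts below are invented for illustration.
import math

def combinatorial_background(n_pp: float, n_mm: float) -> float:
    """Like-sign geometric-mean estimate of the combinatorial background."""
    return 2.0 * math.sqrt(n_pp * n_mm)

def signal_pairs(n_pm: float, n_pp: float, n_mm: float) -> float:
    """True-pair signal after combinatorial-background subtraction."""
    return n_pm - combinatorial_background(n_pp, n_mm)

cb = combinatorial_background(400.0, 225.0)
sig = signal_pairs(1000.0, 400.0, 225.0)
```

In practice the subtraction is applied bin by bin in invariant mass before the efficiency correction and normalisation described above.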
The question of the physical origin of the excess thus comes to the fore again. In this context, the study of dilepton production in the elementary reactions p + p and d + p is important. Recent calculations with a one-boson-exchange (OBE) model indicate that the bremsstrahlung contributions from p-p and, above all, p-n collisions are significantly larger than previously assumed. An updated transport calculation (HSD), whose parametrisation of the bremsstrahlung is inspired by this OBE result, appears able to describe the results of the 12C + 12C measurements at 1 GeV/u of the HADES and DLS collaborations quite well. The corresponding comparisons are presented and discussed. However, the IQMD transport calculation also explains the HADES data quite well. It is therefore evident that a direct comparison of the OBE model calculations with the data on dilepton production in p + p and d + p reactions measured and currently being analysed by the HADES collaboration is required. Only then can firm conclusions be drawn about the origin of the dileptons at SIS energies.
The search for a modification of hadron properties inside nuclear matter at normal and/or high temperature and density is one of the more interesting issues of modern nuclear physics. Dilepton experiments, by providing interesting results, give insight into the properties of the strong interaction and the nature of hadron mass generation. One of these research tools is the HADES spectrometer, a high-acceptance dilepton spectrometer installed at the heavy-ion synchrotron (SIS) at GSI, Darmstadt. The main physics motivation of HADES is the measurement of e+e- pairs in the invariant-mass range up to 1 GeV/c² in pion- and proton-induced reactions as well as in heavy-ion collisions. The goal is to investigate the properties of the vector mesons rho and omega and of other hadrons reconstructed from e+e- decay pairs. Dileptons are penetrating probes that allow the in-medium properties of hadrons to be studied. However, the measurement of such dilepton pairs is difficult because of the very large background from other processes in which leptons are created. This thesis presents the analysis of the data provided by the first physics run of the HADES spectrometer. For the first time, e+e- pairs produced in C+C collisions at an incident energy of 2 GeV per nucleon have been collected with sufficient statistics. This experiment is of particular importance since it makes it possible to address the puzzling pair excess measured by the former DLS experiment at 1.04 AGeV. The thesis consists of five chapters. The first chapter presents the physics case addressed in this work. The second chapter introduces the HADES spectrometer together with the characteristics of the specific detectors that make up the spectrometer. Chapter three focusses on charged-particle identification. The fourth chapter discusses the reconstruction of the di-electron spectra in C+C collisions; a comparison with theoretical models is included in this part of the thesis as well.
The conclusion and final remarks are given in chapter five.
In the town of Wolfenbüttel, the first standing theatre in Germany with a permanent ensemble was founded in 1592 at the then court of the Welf dukes of Brunswick and Lüneburg. Today's "Lessingtheater" was built in the classicist style and opened in 1909. It is operated as a receiving house without its own ensemble. At present, the "Kulturbund der Lessingstadt Wolfenbüttel e. V." (founded 1946) offers, on behalf of the town, a programme of 50 to 70 performances per year on this full stage (10 m x 8 m), with currently about 600 and later about 500 seats. The programme is composed of touring theatre productions and guest performances by the Nordharzer Städtebundtheater Halberstadt/Quedlinburg and the Landesbühne Rheinland-Pfalz. For its 100th anniversary in 2009, an extensive and comprehensive structural renovation is planned in order to adapt the building to present-day requirements in terms of both stage technology and architecture. Ideas about how the theatre should be spatially designed and equipped in future, and what stage technology should be installed, have for some time been under discussion in the town council and administration, among potential funders, and among the culturally interested public of Wolfenbüttel.