Der Zufall – ein Helfer und kein Störenfried : warum die Wissenschaft stochastische Modelle braucht
(2008)
Chance has a largely dubious reputation in the sciences. For philosophy, Hegel declared: "Philosophical contemplation has no other intention than to eliminate the contingent" (Die Vernunft in der Geschichte, 1822), and other sciences think along similar lines. Physics' engagement with chance has been convoluted and is accompanied by controversy to this day. As for biology, considerable suspicion still persists toward the modern theories of evolution, which rely crucially on chance. And that such theories are incompatible with the idea of a divine creation of the world is regarded as settled by some of their opponents and advocates alike.
Wikipedia, the largest online encyclopedia, poses a puzzle: what drives so many people to contribute to a virtual encyclopedia in their free time? How is it that the quality of most articles is so high and errors are corrected so quickly, especially since access is open to everyone without any proof of qualification? Network analysis can show that mere integration into a network such as Wikipedia shapes individual action and also influences motivation.
Reggie-1 (flotillin-2) and reggie-2 (flotillin-1) are membrane microdomain proteins which are associated with the membrane by means of acylation. They influence different cellular signaling processes, such as neuronal, T-cell and insulin signaling. Upon stimulation of the EGF receptor, reggie-1 becomes phosphorylated and undergoes tyrosine-163-dependent translocation from the plasma membrane to endosomal compartments. In addition, reggie-1 was shown to influence actin-dependent processes. Reggie-2 has been demonstrated to affect caveolin- and clathrin-independent endocytosis. Both proteins form homo- and hetero-oligomers, but the function of these oligomers has remained elusive. Moreover, it has not been clarified whether functions of reggie-1 are also influenced by reggie-2 and vice versa. The first aim of the study was to further investigate the interplay and the hetero-oligomerization of reggie proteins and their functional effects. Both reggie proteins were individually depleted by means of siRNA. In different siRNA systems and various cell lines, reggie-1 depleted cells showed reduced protein amounts of reggie-1 and reggie-2, but reggie-2 knockdown cells still expressed reggie-1 protein. The decrease of reggie-2 in reggie-1 depleted cells was only detected at the protein but not at the mRNA level. Furthermore, reggie-2 expression could be rescued by expression of siRNA-resistant wild type reggie-1-EGFP constructs, but not by the soluble myristoylation mutant G2A. This mutant was also unable to associate with endogenous reggie-1 or reggie-2, which demonstrates that membrane association of reggie-1 is necessary for hetero-oligomerization. In addition, fluorescence microscopy studies and membrane fractionations showed that correct localization of overexpressed reggie-2 was dependent on co-overexpressed reggie-1. Thus, hetero-oligomerization is crucial for membrane association of reggie-2 and for its protein stability or protein expression.
Moreover, the binding of reggie-2 to reggie-1 required tyrosine 163 of reggie-1, which was previously shown to be important for endosomal translocation of reggie-1. Since reggie-2 has been implicated in clathrin- and caveolin-independent endocytosis pathways, the effect of reggie-2 depletion on reggie-1 endocytosis was investigated. Indeed, reggie-1 was dependent on reggie-2 for endosomal localization and EGF-induced endocytosis. By FRET-FLIM analysis it could be shown that reggie hetero-oligomers are dynamic in size or conformation upon EGF stimulation. Thus, it can be concluded that reggie proteins are interdependent in different aspects, such as protein stability or expression, membrane association and subcellular localization. In addition, these results demonstrate that the hetero-oligomers are dynamic and that reggie proteins influence each other in terms of function. A further aim was the characterization of reggie-1 and reggie-2 function in actin-dependent processes, where so far only reggie-1 was known to play a role. Depletion of either of the proteins reduced cell migration, cell spreading and the number of focal adhesions in steady-state cells. Thus, reggie-2 also affects actin-dependent processes. Further investigation of the focal adhesions during cell spreading revealed that the effects of reggie-1 depletion differed from those of reggie-2 knockdown. Reggie-1 depleted cells had elongated cell-matrix adhesions and showed reduced activation of FAK and ERK2. Depletion of reggie-2, on the other hand, resulted in a restricted localization of focal adhesions at the periphery of the cell and decreased ERK2 phosphorylation, but it did not affect FAK autophosphorylation. Hence, reggie proteins influence the regulation of cell-matrix adhesions differently. A link between reggie proteins and focal adhesions is the actin cross-linking protein α-actinin.
The interaction of α-actinin with reggie-1 could be verified by means of co-immunoprecipitations and FRET-FLIM analysis. Reggie-1 binds α-actinin especially in membrane ruffles and in other locations where actin remodeling takes place. Moreover, α-actinin showed a different localization pattern during cell spreading in reggie-1 depleted cells, as compared to the control cells. These results provide further insights into the function of both reggie proteins. Their interplay and hetero-oligomerization was shown to be crucial for their role in endocytosis. In addition, both reggie proteins influence actin-dependent processes and differentially affect focal adhesion regulation.
Untersuchung von Rezeptor-Ligand-Komplexen mittels organischer Synthese und NMR-Spektroskopie
(2008)
Many biological processes are based on the specific binding of a ligand to a receptor. The interaction between a receptor and its ligand can essentially be described by two different models: on the one hand, the lock-and-key principle introduced by E. Fischer, and on the other hand, the induced-fit model described by Koshland. In the lock-and-key principle, the ligand sits in the binding pocket of the receptor like a key in a lock. The induced-fit model, by contrast, presupposes a conformational change of the protein, induced by the ligand, for binding to occur. If, however, the conformations of substrate and receptor change under mutual influence, one speaks of a double-induced-fit model. Studying this recognition at the molecular level is of great importance, since it leads to a better understanding of such processes and thus to the ability to influence them in a targeted manner. How is a ligand selectively recognized and bound by a receptor? Specific non-covalent interactions play an important role in recognition and binding. The repertoire of non-covalent interactions includes electrostatic interactions, hydrogen bonding and the hydrophobic effect.
In the present thesis, such interactions between various ligands and their receptors are investigated using three selected examples. The first two chapters deal with proteins as receptors, the last chapter with RNA. Each chapter begins with a short introduction to the receptors and their associated ligands, followed by a description of the receptor-ligand interaction. The receptors chosen for this work were proteins (kinases and membrane proteins) and structured elements of RNA (the aptamer domains of purine-binding riboswitches and a SELEX RNA). Membrane proteins of the respiratory chain, kinases and riboswitches are, moreover, attractive receptors for drug design. The ligands interacting with them comprise substrates, cofactors, metabolites and inhibitors. The interactions were studied by means of NMR spectroscopy and organic synthesis.
Struktur, Funktion und Dynamik von Na(+)-, H(+)-Antiportern : eine infrarotspektroskopische Studie
(2008)
The function of membrane proteins is of crucial importance for a multitude of cellular processes. Understanding this function requires understanding the relationships between the structure and dynamics of membrane proteins and their interaction with their environment. Spectroscopic methods such as FTIR and CD spectroscopy can provide this information. In the present dissertation, they have contributed substantially to the understanding of the conformational changes induced by the activation of Na+/H+ antiporters. The high sensitivity of a custom-built FTIR-ATR perfusion cell made it possible to perfuse different effector molecules over a protein sample and to characterize the resulting structural changes spectroscopically. The conformational changes accompanying the activation process were investigated for two different Na+/H+ antiporters, NhaA and MjNhaP1, which are activated and deactivated in different pH ranges. The Na+/H+ antiporter NhaA from E. coli has its maximal transport activity at pH 8.5 and is completely inactive at pH < 6.5. Although the 3D structure of this protein is known for the inactive conformation at pH 4, the conformational changes that accompany its activation have remained unresolved. Analysis of the FTIR and CD spectra of NhaA revealed contributions of beta-sheet, turn and disordered structures in both states, with alpha-helical structure dominating. The FTIR spectra of the inactive and active states show two components, pointing to the presence of two alpha-helix populations whose properties depend on the activation state. The temperature-induced structural changes and the reorganization of the protein during unfolding confirmed that activation of the protein changes the properties of the alpha-helices.
Activation leads to a thermal destabilization of this structure. The beta-sheet structure, which forms the main contact between the monomers, also showed different thermal behavior in the inactive and active states. From this it could be concluded that activity is only possible when NhaA is present as a dimer. The results of 1H/2H exchange show that the solvent accessibility of the protein changes upon activation. Activation induces an open, more solvent-accessible conformation in which the amino acid side chains in the hydrophilic region of the protein exchange hydrogen for deuterium more rapidly, and in which additional side chains that reside in the hydrophobic region of the protein in the inactive state become exposed to the solution upon activation. Recording reaction-induced difference spectra yielded unambiguous spectroscopic signatures for the "inactive" and "active" states. The difference spectra of the pH titration showed that pH has a dramatic effect both on the secondary structure and on the protonation state of the amino acid side chains. The pH- and Na+-induced activation of the protein leads to a transformation of the transmembrane alpha-helical structure with respect to length, degree of order and/or arrangement, and to a change in the protonation of glutamate or aspartate side chains. The pD-induced secondary structure changes additionally provided information about the change in the environment of a tyrosine side chain upon activation. Comparison of the difference spectra induced by sodium binding and by inhibitor binding showed that the binding sites of sodium and of the inhibitor differ.
The FTIR and CD results for the Na+/H+ antiporter MjNhaP1 from M. jannaschii, which in contrast to NhaA is active at pH 6 and inactive at pH values > 8, showed that, as with NhaA, the protein in its active state at pH 6 is built up mainly of alpha-helices. It was possible to study and compare two different sample preparations (protein in detergent and in 2D crystals). Raising the pH of the detergent-solubilized sample led to a decrease in alpha-helical and an increase in disordered structures. This was also reflected in the studies of thermal stability and in the 1H/2H exchange experiment: the thermal stability of the alpha-helices decreased dramatically upon inactivation. These results also showed that the beta-sheet structure is not involved in the activation of MjNhaP1, but that it is of fundamental importance for the overall stability of the protein and is probably responsible for the main contact between the monomers. In contrast to NhaA, the monomer-monomer interaction is not necessary for the activity of MjNhaP1. Owing to the higher proportion of disordered structure in the inactive state of the detergent-solubilized sample, a higher 1H/2H exchange is observed in this state. Comparison with the 1H/2H exchange results for 2D crystals made it possible to localize the disordered structure on the outside of the protein molecule in the inactive state. The pH-induced difference spectra showed that activation leads to helix formation in the protein and to a change in the protonation of aspartate and/or glutamate side chains, independent of the sample preparation. The comparison of NhaA and MjNhaP1 shows that in both cases activation is associated with a conformational change and with a change in the protonation or the environment of one or more aspartate or glutamate side chains.
The structural changes of the two proteins during activation are similar, but clearly distinguishable upon inactivation. The pH-induced structural changes of NhaA and MjNhaP1 were confirmed with the mutants G338S and R347A, which show no pH dependence of activity.
A data set of annual values of area equipped for irrigation for all 236 countries in the world during the period 1900 - 2003 was generated. The basis for this data product was information available through various online databases and from other published materials. The complete time series were then constructed around the reported data applying six statistical methods, which are discussed in terms of reliability and data uncertainties. The total area equipped for irrigation in the world in 1900 was 53.2 million hectares. Irrigation was practiced mainly in the arid regions of the globe and in the paddy rice areas of South and East Asia. In some temperate countries in Western Europe irrigation was practiced widely on pastures and meadows. The time series suggest a modest rate of increase of irrigated areas in the first half of the 20th century, followed by a more dynamic development in the second half. The turn of the century is characterized by an overall consolidating trend, resulting in a total of 285.8 million hectares in 2003. The major contributing countries have changed little throughout the century. This data product is regarded as a preliminary result of an ongoing effort to develop a detailed data set and map of areas equipped for irrigation in the world over the 20th century using sub-national statistics and historical irrigation maps.
The purpose of this paper is to describe recent developments in the morphological, syntactic, and semantic annotation of the TüBa-D/Z treebank of German. The TüBa-D/Z annotation scheme is derived from the Verbmobil treebank of spoken German [4, 10], but has been extended along various dimensions to accommodate the characteristics of written texts. TüBa-D/Z uses as its data source the "die tageszeitung" (taz) newspaper corpus. The Verbmobil treebank annotation scheme distinguishes four levels of syntactic constituency: the lexical level, the phrasal level, the level of topological fields, and the clausal level. The primary ordering principle of a clause is the inventory of topological fields, which characterize the word order regularities among different clause types of German, and which are widely accepted among descriptive linguists of German [3, 6]. The TüBa-D/Z annotation relies on a context-free backbone (i.e. proper trees without crossing branches) of phrase structure combined with edge labels that specify the grammatical function of the phrase in question. The syntactic annotation scheme of TüBa-D/Z is described in more detail in [12, 11]. TüBa-D/Z currently comprises approximately 15,000 sentences, with approximately 7,000 further sentences in the correction phase. The latter will be released along with an updated version of the existing treebank before the end of this year. The treebank is available in an XML format, in the NEGRA export format [1] and in the Penn treebank bracketing format. The XML format contains all types of information described above, the NEGRA export format contains all sentence-internal information, while the Penn treebank format includes only those layers of information that can be expressed as pure tree structures. Over the course of the last year, more fine-grained linguistic annotations have been added along the following dimensions: 1. the basic Stuttgart-Tübingen tagset (STTS) labels [9] have been enriched with relevant features of inflectional morphology, 2. named entity information has been encoded as part of the syntactic annotation, and 3. a set of anaphoric and coreference relations has been added to link referentially dependent noun phrases. In the following sections, we will describe each of these innovations in turn and demonstrate how the additional annotations can be incorporated into one comprehensive annotation scheme.
Part-of-speech tagging is generally performed with Markov models based on bigram or trigram statistics. While Markov models concentrate strongly on the left context of a word, many languages require the inclusion of right context for correct disambiguation. We show for German that the best results are reached by a combination of left and right context. If only left context is available, changing the direction of analysis and proceeding from right to left improves the results. In a version of MBT (Daelemans et al., 1996) with default parameter settings, including the right context improved POS tagging accuracy from 94.00% to 96.08%, corroborating our hypothesis. The version with optimized parameters reaches 96.73%.
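The contrast between left-only and symmetric context windows described in this abstract can be sketched as feature extraction for a memory-based tagger. This is a minimal illustration, not the MBT implementation; the padding symbols and the toy sentence are invented for the example.

```python
# Sketch: building tagger feature vectors with a left-only vs. a
# symmetric context window, in the spirit of the memory-based tagging
# (MBT) setup described above. Padding symbols and the example
# sentence are illustrative assumptions, not taken from the paper.

def context_features(words, i, left=2, right=0):
    """Return the feature tuple for position i: `left` preceding words,
    the focus word, and `right` following words (padded at the edges)."""
    padded = ["<s>"] * left + words + ["</s>"] * right
    j = i + left  # index of the focus word in the padded list
    return tuple(padded[j - left : j + right + 1])

sentence = ["der", "Hund", "schläft"]

# Left context only (the classic left-to-right, Markov-style view):
assert context_features(sentence, 0, left=2, right=0) == ("<s>", "<s>", "der")

# Symmetric context, which the experiments above found most accurate:
assert context_features(sentence, 1, left=1, right=1) == ("der", "Hund", "schläft")
```

Each such tuple would be stored with its gold tag during training and matched against at tagging time; the abstract's point is that adding the right-hand positions to the tuple is what lifts accuracy.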
The definition of similarity between sentences is formulated on the levels of words, POS tags, and chunks (Abney 91; Abney 96). The evaluation of this approach shows that while precision and recall based on the PARSEVAL measures (Black et al. 91) do not yet reach state-of-the-art parsers (F1=87.19 on syntactic constituents, F1=77.78 including function-argument structure), the parser shows very reliable performance where function-argument structure is concerned (F1=96.52). The lower F-scores are very often due to unattached constituents.
The problem of vocalization, or diacritization, is essential to many tasks in Arabic NLP. Arabic is generally written without the short vowels, which leads to one written form having several pronunciations with each pronunciation carrying its own meaning(s). In the experiments reported here, we define vocalization as a classification problem in which we decide for each character in the unvocalized word whether it is followed by a short vowel. We investigate the importance of different types of context. Our results show that the combination of using memory-based learning with only a word internal context leads to a word error rate of 6.64%. If a lexical context is added, the results deteriorate slowly.
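The per-character classification setup described above can be sketched by generating one instance per character with word-internal context only. The transliterated toy word and the padding symbol are invented for illustration; the real system works on Arabic script and uses memory-based learning over such instances.

```python
# Sketch: casting vocalization as per-character classification, as in
# the abstract: for each character of the unvocalized word, a learner
# decides whether a short vowel follows. The transliteration "ktb" and
# the "_" padding symbol are illustrative assumptions.

def char_instances(unvocalized, window=1):
    """One classification instance per character: the character plus
    `window` neighbors on each side (word-internal context only)."""
    padded = "_" * window + unvocalized + "_" * window
    return [tuple(padded[i : i + 2 * window + 1])
            for i in range(len(unvocalized))]

# Each tuple would be paired with a binary label (short vowel follows:
# yes/no) taken from the vocalized training data.
assert char_instances("ktb") == [("_", "k", "t"), ("k", "t", "b"), ("t", "b", "_")]
```

Widening `window` corresponds to the "importance of different types of context" question the abstract investigates; the reported result is that word-internal context alone already reaches a 6.64% word error rate.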
This thesis first gives a brief overview of the fields of word classification and machine learning (Chapter 1). Then the transformation-based error-driven tagging approach of Brill (1992, 1993, 1994) is presented and adapted for use with German-language corpora (Chapter 2). This is a rule-based system in which, in contrast to previously existing systems, the rules are not crafted manually and supplied to the system; rather, the system acquires the rules itself, on the basis of a few rule schemata, from a small pre-tagged training corpus. Chapter 3 presents the results of applying the system to parts of a German-language corpus. Finally, Chapter 4 presents other tagging systems and compares them with the system of Brill (1993) on the basis of eight criteria.
In syntax, the trend nowadays is towards lexicalized grammar formalisms. It is now widely accepted that dividing words into word classes may serve as a labor-saving mechanism, but at the same time it discards all detailed information on the idiosyncratic behavior of words. And that is exactly the type of information that may be necessary in order to parse a sentence. For learning approaches, however, lexicalized grammars represent a challenge for the very reason that they include so much detailed and specific information, which is difficult to learn. This paper presents an algorithm for learning a link grammar of German. The problem of data sparseness is tackled by using all the available information from partial parses as well as from an existing grammar fragment and a tagger. This is a report on work in progress, so no representative results are available yet.
This paper presents a comparative study of probabilistic treebank parsing of German, using the Negra and TüBa-D/Z treebanks. Experiments with the Stanford parser, which uses a factored PCFG and dependency model, show that, contrary to previous claims for other parsers, lexicalization of PCFG models boosts parsing performance for both treebanks. The experiments also show a big difference in parsing performance between training on the Negra and on the TüBa-D/Z treebanks. Parser performance for the models trained on TüBa-D/Z is comparable to parsing results for English with the Stanford parser trained on the Penn treebank. This comparison at least suggests that German is not harder to parse than its West Germanic neighbor language English.
How to compare treebanks
(2008)
Recent years have seen an increasing interest in developing standards for linguistic annotation, with a focus on the interoperability of the resources. This effort, however, requires a profound knowledge of the advantages and disadvantages of linguistic annotation schemes in order to avoid importing the flaws and weaknesses of existing encoding schemes into the new standards. This paper addresses the question of how to compare syntactically annotated corpora and gain insights into the usefulness of specific design decisions. We present an exhaustive evaluation of two German treebanks with crucially different encoding schemes. We evaluate three different parsers trained on the two treebanks and compare results using EVALB, the Leaf-Ancestor metric, and a dependency-based evaluation. Furthermore, we present TePaCoC, a new test suite for the evaluation of parsers on complex German grammatical constructions. The test suite provides a well thought-out error classification, which enables us to compare parser output for parsers trained on treebanks with different encoding schemes and provides interesting insights into the impact of treebank annotation schemes on specific constructions like PP attachment or non-constituent coordination.
In the last decade, the Penn treebank has become the standard data set for evaluating parsers. The fact that most parsers are evaluated solely on this specific data set leaves unanswered the question of how much these results depend on the annotation scheme of the treebank. In this paper, we investigate the influence that different decisions in the annotation schemes of treebanks have on parsing. The investigation compares the similar German treebanks NEGRA and TüBa-D/Z, which are subsequently modified to allow a comparison of the differences. The results show that deleted unary nodes and a flat phrase structure have a negative influence on parsing quality, while a flat clause structure has a positive influence.
Transforming constituent-based annotation into dependency-based annotation has been shown to work for different treebanks and annotation schemes (e.g. Lin (1995) has transformed the Penn treebank, and Kübler and Telljohann (2002) the Tübinger Baumbank des Deutschen (TüBa-D/Z)). These ventures are usually triggered by the conflict between theory-neutral annotation, that targets most needs of a wider audience, and theory-specific annotation, that provides more fine-grained information for a smaller audience. As a compromise, it has been pointed out that treebanks can be designed to support more than one theory from the start (Nivre, 2003). We argue that information can also be added to an existing annotation scheme so that it supports additional theory-specific annotations. We also argue that such a transformation is useful for improving and extending the original annotation scheme with respect to both ambiguous annotation and annotation errors. We show this by analysing problems that arise when generating dependency information from the constituent-based TüBa-D/Z.
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. Such larger structures are not only desirable for a deeper syntactic analysis; they also constitute a necessary prerequisite for assigning function-argument structure. The present paper offers a similarity-based algorithm for assigning functional labels such as subject, object, head, complement, etc. to complete syntactic structures on the basis of pre-chunked input. The evaluation of the algorithm has concentrated on measuring the quality of the functional labels. It was performed on a German and an English treebank, using two different annotation schemes at the level of function-argument structure. The results of 89.73% correct functional labels for German and 90.40% for English validate the general approach.
In this paper, we investigate the role of sub-optimality in training data for part-of-speech tagging. In particular, we examine to what extent the size of the training corpus and certain types of errors in it affect the performance of the tagger. We distinguish four types of errors: If a word is assigned a wrong tag, this tag can belong to the ambiguity class of the word (i.e. to the set of possible tags for that word) or not; furthermore, the major syntactic category (e.g. "N" or "V") can be correctly assigned (e.g. if a finite verb is classified as an infinitive) or not (e.g. if a verb is classified as a noun). We empirically explore the decrease of performance that each of these error types causes for different sizes of the training set. Our results show that those types of errors that are easier to eliminate have a particularly negative effect on the performance. Thus, it is worthwhile concentrating on the elimination of these types of errors, especially if the training corpus is large.
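The four error types distinguished above arise from crossing two binary distinctions. A small sketch makes the taxonomy concrete; the toy lexicon, the tag names, and the first-letter major-category rule are illustrative assumptions, not the paper's actual tagset.

```python
# Sketch: classifying a tagging error along the two dimensions named
# in the abstract: (a) is the wrong tag in the word's ambiguity class,
# and (b) is the major syntactic category nevertheless correct?
# Lexicon entries and Penn-style tags here are invented for illustration.

AMBIGUITY_CLASS = {
    "walk": {"NN", "VB"},   # noun or verb
    "the": {"DT"},          # unambiguous determiner
}

def major_category(tag):
    # Toy rule: the first letter stands for the major category,
    # e.g. "N" for NN/NNS, "V" for VB/VBZ.
    return tag[0]

def error_type(word, wrong_tag, gold_tag):
    """Return (in_ambiguity_class, major_category_correct)."""
    in_class = wrong_tag in AMBIGUITY_CLASS.get(word, set())
    same_major = major_category(wrong_tag) == major_category(gold_tag)
    return (in_class, same_major)

# "walk" mistagged as NN instead of VB: the wrong tag is in the word's
# ambiguity class, but the major category (N vs. V) is wrong.
assert error_type("walk", "NN", "VB") == (True, False)
```

Injecting controlled amounts of each of the four (in_class, same_major) combinations into training data is one way to measure, as the abstract does, which error types hurt tagger performance most.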
Prepositional phrase (PP) attachment is one of the major sources of errors in traditional statistical parsers. The reason for this lies in the type of information necessary for resolving structural ambiguities. For parsing, it is assumed that distributional information about parts of speech and phrases is sufficient for disambiguation. For PP attachment, in contrast, lexical information is needed. The problem of PP attachment has sparked much interest ever since Hindle and Rooth (1993) formulated the problem in a way that can be easily handled by machine learning approaches: in their approach, PP attachment is reduced to the decision between noun and verb attachment, and the relevant information is reduced to the two possible attachment sites (the noun and the verb) and the preposition of the PP. Brill and Resnik (1994) extended the feature set to the now standard 4-tuple, which also contains the noun inside the PP. Among the many publications on the problem of PP attachment, Volk (2001; 2002) describes the only system for German. He uses a combination of supervised and unsupervised methods. The supervised method is based on the back-off model by Collins and Brooks (1995); the unsupervised part consists of heuristics such as "If there is a support verb construction present, choose verb attachment". Volk trains his back-off model on the Negra treebank (Skut et al., 1998) and extracts frequencies for the heuristics from the "Computerzeitung", which also serves as the test data set. Consequently, it is difficult to compare Volk's results to other results for German, including the results presented here, since he not only uses a combination of supervised and unsupervised learning but also performs domain adaptation. Most researchers working on PP attachment seem content with a standalone PP attachment system; we have found hardly any work on integrating the results of such approaches into actual parsers. The only exceptions are Mehl et al. (1998) and Foth and Menzel (2006), both working with German data. Mehl et al. report a slight improvement in PP attachment, from 475 correct PPs out of 681 for the original parser to 481 PPs. Foth and Menzel report an improvement in overall accuracy from 90.7% to 92.2%. Both integrate statistical attachment preferences into a parser. First, we will investigate whether dependency parsing, which generally uses lexical information, shows the same performance on PP attachment as an independent PP attachment classifier does. Then we will investigate an approach that allows the integration of PP attachment information into the output of a parser without having to modify the parser: the results of an independent PP attachment classifier are integrated into the parse of a dependency parser for German in a postprocessing step.
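The standard 4-tuple formulation mentioned above can be sketched with a toy classifier. The training tuples and counts below are invented for illustration; real systems such as Collins and Brooks (1995) back off over sub-tuples of (v, n1, p, n2) rather than voting on the preposition alone.

```python
# Sketch: PP attachment as classification over the standard
# (verb, noun, preposition, noun-in-PP) 4-tuple, reduced here to a toy
# majority vote over tuples sharing the preposition. All training
# examples and the default decision are illustrative assumptions.

from collections import Counter

# (v, n1, p, n2) -> attachment label seen in training ("V" = verb
# attachment, "N" = noun attachment); examples are invented:
training = [
    (("eat", "pizza", "with", "fork"), "V"),
    (("eat", "pizza", "with", "anchovies"), "N"),
    (("eat", "pasta", "with", "fork"), "V"),
]

def attach(tuple4, data=training):
    """Majority vote over training tuples that share the preposition."""
    votes = Counter(label for feats, label in data if feats[2] == tuple4[2])
    return votes.most_common(1)[0][0] if votes else "N"  # default: noun

# "with" was seen twice with verb attachment, once with noun attachment:
assert attach(("see", "man", "with", "telescope")) == "V"
```

The postprocessing idea in the abstract corresponds to taking such a classifier's decision and overriding the PP's head in the dependency parser's output, leaving the parser itself untouched.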
Machine learning is frequently used for the efficient annotation of large amounts of data. Research on machine learning methods is generally limited to comparing different learning methods or to determining the optimal size of the training data. What has not yet been investigated, however, is the extent to which linguistic knowledge in the task definition can have a positive effect. This is examined here on the task of learning base noun phrases under three different definitions. The definitions differ in the degree of linguistically motivated extensions added to an initial, more practically motivated definition. The experiments showed that the number of misclassified words can be reduced by one third.
This report explores the question of compatibility between annotation projects, including the translation of annotation formalisms into each other or into common forms. Compatibility issues are crucial for systems that use the results of multiple annotation projects. We hope that this report will initiate a concerted effort in the field to track the compatibility of annotation schemes for part-of-speech tagging, time annotation, treebanking, role labeling and other phenomena.
This paper reports on the SYN-RA (SYNtax-based Reference Annotation) project, an ongoing project annotating German newspaper texts with referential relations. The project has developed an inventory of anaphoric and coreference relations for German in the context of a unified, XML-based annotation scheme for combining morphological, syntactic, semantic, and anaphoric information. The paper discusses how this unified annotation scheme relates to other formats currently discussed in the literature, in particular the annotation graph model of Bird and Liberman (2001) and the pie-in-the-sky scheme for semantic annotation.
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. The TüSBL parser extends current chunk-parsing techniques with a tree-construction component that turns partial chunk parses into complete tree structures, including recursive phrase structure as well as function-argument structure. TüSBL's tree-construction algorithm relies on techniques from memory-based learning that allow similarity-based classification of a given input structure relative to a pre-stored set of tree instances from a fully annotated treebank. A quantitative evaluation of TüSBL has been conducted using a semi-automatically constructed treebank of German that consists of approx. 67,000 fully annotated sentences. The basic PARSEVAL measures were used, although they were developed for parsers whose main goal is a complete analysis that spans the entire input. This runs counter to the basic philosophy underlying TüSBL, whose main goal is the robustness of partially analyzed structures.
This paper provides an overview of current research on a hybrid and robust parsing architecture for the morphological, syntactic and semantic annotation of German text corpora. The novel contribution of this research lies not in the individual parsing modules, each of which relies on state-of-the-art algorithms and techniques. Rather what is new about the present approach is the combination of these modules into a single architecture. This combination provides a means to significantly optimize the performance of each component, resulting in an increased accuracy of annotation.
Constraint-based definitions and extensions of Tree Adjoining Grammars (TAG) have recently attracted a lot of interest. Examples are the so-called quasi-trees, D-Tree Grammars and Tree Description Grammars (TDGs). The latter are grammars consisting of a set of formulas denoting trees. TDGs are derivation-based: in each derivation step, a conjunction is built from the old formula, a formula of the grammar, and additional equivalences between node names of the two formulas. This formalism is more powerful than TAG. TDGs offer the advantages of MC-TAG and D-Tree Grammars for natural languages, and they allow underspecification. The problem, however, is that TDGs might be unnecessarily powerful for natural languages. To solve this problem, this paper proposes local TDGs, a restricted version of TDGs. Local TDGs retain the advantages of TDGs, but they are semilinear and therefore more appropriate for natural languages. First, the notion of semilinearity is defined. Then local TDGs are introduced, and, finally, the semilinearity of local Tree Description Languages is proven.
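For reference, the standard notion of semilinearity on which the abstract relies can be stated as follows (the textbook definition via Parikh images, not necessarily the paper's exact formulation):

```latex
A set $S \subseteq \mathbb{N}^k$ is \emph{linear} if
$S = \{\, v_0 + n_1 v_1 + \dots + n_m v_m \mid n_1, \dots, n_m \in \mathbb{N} \,\}$
for fixed vectors $v_0, \dots, v_m \in \mathbb{N}^k$, and \emph{semilinear}
if it is a finite union of linear sets. A language $L$ over the alphabet
$\{a_1, \dots, a_k\}$ is semilinear iff its Parikh image
$\psi(L) = \{\, (|w|_{a_1}, \dots, |w|_{a_k}) \mid w \in L \,\}$,
where $|w|_{a_i}$ counts the occurrences of $a_i$ in $w$,
is a semilinear subset of $\mathbb{N}^k$.
```

Semilinearity captures the intuition of constant growth and is one of the defining properties of mildly context-sensitive formalisms.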
Quantification of pulmonary arterial pressure in the truncus pulmonalis of the pig. Method: artificial induction of pulmonary hypertension of varying severity using thromboxane A2. Quantification of blood flow by magnetic resonance flow measurement in the truncus pulmonalis, from which the temporal flow profile and the acceleration time (AT) are determined. Correlation of the AT with simultaneously acquired data from invasive pressure measurement (pulmonary artery catheter).
This paper proposes a compositional semantics for lexicalized tree adjoining grammars (LTAG). Tree-local multicomponent derivations allow the separation of the semantic contribution of a lexical item into one component contributing to the predicate-argument structure and a second component contributing to scope semantics. Based on this idea, a syntax-semantics interface is presented in which the compositional semantics depends only on the derivation structure. It is shown that the derivation structure allows an appropriate amount of underspecification. This is illustrated by investigating underspecified representations for quantifier scope ambiguities and related phenomena such as adjunct scope and island constraints.
A hierarchy of local TDGs
(1998)
Many recent variants of Tree Adjoining Grammars (TAG) allow an underspecification of the parent relation between nodes in a tree, i.e. they do not deal with fully specified trees, as is the case with TAG. Such TAG variants are, for example, Description Tree Grammars (DTG), Unordered Vector Grammars with Dominance Links (UVG-DL), a definition of TAG via so-called quasi-trees, and Tree Description Grammars (TDG). The last variant, local TDGs, is an extension of TAG generating tree descriptions. Local TDGs even allow an underspecification of the dominance relation between node names and thereby make it possible to generate underspecified representations for structural ambiguities such as quantifier scope ambiguities. This abstract deals with formal properties of local TDGs. A hierarchy of local TDGs is established, together with a pumping lemma for local TDGs of a certain rank.
We argue for incorporating the financial economics of market microstructure into the financial econometrics of asset return volatility estimation. In particular, we use market microstructure theory to derive the cross-correlation function between latent returns and market microstructure noise, both of which feature prominently in the recent volatility literature. The cross-correlation at zero displacement is typically negative, and cross-correlations at nonzero displacements are positive and decay geometrically. If market makers are sufficiently risk averse, however, the cross-correlation pattern is inverted. Our results are useful for assessing the validity of the frequently assumed independence of latent price and microstructure noise, for explaining observed cross-correlation patterns, for predicting as-yet undiscovered patterns, and for making informed conjectures as to improved volatility estimation methods.
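The diagnostic the abstract points to can be made concrete with a small simulation. The following self-contained Python sketch is not the paper's model: the noise process and all coefficients are purely illustrative. It generates latent returns, ties the noise to the contemporaneous return, and computes the sample cross-correlation function at a chosen displacement, which is the quantity one would inspect to test the independence assumption.

```python
import random

def cross_corr(x, y, lag):
    """Sample cross-correlation corr(x_t, y_{t+lag}) of two equal-length series."""
    n = len(x)
    if lag >= 0:
        pairs = [(x[t], y[t + lag]) for t in range(n - lag)]
    else:
        pairs = [(x[t - lag], y[t]) for t in range(n + lag)]
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(0)
n = 5000
# latent returns r*_t (i.i.d. here for simplicity)
latent = [random.gauss(0, 1) for _ in range(n)]
# toy microstructure noise negatively tied to the contemporaneous latent
# return; the -0.5 loading and the noise scale are arbitrary illustrations
noise = [-0.5 * r + random.gauss(0, 0.5) for r in latent]

ccf0 = cross_corr(latent, noise, 0)  # clearly negative at zero displacement
ccf5 = cross_corr(latent, noise, 5)  # near zero: this toy noise carries no
                                     # dependence at nonzero displacements
```

Replacing the toy noise process with one derived from an inventory-based market-making model would be the natural next step to reproduce the geometric decay pattern the paper describes.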
The future of securitization
(2008)
Securitization is a financial innovation that experiences a boom-bust cycle, as many other innovations before it. This paper analyzes possible reasons for the breakdown of primary and secondary securitization markets, and argues that misaligned incentives along the value chain are the primary cause of the problems. The illiquidity of asset and interbank markets, in this view, is a market failure derived from ill-designed mechanisms for coordinating financial intermediaries and investors. Thus, illiquidity is closely related to the design of the financial chains. Our policy conclusions emphasize crisis prevention rather than crisis management, and the objective is to restore a "comprehensive incentive alignment". The toe-hold for strengthening regulation is surprisingly small. First, we emphasize the importance of equity piece retention for the long-term quality of the underlying asset pool. As a consequence, equity piece allocation needs to be publicly known, facilitating market pricing. Second, on a micro level, the accountability of managers can be improved by compensation packages aimed at long-term incentives and penalizing policies with destabilizing effects on financial markets. Third, on a macro level, increased transparency relating to effective risk transfer, risk-related management compensation, and credible measurement of rating performance stabilizes the valuation of financial assets and, hence, improves the solvency of financial intermediaries. Fourth, financial intermediaries whose risk is opaque may be subjected to higher capital requirements.
Proteorhodopsin (PR), originally isolated from an uncultivated γ-proteobacterium as a result of biodiversity screens, is highly abundant ocean-wide. PR, a type I retinal-binding protein, is a bacterial homologue of bacteriorhodopsin (BR) with 26% sequence identity. The members of the PR family share about 78% sequence identity and display a 40 nm difference in their absorption spectra. This property of the PR family provides an excellent model system for understanding the mechanism of spectral tuning. Functionally, PR is a photoactive proton pump and is suggested to exhibit a pH-dependent vectoriality of proton transfer. This raises questions about its potential role as a pH-dependent regulator. The abundance of PR in huge numbers within the cell and its widespread ocean-wide distribution at different depths hint at an involvement of PR in the utilization of solar energy, energy metabolism and carbon recycling in the sea. Contrary to BR, which is known to form natural 2D crystals, no such information is available for PR to date. Neither its functional mechanism nor its 3D structure has been resolved so far. This PhD project is an attempt to gain deeper insight into the structural and functional characterization of PR. The approach combines the potential of 2D crystallography, atomic force microscopy and solid-state NMR techniques for the characterization of this protein. A wide range of crystalline conditions was obtained as a result of 2D crystallization screens, which hints at dominant protein-protein interactions. Considering the high number of PR molecules reported per cell, it is likely that, driven by such interactions, the protein has a natively dense packing in its environment. The projection map reflected the low resolution of these crystals but suggested a donut-shaped oligomeric arrangement of the protein in a hexagonal lattice with a unit cell size of 87 Å × 87 Å.
Preliminary FTIR measurements indicated that the crystalline environment does not obstruct the photocycle of PR, and the K as well as the M intermediate states could be identified. Single-molecule force spectroscopy and atomic force microscopy on these 2D crystals were used to probe further information about the oligomeric state and the nature of unfolding. The data revealed that the protein predominantly exists as hexamers in crystalline as well as densely reconstituted regions, but a small percentage of pentamers is also observed. The unfolding mechanism was similar to that of other, relatively well-characterized members of the rhodopsin family. A good correlation between the atomic force microscopy and the electron microscopy data was achieved. Solid-state NMR of the isotopically labeled 2D crystalline preparations, using uniform and selective labeling schemes, allowed us to obtain high-quality SSNMR spectra with typical 15N line widths in the range of 0.6-1.2 ppm. The measured 15N chemical shift value of the Schiff base in the 2D crystalline form was observed to be similar to the Schiff base chemical shift values for the functionally active reconstituted samples. This provides indirect evidence for the active functionality, and hence the folding, of the protein. The first 15N assignment has been achieved for a tryptophan with the help of Rotational Echo Double Resonance experiments. The 2D Cross Polarization Lee-Goldburg measurements reflect the dynamic state of the protein in spite of its restricted mobility in the crystalline state. The behavior of the lipids, as measured by 31P from the lipid head groups, showed that the lipids are not tightly bound to the protein but behave more like a lipid bilayer. The 13C-13C homonuclear correlation experiments, with a mixing time optimized on the basis of a build-up curve analysis, suggest that it is possible to observe individual resonances, as seen in the case of glutamic acid. The signal-to-noise ratio was good enough to record a decent spectrum within a feasible period.
Selective unlabeling is an efficient method for reducing spectral overlap. However, more efficient labeling schemes are required for further characterization. The present spectral resolution is good for the investigation of individual amino acids, but for uniformly labeled samples, further improvement is required.
Tree-local MCTAG with shared nodes : an analysis of word order variation in German and Korean
(2004)
Tree Adjoining Grammars (TAG) are known not to be powerful enough to deal with scrambling in free word order languages. The TAG-variants proposed so far in order to account for scrambling are not entirely satisfying. Therefore, an alternative extension of TAG is introduced based on the notion of node sharing. Considering data from German and Korean, it is shown that this TAG-extension can adequately analyse scrambling data, also in combination with extraposition and topicalization.
In this paper, we present an open-source parsing environment (Tübingen Linguistic Parsing Architecture, TuLiPA) which uses Range Concatenation Grammar (RCG) as a pivot formalism, thus opening the way to the parsing of several mildly context-sensitive formalisms. This environment currently supports tree-based grammars (namely Tree-Adjoining Grammars (TAG) and Multi-Component Tree-Adjoining Grammars with Tree Tuples (TT-MCTAG)) and allows computation not only of syntactic structures, but also of the corresponding semantic representations. It is used for the development of a tree-based grammar for German.
This paper proposes a corpus encoding standard that meets the needs of linguistic research using a variety of linguistic data structures. The standard was developed in SFB 441, a research project at the University of Tuebingen. The principal concern of SFB 441 is the empirical data structures that feed into linguistic theory building. SFB 441 consists of several projects, most of which are building corpora to empirically investigate various linguistic phenomena in various languages (e.g. modal verbs in German, forms of address and politeness in Russian). These corpora will form the components of the "Tuebingen collection of reusable, empirical, linguistic data structures (TUSNELDA)". The TUSNELDA annotation standard aims at providing a uniform encoding scheme for all subcorpora and texts of TUSNELDA such that they can be processed with uniform standardized tools. To guarantee maximal reusability, we use XML for encoding. Previous SGML standards for text encoding were provided by the Text Encoding Initiative (TEI) and the Expert Advisory Group on Language Engineering Standards (Corpus Encoding Standard, CES). The TUSNELDA standard is based on TEI and XCES (the XML version of CES) but takes into account the specific needs of the SFB projects, i.e. the peculiarities of the examined languages and linguistic phenomena.
Existing analyses of German scrambling phenomena within TAG-related formalisms all use non-local variants of TAG. However, there are good reasons to prefer local grammars, in particular with respect to the use of the derivation structure for semantics. This paper therefore proposes to use local TDGs, a TAG variant generating tree descriptions that has a local derivation structure. The construction of minimal trees for the derived tree descriptions, however, is not subject to any locality constraint. This provides just the amount of non-locality needed for an adequate analysis of scrambling. To illustrate this, a local TDG for some German scrambling data is presented.
Since the 1990s, the COLTRIMS technique has established itself as a powerful experimental tool in atomic physics and beyond. It is based on detecting the fragments produced in a reaction with position-sensitive detectors. Until now, the detector signals were amplified with an analogue preamplifier and then converted into digital signals using a constant fraction discriminator. The timing information of the digital signals was recorded by time-to-digital converters and stored in a computer. With this form of readout and analysis of the detector signals, only a few fragments can be detected. The solution to this problem is therefore to find a new way of reading out and analysing the signals. This was found in the use of a transient recorder. Instead of storing only the timing information, it records the complete signal shape from the detectors. The task addressed in this thesis was to develop software with which the transient recorder can be controlled. A way also had to be found to store only that information from the recorded time window which is necessary for the experiment. Furthermore, methods were to be presented for analysing the recorded signals and extracting their parameters. These methods were then tested on real signals. After the first chapter presents the motivation for this work and some theoretical background, the second chapter discusses various methods of signal analysis. The focus is on both single-pulse and double-pulse analysis. The quality of the presented algorithms is determined using artificial signals. It turns out that the best method for finding the temporal position of single pulses is the pulse fit.
With this method, a resolution of about 50 ps can be achieved. For double pulses, it turns out that the minimum separation between the pulses must be 5 ns to 7 ns. The third chapter shows an application of the new acquisition system. There, the physical results obtained with the new system are compared with those of a conventional acquisition system. Owing to the lower dead time of the new acquisition system, more statistics could be collected. The resulting advantage shows clearly in the results for which a fourfold coincidence is required. In the experiment described in the next chapter, a very large number of fragments had to be detected. For this purpose, a further criterion besides the time sum is presented with which the anode signals can be assigned to one another. The physical results shown in this chapter present the momentum distributions for neon and helium at different light intensities and for different ionization processes. The following chapter describes how the new acquisition method can be used to analyse the signals coming from the detectors in more detail. The physical reaction led to the detector recording mainly double pulses, which allows the double-pulse algorithms to be examined on real signals. It was found that the dead time for real signals is comparable to the dead time for artificial signals. For separations between the single pulses of less than 10 ns, the algorithms can no longer determine the positions of the pulses accurately. The pulse-height distribution shows that the detector used had a lower detection efficiency in its centre. In the last chapter, the quality of the various methods of single-pulse analysis is checked on real signals. For this purpose, signals from the same detector were amplified with different preamplifiers.
The two preamplifiers differed in their bandwidth limitation. The data were recorded with a transient recorder at 2 GS/s. It is shown how these data can be converted so that they correspond to a system with only 1 GS/s, which makes it possible to compare the quality of the methods for signals from a 2 GS/s system with those from a 1 GS/s system. The pulse-height distribution shows that the signals from the more strongly bandwidth-limited preamplifier are comparable to the artificial signals. The signals from the less bandwidth-limited preamplifier show too strong a dependence of their width on the pulse height. For this reason, the results for the latter preamplifier deviate from the results obtained with artificial signals. For this preamplifier, the simple constant fraction algorithm showed the best resolution.
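To illustrate the class of timing algorithms compared in the thesis, here is a toy digital constant-fraction discriminator in Python, operating on a sampled pulse as a transient recorder would deliver it. This is a generic textbook sketch, not the thesis's implementation: the fraction and delay parameters are illustrative, and a real analysis would add baseline subtraction and noise handling.

```python
import math

def cfd_time(samples, fraction=0.3, delay=3):
    """Toy digital constant-fraction discriminator: build the bipolar signal
    s[i] = fraction * samples[i] - samples[i - delay] and return the linearly
    interpolated position of its first positive-to-negative zero crossing.
    For an ideal pulse shape, this crossing is independent of amplitude."""
    s = [fraction * samples[i] - (samples[i - delay] if i >= delay else 0.0)
         for i in range(len(samples))]
    for i in range(1, len(s)):
        if s[i - 1] > 0 >= s[i]:
            # sub-sample linear interpolation between the two bracketing samples
            return (i - 1) + s[i - 1] / (s[i - 1] - s[i])
    return None  # no crossing found (e.g. no pulse in the trace)

# a Gaussian test pulse; scaling its amplitude must not shift the CFD time
pulse = [math.exp(-((i - 20) / 4.0) ** 2) for i in range(40)]
```

The amplitude independence is the point of the method: unlike a fixed threshold, the crossing time does not walk with pulse height, which is why such algorithms serve as a baseline against the pulse fit.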
Stressorinduzierte ökotoxikologische Effekte und Genexpressionsveränderungen bei Chironomus riparius
(2008)
The effects of stressors on Chironomus riparius were investigated in life-cycle tests and at the genomic level, with the aim of developing a screening procedure based on a DNA microarray ("ChiroChip"). The most sensitive endpoints of the life-cycle tests were mortality, the proportion of fertile egg clutches, the mean emergence time of the females, and the weight of the males. Temperature changes of ±6 °C relative to a normal rearing temperature of 20 °C led to highly significant effects in all endpoints. An LC10 could be calculated only for salinity (0.66‰, CI: 0.26-1.68‰). Owing to the non-linear concentration-response relationships, an EC50 (0.53 mg/kg, CI: 0.29-0.97 mg/kg) could be determined only for the mean emergence time of the females after exposure to cadmium. In the experiments with methyltestosterone, ethinylestradiol, carbamazepine, fluoxetine, lead and tributyltin (for which molecular studies were also carried out), the most sensitive endpoints were mortality, the proportion of fertile egg clutches, the mean emergence time of the females, and the population growth rate. Carbamazepine (CBZ) delayed emergence in the females. 10 mg CBZ/kg led to higher mortality, fewer egg clutches with a higher proportion of infertile ones, and a lower population growth rate. Fluoxetine (FX) delayed emergence in both sexes. 0.9 mg FX/kg led to increased mortality, fewer and more frequently infertile egg clutches, and a lower population growth rate. At the highest concentration (5.9 mg/kg), the females were lighter than the control animals. Tributyltin (given in µg Sn/kg) caused higher mortality and a lower population growth rate at 100 µg Sn/kg and delayed emergence in the females. At 160 µg Sn/kg there were fewer egg clutches, more of which were infertile.
The males exposed to concentrations of 120 and 160 µg Sn/kg were lighter than the control. Exposure to lead (Pb) at concentrations of 0.65-65 mg/kg led, at 6.5 mg Pb/kg, to increased mortality and more infertile clutches. At 0.65 mg Pb/kg the males were lighter and at 6.5 mg Pb/kg heavier. The number of fertile clutches per female was lower than in the control at 3.25 and 6.5 mg Pb/kg. The midges exposed to 17alpha-methyltestosterone (MET) had lower mortalities than the control and showed earlier emergence of both sexes. From 27 µg MET/kg upwards there were fewer infertile clutches, lighter males, and increased population growth rates. 17alpha-ethinylestradiol (EE2) led to earlier emergence in both sexes and to increased population growth rates. At 9 µg EE2/kg there were fewer infertile clutches. Exposure of Chironomus larvae to methyltestosterone, ethinylestradiol, fluoxetine, carbamazepine, tributyltin and lead led to the differential expression of between nine (methyltestosterone) and 49 (carbamazepine) genes. Examination of the expressed proteins revealed that hardly any known stress proteins (e.g. glutathione S-transferase or cytochrome P450) were differentially regulated. During exposure, various processes were affected by altered gene expression. Exposure to methyltestosterone affected three identified biological processes, while seven to eight processes were affected for the other substances. The most frequently affected processes were protein and energy metabolism. Oxygen transport was affected by all substances, although to different extents.
For exposure to methyltestosterone, the share of oxygen transport among the processes involved was the largest, at 84.6%, and for fluoxetine the smallest, at 10.5%. Because of these substance-specific changes, the altered gene expression of the globins may potentially be used as a biomarker for the monitoring of field waters. Since tubulin and actin are frequently differentially expressed after exposure to stressors (for tributyltin and CBZ in the present work, and for antidepressants and estrogens in other studies), these two proteins might likewise be suitable as biomarkers for chemical stress. Before the ChiroChip can be used as a screening instrument for chemical testing and biomonitoring, further studies on concentration-dependent gene expression and on expression in untreated larvae and other life stages are needed. Furthermore, the present data must be verified, and the function of the differentially regulated genes must be investigated in more depth.
In this work, potential members of the protein network around ataxin-2 were investigated in order to draw conclusions about the hitherto unknown function of ataxin-2. Ataxin-2 is the disease protein of spinocerebellar ataxia type 2, a polyglutamine disease in which the expansion of a polyglutamine tract leads to the degeneration of Purkinje neurons. Since the function of ataxin-2 has not yet been determined, the characterization of its protein interactors was intended to provide insights into its function. To this end, the three candidates "Similar to golgin-like" (SIM), TRAP and alpha-actinin-1 were investigated, all of which had been identified by yeast two-hybrid screens. In the case of "Similar to golgin-like", a protein of unproven existence predicted from genomic and cDNA fragments, a mutation-dependent modulation of the binding strength to ataxin-2 could be shown in the yeast two-hybrid system; this, however, could not be reproduced with recombinant proteins in co-immunoprecipitations in mammalian cells. Both proteins colocalized at the ER, independently of the length of the pathogenic polyglutamine tract in ataxin-2. Antibodies raised against a SIM peptide showed exclusive expression in the human brain and were successfully used to detect an endogenous complex of ataxin-2 and SIM-IR in disease-relevant tissue. However, it was not possible to characterize the potential isoforms of SIM in more detail by 5'-RACE and 2D gel mass spectrometry. For the functional analysis of SIM, intracellular transport processes at the Golgi apparatus were examined, but an influence of SIM / ataxin-2 could not be demonstrated. In the case of the interaction candidate TRAP, antibodies were produced and compared with a previously published polyclonal antibody that recognizes TRAP in Rattus norvegicus. The expression patterns showed identical expression in brain and testis.
A colocalization study detected both TRAP and ataxin-2 at the ER. However, all antibodies used proved unsuitable for immunoprecipitations, so the physiological existence of the endogenous TRAP-ataxin-2 complex could not be proven. In the case of the interactor candidate alpha-actinin-1, the interaction with ataxin-2 could be demonstrated both for the endogenous proteins in the cytosolic fraction of mouse brains and for the recombinant proteins in mammalian cells. Both proteins colocalized in the cytosol and, to a smaller extent, at the plasma membrane. GST pulldown analyses identified the responsible subdomains: for ataxin-2, the N-terminal region of the protein near the pathogenic expansion; for alpha-actinin-1, the actin-binding domain. Building on this, the actin cytoskeleton and the dynamics of alpha-actinin-1 in the presence of the ataxin-2 polyglutamine expansion were examined in patient fibroblasts, but no difference was discernible. Following the analysis of the cytoskeleton, a possible influence of ataxin-2 and its polyglutamine expansion on EGF receptor internalization was studied, since a role of ataxin-2 in endocytosis had become likely in parallel analyses by the group. For this purpose, mammalian cells were transfected with alpha-actinin-1 and internalization was followed microscopically and by analysis of ERK1/2 activation. These experiments were complemented by analyses of ERK1/2 activation in patient and control fibroblasts. Contrary to expectations, the overexpression of alpha-actinin-1 had no influence on internalization, and none of the approaches showed a significant change in ERK1/2 activation. Transcriptome findings from SCA2 knockout tissue, according to which individual genes of the cytoskeleton or of receptor endocytosis change their expression, could likewise not be validated consistently.
Finally, experiments on the subcellular localization of ataxin-2 were carried out, which established a localization at the ER rather than, as previously reported, at the Golgi apparatus, and corroborated findings of an association with polyribosomes. Although credible findings thus argue for ataxin-2 binding for all three protein interactor candidates, a functional analysis of the associations is currently not possible, and a clear definition of the physiological role of ataxin-2 cannot be derived from these data, even though the prominent localization of ataxin-2 at the rough endoplasmic reticulum is compatible with an influence of ataxin-2 on ribosomal translation and on secretion into cisternae.
This paper develops a framework for TAG (Tree Adjoining Grammar) semantics that brings together ideas from different recent approaches. Then, within this framework, an analysis of scope is proposed that accounts for the different scopal properties of quantifiers, adverbs, raising verbs and attitude verbs. Finally, by including situation variables in the semantics, different situation-binding possibilities are derived for different types of quantificational elements.
This paper presents an LTAG analysis of reflexives like himself and reciprocals like each other. These items need to find a c-commanding antecedent from which they retrieve (part of) their own denotation and with which they agree syntactically. The relation between the anaphoric item and its antecedent must satisfy important locality conditions (Chomsky, 1981).
Relative quantifier scope in German depends, in contrast to English, very much on word order. The scope possibilities of a quantifier are determined by its surface position, its base position and the type of the quantifier. In this paper we propose a multicomponent analysis for German quantifiers computing the scope of the quantifier, in particular its minimal nuclear scope, depending on the syntactic configuration it occurs in.
This paper investigates the relation between TT-MCTAG, a formalism used in computational linguistics, and RCG. RCGs are known to describe exactly the class PTIME; simple RCGs have even been shown to be equivalent to linear context-free rewriting systems, i.e., to be mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, it is NP-complete. In this paper, we put an additional limitation on the derivations licensed in TT-MCTAG. We show that TT-MCTAG with this additional limitation can be transformed into equivalent simple RCGs. This result is interesting for theoretical reasons (since it shows that TT-MCTAG in this limited form is mildly context-sensitive) and also for practical reasons: we use the proposed transformation from TT-MCTAG to RCG in an actual parser that we have implemented.
This paper sets up a framework for LTAG (Lexicalized Tree Adjoining Grammar) semantics that brings together ideas from different recent approaches addressing some shortcomings of TAG semantics based on the derivation tree. Within this framework, several sample analyses are proposed, and it is shown that the framework makes it possible to analyze data that have been claimed to be problematic for derivation tree based LTAG semantics approaches.
LTAG semantics for questions
(2004)
This paper presents a compositional semantic analysis of interrogative clauses in LTAG (Lexicalized Tree Adjoining Grammar) that captures the scopal properties of wh- and non-wh-quantificational elements. It is shown that the present approach derives the correct semantics for examples claimed to be problematic for LTAG semantic approaches based on the derivation tree. The paper further provides an LTAG semantics for embedded interrogatives.
Our paper aims at capturing the distribution of negative polarity items (NPIs) within Lexicalized Tree Adjoining Grammar (LTAG). An NPI can occur in a sentence only if it is in the scope of a negation with no quantifiers scopally intervening. We model this restriction within a recent framework for LTAG semantics based on semantic unification. The proposed analysis provides features that signal the presence of a negation in the semantics and that specify its scope. We extend our analysis to model the interaction of NPI licensing and neg raising constructions.
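The licensing condition stated in this abstract (negation above the NPI, no quantifier in between) lends itself to a small executable check. This is a hypothetical sketch over a flat scope-order list, not the paper's unification-based mechanism; all names are invented:

```python
# Sketch of the NPI licensing condition: given a scope order listed
# from widest to narrowest scope, an NPI is licensed iff the closest
# scopal operator above it that matters is a negation, i.e. some "neg"
# precedes it with no quantifier (here: "Q:"-prefixed) in between.

def npi_licensed(scope_order, npi="ever"):
    if npi not in scope_order:
        return False
    pos = scope_order.index(npi)
    for i in range(pos - 1, -1, -1):
        item = scope_order[i]
        if item == "neg":
            return True       # negation found before any quantifier
        if item.startswith("Q:"):
            return False      # a quantifier scopally intervenes
    return False              # no licensing negation at all

print(npi_licensed(["neg", "ever"]))              # True
print(npi_licensed(["neg", "Q:every", "ever"]))   # False: "every" intervenes
print(npi_licensed(["Q:every", "neg", "ever"]))   # True: quantifier outscopes neg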
This paper addresses the problem of constraints for relative quantifier scope, in particular in inverse linking readings where certain scope orders are excluded. We show how to account for such restrictions in the Tree Adjoining Grammar (TAG) framework by adopting a notion of flexible composition. In the semantics we use for TAG, we introduce quantifier sets that group quantifiers that are "glued" together in the sense that no other quantifier can scopally intervene between them. The flexible composition approach allows us to obtain the desired quantifier sets and thereby the desired constraints for quantifier scope.
In this paper we will explore the similarities and differences between two feature logic-based approaches to the composition of semantic representations. The first approach is formulated for Lexicalized Tree Adjoining Grammar (LTAG, Joshi and Schabes 1997); the second is Lexical Resource Semantics (LRS, Richter and Sailer 2004) and was first defined in Head-driven Phrase Structure Grammar. The two frameworks have several common characteristics that make them easy to compare: 1. They use languages of two-sorted type theory for semantic representations. 2. They allow underspecification: LTAG uses scope constraints, while LRS provides component-of constraints. 3. They use feature logics for computing semantic representations. 4. They are designed for computational applications. By comparing the two frameworks we will also point out some characteristics and advantages of feature logic-based semantic computation in general.
TT-MCTAG lets one abstract away from the relative order of co-complements in the final derived tree, which is more appropriate than classic TAG when dealing with flexible word order in German. In this paper, we present the analyses for sentential complements, i.e., wh-extraction, that-complementation and bridging, and we work out the crucial differences between these and the respective accounts in XTAG (for English) and V-TAG (for German).
In this paper we propose a compositional semantics for lexicalized tree-adjoining grammar (LTAG). Tree-local multicomponent derivations allow separation of the semantic contribution of a lexical item into one component contributing to the predicate argument structure and a second component contributing to scope semantics. Based on this idea a syntax-semantics interface is presented where the compositional semantics depends only on the derivation structure. It is shown that the derivation structure (and indirectly the locality of derivations) allows an appropriate amount of underspecification. This is illustrated by investigating underspecified representations for quantifier scope ambiguities and related phenomena such as adjunct scope and island constraints.
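The underspecified scope representations this abstract mentions can be caricatured by a toy enumeration: a partial set of dominance constraints stands in for the underspecified representation, and the readings are exactly the total scope orders compatible with it. A hypothetical sketch only, with invented names, not the paper's formalism:

```python
# Toy illustration of scope underspecification: constraints are pairs
# (a, b) meaning "a outscopes b"; the readings of a sentence are the
# total orders over its quantifiers that satisfy every constraint.

from itertools import permutations

def readings(quantifiers, constraints):
    out = []
    for order in permutations(quantifiers):
        if all(order.index(a) < order.index(b) for a, b in constraints):
            out.append(order)
    return out

# "Every student read a book": with no constraints, both scopings survive.
print(readings(["every_student", "a_book"], set()))
# A constraint (e.g. derived from an island) pins down a single reading:
print(readings(["every_student", "a_book"], {("every_student", "a_book")}))
```

With two quantifiers and no constraints the call yields both orders; adding the single constraint leaves exactly one, which is the sense in which the derivation structure can supply "an appropriate amount of underspecification".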
In this paper, we introduce an extension of the XMG system (eXtensible MetaGrammar) in order to allow for the description of Multi-Component Tree Adjoining Grammars. In particular, we introduce the XMG formalism and its implementation, and show how the latter makes it possible to extend the system relatively easily to different target formalisms, thus opening the way towards multi-formalism.
Developing linguistic resources, in particular grammars, is known to be a complex task in itself, because of (amongst others) redundancy and consistency issues. Furthermore, some languages can prove hard to describe because of specific characteristics, e.g. the free word order in German. In this context, we present (i) a framework for describing tree-based grammars, and (ii) an actual fragment of a core multicomponent tree-adjoining grammar with tree tuples (TT-MCTAG) for German developed using this framework. This framework combines a metagrammar compiler and a parser based on range concatenation grammar (RCG) to check, respectively, the consistency and the correctness of the grammar. The German grammar being developed within this framework already deals with a wide range of scrambling and extraction phenomena.
The TUSNELDA standard: a corpus annotation standard supporting linguistic research
(2001)
The use of standards for the annotation of larger collections of electronic texts (corpora) is a prerequisite for the potential reuse of these corpora. This article presents a corpus annotation standard that takes into account the requirements of investigating a wide variety of linguistic phenomena. The standard was developed in SFB 441 at the University of Tübingen. It builds on existing standards, in particular CES and TEI, which prove in part too verbose and insufficiently restrictive, and in part not expressive enough, to meet the needs of corpus-based linguistic research.
This article investigates the relation between multicomponent tree adjoining grammars with tree tuples (TT-MCTAG), a formalism used in computational linguistics, and range concatenation grammars (RCG). RCGs are known to describe exactly the class PTIME; it has moreover been shown that "simple" RCGs are even equivalent to linear context-free rewriting systems (LCFRS), in other words, that they are mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, these languages are NP-complete. In this article, we define an additional constraint on the derivations licensed by the TT-MCTAG formalism. We then show how this restricted form of TT-MCTAG can be converted into an equivalent simple RCG. The result is interesting for theoretical reasons (since it shows that the restricted form of TT-MCTAG is mildly context-sensitive), but also for practical reasons (the transformation proposed here has been used to implement a parser for TT-MCTAG).
This paper compares two approaches to computational semantics, namely semantic unification in Lexicalized Tree Adjoining Grammars (LTAG) and Lexical Resource Semantics (LRS) in HPSG. There are striking similarities between the frameworks that make them comparable in many respects. We will exemplify the differences and similarities by looking at several phenomena. We will show, first of all, that many intuitions about the mechanisms of semantic computations can be implemented in similar ways in both frameworks. Secondly, we will identify some aspects in which the frameworks intrinsically differ due to more general differences between the approaches to formal grammar adopted by LTAG and HPSG.
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity (WGC)) or ii) the tree languages it generates (strong generative capacity (SGC)). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic since the sets of trees a grammar formalism for natural languages should be able to generate are difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate argument dependencies as a criterion for adequacy for natural languages.
In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.
Multicomponent Tree Adjoining Grammar (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. Looking only at the result of a derivation (i.e., the derived tree and the derivation tree), this simultaneity is no longer visible and therefore cannot be checked; that is, this way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. Therefore, in this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees the MCTAG licenses.
Multicomponent Tree Adjoining Grammar (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. This way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. In this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees (in the underlying TAG) the MCTAG licenses. This definition gives a better understanding of the formalism, allows a more systematic comparison of different types of MCTAG, and, furthermore, can be exploited for parsing.
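The shift of perspective described in these abstracts, from tracking the derivation order to checking a property of the finished derivation, can be caricatured in a few lines. This is a deliberately simplified, hypothetical sketch (real MCTAG conditions also involve locality and which nodes the members attach to), with invented names:

```python
# Sketch of an order-free simultaneity check: rather than verifying that
# set members were added "at the same time" during derivation, inspect
# the multiset of elementary trees used in a finished derivation and
# require that, for each tree set, all members occur equally often,
# i.e. the usage can be grouped into whole set instances.
# (Simplifying assumption: no elementary tree belongs to two sets.)

from collections import Counter

def simultaneity_ok(used_trees, tree_sets):
    counts = Counter(used_trees)
    for members in tree_sets:
        uses = {counts.get(m, 0) for m in members}
        if len(uses) > 1:   # members of one set used unequally often
            return False
    return True

# Two complete instances of the set {alpha, beta}: acceptable.
print(simultaneity_ok(["alpha", "beta", "alpha", "beta"],
                      [("alpha", "beta")]))       # True
# alpha used without its co-member beta: ruled out.
print(simultaneity_ok(["alpha"], [("alpha", "beta")]))  # False
```

The point of the sketch is only that the condition is stated over the derivation's result, so it remains checkable no matter in which order the trees were combined.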
Until well into the 1970s, the theory of language learning and teaching was a "master apprenticeship" (Müller-Michaels 1980). Great role models of a people (e.g. Moses), heads of philosophical schools (e.g. Plato) or abbots of monasteries (e.g. Augustine), and finally state-certified senior school directors (e.g. Ulshöfer) described to their younger colleagues what had proven itself in language teaching over decades: how best to conduct language instruction (Müller 1922, Seidemann 1973, Ulshöfer 1968, Essen 1968). With the establishment of language didactics at the universities, the concept of the "norm-setting action sciences" (Müller-Michaels 1980, Ivo 1975) was developed. The researcher (no longer credentialed as a master of practice) investigates the processes of language teaching and learning by conducting surveys in the practitioner's "field" and then subjecting the collected data to hypothesis testing. The school in particular is considered as the field of action. The research methods are predominantly "quasi-experimental". Following Chomsky's theory of language (Chomsky 1965), experimental approaches to the study of language acquisition, language acquisition disorders and the corresponding interventions were developed (de Villiers/de Villiers 1970, Hörmann 1978). The site of investigation is the laboratory. The design of this language didactics (or psycholinguistics, cognitive science, etc.) is experimental (e.g. Herrmann 2004). All three concepts stand in antagonistic opposition to one another in many respects. Keeping them apart, and at the same time relating them to one another productively, is one of the basic skills of the linguosomatic professions and their underlying theory (for example the language teaching professions, phoniatrics, special education for speech and language impairment, psychosomatic speech therapies).
Therefore, the significant contrasts between the three concepts must be worked out, and their conflicting consequences must be related to one another.
The present work reports two experiments on brain electric correlates of cognitive and emotional functions. (1) Studying paranormal belief, 35-channel resting EEG (10 believers and 13 skeptics) was analyzed with "Low Resolution Electromagnetic Tomography" (LORETA) in seven frequency bands. LORETA gravity centers of all bands shifted to the left in believers vs. skeptics, and showed that believers had stronger left fronto-temporo-parietal activity than skeptics. Self-rating of affective attitude showed believers to be less negative than skeptics. The observed EEG lateralization agreed with the ‘valence hypothesis’ that posits predominant left hemispheric processing for positive emotions. (2) Studying emotions, positive and negative emotion words were presented to 21 subjects while "Event-Related Potentials" (ERPs) were recorded. During word presentation (450 ms), 13 microstates (steps of information processing) were identified. Three microstates showed different potential maps for positive vs. negative words; LORETA functional imaging showed stronger activity in microstate #4 (106-122 ms) for positive words right anterior, for negative words left central; in #6 (138-166 ms) for positive words left anterior, for negative words left posterior; in #7 (166-198 ms), for positive words right anterior, for negative words right central. In conclusion: during word processing, the extraction of emotion content starts as early as 106 ms after stimulus onset; the brain identifies emotion content repeatedly in three separate, brief microstate epochs; and this processing of emotion content in the three microstates involves different brain mechanisms to represent the distinction between positive and negative valence.
This paper examines the development of periphrastic constructions involving auxiliary "have" and "be" with a past participle in the history of English, on the basis of parsed electronic corpora. It is argued that the two constructions represented distinct syntactic and semantic structures: while the one with have developed into a true perfect in the course of Middle English, the one with be remained a stative resultative throughout its history. In this way, it is explained why the be construction was rarely or never used in a number of contexts, including past counterfactuals, iteratives, duratives, certain kinds of infinitives and various other utterance types that cannot be characterized as perfects of result. When the construction with have became a true perfect, it was used in such contexts, regardless of the identity of the main verb, leading to the appearance of have with verbs like come which had previously only taken be. Crucially, however, have was not spreading at the expense of be, as the be perfect had never been used in such contexts, but rather at the expense of the old simple past. At least until the end of the Early Modern English period, the shift in the relative frequency of have and be perfects is to be explained in terms of the expansion of the former into new contexts, while the latter remained stable. A formal analysis is proposed, taking as its starting point a comparison with German which shows that the older English be perfect indeed behaves more like the German stative passive than its haben and sein perfects.
In this paper, we will argue for a novel analysis of the auxiliary alternation in Early English, its development and subsequent loss which has broader consequences for the way that auxiliary selection is looked at cross-linguistically. We will present evidence that the choice of auxiliaries accompanying past participles in Early English differed in several significant respects from that in the familiar modern European languages. Specifically, while the construction with have became a full-fledged perfect by some time in the ME period, that with be was actually a stative resultative, which it remained until it was lost. We will show that this accounts for some otherwise surprising restrictions on the distribution of BE in Early English and allows a better understanding of the spread of HAVE through late ME and EModE. Perhaps more importantly, the Early English facts also provide insight into the genesis of the kind of auxiliary selection found in German, Dutch and Italian. Our analysis of them furthermore suggests a promising strategy for explaining cross-linguistic variation in auxiliary selection in terms of variation in the syntactico-semantic structure of the perfect. In this introductory section, we will first provide some background on the historical situation we will be discussing, then we will lay out the main claims for which we will be arguing in the paper.
In April 2002 the European Central Bank (ECB) and the Center for Financial Studies (CFS) launched the ECB-CFS Research Network to promote research on “Capital Markets and Financial Integration in Europe”. The ECB-CFS research network aims at stimulating top-level and policy-relevant research, significantly contributing to the understanding of the current and future structure and integration of the financial system in Europe and its international linkages with the United States and Japan. This report summarises the work done under the network after two years. Over time the network formed a coherent and growing group of researchers interested in the integration of European financial markets, while using light organisational structures and budgets. The members of this evolving group met repeatedly at the events organised by the network to present the latest results of their research and to share views on policy options. In this sense, the “network of people” intended at the start was created. Overall, the network aroused great interest, as leading academic researchers, researchers from the main policy institutions and high-level policy makers participated actively in it by presenting research results, through speeches and in policy panels. It also stimulated a new research field on securities settlement systems, an area of high policy relevance and interest to the ECB that had not attracted much interest in the research community beforehand. Also, the network seems to have triggered several related outside initiatives by international institutions, such as the IMF or the OECD. During its first two years the network was organised around three workshops and a final symposium on 10-11 May 2004. 
To focus research resources and to ensure medium-term policy relevance, a limited number of areas have been given top priority: bank competition and the geographical scope of banking; international portfolio choices and asset market linkages between Europe, the United States and Japan; European bond markets; European securities settlement systems; and the emergence and evolution of new markets in Europe (in particular start-up financing markets). In order to stimulate further research focused on the priority fields of the network, the ECB Lamfalussy research fellowships were established. These fellowships sponsor projects proposed by young researchers, both advanced doctoral students and younger professors. Five Lamfalussy fellowships were granted in 2003 and five more in 2004. The first papers from this program have already been issued in the ECB working paper series or are forthcoming. One of them won the prize for the best paper written by a Ph.D. student at the 2004 European Finance Association Meetings in Maastricht. Results of the network in the five top priority areas can be summarised as follows: Bank competition and the geographical scope of banking. First, integration does not appear to be very advanced in many retail banking markets. Second, some of the inherent characteristics of traditional loan and deposit business constrain the cross-border expansion of commercial banking, even in a common currency area. Hence, the implementation of some policies to foster cross-border integration in retail banking may be ineffective. Third, theoretical research suggests that supervisory structures may not be neutral towards further European banking integration. Finally, a stronger role of area-wide competition policies could be beneficial for further banking integration. This would also stimulate economic growth, as more competition in the banking sector induces financially dependent firms to grow more. European bond markets.
While the government bond market has integrated rapidly with the EMU convergence process, its full integration has not yet been achieved. The introduction of a common electronic trading platform reduced transaction costs substantially, but yield spreads of long-term sovereign bonds of the euro area are still heterogeneous. This is largely explained by different sensitivities to an international risk factor, whereas liquidity differentials only play a role in conjunction with this latter factor. Somewhat surprisingly in this context, the dynamically developing corporate bond market exhibits a relatively high level of integration. There is also increasing evidence that the introduction of the euro has contributed to a reduction in the cost of capital in the euro area, in particular through the reduction of corporate bond underwriting fees. As a result, firms may wish to increase bond financing relative to equity financing. The development of a larger corporate bond market is also important for monetary policy. For example, US evidence suggests that the rating of corporate bonds may contribute to the persistence of recessions, as rating agencies' policies affect firms asymmetrically in their access to the bond market over the business cycle. US evidence also suggests that liquidity conditions in stock and bond markets tend to be positively correlated. European securities settlement systems. European securities settlement infrastructures are highly fragmented, and further integration and/or consolidation would exploit economies of scale that could greatly benefit investors. It is not clear, however, whether direct public intervention in favour of consolidation would lead to the highest level of efficiency, for example because of the existence of strong vertical integration between trading and securities platforms ("silos"). In contrast, promoting open access to clearing and settlement systems could lead to consolidation and the highest level of efficiency.
Finally, regarding concerns about unfair practices by Central Securities Depositories (CSDs) toward custodian banks, regulatory interventions favouring custodian banks should be discouraged, as long as CSDs are not allowed to price discriminate between custodian banks and investor banks. The emergence and evolution of new markets in Europe (in particular start-up financing markets). While fairly well integrated, “new markets” and start-up financing are less developed and integrated in Europe than in the United States. However, new markets and venture capitalists are the most important intermediaries for the financing of projects with high risk but with potentially very high return. The analysis carried out within the network reveals that European start-up financiers are mostly institutional investors, while US venture capitalists are mostly rich individuals. Also, new markets are essential for the development of start-up finance in Europe, as they provide an exit strategy for start-up financiers who can then sell new successful projects using initial public offerings. Finally, the legal framework affects the development of venture capital firms. For example, very strict personal bankruptcy laws constrain early stage entrepreneurs, reducing demand for venture capital finance. International portfolio choices and asset market linkages between Europe, the United States and Japan. At a global scale, asset market linkages have increased recently. For example, major economies such as the United States and the euro area have become more financially interdependent. This phenomenon can be observed in stock and bond markets as well as in money markets, where the main direction of spillovers has recently been from the US to the euro area. Country-specific shocks now play a smaller role in explaining stock return variations of firms whose sales are internationally diversified. 
Increases in firm-by-firm market linkages are a global phenomenon, but they are stronger within the euro area than in the rest of the world. Various other phenomena also increase market linkages and therefore the likelihood that financial shocks spread across countries. One example is the use of global bonds. Finally, the nowadays more direct access of unsophisticated investors to financial markets may increase volatility. Other areas. Financial integration affects financial structures, but it does not need to lead to their convergence across countries. Financial structures matter for growth, as market-oriented financial systems benefit all sectors and firms, whereas bank-based systems primarily benefit younger firms that depend on external finance. Moreover, good corporate governance increases firms’ value. In particular, the dual board system, where the monitoring and advising roles of the board of directors are separated, is found to dominate the single board structure. Therefore, the further development of the European single market should strongly encourage good corporate governance. In general, well designed institutions foster entrepreneurial activity, partly by relaxing capital constraints. The results of the network clearly illustrated the substantial effects the introduction of the euro had on euro area financial markets. In addition to the effects on bond markets, stock markets and the cost of capital summarised above, research produced showed that the single currency had its strongest effects on money markets, whose unsecured segment is now completely integrated. Without any doubt the euro generally enhanced the liquidity and efficiency of euro area financial markets, and ongoing initiatives such as the European Union’s Financial Services Action Plan will help to continue this process. In sum, in the first two years the network has established itself as the hub for the research debate on European financial integration.
Some of the best papers produced by the network, leading to the conclusions mentioned above, are currently being considered for publication in two special issues of academic journals. An issue of the Oxford Review of Economic Policy on “European financial integration” is published contemporaneously with this report, and an issue of the Review of Finance is planned for next year. The current policy context, the gradual progress of integration as well as the creation of other related non-ECB or non-CFS initiatives on financial integration suggest that this topic will remain high on the agendas of policy makers and academics for the years to come. Therefore, the ECB Executive Board and the CFS decided to continue the network, refocusing its priorities. Three priority areas have been added: 1) the relationship between financial integration and financial stability, 2) EU accession, financial development and financial integration, and 3) financial system modernisation and economic growth in Europe. These three areas have become particularly important at the current juncture, but did not receive particularly strong attention in the first two years of the network. For example, the area of financial stability research was highlighted by the ECB research evaluators as an area deserving further development. Moreover, despite the results found in the first two years of the network, new developments remain to be further explored in the earlier priority areas. A three-year extension is envisaged, running from after the May 2004 symposium until 2007, with two events to be held per year. The three-year period is long enough to consider the first effects of the Financial Services Action Plan. It also constitutes a realistic horizon for the ambitious agenda implied by the three new priorities. The generally light organisational structure and working of the network will not be changed.
In addition, given the value of the Lamfalussy fellowship research program in inducing further research in the areas of the network, the program has also been extended for all the research topics in the area of the network.
Women and Halakha Shiur
(2008)
This essay examines the foreign policy discourse in contemporary Germany. In reviewing a growing body of publications by German academics and foreign policy analysts, it identifies five schools of thought based on different worldviews, assumptions about international politics, and policy recommendations. These schools of thought are then related to, first, actual preferences held by German policymakers and the public more generally and, second, a small set of grand strategies that Germany could pursue in the future. It argues that the spectrum of likely choices is narrow, with the two most probable, the strategies of "Wider West" and "Carolingian Europe", continuing the multilateral and integrationist orientation of the old Federal Republic. These findings are contrasted with diverging assessments in the non-German professional literature. Finally, the essay sketches avenues for future research by suggesting ways for broadening the study of country-specific grand strategies, developing and testing inclusive typologies of more abstract foreign policy strategies, and refining the analytical tools used in examining foreign policy discourses in general.
Im Rahmen dieser Arbeit wurden ausgewählte 5’- und 3’-untranslatierte Regionen (UTRs) von mRNAs aus H. volcanii bestimmt. Dieses Datenset wurde verwendet um (1) haloarchaeale UTRs zu charakterisieren, (2) Konsensuselemente für die Transkrikptionsinitiation und -termination zu verifizieren und (3) den Einfluss haloarchaealer UTRs auf die Initiation und Regulation der Translation zu untersuchen. Es konnte gezeigt werden, dass alle untersuchten Transkripte nichtprozessierte 3’-UTRs mit einer durchschnittlichen Länge von 45 Nukleotiden besitzen. Darüber hinaus konnte ein putatives Transkriptionsterminationssignal bestehend aus einem pentaU-Motiv mit vorausgehender Haarnadelstruktur identifiziert werden. Die Analysen der Regionen stromaufwärts der experimentell bestimmten Transkriptionsstarts führten zur Identifizierung dreier konservierter Promotor Elemente: Der TATA-Box, dem BRE-Element und einem neuen Element an Position -10/-11. Überraschenderweise bestand die TATA-Box nur aus vier konservierten Nukleotiden. Die Untersuchung der UTRs ergab, dass die größte Anteil der haloarchaealen Transkripte keine 5’-UTR besitzt. Falls eine 5’-UTR vorhanden ist, besitzen unerwarteterweise nur 15% der 5’-UTRs aus H. volcanii eine Shine-Dalgarno-Sequenz (SD-Sequenz). Es konnte jedoch gezeigt werden, dass verschiedene native und artifizielle 5’-UTRs ohne SD-Sequenz sehr effizient in vivo translatiert werden. Außerdem hat die Sekundärstruktur der 5’-UTR und die Position struktureller Elemente offenbar einen entscheidenden Einfluss auf die Translatierbarkeit von Transkripten. Die Insertion von Strukturelementen nahe des Startkodons führte zu einer vollkommenen Repression der Translation, während die proximale Insertion des Motivs an das 5’-Ende der 5’-UTR keinen Einfluss auf die Translationsseffizienz hatte. 
In summary, both the eukaryotic scanning mechanism and the bacterial, SD-sequence-mediated initiation of translation can be ruled out for haloarchaeal transcripts carrying a 5'-UTR without an SD sequence. The investigations carried out in this work provide the basis for further studies aimed at identifying a corresponding third mechanism of translation initiation in H. volcanii. A recent global analysis of translational regulation showed that the proportion of translationally regulated genes in H. volcanii is as high as in eukaryotes (Lange et al., 2007). To characterize the role of haloarchaeal UTRs in the regulation of translation, the UTRs of two selected translationally regulated genes were examined. It turned out that only the presence of both UTRs, 5' and 3', leads to growth-phase-dependent regulation of translation. The 3'-UTR alone has no influence on translation efficiency, whereas the 5'-UTR reduces translation efficiency in both growth phases. It was further shown that the 3'-UTR determines the "direction" of regulation at the translational level and that putative structural elements may be involved in the regulatory mechanism. Taken together, the following model of translational regulation in H. volcanii emerges: structured 5'-UTRs lower the constitutive translation efficiency, and this can be differentially compensated by regulatory factors that bind specific elements of the 3'-UTR. Both natural and artificial aptamers as well as allosteric ribozymes are effective tools for exogenously controlled gene expression. Therefore, the applicability of a tetracycline-inducible aptamer and a constitutive hammerhead ribozyme in H. volcanii was examined.
It turned out, however, that the aptamer forms strongly inhibitory secondary structures even in the absence of tetracycline. As an alternative, reporter gene fusions with a self-cleaving hammerhead ribozyme were constructed. The self-cleaving activity of the hammerhead ribozyme in H. volcanii was successfully demonstrated in vivo, providing the basis for the development of conditional expression systems based on the hammerhead ribozyme in H. volcanii.
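The SD-sequence survey summarized above (only 15% of 5'-UTRs carrying an SD motif) can be illustrated with a minimal motif scan. This is a hedged sketch, not the thesis's analysis pipeline: the anti-SD core "GGAGG", the 4-nt minimum match, the 10-nt window, and the example UTR sequences are all illustrative assumptions.

```python
def has_sd_motif(utr, core="GGAGG", min_match=4, window=10):
    """Return True if a fragment of the anti-SD core (>= min_match nt)
    occurs within the last `window` nucleotides of the 5'-UTR."""
    region = utr[-window:]
    # try the longest core fragments first, down to the minimum match length
    for length in range(len(core), min_match - 1, -1):
        for start in range(len(core) - length + 1):
            if core[start:start + length] in region:
                return True
    return False

# hypothetical example UTRs (RNA alphabet); the real data set is not reproduced here
utrs = {"orfA": "AUCGGAGGUUAAC", "orfB": "UUCAUCACUAUUC"}
sd_fraction = sum(has_sd_motif(u) for u in utrs.values()) / len(utrs)
print(f"fraction of UTRs with SD-like motif: {sd_fraction:.2f}")
```

A real survey would additionally control for spacing between motif and start codon; the window parameter is only a crude stand-in for that constraint.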
Cellular metabolism can be envisaged by fluorescence lifetime imaging of fluorophores sensitive to specific intracellular factors such as [H+], [Ca2+], [O2], membrane potential, temperature, polarity of the probe environment, and alterations in the conformation and interactions of macromolecules. Lifetime measurements of the probes allow the quantitative determination of these intracellular factors. Fluorescence microscopy taking advantage of time-correlated single photon counting is a novel method that outperforms other techniques through its single-photon sensitivity and picosecond time resolution. In this work, a time- and space-correlated single photon counting system was established to investigate the behavior of 2-(4-(dimethylamino)styryl)-1-methylpyridinium iodide (DASPMI) in living cells. DASPMI is known to selectively stain mitochondria in living cells, and its uptake and fluorescence intensity in mitochondria are a dynamic measure of membrane potential. Hence, an endeavour was made to elucidate the mechanism of DASPMI fluorescence by obtaining spectrally resolved fluorescence decays in different solvents. A bi-exponential decay model was sufficient to globally describe the wavelength-dependent fluorescence in ethanol and chloroform, whereas in glycerol a three-exponential decay model was necessary for global analysis. In the polar, low-viscosity solvent water, a mono-exponential decay model fitted the decay data. The sensitivity of DASPMI fluorescence to solvent viscosity was analysed using various proportions of glycerol/ethanol mixtures; the lifetimes were found to increase with increasing solvent viscosity. The negative amplitudes of the short lifetime component found in chloroform and glycerol at longer wavelengths validated the formation of a new excited-state species from the initially excited state. Time-resolved emission spectra in chloroform and glycerol showed a biphasic increase of spectral width and emission maxima.
The spectral width showed an initial fast increase within 150 ps and remained nearly constant thereafter. A two-state model based on solvation of the initially excited state and subsequent formation of a TICT state has been proposed to explain the excited-state kinetics and has been substantiated by the decomposition of the time-resolved spectra. The knowledge of DASPMI photophysics in a variety of solvents now provides the means of deducing complex physiological parameters of mitochondria from its behavior in living cells. Spatially resolved fluorescence decays from single mitochondria or only very few organelles of XTH2 cells showed the distinctive three-exponential decay kinetics of a viscous environment. Based on DASPMI photophysics in a variety of solvents, these lifetimes have been attributed to fluorescence from the locally excited state (LE), the intramolecular charge transfer state (ICT), and the twisted intramolecular charge transfer (TICT) state. A considerable variation in lifetime among mitochondria of different morphology and within a single cell was evident, corresponding to the high physiological variation within single cells. The considerable shortening of the short lifetime component (τ1) under high-membrane-potential conditions, such as in the presence of ATP and/or substrate, was similar to quenching and to the dramatic decrease of lifetime in polar solvents. Under these conditions τ2 and τ3 increased while their contribution decreased. Upon treatment with the ionophore nigericin, hyperpolarization of mitochondria resulted in a remarkable shortening of τ1 from 159 ps to 38 ps. Inhibiting respiration by cyanide resulted in a notable increase of the mean lifetime and a decrease of mitochondrial fluorescence. The increase of DASPMI fluorescence under conditions elevating mitochondrial membrane potential has been attributed to uptake according to the Nernst distribution, to delocalisation of π electrons, to quenching processes of the methyl pyridinium moiety, and to restricted torsional dynamics at the mitochondrial inner membrane.
Accordingly, determination of anisotropy in DASPMI-stained mitochondria in living XTH2 cells revealed a dependence of anisotropy on membrane potential. These changes in anisotropy, attributed to restriction of the torsional dynamics about the flexible single bonds neighboring the olefinic double bond, revealed the previously known sub-mitochondrial zones of higher membrane potential along the mitochondrial length. Membrane-potential-dependent changes in anisotropy were further demonstrated in senescent chick embryo fibroblasts. In conclusion, spectroscopic observations of the excited-state kinetics of DASPMI in solvents and of its behavior in living cells revealed for the first time its localisation, the mechanism of its voltage-sensitive fluorescence, and its membrane-potential-dependent anisotropy in living cells. The simultaneous dependence of DASPMI photophysics on mitochondrial inner-membrane viscosity and transmembrane potential has been highlighted.
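The multi-exponential decay analysis described above can be sketched in code. This is a minimal illustration, not the thesis's TCSPC fitting software: it fits a bi-exponential model by variable projection over a coarse candidate-lifetime grid, using synthetic noise-free data with lifetimes chosen for illustration (not measured DASPMI values).

```python
import numpy as np

def fit_biexp(t, y, tau_grid):
    """Fit y(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2): for each candidate
    lifetime pair on the grid, solve linearly for the amplitudes and keep
    the pair with the smallest squared residual (variable projection)."""
    best = None
    for i, tau1 in enumerate(tau_grid):
        for tau2 in tau_grid[i + 1:]:
            A = np.column_stack([np.exp(-t / tau1), np.exp(-t / tau2)])
            amps, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = float(((A @ amps - y) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, tau1, tau2, amps)
    _, tau1, tau2, amps = best
    return tau1, tau2, amps

# synthetic decay with illustrative lifetimes in ns
t = np.linspace(0.0, 5.0, 200)
y = 0.7 * np.exp(-t / 0.16) + 0.3 * np.exp(-t / 1.5)

tau1, tau2, amps = fit_biexp(t, y, [0.05, 0.16, 0.5, 1.5, 3.0])
print(tau1, tau2, amps)  # recovers the two lifetimes and their amplitudes
```

In practice, TCSPC analysis must additionally convolve the model with the instrument response function and refine the lifetimes continuously (e.g. by nonlinear least squares) rather than on a fixed grid; the grid search here only conveys the structure of the problem.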
The most striking feature of German foreign policy since 1990 is the continuity of the rhetoric of continuity. Helmut Kohl deployed it after winning the Bundestag election in December 1990 just as Gerhard Schröder did after his victory in the autumn of 1998. However much the republic might change internally, and however dramatically its external environment might shift, the fundamental constants of German foreign policy were to remain the same. Politically, there were and are almost invariably good reasons for this rhetoric: given the unanimously acknowledged "success story" of the Federal Republic's foreign policy on the one hand and, on the other, distinct concerns abroad that unification might put an end to it, everything spoke for invoking a continuation of the old even as much was changing. Talk of the continuity of West German foreign policy also found a grateful audience at home and abroad, for it told of a good old era of the "tranquillity" and "modesty" of the old Federal Republic, which today, as the "Bonn Republic", is placed almost in historical proximity to the "Weimar Republic". ...
Hurra-Multilateralismus
(2001)
Is there such a thing as a "conservative foreign policy"? The first answer that comes to mind is the one Joschka Fischer gave to a comparable question right after taking office as the new foreign minister: no, "there is no Green foreign policy, only a German one". On this view, classic ideological convictions, which in domestic political competition are sorted into opposing labels such as "conservative" and "progressive", cannot be transferred to the field of foreign policy. This was precisely the position taken by Kaiser Wilhelm when, shortly after the outbreak of the First World War, he proclaimed: "I no longer know any party, I know only Germans" ...