This paper compares two approaches to computational semantics, namely semantic unification in Lexicalized Tree Adjoining Grammars (LTAG) and Lexical Resource Semantics (LRS) in HPSG. There are striking similarities between the frameworks that make them comparable in many respects. We will exemplify the differences and similarities by looking at several phenomena. We will show, first of all, that many intuitions about the mechanisms of semantic computations can be implemented in similar ways in both frameworks. Secondly, we will identify some aspects in which the frameworks intrinsically differ due to more general differences between the approaches to formal grammar adopted by LTAG and HPSG.
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity (WGC)) or ii) the tree languages it generates (strong generative capacity (SGC)). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic since the sets of trees a grammar formalism for natural languages should be able to generate are difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate argument dependencies as a criterion for adequacy for natural languages.
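The criterion of strings paired with predicate–argument dependencies can be illustrated with a toy example (a hypothetical sketch, not from the paper): the same string language a^n b^n can be assigned either nested or crossing dependencies, so weak generative capacity alone cannot distinguish the two analyses.

```python
# Hypothetical illustration: the same string a^n b^n under two
# different dependency assignments. A formalism adequate for natural
# language must generate the string *with* the intended pairing,
# not just the string itself.

def string(n):
    # the string a^n b^n, identical under both analyses
    return "a" * n + "b" * n

def nested_pairs(n):
    # a_i depends on b_{n+1-i}: dependencies are properly nested
    return [(i, 2 * n + 1 - i) for i in range(1, n + 1)]

def crossing_pairs(n):
    # a_i depends on b_i: dependencies cross one another
    return [(i, n + i) for i in range(1, n + 1)]
```

For n = 2 both analyses yield the string "aabb", yet the dependency pairings differ, which is exactly the distinction that weak generative capacity misses.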
In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.
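One way to picture such a multi-formalism architecture (a minimal sketch; the class and method names are assumptions, not those of the actual system) is a shared parsing interface with formalism-specific back-ends, where the tree-based back-ends also return a semantic representation:

```python
from abc import ABC, abstractmethod

class Grammar(ABC):
    """Common interface shared by all supported formalisms."""
    @abstractmethod
    def parse(self, tokens):
        """Return a list of analyses for the token sequence."""

class TAGGrammar(Grammar):
    # Tree-based formalism: syntax plus a semantic representation
    def parse(self, tokens):
        syntax = ("derived-tree", tuple(tokens))
        semantics = ("sem-repr", tuple(tokens))
        return [(syntax, semantics)]

class RCGGrammar(Grammar):
    # Non-tree-based formalism: syntactic analysis only
    def parse(self, tokens):
        return [("rcg-derivation", tuple(tokens))]

def analyse(grammar, sentence):
    # The front-end is agnostic about which formalism is used
    return grammar.parse(sentence.split())
```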
Multicomponent Tree Adjoining Grammars (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. Looking only at the result of a derivation (i.e., the derived tree and the derivation tree), this simultaneity is no longer visible and therefore cannot be checked. That is, this way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. Therefore, in this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees the MCTAG licenses.
Multicomponent Tree Adjoining Grammars (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. This way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. In this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees (in the underlying TAG) that the MCTAG licenses. This definition gives a better understanding of the formalism, allows a more systematic comparison of different types of MCTAG and, furthermore, can be exploited for parsing.
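The idea of checking MCTAG membership on derivation trees can be sketched with a toy example (a hypothetical illustration, not the paper's definition): for tree-local MCTAG, all elementary trees stemming from one multicomponent set instance must attach to the same mother node in the derivation tree, a property that is directly checkable on the finished derivation tree without any reference to derivation order.

```python
from collections import defaultdict

# A derivation-tree node is a triple (node_id, parent_id, set_instance):
# node_id identifies one use of an elementary tree, parent_id the node
# it attaches to, and set_instance groups the members of one
# multicomponent set instance (None for singleton elementary trees).

def is_tree_local(nodes):
    """Check tree-locality on a derivation tree: all members of each
    multicomponent set instance must share the same mother node."""
    mothers = defaultdict(set)
    for node_id, parent_id, instance in nodes:
        if instance is not None:
            mothers[instance].add(parent_id)
    return all(len(parents) == 1 for parents in mothers.values())
```

The check inspects only the derivation tree itself, mirroring the abstract's point that MCTAG can be characterized by properties of derivation trees rather than by the derivation process.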
Until well into the 1970s, the theory of language learning and teaching was a "master doctrine" ("Meisterlehre", Müller-Michaels 1980). Great role models of a people (e.g. Moses), heads of philosophical schools (e.g. Plato) or abbots of monasteries (e.g. Augustine), and finally state-certified senior school directors (e.g. Ulshöfer) described to their younger colleagues what had proven itself over decades in the teaching of language: how best to conduct language instruction (Müller 1922, Seidemann 1973, Ulshöfer 1968, Essen 1968). With the establishment of language didactics at the universities, the concept of the "norm-setting action sciences" (Müller-Michaels 1980, Ivo 1975) was developed. The researcher (no longer credentialed as a master of practice) investigates the processes of language teaching and learning by collecting data in the practitioner's "field" and then subjecting the collected data to hypothesis testing. The school in particular is considered as the field of action. The research methods are predominantly "quasi-experimental". In the wake of Chomsky's theory of language (Chomsky 1965), experimental approaches to the study of language acquisition, language acquisition disorders and the corresponding interventions were developed (de Villiers/de Villiers 1970, Hörmann 1978). The site of investigation is the laboratory. The design of this kind of language didactics (or psycholinguistics, cognitive science, etc.) is experimental (e.g. Herrmann 2004). All three concepts stand in antagonistic opposition to one another in many respects. Keeping them apart, while also relating them to one another productively, is one of the basic skills of the "linguosomatic" professions and their underlying theory (for example the language-teaching professions, phoniatrics, special education for speech and language impairment, and psychosomatic speech therapies).
Therefore, the significant oppositions between the three concepts must be worked out, and their conflicting consequences related to one another.
The present work reports two experiments on brain electric correlates of cognitive and emotional functions. (1) Studying paranormal belief, 35-channel resting EEG (10 believers and 13 skeptics) was analyzed with "Low Resolution Electromagnetic Tomography" (LORETA) in seven frequency bands. LORETA gravity centers of all bands shifted to the left in believers vs. skeptics, and showed that believers had stronger left fronto-temporo-parietal activity than skeptics. Self-rating of affective attitude showed believers to be less negative than skeptics. The observed EEG lateralization agreed with the 'valence hypothesis' that posits predominant left hemispheric processing for positive emotions. (2) Studying emotions, positive and negative emotion words were presented to 21 subjects while "Event-Related Potentials" (ERPs) were recorded. During word presentation (450 ms), 13 microstates (steps of information processing) were identified. Three microstates showed different potential maps for positive vs. negative words; LORETA functional imaging showed stronger activity in microstate #4 (106-122 ms) for positive words right anterior, for negative words left central; in #6 (138-166 ms) for positive words left anterior, for negative words left posterior; in #7 (166-198 ms), for positive words right anterior, for negative words right central. In conclusion: during word processing, the extraction of emotion content starts as early as 106 ms after stimulus onset; the brain identifies emotion content repeatedly in three separate, brief microstate epochs; and this processing of emotion content in the three microstates involves different brain mechanisms to represent the distinction between positive and negative valence.
This paper examines the development of periphrastic constructions involving auxiliary "have" and "be" with a past participle in the history of English, on the basis of parsed electronic corpora. It is argued that the two constructions represented distinct syntactic and semantic structures: while the one with have developed into a true perfect in the course of Middle English, the one with be remained a stative resultative throughout its history. In this way, it is explained why the be construction was rarely or never used in a number of contexts, including past counterfactuals, iteratives, duratives, certain kinds of infinitives and various other utterance types that cannot be characterized as perfects of result. When the construction with have became a true perfect, it was used in such contexts, regardless of the identity of the main verb, leading to the appearance of have with verbs like come which had previously only taken be. Crucially, however, have was not spreading at the expense of be, as the be perfect had never been used in such contexts, but rather at the expense of the old simple past. At least until the end of the Early Modern English period, the shift in the relative frequency of have and be perfects is to be explained in terms of the expansion of the former into new contexts, while the latter remained stable. A formal analysis is proposed, taking as its starting point a comparison with German which shows that the older English be perfect indeed behaves more like the German stative passive than its haben and sein perfects.
In this paper, we will argue for a novel analysis of the auxiliary alternation in Early English, its development and subsequent loss which has broader consequences for the way that auxiliary selection is looked at cross-linguistically. We will present evidence that the choice of auxiliaries accompanying past participles in Early English differed in several significant respects from that in the familiar modern European languages. Specifically, while the construction with have became a full-fledged perfect by some time in the ME period, that with be was actually a stative resultative, which it remained until it was lost. We will show that this accounts for some otherwise surprising restrictions on the distribution of BE in Early English and allows a better understanding of the spread of HAVE through late ME and EModE. Perhaps more importantly, the Early English facts also provide insight into the genesis of the kind of auxiliary selection found in German, Dutch and Italian. Our analysis of them furthermore suggests a promising strategy for explaining cross-linguistic variation in auxiliary selection in terms of variation in the syntactico-semantic structure of the perfect. In this introductory section, we will first provide some background on the historical situation we will be discussing, then we will lay out the main claims for which we will be arguing in the paper.
In April 2002 the European Central Bank (ECB) and the Center for Financial Studies (CFS) launched the ECB-CFS Research Network to promote research on “Capital Markets and Financial Integration in Europe”. The ECB-CFS research network aims at stimulating top-level and policy-relevant research, significantly contributing to the understanding of the current and future structure and integration of the financial system in Europe and its international linkages with the United States and Japan. This report summarises the work done under the network after two years. Over time the network formed a coherent and growing group of researchers interested in the integration of European financial markets, while using light organisational structures and budgets. The members of this evolving group met repeatedly at the events organised by the network to present the latest results of their research and to share views on policy options. In this sense, the “network of people” intended at the start was created. Overall, the network aroused great interest, as leading academic researchers, researchers from the main policy institutions and high-level policy makers participated actively in it by presenting research results, through speeches and in policy panels. It also stimulated a new research field on securities settlement systems, an area of high policy relevance and interest to the ECB that had not attracted much interest in the research community beforehand. Also, the network seems to have triggered several related outside initiatives by international institutions, such as the IMF or the OECD. During its first two years the network was organised around three workshops and a final symposium on 10-11 May 2004. 
To focus research resources and to ensure medium-term policy relevance, a limited number of areas have been given top priority: bank competition and the geographical scope of banking; international portfolio choices and asset market linkages between Europe, the United States and Japan; European bond markets; European securities settlement systems; and the emergence and evolution of new markets in Europe (in particular start-up financing markets). In order to stimulate further research focused on the priority fields of the network, the ECB Lamfalussy research fellowships were established. These fellowships sponsor projects proposed by young researchers, both advanced doctoral students and younger professors. Five Lamfalussy fellowships were granted in 2003 and five more in 2004. The first papers from this program have already been issued in the ECB working paper series or are forthcoming. One of them won the prize for the best paper written by a Ph.D. student at the 2004 European Finance Association Meetings in Maastricht. Results of the network in the five top priority areas can be summarised as follows: Bank competition and the geographical scope of banking. First, integration does not appear to be very advanced in many retail banking markets. Second, some of the inherent characteristics of traditional loan and deposit business constrain the cross-border expansion of commercial banking, even in a common currency area. Hence, the implementation of some policies to foster cross-border integration in retail banking may be ineffective. Third, theoretical research suggests that supervisory structures may not be neutral towards further European banking integration. Finally, a stronger role of area-wide competition policies could be beneficial for further banking integration. This would also stimulate economic growth, as more competition in the banking sector induces financially dependent firms to grow more. European bond markets.
While the government bond market has integrated rapidly with the EMU convergence process, its full integration has not yet been achieved. The introduction of a common electronic trading platform reduced transaction costs substantially, but yield spreads of long-term sovereign bonds of the euro area are still heterogeneous. This is largely explained by different sensitivities to an international risk factor, whereas liquidity differentials only play a role in conjunction with this latter factor. Somewhat surprisingly in this context, the dynamically developing corporate bond market exhibits a relatively high level of integration. There is also increasing evidence that the introduction of the euro has contributed to a reduction in the cost of capital in the euro area, in particular through the reduction of corporate bond underwriting fees. As a result, firms may wish to increase bond financing relative to equity financing. The development of a larger corporate bond market is also important for monetary policy. For example, US evidence suggests that the rating of corporate bonds may contribute to the persistence of recessions, as rating agencies' policies affect firms asymmetrically in their access to the bond market over the business cycle. US evidence also suggests that liquidity conditions in stock and bond markets tend to be positively correlated. European securities settlement systems. European securities settlement infrastructures are highly fragmented and further integration and/or consolidation would exploit economies of scale that could greatly benefit investors. It is not clear, however, whether direct public intervention in favour of consolidation would lead to the highest level of efficiency, for example because of the existence of strong vertical integration between trading and securities platforms ("silos"). In contrast, promoting open access to clearing and settlement systems could lead to consolidation and the highest level of efficiency.
Finally, regarding concerns about unfair practices by Central Securities Depositories (CSDs) toward custodian banks, regulatory interventions favouring custodian banks should be discouraged, as long as CSDs are not allowed to price discriminate between custodian banks and investor banks. The emergence and evolution of new markets in Europe (in particular start-up financing markets). While fairly well integrated, “new markets” and start-up financing are less developed and integrated in Europe than in the United States. However, new markets and venture capitalists are the most important intermediaries for the financing of projects with high risk but with potentially very high return. The analysis carried out within the network reveals that European start-up financiers are mostly institutional investors, while US venture capitalists are mostly rich individuals. Also, new markets are essential for the development of start-up finance in Europe, as they provide an exit strategy for start-up financiers who can then sell new successful projects using initial public offerings. Finally, the legal framework affects the development of venture capital firms. For example, very strict personal bankruptcy laws constrain early stage entrepreneurs, reducing demand for venture capital finance. International portfolio choices and asset market linkages between Europe, the United States and Japan. At a global scale, asset market linkages have increased recently. For example, major economies such as the United States and the euro area have become more financially interdependent. This phenomenon can be observed in stock and bond markets as well as in money markets, where the main direction of spillovers has recently been from the US to the euro area. Country-specific shocks now play a smaller role in explaining stock return variations of firms whose sales are internationally diversified. 
Increases in firm-by-firm market linkages are a global phenomenon, but they are stronger within the euro area than in the rest of the world. Various other phenomena also increase market linkages and therefore the likelihood that financial shocks spread across countries. One example is the use of global bonds. Finally, the nowadays more direct access of unsophisticated investors to financial markets may increase volatility. Other areas. Financial integration affects financial structures, but it does not need to lead to their convergence across countries. Financial structures matter for growth, as market-oriented financial systems benefit all sectors and firms, whereas bank-based systems primarily benefit younger firms that depend on external finance. Moreover, good corporate governance increases firms' value. In particular, the dual board system, where the monitoring and advising roles of the board of directors are separated, is found to dominate the single board structure. Therefore, the further development of the European single market should strongly require good corporate governance. In general, well designed institutions foster entrepreneurial activity, partly by relaxing capital constraints. The results of the network clearly illustrated the substantial effects the introduction of the euro had on euro area financial markets. In addition to the effects on bond markets, stock markets and the cost of capital summarised above, research produced showed that the single currency had its strongest effects on money markets, whose unsecured segment is now completely integrated. Without any doubt the euro generally enhanced the liquidity and efficiency of euro area financial markets, and ongoing initiatives such as the European Union's Financial Services Action Plan will help to continue this process. In sum, in the first two years the network has established itself as the hub for the research debate on European financial integration.
Some of the best papers produced by the network, leading to the conclusions mentioned above, are currently being considered for publication in two special issues of academic journals. An issue of the Oxford Review of Economic Policy on "European financial integration" is published contemporaneously with this report, and an issue of the Review of Finance is planned for next year. The current policy context, the gradual progress of integration as well as the creation of other related non-ECB or non-CFS initiatives on financial integration suggest that this topic will remain high on the agendas of policy makers and academics for the years to come. Therefore, the ECB Executive Board and the CFS decided to continue the network, refocusing its priorities. Three priority areas have been added: 1) The relationship between financial integration and financial stability, 2) EU accession, financial development and financial integration, and 3) financial system modernisation and economic growth in Europe. These three areas have become particularly important at the current juncture, but have not received particularly strong attention in the first two years of the network. For example, the area of financial stability research was highlighted by the ECB research evaluators as an area deserving further development. Moreover, despite the results found in the first two years of the network, new developments remain to be further explored in the earlier priority areas. A three-year extension is envisaged, running from after the May 2004 symposium until 2007, with two events to be held per year. The three-year period is long enough to consider the first effects of the Financial Services Action Plan. It also constitutes a realistic horizon for the ambitious agenda implied by the three new priorities. The generally light organisational structure and working of the network will not be changed.
In addition, given the value of the Lamfalussy fellowship research program in inducing further research in the network's priority areas, the program has also been extended to all of the network's research topics.
Women and Halakha Shiur
(2008)
This essay examines the foreign policy discourse in contemporary Germany. In reviewing a growing body of publications by German academics and foreign policy analysts, it identifies five schools of thought based on different worldviews, assumptions about international politics, and policy recommendations. These schools of thought are then related to, first, actual preferences held by German policymakers and the public more generally and, second, to a small set of grand strategies that Germany could pursue in the future. It argues that the spectrum of likely choices is narrow, with the two most probable, the strategies of "Wider West" and "Carolingian Europe", continuing the multilateral and integrationist orientation of the old Federal Republic. These findings are contrasted with diverging assessments in the non-German professional literature. Finally, the essay sketches avenues for future research by suggesting ways for broadening the study of country-specific grand strategies, developing and testing inclusive typologies of more abstract foreign policy strategies, and refining the analytical tools for examining foreign policy discourses in general.
In this work, selected 5'- and 3'-untranslated regions (UTRs) of mRNAs from H. volcanii were determined. This data set was used (1) to characterize haloarchaeal UTRs, (2) to verify consensus elements for transcription initiation and termination, and (3) to investigate the influence of haloarchaeal UTRs on the initiation and regulation of translation. It was shown that all transcripts examined possess unprocessed 3'-UTRs with an average length of 45 nucleotides. In addition, a putative transcription termination signal consisting of a penta-U motif preceded by a hairpin structure was identified. Analyses of the regions upstream of the experimentally determined transcription start sites led to the identification of three conserved promoter elements: the TATA box, the BRE element, and a new element at position -10/-11. Surprisingly, the TATA box consisted of only four conserved nucleotides. Examination of the UTRs revealed that the largest fraction of haloarchaeal transcripts possesses no 5'-UTR at all. Where a 5'-UTR is present, unexpectedly only 15% of the 5'-UTRs from H. volcanii contain a Shine-Dalgarno (SD) sequence. It could be shown, however, that various native and artificial 5'-UTRs without an SD sequence are translated very efficiently in vivo. Moreover, the secondary structure of the 5'-UTR and the position of structural elements evidently have a decisive influence on the translatability of transcripts. Inserting structural elements close to the start codon led to complete repression of translation, whereas inserting the motif proximal to the 5' end of the 5'-UTR had no influence on translation efficiency.
In summary, both the eukaryotic scanning mechanism and the bacterial, SD-based initiation of translation can be ruled out for haloarchaeal transcripts carrying a 5'-UTR without an SD sequence. The investigations carried out in this work form the basis for further studies aimed at identifying a corresponding third mechanism of translation initiation in H. volcanii. A recent global analysis of translational regulation showed that the fraction of translationally regulated genes in H. volcanii is as high as in eukaryotes (Lange et al., 2007). To characterize the role of haloarchaeal UTRs in the regulation of translation, the UTRs of two selected translationally regulated genes were examined. It turned out that only the presence of both UTRs, 5' and 3', leads to growth-phase-dependent regulation of translation. The 3'-UTR alone has no influence on translation efficiency, whereas the 5'-UTR reduces translation efficiency in both growth phases. It was further shown that the 3'-UTR determines the "direction" of the regulation at the translational level and that putative structural elements may be involved in the regulatory mechanism. Taken together, the following model of translational regulation in H. volcanii emerges: structured 5'-UTRs lower the constitutive translation efficiency, and this can be differentially compensated by regulatory factors that bind specific elements of the 3'-UTR. Both natural and artificial aptamers and allosteric ribozymes are effective tools for exogenously controlled gene expression. Therefore, the applicability of a tetracycline-inducible aptamer and of a constitutive hammerhead ribozyme in H. volcanii was examined.
It turned out, however, that the aptamer forms strongly inhibitory secondary structures even in the absence of tetracycline. As an alternative, reporter gene fusions with a self-cleaving hammerhead ribozyme were constructed. The self-cleaving activity of the hammerhead ribozyme in H. volcanii was successfully demonstrated in vivo, which provides the basis for developing conditional expression systems based on the hammerhead ribozyme in H. volcanii.
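The reported scarcity of Shine-Dalgarno sequences in 5'-UTRs could be probed computationally. The following is a hypothetical sketch (the AGGAGG core, the spacer window and the mismatch tolerance are illustrative assumptions, not the thesis's criteria) of scanning a 5'-UTR for an SD-like motif upstream of the start codon:

```python
# Hypothetical sketch: scan a 5'-UTR for a Shine-Dalgarno-like hexamer
# (core AGGAGG, allowing one mismatch) within a typical spacer window
# upstream of the start codon. All parameters are illustrative.

SD_CORE = "AGGAGG"

def has_sd(utr, min_spacer=4, max_spacer=12, max_mismatch=1):
    """Return True if an SD-like hexamer ends min_spacer..max_spacer nt
    upstream of the start codon (i.e., of the 3' end of the UTR)."""
    for offset in range(min_spacer, max_spacer + 1):
        end = len(utr) - offset
        start = end - len(SD_CORE)
        if start < 0:
            continue
        window = utr[start:end]
        mismatches = sum(a != b for a, b in zip(window, SD_CORE))
        if mismatches <= max_mismatch:
            return True
    return False
```

Applied to a set of experimentally determined 5'-UTRs, such a scan would yield the fraction of SD-containing UTRs, the kind of statistic the abstract reports (15% for H. volcanii).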