Editorial
(2004)
The cloud forest amphibians and reptiles constitute the most important herpetofaunal segment in Honduras, due to the prevalence of endemic and Nuclear Middle American-restricted species. This segment, however, is subject to severe environmental threats due to the actions of humans. Of the 334 species of amphibians and reptiles currently known from Honduras, 122 are known to be distributed in cloud forest habitats. Cloud forest habitats are found throughout the mountainous interior of Honduras. They are subject to a Highland Wet climate, which features annual precipitation of >1500 mm and a mean annual temperature of <18°C. Cloud forest vegetation falls into two Holdridge formations, the Lower Montane Wet Forest and Lower Montane Moist Forest. The Lower Montane Wet Forest formation generally occurs at elevations in excess of 1500 m, although it may occur as low as 1300+ m at some localities. The Lower Montane Moist Forest formation generally occurs at 1700+ m elevation. Of the 122 cloud forest species, 18 are salamanders, 38 are anurans, 27 are lizards, and 39 are snakes. Ninety-eight of these 122 species are distributed in the Lower Montane Wet Forest formation and 45 in the Lower Montane Moist Forest formation. Twenty species are distributed in both formations. The cloud forest species are distributed among restricted, widespread, and peripheral distributional categories. The restricted species range as a group in elevation from 1340 to 2700 m, the species that are widespread in at least one of the two cloud forest formations range as a group from sea level to 2744 m, and the peripheral species range as a group from sea level to 1980 m. The 122 cloud forest species exemplify ten broad distributional patterns ranging from species whose northern and southern range termini are in the United States (or Canada) and South America, respectively, to those species that are endemic to Honduras. 
The largest segment of the herpetofauna falls into the endemic category, with the next largest segment being restricted in distribution to Nuclear Middle America, but not endemic to Honduras. Cloud forest species are distributed among eight ecophysiographic areas, with the largest number being found in the Northwestern Highlands, followed by the North-Central Highlands and the Southwestern Highlands. The greatest significance of the Honduran herpetofauna lies in its 125 species that are either Honduran endemics or otherwise Nuclear Middle American-restricted species, of which 83 are distributed in the country’s cloud forests. This segment of the herpetofauna is seriously endangered as a consequence of exponentially increasing habitat destruction resulting from deforestation, even given the existence of several biotic reserves established in cloud forest. Other, less clearly evident environmental factors also appear to be implicated. As a consequence, slightly over half of these 83 species (50.6%) have populations that are in decline or that have disappeared from Honduran cloud forests. These species possess biological, conservational, and economic significance, all of which appear in danger of being lost.
CONTENTS: WHITHER THE SOUTH AFRICAN PUBLISHING INDUSTRY? 4;
APNET MESSAGE TO AFRICAN PUBLISHERS ON WORLD BOOK DAY 11;
FUNDING OPPORTUNITIES FOR OPERATORS IN CULTURE-RELATED INDUSTRIES 13;
4TH SALON INTERNATIONAL DU LIVRE D’ABIDJAN (SILA) 2004 16;
THE NIGERIA INTERNATIONAL BOOK FAIR (NIBF) 2004 20;
THE NOMA AWARD 2003 PRESENTATION 22;
A NEW CONSULTANCY FIRM IS FORMED 27;
EDILIS HOLD DEDICATION CEREMONY 30;
LETTERS TO THE EDITOR 34;
NEWS FROM PARTNER ORGANISATIONS 41;
NOTICE 44;
PROMOTIONS 50
Since the description of sepsis by Schottmüller in 1914, the amount of knowledge available on sepsis and its underlying pathophysiology has substantially increased. Epidemiologic examinations of abdominal septic shock patients show the high risk posed by this condition and the extensive therapy it requires in the intensive care unit (ICU) (5). Unfortunately, until now it has not been possible to significantly reduce the mortality rate of septic shock, which is as high as 50-60% worldwide, although the PROWESS results (1) are encouraging. This paper summarizes the main results of the MEDAN project and their medical impact. Several aspects have already been published; see the references. The heterogeneity of patient groups and the variation in therapy strategies is seen as one of the main problems for sepsis trials. In the MEDAN multi-center study of 71 intensive care units in Germany, a group of 382 patients consisting exclusively of abdominal septic shock patients who met the consensus criteria for septic shock (3) was analysed. Within scores or in stand-alone experiments, variables are often studied in isolation rather than as a multidimensional whole; e.g., a recent study examines the role thrombocytes play (15). To avoid this limitation, our study compares several established scores (SOFA, APACHE II, SAPS II, MODS) by a multi-dimensional neural network analysis. For outcome prediction, the data of the 382 patients were analysed using most of the commonly documented vital parameters and doses of medicine (metric variables). Data were collected in German hospitals from 1998 to 2001. The 382 handwritten patient records were transferred to an electronic database, yielding 2.5 million data entries. The metric data contained in the database consist of daily measurements and doses of medicine. We used range and plausibility checks to keep faulty data out of the electronic database. 187 of the 382 patients died (49%).
Data driven automatic model selection and parameter adaptation – a case study for septic shock
(2004)
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that learns the parameters systematically is necessary. This paper proposes as model selection criterion the least complex description of the observed data by the model: the minimum description length. The performance of the approach is evaluated on the small but important example of inflammation modeling.
In bioinformatics, biochemical signal pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that systematically selects the most appropriate model and learns its parameters is extremely interesting. One of the most frequently used approaches to model selection is to choose the least complex model which “fits the needs”. For noisy measurements, choosing the model with the smallest mean squared error on the observed data yields a model that fits the data too closely – it overfits. Such a model will perform well on the training data, but worse on unknown data. This paper proposes as model selection criterion the least complex description of the observed data by the model: the minimum description length. The performance of the approach is evaluated on the small but important example of inflammation modeling. Keywords: biochemical pathways, differential equations, septic shock, parameter estimation, overfitting, minimum description length.
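The selection criterion described in this abstract can be illustrated with a toy experiment (a sketch only, not the authors' inflammation model): fit polynomials of increasing degree to noisy data and score each fit by a two-part code length, here approximated in the BIC style as residual cost plus a per-parameter penalty. Training error alone always favours the most complex model; the description length does not. The function names and the exact penalty form are illustrative assumptions.

```python
import numpy as np

def description_length(y, y_hat, k):
    # Two-part MDL approximation (BIC-style, constants dropped):
    # cost of encoding the residuals plus a penalty for k parameters.
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

def select_degree(x, y, max_degree=8):
    # Fit polynomials of degree 0..max_degree; return the degree with
    # the smallest description length and the degree with the smallest
    # training MSE (the latter typically overfits).
    dls, mses = [], []
    for d in range(max_degree + 1):
        y_hat = np.polyval(np.polyfit(x, y, d), x)
        dls.append(description_length(y, y_hat, d + 1))
        mses.append(float(np.mean((y - y_hat) ** 2)))
    return int(np.argmin(dls)), int(np.argmin(mses))
```

On data generated from a quadratic plus noise, the MSE criterion tends to pick the highest degree allowed, while the description length settles on a much simpler model.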
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that learns the parameters systematically is necessary. In this paper, a network is constructed for the small but important example of inflammation modeling, and different learning algorithms are proposed. It turned out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to the sparse given data. Keywords: model parameter adaptation, septic shock, coupled differential equations, genetic algorithm.
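The evolutionary fitting idea can be sketched as follows. The toy two-variable system, its parameter names, and the simple (mu+lambda)-style search are all illustrative assumptions, not the paper's actual inflammation network: integrate a small coupled ODE system with forward Euler and search its parameter space against sparse data.

```python
import numpy as np

def simulate(params, y0, t):
    # Forward-Euler integration of a toy two-variable system:
    # dy1/dt = a*y1 - b*y1*y2,  dy2/dt = c*y1 - d*y2.
    # The model form and parameter names are illustrative assumptions.
    a, b, c, d = params
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for dt in np.diff(t):
        dy1 = a * y[0] - b * y[0] * y[1]
        dy2 = c * y[0] - d * y[1]
        y = y + dt * np.array([dy1, dy2])
        out.append(y.copy())
    return np.array(out)

def fit(t, data, y0, pop=60, gens=80, seed=0):
    # (mu+lambda)-style evolutionary search: mutate a population of
    # parameter vectors, keep the elite by squared error to the data.
    rng = np.random.default_rng(seed)
    population = rng.uniform(0.0, 1.0, size=(pop, 4))

    def err(p):
        return float(np.sum((simulate(p, y0, t) - data) ** 2))

    for _ in range(gens):
        scores = np.array([err(p) for p in population])
        elite = population[np.argsort(scores)[: pop // 4]]
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(0.0, 0.05, size=children.shape)
        population = np.vstack([elite, np.clip(children, 0.0, 2.0)])
    scores = np.array([err(p) for p in population])
    return population[np.argmin(scores)], float(scores.min())
```

Because the elite is carried over unmutated, the best error is non-increasing over generations; no gradient of the nonlinear dynamics is needed, which is the appeal of the evolutionary approach for sparse data.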
For this paper, 170 Tibeto-Burman languages were surveyed for nominal case marking (adpositions), in an attempt to determine if it would be possible to reconstruct any case markers to Proto-Tibeto-Burman, and in so doing learn more about the nature of the grammatical organization of Proto-Tibeto-Burman. The data were also cross-checked for patterns of isomorphy/polysemy, to see if we can learn anything about the development of the forms we do find in the languages. The results of the survey indicate that although all Tibeto-Burman languages have developed some sort of relation marking, none of the markers can be reconstructed to the oldest stage of the family. Looking at the patterns of isomorphy or polysemy, we find there are regularities to the patterns, and on the basis of these regularities we can assume that the path of development most probably followed the markedness/prototypicality clines: the locative and ablative uses would have arisen first and then been extended to the more abstract cases.
Adjectives in Qiang
(2004)
Qiang is a Tibeto-Burman language spoken by 70,000-80,000 people in Northern Sichuan Province, China, classified as being in the Qiang or Tibetan nationality by the Chinese government. The language is verb final, agglutinative (prefixing and suffixing), and has both head-marking and dependent-marking morphology.
Since 1973 I have been advocating the view that the Balto-Slavic acute tone was in fact glottalic and has been preserved unchanged in originally stressed and unstressed syllables in Žemaitian and Latvian, respectively (e.g. 1975, 1977, 1985, 1998). Jay Jasanoff has now (2004) adopted the gist of my view, but without mentioning my name. It may therefore be useful to sketch the background of our differences and to point out the remaining discrepancies.
Elsewhere I have argued that the three Old Prussian catechisms reflect consecutive stages in the development of a moribund language (1998a, 1998b, 2001a). After first eliminating the orthographical differences between the three versions of parallel texts (while maintaining the distinction between linguistic variants), and then assigning separate phonemic interpretations to the three versions on the basis of the historical evidence, I listed the following phonological differences between the three catechisms.
Docherty et alii have "noted that several sociolinguistic accounts have shown a sharp distinction between the social trajectories for glottal replacement as opposed to glottal reinforcement, which have normally been treated by phonologists as aspects of 'the same thing'. It may therefore not always be appropriate to treat the two phenomena as manifestations of a single process or as points on a single continuum (presumably along which speakers move through time). From the speaker’s point of view (as manifested by different patterns of speaker behaviour) they appear as independent phenomena" (1997: 307).
The origin of the Goths
(2004)
Witold Mańczak has argued that Gothic is closer to Upper German than to Middle German, closer to High German than to Low German, closer to German than to Scandinavian, closer to Danish than to Swedish, and that the original homeland of the Goths must therefore be located in the southernmost part of the Germanic territories, not in Scandinavia (1982, 1984, 1987a, 1987b, 1992). I think that his argument is correct and that it is time to abandon Iordanes’ classic view that the Goths came from Scandinavia. We must therefore reconsider the grounds for adopting the latter position and the reasons why it has always remained popular.
Most scholars nowadays reconstruct a static root present with an alternation between lengthened grade in the active singular and full grade in the active plural and in the middle. I am unhappy about this traditional methodology of loosely postulating long vowels for the proto-language. What we need is a powerful theory which explains why clear instances of original lengthened grade are so very few and restrains our reconstructions accordingly. Such a theory has been available for over a hundred years now: it was put forward by Wackernagel in his Old Indic grammar (1896: 66-68). The crucial element of his theory which is relevant in the present context is that he assumed lengthening in monosyllabic word forms, such as the 2nd and 3rd sg. active forms of the sigmatic aorist injunctive.
The argument that I tried to elaborate on in this paper is that the conceptual problem behind the traditional competence/performance distinction does not go away, even if we abandon its original Chomskyan formulation. It returns as the question about the relation between the model of the grammar and the results of empirical investigations – the question of empirical verification. The theoretical concept of markedness is argued to be an ideal correlate of gradience. Optimality Theory, being based on markedness, is a promising framework for the task of bridging the gap between model and empirical world. However, this task requires not only a model of grammar, but also a theory of the methods that are chosen in empirical investigations and of how their results are interpreted, and a theory of how to derive predictions for these particular empirical investigations from the model. Stochastic Optimality Theory is one possible formulation of a proposal that derives empirical predictions from an OT model. However, I hope to have shown that it is not enough to take frequency distributions and relative acceptabilities at face value and simply construct some Stochastic OT model that fits the facts. These facts first of all need to be interpreted, and those factors that the grammar has to account for must be sorted out from those about which grammar should have nothing to say. This task, to my mind, is more complicated than the picture that a simplistic application of (not only) Stochastic OT might draw.
The aim of this paper is the exploration of an optimality theoretic architecture for syntax that is guided by the concept of "correspondence": syntax is understood as the mechanism of "translating" underlying representations into a surface form. In minimalism, this surface form is called "Phonological Form" (PF). Both semantic and abstract syntactic information are reflected by the surface form. The empirical domain where this architecture is tested are minimal link effects, especially in the case of "wh"-movement. The OT constraints require the surface form to reflect the underlying semantic and syntactic representations as maximally as possible. The means by which underlying relations and properties are encoded are precedence, adjacency, surface morphology and prosodic structure. Information that is not encoded in one of these ways remains unexpressed, and gets lost unless it is recoverable via the context. Different kinds of information are often expressed by the same means. The resulting conflicts are resolved by the relative ranking of the relevant correspondence constraints.
Weak function word shift
(2004)
The fact that object shift only affects weak pronouns in mainland Scandinavian is seen as an instance of a more general observation that can be made in all Germanic languages: weak function words tend to avoid the edges of larger prosodic domains. This generalisation has been formulated within Optimality Theory in terms of alignment constraints on prosodic structure by Selkirk (1996) in explaining the distribution of prosodically strong and weak forms of English function words, especially modal verbs, prepositions and pronouns. But a purely phonological account fails to integrate the syntactic licensing conditions for object shift in an appropriate way. The standard semantico-syntactic accounts of object shift, on the other hand, fail to explain why it is only weak pronouns that undergo object shift. This paper develops an Optimality theoretic model of the syntax-phonology interface which is based on the interaction of syntactic and prosodic factors. The account can successfully be applied to further related phenomena in English and German.
Dialectal variation in German 3-verb clusters: a surface-oriented optimality theoretic account
(2004)
We present data from an empirical investigation on the dialectal variation in the syntax of German 3-verb clusters, consisting of a temporal auxiliary, a modal verb, and a predicative verb. The ordering possibilities vary greatly among the dialects. Some of the orders that we found occur only under particular stress assignments. We assume that these orders fulfil an information structural purpose and that the reordering processes are changes only in the linear order of the elements which is represented exclusively at the surface syntactic level, PF (Phonetic Form). Our Optimality theoretic account offers a multifactorial perspective on the phenomenon.
German dialects vary in which of the possible orders of the verbs in a 3-verb cluster they allow. In a still ongoing empirical investigation that I am undertaking together with Tanja Schmid, University of Stuttgart (Schmid and Vogel (2004)) we already found that each of the six logically possible permutations of the 3-verb cluster in (1) can be found in German dialects.
This paper reports the results of a corpus investigation on case conflicts in German argument free relative constructions. We investigate how corpus frequencies reflect the relative markedness of free relative and correlative constructions, the relative markedness of different case conflict configurations, and the relative markedness of different conflict resolution strategies. Section 1 introduces the conception of markedness as used in Optimality Theory. Section 2 introduces the facts about German free relative clauses, and section 3 presents the results of the corpus study. By and large, markedness and frequency go hand in hand. However, configurations at the highest end of the markedness scale rarely show up in corpus data, and for the configuration at the lowest end we found an unexpected outcome: the more marked structure is preferred.
This paper evaluates trills [r] and their palatalized counterparts [rj] from the point of view of markedness. It is argued that [r]s are unmarked sounds in comparison to [rj]s which follows from the examination of the following parameters: (a) frequency of occurrence, (b) articulatory and aerodynamic characteristics, (c) perceptual features, (d) emergence in the process of language acquisition, (e) stability from a diachronic point of view, (f) phonotactic distribution, and (g) implications. Several markedness aspects of [r]s and [rj] are analyzed on the basis of Slavic languages which offer excellent material for the evaluation of trills. Their phonetic characteristics incorporated into phonetically grounded constraints are employed for a phonological OT-analysis of r-palatalization in two selected languages: Polish and Czech.
Research on dialectal varieties was for a long time concentrated on phonetic aspects of language. While much work was done on segmental aspects, suprasegmentals remained largely unexplored until the last few years, despite the fact that prosody was noted as a salient aspect of dialectal variants both by linguists and by naive speakers. Current research on dialectal prosody in the German-speaking area often uses discourse-analytic methods, correlating intonation curves with communicative functions (P. Auer et al. 2000, P. Gilles & R. Schrambke 2000, R. Kehrein & S. Rabanus 2001). The project I present here has another focus. It looks at general prosodic aspects, abstracted from actual situations. These global structures are modelled and integrated into a speech synthesis system. Today, mostly intonation is being investigated; rhythm, the temporal organisation of speech, is not at the core of current research on prosody. But there is evidence that temporal organisation is one of the main structuring elements of speech (B. Zellner 1998, B. Zellner Keller 2002). Following this approach developed for speech synthesis, I will present the modelling of the timing of two Swiss German dialects (Bernese and Zurich dialect) that are considered quite different on the prosodic level. These models are part of the project on the "development of basic knowledge for research on Swiss German prosody by means of speech synthesis modelling" funded by the Swiss National Science Foundation.
We present a new self-contained and rigorous proof of the smoothness of invariant fiber bundles for dynamic equations on measure chains or time scales. Here, an invariant fiber bundle is the generalization of an invariant manifold to the nonautonomous case. Our main result generalizes the “Hadamard-Perron theorem” to the time-dependent, infinite-dimensional, noninvertible, and parameter-dependent case, where the linear part is not necessarily hyperbolic with variable growth rates. As a key feature, our proof works without using complicated technical tools.
It is shown that between one-turn pushdown automata (1-turn PDAs) and deterministic finite automata (DFAs) there are savings in the size of description that are not bounded by any recursive function, so-called non-recursive trade-offs. Considering the number of turns of the stack height as a consumable resource of PDAs, we can show the existence of non-recursive trade-offs between PDAs performing k+1 turns and k turns for k >= 1. Furthermore, non-recursive trade-offs are shown between arbitrary PDAs and PDAs which perform only a finite number of turns. Finally, several decidability questions are shown to be undecidable and not semidecidable.
0. Introduction
1. Observations concerning the structure of morphosyntactically marked focus constructions
1.1 First observation: SF vs. NSF asymmetry
1.2 Second observation: NSF-NAR parallelism
1.3 Affirmative ex-situ focus constructions (SF, NSF), and narrative clauses (NAR)
2. Grammaticalization
2.1 Cleft hypothesis
2.2 Movement hypothesis
2.3 Narrative hypothesis
2.3.1 Back- or Foregrounding?
2.3.2 Converse directionality of FM and conjunction
3. Language specific analysis
4. Conclusionary remarks
References
In hindsight, the debate about presupposition following Frege’s discovery that the referential function of names and definite descriptions depended on the fulfillment of an existence and a uniqueness condition was curiously limited for a very long time. On the one hand, it was only in the 1960s that linguists began to take an interest and showed that presupposition was an all-pervasive phenomenon extending far beyond the philosophers' pet case of definite descriptions. And on the other hand, and this is our real concern, it is now only too obvious that the uniqueness condition is too restrictive to be applicable to the general case. An utterance of “The cat is on the mat” should not imply that there is only one cat and one mat in the whole world. The obvious move is to limit the uniqueness condition to some notion of utterance context.
In the following study we present the results of three acoustic experiments with native speakers of German and Polish which support implications (a) and (b). In our experiments we measured the friction phase after the /t d/ release before the onset of the following high front vocoid for four speakers of German and Polish. We found that the friction phase for /tj/ was significantly longer than that of /ti/, and that the friction phase of /t/ in the assibilation context is significantly longer than that of /d/.
The paper explores factors that influence the design of financing contracts between venture capital investors and European venture capital funds. 122 Private Placement Memoranda and 46 Partnership Agreements are investigated with respect to the use of covenant restrictions and compensation schemes. The analysis focuses on the impact of two key factors: the reputation of VC funds and changes in the overall demand for venture capital services. We find that established funds are more severely restricted by contractual covenants. This contradicts the conventional wisdom that established market participants care more about their reputation, have less incentive to behave opportunistically, and therefore need fewer covenant restrictions. We also find that managers of established funds are more often obliged to invest their own capital alongside investors' money. We interpret this as evidence that established funds actually have less reason to care about their reputation than young funds. One reason for this surprising result could be that managers of established VC funds are older and closer to retirement and therefore put less weight on the effects of their actions on future business opportunities. We also explore the effects of venture capital supply on contract design. Gompers and Lerner (1996) show that VC funds in the US are able to reduce the number of restrictive covenants in years with a high supply of venture capital, and interpret this as a result of increased bargaining power of VC funds. We do not find similar evidence for Europe. Instead, we find that VC funds receive less base compensation and higher performance-related compensation in years with strong capital inflows into the VC industry. This may be interpreted as a signal of overconfidence: strong investor demand seems to coincide with overoptimistic expectations by fund managers, which make them willing to accept higher-powered incentive schemes.
JEL: G32 Keywords: Venture Capital, Contracting, Limited Partnership, Funds, Principal Agent, Compensation, Covenants, Reputation, Bargaining Power
On embedded implicatures
(2004)
The Gricean approach explains implicatures by assumptions about the pragmatics of entire utterances. The phenomenon of embedded implicatures remains a challenge for this approach, since in such cases implicatures apparently contribute to the truth-conditional content of constituents smaller than utterances. In this paper, I investigate three areas where embedded implicatures seem to differ from implicatures at the utterance level: optionality, epistemic status, and implicated presuppositions. I conclude that the differences between the two kinds of implicatures justify an approach that maintains Gricean assumptions at the utterance level, and assumes a special operator for embedded implicatures.
Quantificational determiners in Japanese can be marked with genitive case. Current analyses (for example by Watanabe, Natural Language and Linguistic Theory, to appear) treat the genitive case marker in these cases as semantically vacuous, but we show that it has semantic effects. We propose a new analysis as reverse partitives. Following Jackendoff (MIT Press, 1977), we assume that partitives always contain two NPs, one of which is phonologically deleted. We claim that, while in normal partitives the higher noun is deleted, in reverse partitives the lower noun is deleted.
"A team", definitely
(2004)
In morphological systems of the agglutinative type we sometimes encounter a nearly perfect one-to-one relation between form and function. Turkish inflectional morphology is, of course, the standard textbook example. Things seem to be quite different in systems of the flexive type. Declension in Contemporary Standard Russian (henceforth Russian, for short) may be cited as a typical example: We find, among other things, cumulative markers, “synonymous” endings (e.g., dative singular noun forms in -i, -e, or -u), and “homonymous” endings (e.g., -i, genitive, dative, and prepositional singular). True, some endings are more of an agglutinative nature, being bound to a specific case-number combination and applying across declensions, e.g., -am (dative plural, all nouns); and some cross the boundaries of word classes, e.g., -o, which serves as the nominative/accusative singular ending of neuter forms of pronouns (and adjectives) and as the nominative/accusative singular ending of (most) neuter nouns as well. Still, many observers have been struck by the impression that what we face here are rather uneconomic or even, so to speak, unnatural structures. But perhaps flexive systems are not as complicated as they seem. What seems to be uneconomic complexity may be, at least partially, an artifact of uneconomic descriptions.
A remarkable indictment and conviction following the sale of an ‘obscene’ comic book invites us to examine arguments brought forth to describe a specifically childlike reception of new media, as usually suggested by those who would motivate legal restrictions for such media. Trying to explain some perceived contradictions on the surface of these arguments, we discuss whether it is the failure or rather the extreme success of texts that is marked as ‘dangerous’ in such contexts.
All the works in Mazuna lexicography have a common denominator: they are translation dictionaries biased towards French and were compiled by Catholic and Protestant missionaries or colonial administrators. These dictionaries have both strong and weak points. The macrostructure, although it does not display features of sophistication such as the use of niching and nesting procedures, tends to survey the full lexicon of the language, which makes these dictionaries real reservoirs of knowledge. The microstructure contains a lot of useful entries. However, no metalexicographic discussion is provided in the user's guide to make it accessible to the target reader. There are also some shortcomings, especially in the areas of suprasegmental phonology (absence of tonal indications) and orthography.
The Melusines that appear in Fontane’s texts [...] must be read as part of history of citations and refigurations, a history that then revives and flourishes in diluted form around the turn of the century with the trivial myth of the femme fatale. The new context for Fontane’s Melusine is the social construction of the feminine in the context of the conflict over the equality and/or the difference of the sexes, and the currency of certain clichéd versions of this construction. [...] In this essay, I will examine the function that the Melusine figure — as the recasting and rewriting of a myth — assumes in realist texts and, specifically, in the texts of Fontane.
Maintenance of genomic integrity is essential to avoid cellular transformation, neoplasia, or cell death. DNA synthesis, mitosis, and cytokinesis are important cellular processes required for cell division and the maintenance of cellular homeostasis; they are governed by many extra- and intra-cellular stimuli. Progression of normal cell division depends on cyclin interaction with cyclin-dependent kinases (Cdk) and the degradation of cyclins before chromosomal segregation through ubiquitination. Multiple checkpoints exist and are conserved in the cell cycle in higher eukaryotes to ensure that if one fails, others will take care of genomic integrity and cell survival. Many genes act as either positive or negative regulators of checkpoint function through different kinase cascades, delaying cell cycle progression to repair the DNA lesions and breaks, and assuring equal segregation of chromosomes to daughter cells. Understanding the checkpoint pathways and genes involved in the cellular response to DNA damage and cell division events in normal and cancer cells provides information about cancer predisposition, and suggests the design of small molecules and other strategies for cancer therapy. Key Words: ATM/ATR; Aurora kinases; BRCA1; Cdc6; Cdc25; Cdc27-Cdc20/Cdh1; Cell cycle; CENP-E; centrosome; checkpoint; Chk1/Chk2; cyclin-Cdk; cyclin-dependent kinase inhibitors (CKI); hATRIP; Mad/Bub; MCM; MgcRacGAP; microtubule-associated proteins (MAPs); mitotic exit network (MEN); Mps1; NIMA kinases; ORC; p53; PCNA; PI3K-Akt; Plk; Rad50-Nbs1-Mre11; Ran-GTP; Ras; RB-E2F; SMC; Tem1.
When the concept of the auteur was coined in the 1950s and 1960s, it was an initiative to clarify the obscure matters of authorship in cinema. Because a film must necessarily be a collective work, understood as the result of a large number of creative contributions, it was often unclear who the decisive power behind a certain film was, and who contributed the "distinctive quality". The control will usually belong to the director, the producer or the star (or all three in combination), but what singles out a given film could also come from the cinematographer, the scriptwriter, from the author of an adapted literary work, or from traditions in the studio or in the genre. Nothing can be taken for granted about a film's authorship; it can only be decided through a thorough analysis of each film's production process, an analysis that, in most cases, will be impossible to make. ...
Japanese wh-questions always exhibit focus intonation (FI). Furthermore, the domain of FI exhibits a correspondence to the wh-scope. I propose that this phonology-semantics correspondence is a result of the cyclic computation of FI, which is explained under the notion of Multiple Spell-Out in the recent Minimalist framework. The proposed analysis makes two predictions: (1) embedding of an FI into another is possible; (2) (overt) movement of a wh-phrase to a phase edge position causes a mismatch between FI and wh-scope. Both predictions are tested experimentally, and shown to be borne out.
We argue that there is a crucial difference between determiner and adverbial quantification. Following Herburger [2000] and von Fintel [1994], we assume that determiner quantifiers quantify over individuals and adverbial quantifiers over eventualities. While it is usually assumed that the semantics of sentences with determiner quantifiers and those with adverbial quantifiers basically come out the same, we will show by way of new data that quantification over events is more restricted than quantification over individuals. This is because eventualities, in contrast to individuals, have to be located in time, which is done using contextual information according to a pragmatic resolution strategy. If the contextual information and the tense information given in the respective sentence contradict each other, the sentence is uninterpretable. We conclude that this is the reason why, in these cases, adverbial quantification, i.e. quantification over eventualities, is impossible whereas quantification over individuals is fine.
Results of a production experiment on the placement of sentence accent in German are reported. The hypothesis that German fulfills some of the most widely accepted rules of accent assignment— predicting focus domain integration—was only partly confirmed. Adjacency between argument and verb induces a single accent on the argument, as recognized in the literature, but interruption of this sequence by a modifier often induces remodeling of the accent pattern with a single accent on the modifier. The verb is rarely stressed. All models based on linear alignment or adjacency between elements belonging to a single accent domain fail to account for this result. A cyclic analysis of prosodic domain formation is proposed in an optimality-theoretic framework that can explain the accent pattern.
In this paper we review the current state of research on the discourse structure (DS) / information structure (IS) interface. This field has received a lot of attention from discourse semanticists and pragmatists, and has made substantial progress in recent years. In this paper we summarize the relevant studies. In addition, we look at the issue of DS/IS-interaction at a different level, that of phonetics. It is known that both information structure and discourse structure can be realized prosodically, but the issue of phonetic interaction between the prosodic devices they employ has hardly ever been discussed in this context. We think that a proper consideration of this aspect of DS/IS-interaction would enrich our understanding of the phenomenon, and hence we formulate some related research-programmatic positions.
This paper investigates the nature of the attraction of XPs to clause-initial position in German (and other languages). It argues that there are two different types of preposing. First, an XP can move when it is attracted by an EPP-like feature of Comp. Comp can, however, also attract elements that bear the formal marker of some semantic or pragmatic (information theoretic) function. This second type of movement is driven by the attraction of a formal property of the moved element. It has often been misanalysed as “operator” movement in the past.
In this paper, we discuss the design and implementation of our first version of the database "ANNIS" (ANNotation of Information Structure). For research based on empirical data, ANNIS provides a uniform environment for storing this data together with its linguistic annotations. A central database promotes standardized annotation, which facilitates interpretation and comparison of the data. ANNIS is used through a standard web browser and offers tier-based visualization of data and annotations, as well as search facilities that allow for cross-level and cross-sentential queries. The paper motivates the design of the system, characterizes its user interface, and provides an initial technical evaluation of ANNIS with respect to data size and query processing.
This paper is concerned with the tagging of spatial expressions in German newspaper articles: assigning a meaning to each expression, classifying its usage, and linking the derived referent to an event description. In our system, the activation of concepts is implemented in a very simple fashion: a concept is activated once (with a cost depending on the item that activated it) and is left activated thereafter. For example, a city also activates the nodes for the region and the country it is part of, so that cities from one country are chosen over cities from different countries. A test corpus of 12 German newspaper articles was evaluated with several disambiguation strategies. Disambiguation was carried out via a beam search to find an approximately cost-optimal solution for the conflict set of potential grounding candidates for the tagged spatial expression. Tests showed that the disambiguation strategies improved accuracy significantly.
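The cost-based beam search over grounding candidates described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the system from the paper: the place names, country codes, base costs, and the activation bonus are all hypothetical, and only country-level activation is modelled.

```python
import heapq

def beam_search(expressions, beam_width=3, activation_bonus=-1.0):
    """Pick one grounding per spatial expression, approximately minimizing total cost.

    expressions: list of conflict sets, each a list of (place, country, cost) tuples.
    A candidate becomes cheaper if its country concept was already activated
    by an earlier choice, so readings from one country reinforce each other.
    """
    # Each beam state: (total cost, chosen places, activated country concepts)
    beam = [(0.0, [], frozenset())]
    for candidates in expressions:
        successors = []
        for total, chosen, active in beam:
            for place, country, cost in candidates:
                # Reusing an already-activated concept lowers the cost.
                adjusted = cost + (activation_bonus if country in active else 0.0)
                successors.append((total + adjusted, chosen + [place], active | {country}))
        # Keep only the cheapest partial assignments.
        beam = heapq.nsmallest(beam_width, successors, key=lambda s: s[0])
    return beam[0]

# Toy conflict sets: each expression is ambiguous between groundings.
expressions = [
    [("Frankfurt am Main", "DE", 1.0), ("Frankfurt (Oder)", "DE", 2.0)],
    [("Paris", "FR", 1.0), ("Paris, Texas", "US", 2.5)],
    [("Berlin", "DE", 1.0), ("Berlin, NH", "US", 2.0)],
]
best = beam_search(expressions)
```

With these hypothetical costs, the search settles on the German/French readings, since the already-activated `DE` concept makes the German "Berlin" cheaper than its US competitor.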
Davidsonian event semantics has an impressive track record as a framework for natural language analysis. In recent years it has become popular to assume that not only action verbs but predicates of all sorts have an additional event argument. Yet this hypothesis is not without controversy, in particular with respect to the particularly challenging case of statives. Maienborn (2003a, 2004) argues that there is a need for distinguishing two kinds of states. While verbs such as sit, stand, and sleep refer to eventualities in the sense of Davidson (= Davidsonian states), the states denoted by stative verbs like know, weigh, and own, as well as any combination of copula plus predicate, are of a different ontological type (= Kimian states). Against this background, the present study assesses the two main arguments that have been raised in favour of a Davidsonian approach to statives: the combination with certain manner adverbials, and Parsons's (2000) so-called time travel argument. It will be argued that the manner data which, at first sight, seem to provide evidence for a Davidsonian approach to statives are better analysed as non-compositional reinterpretations triggered by the lack of a regular Davidsonian event argument. As for Parsons's time travel argument, it turns out that the original version does not supply the kind of support for the Davidsonian approach that Parsons supposed. However, properly adapted, the time travel argument may provide additional evidence for the need to reify the denotatum of statives, as suggested by the assumption of Kimian states.
A pragmatic explanation of the stage level/individual level contrast in combination with locatives
(2004)
One important difference between stage level predicates (SLPs) and individual level predicates (ILPs) is their behavior with respect to locative modifiers. It is commonly assumed that SLPs but not ILPs combine with locatives. The present study argues against a semantic account of this behavior (as advanced by e.g. Kratzer 1995, Chierchia 1995) and proposes a genuinely pragmatic explanation of the observed stage level/individual level contrast instead. The proposal is spelled out using Blutner's (1998, 2000) optimality theoretic version of the Gricean maxims. Building on the observation that the respective locatives are not event-related but frame-setting modifiers, the preference for main predicates that express temporary properties is explained as a side-effect of “synchronizing” the main predicate with the locative frame in the course of finding an optimal interpretation. By emphasizing the division of labor between grammar and pragmatics, the proposed solution takes a considerable load off semantics.
The purpose of this paper is to describe the TüBa-D/Z treebank of written German and to compare it to the independently developed TIGER treebank (Brants et al., 2002). Both treebanks, TIGER and TüBa-D/Z, use an annotation framework that is based on phrase structure grammar and that is enhanced by a level of predicate-argument structure. The comparison between the annotation schemes of the two treebanks focuses on the different treatments of free word order and discontinuous constituents in German as well as on differences in phrase-internal annotation.
The purpose of this paper is to describe recent developments in the morphological, syntactic, and semantic annotation of the TüBa-D/Z treebank of German. The TüBa-D/Z annotation scheme is derived from the Verbmobil treebank of spoken German [4, 10], but has been extended along various dimensions to accommodate the characteristics of written texts. TüBa-D/Z uses as its data source the "die tageszeitung" (taz) newspaper corpus. The Verbmobil treebank annotation scheme distinguishes four levels of syntactic constituency: the lexical level, the phrasal level, the level of topological fields, and the clausal level. The primary ordering principle of a clause is the inventory of topological fields, which characterize the word order regularities among different clause types of German, and which are widely accepted among descriptive linguists of German [3, 6]. The TüBa-D/Z annotation relies on a context-free backbone (i.e. proper trees without crossing branches) of phrase structure combined with edge labels that specify the grammatical function of the phrase in question. The syntactic annotation scheme of the TüBa-D/Z is described in more detail in [12, 11]. TüBa-D/Z currently comprises approximately 15 000 sentences, with approximately 7 000 sentences being in the correction phase. The latter will be released along with an updated version of the existing treebank before the end of this year. The treebank is available in an XML format, in the NEGRA export format [1] and in the Penn treebank bracketing format. The XML format contains all types of information as described above, the NEGRA export format contains all sentence-internal information, while the Penn treebank format includes only those layers of information that can be expressed as pure tree structures. Over the course of the last year, more fine-grained linguistic annotations have been added along the following dimensions: (1) the basic Stuttgart-Tübingen tagset (STTS) [9] labels have been enriched by relevant features of inflectional morphology; (2) named entity information has been encoded as part of the syntactic annotation; and (3) a set of anaphoric and coreference relations has been added to link referentially dependent noun phrases. In the following sections, we will describe each of these innovations in turn and will demonstrate how the additional annotations can be incorporated into one comprehensive annotation scheme.
Transforming constituent-based annotation into dependency-based annotation has been shown to work for different treebanks and annotation schemes (e.g. Lin (1995) has transformed the Penn treebank, and Kübler and Telljohann (2002) the Tübinger Baumbank des Deutschen (TüBa-D/Z)). These ventures are usually triggered by the conflict between theory-neutral annotation, which targets most needs of a wider audience, and theory-specific annotation, which provides more fine-grained information for a smaller audience. As a compromise, it has been pointed out that treebanks can be designed to support more than one theory from the start (Nivre, 2003). We argue that information can also be added to an existing annotation scheme so that it supports additional theory-specific annotations. We also argue that such a transformation is useful for improving and extending the original annotation scheme with respect to both ambiguous annotation and annotation errors. We show this by analysing problems that arise when generating dependency information from the constituent-based TüBa-D/Z.
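The constituent-to-dependency transformation mentioned above is commonly done with head-finding rules: each phrase category names which child is its head, non-head children then depend on the head's lexical head. The following is a minimal sketch of that general idea, not the TüBa-D/Z conversion itself; the tree encoding and the head rules are hypothetical toy examples.

```python
# Hypothetical head rules: phrase label -> preferred head-child labels.
HEAD_RULES = {
    "S": ["VP", "V"],
    "VP": ["V"],
    "NP": ["N"],
    "PP": ["P"],
}

def lexical_head(tree):
    """tree is (label, children); a leaf has a string word as its 'children'."""
    label, children = tree
    if isinstance(children, str):      # preterminal: its own word is the head
        return children
    preferred = HEAD_RULES.get(label, [])
    # Pick the first child matching a head rule, else fall back to the last child.
    head_child = next((c for c in children if c[0] in preferred), children[-1])
    return lexical_head(head_child)

def to_dependencies(tree, deps=None):
    """Collect (dependent_word, head_word) arcs from a constituent tree."""
    if deps is None:
        deps = []
    label, children = tree
    if isinstance(children, str):
        return deps
    head = lexical_head(tree)
    for child in children:
        child_head = lexical_head(child)
        if child_head != head:         # non-head children depend on the head
            deps.append((child_head, head))
        to_dependencies(child, deps)
    return deps

# "Anna sieht den Mann" as a toy constituent tree
tree = ("S",
        [("NP", [("N", "Anna")]),
         ("VP", [("V", "sieht"),
                 ("NP", [("ART", "den"), ("N", "Mann")])])])
arcs = to_dependencies(tree)
```

The ambiguities and errors the paper discusses surface exactly here: when no head rule fires (the `children[-1]` fallback) or when the constituent annotation underdetermines the intended head, the resulting dependency arcs are unreliable.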
This paper reports on the SYN-RA (SYNtax-based Reference Annotation) project, an on-going project of annotating German newspaper texts with referential relations. The project has developed an inventory of anaphoric and coreference relations for German in the context of a unified, XML-based annotation scheme for combining morphological, syntactic, semantic, and anaphoric information. The paper discusses how this unified annotation scheme relates to other formats currently discussed in the literature, in particular the annotation graph model of Bird and Liberman (2001) and the pie-in-the-sky scheme for semantic annotation.
Tree-local MCTAG with shared nodes : an analysis of word order variation in German and Korean
(2004)
Tree Adjoining Grammars (TAG) are known not to be powerful enough to deal with scrambling in free word order languages. The TAG-variants proposed so far in order to account for scrambling are not entirely satisfying. Therefore, an alternative extension of TAG is introduced based on the notion of node sharing. Considering data from German and Korean, it is shown that this TAG-extension can adequately analyse scrambling data, also in combination with extraposition and topicalization.
This paper sets up a framework for LTAG (Lexicalized Tree Adjoining Grammar) semantics that brings together ideas from different recent approaches addressing some shortcomings of TAG semantics based on the derivation tree. Within this framework, several sample analyses are proposed, and it is shown that the framework makes it possible to analyze data that have been claimed to be problematic for derivation-tree-based LTAG semantics approaches.
LTAG semantics for questions
(2004)
This paper presents a compositional semantic analysis of interrogative clauses in LTAG (Lexicalized Tree Adjoining Grammar) that captures the scopal properties of wh- and non-wh-quantificational elements. It is shown that the present approach derives the correct semantics for examples claimed to be problematic for LTAG semantic approaches based on the derivation tree. The paper further provides an LTAG semantics for embedded interrogatives.
In April 2002 the European Central Bank (ECB) and the Center for Financial Studies (CFS) launched the ECB-CFS Research Network to promote research on “Capital Markets and Financial Integration in Europe”. The ECB-CFS research network aims at stimulating top-level and policy-relevant research, significantly contributing to the understanding of the current and future structure and integration of the financial system in Europe and its international linkages with the United States and Japan. This report summarises the work done under the network after two years. Over time the network formed a coherent and growing group of researchers interested in the integration of European financial markets, while using light organisational structures and budgets. The members of this evolving group met repeatedly at the events organised by the network to present the latest results of their research and to share views on policy options. In this sense, the “network of people” intended at the start was created. Overall, the network aroused great interest, as leading academic researchers, researchers from the main policy institutions and high-level policy makers participated actively in it by presenting research results, through speeches and in policy panels. It also stimulated a new research field on securities settlement systems, an area of high policy relevance and interest to the ECB that had not attracted much interest in the research community beforehand. Also, the network seems to have triggered several related outside initiatives by international institutions, such as the IMF or the OECD. During its first two years the network was organised around three workshops and a final symposium on 10-11 May 2004. 
To focus research resources and to ensure medium-term policy relevance, a limited number of areas have been given top priority: bank competition and the geographical scope of banking; international portfolio choices and asset market linkages between Europe, the United States and Japan; European bond markets; European securities settlement systems; and the emergence and evolution of new markets in Europe (in particular start-up financing markets). In order to stimulate further research focused on the priority fields of the network, the ECB Lamfalussy research fellowships were established. These fellowships sponsor projects proposed by young researchers, both advanced doctoral students and younger professors. Five Lamfalussy fellowships were granted in 2003 and five more in 2004. The first papers from this program have already been issued in the ECB working paper series or are forthcoming. One of them won the prize for the best paper written by a Ph.D. student at the 2004 European Finance Association Meetings in Maastricht. Results of the network in the five top priority areas can be summarised as follows: Bank competition and the geographical scope of banking. First, integration does not appear to be very advanced in many retail banking markets. Second, some of the inherent characteristics of traditional loan and deposit business constrain the cross-border expansion of commercial banking, even in a common currency area. Hence, the implementation of some policies to foster cross-border integration in retail banking may be ineffective. Third, theoretical research suggests that supervisory structures may not be neutral towards further European banking integration. Finally, a stronger role of area-wide competition policies could be beneficial for further banking integration. This would also stimulate economic growth, as more competition in the banking sector induces financially dependent firms to grow more. European bond markets. 
While the government bond market has integrated rapidly with the EMU convergence process, its full integration has not yet been achieved. The introduction of a common electronic trading platform reduced transaction costs substantially, but yield spreads of long-term sovereign bonds of the euro area are still heterogeneous. This is largely explained by different sensitivities to an international risk factor, whereas liquidity differentials only play a role in conjunction with this latter factor. Somewhat surprisingly in this context, the dynamically developing corporate bond market exhibits a relatively high level of integration. There is also increasing evidence that the introduction of the euro has contributed to a reduction in the cost of capital in the euro area, in particular through the reduction of corporate bond underwriting fees. As a result, firms may wish to increase bond financing relative to equity financing. The development of a larger corporate bond market is also important for monetary policy. For example, US evidence suggests that the rating of corporate bonds may contribute to the persistence of recessions, as rating agencies' policies affect firms asymmetrically in their access to the bond market over the business cycle. US evidence also suggests that liquidity conditions in stock and bond markets tend to be positively correlated. European securities settlement systems. European securities settlement infrastructures are highly fragmented and further integration and/or consolidation would exploit economies of scale that could greatly benefit investors. It is not clear, however, whether direct public intervention in favour of consolidation would lead to the highest level of efficiency, for example because of the existence of strong vertical integration between trading and securities platforms (“silos”). In contrast, promoting open access to clearing and settlement systems could lead to consolidation and the highest level of efficiency. 
Finally, regarding concerns about unfair practices by Central Securities Depositories (CSDs) toward custodian banks, regulatory interventions favouring custodian banks should be discouraged, as long as CSDs are not allowed to price discriminate between custodian banks and investor banks. The emergence and evolution of new markets in Europe (in particular start-up financing markets). While fairly well integrated, “new markets” and start-up financing are less developed and integrated in Europe than in the United States. However, new markets and venture capitalists are the most important intermediaries for the financing of projects with high risk but with potentially very high return. The analysis carried out within the network reveals that European start-up financiers are mostly institutional investors, while US venture capitalists are mostly rich individuals. Also, new markets are essential for the development of start-up finance in Europe, as they provide an exit strategy for start-up financiers who can then sell new successful projects using initial public offerings. Finally, the legal framework affects the development of venture capital firms. For example, very strict personal bankruptcy laws constrain early stage entrepreneurs, reducing demand for venture capital finance. International portfolio choices and asset market linkages between Europe, the United States and Japan. At a global scale, asset market linkages have increased recently. For example, major economies such as the United States and the euro area have become more financially interdependent. This phenomenon can be observed in stock and bond markets as well as in money markets, where the main direction of spillovers has recently been from the US to the euro area. Country-specific shocks now play a smaller role in explaining stock return variations of firms whose sales are internationally diversified. 
Increases in firm-by-firm market linkages are a global phenomenon, but they are stronger within the euro area than in the rest of the world. Various other phenomena also increase market linkages and therefore the likelihood that financial shocks spread across countries. One example is the use of global bonds. Finally, the more direct access that unsophisticated investors nowadays have to financial markets may increase volatility. Other areas. Financial integration affects financial structures, but it does not need to lead to their convergence across countries. Financial structures matter for growth, as market-oriented financial systems benefit all sectors and firms, whereas bank-based systems primarily benefit younger firms that depend on external finance. Moreover, good corporate governance increases firms’ value. In particular, the dual board system, where the monitoring and advising roles of the board of directors are separated, is found to dominate the single board structure. Therefore, the further development of the European single market should strongly require good corporate governance. In general, well designed institutions foster entrepreneurial activity, partly by relaxing capital constraints. The results of the network clearly illustrated the substantial effects the introduction of the euro had on euro area financial markets. In addition to the effects on bond markets, stock markets and the cost of capital summarised above, research produced showed that the single currency had its strongest effects on money markets, whose unsecured segment is now completely integrated. Without any doubt the euro generally enhanced the liquidity and efficiency of euro area financial markets, and ongoing initiatives such as the European Union’s Financial Services Action Plan will help to continue this process. In sum, in the first two years the network has established itself as the hub for the research debate on European financial integration. 
Some of the best papers produced by the network, leading to the conclusions mentioned above, are currently being considered for publication in two special issues of academic journals. An issue of the Oxford Review of Economic Policy on “European financial integration” is published contemporaneously with this report, and an issue of the Review of Finance is planned for next year. The current policy context, the gradual progress of integration as well as the creation of other related non-ECB or non-CFS initiatives on financial integration suggest that this topic will remain high on the agendas of policy makers and academics for the years to come. Therefore, the ECB Executive Board and the CFS decided to continue the network, refocusing its priorities. Three priority areas have been added: 1) the relationship between financial integration and financial stability, 2) EU accession, financial development and financial integration, and 3) financial system modernisation and economic growth in Europe. These three areas have become particularly important at the current juncture, but have not received particularly strong attention in the first two years of the network. For example, the area of financial stability research was highlighted by the ECB research evaluators as an area deserving further development. Moreover, despite the results found in the first two years of the network, new developments remain to be further explored in the earlier priority areas. A three-year extension is envisaged, running from after the May 2004 symposium until 2007, with two events to be held per year. The three-year period is long enough to consider the first effects of the Financial Services Action Plan. It also constitutes a realistic horizon for the ambitious agenda implied by the three new priorities. The generally light organisational structure and working of the network will not be changed. 
In addition, given the value of the Lamfalussy fellowship research program in inducing further research in the areas of the network, the program has also been extended for all the research topics in the area of the network.
Sensilla styloconica are elongated microscopically conspicuous chemo-mechano receptors found exclusively at the tongue tip of many adult Lepidoptera. These unique proboscis sensilla were comparatively studied using the scanning electron microscope in 107 species of North American and tropical butterflies. Focus was on 76 species of North American Nymphalidae representing 45 genera and 11 subfamily groups, and 15 species of tropical Nymphalidae representing additional genera and subfamilies. Observations of adult nymphalid feeding behaviour and food preference for correlation with morphological characteristics were made largely in North America and substantially in the Neotropics, where bait traps were used in conjunction with aerial netting. The tongue tips of 16 additional species representing 5 more butterfly families were also examined for the presence and morphological characteristics of sensilla styloconica.
The catalog of the mosses of Japan compiled by the author in 1991 has been revised. This new catalog lists all names of genera and species of mosses described or reported from Japan, based on all literature available to the author up to the end of January 2004. The new catalog comprises 1,135 species of mosses belonging to 332 genera. These taxa are listed in alphabetical order. Each valid epithet is followed by author citation, literature, distributional area in Japan, and Japanese name.
As for the relation between Islam and pluralism, the matter is somewhat complicated. There are some verses in the Koran in favour of pluralism and, at the same time, some verses against it. Among the sayings of the Prophet Muhammad, as in the Koranic verses, we come across statements both favourable and unfavourable to non-Muslims in particular contexts. In other words, we find both positive and negative statements about Jews and Christians in different circumstances. Among Muslim scholars the complexity persists: we find both positive and negative stances, so it is difficult to identify a standard or official view on this issue. However, we should point out that Islam recognizes all the sacred (Semitic) books and their messages. It accepts all the prophets of those traditions. It defines itself as the last and perfect religion of the Semitic tradition and states that no religion other than itself will be accepted from anybody. It criticizes both Jews and Christians, especially for their failure to uphold the Oneness of God, tawhid, and to preserve the authenticity of their scriptures from interventions. This exclusivist aspect of Islam, which many conservative scholars have constructed by putting together various evidences from the Koran, is generally accepted by Muslims.
Religious anthropology studies the origins, evolution and functions of religions. The discipline, which researches religious beliefs and rituals comparatively from cross-cultural perspectives, tries to illuminate the belief world of mankind. Religion, as a term, can be defined as "believing in, as well as worshipping, supernatural powers and/or beings by individuals who are emotionally or consciously devoted to them" (Örnek 1988: 127). There have been a number of theories so far which try to explain the origins and the evolution of religion. In these theories, fetishism, cults of nature, animism, totemism, dynamism, manism, magic, polytheism and monotheism, as well as certain physiological phenomena, have been singled out as evolutionary stages and forms of belief (Evans-Pritchard 1998: 124). All of these theories share a so-called "progressive" and/or "unilinear" perspective, maintaining that religion has passed through successive stages and that communities have developed from primitiveness to civilization. They argue that there has been only one single line of progress, and that all communities are bound to go through the same evolutionary stages.
Untouchability and inter-caste relations in rural India : the case of southern Tamil villages
(2004)
Justice and equality are two subjects often talked about by most nationalists and leaders of various political and ideological streams across the world, including India. India was at the forefront in condemning racial discrimination, particularly apartheid, and also the influence of the superpowers on the internal affairs of independent nations. Her commitment to secure her citizens' freedom, justice, equality and fraternity is reflected in the very preamble of the Indian Constitution. Towards achieving these challenging goals, special provisions have also been made in the Constitution to protect and promote the interests of the most oppressed section of Indian society, traditionally known as the Untouchables and constitutionally as the Scheduled Castes. These provisions are expected to alter the given unjust distribution of power (political and economic) and status (social) among different sections of people and thereby transform India into an egalitarian society. Given India's unequivocal commitment to secure its citizens these noble ideals, particularly for its most exploited and pilloried section, we shall attempt here to understand Indian villages, which host over 80 per cent of the Indian population, from the point of view of whether these villages patronise the institution of caste, which is in contravention of these ideals, or whether these "little republics" are ideal for realising the said goals and are thus to be preserved as they are, as claimed by many social reformers including Mahatma Gandhi. In the process, we shall also address the question of how caste has remained unchanged, how it controls social interaction between higher and lower caste groups, and how it accordingly perpetuates unequal control over power and status. Most importantly, we shall also ask whether all the Scheduled Castes (lower castes) treat their members as equals, or whether there is hierarchy, discrimination and the practice of untouchability even among them.
Safety concerns associated with the use of viral vectors in gene therapy applications have attracted considerable attention towards the development of nonviral vectors as alternatives for DNA delivery. While nonviral vectors are generally not associated with safety problems, they are still very inefficient compared to viral vectors and require significant improvements to approach the efficiency of their viral counterparts. Meanwhile, functional elements have become available that can be incorporated into nonviral vectors to improve their efficacy: ligands or single-chain antibody fragments that bind to cell surface receptors for increased and/or specific cellular uptake, endosome escape activities, and nuclear localization sequences (NLSs) that enhance transport of plasmid DNA into the nucleus. However, as gene delivery is a multistep process, the challenge is to incorporate several of these functional elements into a single nonviral vector system while retaining their specific activities. A promising method to attach such entities to plasmid DNA is the use of multifunctional fusion proteins that bind to DNA through a DNA-binding domain. In principle, two types of DNA-binding domains/proteins can be used to anchor additional functional domains or peptides to a plasmid: sequence-specific DNA-binding domains, described in the first part of this thesis, or domains that bind DNA independently of its sequence, exemplified in the second part of this work by a derivative of the human HMGB2 protein. The first fusion protein constructed and analyzed contained the E. coli LexA repressor as a sequence-specific DNA-binding domain. In addition, this DNA-carrier protein, termed TEL, included a bacterial translocation domain as an integrated endosome escape activity, and human TGF-α for specific targeting to the EGF receptor (EGFR). TEL was expressed in E. coli and purified under both native and denaturing conditions.
Purified, denatured TEL was refolded and subsequently shown to bind specifically to EGFR-expressing cells. However, inclusion of TEL in complexes of plasmid DNA and poly-L-lysine (pL) did not lead to increased gene delivery into EGFR-expressing COS-1 cells, most likely due to the absence of DNA-binding activity of the LexA moiety in TEL. In contrast, native TEL was able to interact specifically with DNA. Nevertheless, since this interaction was rather weak, and refolding of denatured TEL had not resulted in functional activity of all of its protein domains, it seemed unlikely that fusion proteins containing LexA would exhibit gene transfer capabilities superior to those of similar DNA-carrier proteins previously constructed in our group. Further work therefore focused on the use of the E2C-Sp1C protein as an alternative sequence-specific DNA-binding domain. This artificial zinc-finger protein was fused to the single-chain antibody fragment scFv(FRP5), directed against the human ErbB2 growth factor receptor. The resulting 5-E2C fusion protein was expressed in E. coli and purified under native and denaturing conditions. Refolded and native 5-E2C were found to bind specifically to ErbB2-expressing cells, indicating that scFv(FRP5) in 5-E2C was functional in both preparations. In contrast, whereas refolded 5-E2C bound DNA only weakly, significant DNA binding was observed for native 5-E2C. In addition, it could not only be shown that the interaction of native 5-E2C with DNA containing its recognition sequence was specific, but also that this protein was able to bind DNA and recombinant ErbB2 simultaneously, demonstrating the functionality of both domains in native 5-E2C. Despite these encouraging results, the inclusion of native 5-E2C in pL- or polyethyleneimine (PEI)-DNA complexes did not lead to a 5-E2C-specific enhancement of gene transfer efficiency, irrespective of the presence of the endosome-disruptive reagent chloroquine during transfection.
In the second part of this thesis an alternative approach for the development of DNA-carrier proteins for nonviral gene delivery is described, based on human HMGB2, a DNA-binding protein without sequence specificity. HMGB2 contains an acidic C-terminus that has been found to decrease the affinity of the protein for DNA. Therefore, this C-terminal tail was deleted, resulting in an HMGB2 variant consisting of amino acids 1-186. HMGB2186, purified under native conditions from E. coli lysates, was able to interact with DNA and bound to the surface of different cell lines. Importantly, after binding to plasmid DNA, HMGB2186 mediated gene delivery into COS-7 cells with higher efficiency than pL. In addition, HMGB2186-mediated gene transfer was strongly enhanced in the presence of chloroquine, indicating that the endocytic pathway was involved in cellular uptake. To improve internalization and intracellular routing of HMGB2186 as a DNA carrier, a derivative containing the TAT47-57 cell-penetrating peptide (CPP), reported to facilitate cell entry independently of endocytosis, was constructed. Since this peptide also contains an NLS, an additional HMGB2186 variant containing the SV40-NLS was constructed to investigate the effect of a peptide that has only nuclear localizing properties. Interestingly, the resulting TAT-HMGB2186 and SV40-HMGB2186 fusion proteins displayed DNA-binding activities similar to HMGB2186, but mediated gene delivery into different cell lines clearly more efficiently than the parental molecule. Furthermore, the efficacy of both fusion proteins was enhanced markedly in the presence of chloroquine, an indication that endocytosis was involved in the transfection process mediated by these proteins. This suggests that the increased transfection efficiency observed for TAT-HMGB2186 was more likely due to the NLS function present in the TAT47-57 peptide than to its 'cell-penetrating properties'.
Finally, the incorporation of functional peptides derived from human proteins into HMGB2186 was investigated. An uncharged CPP originating from Kaposi-FGF, reported to facilitate efficient cellular uptake of fused protein domains in an endocytosis-independent manner, was fused to HMGB2186 together with the SV40-NLS. Interestingly, the resulting KSV40-HMGB2186 fusion protein bound DNA similarly to the previously tested DNA-carrier proteins, but did not mediate enhanced transfection compared to HMGB2186. In addition, the importin-β-binding (IBB) domain derived from human importin-α2 was investigated as a component of a DNA-carrier protein. Since the IBB domain can function as an NLS, it was fused to HMGB2186, resulting in the DNA-carrier protein IBB-HMGB2186. Although IBB-HMGB2186 bound DNA in a similar manner as the other HMGB2186 derivatives, gene delivery mediated by IBB-HMGB2186 was only as effective as HMGB2186-mediated transfection, suggesting no significant role of the IBB domain. However, addition of chloroquine resulted in a remarkable enhancement of IBB-HMGB2186-mediated gene transfer, which was now more efficient than with any other HMGB2186 variant tested, and not much lower than gene transfer mediated by PEI, one of the most efficient transfection reagents available to date. To enhance nonviral gene delivery even further, the HMGB2186-based DNA-carrier proteins described in this thesis might now serve as building blocks for novel fusion proteins that include additional complementing activities. In this respect it seems particularly promising that, under conditions of effective endosome escape, IBB-HMGB2186, which consists entirely of protein domains of human origin, was the most efficient of all proteins tested in this work.
RcsB is a central transcriptional regulator in enteric bacteria that is involved in exopolysaccharide (EPS) biosynthesis, cell division, and the expression of osmoregulated genes, and it regulates at least 20 other genes and operons. It is a member of a phosphorelay system, and signal transfer is mediated by phosphorylation through the RcsC/YojN phosphorelay. Modification of RcsB with the phosphorylation mimic BeF3-, as evidenced by conformational changes and altered DNA-binding properties, yielded phosphorylated RcsB derivatives of sufficient stability. Both the wild-type RcsB protein and the mutant RcsBD11A could be modified with BeF3-. Non-phosphorylated RcsB has been shown to bind as a heterodimer with the coinducer RcsA at the conserved RcsAB box in Rcs-regulated promoters. In this study, it has been shown that the modification of RcsB by BeF3- (i) has a negative effect on its homodimerization and (ii) abolishes complex formation of RcsAB with the RcsAB box, as shown by EMSA and the SPR technique. All effects were found to be reversible upon increasing the NaF concentration in the assays, presumably owing to the formation of the inactive BeF42- salt. The hypothesis that RcsB is modified by BeF3- was also supported by experiments with other phosphodonors such as ATP and acetyl phosphate, both of which showed the same negative effect on DNA binding by the RcsAB heterodimer, giving evidence that BeF3- can be used as a phosphorylation mimic. In addition, BeF3- was found to be a better phosphorylating agent than ATP and acetyl phosphate. This is the first evidence that phosphorylation of RcsB might have a negative effect on the activation of RcsAB-regulated operons. Autophosphorylation of RcsB proves that it has the ability to take up phosphoryl groups; the mutant protein also became autophosphorylated, though with lower efficiency or stability than the wild-type protein. RcsB probably takes up phosphoryl groups through the RcsC -> YojN -> RcsB phosphorelay pathway.
To study the interaction among the proteins in this pathway, fluorescence spectroscopy, NMR spectroscopy, and an in vivo β-galactosidase assay were performed using two domains of RcsC (T-RcsC and R-RcsC), the HPt domain of the protein YojN, and RcsB. The interactions between R-RcsC/YojN-HPt and YojN-HPt/RcsB support the proposed pathway of phosphorylating RcsB. RcsB might also be phosphorylated, in a cross-talk mechanism, by YojN-HPt that has been phosphorylated by sensor kinases other than RcsC. Phosphorylation of RcsB by YojN-HPt probably has the same negative effect on cps induction as observed for the effect of BeF3- on DNA binding by the RcsAB heterodimer.
The biomarker record in two different lakes in central Europe, Lake Albano and Lake Constance, is used to reflect environmental changes and lake system response during the Late Glacial and Holocene. Extractable organic compounds in lake sediments that can be assigned to their biological source (biomarkers) function as fingerprints of past aquatic or land plant organisms. Using gas chromatography coupled with mass spectrometry, 21 different biomarkers (predominantly steroids and triterpenoids) as well as a variety of n-alkanes, n-alkanols, and n-alkanoic acids could be identified in the sediment records of Lake Albano and Lake Constance. In the Holocene sediments of Lake Albano, the distribution of biomarkers such as dinosterol (dinoflagellates), isoarborinol, and diplopterol (aquatic organisms) indicates three biomarker zones: The period between 0-3,800 years BP (zone 3) is characterized by high concentrations of these biomarkers and others such as tetrahymanol and diploptene. Conversely, zone 2 (3,800-6,500 years BP) shows very low concentrations of all autochthonous biomarkers. In zone 1 (6,500-11,480 years BP), dinosterol, isoarborinol, and diplopterol remain at relatively high levels, whereas diploptene and tetrahymanol display comparatively low concentrations. The results suggest at least two distinct changes in the predominance of primary producers during the Holocene, which are related to changes in the lake system such as lake mixing and water column stratification. This interpretation is consistent with previous investigations of Lake Albano sediments including pigment and hydrogen index data (Ariztegui et al., 1996b; Guilizzoni et al., 2002). Allochthonous biomarkers such as long-chain n-alkanes, amyrenones and friedelin indicate a development from forest to a more open landscape from 6,000 and 5,000 years BP onwards, respectively.
After a period of high concentrations during the first half of the Holocene, all biomarkers derived from deciduous trees exhibit relatively low values until around 1,000 years BP. Again, this is consistent with results from previous pollen investigations (Ariztegui et al., 2000). The sediment core from Upper Lake Constance comprises the Late Glacial and Holocene. It was analysed for biomarkers and inorganic tracers in order to compare the biomarker results with other proxy data from the same core. Magnetic susceptibility (MS) was measured to obtain a high-resolution stratigraphic framework for the core and further information about changes in the proportions of allochthonous and autochthonous input. Enhanced concentrations and accumulation rates of dinosterol (biomarker for dinoflagellates) and biogenic calcite give evidence of increasing lake productivity at the beginning of the Holocene, followed by a decrease in bioproductivity after around 7,000 years BP. Younger Dryas sediments are characterized by low amounts of both dinosterol and biogenic calcite, indicating low productivity. The comparison of the concentrations and accumulation rates of β-sitosterol and stigmastanol with parameters reflecting lake productivity suggests that both steroids in Lake Constance sediments are mainly derived from terrigenous sources. Biomarkers as well as concentrations and accumulation rates of allochthonous inorganic compounds such as titanium, magnesium and strontium indicate a slightly enhanced allochthonous input after 8,500 years BP. A significant increase in erosive matter input from enhanced soil erosion is not observed before 4,000 years BP. This can be attributed to the combined effects of an increase in precipitation as a result of climatic deterioration and anthropogenic deforestation, which is consistent with observations from other lakes in Central Europe.
The MS record of Lake Constance confirms these results by tracing the climatically induced shifts towards more intense bioproduction (low MS caused by increased calcite deposition) during the 'climatic optimum'. This is followed by increasing input of terrigenous sediment compounds during colder and wetter periods, which leads to higher MS values in the lake sediments. The occurrence of tetrahymanol in Lake Constance sediments calls into question the unambiguous use of tetrahymanol as an indicator of water column stratification: anaerobic organic macroaggregates within the oxygenated, photic zone of the water column have to be considered as a possible habitat for anaerobic microorganisms containing tetrahymanol. The direct comparison of two very different lakes, Albano and Constance, with respect to biomarkers indicating climatic or environmental change contributes to current biomarker research and to a better understanding of biomarkers in lacustrine sediments.
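The biomarker accumulation rates referred to above are conventionally derived by combining the measured concentration per gram of dry sediment with the linear sedimentation rate and the dry bulk density. A minimal sketch of that standard arithmetic, with a hypothetical function name and purely illustrative numbers (none are taken from the study):

```python
def accumulation_rate(conc_ug_per_g, sed_rate_cm_per_yr, dry_bulk_density_g_per_cm3):
    """Biomarker accumulation rate in ug cm^-2 yr^-1.

    Standard conversion: concentration (ug per g dry sediment)
    x linear sedimentation rate (cm/yr) x dry bulk density (g/cm^3).
    """
    return conc_ug_per_g * sed_rate_cm_per_yr * dry_bulk_density_g_per_cm3

# Illustrative values: 2.5 ug/g dinosterol, 0.1 cm/yr sedimentation,
# 0.8 g/cm^3 dry bulk density -> approximately 0.2 ug cm^-2 yr^-1
rate = accumulation_rate(2.5, 0.1, 0.8)
```

This conversion is why the text reports both concentrations and accumulation rates: the latter normalize for changes in sedimentation, so a concentration drop caused purely by dilution with terrigenous material does not masquerade as a productivity change.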
Much has been written on the success of the Indian software industry, enumerating systemic factors such as first-class higher education and research institutions, both public and private, low labour costs, and stimulating (state) policies. However, although most studies analyzing the 'Indian' software industry essentially cover the South (and West) Indian clusters, this issue has not been tackled explicitly. This paper supplements the economic geography explanations mentioned above with an additional factor, social capital, which is important not only within the region but also in transnational (ethnic) networks linking Indian software clusters with Silicon Valley. In other words, spatial proximity is complemented by cultural proximity, thereby extending the system of innovation. The main hypothesis is that some Indian regions are more apt for economic development and innovation due to their higher affinity for education and learning, as well as their more general openness, which has been a main finding of my interviews. In addition, the transnational networks of Silicon Valley Indians seem to be dominated by South Indians, thus corroborating the regional clustering of the Indian software industry. JEL Classification: O30, R12, Z13, L86
In the age of the Internet and digital knowledge transfer, historical scholarship too has come to appreciate photography as source material for documenting historical living conditions and events. Besides the humanities aspect of such photographic documents, there is also a technical and conservational aspect.
Anomalous monism and mental causality : on the debate of Donald Davidson’s philosophy of the mental
(2004)
The English version of the first chapter of Erwin Rogler and Gerhard Preyer: Materialismus, anomaler Monismus und mentale Kausalität. Zur gegenwärtigen Philosophie des Mentalen bei Donald Davidson und David Lewis (2001). "Anomaler Monismus und Mentale Kausalität. Ein Beitrag zur Debatte über Donald Davidsons Philosophie des Mentalen" is a contribution to the current debates on the philosophy of the mental and on mental causality, initiated by Donald Davidson's article "Mental Events" (1970). The intent of the English version is to respond to the controversy among American, British and Australian philosophers in the context of a global exchange of ideas on problems of understanding the mental. Contents: 1. Preliminary Remarks; 2. The Critique of Property-Epiphenomenalism and Counterarguments: (a) The Enlargement of Nomological Reasoning, (b) The Counterfactual Analysis, (c) Supervenient Causality; 3. Are Mental Properties Real or Unreal (Fictive)? Abstract: Things and events are fundamental entities in Davidson's ontology. Less distinct is the ontological status of properties, especially of mental types. Despite some eliminative allusions, there are weighty reasons to understand Davidson's philosophy of mind as including intentional realism. With it, the question of mental causality arises. There are two striking solutions to this problem: the epiphenomenalism of mental properties and the downward causation of mental events. Davidson can accept neither. He claims to justify the mental as supervenient causality and thus to integrate it into physicalism (his version of monism). But his argument at best proves the explanatory, not the causal, relevance of mental properties. For this and other reasons, Davidson fails to achieve the aspired synthesis of a sufficiently strong physicalism and the autonomy of the mental; a project whose realization is in any case hard to achieve.
Background and Aim: In Germany, the discharge medication is usually reported to the general practitioner (GP) initially by a short report (SR)/notification handed over to the patient and later by a more detailed discharge letter (DL) from the hospital.
Material and Method: We asked N=536 GPs (from Frankfurt/Main and Luebeck) about the typical format in which their patients' discharge medication is reported by the local hospitals. The questionnaire comprised 26 items covering (1) the designation of the medication (brand name, generic name) in SR and DL, (2) further specifications, e.g. possibilities of generic substitution or supervision of sensitive medications, (3) reasons why GPs do not follow the hospitals' recommendations, and (4) possibilities for improving the medication-related communication between GPs and hospitals.
Results: 39% of the GPs responded sufficiently to the questionnaire. The majority of the GPs (82%) stated that the SR often or always gives only brand names, and that the generic name or any further information on generic substitution is seldom or never available. 65% of the responders stated that even in the DL only brand names are given. Only 41% of the responders stated that further treatment-relevant specifications are given often or always. 95% responded that new medications or changes to the customary medication are seldom or never explained in the DL and that GPs are not explicitly informed about relevant medication changes. 58% of the responders quoted economic reasons for re-adjustment of the discharge medication, e.g. by generic substitution. The majority of responders (83%) rated pre-discharge information about the medication (e.g. via fax) as useful or very useful, and 54% a hotline to a relevant contact person in the hospital when treatment problems emerge. 67% of the responders were in favour of regular meetings between GPs and hospital doctors on current pharmacotherapy.
Conclusion: Our survey points to marked deficiencies in the reporting of discharge medication to GPs.
Conflict of interest: None
Background: The detection of the new coronavirus (CoV), the causative agent of the severe acute respiratory syndrome (SARS), for diagnostic purposes is still a critical step in the prevention of secondary hospital infections. PCR is the fastest and most sensitive method for SARS diagnostics and was published very early after the description of the new pathogen by different groups. To evaluate the quality and sensitivity of the SARS PCR performed in diagnostic laboratories all over the world, an external quality assurance (EQA) for SARS PCR was initiated by the WHO, the European Network for Diagnostics of "Imported" Viral Diseases (ENIVD) and the Robert Koch-Institut. Methods: Ten samples of inactivated SARS CoV strains isolated in Frankfurt and Hong Kong, in different dilutions, together with negative controls, were prepared. The freeze-dried samples were sent by mail to 62 different laboratories in 37 countries: Europe and Israel (35), Asia (11), the Americas (11), Australia and New Zealand (4) and Africa (1). The results were returned by email or fax 1 week (13), 2 weeks (14), 3 weeks (6) and later (29) after receipt of the material, which does not at all reflect the possible speed of this fast method; however, this was not considered in the evaluation of this first SARS EQA. Results: 44 laboratories showed good or excellent results (26 with 100% and 18 with 90% correct results), and even the 14 laboratories which achieved only 80% (10) or 70% (4) correct results were mostly lacking in sensitivity. The results of the remaining 4 laboratories show basic problems with regard to sensitivity, specificity and consistency of results, which must be overcome as soon as possible. 4 laboratories seem to have problems with specificity, finding a positive signal in negative samples. The different methods used by the participating laboratories for preparation of the SARS CoV genome and for the diagnostic PCR test procedure will be discussed in more detail in the presentation.
Conclusion: In contrast to previous EQAs for Ebola, Lassa and orthopoxviruses, the quality of the PCR results was rather good, which might be due to the early publication and distribution of well-developed PCR methods. An EQA for the evaluation of SARS-specific serology is still ongoing; first results will be available at the beginning of April 2004.
The continuously growing natural killer (NK) cell line NK-92 is highly cytotoxic against malignant cells of various origins without affecting normal human cells. Based on this selectivity, the potential of NK-92 cells for adoptive therapy is currently being investigated in phase I clinical studies. To further enhance the antitumoral activity of NK-92 cells and expand the range of tumor entities suitable for NK-92-based therapies, we have generated, by transduction with retroviral vectors, genetically modified NK-92 cells expressing chimeric antigen receptors specific either for the tumor-associated ErbB2 (HER2/neu) antigen or for the human Epithelial Cell Adhesion Molecule (Ep-CAM). Both antigens are overexpressed by many tumors of epithelial origin. The chimeric antigen receptors consist of either the ErbB2-specific scFv(FRP5) antibody fragment or the Ep-CAM-specific scFv(MOC31), a flexible hinge region derived from CD8, and the transmembrane and intracellular regions of the CD3 zeta chain. Transduced NK-92-scFv(FRP5)-zeta or NK-92-scFv(MOC31)-zeta cells express high levels of the fusion proteins on the cell surface, as determined by FACS analysis. In europium release assays, no difference in the cytotoxic activity of NK-92 and transduced NK-92 cells towards ErbB2- or Ep-CAM-negative targets was found. However, even at low effector-to-target ratios, transduced NK-92 cells specifically and efficiently lysed established ErbB2- or Ep-CAM-expressing tumor cells that were completely resistant to the cytolytic activity of parental NK-92 cells. Similarly, ErbB2-positive primary breast cancer cells isolated from pleural effusions of patients with recurrent disease were selectively killed by NK-92-scFv(FRP5)-zeta. In an in vivo model in immunodeficient mice, treatment with retargeted NK-92-scFv(FRP5)-zeta, but not parental NK-92 cells, resulted in markedly delayed growth of ErbB2-transformed cancer cells.
These results demonstrate that efficient retargeting of NK-92 cytotoxicity can be achieved, and might allow the generation of potent cell-based therapeutics for the treatment of ErbB2 and Ep-CAM expressing malignancies. This therapeutic approach might be applicable for a large variety of different cancers where suitable cell surface antigens have been identified.
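Cytotoxicity in release assays such as the europium release assays mentioned above is conventionally reported as percent specific lysis, computed from the measured release of labelled target cells relative to spontaneous and maximum release controls. A minimal sketch of that standard calculation (the counts below are hypothetical, not data from this study):

```python
def specific_lysis(experimental, spontaneous, maximum):
    """Percent specific lysis from release-assay counts.

    Standard formula: 100 * (experimental - spontaneous)
                          / (maximum - spontaneous),
    where 'spontaneous' is release from targets alone and
    'maximum' is release from fully lysed targets.
    """
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# Hypothetical counts: effector+target well 5500, targets alone 1000,
# detergent-lysed targets 10000 -> 50.0% specific lysis
lysis = specific_lysis(5500, 1000, 10000)
```

Subtracting spontaneous release in both numerator and denominator is what allows the comparison of parental and transduced NK-92 cells against the same targets: only label release above the targets' background counts toward effector-mediated lysis.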
The receptor tyrosine kinase ErbB2 (HER2) is overexpressed in multiple human tumors of epithelial origin. High ErbB2 expression is functionally involved in tumorigenesis and correlates with poor clinical prognosis. For immunotherapy of ErbB2 expressing tumors, we developed a strategy to supply the tumor cells with costimulatory activity. A bispecific fusion protein was constructed (BIg5), containing the IgV-like domain of huCD86, the CH2/CH3 domain of huIgG1 and the ErbB2-specific single chain antibody fragment scFv(FRP5). A similar fusion protein lacking the CD86 domain (Ig5) was used as a control. Upon binding of BIg5 to ErbB2 on tumor cells, these cells display CD86 on their surface and thus can deliver costimulatory signals for T-cell activation. In addition, NK cells could be activated by CD86 binding to CD28. BIg5 is secreted by eukaryotic cells as a homodimer with increased stability compared to monomers and possibly enhanced costimulatory activity due to crosslinking of CD28 on effector cells. By FACS analysis, specific binding of the scFv(FRP5) domain to ErbB2 as well as CD86 IgV binding to CTLA-4 could be demonstrated. Together with anti-CD3 antibody, BIg5 stimulates proliferation of human CD2-purified lymphocytes in vitro. After binding to ErbB2 on murine Renca-lacZ/ErbB2 tumor cells, about 50% of initially bound BIg5 is still present on the cell surface after 4 hours. For delivery of chimeric fusion proteins in vivo, we used syngeneic, stably transfected HC11 mammary epithelial cells continuously secreting the proteins. Inoculation of these bystander cells close to subcutaneously growing Renca-lacZ/ErbB2 tumors should provide a long-lasting source to achieve high local concentrations of BIg5 at the tumor site. In vivo HC11-BIg5 cells proved to be non-tumorigenic and secreted BIg5 for several weeks, causing a strong anti-BIg5 antibody response. 
Treatment of established Renca-lacZ/ErbB2 or ErbB2-negative Renca-lacZ tumors by peritumoral inoculation of either HC11-BIg5 or HC11-Ig5 cells led to rejection of all Renca-lacZ/ErbB2, but none of the Renca-lacZ tumors. HC11neo control cells had no effect on tumor growth. Rejection of ErbB2+ tumors led to long-term protection also against subsequent challenge with intravenously injected ErbB2- tumor cells. Intraperitoneal injection of bystander cells secreting the fusion proteins did not lead to tumor regression suggesting that high local concentrations at the tumor site are necessary to target ErbB2 on tumor cells and to overcome elimination of BIg5 or Ig5 by neutralizing antibodies. The CD86 IgV domain of BIg5 did not play a major role in the observed antitumoral immune response suggesting NK-cell mediated ADCC as the initial effector mechanism followed by activation of tumor specific T cells. Targeting of ErbB2 on tumor cells with antibody fusion proteins that interact specifically with the host immune system could be an efficient and specific approach for therapy of solid ErbB2+ tumors.
Tumor-specific T lymphocytes can be regarded as a highly effective mechanism for tumor rejection. A substantial number of T-cell-defined tumor antigens, including mutated oncoproteins and differentiation antigens, have been identified. However, while most spontaneous tumors appear to be antigenic, few are immunogenic. Activation of tumor-specific cytotoxic T cells (CTL) requires presentation of tumor antigens by professional antigen-presenting cells (APCs) via MHC I molecules. Due to their crucial role in T-cell activation, APCs are being exploited for active cancer immunotherapy. Present experimental strategies include the incubation of dendritic cells with synthetic, tumor-specific peptides to achieve uptake of tumor antigens and presentation in the context of MHC molecules. Alternatively, gene therapeutic approaches aim at the endogenous expression of tumor antigens in APCs upon transfer of suitable vector constructs. Our strategy for the presentation of tumor antigens by APCs is based on the intracellular delivery of tumor antigens as part of a fusion protein specifically targeted to APC cell surface receptors. We have constructed prototype molecules that contain a soluble fragment of CTLA-4 for cell binding via interaction with B7 molecules, genetically fused to a protein fragment derived from the tumor-associated antigen ErbB2. To improve uptake and direct the antigenic determinant preferentially to the MHC class I pathway, the translocation domain of the bacterial Pseudomonas exotoxin A was also included in one of these protein vaccines. In the parental toxin this protein domain facilitates escape from the endosomal compartment to the cytosol upon receptor-mediated endocytosis. Here we have investigated the in vitro cell binding activity of such reagents and their antitumoral activity in immunocompetent murine model systems. Specific binding to B7 molecules and uptake of bacterially expressed protein vaccines could be demonstrated.
Ex vivo restimulation with an ErbB2-derived peptide of splenocytes from Balb/c mice injected with the fusion proteins resulted in enhanced IFN-gamma production by T cells. Protective and therapeutic effects of ErbB2 protein vaccines were also investigated. Vaccinated animals were protected against subsequent challenge with syngeneic ErbB2 expressing tumor cells. Likewise, s.c. injection of ErbB2 protein vaccines in the vicinity of established tumors resulted in tumor rejection and long lasting protection indicating that immunological memory was induced. Our results suggest that chimeric proteins combining a tumor antigen and specific recognition of APCs in a single molecule are suitable for targeted delivery of antigens to professional APCs and might become valuable tools for cancer immunotherapy.
We study the phase diagram of dense, locally neutral three-flavor quark matter as a function of the strange quark mass, the quark chemical potential, and the temperature, employing a general nine-parameter ansatz for the gap matrix. At zero temperature and small values of the strange quark mass, the ground state of matter corresponds to the color-flavor-locked (CFL) phase. At some critical value of the strange quark mass, this is replaced by the recently proposed gapless CFL (gCFL) phase. We also find several other phases, for instance, a metallic CFL (mCFL) phase, a so-called uSC phase where all colors of up quarks are paired, as well as the standard two-flavor color-superconducting (2SC) phase and the gapless 2SC (g2SC) phase.
We discuss gapless colour superconductivity for neutral quark matter in β equilibrium at zero as well as at nonzero temperature. Basic properties of gapless superconductors are reviewed. The current progress and the remaining problems in the understanding of the phase diagram of strange quark matter are discussed.
During the past decade, processes associated with what is popularly, though perhaps misleadingly, known as globalization have come within the purview of anthropology. Migration and mobility, and the footloose or even rootless social groups that they produce, as well as the worldwide diffusion of commodities, media images, political ideas and practices, technologies and scientific knowledge are today on anthropology's research agenda. As a consequence, received notions about the ways in which culture relates to territory have been abandoned. The term transnationalisation captures cultural processes that stream across the borders of nation states. Anthropologists have been forced to revise the notion that transnationalisation would inevitably bring about a culturally homogenized world. Instead, we are witnessing greatly increasing cultural diversity. New cultural forms grow out of historically situated articulations of the local and the global. Rather than being leftover relics of traditional orders, these forms are decidedly modern, yet far from uniform. The essay engages the idea of the pluralization of modernities, explores its potential for interdisciplinary research agendas, and inquires into problematic assumptions underlying this new theoretical concept.
As of today, estimating interest rate reaction functions for the Euro Area is hampered by the short time span since the start of the single monetary policy. In this paper we circumvent the common use of aggregated data before 1999 by estimating interest rate reaction functions based on a panel including actual EMU Member States. We find that exploiting the cross-section dimension of a multi-country panel and accounting for cross-country heterogeneity in advance of the single monetary policy pays off with regard to the estimated reaction functions' ability to describe actual interest rate dynamics. We retrieve a panel reaction function which is demonstrated to be a valuable tool for evaluating episodes of monetary policy since 1999. JEL classification: E43, E58, C33
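The panel approach above can be illustrated with a minimal sketch: a Taylor-type reaction function with country fixed effects, estimated by pooled least squares on synthetic data. All coefficients, dimensions, and variable names here are hypothetical illustrations, not the paper's actual specification or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Taylor-type reaction function on a country panel:
#   i_{c,t} = alpha_c + beta_pi * inflation_{c,t} + beta_y * gap_{c,t} + eps
# with country-specific intercepts (fixed effects) alpha_c.
countries, periods = 5, 40
true_beta_pi, true_beta_y = 1.5, 0.5                  # assumed true values
alpha = rng.normal(3.0, 0.5, countries)               # country intercepts
infl = rng.normal(2.0, 1.0, (countries, periods))     # inflation
gap = rng.normal(0.0, 1.0, (countries, periods))      # output gap
i_rate = (alpha[:, None] + true_beta_pi * infl + true_beta_y * gap
          + rng.normal(0.0, 0.1, (countries, periods)))

# Design matrix: one dummy per country (fixed effects) + two regressors.
D = np.kron(np.eye(countries), np.ones((periods, 1)))  # (200, 5) dummies
X = np.column_stack([D, infl.ravel(), gap.ravel()])
y = i_rate.ravel()

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_pi_hat, beta_y_hat = coef[-2], coef[-1]
print(f"beta_pi ~ {beta_pi_hat:.2f}, beta_y ~ {beta_y_hat:.2f}")
```

Pooling the cross-section this way is what buys the extra identifying variation relative to a single aggregated pre-1999 time series; accounting for heterogeneity here amounts to letting the intercepts differ by country.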
This paper employs individual bidding data to analyze the empirical performance of the longer term refinancing operations (LTROs) of the European Central Bank (ECB). We investigate how banks’ bidding behavior is related to a series of exogenous variables such as collateral costs, interest rate expectations, market volatility and to individual bank characteristics like country of origin, size, and experience. Panel regressions reveal that a bank’s bidding depends on bank characteristics. Yet, different bidding behavior generally does not translate into differences concerning bidder success. In contrast to the ECB’s main refinancing operations, we find evidence for the winner’s curse effect in LTROs. Our results indicate that LTROs lead neither to market distortions nor to unfair auction outcomes. JEL classification: E52, D44
Even though tourism has been recognised as an important field for transnational research, there have been few attempts to place tourism in the context of transnational theories or to think about transnationalism from the perspective of tourists. I argue that researching tourist practices can add important aspects to transnational approaches. Mobility and interaction, for example, are the features backpackers choose to describe what their Round-The-World trip is about. A form of tourism is adopted, or created, that itself confronts many aspects of globalisation. First, there is the immense dynamic involved: backpackers try to cover as many places and experiences as possible, travelling at high speed. They take in all kinds of touristic experiences, ranging from beach to adventure to culture tourism. They do not focus on a specific area or country but travel the world, crossing national borders perpetually. In addition, they form a transnational network in which they interact with strangers of similar backgrounds (other backpackers, tourism professionals). This network helps them interact with people from different backgrounds (the so-called hosts or locals). Based on my research, backpackers forge a particular identity from these transnational practices, which I call globedentity. Globedentity denotes a type of identity construction that not only refers to the individual (I) but reflects the world (globe) in this identity. This globedentity is not fixed but is perpetually re-created and re-defined. It also embraces the increasing popular awareness of globalisation with which backpackers, coming from highly educated middle-class backgrounds, have particularly identified. Owing to the constant awareness of the latest global social, cultural and economic developments in these educated milieus, they know exactly which tools to use to become successful members of their societies.
I analysed the importance of shell size, shell shape, habitat preferences and availability, experienced climate, active dispersal and influence of Pleistocene glaciations for the range sizes of 37 Western Palaearctic Helicidae s.l. species for which a phylogeny was available. In both cross-species and phylogenetically controlled analyses, the range sizes were positively correlated to climatic tolerance, shell size, active dispersal and influence of Pleistocene glaciations. In addition, range sizes increased significantly with latitude. Multiple regression suggested that, predominantly, the influence of Pleistocene glaciations, tolerance to large annual temperature ranges and shell size influenced the distributional range sizes. Habitat preference, range and availability, active dispersal and shell shape explained no additional variance. The results suggest that the processes influencing species range size of the Helicidae s.l. are mainly related to the climatic shifts after the Pleistocene.
FGF-2, a potent multifunctional and neurotrophic growth factor, is widely expressed in the brain and upregulated in cerebral ischemia. Previous studies have shown that intraventricularly or systemically administered FGF-2 reduces the size of cerebral infarcts. Whether endogenous FGF-2 is beneficial for the outcome of cerebral ischemia has not been investigated. We have used mice with a null mutation of the fgf2 gene to explore the relevance of endogenous FGF-2 in brain ischemia. Focal cerebral ischemia was produced by occlusion of the middle cerebral artery (MCAO). We found a 75% increase in infarct volume in fgf2 knock-out mice versus wild-type littermates (P < 0.05). This difference in the extent of ischemic damage was observed after 24 h, and correlated with decreased viability in fgf2 mutant mice following MCAO. Increased infarct volume in fgf2 null mice was associated with a loss of induction of hippocampal BDNF and trkB mRNA expression. These findings indicate that signaling through trkB may contribute to ameliorating brain damage following ischemia and that bdnf and trkB may be target genes of FGF-2. Together, our data provide the first evidence that endogenous FGF-2 is important in coping with ischemic brain damage, suggesting fgf2 as one crucial target gene for new therapeutic strategies in brain ischemia.