It is my intention to make two major points in this paper: 1. The first has to do with finding a frame within which the modal expressions of one particular Ancient IE [Indo-European] language – I have chosen Classical Greek – can be best described. I shall try to point out that the regularities which we find in these expressions must depend on an underlying principle, represented by abstract structures. These structures are semanto-syntactic, which means that the semantic properties or bundles of properties are arranged not in a linear order but in a hierarchical order, analogous to a bracketing in a PS structure. The abstract structures we propose have, of course, a very tentative character. They can only be accepted as far as evidence for them can be furnished. 2. My second point has to do with the modal verb forms that were the object of the studies of most Indo-Europeanists. If in the innermost bracket of a semanto-syntactic structure two semantic properties or bundles of properties can be exchanged without any further change in the total structure, and if this change is correlated with a change in verbal mood forms and nothing else, then I think we are faced with a case where these forms can be said to have a meaning of their own. I shall also try to show how these meanings are to be understood as bundles of features rather than as unanalyzed terms. In my final remarks I shall try to outline the bearing these views have on comparative IE linguistics.
The aim of any Automatic Translation project is to give a mechanical procedure for finding an equivalent expression in the target language to any sentence in the source language. The aim of my linguistic translation project is to find the corresponding structures of the languages dealt with. The two main problems that have to be solved by such a project are the difference of word order between the source language and the target language and the ambiguous words of the source language for which the appropriate word in the target language has to be chosen. The first problem is of major linguistic interest: once the project has been worked out, it will give us the parallel sentence structures for the two languages in question. Since there is no complete analysis of any language that could be used for the purpose of automatic translation, we decided to build up our project sentence by sentence. The rules which are needed for translating each sentence will have to be included in the complete program anyway, and the translation may be checked and corrected immediately. The program is split up into subroutines for each word-class, so that a correction of the program in case of an unsatisfactory translation does not complicate the program unnecessarily.
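The subroutine-per-word-class organization described in this abstract can be pictured as a small dispatch table, so that a correction to the handling of one word class never touches the code for another. The following sketch is purely illustrative: the lexicon entries, word classes, and the German-to-English direction are invented, not taken from the project itself.

```python
# Toy word-for-word translator organized as one subroutine per word class,
# so that fixing, say, noun handling leaves the verb code untouched.
# Lexicon and word classes are invented German->English examples.

LEXICON = {
    "der": ("article", "the"),
    "hund": ("noun", "dog"),
    "schläft": ("verb", "sleeps"),
}

def translate_article(target):
    # Trivial here, but each subroutine can grow independently,
    # e.g. to handle agreement or word-order rules for its class.
    return target

def translate_noun(target):
    return target

def translate_verb(target):
    return target

SUBROUTINES = {
    "article": translate_article,
    "noun": translate_noun,
    "verb": translate_verb,
}

def translate(sentence):
    out = []
    for word in sentence.lower().split():
        word_class, target = LEXICON[word]
        out.append(SUBROUTINES[word_class](target))
    return " ".join(out)

print(translate("Der Hund schläft"))  # the dog sleeps
```

The design choice mirrors the abstract: translations are built and checked sentence by sentence, and an unsatisfactory result is fixed by editing only the subroutine for the offending word class.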
The speakers of the Paraná dialect of Kaingáng, from whom the data of this study were gathered, have lived in close contact with the Brazilians since before the turn of the century. Although many members of this group are still monolingual and Kaingáng is spoken in all the homes, the influence of Portuguese is making an impact on the language. This can be seen not only in isolated loan words, but it is slowly changing the time dimension of the language and the thinking of the Indians. The change seems to have come about first through loan words, but it is now also affecting the semantic structure of the language and is beginning to affect the grammatical structure as well. The study here presented deals with this change as it can be seen in relation to time expressions such as yesterday – today – tomorrow; units of time such as day – month – year; kinship terms; and finally aspect particles. In considering the time expressions the meaning of various paradigms will be discussed. The paradigms are related to the time when events took place, to sequence of events, and to the point of the action. No Brazilian influence can be observed here. In the discussion of the units of time the semantic area of these units before and after Brazilian influence will be explored. Through Brazilian influence, vocabulary has been developed with which it is possible to pinpoint events in time accurately, something that was not possible before. The time distinctions within the kinship system will be discussed, and how they change with the influence of Brazilian terms. A whole new generation distinction is added in the modified kinship system. Similarly, several new aspect particles are being created through contractions, which now contain a time element. The whole development shows an emphasis on fine distinctions in time depth which came about through the contact with Portuguese and which can be observed in several points of the structure of Kaingáng.
Certain description models used since the 1960s for the analysis of early child utterances underestimate the linguistic competence of the child by reducing the structure of its utterances to distributional phenomena of surface structure; other models overestimate this competence by attributing more linguistic information to child utterances than they contain. If extralinguistic information is included in a systematic way in the investigation of linguistic communication between child and adult, then on the one hand the fact that this communication is so astonishingly successful finds an explanation, while on the other hand this mode of description makes it possible to represent early child utterances as being as linguistically underdetermined as they in fact are.
Three quantificational approaches to the measurement of lexical descriptivity are proposed, based on: (1) whether the semantic sum of the parts of a lexeme is equal to the whole, (2) paraphrase-term and term-paraphrase congruence, and (3) the explicitness of the semantic elements of a construction. Combination of all possible values into tripartite sets and then into equipollent groups results in a system composed of 12 grades. This system was tested with a semantic domain of the Finnish lexicon: body-part terms. The descriptivity indices for each lexical item were correlated with natural divisions of the body, construction-motivation types (form, function, location), grammatical construction types (endo- and exocentric compounds, derived forms, metaphors), and loanwords. These comparisons result in a number of grade profiles whereby specific descriptivity grades are characteristically associated with one or more types of body section, construction motivation, and grammatical construction. Diachronic and synchronic evidence points overwhelmingly to a process of semantic narrowing in the development of descriptive words and labels from phrases or sentences.
Actually, the title should include intralinguistic variation along with the interlinguistic one. For variation within one and the same language is the thing which directly presents itself to the observation while it still remains to be demonstrated that phenomena in different languages can be regarded as variants to be assigned to one and the same invariant principle. There are two senses in which the terms variant and variation are used in the following remarks: one, which has just been mentioned, concerns the assignment of variants to some definite invariant. The other implies the possibility of gradient transitions and opposes the notions of discreteness and of yes-or-no. I shall not try here to reconcile these two senses and I trust that what I intend to show will become intelligible nevertheless. Henri Delacroix (1924:126f) has reformulated an old hypothesis which seems worth exploring in connection with the search for language universals: "Une langue est une variation historique sur le grand thème humain du langage." ["A language is a historical variation on the great human theme of language."] It remains to be seen what "le grand thème" or rather "les grands thèmes" are about and what particular language-specific properties could be shown to be variants of one and the same theme. One such major theme which we shall now investigate is the interrelation between, on one side, a word or a sequence of words, and, on the other, a sentence. As this for us is not only a syntactic but also a semantic problem, we might rephrase the antithesis as that between a term or sequence of terms and a proposition. Two alternative views on the nature of this interrelation seem conceivable: A. The interrelation is yes-or-no, i.e. an element or a string of elements either constitutes a term (sequence of terms) or a proposition. B. The interrelation is of gradient nature, i.e. we find intermediary stages. Both alternatives are appropriate, but under different circumstances.
Using Ultan's theory of descriptivity grading as a starting point, I will attempt to capture this differential utility in terms of [...] criteria of literalness, explicitness and syntactic complexity. I will first briefly present his system and investigate some generalizations which he has proposed on the basis of his study of body-part terminologies in numerous languages. I will apply his theory to nouns in this and four other semantic domains, in three North American Indian languages. I will test his generalizations and propose some new ones. I will then present an alternative system of descriptivity grading and compare the results of its application with those of Ultan's system. In the final section I will suggest another methodology for quantification. An appendix at the end of the paper lists all of the descriptive lexical items mentioned, graded according to both systems.
In an earlier paper, I proposed a system for evaluating the relative descriptivity of lexical items in a consistent manner in terms of the interrelations of three metrics. The first of these, including five possible degrees of descriptivity, is based on the premise that the sum of the meaningful parts of a given form is or is not equal to the meaning of the whole. The second, also composed of five degrees, is based on paraphrase-term relations in which the logical quantifiers all, some and no are applied to the terms of the paraphrase in one test and to the meaningful parts of the term (linguistic form) in the reversibility test. Both tests are applied in the form of logical propositions. The third metric, with three degrees, deals with the relative explicitness of the meaningful parts of a given form: explicit, implicit or neither. […] This system was then tested in a pilot study involving the fairly limited and semantically homogeneous lexical domain of body-part terms in a specific language, Finnish. The purpose of the present paper is to subject comparable data from other languages to the same kind of analysis and compare the results in order to ascertain whether the generalizations arrived at with the Finnish data also hold for the other languages or, more specifically, which of these generalizations are more or less universal and which are language- or language-type-specific. The additional languages to be examined here are: French, German, Ewe, Maasai and Swahili.
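The three metrics described above can be pictured as a simple data structure. The degree counts follow the abstract (five, five, and three), but everything else here, including the field names and the validation, is an illustrative guess; the mapping of such profiles onto the 12 grades of the full system is not reproduced.

```python
from dataclasses import dataclass

@dataclass
class DescriptivityProfile:
    # Degrees follow the abstract: the first two metrics have five
    # degrees each, the explicitness metric has three. Field names
    # are invented for illustration.
    part_whole: int    # 1..5: sum of the parts vs. meaning of the whole
    paraphrase: int    # 1..5: paraphrase-term / term-paraphrase congruence
    explicitness: int  # 1..3: explicit, implicit, or neither

    def __post_init__(self):
        if not (1 <= self.part_whole <= 5
                and 1 <= self.paraphrase <= 5
                and 1 <= self.explicitness <= 3):
            raise ValueError("degree out of range")

    def as_tuple(self):
        # A profile is just the ordered triple of degrees.
        return (self.part_whole, self.paraphrase, self.explicitness)

profile = DescriptivityProfile(part_whole=2, paraphrase=3, explicitness=1)
print(profile.as_tuple())  # (2, 3, 1)
```

Comparing such triples across the lexical items of a domain, and then across languages, is the kind of operation the cross-linguistic test in this paper performs.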
The basic idea I want to develop and to substantiate in this paper consists in replacing – where necessary – the traditional concept of linguistic category or linguistic relation understood as 'things', as reified hypostases, by the more dynamic concept of dimension. A dimension of language structure is not coterminous with one single category or relation but, instead, accommodates several of them. It corresponds to certain well circumscribed purposive functions of linguistic activity as well as to certain definite principles and techniques for satisfying these functions. The true universals of language are represented by these dimensions, principles, and techniques which constitute the true basis for non-historical inter-language comparison. The categories and relations used in grammar are condensations – hypostases as it were – of such dimensions, principles, and techniques. Elsewhere I have outlined the theory which I want to test here in a case study.
These notes grew out of my preoccupation with writing a grammar of a particular language, Cahuilla, which is spoken in Southern California and belongs to the Uto-Aztecan family. [...] The Introduction to the Grammar as a whole – of which two sections are reproduced here in a modified version – tries to integrate the synoptic views of the different chapters into a series of comprehensive statements. The statements cluster around two topics: 1. A presentation of Cahuilla as a type of language. 2. Remarks on writing a grammar.
In my paper "Thesen zum Universalienprojekt" (1976) I mention two complementary procedures for discovering language universals: 1. The investigation of the dimensions and principles whose existence is necessitated by the communicative function of language; 2. The development of a formal language in which all syntactic rules are explicitly formulated and in which all syntactic categories are defined by their relation to a minimally necessary number of syntactic categories. Since the first procedure is treated in many of the other papers of this volume, I wish to discuss the role of formal methods in the research of language universals. As an example I want to take the dimensions of determination and show how expressions denoting concepts are modified and turned into reference identifying expressions. There is a general and a specific motivation for the introduction of formal methods into linguistics. The general motivation is to make statements in linguistics as exact and verifiable as they are in the natural sciences. The specific motivation is to make the grammars of various languages comparable by describing them with the same form of rules. The form has to be flexible enough to describe the phenomena of any possible natural language. All natural languages have in common that they may potentially express any meaning. The flexibility of the form of grammatical rules may therefore be attained, if syntactic rules are not isolated from the semantic function they express and syntactic classes are not defined merely by the relative position of their elements in the sentence, but also by the communicative function their elements fulfill in their combination with elements of other classes.
Montague (1974) has shown that this flexibility may be attained by using the language of algebra combined with categorial grammar. Algebraic systems have been developed by mathematicians to model any systems whose operations are definable. Montague does not merely use the tools of mathematics for describing the features of language, but regards syntax, semantics and pragmatics as branches of mathematics. One of the advantages of this approach is that we may apply the laws developed by mathematicians to the systems constructed by linguists for the description and explanation of natural language.
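The core mechanism of categorial grammar that the Montague approach builds on can be sketched in a few lines: a functor category combines with an argument category, and the argument cancels out. The encoding below (tuples for slashed categories, the example lexical categories) is an invented illustration, not Montague's own formalism.

```python
# Toy categorial-grammar cancellation: a functor category X/Y or X\Y
# combines with an argument of category Y and yields X.
# Category labels are invented for illustration.

def apply_right(functor, arg):
    # A functor (X, "/", Y) consumes an argument of category Y on its right.
    x, slash, y = functor
    if slash == "/" and y == arg:
        return x
    return None  # categories do not combine

def apply_left(arg, functor):
    # A functor (X, "\\", Y) consumes an argument of category Y on its left.
    x, slash, y = functor
    if slash == "\\" and y == arg:
        return x
    return None

# "John sleeps": the subject NP combines with the intransitive
# verb category S\NP to yield a sentence S.
NP = "NP"
IV = ("S", "\\", "NP")
print(apply_left(NP, IV))  # S
```

The algebraic flavor of the approach lies exactly here: syntactic combination is an operation on categories, and each syntactic operation can be paired with a semantic operation over the corresponding meanings.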
One of the striking features in modern Newari noun phrases is the wide usage of a set of affixes found in combination with the various elements that may expand a noun into an endocentric construction. At first sight such affixation would appear as a linking device by which the subordinate constituents of a noun phrase are tied to their head noun. Closer investigation, however, reveals a more complex picture which I have attempted to outline in the following paragraphs. The results of this inspection lead to the conclusion that the pattern of affixation displayed in Newari mirrors the close interaction of two converse functional principles: both the syntagmatic function of nominal determination on the one hand and a paradigmatic function – the formation of certain types of lexicalized expressions in Newari – formally tie in with each other by the application of one common technique.
The language of the Cahuillas shows two systems of expressions referring to kinship, which could be termed, respectively, as labeling-relational and as descriptive-establishing. […] Descriptive terms show two properties: 1. They are analysable into constituent elements so as to recognize the connection between the term and the proposition. 2. They are distinguishable from the proposition: a. by a special formal element […], in Cahuilla the absolutive suffix. b. by a narrowing or specialization in the meaning. A term which is not descriptive, i.e. which is not connected with a proposition, I shall call "label", "labeling": It does not say anything about the object but is assigned to it just as a label is attached to a thing […].
Studies of syntax in first language acquisition have so far concentrated on the propositional side of the sentence, i.e. on the occurrence and interplay of semantic roles like agent, benefactive, objective, etc. and their syntactic expression. The modality constituent, however, has received little attention in the study of child language. This may be due in part to the impetus more recent research in this field has received from studies of the acquisition of English, a language with poor verb morphology as compared to synthetic languages. The research to be presented in this paper is concerned with an early stage of the acquisition of Modern Greek as a first language, a language with a particularly rich verb morphology. Since modality, aspect, and tense are obligatorily marked on the main verb in Mod. Greek, this language offers an excellent opportunity for studying the development of these fundamental categories of verbal grammar at an earlier stage than in more analytic languages. [...] As this paper is concerned with the semantic categories of verbal grammar mentioned above as well as with their formal expression, only utterances containing a verb will be considered. For reasons of space we shall further limit ourselves to those utterances containing a main verb. Such utterances divide into two classes, modal and non-modal. [...] In spite of Calbert's claim (Calbert 1975) that there are no strictly non-modal expressions, affirmative and negative statements as well as questions not containing a modal verb will be considered as non-modal. As will be shown below, modal and non-modal expressions are formally differentiated at the stage of language acquisition studied.
I shall use the precise term 'interlinear morphemic translation' (IMT) to designate the object of this study. [...] An IMT is a translation of a text in a language L1 to a string of elements taken from L2 where, ideally, each morpheme of the L1 text is rendered by a morpheme of L2 or a configuration of symbols representing its meaning and where the sequence of the units of the translation corresponds to the sequence of the morphemes which they render. [...] An IMT is needed whenever it is essential that the reader grasp the grammatical structure of the L1 text but is presumed to be so unfamiliar with L1 that he will not be able to do so merely with the aid of a normal translation and the context in which the text is cited. [...] The primary aim of an IMT is to make the grammatical structure of the L1 text transparent. The textual fluency of the IMT by standards of the L2 grammar is a subordinate aim at best.
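The ideal of a one-to-one, order-preserving morpheme rendering can be sketched as a two-line alignment. The segmented example and its gloss labels below are invented for illustration and follow no particular glossing convention.

```python
# Minimal interlinear morphemic translation (IMT): each L1 morpheme is
# rendered by an L2 gloss in the same order, and the two lines are padded
# so that each morpheme sits directly above its gloss.
# The morpheme segmentation and glosses are invented examples.

GLOSSES = {
    "can": "sing",
    "ta": "PRS.3",
    "t": "SG",
}

def imt(morphemes):
    glosses = [GLOSSES[m] for m in morphemes]
    # Column width: the wider of morpheme and gloss, so columns line up.
    widths = [max(len(m), len(g)) for m, g in zip(morphemes, glosses)]
    top = "  ".join(m.ljust(w) for m, w in zip(morphemes, widths))
    bottom = "  ".join(g.ljust(w) for g, w in zip(glosses, widths))
    return top.rstrip() + "\n" + bottom.rstrip()

print(imt(["can", "ta", "t"]))
```

Note that fluency of the gloss line is deliberately sacrificed: the output is a structural key to the L1 text, exactly as the abstract describes, not a readable L2 sentence.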
In my Cahuilla Grammar (Seiler 1977:276-282) and in a subsequent paper (Seiler 1980:229-236) I have drawn attention to the fact that many kin terms in this language, especially those that have a corresponding reciprocal term in the ascending direction – like niece or nephew in relation to aunt – occur in two expressions of quite different morphological shape. The following remarks are intended to furnish an explanation of this apparent duplicity.
In this study I want to show, above all, that the linguistic expression of POSSESSION is not a given but represents a problem to be solved by the human mind. We must recognize from the outset that linguistic POSSESSION presupposes conceptual or notional POSSESSION, and I shall say more about the latter in Chapter 3. Certain varieties of linguistic structures in the particular languages are united by the fact that they serve the common purpose of expressing notional POSSESSION. But this cannot be their sole common denominator. How would we otherwise be able to recognize, to understand, to learn and to translate a particular linguistic structure as representing POSSESSION? There must be a properly linguistic common denominator, an invariant, that makes this possible. The invariant must be present both within a particular language and in cross-language comparison. What is the nature of such an invariant? As I intend to show, it consists in operational programs and functional principles corresponding to the purpose of expressing notional POSSESSION. The structures of possessivity which we find in the languages of the world represent the traces of these operations, and from the traces it becomes possible to reconstruct stepwise the operations and functions.
Possessive constructions are grammatical constructions which contain two nominals and express that the referent of one of these nominals belongs to the other. The kind of relationship denoted by possessive constructions is not only that of ownership (1), as the term "possessive" might suggest, but also that of kinship (2), body-part relationship (3), part/whole relationship (4) and similar relationships [...]. The following investigation will start with possessive constructions on phrase level, i.e. possessive phrases, and then deal with possessive constructions on clause level.
At the end of last year, I designed an inquiry about the present state of linguistic typology in the form of a questionnaire. It was an attempt to cover the whole field by formulating the questions which seemed most relevant to it. This questionnaire is reproduced, without modifications, following this preface. In the first days of this year, it was sent to 33 linguists who I know are working in the field. The purpose was to form, on the basis of responses received, a picture of convergences and divergences among trends of present-day linguistic typology. The idea was also to get an objective basis for my report on "The present state of linguistic typology", to be delivered at the XIII. International Congress of Linguistics at Tokyo, 1982.
The approach outlined in the present paper is based on observations made with African languages. Although the 1000-odd African languages display a remarkable extent of structural variation, there are certain structures that do not seem to occur in Africa. Thus, to our knowledge, an African language having anything that could be called an ergative case or a numeral classifier system has not been discovered so far. It may turn out that our approach can, in a modified form, be made applicable to languages outside Africa. This, however, is a possibility that has not been considered here. The present approach is based essentially on diachronic findings in that it uses observations on language evolution in order to account for structural differences between languages. Thus, it has double potential: apart from describing and explaining typological diversity it can also provide material for reconstructing language history.
The basic question is whether POSSESSOR and POSSESSUM are on the same level as the roles of VALENCE, two additional roles as it were. My research on POSSESSION has shown (Seiler 1981:7 ff.) that this is not the case, that there is a difference in principle between POSSESSION and VALENCE. However, there are multiple interactions between the two domains, and these interactions shall constitute the object of the following inquiry. It is hoped that this will contribute to a better understanding both of POSSESSION and of VALENCE.
According to the present state of research, there seems to be no language which shows possessive classifiers and possessive verbs corresponding to English "to have" at the same time. In classifier languages predicative possession is expressed by verbless clauses, i.e. by existential clauses ("there is my possessed item"), equative clauses ("the possessed item is mine", "that is my possessed item") or by locative expressions ("the possessed item is near me"), in which the classifier in the case of non-inherent possession marks the nature of the relationship. While most Melanesian languages, as for instance Fijian, Lenakel, Pala and Tolai, are classifier languages, Nguna, a Melanesian language spoken in Vanuatu, only shows traces of the Melanesian possessive classifier system, but, in contrast to the other Melanesian languages, it has a possessive verb, namely 'peani' "to have". In order to show how the Nguna possessive constructions deviate from the common Melanesian type, we shall start with a brief description of the Melanesian possessive constructions in general, and that of Fijian in particular.
Defined as a general inner-linguistic function, modality pervades language and there can thus be no strictly nonmodal predicative expressions. We shall, however, in what follows, keep to grammatical tradition and exclude declarative and interrogative sentences in the indicative mood from consideration. Although a thorough study of the development of modal negation should prove most rewarding, we must renounce such an attempt for reasons of space. […] [W]e shall be concerned with the formal linguistic devices employed by the child for expressing modality in various languages and the functions these serve, i.e. how they are used. Only by the conjoint study of form and function can one hope to arrive at a fair understanding of how the modalizing function develops in the ontogenesis of language.
Analysis of Lambda and associative pion production in relativistic nucleus-nucleus collisions
(1984)
The present paper is an attempt to describe a particular semantic domain in Thai, that of local relations, in terms of a gradual interconnection of what traditional descriptions usually regard as distinct and isolated categories. It is based on the well-known observation that isolating languages like Thai typically display a high degree of 'multifunctionality', or else of syntactic 'versatility' of very many lexical items. […] The semantic area studied in the following pages yields a clear systematic interconnection of three different categories, viz. that of nouns – as the focal instance of maximum syntactic independence –, that of verbs – as, conversely, the focal instance of maximally relational concepts –, and, as an intermediary category between these two, that of prepositions which the system lexically feeds from both these opposite ends. The examples given in the course of this paper have been obtained from published grammatical literature, from Thai texts, and from informants.
Grammatical relations, particularly the notions of transitivity, case marking, ergativity, passive and antipassive have been a favourite subject of typological research during the last decade, but surprisingly, the notion of valency has been of marginal interest in cross-linguistic studies, though the syntactic and semantic status of participants is, to a great extent, determined by the relational properties of the verb. Valency is the property of the verb which determines the obligatory and optional number of its participants, their morphosyntactic form, their semantic class membership (e.g. ± animate, ± human), and their semantic role (e.g. agent, patient, recipient). The valency inherently gives information on the nature of the semantic and syntactic relations that hold between the verb and its participants. If a verb is combined with more participants than allowed or fewer than required, or if the participants do not show the required morphosyntactic form or class membership, the clause is ungrammatical. In other words, it is not sufficient to consider only the number of actants as a matter of valency; an account is only acceptable if all semantic and morphosyntactic properties of the relation between a verb and its participants that are predictable from the verb are included. The predictability of these properties results from their inherent givenness, and it does not seem reasonable to count some inherently given relational properties as a matter of valency, but not others (compare Helbig (1971:38f) and Heidolph et al. (1981:479) who distinguish between the quantitative, syntactic and semantic aspect of valency).
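The ungrammaticality condition stated above, that a clause fails when obligatory participants are missing or disallowed ones are added, can be sketched as a simple frame check. The verb frames and role labels below are invented illustrations; a full model would also have to check morphosyntactic form and semantic class membership, which this sketch omits.

```python
# Toy valency check: each verb's frame lists its obligatory and optional
# participant roles. A clause is well-formed only if every obligatory role
# is filled and no role outside the frame appears.
# Frames and role labels are invented for illustration.

FRAMES = {
    "give": {"obligatory": {"agent", "patient", "recipient"},
             "optional": set()},
    "sleep": {"obligatory": {"agent"},
              "optional": set()},
}

def well_formed(verb, roles):
    frame = FRAMES[verb]
    filled = set(roles)
    allowed = frame["obligatory"] | frame["optional"]
    # Fewer roles than required, or roles the verb does not allow,
    # both make the clause ungrammatical.
    return frame["obligatory"] <= filled and filled <= allowed

print(well_formed("give", ["agent", "patient", "recipient"]))  # True
print(well_formed("sleep", ["agent", "patient"]))              # False
```

The two failure modes of the check correspond directly to the abstract's "more participants than allowed" and "fewer than required".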
Ergativity in Samoan
(1985)
Most typological and language specific studies on so-called ergative languages are concerned with case marking patterns, particularly split ergativity, with the organization of syntactic relations as defined by syntactic operations such as coreferential deletion across coordinate conjunctions, Equi-NP-deletion and relativization, and with the notion of subject, but usually neglect the notion of valency, though the inherent relational properties of the verb, i.e. valency, play a fundamental role in the syntactic organization of sentences in ergative as well as in other languages. The following investigation of ergativity in Samoan aims to integrate the notion of valency into the description of semantic and syntactic relations and to outline the characteristic features of Samoan verbal clauses as far as they seem to be relevant to recent and still ongoing discussions on linguistic typology and syntactic theory. The main points of the definition of valency […] are: Valency is the property of the verb which determines the obligatory and optional number of its participants, their morphosyntactic form, their semantic class membership (e.g. ± animate, ± human), and their semantic role (e.g. agent, patient, recipient). All semantic properties and morphosyntactic properties of participants not inherently given by the verb and therefore not predictable from the verb, are not a matter of valency. Valency is not a homogeneous property of the verb, but consists of several exponents which show varying degrees of relevance in different languages or different verb classes within a single language.
As a traditional notion of fundamental importance in linguistics and philosophy (logic), "predication" is fraught with controversial issues. It is thus difficult to delimit the scope of this paper without becoming involved in some major issue. The following distinctions seem to me to be plausible on an intuitive basis. Evidence for why they are useful and legitimate will be found in the body of the paper. The discussion will focus on morphosyntactic predication […].
This paper is concerned with anticausative verbs (or verb-forms), or anticausatives for short. [...] [C]ausative/non-causative pairs with a marked non-causative are quite frequent in the languages of the world. However, so far they have not received sufficient attention in general and typological linguistics, a fact which is also manifested in the absence of a generally recognized term for this phenomenon […]. This paper therefore deals with the most important properties of anticausatives (particularly semantic conditions on them), their relationship to other areas of grammar as well as their historical development in different languages. The grammatical domain of transitivity, valence and voice, where the anticausative belongs, takes up a central position in grammar and consequently the present discussion should be of considerable interest to general comparative (or typological) linguists.
It is the aim of this paper to present and elaborate a new solution to the old syntactic problems connected with the Latin gerundive and gerund, two verbal categories which have been interpreted variously either as adjective (or participle) or noun (or infinitive). These questions have been much discussed for quite a number of years […] but for the most part from a philological or purely diachronic point of view. All these linguists try to explain the peculiarities of these categories and their syntax by showing that the gerund is historically prior to the gerundive. [...] It is our thesis […] that in order to arrive at a unified account of gerundive and gerund we do not have to go back to prehistoric times. Even for the classical language gerund and gerundive represent the same category, in the sense that the gerund can be shown to be a special case of the gerundive. Additional evidence from a parallel construction in Hindi is adduced to make the Latin facts more plausible. It is only in the post-classical language that certain tendencies which had shown up already in Old Latin poetry become stronger and finally lead to a reanalysis of the gerundive and a split into two distinct syntactic constructions. The propositional meaning of the gerundive in its attributive use is explained with reference to a conflict between syntactic and cognitive principles. Special constructions which are the effects of such conflicts can be found in other parts of grammar. Languages differ with respect to the degree of syntacticization (or conventionalization) of these special constructions.
The present article is a crosslinguistic discussion of the distinction between a word class of nouns and a word class of verbs in the UNITYP framework of the dimension of PARTICIPATION (for a first overall sketch of PARTICIPATION see Seiler 1984). According to this framework the noun/verb-distinction (henceforth N/V-D) must be regarded as a gradable, continuous phenomenon ranging from the stage of a clear-cut distinction with no overlap to almost a non-distinction. Although there is no question that most, if not all, languages do differentiate between nouns and verbs, it is also quite apparent that the languages do so to a different degree and by different means, and that it only makes sense to use the terms "noun" and "verb" in different languages when one actually has a common functional denominator in mind (see below). After a general introduction to the notion of a noun/verb-continuum (chapter 1) the reader will be presented with a survey of languages as diverse as German, English, Russian, Hebrew, Turkish, Salish, and Tongan (see chapter 2) in support of the continuum hypothesis. In chapter 3 the facts are coordinated in an overall pattern of regularities underlying the increase or decrease of categorical restrictions between the respective word classes. Also, chapter 3 raises the issue of to what degree a N/V-D can be considered a matter of certain lexemes or a matter of the morphosyntactic environment of certain lexical units. Lastly, we shall seek an answer to the question why it is not a necessary requirement for languages to draw a sharp distinction between a word class of nouns and a word class of verbs.
The aim of this contribution is to embed the question of an antinomy between "integral" vs. "partial typology", inscribed as the topic of this plenary session, into the comprehensive framework of the dimensional model of the research group on language universals and typology (UNITYP). In this introductory section I shall evoke some cardinal points in the theory of linguistic typology, as viewed "from outside", viz. on the basis of striking parallelisms with psychological typology. Section 2 will permit a brief look at the dimensional model of UNITYP. In section 3 I shall present an illustration of a typological treatment on the basis of one particular dimension. In section 4 I shall draw some conclusions with special reference to the "integral vs. partial" antinomy.
This is a survey of the development of the model of PARTICIPATION (P'ATION) with reference to the postulated sequence of the techniques on the dimension of P'ATION. Along with a brief explanation of the techniques this article contains a discussion of the major claims with regard to the sequence of the techniques and the possibilities of subjecting the claims to empirical verification.
Performance and storage requirements of topology-conserving maps for robot manipulator control
(1989)
A new programming paradigm for the control of a robot manipulator by learning the mapping between the Cartesian space and the joint space (inverse kinematics) is discussed. It is based on Kohonen's neural network model of optimal mapping between two high-dimensional spaces. This paper describes the approach and presents the optimal mapping, based on the principle of maximal information gain. It is shown that Kohonen's mapping in the 2-dimensional case is optimal in this sense. Furthermore, the principal control error made by the learned mapping is evaluated for the example of the commonly used PUMA robot, the trade-off between storage resources and positional error is discussed, and an optimal position-encoding resolution is proposed.
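The topology-conserving map underlying this approach can be illustrated with a minimal sketch, assuming a small 2-D lattice, a decaying learning rate, and a Gaussian neighborhood schedule; all parameter choices below are illustrative assumptions, not values from the paper.

```python
# Minimal Kohonen self-organizing map: learns a topology-conserving
# mapping of the unit square, of the kind used as a lookup table for
# robot inverse kinematics. Grid size and schedules are assumptions.
import numpy as np

rng = np.random.default_rng(0)
GRID = 10                                  # 10x10 lattice of neurons
w = rng.random((GRID, GRID, 2))            # weight vectors in input space

def train(samples, epochs=20, lr0=0.5, sigma0=3.0):
    ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)        # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)
        for x in samples:
            d = np.linalg.norm(w - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)  # winner unit
            # Gaussian neighborhood on the lattice around the winner
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            w[:] += lr * h[..., None] * (x - w)  # pull neighborhood toward x

train(rng.random((500, 2)))
```

Each training step pulls the winning unit and its lattice neighbors toward the input, so neighboring units come to represent neighboring regions of the workspace; a controller can then store one set of joint angles per unit, which is where the storage-versus-error trade-off mentioned in the abstract arises.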
The Stanford Project on Language Universals began its activities in October 1967 and brought them to an end in August 1976. Its directors were Joseph H. Greenberg and Charles A. Ferguson. The Cologne Project on Language Universals and Typology [with particular reference to functional aspects], abbreviated UNITYP, had its early beginnings in 1972, but deployed its full activities from 1976 onwards and is still operating. This writer, who is the principal investigator, had the privilege of collaborating with the Stanford Project during spring of 1976. […] One of the leading Greenbergian ideas, that of implicational generalizations, has been integrated as a fundamental principle in the construction of continua and of universal dimensions as proposed by UNITYP. It is hoped that the following considerations on numeral systems will be apt to bear witness to this situation. They would be unthinkable without Greenberg's pioneering work on "Generalizations about numeral systems" (Greenberg 1978: 249 ff., henceforth referred to as Greenberg, NS). Further work on this domain and on other comparable domains almost inevitably leads one to the view that generalizations of the Greenberg type have a functional significance and that a dimensional framework is apt to bring this to the fore. This is the view of linguistic behaviour as purposeful, and of language as a problem-solving device. The problem consists in the linguistic representation of cognitive-conceptual ideas. The solution is represented by the corresponding linguistic structures in their diversity, and the task of the linguist consists in reconstructing the program and subprograms underlying the process of problem-solving. It is claimed that the construct of continua and of universal dimensions makes these programs intelligible.
The human mind may produce prototypization within virtually any realm of cognition and behavior. A "comparative prototype-typology" might prove to be an interesting field of study – perhaps a new subfield of semiotics. This, however, would presuppose a clear view on the samenesses and differences of prototypization in these various fields. It seems realistic for the time being that the linguist first confine himself to describing prototypization within the realm of language proper. The literature on prototypes has steadily grown in the past ten years or so. I confine myself to mentioning the volume on Noun Classes and Categorization, edited by C. Craig (1986), which contains a wealth of factual information on the subject, along with some theoretical vistas. By and large, however, linguistic prototype research is still basically in a taxonomic stage – which, of course, represents the precondition for moving beyond. The procedure is largely per ostensionem, and by accumulating examples of prototypes. We still lack a comprehensive prototype theory. The following pages are intended not to provide such a theory, but to take the first steps in this direction. Section 2 will feature some elements of a functional theory of prototypes. They have been developed by this author within the frame of the UNITYP model of research on language universals and typology. Section 3 will present a discussion of prototypization with regard to selected phenomena from a wide range of levels of analysis: phonology, morphosyntax, speech acts, and the lexicon. Prototypization will finally be studied within one of the universal dimensions, that of APPREHENSION – the linguistic representation of the concepts of objects – as proposed by Seiler (1986).
The most macabre of the numerous anthropomorphic metaphors linguists provide for their subject matter is that of language death. The extinction of a language is in fact a distressing matter, because the cultural tradition connected to it and the sociocultural or even ethnic independence of the group that speaks it very often perish together with it. Yet it is a very common phenomenon. [...] It would seem strange that such a frequent and well-known phenomenon has not been studied much earlier; nevertheless it is a fact that the investigation of language death is a new and developing field, which emerged as something like an independent subdiscipline of linguistics towards the end of the seventies. This comparatively embryonic stage of the field should be kept in mind throughout the following discussion.
Why should we engage in language universals research and language typology? What do we want to explain? It is a fact that, although languages differ significantly and considerably, no one would deny that they have something in common; how else could they be labelled 'language'? There is obviously unity among them, no matter how vaguely felt and for what reasons: scientific, practical, moral, etc. Neither diversity per se nor unity per se is what we want to explain. There is no reason whatsoever to consider either one of them as primary and the other as derived. What we do want to explain is "equivalence in difference" – cf. our motto – which manifests itself, among other things, in the translatability from one language to another, the learnability of any language, and language change – which all presuppose that speakers intuitively find their way from diversity to unity. This is a highly salient property which deserves to be brought into our consciousness. Generally, then, our basic goal is to explain the way in which language-specific facts are connected with a unitarian concept of language – "die Sprache" – "le langage".
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture the non-trivial task remains of determining the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) by the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
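The trade-off the paper analyzes can be illustrated numerically with a minimal sketch: a piecewise-linear approximation of the quadratic f(x) = x², where both the number of linear segments ("neurons") and the number of bits per stored weight limit the achievable accuracy. The segment counts, bit widths, and quantization range below are illustrative assumptions, not values from the paper.

```python
# Sketch: approximation error as a function of two "information"
# parameters -- number of linear segments (neurons) and bits per
# stored weight -- for approximating f(x) = x**2 on [0, 1].
import numpy as np

def quantize(v, bits, lo=-2.0, hi=2.0):
    """Round v to one of 2**bits uniformly spaced levels in [lo, hi]."""
    levels = 2 ** bits - 1
    return lo + np.round((np.clip(v, lo, hi) - lo) / (hi - lo) * levels) / levels * (hi - lo)

def approx_error(n_segments, bits, n_test=1000):
    xs = np.linspace(0.0, 1.0, n_test)
    edges = np.linspace(0.0, 1.0, n_segments + 1)
    err = np.zeros(n_test)
    for a, b in zip(edges[:-1], edges[1:]):
        # exact chord of x**2 on [a, b] has slope a+b, intercept -a*b;
        # both coefficients are then stored with limited precision
        slope = quantize(a + b, bits)
        inter = quantize(-a * b, bits)
        m = (xs >= a) & (xs <= b)
        err[m] = np.abs(xs[m] ** 2 - (slope * xs[m] + inter))
    return err.max()

# More segments or more bits per weight both reduce the error,
# so a fixed information budget can be split between them.
for n, b in [(2, 8), (8, 8), (8, 3)]:
    print(n, b, round(approx_error(n, b), 4))
```

Doubling the number of segments quarters the chord error, while each extra bit halves the weight-storage error; for a fixed total information budget there is therefore an optimal split between the two, which is the kind of condition a principle of optimal information distribution would derive.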
Grammatical relations – in particular the relation 'subject of' – and voice are of central concern to any theory of universal grammar. With respect to these phenomena the analysis of Tagalog (and the Philippine languages in general) has turned out to be particularly difficult and continues to be a matter of debate. What traditionally has been called passive voice in these languages […] appears to be so different from voice phenomena in the more familiar Indo-European languages that the term 'focus' was introduced in the late 1950s to underscore its 'exceptional' nature [...]. Furthermore, […] an inflationary use has been made of the term 'ergative' in the last decade; it can thus no longer be assumed that it has an unequivocal and specific meaning in typologizing languages, apart from the technical definition it might be given within a particular framework. But if the Philippine 'focus' constructions are neither passive nor ergative, how else can they be analysed? [...] In this paper a case will be made for the claim that 'focus' marking should be analysed in terms of orientation, a concept used […] for capturing the difference between English (and, more generally, Indo-European) orientated nominalisations such as 'employ-er' or 'employ-ee', and unorientated nominalisations such as 'employ-ing'. This approach implies that 'focus' marking is derivational rather than inflectional, as often presumed in the literature. This is to say that what is typologically conspicuous in Tagalog is not the 'focus' phenomenon per se, since this is very similar to orientated nominalisations in many other languages, but rather the very prominent use of orientated formations (i.e., derivational morphology) in basic clause structure.
Oppositeness, i.e. the relation between opposites or contraries or contradictories, has a fundamental role in human cognition. In the various domains of intellectual and psychological activity we find ordering schemas that are based, in one way or another, on the cognitive figure of oppositeness. It is therefore not surprising that the figure and its corresponding ordering schemas show their reflexes in the languages of the world. [...] We shall be dealing with oppositeness in the sense that a linguistically untrained native speaker, when asked what would be the opposite of 'long', can come up with some such answer as 'short', and likewise intuitively grasp the relation between 'man' and 'woman', 'come' and 'go', 'up' and 'down', etc. Thinking that much of the vocabulary of a language is organized in such opposite pairs we must recognize that this is an important faculty, and we are curious to know how this is done, what are the underlying conceptual-cognitive structures and processes, and how they are encoded in the languages of the world. We shall leave out of consideration such oppositions as singular vs. plural, present vs. past, voiced vs. unvoiced, oppositions that the linguist states by means of a metalanguage which is itself derived from a concept of oppositeness as manifested by the examples which I gave earlier. Our approach will connect with earlier versions of the UNITYP framework. However, as a novel feature, and, hopefully, as an improvement, we shall apply some sort of a division of labor. We shall first try to reconstruct the conceptual-cognitive content of oppositeness and to keep it separate from the discussion of its reflexes in the individual languages. We shall find that a dimensional ordering of content in PARAMETERS and a continuum of TECHNIQUES is possible already on the conceptual-cognitive level. In order to keep it distinct from the level of linguistic encoding we shall use a separate terminology, graphically marked by capital letters.
The corporate governance systems in the U.K. and in Germany differ markedly. Large German firms have a two-board structure, they are subject to employee codetermination, their managements are not confronted with public hostile takeover bids, and banks play a major role in corporate governance, through equity stakes, through proxies given to them by small investors, and through bankers' positions on the supervisory boards of these firms. One of the main issues of corporate governance in large firms, the problem of shareholder passivity in monitoring management in Berle-Means type corporations, is thus addressed by an institutional provision, the role of the banks, rather than by a market-oriented solution as we find it in the U.K. with its market for corporate control through the threat of hostile takeovers. These two different approaches to corporate governance have been compared several times recently, and it has been argued that a bank-based or institutional solution has clear advantages and should be preferred. Cosh, Hughes and Singh, for example, argue at the conclusion of their discussion of takeovers and short-termism in the U.K. that the institutional shareholder [in the UK] should take a much more active and vigorous part in the internal governance of corporations. . . . In order for such a proposal to be effective both in disciplining inefficient managements and promoting long-term investments, far-reaching changes in the internal workings and behaviour of the financial institutions would be required. The financial institutions would need to pool their resources together, set up specialised departments for promoting investment and innovations – in other words behave like German banks. The following remarks seek to continue this discussion from the German perspective.
The article will first attempt to evaluate the monitoring potential of our domestic bank- or institution-oriented corporate governance system and then, in a further part, compare it with that of a market-oriented solution. It will be argued that both systems focus on different problems and have specific advantages and drawbacks, and that there are still quite a few puzzles to be solved until all pros and cons of each of these monitoring devices can be assessed. The perception that both systems focus on different problems suggests combining institutional monitoring with a market for corporate control rather than considering them to be contrasting and incompatible approaches. The article is organized as follows. Section II will describe the legal structure of the large corporation in Germany in more detail. Section III explains why a market for corporate control by the threat of public hostile takeover bids does not exist in Germany. Section IV then shows how corporate governance in publicly held corporations with small investors is organized instead, and deals with the role of banks in corporate governance in these firms. Section V of the article will then try to compare the monitoring potential of a market-oriented and our bank- or institution-oriented corporate governance system. Concluding remarks follow.
Unlike in Belgium, German banks may hold even controlling equity participations in industrial firms (and such firms may own banks), and do so to a large extent. Vis-à-vis the European development this leads to two questions: from the perspective of the (Belgian and other) competitors of these banks, whether their own domestic system might be disadvantageous to them; and from a public interest perspective, which advantages and drawbacks are connected with the different regulations in Europe. The article first provides information on the legal framework and some statistical facts. Then the various and different reasons why banks acquire and hold shares on their own account are analyzed. The following parts deal with the various public policy arguments as to whether equity links between banks and industrial firms should be prohibited or not (safety and soundness of banking; autonomie de la fonction bancaire; abuse of confidential information and conflicts of interest; antitrust considerations; negative and positive impacts on the respective firm). In its last part the article deals with recent proposals in the German political debate to limit stockholdings of banks. The article argues that a step-by-step approach to the single problems and issues (conflict of interests; anticompetitive effects, etc.) should be preferred to a general limitation of stock ownership by banks.
This paper is concerned with developing Joan Bybee's proposals regarding the nature of grammatical meaning and synthesizing them with Paul Hopper's concept of grammar as emergent. The basic question is this: How much of grammar may be modeled in terms of grammaticalization? In contradistinction to Heine, Claudi & Hünnemeyer (1991), who propose a fairly broad and unconstrained framework for grammaticalization, we try to present a fairly specific and constrained theory of grammaticalization in order to get a more precise idea of the potential and the problems of this approach. Thus, while Heine et al. (1991:25) expand – without discussion – the traditional notion of grammaticalization to the clause level, and even include non-segmental structure (such as word order), we will here adhere to a strictly 'element-bound' view of grammaticalization: where no grammaticalized element exists, there is no grammaticalization. Despite this fairly restricted concept of grammaticalization, we will attempt to corroborate the claim that essential aspects of grammar may be understood and modeled in terms of grammaticalization. The approach is essentially theoretical (practical applications will, hopefully, follow soon) and many issues are just mentioned and not discussed in detail. The paper presupposes a familiarity with the basic facts of grammaticalization and it does not present any new facts.
Remarks on deixis
(1992)
The prevailing conception of deixis is oriented to the idea of 'concrete' physical and perceptual characteristics of the situation of speech. Signs standardly adduced as typical deictics are I, you, here, now, this, that. I and you are defined as meaning "the person producing the utterance in question" and "the person spoken to", here and now as meaning "where the speaker is at utterance time" and "at the moment the utterance is made" (also, "at the place/time of the speech exchange"); similarly, the meanings of this and that are as a rule defined via proximity to speaker's physical location. The elements used in such definitions form the conceptual framework of most of the general characterisations of deixis in the literature. [...] There is much in the literature, of course, that goes far beyond this framework. A great variety of elements, mostly with very abstract meanings, have been found to share deictic characteristics although they do not fit into the personnel-place-time-of-utterance schema. The adequacy of that schema is also called into question by many observations to the effect that the use of such standard deictics as here, now, this, that cannot really be accounted for on its basis, and by the far-reaching possibilities of orienting deictics to reference points in situations other than the situation of speech, to 'deictic centers' other than the speaker. [...] Analyses along the lines of the standard conception regularly acknowledge the existence of deviations from the assumed basic meanings. One traditional solution attributes them to speaker's "subjectivity", or to differences between "physical" and "psychological" space or time; in a similar vein, metaphorical extensions may be said to be at play, or a distinction between prototypical and non-prototypical meanings invoked.
Quite apart from the question of the relative merits of these explanatory principles, which I do not wish to discuss here, the problem with all such accounts is that the definitions of the assumed basic meanings themselves are founded on axiom rather than analysis of situated use. The logical alternative, of course, is to set out for more abstract and comprehensive meaning definitions from the start. In fact, a number of recent, discourse-oriented treatments of the demonstratives proceed this way; they view those elements as processing instructions rather than signs with inherently spatial denotation (Isard 1975, Hawkins 1978, Kirsner 1979, Linde 1979, Ehlich 1982).
In early 1991 the United States Treasury Department of the Bush Administration recommended in its proposal for Modernizing the Financial System that, in addition to other remarkable breaks with the traditional United States financial services framework, the current bank holding company structure be replaced with a new financial services holding company that would reward banks with the ability to engage in a broad new range of financial activities through separate affiliates, including full-service securities, insurance, and mutual fund activities. The Treasury Department pointed out that commercial banking and investment banking are complementary services and that the Glass-Steagall separation was unnecessary. The Treasury Department gave many reasons for the need for financial modernization and why such a modernized system would work better. As an example that demonstrates the advantages of the system proposed by the Treasury Department, the proposal pointed to the German banks and called the German model of a universal banking system the most liberal banking system in the world. What makes the German universal banking system so unique and desirable? The following outline of the history and the current structure of the German banking system is intended to give readers a background to determine whether the German banking system could be a model for the system of the future.
The task of this paper, as originally described in the outline of the current project, was to compare the German banking system, as one type of relationship banking, with the Japanese main bank system. This was, of course, not simply meant in the sense of a mere description and comparison of different institutions. A meaningful contribution rather has to look at the functions of a given banking system as a provider of capital or other financial services to client firms, has to ask in what respect the one or the other system might be superior or less efficient, and has to analyze the reasons for this. Such a thorough analysis would have to answer questions like, for instance: to what extent investment is financed by (long- or short-term) bank loans; whether German banks have, because of specific institutional arrangements like their own equity holdings, seats on company boards or other links with their borrowers, informational or other advantages that make bank finance cheaper or more easily available; how such banks behave with respect to financial distress and bankruptcy of their client firms; and what their exact role in corporate governance is. While preparing this paper I found that, in order to give reliable answers to these questions, there would have to be several other conferences comparable to the present one, focusing exclusively on our domestic system. Hence all this paper can provide at this moment is a short overview of the German banking system and its special traits (Universalbankensystem and group banking; Part I), a description and analysis of some aspects of bank lending to firms (Part II), and an account of the role of German banks as delegated monitors in widely held firms (Part III).
A description of the historical development of the specific links between banks and industry and their impact on the economic growth of Germany during the period of industrialization and later on would be specifically interesting within the framework of a conference that discusses the lessons and relevance of banking systems for developing market economies and for transforming socialist economies. However, historical remarks had to be omitted completely, not least for lack of my own knowledge, time and space, but also because this history is already well documented and available in English publications, too.
In my following remarks I will focus on a difference which we find in German law as well as in other legislations: the difference between entrepreneurial investments among firms and merely financial investments. Whereas our law of groups of companies, or Konzernrecht, contains quite an elaborate set of rules, the rules governing financial investments, especially cross-border financial investments, seem to be somewhat underdeveloped.
The following descriptive overview of the German corporate governance system and the current debate is structured as follows. Part II will give some information on the empirical background. Part III will describe the formal legal setting as well as actual practices in some key areas. Part IV will then deal with some issues of the current debate.
Until the late 1980s, asset securitisation was a US-American finance technique. In the meantime this technique has also been used in some European countries, although to a much lesser extent. While some of them have adopted or developed their legal and regulatory framework, others remain at earlier stages. That may be because of a lack of economic incentives, but also because of remaining regulatory or legal impediments. The following overview deals with the legal and regulatory environment in five selected European countries. It is structured as follows: first, this finance technique will be described in outline for the benefit of the reader who might not be familiar with it. A further part will report the recent development and the underlying economic reasons that drive this development. The main part will then deal with international aspects and give an overview of some legal and regulatory issues in five European jurisdictions. Tax and accounting questions are, however, excluded. Concluding remarks follow.
For the German observer the idea of a company repurchasing its own shares seems to resemble the picture of a snake eating its own tail. It appears to be highly unnatural, and one wonders how the tail can possibly be edible for the snake. Not in the United States. Although repurchases were once subject to the most stubbornly fought conflict in US company law, only some modest disclosure requirements and safeguards against overt market manipulation exist today. Large repurchases are an almost everyday event, and there is an increasing tendency. The aggregate value of shares repurchased by NYSE-listed companies has increased from $1.1 billion in 1975 to $6.3 billion in 1982 to $37.1 billion in 1985. A few examples may illustrate this practice further: within three years Ford Motor Corp. repurchased 30 million shares for $1.2 billion. In 1985 Phillips Petroleum Corp. was faced with two hostile bids and took several defensive steps, one of which was to tender for 20 million of its own shares at a total cost of $1 billion. And by the end of 1988 Exxon Corp. had retired 28 percent of its shares that had once been outstanding, at an aggregate cost of $14.5 billion. The situation in Germany is completely different. As will be shown, under German law repurchases are severely restricted and do not take place in any appreciable amount at all. In contrast to German law, the United Kingdom does not prohibit repurchases but requires companies to comply with such complex rules as US companies would regard simply as limiting their economic freedom. Therefore UK companies, too, very seldom repurchase their own shares. This paper deals with repurchases by quoted companies, in particular the UK public company and its more or less equivalent German counterpart, the Aktiengesellschaft (AG). It seeks to ascertain the reasons why companies might want to engage in those activities.
Moreover, it tries to analyse the problems which may arise from repurchases and the safeguards which the UK and German legal systems provide for these problems.
In the last two decades Philippine languages, and among these especially Tagalog, have acquired a prominent place in linguistic theory. A central role in this discussion was played by two papers written by Schachter (1976 and 1977), who was inspired by Keenan's article on the subject from 1976. The most recent contributions on this topic have been from de Wolff (1988) and Shibatani (1988), both of which were published in a collection of essays, edited by Shibatani, with the title Passive and Voice. These works, and several works in between, deal with the focus system specific to Philippine languages. The main discussion centers around the fact that Philippine languages contain a basic set of 5 to 7 affix focus forms. Their exact number varies not only in the secondary literature but also in the primary sources, i.e. Tagalog grammars, where considerable differences in the number of affix focus forms can be found. All of these works, however, agree on one point: the Philippine focus system basically consists of agent, patient (=goal or object), benefactive, locative, and instrumental affix forms. Schachter/Otanes (1972) list a number of further forms, and in Drossard (1983 and 1984) we tried to show (in an attempt similar to those of Sapir 1917 and Klimov 1977) that the main criterion for a systematization of the Philippine focus system lies in the difference between the active and stative domains, an attempt which in our opinion was largely misunderstood (cf. the brief remarks in Shibatani (1988) and de Wolff (1988)). The present paper is thus, on the one hand, an attempt to restate and clarify our earlier position, and on the other, a further step towards such a systematization. A first step in this direction was an article on resultativity in Tagalog from 1991. In the present paper this approach will be extended to reciprocity. In the process we will show that it is valid to make a distinction between an active (=controlled action) vs. a stative (=limited controlled action) domain. First, however, we will take a brief look at what makes up the active and stative voice systems.
A feature of the Northern Iroquoian languages is their especially rich inventory of particles. This paper is concerned with one particle in the Cayuga language which has a widespread distribution and performs a broad range of apparently unrelated functions. The particle ne:' is commonly translated as 'it is/that is', 'this' or 'that'. In other instances it is rendered by predominant stress, or is simply omitted in the translation. The particle can occur in almost any syntactic or semantic environment, but it is not obligatory in any context. The various functions that have been suggested in the literature include indication of declarative mood and assertion, marking of emphasis, focus or contrast, and expression of predicative and deictic force. I argue that the particle ne:' can be described successfully if its distribution is considered from a wider perspective, taking into account discourse structure and variation in scope. Its analysis as a focus marker can account for the variety of apparently unrelated functions. The analysis is based on a detailed study of the particle's distribution in spoken language using a database of five Cayuga texts by four different speakers, including three narratives, one procedural text and a children's version of a ceremonial text.
Revised version of a paper presented at the Conference "The Distribution of Economic Well-Being in the 1980s - an International Perspective", June 21 - 23, 1993, in Fiskebäckskil, Sweden. This paper sketches changes in the distribution of well-being during the period from 1972 to 1991 against the background of West Germany's economic and demographic development, and compares the distribution of well-being in East Germany before and after reunification. We rely on equivalent income of persons as the main indicator to measure well-being, but we also look at the distribution of gross wage income of workers and employees. Estimates of the Federal Statistical Office referring to the mesolevel of average equivalent income of socio-economic groups as well as various distributional measures computed by us at the micro-level are used to gauge changes of the distribution. The computations are based on two sets of micro-data available to us, the official Income and Consumption Surveys (1973, 1978 and 1983), and the German Socio-economic Panel (1983 to 1990 for West Germany, 1990, 1991 for East Germany). At the meso-level we find substantial changes in the relative welfare positions of the ten socio-economic groups distinguished, but a nearly constant ranking of the groups during the whole period under review. At the micro-level our computations indicate slight increases in the inequality of gross earnings during both decades. The distribution of well-being as measured by equivalent income of persons seems also to have become slightly more unequal during the whole period but the changes are very small, and partly reversed during subperiods. A decomposition of overall inequality by occupational status of the heads of household using the Theil measure shows that more than 80 percent of overall inequality is due to within-group inequality with rising tendency. 
This result is mitigated a little when disaggregating the heterogeneous group of the not gainfully employed with regard to the main income source of the household.
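The within/between decomposition of the Theil measure used above can be illustrated with a small sketch (a generic textbook computation in Python, not the authors' code; the group labels and incomes are invented):

```python
from math import log

def theil(incomes):
    """Theil T index of a list of positive incomes."""
    n = len(incomes)
    mu = sum(incomes) / n
    return sum((y / mu) * log(y / mu) for y in incomes) / n

def theil_decomposition(groups):
    """Decompose the overall Theil T index into within- and between-group
    components. `groups` maps a group label to that group's incomes.
    The two components sum exactly to the overall index.
    """
    all_incomes = [y for ys in groups.values() for y in ys]
    n = len(all_incomes)
    mu = sum(all_incomes) / n
    within = between = 0.0
    for ys in groups.values():
        mu_g = sum(ys) / len(ys)
        share = (len(ys) * mu_g) / (n * mu)   # group's share of total income
        within += share * theil(ys)
        between += share * log(mu_g / mu)
    return within, between
```

With such a decomposition, the share of within-group inequality reported in the abstract corresponds to `within / (within + between)`.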
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and furthermore using monadic second-order variables, which may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. We show that unification of stratified second-order terms is decidable by describing a nondeterministic decision algorithm that ultimately uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and describe conditions for termination in the general case.
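The stratification condition can be illustrated with a toy check (a sketch under an invented term representation, pairs `(head, args)` with lowercase strings as variables and an explicit set of second-order variable names; this is not the paper's algorithm, only the side condition it relies on):

```python
def collect_paths(term, sovars, prefix=(), paths=None):
    """Record, for each variable, the set of strings of second-order
    variables occurring on the path from the root to its occurrences."""
    if paths is None:
        paths = {}
    if isinstance(term, str):                 # leaf: variable or constant
        if term.islower():                    # convention: lowercase = variable
            paths.setdefault(term, set()).add(prefix)
        return paths
    head, args = term
    if head in sovars:                        # occurrence of a second-order variable
        paths.setdefault(head, set()).add(prefix)
        prefix = prefix + (head,)
    for a in args:
        collect_paths(a, sovars, prefix, paths)
    return paths

def is_stratified(terms, sovars):
    """A system is stratified if every variable sees exactly one string of
    second-order variables across all of its occurrences."""
    paths = {}
    for t in terms:
        collect_paths(t, sovars, (), paths)
    return all(len(p) == 1 for p in paths.values())
```

For example, f(X(x), X(x)) is stratified because every occurrence of x lies below exactly the string "X", while f(X(x), x) is not, since x occurs both below X and at the top level.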
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem decidable using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, hence D-matching is NP-complete.
In recent econometric work, most analyses of female labour supply consider married women, with results for unmarried women provided rather as a by-product (Burtless/Greenberg, 1982; Johnson/Pencavel, 1984; Leu/Kugler, 1986; Merz, 1990). Where the interest is focused on unmarried women in particular, data from the seventies or rather simple econometric models are used (Keeley et al., 1978; Hausman, 1980; Coverman/Kemp, 1987). Often very specific populations are examined, such as lone mothers in Blundell/Duncan/Meghir (1992), Jenkins (1992), Staat/Wagenhals (1993) or Laisney et al. (1993). In analysing the economic behaviour of unmarried women, one is confronted with the problem that the term 'unmarried' is not clearly defined. It includes single, divorced, separated and widowed women. They live in different types of households, such as one-person households or family households, where they occupy different economic positions, for example as head of the household or as a relative of the head. The present work considers unmarried female heads of household. We assume that the dominant economic position as head of household, voluntarily or involuntarily occupied, forces these women into similar behaviour independently of their family status. They are therefore analysed together across the different family statuses: single, divorced, separated and widowed. Being unmarried is often regarded as a temporary state, voluntary or involuntary, for example in the case of young women before marriage or of divorced women after their separation. Nevertheless, the demographic development shows the increased importance of unmarried women in the population during the last decades. In the USA the proportion of female-headed households rose from 21.1% in 1970 to 26.2% in 1980 and 29.0% in 1992 (Statistical Abstracts of the United States, 1993; own calculations).
In the FRG, female-headed households constituted 26.4% of total households in 1970, 27.4% in 1980 and 30.1% in 1992 (Stat. Bundesamt, FS 1, Reihe 3, 1970, 1980, 1992). It therefore seems an interesting topic to analyse the labour supply behaviour of unmarried female heads. Of particular interest is the question of whether the labour supply of unmarried women resembles more closely that of married women or that of prime-age males. Another purpose of this analysis is to apply modern econometric panel data models, with special emphasis on the problem of unbalanced panel data. Most panel data analyses are carried out using balanced panel data, which is no problem if the selection process can be ignored and if enough cases are available to guarantee efficient estimation. Especially the last point was crucial for the present analysis of unmarried females: in the available panel data sets, unmarried female heads constitute only a rather small population. The estimation techniques were therefore modified to take missing observations for individuals into account. The paper is organized as follows: in section 2 the underlying theoretical model of intertemporal labour supply under uncertainty is briefly presented. Section 3 deals with the econometric specification and estimation techniques, where the use of unbalanced panel data is considered. Section 4 contains the data description, with a particular look at the unbalancedness of the samples. The last section 5 presents the empirical results. We compare the estimated parameters for unmarried women between the USA and the FRG and also analyse the differences between unmarried and married women. Moreover, a comparison between different samples of unmarried women is provided.
Universal banking means that banks are permitted to offer all of the various kinds of financial services. This includes classical banking activities like the credit and deposit business, as well as investment services, placement and brokerage of securities, and even insurance activities, trading in real estate and others. German universal banks also hold stock in nonfinancial firms and offer to vote their clients' shares in other firms. This paper deals with universal banks and their role in the investment business, more specifically, their links with investment companies and their various roles as shareholders and providers of financial services to such companies. Banks and investment companies have, as financial intermediaries, one trait in common: they both transform capital of investors (depositors and shareholders of investment funds, respectively) into funds (loans and equity or debt securities, respectively) that are channeled to other firms. So why should a regulation forbid to combine these transformation tasks in one institution or group, and why should the law not allow banks to establish investment companies and provide all kinds of financial services to them in addition to their banking services? German banking and investment company law have answered these questions in the affirmative. This paper argues that the existing regulation is not a sound and recommendable one. The paper is organized as follows: Sections II - V identify four areas where the combination of banking and investment might either harm the shareholders of the investment funds and/or negatively affect other constituencies such as the shareholders of the banking institution. These sections will at the same time explore whether there are institutional or regulatory provisions in place or market forces at work that adequately protect investors and the other constituencies in question. Concluding remarks follow (VI.).
The acquisition of Greek
(1995)
Studie zum Erwerb des Neugriechischen
[I]n its present form, the bibliography contains approximately 1100 entries. Bibliographical work is never complete, and the present one is still modest in a number of respects. It is not annotated, and it still contains a lot of mistakes and inconsistencies. It has nevertheless reached a stage which justifies considering the possibility of making it available to the public. The first step towards this is its pre-publication in the form of this working paper. […]
The bibliography is less complete for earlier years. For works before 1970, the bibliographies of Firbas and Golkova 1975 and Tyl 1970 may be consulted, which have not been included here.
Did earnings inequality in the Federal Republic of Germany increase from the 1960s to the 1980s?
(1996)
Pion and strangeness puzzles
(1996)
Data on the mean multiplicity of strange hadrons produced in minimum bias proton-proton and central nucleus-nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon-nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus-nucleus collisions than for nucleon-nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon-nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus-nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
The corporate governance systems in Europe differ markedly. Economists tend to use stylized models and distinguish between the Anglo-American, the German and the Latinist model. In this view, for instance, the Austrian, Dutch, German, and Swiss systems are said to be variations of one model. For lawyers the picture is, of course, much more detailed, as particular rules may vary even where common principles prevail. Many comparative studies on these differences have been undertaken in the meantime. I do not want to add another study but to treat a different question: are there, as a consequence of growing internationalization, globalization of markets and technological change, also tendencies towards convergence of our corporate governance systems? My answer will be in two parts. As corporate governance systems are traditionally shaped mainly by legislation, the first part will analyze the influence of economic and technological change on the rule-setting process itself. How does this process react to the fundamental environmental change? That includes a short analysis of the solution of centralized harmonizing of company law within the EU as well as the question of whether EU-wide competition between national corporate law legislators can be observed or be expected in the future. The second part will then turn to the national level. It deals with actual tendencies of convergence or, more correctly, of approach by the German corporate governance system to the Anglo-American one.
The previous proposal for a company law directive on takeovers in 1990 was rejected in Germany almost unanimously for several different reasons. The new "slimmed down" draft proposal, in the light of the subsidiarity principle, takes the different approaches to investor protection in the various member states better into account. Notably, the most controversial principle of the previous draft, viz. the mandatory bid rule as the only means of investor protection in case of a change of control, has been given up. Therefore a much higher degree of acceptance seems likely. The Bundesrat (upper house) and the industry associations have already expressed their consent; the Bundestag (Federal Parliament) will deal with the proposal shortly. The technique of a "frame directive" leaves ample leeway for the member states. That will shift the discussion back to the national level and there will lead to the question as to how to make use of this leeway (cf. II, III, below) rather than to a debate about principles as in the past. It seems likely that criticism will confine itself to more technical questions (cf. IV, below).
Automatic termination proofs of functional programming languages are an often challenged problem. Most work in this area is done on strict languages, where orderings for arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form, and it is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point when sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order polymorphic typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
Theticity
(1996)
The subject matter of this chapter is the semantic, syntactic and discourse-pragmatic background as well as the cross-linguistic behavior of types of utterance exemplified by the following English sentences […]: (1) My NECK hurts. […] (2) The PHONE's ringing. [...] Sentences such as […] are usually held to stand in opposition to sentences with a topical subject. The difference is said to be formally marked, for example, by VS order vs. topical SV order (as in Albanian po bie telefoni 'the PHONE is ringing' vs. telefoni po bie 'the PHONE is RINGING'), or by accent on the subject only vs. accent on both the subject and the verb (as in the English translations). The term theticity will be used in the following to label the specific phenomenological domain to which the sentences in (1) and (2) belong. It has long been commonplace that these and similar expressions occur at particular points in the discourse where "a new situation is presented as a whole". We will try to depict and classify the various discourse situations in which these expressions have been found in the different languages, and we will try to trace out areas of cross-linguistic comparability. Finally, we will raise the question whether or not there is a common denominator which would justify a unified treatment of all these expressions in functional/semantic terms.
This paper is intended as a short survey of the most relevant methods for grouped transition data. The fundamentals of duration analysis are discussed in a continuous time framework, whereas the treatment of methods for discrete durations is limited to the peculiarity of these models. In addition, some recent empirical applications of the methods are discussed.
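One standard member of the model family surveyed here, the discrete-time logit hazard for grouped durations, can be sketched via its person-period likelihood (a generic illustration, not code from the survey; the parameter names and the linear time trend are invented for the example):

```python
from math import exp, log

def logit_hazard(beta0, beta_t, t):
    """Discrete-time logit hazard with a linear time trend."""
    z = beta0 + beta_t * t
    return 1.0 / (1.0 + exp(-z))

def log_likelihood(spells, beta0, beta_t):
    """Log-likelihood of grouped duration data under a logit hazard.

    Each spell is (duration, event), where event is 1 if the spell ended
    in period `duration` and 0 if it was censored there. Each spell
    contributes one survival term per completed period and, if
    uncensored, a failure term for the final period.
    """
    ll = 0.0
    for duration, event in spells:
        for t in range(1, duration + 1):
            h = logit_hazard(beta0, beta_t, t)
            if t == duration and event:
                ll += log(h)          # failure in the last observed period
            else:
                ll += log(1.0 - h)    # survival through period t
    return ll
```

Maximizing this function over the parameters yields the estimates; censored spells enter only through survival terms.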
This paper provides a review of empirical evidence relating to the impact of training on employment performance. Since a central issue in estimating training effects is the sample selection problem a short theoretical discussion of different evaluation strategies is given. The empirical overview primarily focuses on non-experimental evidence for Germany. In addition selected studies for other countries and experimental investigations are discussed.
A partial rehabilitation of side-effecting I/O : non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done such that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics provides a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common one of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has a potential to extend the Haskell I/O system.
A new method for the determination of S-matrices of devices in multimoded waveguides and first experimental experiences are presented. The theoretical foundations are given. The scattering matrix of a TESLA copper cavity at a frequency above the cut-off of the second waveguide mode has been measured.
Rapidity distributions of net hyperons (Λ − Λ̄) are compared to distributions of participant protons (p − p̄). Strangeness production (mean multiplicities of produced Λ/Σ0 hyperons and ⟨K + K̄⟩) in central nucleus-nucleus collisions is shown for different collision systems at different energies. An enhanced production of Λ̄ compared to p̄ is observed at 200 GeV per nucleon.
The data on average hadron multiplicities in central A+A collisions measured at the CERN SPS are analysed with the ideal hadron gas model. It is shown that the full chemical equilibrium version of the model fails to describe the experimental results. The agreement of the data with the off-equilibrium version allowing for partial strangeness saturation is significantly better. The freeze-out temperature of about 180 MeV seems to be independent of the system size (from S+S to Pb+Pb) and in agreement with that extracted in e+e−, pp and pp̄ collisions. The strangeness suppression is discussed at both the hadron and the valence quark level. It is found that the hadronic strangeness saturation factor γ_S increases from about 0.45 for pp interactions to about 0.7 for central A+A collisions, with no significant change from S+S to Pb+Pb collisions. The quark strangeness suppression factor λ_S is found to be about 0.2 for elementary collisions and about 0.4 for heavy ion collisions, independently of collision energy and type of colliding system.
It is shown that data on pion and strangeness production in central nucleus-nucleus collisions are consistent with the hypothesis of a Quark Gluon Plasma formation between 15 A GeV/c (BNL AGS) and 160 A GeV/c (CERN SPS) collision energies. The experimental results interpreted in the framework of a statistical approach indicate that the effective number of degrees of freedom increases by a factor of about 3 in the course of the phase transition and that the plasma created at CERN SPS energy may have a temperature of about 280 MeV (energy density ≈ 10 GeV/fm^3). Experimental studies of central Pb+Pb collisions in the energy range 20-160 A GeV/c are urgently needed in order to localize the threshold energy, and study the properties of the QCD phase transition.
We demonstrate that a new type of analysis in heavy-ion collisions, based on an event-by-event analysis of the transverse momentum distribution, allows us to obtain information on secondary interactions and collective behaviour that is not available from the inclusive spectra. Using a random walk model as a simple phenomenological description of initial state scattering in collisions with heavy nuclei, we show that the event-by-event measurement allows a quantitative determination of this effect, well within the resolution achievable with the new generation of large acceptance hadron spectrometers. The preliminary data of the NA49 collaboration on transverse momentum fluctuations indicate qualitatively different behaviour than that obtained within the random walk model. The results are discussed in relation to the thermodynamic and hydrodynamic description of nuclear collisions.
The transverse momentum and rapidity distributions of net protons and negatively charged hadrons have been measured for minimum bias proton-nucleus and deuteron-gold interactions, as well as central oxygen-gold and sulphur-nucleus collisions at 200 GeV per nucleon. The rapidity density of net protons at midrapidity in central nucleus-nucleus collisions increases both with target mass for sulphur projectiles and with the projectile mass for a gold target. The shape of the rapidity distributions of net protons forward of midrapidity for d+Au and central S+Au collisions is similar. The average rapidity loss is larger than 2 units of rapidity for reactions with the gold target. The transverse momentum spectra of net protons for all reactions can be described by a thermal distribution with temperatures between 145 ± 11 MeV (p+S interactions) and 244 ± 43 MeV (central S+Au collisions). The multiplicity of negatively charged hadrons increases with the mass of the colliding system. The shape of the transverse momentum spectra of negatively charged hadrons changes from minimum bias p+p and p+S interactions to p+Au and central nucleus-nucleus collisions. The mean transverse momentum is almost constant in the vicinity of midrapidity and shows little variation with the target and projectile masses. The average number of produced negatively charged hadrons per participant baryon increases slightly from p+p, p+A to central S+S, Ag collisions.
In this paper we analyze the relation between fund performance and market share. Using three performance measures we first establish that significant differences in the risk-adjusted returns of the funds in the sample exist. Thus, investors may react to past fund performance when making their investment decisions. We estimated a model relating past performance to changes in market share and found that past performance has a significant positive effect on market share. The results of a specification test indicate that investors react to risk-adjusted returns rather than to raw returns. This suggests that investors may be more sophisticated than is often assumed.
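One common risk-adjusted performance measure of the kind referred to above is Jensen's alpha, the intercept of a market-model regression of fund excess returns on market excess returns (a generic sketch; the paper's exact measures and data are not reproduced here):

```python
def jensens_alpha(fund_excess, market_excess):
    """OLS fit of fund excess returns on market excess returns.

    Returns (alpha, beta): alpha is the risk-adjusted performance
    (Jensen's alpha), beta the fund's market exposure.
    """
    n = len(fund_excess)
    mx = sum(market_excess) / n
    my = sum(fund_excess) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(market_excess, fund_excess))
    var = sum((x - mx) ** 2 for x in market_excess)
    beta = cov / var
    alpha = my - beta * mx
    return alpha, beta
```

Ranking funds by such an alpha rather than by raw returns is what the specification test in the abstract distinguishes.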
Modelling consumer behaviour in a profile design using a three equation generalised Tobit model
(1997)
We propose the application of a three equation generalised Tobit to model different aspects of consumer behaviour in a full profile study design. The model takes into account that consumer behaviour can be measured by preference scores, purchase probability and purchase volume. We aim to avoid the drawbacks of traditional conjoint analysis, where the latter two aspects are disregarded. Starting from a full profile design, we develop the appropriate questionnaire layout, the econometric model, the likelihood function and tests. The model is applied in a market entry study for an innovative medicament after a reform of Germany's public health system in 1993-1994. JEL Classification: C35, M31, L65
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TEX does with mathematical formulae. The implementation makes heavy use of higher order functions. It has been implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered as a status report of the implementation as well as a reference manual for the resulting system.
In the early 1990s, a consensus emerged among the leading experts in the field of small and micro business finance. It is based on three elements: The focus of projects should be on improving the entire financial sector of a given developing country; a commercial approach should be adopted, which implies covering costs and keeping costs as low as possible; and institutions should be created which are both able and willing to provide good financial services to the target group on a lasting basis. The starting point for this paper, which wholeheartedly endorses these three elements, is the proposition that putting these general principles into practice is much more difficult than some of their proponents seem to believe - and also more difficult than some of them have led donors to believe. The paper discusses the central issues of small and micro business financing in three areas: credit in general and the cost-effectiveness of lending methodologies in particular (Section II); savings in general and the role of deposit-taking in the growth of a target group-oriented financial institution in particular (Section III); and the process of creating viable target group-oriented financial institutions in developing countries (Section IV). We argue that donor institutions must be willing, and prepared, to play a role here which differs in important respects from their conventional role if they really wish to support sustainable financial sector development.
Paper Presented at the Conference on Workable Corporate Governance: Cross-Border Perspectives held in Paris, March 17-19, 1997 To appear in: A. Pezard/J.-M. Thiveaud: Workable Corporate Governance: Cross-Border Perspectives, Montchrestien, Paris 1997. The paper discusses the role of various constituencies in the corporate governance of a corporation from the perspective of incomplete contracts. A strict shareholder value orientation in the sense of a rule that at any time firm decisions should be made strictly in the interest of the present shareholders would make it difficult for the firm to establish long-term relationships as the potential partners would have to fear that, at a later stage of the co-operation, the shareholders or a management acting only on their behalf could exploit them because of the inevitable incompleteness of long-term contracts. One way of mitigating these problems is to put in place a corporate governance system which gives some active role to the other stakeholders or constituencies, or which makes their interests a well-defined element of the objective function of the firm. A commitment not to follow a policy of strict shareholder value maximization ex post can be efficient ex ante. Such a system would clearly differ from what is advocated by proponents of a "stakeholder approach", as it would limit the rights of the other constituencies to those which would have been agreed upon in a constitutional contract concluded between them and the founder of the firm at the time when long-term contracts are first established.
During the last years, issues of strategic management accounting have received widespread attention in the accounting literature. Yet the conceptual foundation of most proposals is not clear. This paper presents a theoretical analysis of one of the most prominent approaches of strategic management accounting, i.e., Target Costing. First, the relationship between Target Costing and Life-Cycle-Costing is shown. Secondly, a model based on a mechanism-design approach is used to answer the question of whether the „Market-into-Company“ method of Target Costing can somehow be endogenized. The model captures problems of asymmetric information, price policy and cost structures (i.e. learning effects etc.). The analysis shows that the more „strategic“ the firm's cost function is, the less valid is „strategic“ management accounting in terms of the usual way Target Costing is employed.
Insider trading and portfolio structure in experimental asset markets with a long-lived asset
(1997)
We report results of a series of nine market experiments with asymmetric information and a fundamental value process that is more "realistic" than those in previous experiments. Both a call market institution and a continuous double auction mechanism are employed. We find considerable pricing inefficiencies that are only partially exploited by insiders. The magnitude of insider gains is analyzed separately for each experiment. We find support for the hypothesis that the continuous double auction leads to more efficient outcomes. Finally, we present evidence of an endowment effect: the initial portfolio structure influences the final asset holdings of experimental subjects.
In this study we are concerned with the impact of vocational training on the individual's unemployment duration in West Germany. The data basis used is the German Socio-Economic Panel (GSOEP) for the period from 1984 to 1994. To resolve the intriguing sample-selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods which were developed in the statistical literature. These matching methods use as the main matching variable the individual propensity score to participate in training, which is obtained by estimating a random-effects probit model. On the basis of the matched sample, a discrete-time hazard rate model is utilized to assess the impact of vocational training on unemployment duration. Our results indicate that training significantly raises the transition rate of the unemployed into employment in the short but not in the long run. JEL classification: C40, J20, J64
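The matching step described in the abstract can be sketched as simple nearest-neighbour matching on the estimated propensity score. The unit IDs and scores below are invented for the illustration; in the study itself the scores come from a random-effects probit model estimated on the GSOEP:

```python
# Hypothetical sketch of nearest-neighbour matching on the propensity
# score (the main matching variable described in the abstract).
# All IDs and scores are invented; the real study obtains scores from
# a random-effects probit model for training participation.

def match_on_propensity(treated, controls):
    """For each treated unit, pick the control with the closest score.

    treated, controls: lists of (unit_id, propensity_score) pairs.
    Returns a list of (treated_id, matched_control_id) pairs.
    """
    pairs = []
    for t_id, t_score in treated:
        c_id, _ = min(controls, key=lambda c: abs(c[1] - t_score))
        pairs.append((t_id, c_id))
    return pairs

trainees = [("t1", 0.62), ("t2", 0.35)]
non_trainees = [("c1", 0.30), ("c2", 0.60), ("c3", 0.90)]
# t1 (0.62) is matched to c2 (0.60); t2 (0.35) to c1 (0.30).
print(match_on_propensity(trainees, non_trainees))
```

Real implementations add refinements the sketch omits, such as calipers on the maximum score distance and matching with or without replacement.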
We estimate a semiparametric single-risk discrete-time duration model to assess the effect of vocational training on the duration of unemployment spells. The data basis used in this study is the German Socio-Economic Panel (GSOEP) for West Germany for the period from 1986 to 1994. To take into account a possible selection bias, actual participation in vocational training is instrumented using estimates of a random-effects probit model for the participation in qualification measures. Our main results show that training does have a significant short-term effect of reducing unemployment duration, but that this effect does not persist in the long run. JEL classifications: C41, J20, J64
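The discrete-time duration framework behind both abstracts can be illustrated with the relationship between per-period exit hazards and the survivor function: if h_t is the probability of leaving unemployment in period t conditional on still being unemployed, then S(t) is the product of (1 - h_s) over s up to t. The hazard numbers below are invented to mimic the papers' qualitative finding (a short-run effect of training only):

```python
# Sketch of the discrete-time hazard framework: per-period exit
# hazards h_t imply the survivor function S(t) = prod_{s<=t} (1 - h_s).
# The hazard values below are invented for the illustration.

def survivor(hazards):
    """Survival probabilities after each period, from discrete hazards."""
    s, out = 1.0, []
    for h in hazards:
        s *= (1.0 - h)
        out.append(s)
    return out

# Training raises the transition rate only in the first two periods.
hazards_training = [0.30, 0.25, 0.10, 0.10]
hazards_no_training = [0.15, 0.15, 0.10, 0.10]
print(survivor(hazards_training))
print(survivor(hazards_no_training))
```

The estimated models are richer (covariates enter the period-specific hazards, e.g. through a logit link), but the survivor-function identity above is what links the hazard estimates to statements about unemployment duration.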
It is well known that first order unification is decidable, whereas second order and higher order unification is undecidable. Bounded second order unification (BSOU) is second order unification under the restriction that only a bounded number of holes in the instantiating terms for second order variables is permitted; the size of the instantiation, however, is not restricted. In this paper, a decision algorithm for bounded second order unification is described. This is the first non-trivial decidability result for second order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second order unification (MSOU), a specialization of BSOU, is in Σ₂ᵖ. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second order unification is easier than context unification, whose decidability is currently an open question.
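The decidable first order case that the abstract contrasts with BSOU can be shown with a compact version of Robinson's unification algorithm. The term representation (variables as strings, applications as (symbol, argument-list) tuples) is an assumption of this sketch, not taken from the paper:

```python
# Sketch of first order unification (Robinson's algorithm with an
# occurs check), the decidable base case contrasted with second order
# unification in the abstract. Terms: variables are strings, and
# f(t1, ..., tn) is the tuple (f, [t1, ..., tn]); constants have an
# empty argument list. This representation is an assumption.

def walk(term, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    """Does var occur in term under the current substitution?"""
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, a, subst) for a in term[1])
    return False

def bind(var, term, subst):
    """Extend the substitution, failing on a cyclic binding."""
    if occurs(var, term, subst):
        return None
    new = dict(subst)
    new[var] = term
    return new

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                      # s is a variable
        return bind(s, t, subst)
    if isinstance(t, str):                      # t is a variable
        return bind(t, s, subst)
    f, s_args = s
    g, t_args = t
    if f != g or len(s_args) != len(t_args):    # symbol clash
        return None
    for a, b in zip(s_args, t_args):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# f(X, g(Y)) =? f(a, g(b))  yields the unifier {X: a, Y: b}
print(unify(("f", ["X", ("g", ["Y"])]),
            ("f", [("a", []), ("g", [("b", [])])])))
# X =? f(X) fails the occurs check and has no unifier.
print(unify("X", ("f", ["X"])))  # None
```

In the second order case, variables may stand for functions whose instantiations introduce new structure, which is what destroys decidability in general and motivates the bounded restriction studied in the paper.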
This paper describes context analysis, an extension to strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented, based on abstract reduction, which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
The extraction of strictness information marks an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction, we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, an extension of the G-machine that has been adapted to the needs of abstract reduction.