As McCarthy (2002: 128) notes, pre-Optimality Theory (OT) phonology was primarily concerned with representations and theories of subsegmental structure. In contrast, the role of representations and the choice of structural models have received little attention in OT. Some central representational issues of the pre-OT era have, in fact, become moot in OT (McCarthy 2002: 128). Further, as Baković (2007) notes, even for assimilatory processes, where representation played a central role in the pre-OT era, constraint interaction now carries the main explanatory burden. Indeed, relatively few studies in OT (e.g., Rose 2000; Hargus & Beavert 2006; Huffmann 2005, 2007; Morén 2006) have argued for the importance of phonological representations. This paper aims to contribute to this line of work by reanalyzing a set of processes related to vowel harmony in Shimakonde, a Bantu language spoken in Mozambique and Tanzania. These processes are of particular interest because Liphola’s (2001) study argues that they are derivationally opaque and so not amenable to an OT analysis. I show that the opacity disappears given the proper choice of representations for vowel features and a metrical harmony domain.
The main concern of this article is to discuss some recent findings concerning the psychological reality of optimality-theoretic pragmatics and its central component – bidirectional optimization. A current challenge is to close the gap between experimental pragmatics and neo-Gricean theories of pragmatics. I claim that OT pragmatics helps to overcome this gap, in particular in connection with the discussion of asymmetries between natural language comprehension and production. The theoretical debate concentrates on two different ways of interpreting bidirection: first, bidirectional optimization as a psychologically realistic online mechanism; second, bidirectional optimization as an offline phenomenon of fossilizing optimal form-meaning pairs. It will be argued that neither of these extreme views fits the empirical data completely when taken on its own.
One of the most important insights of Optimality Theory (Prince & Smolensky 1993) is that phonological processes can be reduced to the interaction between faithfulness and universal markedness principles. In the most constrained version of the theory, all phonological processes should thus be reducible. This hypothesis is tested by alternations that look phonological but in which universal markedness principles appear to play no role. If we are to pursue the claim that all phonological processes depend on the interaction of faithfulness and markedness, then processes that are not dependent on markedness must lie outside phonology. In this paper I will examine a group of such processes, the initial consonant mutations of the Celtic languages, and argue that they belong entirely to the morphology of the languages, not the phonology.
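The interaction of ranked faithfulness and markedness constraints can be sketched as a toy tableau evaluation. This is a minimal illustration, not any paper's analysis: final devoicing stands in as the process, and the constraint names, candidates, and segment inventory are all hypothetical.

```python
# Toy Optimality Theory evaluator: the winner is the candidate with the
# fewest violations of the highest-ranked constraint, with ties broken by
# the next constraint down (lexicographic comparison of violation profiles).

def voiced_coda(inp, cand):
    """Markedness: penalize a voiced obstruent in final (coda) position."""
    return 1 if cand[-1] in "bdgvz" else 0

def ident_voice(inp, cand):
    """Faithfulness: penalize each segment whose voicing differs from the input."""
    return sum(1 for i, c in zip(inp, cand) if (i in "bdgvz") != (c in "bdgvz"))

def optimal(inp, candidates, ranking):
    """Pick the candidate whose violation profile is lexicographically best."""
    return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

# Markedness outranks faithfulness: the devoiced candidate wins.
print(optimal("bed", ["bed", "bet"], [voiced_coda, ident_voice]))  # -> bet
# Reverse the ranking: the faithful candidate wins.
print(optimal("bed", ["bed", "bet"], [ident_voice, voiced_coda]))  # -> bed
```

Reranking the same two constraints flips the winner, which is exactly the sense in which constraint interaction, rather than representation, carries the explanatory burden in OT.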
It is well known that English children between the ages of 4 and 6 display a so-called Delay of Principle B Effect (DPBE) in that they allow pronouns to refer to a local c-commanding antecedent. Their guessing pattern with pronouns contrasts with their adult-like interpretation of reflexives. The DPBE has been explained as resulting from a lack of pragmatic knowledge or insufficient cognitive resources. However, such extra-grammatical accounts cannot explain why the DPBE only shows up in particular languages and in particular syntactic environments. Moreover, such accounts fail to explain why the DPBE only emerges in comprehension and not in production. This paper hypothesizes that the presence or absence of the DPBE can be explained by the properties of the grammar. Fischer's (2004) optimality-theoretic analysis of binding, which explains cross-linguistic variation, and Hendriks and Spenader's (2005/6) optimality-theoretic account of the acquisition of pronouns and reflexives are combined into a single model. This model yields testable predictions with respect to the presence or absence of the DPBE in particular languages, in particular syntactic environments, and in comprehension and/or production.
Ever since the discovery of neural networks, there has been a controversy between two modes of information processing. On the one hand, symbolic systems have proven indispensable for our understanding of higher intelligence, especially when cognitive domains like language and reasoning are examined. On the other hand, it is a matter of fact that intelligence resides in the brain, where computation appears to be organized by numerical and statistical principles and where a parallel distributed architecture is appropriate. The present claim is in line with researchers like Paul Smolensky and Peter Gärdenfors and suggests that this controversy can be resolved by a unified theory of cognition – one that integrates both aspects of cognition and assigns the proper roles to symbolic computation and numerical neural computation.
The overall goal of this contribution is to discuss formal systems that are suitable for providing the formal basis of such a unified theory. It is suggested that the instruments of modern logic and model-theoretic semantics are appropriate for analyzing certain aspects of dynamical systems, such as inferring and learning in neural networks. Hence, I suggest that an active dialogue between the traditional symbolic approaches to logic, information and language and the connectionist paradigm is possible and fruitful. An essential component of this dialogue refers to Optimality Theory (OT) – taken as a theory that likewise aims to overcome the gap between symbolic and neuronal systems. In the light of the proposed logical analysis, notions like recoverability and bidirection are explained, and the problem of founding a strict constraint hierarchy is discussed. Moreover, a claim is made for developing an "embodied" OT closing the gap between symbolic representation and embodied cognition.
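One concrete bridge between the symbolic and the numerical picture is the observation that a strict constraint hierarchy can be encoded by numerical weights, provided each weight exceeds the largest combined penalty that all lower-ranked constraints can jointly assign. The sketch below illustrates this with hypothetical violation profiles; it is an illustration of the general point, not the chapter's own formalism.

```python
# If each weight exceeds the maximal total weighted penalty of all
# lower-ranked constraints, the weighted-sum (harmony-style) winner
# coincides with the winner under strict lexicographic ranking.
profiles = {
    "cand_a": (0, 3, 1),   # violations of C1, C2, C3 (C1 ranked highest)
    "cand_b": (1, 0, 0),
    "cand_c": (0, 2, 5),
}

def ranked_winner(profiles):
    # Strict ranking: compare violation vectors lexicographically.
    return min(profiles, key=profiles.get)

def weighted_winner(profiles, weights):
    # Numerical evaluation: minimize the weighted sum of violations.
    return min(profiles, key=lambda c: sum(w * v for w, v in zip(weights, profiles[c])))

# A base larger than any single violation count guarantees strict domination.
base = 1 + max(v for profile in profiles.values() for v in profile)
weights = [base ** i for i in reversed(range(3))]   # here: 36, 6, 1

print(ranked_winner(profiles), weighted_winner(profiles, weights))
```

With exponentially spaced weights the two modes of evaluation agree, which is one precise sense in which symbolic (strictly ranked) computation can live inside a numerical substrate.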
The article aims to give an overview of the application of Optimality Theory (OT) to the domain of pragmatics. In the introductory part we discuss different ways to view the division of labor between semantics and pragmatics. Rejecting the doctrine of literal meaning, we conform to (i) semantic underdetermination and (ii) contextualism (the idea that the mechanism of pragmatic interpretation is crucial both for determining what the speaker says and what he means). Taking the assumptions (i) and (ii) as essential requisites for a natural theory of pragmatic interpretation, section 2 introduces the three main views conforming to these assumptions: Relevance theory, Levinson’s theory of presumptive meanings, and the Neo-Gricean approach. In section 3 we explain the general paradigm of OT and the idea of bidirectional optimization. We show how the idea of optimal interpretation can be used to restructure the core ideas of these three different approaches. Further, we argue that bidirectional OT has the potential to account both for the synchronic and the diachronic perspective on pragmatic interpretation. Section 4 lists relevant examples of using the framework of bidirectional optimization in the domain of pragmatics. Section 5 provides some general conclusions. Modeling both the synchronic and the diachronic perspective on pragmatics opens the way for a deeper understanding of the idea of naturalization and (cultural) embodiment in the context of natural language interpretation.
To some, the relation between bidirectional optimality theory and game theory seems obvious: strong bidirectional optimality corresponds to Nash equilibrium in a strategic game (Dekker and van Rooij 2000). But in the domain of pragmatics this formally sound parallel is conceptually inadequate: the sequence of utterance and interpretation cannot reasonably be modelled as a strategic game, because this would mean that speakers choose formulations independently of a meaning that they want to express, and that hearers choose an interpretation irrespective of an utterance that they have observed. Clearly, the sequence of utterance and interpretation requires a dynamic game model. One such model, and one that is widely studied and of manageable complexity, is a signaling game. This paper is therefore concerned with an epistemic interpretation of bidirectional optimality, both strong and weak, in terms of beliefs and strategies of players in a signaling game. In particular, I suggest that strong optimality may be regarded as a process of internal self-monitoring and that weak optimality corresponds to an iterated process of such self-monitoring. This latter process can be derived by assuming that agents act rationally on (possibly partial) beliefs in a self-monitoring opponent.
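The two notions of bidirectional optimality can be made concrete with a toy computation. This is a sketch under stated assumptions, not the paper's model: pairs are valued by a scalar (probability of the meaning minus cost of the form), and the form and meaning names and all numbers are illustrative. Strong optimality is a direct mutual-best check; weak optimality is computed by the standard iterative construction in which a pair can only be blocked by an already-accepted weakly optimal pair.

```python
from itertools import product

# Hypothetical values: a frequent meaning is worth more than a rare one,
# and a short form costs less than a long one.
prob = {"m_frequent": 0.9, "m_rare": 0.1}
cost = {"f_short": 0.1, "f_long": 0.2}

def value(form, meaning):
    return prob[meaning] - cost[form]

pairs = list(product(cost, prob))

def strongly_optimal(pairs):
    # (f, m) survives iff no other form expresses m better and
    # no other meaning is a better interpretation of f.
    return {(f, m) for f, m in pairs
            if all(value(f2, m) <= value(f, m) for f2 in cost)
            and all(value(f, m2) <= value(f, m) for m2 in prob)}

def weakly_optimal(pairs):
    # Iterated construction: a pair is blocked only by an already-accepted
    # weakly optimal pair that shares its form or its meaning.
    accepted = []
    for f, m in sorted(pairs, key=lambda p: value(*p), reverse=True):
        if not any((f2 == f or m2 == m) and value(f2, m2) > value(f, m)
                   for f2, m2 in accepted):
            accepted.append((f, m))
    return set(accepted)

print(strongly_optimal(pairs))   # only the (short, frequent) pair survives
print(weakly_optimal(pairs))     # additionally licenses (long, rare)
```

The iteration mirrors the idea of iterated self-monitoring: the strongly optimal pair is fixed first, and only then can the remaining form and meaning be paired with each other.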
Horn's division of pragmatic labour (Horn, 1984) is a universal property of language: simple meanings are paired with simple forms, and deviant meanings with complex forms. This division makes sense, but even a community of language users that does not know it makes sense will develop it after a while, because it yields optimal communication at minimal cost. This property of the division of pragmatic labour is shown by formalising it and applying it to a simple form of signalling game, which allows computer simulations to corroborate the intuitions. The division of pragmatic labour is a stable communicative strategy that a population of communicating agents will converge on, and it cannot be displaced by alternative strategies once it is in place.
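The claim that the Horn strategy yields optimal communication at minimal cost can be illustrated with a toy signalling-game payoff calculation. The priors, costs, and strategy names below are hypothetical stand-ins, not the paper's simulation setup: communication success pays 1, and the cost of the chosen form is subtracted.

```python
# Expected utility of a speaker/hearer strategy pair in a toy signalling
# game: payoff 1 for successful communication, minus the cost of the form.
priors = {"frequent": 0.75, "rare": 0.25}   # hypothetical meaning frequencies
costs = {"short": 0.1, "long": 0.2}         # hypothetical form costs

def expected_utility(speaker, hearer):
    # speaker: meaning -> form; hearer: form -> meaning.
    return sum(p * ((1 if hearer[speaker[m]] == m else 0) - costs[speaker[m]])
               for m, p in priors.items())

horn = ({"frequent": "short", "rare": "long"},
        {"short": "frequent", "long": "rare"})
anti_horn = ({"frequent": "long", "rare": "short"},
             {"long": "frequent", "short": "rare"})
pooling = ({"frequent": "short", "rare": "short"},
           {"short": "frequent", "long": "rare"})

for name, strat in [("Horn", horn), ("anti-Horn", anti_horn), ("pooling", pooling)]:
    print(name, round(expected_utility(*strat), 3))
```

Both separating strategies communicate perfectly, but the Horn strategy does better because the cheap form is reserved for the frequent meaning; the pooling strategy trails both. This expected-utility gap is the intuition the paper's simulations are described as corroborating.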
The phenomenon of phonological opacity has been the subject of much debate in recent years. Scholars opposed to the Optimality Theory (OT) research program argue that opacity proves OT must be false, while the solutions proposed within OT, such as sympathy theory and stratal OT, have proved unsatisfying to many OT proponents, who find these proposals inconsistent with the parallelist approach to phonological processes otherwise characteristic of OT. In this paper I reexamine one of the best-known cases of opacity, that found in three processes of Tiberian Hebrew (TH), and argue that these processes only appear to be opaque because previous analyses have treated them as pure phonology rather than as an interaction between phonology and morphology. Once it is recognized that certain words of TH are lexically marked to end in a syllabic trochee, and that the goal of paradigm uniformity exerts grammatical pressure on the phonology, the three processes no longer present a problem for parallelist OT. The results suggest the possibility that all crosslinguistic instances of apparent opacity can be explained in terms of the phonology-morphology interface and that purely phonological opacity does not exist. If this claim is true, then parallelist OT can be defended against its detractors without the need for additional mechanisms like sympathy theory and stratal OT.
The present study offers an Optimality-Theoretic analysis of the syllabification of intervocalic consonants and glides in Modern English. It will be argued that the proposed syllabifications fall out from universal markedness constraints – all of which derive motivation from other languages – and a language-specific ranking. The analysis offered below is therefore an alternative both to the traditional rule-based analyses of English syllabification, e.g. Kahn (1976), Borowsky (1986), Giegerich (1992, 1999), and to the Optimality-Theoretic treatment proposed by Hammond (1999), whose analysis requires several language-specific constraints that apparently have no cross-linguistic motivation.