Working Paper
Sovereign bond risk premiums
(2013)
Credit risk has become an important factor driving government bond returns. We therefore introduce an asset pricing model which exploits information contained in both forward interest rates and forward CDS spreads. Our empirical analysis covers euro-zone countries with German government bonds as credit risk-free assets. We construct a market factor from the first three principal components of the German forward curve as well as a common and a country-specific credit factor from the principal components of the forward CDS curves. We find that predictability of risk premiums of sovereign euro-zone bonds improves substantially if the market factor is augmented by a common and an orthogonal country-specific credit factor. While the common credit factor is significant for most countries in the sample, the country-specific factor is significant mainly for peripheral euro-zone countries. Finally, we find that during the current crisis period, market and credit risk premiums of government bonds are negative over long subintervals, a finding that we attribute to the presence of financial repression in euro-zone countries.
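The factor construction described above can be illustrated with a minimal sketch (not the authors' code) of extracting the first three principal components from a hypothetical panel of forward rates; the panel layout, the synthetic data, and all names are assumptions:

```python
import numpy as np

def principal_components(panel, k=3):
    """Return the first k principal-component series of a T x M panel.

    Rows are observation dates, columns are forward-rate maturities.
    Columns are demeaned before the eigendecomposition.
    """
    X = panel - panel.mean(axis=0)           # demean each maturity
    cov = np.cov(X, rowvar=False)            # M x M covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:k]     # pick the k largest
    return X @ eigvec[:, order]              # T x k factor series

# Illustrative panel: 200 dates, 10 maturities of synthetic forward rates
# driven by a common level shock plus idiosyncratic noise
rng = np.random.default_rng(0)
level = rng.normal(size=(200, 1))
panel = 0.03 + 0.01 * level + 0.001 * rng.normal(size=(200, 10))
factors = principal_components(panel, k=3)
print(factors.shape)  # (200, 3)
```

A market factor in the spirit of the paper would then be built from these three component series; the credit factors would come from an analogous decomposition of forward CDS curves.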
This paper takes a novel approach to estimating bankruptcy costs by inference from market prices of equity and put options using a dynamic structural model of capital structure. This approach avoids the selection bias of looking at firms in or near default and therefore permits theories of ex ante capital structure determination to be tested. We identify significant cross-sectional variation in bankruptcy costs across industries and relate these to specific firm characteristics. We find that asset volatility and growth options have significant positive impacts, while tangibility and size have negative impacts. Our estimated bankruptcy cost variable has a significantly negative impact on leverage ratios. This negative impact is in addition to that of other firm characteristics such as asset intangibility and asset volatility. The results provide strong support for the tradeoff theory of capital structure.
We study to what extent firms spread out their debt maturity dates across time, which we call "granularity of corporate debt." We consider the role of debt granularity using a simple model in which a firm's inability to roll over expiring debt causes inefficiencies, such as costly asset sales or underinvestment. Since multiple small asset sales are less costly than a single large one, firms may diversify debt rollovers across maturity dates. We construct granularity measures using data on corporate bond issuers for the 1991-2011 period and establish a number of novel findings. First, there is substantial variation in granularity in that many firms have either very concentrated or highly dispersed maturity structures. Second, our model's predictions are consistent with observed variation in granularity. Corporate debt maturities are more dispersed for larger and more mature firms, for firms with better investment opportunities, with higher leverage ratios, and with lower levels of current cash flows. We also show that during the recent financial crisis, firms with valuable investment opportunities in particular implemented more dispersed maturity structures. Finally, granularity plays an important role in bond issuance: we document that newly issued corporate bond maturities complement pre-existing bond maturity profiles.
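The abstract does not spell out the paper's exact granularity measures; one natural dispersion measure of this kind is an inverse Herfindahl index over maturity dates, sketched here with hypothetical face-value inputs:

```python
def maturity_dispersion(amounts):
    """Inverse-Herfindahl dispersion of debt amounts across maturity years.

    amounts: face values of debt due in each future year.
    Returns 1 for fully concentrated debt, rising to len(amounts) when
    the debt is spread perfectly evenly (higher = more 'granular').
    """
    total = sum(amounts)
    shares = [a / total for a in amounts]
    return 1.0 / sum(s * s for s in shares)

print(maturity_dispersion([100, 0, 0]))      # 1.0 (one maturity date)
print(maturity_dispersion([25, 25, 25, 25])) # 4.0 (evenly spread)
```

Whether the paper uses this index or a related dispersion statistic is an assumption; the sketch only illustrates the concept of concentrated versus dispersed maturity structures.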
We consider an economy where individuals privately choose effort and trade competitively priced securities that pay off with effort-determined probability. We show that if insurance against a negative shock is sufficiently incomplete, then standard functional form restrictions ensure that individual objective functions are optimized by an effort and insurance combination that is unique and satisfies first- and second-order conditions. Modeling insurance incompleteness in terms of costly production of private insurance services, we characterize the constrained inefficiency arising in general equilibrium from competitive pricing of nonexclusive financial contracts.
We propose a new classification of consumption goods into nondurable goods, durable goods and a new class which we call “memorable” goods. A good is memorable if a consumer can draw current utility from its past consumption experience through memory. We construct a novel consumption-savings model in which a consumer has a well-defined preference ordering over both nondurable goods and memorable goods. Memorable goods consumption differs from nondurable goods consumption in that current memorable goods consumption may also impact future utility through the accumulation process of the stock of memory. In our model, households optimally choose a lumpy profile of memorable goods consumption even in a frictionless world. Using Consumer Expenditure Survey data, we then document levels and volatilities of different groups of consumption goods expenditures, as well as their expenditure patterns, and show that the expenditure patterns on memorable goods indeed differ significantly from those on nondurable and durable goods. Finally, we empirically evaluate our model’s predictions with respect to the welfare cost of consumption fluctuations and conduct an excess-sensitivity test of the consumption response to predictable income changes. We find that (i) the welfare cost of household-level consumption fluctuations may be overstated by 1.7 percentage points (11.9% points as opposed to 13.6% points of permanent consumption) if memorable goods are not appropriately accounted for; (ii) the finding of excess sensitivity of consumption documented in important papers of the literature might be entirely due to the presence of memorable goods.
There is mounting evidence that retail investors make predictable, costly investment mistakes, including underinvestment, naïve diversification, and payment of excessive fund fees. Over the past thirty-five years, however, participant-directed 401(k) plans have largely replaced professionally managed pension plans, requiring unsophisticated retail investors to navigate the financial markets themselves. Policy-makers have struggled with regulatory interventions designed to improve the quality of investment decisions without a clear understanding of the reasons for investor mistakes. Absent such an understanding, it is difficult to design effective regulatory responses. This article offers a first step in understanding the investor decision-making process. We use an internet-based experiment to disentangle possible explanations for inefficient investment decisions. The experiment employs a simplified construct of an employee’s allocation among the options in a retirement plan coupled with technology that enables us to collect data on the specific information that investors choose to view. In addition to collecting general information about the process by which investors choose among mutual fund options, we employ an experimental manipulation to test the effect of an instruction on the importance of mutual fund fees. Pairing this instruction with simplified fee disclosure allows us to distinguish between motivation-limits and cognition-limits as explanations for the widespread findings that investors ignore fees in their investment decisions. Our results offer partial but limited grounds for optimism. On the one hand, within our simplified experimental construct, our subjects allocated more money, on average, to higher-value funds. Furthermore, subjects who received the fees instruction paid closer attention to mutual fund fees and allocated their investments into funds with lower fees. 
On the other hand, the effects of even a blunt fees instruction were limited, and investors were unable to identify and avoid clearly inferior fund options. In addition, our results suggest that excessive, naïve diversification strategies are driving many investment decisions. Although our findings are preliminary, they suggest valuable avenues for future research and important implications for regulation of retail investing.
The substantial variation in the real price of oil since 2003 has renewed interest in the question of how to forecast monthly and quarterly oil prices. There also has been increased interest in the link between financial markets and oil markets, including the question of whether financial market information helps forecast the real price of oil in physical markets. An obvious advantage of financial data in forecasting oil prices is their availability in real time on a daily or weekly basis. We investigate whether mixed-frequency models may be used to take advantage of these rich data sets. We show that, among a range of alternative high-frequency predictors, changes in U.S. crude oil inventories in particular produce substantial and statistically significant real-time improvements in forecast accuracy. The preferred MIDAS model reduces the MSPE by as much as 16 percent compared with the no-change forecast and has statistically significant directional accuracy as high as 82 percent. This MIDAS forecast is also more accurate than a mixed-frequency real-time VAR forecast, but not systematically more accurate than the corresponding forecast based on monthly inventories. We conclude that typically not much is lost by ignoring high-frequency financial data in forecasting the monthly real price of oil.
Model case procedures have some fundamentals in common with collective redress in civil law countries. This is particularly true in the field of investor protection, which is highly regulated and marked by resulting enforcement failures, which led the German legislator to enact the KapMuG and recently to amend it, both of which highlight exemplary elements of model case procedure. A survey of the ongoing activities of the European Union in the area of collective redress and of its repercussions on the member state level therefore forms a suitable basis for the following analysis of the 2012 amendment of the KapMuG. It clearly brings into focus a shift from sector-specific regulation with an emphasis on the cross-border aspect of protecting consumers towards a "coherent approach" strengthening the enforcement of EU law. As a result, regulatory policy and collective redress are two sides of the same coin today. With respect to the KapMuG such a development brings about some tension between its aim to aggregate small individual claims as efficiently as possible and the dominant role of individual procedural rights in German civil procedure. This conflict can be illustrated by some specific rules of the KapMuG: its scope of application, the three-tier structure of a model case procedure, the newly introduced notification of claims and the new opt-out settlement under the amended §§ 17-19.
We propose the realized systemic risk beta as a measure for financial companies’ contribution to systemic risk given network interdependence between firms’ tail risk exposures. Conditional on statistically pre-identified network spillover effects and market as well as balance sheet information, we define the realized systemic risk beta as the total time-varying marginal effect of a firm’s Value-at-Risk (VaR) on the system’s VaR. Statistical inference reveals a multitude of relevant risk spillover channels and determines companies’ systemic importance in the U.S. financial system. Our approach can be used to monitor companies’ systemic importance, allowing for transparent macroprudential supervision.
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
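The Gaussian-copula mapping of residuals can be sketched with a probability integral transform: rank each residual, convert ranks to probabilities, and apply the inverse normal CDF. This empirical-CDF version is a minimal stand-in for whatever fitted-margin transform the paper actually uses:

```python
from statistics import NormalDist

def to_gaussian(residuals):
    """Map positive-valued residuals into the Gaussian domain.

    Uses the empirical probability integral transform: ranks scaled
    by (n + 1) keep probabilities strictly inside (0, 1), then the
    inverse standard normal CDF produces Gaussian-domain values.
    """
    n = len(residuals)
    order = sorted(range(n), key=lambda i: residuals[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    inv = NormalDist().inv_cdf
    return [inv(r / (n + 1)) for r in ranks]

z = to_gaussian([0.5, 2.0, 1.1, 3.7, 0.9])
```

After such a transform, the Gaussian-domain series can be handed to a DCC-GARCH specification for the time-varying (co-)variances, which is the modeling step the abstract describes.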
Does it pay to invest in art? A selection-corrected returns perspective : [draft October 15, 2013]
(2013)
This paper shows the importance of correcting for sample selection when investing in illiquid assets with endogenous trading. Using a large sample of 20,538 paintings that were sold repeatedly at auction between 1972 and 2010, we find that paintings with higher price appreciation are more likely to trade. This strongly biases estimates of returns. The selection-corrected average annual index return is 6.5 percent, down from 10 percent for traditional uncorrected repeat sales regressions, and Sharpe Ratios drop from 0.24 to 0.04. From a pure financial perspective, passive index investing in paintings is not a viable investment strategy once selection bias is accounted for. Our results have important implications for other illiquid asset classes that trade endogenously.
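A minimal sketch of the classic (uncorrected) repeat-sales regression that the paper takes as its baseline: log price relatives are regressed on period dummies (-1 at purchase, +1 at sale) and the cumulated coefficients form the index. The data layout and period count here are illustrative assumptions:

```python
import numpy as np

def repeat_sales_index(sales, n_periods):
    """Classic (selection-uncorrected) repeat-sales price index.

    sales: list of (buy_period, sell_period, buy_price, sell_price).
    Period 0 is the base period with index value 1.
    """
    X = np.zeros((len(sales), n_periods - 1))
    y = np.zeros(len(sales))
    for row, (t0, t1, p0, p1) in enumerate(sales):
        if t0 > 0:
            X[row, t0 - 1] = -1.0          # dummy at purchase date
        X[row, t1 - 1] = 1.0               # dummy at sale date
        y[row] = np.log(p1 / p0)           # log price relative
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.concatenate(([1.0], np.exp(beta)))

# Three hypothetical sale pairs over three periods
sales = [(0, 2, 10.0, 20.0), (0, 1, 10.0, 14.0), (1, 2, 14.0, 20.0)]
index = repeat_sales_index(sales, n_periods=3)
```

The paper's contribution is precisely that this baseline is biased when higher-appreciation objects trade more often; the selection correction itself is not reproduced here.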
The 2011 European short sale ban on financial stocks: a cure or a curse? : [version 31 July 2013]
(2013)
Did the August 2011 European short sale bans on financial stocks accomplish their goals? In order to answer this question, we use stock options’ implied volatility skews to proxy for investors’ risk aversion. We find that on ban announcement day, risk aversion levels rose for all stocks but more so for the banned financial stocks. The banned stocks’ volatility skews remained elevated during the ban but dropped for the other unbanned stocks. We show that it is the imposition of the ban itself that led to the increase in risk aversion rather than other causes such as information flow, options trading volumes, or stock specific factors. Substitution effects were minimal, as banned stocks’ put trading volumes and put-call ratios declined during the ban. We argue that although the ban succeeded in curbing further selling pressure on financial stocks by redirecting trading activity towards index options, this result came at the cost of increased risk aversion and some degree of market failure.
We show that the presence of high frequency trading (HFT) has significantly mitigated the frequency and severity of end-of-day price dislocation, counter to recent concerns expressed in the media. The effect of HFT is more pronounced on days when end-of-day price dislocation is more likely to be the result of market manipulation, namely on option expiry dates and at the end of the month. Moreover, the effect of HFT is more pronounced than the role of trading rules, surveillance, enforcement and legal conditions in curtailing the frequency and severity of end-of-day price dislocation. We show our findings are robust to different proxies for the start of HFT based on trade size, order cancellations, and co-location.
We examine the impact of stock exchange trading rules and surveillance on the frequency and severity of suspected insider trading cases in 22 stock exchanges around the world over the period January 2003 through June 2011. Using new indices for market manipulation, insider trading, and broker-agency conflict based on the specific provisions of the trading rules of each stock exchange, along with surveillance to detect non-compliance with such rules, we show that more detailed exchange trading rules and surveillance over time and across markets significantly reduce the number of cases, but increase the profits per case.
We use responses to survey questions in the 2010 Italian Survey of Household Income and Wealth that ask consumers how much of an unexpected transitory income change they would consume. We find that the marginal propensity to consume (MPC) is 48 percent on average, and that there is substantial heterogeneity in the distribution. We find that households with low cash-on-hand exhibit a much higher MPC than affluent households, which is in agreement with models with precautionary savings where income risk plays an important role. The results have important implications for the evaluation of fiscal policy, and for predicting household responses to tax reforms and redistributive policies. In particular, we find that a debt-financed increase in transfers of 1 percent of national disposable income targeted to the bottom decile of the cash-on-hand distribution would increase aggregate consumption by 0.82 percent. Furthermore, we find that redistributing 1% of national disposable income from the top to the bottom decile of the income distribution would boost aggregate consumption by 0.33%.
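The fiscal-policy arithmetic behind targeted transfers can be sketched as follows, with purely illustrative MPCs and aggregates (not the paper's survey estimates):

```python
def aggregate_consumption_change(transfer, mpc_by_decile, target_decile,
                                 aggregate_consumption):
    """Percent change in aggregate consumption from a transfer targeted
    at one cash-on-hand decile.

    A debt-financed transfer raises the target decile's spending by
    transfer * MPC; dividing by aggregate consumption converts this
    into a percentage effect.
    """
    extra_spending = transfer * mpc_by_decile[target_decile]
    return 100.0 * extra_spending / aggregate_consumption

# Illustrative numbers only: MPC declines from 0.8 in the poorest
# cash-on-hand decile to 0.2 in the richest.
mpcs = [0.8, 0.7, 0.65, 0.6, 0.55, 0.5, 0.4, 0.35, 0.3, 0.2]
effect = aggregate_consumption_change(transfer=10.0, mpc_by_decile=mpcs,
                                      target_decile=0,
                                      aggregate_consumption=800.0)
print(round(effect, 2))  # 1.0
```

The same mechanics explain why, in the paper, targeting the bottom decile (where the MPC is highest) yields a much larger aggregate response than an untargeted transfer would.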
Prior research suggests that those who rely on intuition rather than effortful reasoning when making decisions are less averse to risk and ambiguity. The evidence is largely correlational, however, leaving open the question of the direction of causality. In this paper, we present experimental evidence of causation running from reliance on intuition to risk and ambiguity preferences. We directly manipulate participants’ predilection to rely on intuition and find that enhancing reliance on intuition lowers the probability of being ambiguity averse by 30 percentage points and increases risk tolerance by about 30 percent in the experimental sub-population where we would a priori expect the manipulation to be successful (males).
Investment in financial literacy, social security and portfolio choice : [version May 21, 2013]
(2013)
We present an intertemporal portfolio choice model where individuals invest in financial literacy, save, allocate their wealth between a safe and a risky asset, and receive a pension when they retire. Financial literacy affects the excess return and the cost of stock market participation. Since literacy depreciates over time and has a cost related to current consumption, investors simultaneously choose how much to save, the portfolio allocation, and the optimal investment in literacy. The latter depends on the household's resources and preference parameters, and on how much financial literacy affects the returns on risky assets, the stock market participation cost, and the returns on social security wealth. The model implies one should observe a positive correlation between stock market participation (and risky asset share, conditional on participation) and financial literacy, and a negative correlation between the generosity of the social security system and financial literacy. The model also implies that the stock of financial literacy accumulated early in life is positively correlated with the individual's wealth and portfolio allocations later in life. Using microeconomic cross-country data, we find support for these predictions.
The U.S. Energy Information Administration (EIA) regularly publishes monthly and quarterly forecasts of the price of crude oil for horizons up to two years, which are widely used by practitioners. Traditionally, such out-of-sample forecasts have been largely judgmental, making them difficult to replicate and justify. An alternative is the use of real-time econometric oil price forecasting models. We investigate the merits of constructing combinations of six such models. Forecast combinations have received little attention in the oil price forecasting literature to date. We demonstrate that over the last 20 years suitably constructed real-time forecast combinations would have been systematically more accurate than the no-change forecast at horizons up to 6 quarters or 18 months. MSPE reduction may be as high as 12% and directional accuracy as high as 72%. The gains in accuracy are robust over time. In contrast, the EIA oil price forecasts not only tend to be less accurate than no-change forecasts, but are much less accurate than our preferred forecast combination. Moreover, including EIA forecasts in the forecast combination systematically lowers the accuracy of the combination forecast. We conclude that suitably constructed forecast combinations should replace traditional judgmental forecasts of the price of oil.
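The no-change benchmark and the MSPE-reduction statistic quoted above can be computed as in this sketch; the price and forecast series are hypothetical:

```python
def mspe(forecasts, actuals):
    """Mean squared prediction error."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def mspe_reduction(model_fc, prices):
    """Percent MSPE reduction of a model forecast relative to the
    no-change benchmark, which predicts that next period's price
    equals the current one.
    """
    actual = prices[1:]
    no_change = prices[:-1]
    return 100.0 * (1 - mspe(model_fc, actual) / mspe(no_change, actual))

# Hypothetical oil price series and one-step-ahead model forecasts
prices = [100.0, 104.0, 102.0, 107.0, 105.0]
model_fc = [103.0, 103.0, 106.0, 106.0]
print(round(mspe_reduction(model_fc, prices), 1))  # 91.8
```

A forecast combination would simply average several such model forecast series before scoring them against the same no-change benchmark.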
Are product spreads useful for forecasting? An empirical evaluation of the Verleger hypothesis
(2013)
Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document MSPE reductions as high as 20% and directional accuracy as high as 63% at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons.
U.S. retail food price increases in recent years may seem large in nominal terms, but after adjusting for inflation have been quite modest even after the change in U.S. biofuel policies in 2006. In contrast, increases in the real prices of corn, soybeans, wheat and rice received by U.S. farmers have been more substantial and can be linked in part to increases in the real price of oil. That link, however, appears largely driven by common macroeconomic determinants of the prices of oil and agricultural commodities rather than the pass-through from higher oil prices. We show that there is no evidence that corn ethanol mandates have created a tight link between oil and agricultural markets. Rather, increases in food commodity prices not associated with changes in global real activity appear to reflect a wide range of idiosyncratic shocks ranging from changes in biofuel policies to poor harvests. Increases in agricultural commodity prices in turn contribute little to U.S. retail food price increases, because of the small cost share of agricultural products in food prices. There is no evidence that oil price shocks have caused more than a negligible increase in retail food prices in recent years. Nor is there evidence for the prevailing wisdom that oil-price-driven increases in the cost of food processing, packaging, transportation and distribution are responsible for higher retail food prices. Finally, there is no evidence that oil-market specific events or for that matter U.S. biofuel policies help explain the evolution of the real price of rice, which is perhaps the single most important food commodity for many developing countries.
We investigate the theoretical impact of including two empirically-grounded insights in a dynamic life cycle portfolio choice model. The first is to recognize that, when managing their own financial wealth, investors incur opportunity costs in terms of current and future human capital accumulation, particularly if human capital is acquired via learning by doing. The second is that we incorporate age-varying efficiency patterns in financial decision-making. Both enhancements produce inactivity in portfolio adjustment patterns consistent with empirical evidence. We also analyze individuals’ optimal choice between self-managing their wealth versus delegating the task to a financial advisor. Delegation proves most valuable to the young and the old. Our calibrated model quantifies welfare gains from including investment time and money costs, as well as delegation, in a life cycle setting.
Household decisions are profoundly shaped by a complex set of financial options due to Social Security rules determining retirement, spousal, and survivor benefits, along with benefit adjustments that vary with the age at which these are claimed. These rules influence optimal household asset allocation, insurance, and work decisions, given life cycle demographic shocks such as marriage, divorce, and children. Our model generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We also confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
This paper employs stochastic simulations of the New Area-Wide Model—a microfounded open-economy model developed at the ECB—to investigate the consequences of the zero lower bound on nominal interest rates for the evolution of risks to price stability in the euro area during the recent financial crisis. Using a formal measure of the balance of risks, which is derived from policy-makers’ preferences about inflation outcomes, we first show that downside risks to price stability were considerably greater than upside risks during the first half of 2009, followed by a gradual rebalancing of these risks until mid-2011 and a renewed deterioration thereafter. We find that the lower bound has induced a noticeable downward bias in the risk balance throughout our evaluation period because of the implied amplification of deflation risks. We then illustrate that, with nominal interest rates close to zero, forward guidance in the form of a time-based conditional commitment to keep interest rates low for longer can be successful in mitigating downside risks to price stability. However, we find that the provision of time-based forward guidance may give rise to upside risks over the medium term if extended too far into the future. By contrast, time-based forward guidance complemented with a threshold condition concerning tolerable future inflation can provide insurance against the materialisation of such upside risks.
Empirical evidence suggests that asset returns correlate more strongly in bear markets than conventional correlation estimates imply. We propose a method for determining complete tail correlation matrices based on Value-at-Risk (VaR) estimates. We demonstrate how to obtain more efficient tail-correlation estimates by use of overidentification strategies and how to guarantee positive semidefiniteness, a property required for valid risk aggregation and Markowitz-type portfolio optimization. An empirical application to a 30-asset universe illustrates the practical applicability and relevance of the approach in portfolio management.
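Why positive semidefiniteness matters, and one simple way to restore it, can be illustrated with a spectral projection. This eigenvalue-clipping repair is a generic device, not the overidentification approach proposed in the paper:

```python
import numpy as np

def nearest_psd_correlation(C, eps=0.0):
    """Repair an indefinite 'correlation' matrix by clipping negative
    eigenvalues and rescaling back to a unit diagonal.

    Pairwise tail-correlation estimates need not assemble into a valid
    correlation matrix; an indefinite matrix would imply negative
    portfolio variances in risk aggregation.
    """
    C = (C + C.T) / 2                      # enforce symmetry
    eigval, eigvec = np.linalg.eigh(C)
    eigval = np.clip(eigval, eps, None)    # remove negative eigenvalues
    P = eigvec @ np.diag(eigval) @ eigvec.T
    d = np.sqrt(np.diag(P))
    return P / np.outer(d, d)              # rescale to unit diagonal

# An indefinite matrix that pairwise estimation can produce:
# the pattern (+0.9, +0.9, -0.9) is jointly impossible
C = np.array([[1.0, 0.9, -0.9],
              [0.9, 1.0, 0.9],
              [-0.9, 0.9, 1.0]])
P = nearest_psd_correlation(C)
print(np.linalg.eigvalsh(P).min() >= -1e-10)  # True
```

The repaired matrix is the nearest valid surrogate in a spectral sense and can safely be passed to a Markowitz-type optimizer.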
We analyze the equilibrium in a two-tree (sector) economy with two regimes. The output of each tree is driven by a jump-diffusion process, and a downward jump in one sector of the economy can (but need not) trigger a shift to a regime where the likelihood of future jumps is generally higher. Furthermore, the true regime is unobservable, so that the representative Epstein-Zin investor has to extract the probability of being in a certain regime from the data. These two channels help us to match the stylized facts of countercyclical and excessive return volatilities and correlations between sectors. Moreover, the model reproduces the predictability of stock returns in the data without generating consumption growth predictability. The uncertainty about the state also reduces the slope of the term structure of equity. We document that heterogeneity between the two sectors with respect to shock propagation risk can lead to highly persistent aggregate price-dividend ratios. Finally, the possibility of jumps in one sector triggering higher overall jump probabilities boosts jump risk premia while uncertainty about the regime is the reason for sizeable diffusive risk premia.
This study presents an empirical analysis of capital and liability management in eight cases of bank restructurings and resolutions from eight different European countries. It can be read as a companion piece to an earlier study by the author covering the specific bank restructuring programs of Greece, Spain and Cyprus during 2012/13.
The study portrays for each case the timelines between the initial credit event and the (last) restructuring. It proceeds to discuss the capital and liability management activity before restructuring and the restructuring itself, launches an attempt to calibrate the extent of creditor participation as well as expected loss by government, and engages in a counterfactual discussion of what could have been a least cost restructuring approach.
Four of the eight cases are resolutions, i.e. the original bank is unwound (Anglo Irish Bank, Amagerbanken, Dexia, Laiki), while the four other banks have de-facto or de-jure become nationalized and are awaiting re-privatization after the restructuring (Deutsche Pfandbriefbank/Hypo Real Estate, Bankia, SNS Reaal, Alpha Bank). The case selection follows considerations of their model character for the European bank restructuring and resolution policy discussion while straddling both the U.S. (2007 - 2010) and the European (2010 - ) legs of the financial crisis, which each saw very different policy responses....
We provide an assessment of the determinants of the risk premia paid by non-financial corporations on long-term bonds. By looking at 5,500 issues over the period 2005-2012, we find that in recent years the sovereign debt market turbulence has been a major driver of corporate risk. Compared with the three-year period 2005-07 before the global financial crisis, in the years 2010-12 Italian, Spanish and Portuguese firms paid on average between 70 and 120 basis points of additional premium due to the negative spillovers from the sovereign debt crisis, while German firms got a discount of 40 basis points.
Advances in technology and several regulatory initiatives have led to the emergence of a competitive but fragmented equity trading landscape in the US and Europe. While these changes have brought about several benefits like reduced transaction costs, regulators and market participants have also raised concerns about the potential adverse effects associated with increased execution complexity and the impact on market quality of new types of venues like dark pools. In this article we review the theoretical and empirical literature examining the economic arguments and motivations underlying market fragmentation, as well as the resulting implications for investors' welfare. We start with the literature that views exchanges as natural monopolies due to the presence of network externalities, and then examine studies which challenge this view by focusing on trader heterogeneity and other aspects of the microstructure of equity markets.
This paper examines a practice that is nearly imperceptible to historians because the bulk of evidence for it is to be found in the interstices of the beaten paths of legal and social history and because it mixes economic and religious matters in a strikingly unfamiliar manner. From the thirteenth to the sixteenth century, excommunication for debt offered ordinary people an economical, efficacious enforcement mechanism for small-scale, daily, unwritten credit. At the same time, the practice offered holders of ecclesiastical jurisdiction an important opportunity to round out their incomes, particularly in the difficult fifteenth century. This transitional practice reveals a level of credit below that of the letters of change, annuities secured on real property, or written obligations beloved of economic historians and historians of banking. Studying the practice casts light on the transition from the face-to-face, local economies of the high Middle Ages to the regional economies of the early modern period, on how the Reformation shaped early modern regimes of credit, and on how the disappearance of ecclesiastical civil justice facilitated the emergence of early modern juridically sovereign territories.
German Expressionist cinema is a movement that began in 1919. Expressionist film is marked by distinct visual features and performance styles that rebel against prior realist art movements. More than 20 years before the Expressionist movement, Sigmund Freud published "The Interpretation of Dreams" (1899), a groundbreaking study that links dreams to unconscious impulses. This thesis argues that the unexplained dream-like imagery found in two Expressionist films, The Cabinet of Dr. Caligari (Robert Wiene, 1920) and Dr. Mabuse, the Gambler (Fritz Lang, 1922), can be seen in terms of Freud's model of dreaming.
The paper explains the absence of resultative secondary predication in Russian as arising from a conflict of inferential interpretations. It formalises the framework necessary to express this proposal in terms of abductive reasoning with Poole systems in Gricean contexts. The conflict is shown to arise for default rules regulating alternative realisation of verb-internally specified consequent states. The paper thus indicates that typological variation may be due not only to different parameter values but also to general inferential properties of the syntax-semantics mapping. The proposed theory also contradicts some widespread proposals that attribute the absence of resultative secondary predication to the absence of some particular language feature.
Approaching the grammar of adjuncts: proceedings of the Oslo conference, September 22-25, 1999
(2000)
Issues on topics
(2000)
The present volume contains papers that bear mainly on issues concerning the topic concept. This concept is of course very broad and diverse, and different views are expressed in this volume. Some authors concentrate on the status of topics and non-topics in so-called topic-prominent languages (e.g. Chinese), others focus on the syntactic behavior of topical constituents in specific European languages (German, Greek, Romance languages). The last contribution tries to bring together the concept of discourse topic (a non-syntactic notion) and the concept of sentence topic, i.e. the type of topic that all the preceding papers are concerned with.
Nominalizations
(2002)
The present volume is a selection of the papers presented in workshops at ZAS in Berlin in November 2000 and at the University of Tübingen in April 2001, devoted to synchronic and diachronic aspects of various types of nominalizations. Nominalization has a long history in linguistic research. Its nature can only be captured by taking into account the interface between morphology, syntax and semantics on the one hand, and the interface between semantics and conceptual structure on the other.
This volume represents a collection of papers that present some of the results of two projects on control: on the one hand, the project Typology of complement control directed by Barbara Stiebels and funded by the German Research Foundation (DFG STI 151/2-2), and on the other hand the project Variation in control structures directed by Maria Polinsky and Eric Potsdam and funded by the US National Science Foundation (NSF grants BCS-0131946, BCS-0131993; website http://accent.ucsd.edu/). Whereas the first project pursued a lexical approach to control with a semantic definition of obligatory control, the second project has mainly pursued a syntactic approach to control – with special emphasis on less studied control structures (such as adjunct control, backward control, finite control, etc.). Both projects have aimed at extending the research on complement control to structures that differ from the prototypical cases of infinitival complements with empty subjects found in many Indo-European languages; their common interest was to bring in new empirical data, both primary and experimental.
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories as to how this process works. We give a short overview of the most influential theories, then focus on the most widely received one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Drawing on data from both experimental and aphasiological studies, we develop a model building on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production mechanisms.
Volume II of II
Volume I of II
The papers in this volume were presented at the eleventh meeting of the Austronesian Formal Linguistics Association (AFLA 11), held from April 23 to 25 at the Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany. The conference was organized by Hans-Martin Gärtner, Joachim Sabel, and myself, as part of the research project Clause Structure and Adjuncts in Austronesian Languages. We gratefully acknowledge the financial support by the German Research Foundation (Deutsche Forschungsgemeinschaft). We would like to thank Wayan Arka, Abigail Cohn, Laura Downing, Silke Hamann, S J Hannahs, Ray Harlow, Nikolaus Himmelmann, Yuchua E. Hsiao, Lillian Huang, Ed Keenan, Glyne Piggott, Charles Randriamasimanana, Jozsef Szakos, Barbara Stiebels, Jane Tang, Lisa Travis, Naomi Tsukido, Sam Wang, Elizabeth Zeitoun, Kie Ross Zuraw, and Marzena Zygis for reviewing the abstracts. We are thankful to Mechthild Bernhard, Jenny Ehrhardt, Fabienne Fritzsche, Theódóra Torfadóttir and Tue Trinh for their help during the conference. I would like to thank Theódóra for providing essential editorial assistance.
Table of Contents:
T. A. Hall (Indiana University): English syllabification as the interaction of markedness constraints
Antony D. Green: Opacity in Tiberian Hebrew: Morphology, not phonology
Sabine Zerbian (ZAS Berlin): Phonological Phrases in Xhosa (Southern Bantu)
Laura J. Downing (ZAS Berlin): What African Languages Tell Us About Accent Typology
Marzena Zygis (ZAS Berlin): (Un)markedness of trills: the case of Slavic r-palatalisation
Laura J. Downing (ZAS Berlin), Al Mtenje (University of Malawi), Bernd Pompino-Marschall (Humboldt-Universität Berlin): Prosody and Information Structure in Chichewa
T. A. Hall (Indiana University), Silke Hamann (ZAS Berlin), Marzena Zygis (ZAS Berlin): The phonetics of stop assibilation
Christian Geng (ZAS Berlin), Christine Mooshammer (Universität Kiel): The Hungarian palatal stop: phonological considerations and phonetic data
This volume presents a collection of papers touching on various issues concerning the syntax and semantics of predicative constructions.
A hot topic in the study of predicative copula constructions, with direct implications for the treatment of be (how many be's do we need?), and wider implications for the theories of predication, event-based semantics and aspect, is the nature and source of the situation argument. Closer examination of copula-less predications is becoming increasingly relevant to all these issues, as is clearly illustrated by the present collection.
The paper makes two contributions to the semantic typology of secondary predicates. It provides an explanation of the fact that Russian has no resultative secondary predicates, relating this explanation to the interpretation of secondary predicates in English. And it relates depictive secondary predicates in Russian, which usually occur in the instrumental case, to other uses of the instrumental case in Russian, establishing here, too, a difference from English concerning the scope of the secondary predication phenomenon.
Questions and focus
(2003)
This 18th issue of ZAS-Papers in Linguistics consists of papers on the development of verb acquisition in 9 languages from the very early stages up to the onset of paradigm construction. Each of the 10 papers deals with first-language developmental processes in one or two children studied via longitudinal data. The languages involved are French, Spanish, Russian, Croatian, Lithuanian, Finnish, English and German. For German two different varieties are examined, one from Berlin and one from Vienna. All papers are based on presentations at the workshop 'Early verbs: On the way to mini-paradigms' held at the ZAS (Berlin) on 30/31 September 2000. This workshop brought to a close the first phase of cooperation between two projects on language acquisition which started in October 1999:
a) the project on "Syntaktische Konsequenzen des Morphologieerwerbs" at the ZAS (Berlin) headed by Juergen Weissenborn and Ewald Lang, and financially supported by the Deutsche Forschungsgemeinschaft, and
b) the international "Crosslinguistic Project on Pre- and Protomorphology in Language Acquisition" coordinated by Wolfgang U. Dressler on behalf of the Austrian Academy of Sciences.