Namibia is known to be the most arid country south of the Sahara. Average annual rainfall is not only relatively low in most parts of the country, it is also highly variable. Only 8 per cent of the country receives enough rain during a normal rainy season to practice rainfed cultivation. At the same time between 60 per cent and 70 per cent of the population depend on subsistence agro-pastoralism in non-freehold or communal areas. Against the background of rising unemployment, the livelihoods of the majority of these people are likely to depend on natural resources in the foreseeable future.
Natural resources generally are under considerable strain. As the rural population increases, so does the demand for natural resources, land and water specifically. Dependency on subsistence farming, itself a result of large-scale rural poverty, exacerbates the problem. Large parts of the country are stocked injudiciously, resulting in overgrazing, and water is frequently over-abstracted, leading to declining water tables (MET 2005: 2).
Unequal access to both land and water has prompted government to introduce reforms in these sectors. These reforms were guided by the desire to manage resources more sustainably while providing more equal access to them. In terms of NDP 2, sustainability means using natural resources in such a way as not to ‘compromise the ability of future generations to make use of these resources’ (NDP 2: 595).
Immediately after Independence government started reform processes in the land and water sectors. However, these reforms have happened at different paces and largely independently of each other. Increasingly, policy makers and development practitioners realised that land and water management needed to be integrated, as decisions about land management and land use options had a direct impact on water resources. Conversely, the availability of water sets the parameters for what is possible in terms of agricultural production and other land uses. The north-central regions face a particular challenge in this regard, as the region carries more livestock than it can sustain in the long run. At the same time, close to half the households do not own any livestock. Access to livestock would improve these households’ ability to cultivate their land more efficiently in order to feed themselves and thus reduce poverty levels.
But livestock are a major consumer of water. In 2000 livestock consumed more water than the domestic sector: 77 Mm³/a against 67 Mm³/a (Urban et al. 2003, Annex 7: 2). This situation prompted a Project Progress Report on the Namibia Water Resources Management Review in 2003 to conclude that, ‘[g]iven the extreme water scarcity in most parts of the country, land and water issues are closely linked. It therefore seems indispensable to mutually adjust land- and water-sector reform processes’ (ibid.: 20).
This paper will briefly look at four institutions that are central to land and water management, with a view to assessing the extent to which they interact. These are Communal Land Boards, Water Point Committees, Traditional Authorities and Regional Councils. A discussion of relevant policy documents and legislative instruments will investigate whether the existing policy framework provides for an integrated approach or not. Before doing this, it appears sensible to briefly situate these four institutions in the wider maze of institutions operating at regional and sub-regional level. All these institutions – important as they are in the quest to improve participation at the regional and sub-regional level – are competing for the time and input of small-scale farmers.
Although intellectual property law is a distinctively Western, modern, and relatively young body of law, it has spread all over the world, now encompassing all but a very few outsiders such as Afghanistan, Somalia, and Vanuatu. This article presents three legal transfers that contributed to this development: first, from real property in land and movables to intellectual property in late 18th-century Western Europe; second, from Western Europe, in particular the United Kingdom and France, to the rest of the world during the colonial era of the 19th and early 20th centuries; third, from the protection of new knowledge to the protection of traditional knowledge held by indigenous communities in developing countries, beginning on 5 August 1963. This story illuminates how legal transfers in a broad sense – including, but not limited to, legal transplants – drive the evolution of law.
In this paper, we provide some reflections on the development of monetary theory and monetary policy over the last 150 years. Rather than presenting an encompassing overview, which would be overambitious, we simply concentrate on a few selected aspects that we view as milestones in the development of this subject. We also try to illustrate some of the interactions with the political and financial system, academic discussion and the views and actions of central banks.
In this paper we investigate the comparative properties of empirically estimated monetary models of the U.S. economy using a new database of models designed for such investigations. We focus on three representative models due to Christiano, Eichenbaum, and Evans (2005), Smets and Wouters (2007), and Taylor (1993a). Although these models differ in terms of structure, estimation method, sample period, and data vintage, we find surprisingly similar economic impacts of unanticipated changes in the federal funds rate. However, optimized monetary policy rules differ across models and lack robustness. Model averaging offers an effective strategy for improving the robustness of policy rules.
This paper investigates how an office-motivated incumbent can use transparency enhancement in public spending to signal his budgetary management ability and win re-election. We show that when the incumbent faces a popular challenger, transparency policy can be an effective signaling device. A more popular challenger can reduce the probability that transparency is enhanced, while voters can be better off because signaling becomes more informative. It is also shown that a higher level of public interest in fiscal issues can increase the probability of transparency enhancement, while voters can be worse off because signaling becomes less informative.
This paper constructs a dynamic model of health insurance to evaluate the short- and long-run effects of policies that prevent firms from conditioning wages on the health conditions of their workers, and that prevent health insurance companies from charging individuals with adverse health conditions higher insurance premia. Our study is motivated by recent US legislation that has tightened regulations on wage discrimination against workers with poorer health status (the Americans with Disabilities Act of 1990, ADA, and the ADA Amendments Act of 2008, ADAAA) and that will prohibit health insurance companies from charging different premiums for workers of different health status starting in 2014 (the Patient Protection and Affordable Care Act, PPACA). In the model, a trade-off arises between the static gains from better insurance against poor health induced by these policies and their adverse dynamic incentive effects on household efforts to lead a healthy life. Using household panel data from the PSID, we estimate and calibrate the model and then use it to evaluate the static and dynamic consequences of wage-nondiscrimination and no-prior-conditions laws for the evolution of the cross-sectional health and consumption distribution of a cohort of households, as well as the ex-ante lifetime utility of a typical member of this cohort. In our quantitative analysis we find that although a combination of both policies is effective in providing full consumption insurance period by period, it is suboptimal to introduce both policies jointly, since such a policy innovation induces a more rapid deterioration of the cohort's health distribution over time. This is because the combination of both laws severely undermines the incentives to lead healthier lives.
The resulting negative effects on health outcomes in society more than offset the static gains from better consumption insurance so that expected discounted lifetime utility is lower under both policies, relative to only implementing wage nondiscrimination legislation.
This paper investigates the effect of anticipated/experienced regret and pride on individual investors’ decisions to hold or sell a winning or losing investment, in the form of the disposition effect. As expected, the results suggest that in the loss domain low anticipated regret predicts a greater probability of selling a losing investment, while in the gain domain high anticipated pride predicts a greater probability of selling a winning investment. Effects of high experienced regret/pride on the selling probability are found as well. An unexpected finding is that regret (pride) seems to be relevant not only for the loss (gain) domain but also for the gain (loss) domain. In addition, this paper presents evidence of interconnectedness between anticipated and experienced emotions. The authors discuss the implications of these findings and possible avenues for further research.
After nearly two decades of US leadership during the 1980s and 1990s, are Europe’s venture capital (VC) markets in the 2000s finally catching up in the provision of financing and successful exits, or is the performance gap as wide as ever? Are we amid an overall VC performance slump with no encouraging news? We attempt to answer these questions by tracking over 40,000 VC-backed firms from six industries in 13 European countries and the US between 1985 and 2009, determining the type of exit – if any – each particular firm’s investors chose for the venture.
Venture capital (VC) investment has long been conceptualized as a local business, in which the VC’s ability to source, syndicate, fund, monitor, and add value to portfolio firms critically depends on access to knowledge obtained through ties to the local (i.e., geographically proximate) network. Consistent with the view that local networks matter, existing research confirms that local and geographically distant portfolio firms are sourced, syndicated, funded, and monitored differently. Curiously, emerging research on VC investment practice within the United States finds that distant investments, as measured by “exits” (either initial public offering or merger & acquisition), outperform local investments. These findings raise important questions about the assumed benefits of local network membership and proximity. To probe these questions more deeply, we contrast the deal structure of cross-border VC investment with domestic VC investment, and contrast the deal structure of cross-border VC investments that include a local partner with those that do not. Evidence from 139,892 rounds of venture capital financing in the period 1980-2009 suggests that cross-border investment practice, in terms of deal sourcing, syndication, and performance, indeed changes with proximity, but that monitoring practices do not. Further, we find that the inclusion of a local partner in the investment syndicate yields surprisingly few benefits. This evidence, we argue, raises important questions about VC investment practice as well as the ability of firms to capture and lever the presumed benefits of network membership.
From its early post-war catch-up phase onward, Germany’s formidable export engine has been its consistent driver of growth. But Germany has almost equally consistently run current account surpluses. Exports have powered the dynamic phases and helped the economy emerge from stagnation. Volatile external demand, in turn, has elevated German GDP growth volatility by advanced-country standards, keeping domestic consumption growth at surprisingly low levels. As a consequence, despite the size of its economy and important labor market reforms, Germany’s ability to act as a global locomotive has been limited. With increasing competition in its traditional areas of manufacturing, a more domestically driven growth dynamic, especially in the production and delivery of services, will be good for Germany and for the global economy. Absent such an effort, German growth will remain constrained, and Germany will play only a modest role in spurring growth elsewhere.
In this paper we develop empirical measures for the strength of spillover effects. Modifying and extending the framework of Diebold and Yilmaz (2011), we quantify spillovers between sovereign credit markets and banks in the euro area. Spillovers are estimated recursively from a vector autoregressive model of daily CDS spread changes with exogenous common factors. We account for interdependencies between sovereign and bank CDS spreads and derive generalised impulse response functions. Specifically, we assess the systemic effect of an unexpected shock to the creditworthiness of a particular sovereign or country-specific bank index on other sovereign or bank CDSs between October 2009 and July 2012. Channels of transmission from or to sovereigns and banks are aggregated into a contagion index (CI). This index is disentangled into four components, the average potential spillover: i) amongst sovereigns, ii) amongst banks, iii) from sovereigns to banks, and iv) vice versa. We highlight the impact of policy-related events on the different components of the contagion index. The systemic contribution of each sovereign or banking group is quantified as its net spillover weight in the total net-spillover measure. Finally, the captured time-varying interdependence between banks and sovereigns underscores the evolution of their strong nexus.
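The spillover measures described above rest on a forecast-error variance decomposition of a VAR. The following is a minimal sketch of that calculation for a VAR(1) using the generalized (Pesaran–Shin) decomposition; the function names are hypothetical, and the paper's actual setup (recursive estimation, exogenous common factors, net-spillover weights) is not reproduced.

```python
import numpy as np

def var1_fit(x):
    """OLS fit of a VAR(1): x_t = A x_{t-1} + u_t (data demeaned first)."""
    x = x - x.mean(axis=0)
    y, z = x[1:], x[:-1]
    A = np.linalg.lstsq(z, y, rcond=None)[0].T      # (n, n) coefficient matrix
    u = y - z @ A.T
    sigma = u.T @ u / (len(y) - 1)                  # residual covariance
    return A, sigma

def spillover_table(A, sigma, horizon=10):
    """Generalized FEVD shares; each row is normalized to sum to one."""
    n = A.shape[0]
    num = np.zeros((n, n))
    den = np.zeros(n)
    for h in range(horizon):
        Ah = np.linalg.matrix_power(A, h)           # MA coefficient of a VAR(1)
        p = Ah @ sigma
        num += (p ** 2) / np.diag(sigma)            # (e_i' A_h S e_j)^2 / s_jj
        den += np.diag(p @ Ah.T)                    # e_i' A_h S A_h' e_i
    theta = num / den[:, None]
    return theta / theta.sum(axis=1, keepdims=True)

def spillover_index(table):
    """Total spillover: share of forecast-error variance from other variables, in percent."""
    n = table.shape[0]
    return 100.0 * (table.sum() - np.trace(table)) / n
```

Off-diagonal row sums of the table give "from others" spillovers and column sums give "to others"; their pairwise blocks correspond to the four CI components (sovereign-to-sovereign, bank-to-bank, and the two cross directions).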
We use a novel disaggregate sectoral euro area data set with a regional breakdown to investigate price changes and suggest a new method to extract factors from overlapping data blocks. This allows us to separately estimate aggregate, sectoral, country-specific and regional components of price changes. We thereby provide an improved estimate of the sectoral factor relative to previous literature, which decomposes price changes into an aggregate and an idiosyncratic component only and interprets the latter as sectoral. We find that the sectoral component explains much less of the variation in sectoral regional inflation rates and exhibits much less volatility than previous findings for the US indicate. We further contribute to the literature on price setting by providing evidence that country- and region-specific factors play an important role in addition to sector-specific factors, emphasising the heterogeneity of inflation dynamics along different dimensions. We also conclude that sectoral price changes have a “geographical” dimension, which leads to new insights regarding the properties of sectoral price changes.
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact, focusing primarily on a dynamic stochastic general equilibrium model with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the model simulations, GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run. We explore the role of the mix of expenditure cuts and tax reductions as well as gradualism in achieving this policy outcome. Finally, we conduct sensitivity studies regarding the type of model used and its parameterization.
We examine both the degree and the structural stability of inflation persistence at different quantiles of the conditional inflation distribution. Previous research focused exclusively on persistence at the conditional mean of the inflation rate. Economic theory, however, provides various reasons – for example, downward wage rigidities or menu costs – to expect higher inflation persistence at the upper than at the lower tail of the conditional inflation distribution.
Based on post-war US data we indeed find slower mean reversion in response to positive than to negative shocks. We find robust evidence for a structural break in persistence at all quantiles of the inflation process in the early 1980s. Inflation persistence has decreased and become more homogeneous across quantiles. Persistence at the conditional mean became more informative about the degree of persistence across the entire conditional inflation distribution. While prior to the 1980s inflation was not mean reverting in response to large positive shocks, our evidence strongly suggests that since the end of the Volcker disinflation the unit root can be rejected at every quantile including the upper tail of the conditional inflation distribution.
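Persistence at a given quantile can be illustrated with a quantile autoregression, y_t = a + b·y_{t−1} at quantile tau, where b is the quantile-specific persistence. The toy sketch below fits it by brute force, using the fact that an optimal linear quantile-regression line interpolates two observations; it is an illustration under that assumption, not the authors' estimator.

```python
import numpy as np
from itertools import combinations

def quantile_ar1(y, tau):
    """Fit y_t = a + b*y_{t-1} at quantile tau by minimizing the pinball loss.

    Brute force over all candidate lines through two observations; fine for
    short series, not for production use.
    """
    x, z = y[:-1], y[1:]
    best, best_ab = np.inf, (0.0, 0.0)
    for i, j in combinations(range(len(x)), 2):
        if x[i] == x[j]:
            continue
        b = (z[j] - z[i]) / (x[j] - x[i])
        a = z[i] - b * x[i]
        r = z - (a + b * x)
        loss = np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))  # pinball loss
        if loss < best:
            best, best_ab = loss, (a, b)
    return best_ab  # (intercept, persistence at quantile tau)
```

At tau = 0.5 this reduces to median (LAD) autoregression; comparing the fitted b at low and high tau is the kind of asymmetric-persistence comparison the paper performs.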
This paper investigates the accuracy of point and density forecasts of four DSGE models for inflation, output growth and the federal funds rate. Model parameters are estimated and forecasts are derived successively from historical U.S. data vintages synchronized with the Fed’s Greenbook projections. Point forecasts of some models are of similar accuracy as the forecasts of nonstructural large dataset methods. Despite their common underlying New Keynesian modeling philosophy, forecasts of different DSGE models turn out to be quite distinct. Weighted forecasts are more precise than forecasts from individual models. The accuracy of a simple average of DSGE model forecasts is comparable to Greenbook projections for medium term horizons. Comparing density forecasts of DSGE models with the actual distribution of observations shows that the models overestimate uncertainty around point forecasts.
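The pooling step behind the "simple average of DSGE model forecasts" can be sketched as follows; the forecasts here are simulated placeholders, not the paper's DSGE or Greenbook data, and the function names are hypothetical.

```python
import numpy as np

def rmse(forecast, actual):
    """Root mean squared forecast error."""
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

def combine(forecasts, weights=None):
    """Pool point forecasts across models; equal weights by default."""
    f = np.asarray(forecasts)                     # shape: (n_models, n_periods)
    if weights is None:
        weights = np.full(f.shape[0], 1.0 / f.shape[0])
    return weights @ f                            # weighted average per period
```

By the triangle inequality in L², the RMSE of the equal-weight average can never exceed the average of the individual RMSEs, which is one reason simple pooling is hard to beat.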
The withdrawal of foreign capital from emerging countries at the height of the recent financial crisis and its quick return sparked a debate about the impact of capital flow surges on asset markets. This paper addresses the response of property prices to an inflow of foreign capital. For that purpose we estimate a panel VAR on a set of Asian emerging market economies, for which the waves of inflows were particularly pronounced, and identify capital inflow shocks based on sign restrictions. Our results suggest that capital inflow shocks have a significant effect on the appreciation of house prices and equity prices. Capital inflow shocks account for roughly twice the portion of overall house price changes that they explain in OECD countries. We also address cross-country differences in the house price responses to shocks, which are most likely due to differences in the monetary policy response to capital inflows.
The complexity resulting from intertwined uncertainties about model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as a laboratory, this paper explores the design of robust policy guides that aim to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model database to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
Motivated by the U.S. events of the 2000s, we address whether a too low for too long interest rate policy may generate a boom-bust cycle. We simulate anticipated and unanticipated monetary policies in state-of-the-art DSGE models and in a model with bond financing via a shadow banking system, in which the bond spread is calibrated for normal and optimistic times. Our results suggest that the U.S. boom-bust was caused by the combination of (i) too low for too long interest rates, (ii) excessive optimism and (iii) a failure of agents to anticipate the extent of the abnormally favorable conditions.
In this paper, I introduce lumpy micro-level capital adjustment into a sticky information general equilibrium model. Lumpy adjustment arises because of inattentiveness in capital investment decisions instead of the more common assumption of non-convex adjustment costs. The model features inattentiveness as the only source of stickiness. I find that the model with lumpy investment yields business cycle dynamics which differ substantially from those of an otherwise identical model with frictionless investment and are much more consistent with the empirical evidence. These results therefore strengthen the case in favour of the relevance of microeconomic investment lumpiness for the business cycle.
This paper outlines relatively easy-to-implement reforms for the supervision of transnational banking groups in the EU, supervision that should be based not primarily on legal form but on the actual risk structures of the pertinent financial institutions. The proposal also aims at paying close attention to the economics of public administration and international relations in allocating competences among national and supranational supervisory bodies. Before detailing its own proposal, the paper looks into the relationship between the sovereign debt and banking crises that drive regulatory reactions to the financial turmoil in the euro area. These initiatives inter alia affirm effective prudential supervision as a pivotal element of crisis prevention. In order to arrive at a more informed idea of which determinants, apart from a perceived appetite for regulatory arbitrage, drive banks’ organizational choices, the paper scrutinizes the merits of either a branch or a subsidiary structure for the cross-border business of financial institutions. In doing so, it also considers the policy-makers’ perspective. The analysis shows that no one-size-fits-all organizational structure is available and concludes that banks’ choices should generally not be second-guessed, particularly because they are subject to (some) market discipline. The analysis proceeds by describing and evaluating how competences in prudential supervision are currently allocated among national and supranational supervisory authorities. To assess the findings, the appraisal adopts insights from the economics of public administration and international relations. It argues that the supervisory architecture has to be better aligned with bureaucrats’ incentives and that inefficient requirements to cooperate and share information should be reduced. Contrary to a widespread perception, shifting responsibility to a supranational authority cannot solve all the problems identified. Resting on these foundations, the last part of the paper sketches an alternative solution that dwells on far-reaching mutual recognition of national supervisory regimes and allocates competences in line with supervisors’ incentives and the risk inherent in cross-border banking groups.
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case expressions and Haskell's seq operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way to more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting the infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus having a less complex definition of reduction than LR.
Power and law in enlightened absolutism : Carl Gottlieb Svarez' theoretical and practical approach
(2012)
The term Enlightened Absolutism reflects a certain tension between its two components. This tension is in a way a continuation of the dichotomy between power on the one hand and law on the other. The present paper shall provide an analysis of these two concepts from the perspective of Carl Gottlieb Svarez, who, in his position as a high-ranking Prussian civil servant and legal reformist, had unparalleled influence on the legislative history of the Prussian states towards the end of the 18th century. Working side by side with Johann Heinrich Casimir von Carmer, who held the post of Prussian minister of justice from 1779 to 1798, Svarez was able to make use of his talent for reforming and legislating. From 1780 to 1794 he was primarily responsible for the elaboration of the codification of Prussian private law, the “Allgemeines Landrecht für die Preußischen Staaten” of 1794. In the present paper, Svarez’ approach to the relation between law and power shall be analysed on two different levels. Firstly, on a theoretical level, the reformist’s thoughts and reflections as laid down in his numerous works, papers and memorandums shall be discussed. Secondly, on a practical level, the question of the extent to which he implemented his ideas in Prussian legal reality shall be explored.
Rare Earth Elements (REE) have become the new strategic economic weapon for the modern age. Used in the manufacturing of products ranging from mobile phones to jet fighter engines, REEs have become the “oil” of today in terms of economic and strategic importance. Currently, 95% of the REEs mined globally come from China, giving China a monopoly on the industry. Deng Xiaoping foresaw the importance of REEs in 1992 when he commented: “as there is oil in the Middle East, there is rare earth in China.” Recently, China temporarily stopped exports of REEs to Japan, the EU and the US as an unofficial response to varying political and economic issues. This stoppage raised concerns about the dependability of China and REE exports. Using the theory of neo-mercantilism, this paper analyzes China’s actions in the REE market and their subsequent economic and political implications. It concludes with a look at how countries are trying to position themselves away from dependency on China.
Japan's quest for energy security : risks and opportunities in a changing geopolitical landscape
(2011)
For much of the 20th century, economic growth was fueled by cheap oil-based energy supply. Due to increasing resource constraints, however, the political and strategic importance of oil has become a significant part of energy and foreign policy making in East and Southeast Asian countries. In Japan, the rise of China’s economic and military power is a source of considerable concern. To enhance energy security, the Japanese government has recently amended its energy regulatory framework, which reveals high political awareness of risks resulting from the looming key resources shortage and competition over access. An essential understanding that national energy security is a politically and economically sensitive area with a clear international dimension affecting everyday life is critical in shaping a nation’s energy future.
It has often been asked whether today's Japan will be able to move into new and promising industries, or whether it is locked into an innovation system with an inherent inability to give birth to new industries. One argument reasons that the thick institutional complementarities among labour, innovation, and finance in its enterprises and the public sector favour industrial development in sectors of intermediate uncertainty, while making it difficult to move into areas of major uncertainty. In this paper, we present the case of the silver industry or, somewhat more prosaically, the 60+ or even 50+ industry, for which most would agree that Japan has indeed become a lead market and lead producer on the global market. For an institutional economist, the case of the silver industry is particularly interesting, because Japan's success is based on the cooperation of existing actors, the enterprise and public sectors in particular, which helped overcome the information uncertainties and asymmetries of the new market by relying on several established mechanisms developed well before. In that sense, Japan's silver industry presents a case of what we propose to call successful institutional path activation with the effect of innovative market creation, instead of the problematic lock-in effects that are usually associated with the term path dependence.
The emergence of Capitalism is said to always lead to extreme changes in the structure of a society. This view implies that Capitalism is a universal and unique concept that needs an explicit institutional framework and does not discriminate between, say, a German and a US Capitalism. In contrast, this work argues that the ‘ideal type’ of Capitalism in a Weberian sense does not exist. It will be demonstrated that Capitalism is not a concept that shapes a uniform institutional framework within every society, constructing a specific economic system. Rather, depending on the institutional environment - family structures in particular - different forms of Capitalism arise. To exemplify this, the networking (Guanxi) Capitalism of contemporary China will be presented, where social institutions known from the past were reinforced for successful development. It will be argued that especially the change, destruction and creation of family and kinship structures are key factors that determined the further development and success of the Chinese economy and the type of Capitalism arising there. In contrast to Weber, it will be argued that Capitalism does not necessarily lead to a process of destruction of traditional structures and to large-scale enterprises under rational, bureaucratic management, leaving no space for socio-cultural structures like family businesses. Flexible global production increasingly favours small-business production over larger corporations. Small Chinese family firms are able to respond to rapidly changing market conditions and motivate maximum effort for modest pay. The structure of the Chinese family has proved very persistent over time and able to accommodate diverse economic and political environments while maintaining its core identity. This implies that Chinese Capitalism may be an entirely new economic system, based on Guanxi and the family.
In contrast to the US and recently Europe, Japan appears to be unsuccessful in establishing new industries. An oft-cited example is Japan's practical invisibility in the global business software sector. The literature has ascribed Japan's weakness – or conversely, America's strength – to the specific institutional settings and competences of actors within the respective national innovation system. It has additionally been argued that unlike the American innovation system, with its proven ability to give birth to new industries, the inherent path dependency of the Japanese innovation system makes innovation and the establishment of new industries quite difficult. However, there are two notable weaknesses underlying current propositions postulating that only certain innovation systems enable the creation of new industries: first, they mistakenly confound context-specific observations with general empirical ones. And second, they grossly underestimate – or altogether fail to examine – the dynamics within innovation systems. This paper will show that it is precisely the dynamics within innovation systems – dynamics founded on the concept of path plasticity – which have enabled Japan to charge forward as a global leader in two highly innovative fields: the game software sector and the biotechnology industry.
European scholars, colonial administrators, missionaries, bibliophiles and others were the main collectors of Malay books in the nineteenth century, in both manuscript and printed form. Among these persons were many well-known names in the field of Malay literature and culture, such as Raffles, Marsden, Crawfurd, Klinkert, van der Tuuk, von Dewall, Roorda, Favre, Maxwell, Overbeck, Wilkinson and Skeat, to name only a few. Their collections were often handed over to public libraries, where they form an important part of the relevant Oriental or Southeast Asian manuscript collections.
Therefore, knowledge of the intellectual culture of the Malay Peninsula and the Malay World in general has depended very much on these manuscripts and printed books, often collected by chance or in a rather unsystematic way. The collections strongly reflect the interests of their administrative or philologist collectors: court histories, genealogies of aristocratic lineages, law collections (adat-istiadat as well as undang-undang) and prose belles-lettres form the vast bulk of these collections, while Islamic religious texts and poetic forms popular in the 19th century (especially syair) are fairly underrepresented. Malay manuscripts and books located in religious institutions like mosques or pondok/pesantren schools have not been searched for; to this day there are virtually no systematic studies of these collections. Since, according to some statistics, religious texts account for about 20% of all existing Malay manuscripts, their neglect by European scholars leads to a distorted view of the literary culture in the Malay language.
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact, focusing primarily on a dynamic stochastic general equilibrium model with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the model simulations, GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run. We explore the role of the mix of expenditure cuts and tax reductions as well as gradualism in achieving this policy outcome. Finally, we conduct sensitivity studies regarding the type of model used and its parameterization.
The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as a laboratory, this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model data base to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
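A difference rule of the kind described in the abstract can be sketched as follows. Note that this is only a minimal illustration of the rule's structure: the coefficient values (0.5 on each term) and the 2 per cent inflation target are assumptions for the example, not the estimates reported in the paper.

```python
def difference_rule(i_prev, inflation, output_growth,
                    pi_target=2.0, a=0.5, b=0.5):
    """First-difference interest rate rule: the policy rate is adjusted
    from its previous level in response to current inflation (relative
    to target) and current output growth. Because the rule reacts to
    output growth rather than an output-gap estimate, it is insulated
    from output gap mismeasurement."""
    return i_prev + a * (inflation - pi_target) + b * output_growth

# With inflation at target and zero output growth, the rate stays put.
rate = difference_rule(i_prev=1.0, inflation=2.0, output_growth=0.0)
```

The key design choice is that only observable current outcomes (inflation, output growth) enter the rule, which is what makes it robust across models with different unobserved state variables.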
We argue that the U.S. personal saving rate’s long stability (1960s–1980s), subsequent steady decline (1980s–2007), and recent substantial rise (2008–2011) can be interpreted using a parsimonious ‘buffer stock’ model of consumption in the presence of labor income uncertainty and credit constraints. Saving in the model is affected by the gap between ‘target’ and actual wealth, with the target determined by credit conditions and uncertainty. An estimated structural version of the model suggests that increased credit availability accounts for most of the long-term saving decline, while fluctuations in wealth and uncertainty capture the bulk of the business-cycle variation.
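The buffer-stock mechanism in the abstract above can be sketched in a few lines: saving responds to the gap between target and actual wealth, and the target itself moves with uncertainty and credit conditions. The function names, coefficients, and linear functional forms below are hypothetical illustrations, not the paper's estimated structural model.

```python
def target_wealth(uncertainty, credit_availability, k_u=2.0, k_c=1.0):
    """Target wealth-to-income ratio: rises with labor income
    uncertainty, falls as credit becomes more available (looser
    constraints reduce the need for a precautionary buffer).
    k_u and k_c are illustrative sensitivities."""
    return k_u * uncertainty - k_c * credit_availability

def saving_rate(actual_wealth, target, base=0.05, speed=0.1):
    """Saving responds to the gap between target and actual wealth:
    households save more when below target, less when above."""
    return base + speed * (target - actual_wealth)
```

On this sketch, an expansion of credit availability lowers target wealth and hence the saving rate, mirroring the long decline the paper attributes to credit conditions; a rise in uncertainty or a fall in actual wealth raises saving, mirroring the post-2008 increase.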
This paper investigates whether preference interactions can explain why risk preferences change over time and across contexts. We conduct an experiment in which subjects accept or reject gambles involving real money gains and losses. We introduce within-subject variation by alternating subjectively liked and disliked music in the background. We find that favourite music increases risk-taking, and disliked music suppresses risk-taking, compared to a baseline of no music. Several theories in psychology propose mechanisms by which mood affects risk-taking, but none of them fully explain our results. The results are, however, consistent with preference complementarities that extend to risk preference.
Remarks on deixis
(1992)
The prevailing conception of deixis is oriented to the idea of 'concrete' physical and perceptual characteristics of the situation of speech. Signs standardly adduced as typical deictics are I, you, here, now, this, that. I and you are defined as meaning "the person producing the utterance in question" and "the person spoken to", here and now as meaning "where the speaker is at utterance time" and "at the moment the utterance is made" (also, "at the place/time of the speech exchange"); similarly, the meanings of this and that are as a rule defined via proximity to speaker's physical location. The elements used in such definitions form the conceptual framework of most of the general characterisations of deixis in the literature. [...] There is much in the literature, of course, that goes far beyond this framework. A great variety of elements, mostly with very abstract meanings, have been found to share deictic characteristics although they do not fit into the personnel-place-time-of-utterance schema. The adequacy of that schema is also called into question by many observations to the effect that the use of such standard deictics as here, now, this, that cannot really be accounted for on its basis, and by the far-reaching possibilities of orienting deictics to reference points in situations other than the situation of speech, to 'deictic centers' other than the speaker. [...] Analyses along the lines of the standard conception regularly acknowledge the existence of deviations from the assumed basic meanings. One traditional solution attributes them to speaker's "subjectivity", or to differences between "physical" and "psychological" space or time; in a similar vein, metaphorical extensions may be said to be at play, or a distinction between prototypical and non-prototypical meanings invoked.
Quite apart from the question of the relative merits of these explanatory principles, which I do not wish to discuss here, the problem with all such accounts is that the definitions of the assumed basic meanings themselves are founded on axiom rather than analysis of situated use. The logical alternative, of course, is to set out for more abstract and comprehensive meaning definitions from the start. In fact, a number of recent, discourse-oriented, treatments of the demonstratives proceed this way; they view those elements as processing instructions rather than signs with inherently spatial denotation (Isard 1975, Hawkins 1978, Kirsner 1979, Linde 1979, Ehlich 1982).
Oppositeness, i.e. the relation between opposites or contraries or contradictories, has a fundamental role in human cognition. In the various domains of intellectual and psychological activity we find ordering schemas that are based, in one way or another, on the cognitive figure of oppositeness. It is therefore not surprising that the figure and its corresponding ordering schemas show their reflexes in the languages of the world. [...] We shall be dealing with oppositeness in the sense that a linguistically untrained native speaker, when asked what would be the opposite of 'long', can come up with some such answer as 'short', and likewise intuitively grasp the relation between 'man' and 'woman', 'come' and 'go', 'up' and 'down', etc. Given that much of the vocabulary of a language is organized in such opposite pairs, we must recognize that this is an important faculty, and we are curious to know how this is done, what the underlying conceptual-cognitive structures and processes are, and how they are encoded in the languages of the world. We shall leave out of consideration such oppositions as singular vs. plural, present vs. past, voiced vs. unvoiced, oppositions that the linguist states by means of a metalanguage which is itself derived from a concept of oppositeness as manifested by the examples given earlier. Our approach will connect with earlier versions of the UNITYP framework. However, as a novel feature and, hopefully, as an improvement, we shall apply some sort of a division of labor. We shall first try to reconstruct the conceptual-cognitive content of oppositeness and to keep it separate from the discussion of its reflexes in the individual languages. We shall find that a dimensional ordering of content in PARAMETERS and a continuum of TECHNIQUES is possible already on the conceptual-cognitive level. In order to keep it distinct from the level of linguistic encoding we shall use a separate terminology, graphically marked by capital letters.
Why should we engage in language universals research and language typology? What do we want to explain? It is a fact that, although languages differ significantly and considerably, no one would deny that they have something in common; how else could they be labelled 'language'? There is obviously unity among them, no matter how vaguely felt and for what reasons: scientific, practical, moral, etc. Neither diversity per se nor unity per se is what we want to explain. There is no reason whatsoever to consider either one of them as primary, and the other as derived. What we do want to explain is "equivalence in difference" – cf. our motto – which manifests itself, among other things, in the translatability from one language to another, the learnability of any language, and language change – all of which presuppose that speakers intuitively find their way from diversity to unity. This is a highly salient property which deserves to be brought into our consciousness. Generally then, our basic goal is to explain the way in which language-specific facts are connected with a unitarian concept of language – "die Sprache" – "le langage".
The Stanford Project on Language Universals began its activities in October 1967 and brought them to an end in August 1976. Its directors were Joseph H. Greenberg and Charles A. Ferguson. The Cologne Project on Language Universals and Typology [with particular reference to functional aspects], abbreviated UNITYP, had its early beginnings in 1972, but deployed its full activities from 1976 onwards and is still operating. This writer, who is the principal investigator, had the privilege of collaborating with the Stanford Project during the spring of 1976. […] One of the leading Greenbergian ideas, that of implicational generalizations, has been integrated as a fundamental principle in the construction of continua and of universal dimensions as proposed by UNITYP. It is hoped that the following considerations on numeral systems will be apt to bear witness to this situation. They would be unthinkable without Greenberg’s pioneering work on "Generalizations about numeral systems" (Greenberg 1978: 249 ff., henceforth referred to as Greenberg, NS). Further work on this domain and on other comparable domains almost inevitably leads one to the view that generalizations of the Greenberg type have a functional significance and that a dimensional framework is apt to bring this to the fore. This is the view of linguistic behaviour as purposeful, and of language as a problem-solving device. The problem consists in the linguistic representation of cognitive-conceptual ideas. The solution is represented by the corresponding linguistic structures in their diversity, and the task of the linguist consists in reconstructing the program and subprograms underlying the process of problem-solving. It is claimed that the construct of continua and of universal dimensions makes these programs intelligible.
The human mind may produce prototypization within virtually any realm of cognition and behavior. A "comparative prototype-typology" might prove to be an interesting field of study – perhaps a new subfield of semiotics. This, however, would presuppose a clear view of the samenesses and differences of prototypization in these various fields. It seems realistic for the time being that the linguist first confine himself to describing prototypization within the realm of language proper. The literature on prototypes has steadily grown in the past ten years or so. I confine myself to mentioning the volume on Noun Classes and Categorization, edited by C. Craig (1986), which contains a wealth of factual information on the subject, along with some theoretical vistas. By and large, however, linguistic prototype research is still basically in a taxonomic stage – which, of course, represents the precondition for moving beyond it. The procedure is largely per ostensionem, by accumulating examples of prototypes. We still lack a comprehensive prototype theory. The following pages are intended not to provide such a theory, but to take the first steps in this direction. Section 2 will feature some elements of a functional theory of prototypes. They have been developed by this author within the frame of the UNITYP model of research on language universals and typology. Section 3 will bring a discussion of prototypization with regard to selected phenomena from a wide range of levels of analysis: phonology, morphosyntax, speech acts, and the lexicon. Prototypization will finally be studied within one of the universal dimensions, that of APPREHENSION – the linguistic representation of the concepts of objects – as proposed by Seiler (1986).
This is a survey of the development of the model of PARTICIPATION (P'ATION) with reference to the postulated sequence of the techniques on the dimension of P'ATION. Along with a brief explanation of the techniques this article contains a discussion of the major claims with regard to the sequence of the techniques and the possibilities of subjecting the claims to empirical verification.
The present article is a crosslinguistic discussion of the distinction between a word class of nouns and a word class of verbs in the UNITYP framework of the dimension of PARTICIPATION (for a first overall sketch of PARTICIPATION see Seiler 1984). According to this framework the noun/verb-distinction (henceforth N/V-D) must be regarded as a gradable, continuous phenomenon ranging from the stage of a clear-cut distinction with no overlap to almost a non-distinction. Although there is no question that most, if not all, languages do differentiate between nouns and verbs, it is also quite apparent that languages do so to a different degree and by different means, and that it only makes sense to use the terms "noun" and "verb" in different languages when one actually has a common functional denominator in mind (see below). After a general introduction to the notion of a noun/verb-continuum (chapter 1) the reader will be presented with a survey of languages as diverse as German, English, Russian, Hebrew, Turkish, Salish, and Tongan (see chapter 2) in support of the continuum hypothesis. In chapter 3 the facts are coordinated in an overall pattern of regularities underlying the increase or decrease of categorical restrictions between the respective word classes. Also, chapter 3 raises the issue of to what degree a N/V-D can be considered a matter of certain lexemes or a matter of the morphosyntactic environment of certain lexical units. Lastly, we shall seek an answer to the question why it is not a necessary requirement for languages to draw a sharp distinction between a word class of nouns and a word class of verbs.
The aim of this contribution is to embed the question of an antinomy between "integral" vs. "partial typology", inscribed as the topic of this plenary session, into the comprehensive framework of the dimensional model of the research group on language universals and typology (UNITYP). In this introductory section I shall evoke some cardinal points in the theory of linguistic typology, as viewed "from outside", viz. on the basis of striking parallelisms with psychological typology. Section 2 will permit a brief look at the dimensional model of UNITYP. In section 3 I shall present an illustration of a typological treatment on the basis of one particular dimension. In section 4 I shall draw some conclusions with special reference to the "integral vs. partial" antinomy.
As a traditional notion of fundamental importance in linguistics and philosophy (logic), "predication" is fraught with controversial issues. It is thus difficult to delimit the scope of this paper without becoming involved in some major issue. The following distinctions seem to me to be plausible on an intuitive basis. Evidence for why they are useful and legitimate will be found in the body of the paper. The discussion will focus on morphosyntactic predication […].
Ergativity in Samoan
(1985)
Most typological and language-specific studies on so-called ergative languages are concerned with case marking patterns, particularly split ergativity, with the organization of syntactic relations as defined by syntactic operations such as coreferential deletion across coordinate conjunctions, Equi-NP-deletion and relativization, and with the notion of subject, but usually neglect the notion of valency, though the inherent relational properties of the verb, i.e. valency, play a fundamental role in the syntactic organization of sentences in ergative as well as in other languages. The following investigation of ergativity in Samoan aims to integrate the notion of valency into the description of semantic and syntactic relations and to outline the characteristic features of Samoan verbal clauses as far as they seem to be relevant to recent and still ongoing discussions on linguistic typology and syntactic theory. The main points of the definition of valency […] are: Valency is the property of the verb which determines the obligatory and optional number of its participants, their morphosyntactic form, their semantic class membership (e.g. ± animate, ± human), and their semantic role (e.g. agent, patient, recipient). All semantic and morphosyntactic properties of participants not inherently given by the verb, and therefore not predictable from it, are not a matter of valency. Valency is not a homogeneous property of the verb, but consists of several exponents which show varying degrees of relevance in different languages or different verb classes within a single language.
Grammatical relations, particularly the notions of transitivity, case marking, ergativity, passive and antipassive, have been a favourite subject of typological research during the last decade, but surprisingly, the notion of valency has been of marginal interest in cross-linguistic studies, though the syntactic and semantic status of participants is, to a great extent, determined by the relational properties of the verb. Valency is the property of the verb which determines the obligatory and optional number of its participants, their morphosyntactic form, their semantic class membership (e.g. ± animate, ± human), and their semantic role (e.g. agent, patient, recipient). The valency inherently gives information on the nature of the semantic and syntactic relations that hold between the verb and its participants. If a verb is combined with more participants than allowed or fewer than required, or if the participants do not show the required morphosyntactic form or class membership, the clause is ungrammatical. In other words, it is not sufficient to consider only the number of actants as a matter of valency; rather, all semantic and morphosyntactic properties of the relation between a verb and its participants that are predictable from the verb must be included. The predictability of these properties results from their inherent givenness, and it does not seem reasonable to count some inherently given relational properties as a matter of valency, but not others (compare Helbig (1971:38f) and Heidolph et al. (1981:479), who distinguish between the quantitative, syntactic and semantic aspects of valency).
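The grammaticality check described in the abstract above (a clause is ruled out if obligatory slots are unfilled, semantic class requirements are violated, or unlicensed participants appear) can be made concrete as a small data-structure sketch. The class names, the feature encoding (a single ± animate feature), and the example frame for 'give' are hypothetical illustrations, not drawn from the paper:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    role: str      # semantic role, e.g. 'agent', 'patient', 'recipient'
    animate: bool  # semantic class membership (± animate)

@dataclass
class ValencyFrame:
    verb: str
    obligatory: dict  # role -> required features, e.g. {'agent': {'animate': True}}
    optional: dict    # role -> required features for optional participants

def is_grammatical(frame: ValencyFrame, participants: list) -> bool:
    """Reject a clause if an obligatory slot is unfilled, if a
    participant violates the semantic class the verb requires, or if
    a participant fills a slot the verb does not license at all."""
    by_role = {p.role: p for p in participants}
    licensed = {**frame.obligatory, **frame.optional}
    for role, p in by_role.items():
        req = licensed.get(role)
        if req is None:                 # participant not licensed by the verb
            return False
        if 'animate' in req and p.animate != req['animate']:
            return False                # wrong semantic class
    return all(role in by_role for role in frame.obligatory)

# Hypothetical frame: 'give' requires an animate agent and a patient,
# and optionally licenses an animate recipient.
give = ValencyFrame('give',
                    obligatory={'agent': {'animate': True}, 'patient': {}},
                    optional={'recipient': {'animate': True}})
```

A fuller model would also encode morphosyntactic form (case marking) per slot, which is exactly the information an ergative case pattern distributes differently than an accusative one.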
The present paper is an attempt to describe a particular semantic domain in Thai, that of local relations, in terms of a gradual interconnection of what traditional descriptions usually regard as distinct and isolated categories. It is based on the well-known observation that isolating languages like Thai typically display a high degree of 'multifunctionality', or else of syntactic 'versatility' of very many lexical items. […] The semantic area studied in the following pages yields a clear systematic interconnection of three different categories, viz. that of nouns – as the focal instance of maximum syntactic independence –, that of verbs – as, conversely, the focal instance of maximally relational concepts –, and, as an intermediary category between these two, that of prepositions which the system lexically feeds from both these opposite ends. The examples given in the course of this paper have been obtained from published grammatical literature, from Thai texts, and from informants.