This paper compares the shareholder-value-maximizing capital structure and pricing policy of insurance groups with that of stand-alone insurers. Groups can utilise intra-group risk diversification by means of capital and risk transfer instruments. We show that using these instruments enables the group to offer insurance with less default risk and at lower premiums than is optimal for stand-alone insurers. We also take into account that shareholders of groups could find it more difficult to prevent inefficient overinvestment or cross-subsidisation, which we model by higher dead-weight costs of carrying capital. The trade-off between risk diversification on the one hand and higher dead-weight costs on the other can result in group building being beneficial for shareholders but detrimental for policyholders.
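One standard way to make the default-risk discount behind this trade-off concrete (a sketch in the spirit of contingent-claims insurance pricing, not necessarily the paper's exact model) prices a policy as the risk-neutral expectation of the indemnity capped by the insurer's assets:

    P = e^{-r}\,\mathbb{E}^{\mathbb{Q}}[\min(L, A)] = e^{-r}\big(\mathbb{E}^{\mathbb{Q}}[L] - \mathbb{E}^{\mathbb{Q}}[(L - A)^{+}]\big),

where L is the aggregate policyholder claim, A the insurer's end-of-period assets, and the last term is the value of the insurer's default put. Intra-group capital and risk transfer shrinks the default put through diversification, supporting lower premiums; higher dead-weight costs of carrying capital in a group push the other way.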
Depending on time and place, insurance companies are subject to different forms of solvency regulation. In modern regulatory regimes, such as the EU's forthcoming Solvency II standard, insurance pricing is liberalized and risk-based capital requirements are introduced. In many economies in Asia and Latin America, on the other hand, supervisors require prior approval of policy conditions and insurance premiums but do not conduct risk-based capital regulation. This paper compares the outcomes of insurance rate regulation and risk-based capital requirements by deriving stock insurers' best responses. It turns out that binding price floors affect insurers' optimal capital structures and induce them to choose higher safety levels. Risk-based capital requirements are a more efficient instrument of solvency regulation and allow for lower insurance premiums, but may come at the cost of investments in adequate risk-monitoring systems. The paper derives threshold values for regulators' investments in risk-based capital regulation and provides starting points for designing a welfare-enhancing insurance regulation scheme.
If there is one thing to be learned from David Foster Wallace, it is that cultural transmission is a tricky game. This was a problem Wallace confronted as a literary professional, a university-based writer during what Mark McGurl has called the Program Era. But it was also a philosophical issue he grappled with on a deep level as he struggled to combat his own loneliness through writing. This fundamental concern with literature as a social, collaborative enterprise has also gained some popularity among scholars of contemporary American literature, particularly McGurl and James English: both critics explore the rules by which prestige or cultural distinction is awarded to authors (English; McGurl). Their approach requires a certain amount of empirical work, since these claims move beyond the individual experience of the text into forms of collective reading and cultural exchange influenced by social class, geographical location, education, ethnicity, and other factors. Yet McGurl and English's groundbreaking work is limited by the very forms of exclusivity they analyze: the protective bubble of creative writing programs in the academy and the elite economy of prestige surrounding literary prizes, respectively. To really study the problem of cultural transmission, we need to look beyond the symbolic markets of prestige to the real market, the site of mass literary consumption, where authors succeed or fail based on their ability to speak to that most diverse and complicated of readerships: the general public. Unless we study what I call the social lives of books, we make the mistake of keeping literature in the same ascetic laboratory that Wallace tried to break out of with his intense authorial focus on popular culture, mass media, and everyday life.
In the last few years, literary studies have experienced what we could call the rise of quantitative evidence. This had happened before of course, without producing lasting effects, but this time it’s probably going to be different, because this time we have digital databases, and automated data retrieval. As Michel’s and Lieberman’s recent article on "Culturomics" made clear, the width of the corpus and the speed of the search have increased beyond all expectations: today, we can replicate in a few minutes investigations that took a giant like Leo Spitzer months and years of work. When it comes to phenomena of language and style, we can do things that previous generations could only dream of.
When it comes to language and style. But if you work on novels or plays, style is only part of the picture. What about plot – how can that be quantified? This paper is the beginning of an answer, and the beginning of the beginning is network theory. This is a theory that studies connections within large groups of objects: the objects can be just about anything – banks, neurons, film actors, research papers, friends... – and are usually called nodes or vertices; their connections are usually called edges; and the analysis of how vertices are linked by edges has revealed many unexpected features of large systems, the most famous one being the so-called "small-world" property, or "six degrees of separation": the uncanny rapidity with which one can reach any vertex in the network from any other vertex. The theory proper requires a level of mathematical intelligence which I unfortunately lack; and it typically uses vast quantities of data which will also be missing from my paper. But this is only the first in a series of studies we're doing at the Stanford Literary Lab; and even at this early stage, a few things emerge.
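To make the machinery concrete, here is a minimal sketch in Python using the networkx library. The character network is a hand-made toy, invented for illustration (one plausible criterion would be an edge between any two characters who exchange words); it is not data from the paper.

    # A toy plot network: vertices are characters, edges are invented here
    # for illustration (e.g. characters who exchange dialogue).
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Hamlet", "Horatio"), ("Hamlet", "Claudius"),
        ("Hamlet", "Gertrude"), ("Claudius", "Gertrude"),
        ("Hamlet", "Ophelia"), ("Ophelia", "Polonius"),
        ("Polonius", "Claudius"), ("Laertes", "Polonius"),
        ("Laertes", "Claudius"), ("Horatio", "Ghost"), ("Hamlet", "Ghost"),
    ])

    # The "small-world" property: even in a sparse network, the average
    # shortest path between any two vertices stays short.
    print(nx.average_shortest_path_length(G))

    # Degree centrality: a first quantitative guess at who holds the plot together.
    print(nx.degree_centrality(G))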
This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin – which tried to establish whether computer-generated algorithms could "recognize" literary genres. You take 'David Copperfield', run it through a program without any human input – "unsupervised", as the expression goes – and ... can the program figure out whether it's a gothic novel or a 'Bildungsroman'? The answer is, fundamentally, Yes: but a Yes with so many complications that it is necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.
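To make the "unsupervised" idea concrete, here is a minimal sketch of one standard clustering pipeline in Python with scikit-learn. It is a generic illustration, not the programs the study itself used, and the four "texts" are invented placeholders.

    # Unsupervised genre clustering: texts become word-frequency vectors and
    # are grouped without any human labels. A generic sketch, not the study's tools.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    texts = [
        "the castle was dark and the ghost walked the ruined corridors",
        "the ancient vault groaned as spectral shadows crossed the crypt",
        "the young man left his village to seek his education in the city",
        "she grew through years of schooling into her place in society",
    ]

    X = TfidfVectorizer().fit_transform(texts)                # texts -> feature vectors
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # e.g. [0 0 1 1]: gothic vs. Bildungsroman, with no supervision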
The article discusses the methodology adopted for a cross-linguistic synchronic and diachronic corpus study of indefinites. The study covered five indefinite expressions, each in a different language. The main goal of the study was to map the synchronic distribution of these indefinites and to trace their historical development. The methodology we used is a form of functional labeling which combines context (syntax) and meaning (semantics), taking Haspelmath's (1997) functional map as a starting point. In the article we identify Haspelmath's functions with logico-semantic interpretations and propose a binary branching decision tree assigning each instance of an indefinite exactly one function in the map.
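Schematically, such a binary branching tree can be rendered as a nested classifier. In the Python sketch below the branching questions are hypothetical stand-ins (the article defines its own syntactic and semantic tests); only the function labels are taken from Haspelmath's (1997) map.

    # A schematic binary branching tree assigning each indefinite occurrence
    # exactly one function. The tests (dict keys) are hypothetical stand-ins;
    # the labels are functions from Haspelmath's (1997) map.
    def classify(ctx):
        if ctx["in_scope_of_negation"]:
            return "direct negation" if ctx["clausemate_negation"] else "indirect negation"
        if not ctx["episodic"]:
            # non-episodic contexts: questions, conditionals, free choice
            if ctx["interrogative"]:
                return "question"
            return "free choice" if ctx["any_alternative_ok"] else "conditional"
        # episodic, positive contexts: specificity decides
        return "specific known" if ctx["speaker_can_identify"] else "specific unknown"

    print(classify({"in_scope_of_negation": False, "episodic": True,
                    "interrogative": False, "any_alternative_ok": False,
                    "speaker_can_identify": False}))   # -> "specific unknown"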
This paper examines to what extent the build-up of 'global imbalances' since the mid-1990s can be explained in a purely real open-economy DSGE model in which agents' perceptions of long-run growth are based on filtering observed changes in productivity. We show that long-run growth estimates based on filtering U.S. productivity data comove strongly with long-horizon survey expectations. By simulating the model in which agents filter data on U.S. productivity growth, we closely match the U.S. current account evolution. Moreover, with household preferences that control the wealth effect on labor supply, we can generate output movements in line with the data.
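The filtering mechanism lends itself to a compact formulation; the constant-gain (steady-state Kalman) update below is a generic sketch of such a scheme, not necessarily the paper's exact specification. Agents revise their estimate \hat{g}_t of long-run productivity growth toward the latest observed growth rate \Delta a_t:

    \hat{g}_t = \hat{g}_{t-1} + \lambda\,(\Delta a_t - \hat{g}_{t-1}), \qquad 0 < \lambda < 1,

so a run of strong productivity readings gradually raises perceived long-run growth and expected future income, and with it current consumption and borrowing — the channel behind the current-account dynamics described above.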
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions.
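To fix ideas, here is a minimal sketch in Python of plain syntactic first-order unification — the base machinery that the paper's algorithm extends with sorts, left-commutativity and context variables. The term representation and names are my own, chosen for brevity.

    # Plain first-order unification. Variables are strings; applications are
    # (fname, args) tuples; constants are nullary applications like ("c", []).
    def unify(s, t, subst=None):
        subst = {} if subst is None else subst
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            return subst
        if isinstance(s, str):                       # s is a variable: bind it
            return None if occurs(s, t, subst) else {**subst, s: t}
        if isinstance(t, str):
            return unify(t, s, subst)
        (f, fargs), (g, gargs) = s, t
        if f != g or len(fargs) != len(gargs):
            return None                               # symbol clash
        for a, b in zip(fargs, gargs):                # decompose argument-wise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst

    def walk(t, subst):                               # follow variable bindings
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t, subst):                          # occurs check
        t = walk(t, subst)
        if t == v:
            return True
        return not isinstance(t, str) and any(occurs(v, a, subst) for a in t[1])

    # unify app(X, c) with app(f(c), Y)  ->  {X: f(c), Y: c}
    print(unify(("app", ["X", ("c", [])]),
                ("app", [("f", [("c", [])]), "Y"])))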
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories as to how this process works. We give a short overview of the most influential theories, then focus on the most widely accepted one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological papers, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production means.
The papers in this volume were originally presented at the Workshop on Bantu Wh-questions, held at the Institut des Sciences de l’Homme, Université Lyon 2, on 25-26 March 2011, which was organized by the French-German cooperative project on the Phonology/Syntax Interface in Bantu Languages (BANTU PSYN). This project, which is funded by the ANR and the DFG, comprises three research teams, based in Berlin, Paris and Lyon. The Berlin team, at the ZAS, is: Laura Downing (project leader) and Kristina Riedel (post-doc). The Paris team, at the Laboratoire de phonétique et phonologie (LPP; UMR 7018), is: Annie Rialland (project leader), Cédric Patin (Maître de Conférences, STL, Université Lille 3), Jean-Marc Beltzung (post-doc), Martial Embanga Aborobongui (doctoral student), Fatima Hamlaoui (post-doc). The Lyon team, at the Dynamique du Langage (UMR 5596), is: Gérard Philippson (project leader) and Sophie Manus (Maître de Conférences, Université Lyon 2). These three research teams bring together the range of theoretical expertise necessary to investigate the phonology-syntax interface: intonation (Patin, Rialland), tonal phonology (Aborobongui, Downing, Manus, Patin, Philippson, Rialland), phonology-syntax interface (Downing, Patin) and formal syntax (Riedel, Hamlaoui). They also bring together a range of Bantu language expertise: Western Bantu (Aborobongui, Rialland), Eastern Bantu (Manus, Patin, Philippson, Riedel), and Southern Bantu (Downing).
Existing studies from the United States, Latin America, and Asia provide scant evidence that private schools dramatically improve academic performance relative to public schools. Using data from Kenya—a poor country with weak public institutions—we find a large effect of private schooling on test scores, equivalent to one full standard deviation. This finding is robust to endogenous sorting of more able pupils into private schools. The magnitude of the effect dwarfs the impact of any rigorously tested intervention to raise performance within public schools. Furthermore, nearly two-thirds of private schools operate at lower cost than the median government school.
A large empirical literature has shown that user fees significantly deter public service utilization in developing countries. While most of these results reflect partial equilibrium analysis, we find that the nationwide abolition of public school fees in Kenya in 2003 led to no increase in net public enrollment rates, but rather a dramatic shift toward private schooling. Results suggest this divergence between partial- and general-equilibrium effects is partially explained by social interactions: the entry of poorer pupils into free education contributed to the exit of their more affluent peers.
Rare Earth Elements (REEs) have become the new strategic economic weapon of the modern age. Used in the manufacturing of products ranging from mobile phones to jet fighter engines, REEs have become the new “oil” in terms of economic and strategic importance. Currently, 95% of the REEs mined globally are mined in China, giving China a monopoly on the industry. Deng Xiaoping foresaw the importance of REEs in 1992 when he commented: “as there is oil in the Middle East, there is rare earth in China.” Recently, China temporarily stopped exports of REEs to Japan, the EU and the US as an unofficial response to various political and economic disputes. This stoppage raised concerns about the reliability of China’s REE exports. Using the theory of neo-mercantilism, this paper analyzes China’s actions in the REE market and their economic and political implications. It concludes with a look at how countries are trying to reduce their dependency on China.
Japan's quest for energy security: risks and opportunities in a changing geopolitical landscape
(2011)
For much of the 20th century, economic growth was fueled by cheap oil-based energy. Due to increasing resource constraints, however, the political and strategic importance of oil has become a significant part of energy and foreign policy making in East and Southeast Asian countries. In Japan, the rise of China’s economic and military power is a source of considerable concern. To enhance energy security, the Japanese government has recently amended its energy regulatory framework, which reveals high political awareness of the risks arising from looming resource shortages and competition over access. Understanding that national energy security is a politically and economically sensitive area with a clear international dimension affecting everyday life is critical to shaping a nation’s energy future.
It has often been asked whether today's Japan will be able to move into new and promising industries, or whether it is locked into an innovation system with an inherent inability to give birth to new industries. One argument reasons that the thick institutional complementarities among labour, innovation, and finance, linking its enterprises and the public sector, favour industrial development in sectors of intermediate uncertainty, while making it difficult to move into areas of major uncertainty. In this paper, we present the case of the silver industry or, somewhat more prosaically, the 60+ or even 50+ industry, for which most would agree that Japan has indeed become a lead market and lead producer on the global market. For an institutional economist, the case of the silver industry is particularly interesting, because Japan's success is based on the cooperation of existing actors, the enterprise and public sectors in particular, which helped overcome the information uncertainties and asymmetries involved in the new market by relying on several established mechanisms developed well before. In that sense, Japan's silver industry presents a case of what we propose to call successful institutional path activation with the effect of innovative market creation, instead of the problematic lock-in effects that are usually associated with the term path dependence.
The emergence of Capitalism is said to always lead to extreme changes in the structure of a society. This view implies that Capitalism is a universal and unique concept that needs an explicit institutional framework and does not distinguish between, say, a German and a US Capitalism. In contrast, this work argues that the ‘ideal type’ of Capitalism in a Weberian sense does not exist. It will be demonstrated that Capitalism is not a concept that shapes a uniform institutional framework within every society, constructing a specific economic system. Rather, depending on the institutional environment - family structures in particular - different forms of Capitalism arise. To exemplify this, the networking (Guanxi) Capitalism of contemporary China will be presented, where social institutions known from the past were reinforced for successful development. It will be argued that especially the change, destruction and creation of family and kinship structures are key factors that determined the further development and success of the Chinese economy and the type of Capitalism arising there. In contrast to Weber, it will be argued that Capitalism does not necessarily lead to a process of destruction of traditional structures and to large-scale enterprises under rational, bureaucratic management, without leaving space for socio-cultural structures like family businesses. Flexible global production increasingly favours small business production over larger corporations. Small Chinese family firms are able to respond to rapidly changing market conditions and motivate maximum effort for modest pay. The structure of the Chinese family has proved to be very persistent over time and able to accommodate diverse economic and political environments while maintaining its core identity. This implies that Chinese Capitalism may be an entirely new economic system, based on Guanxi and the family.
The aim of this paper is to give the semantic profile of the Greek verb-deriving suffixes -íz(o), -én(o), -év(o), -ón(o), -(i)áz(o), and -ín(o), with a special account of the ending -áo/-ó. The patterns presented are the result of an empirical analysis of data extracted from extended interviews conducted with 28 native Greek speakers in Athens, Greece in February 2009. In the first interview task the test persons were asked to force (i.e., create) verbs by using the suffixes -ízo, -évo, -óno, -(i)ázo, and -íno and a variety of bases which conformed to the ontological distinctions made in Lieber (2004). In the second task the test persons were asked to evaluate three groups of forced verbs with a noun, an adjective, and an adverb as base, respectively, by awarding one (best/highly acceptable verb) to six (worst/unacceptable verb) points. In the third task nineteen established verb pairs with different suffixes and the ending -áo/-ó were presented. The test persons were asked to report whether there was some difference between them and what exactly this difference was. The differences reported were transformed into 16 alternations. In the fourth task 21 established verbs with different suffixes were presented. The test persons were asked to give the "opposite" or "near opposite" expression for each verb. The rationale behind this task was to arrive at the meaning of the suffixes through the semantics of the opposites. In the analysis Rochelle Lieber's (2004) theoretical framework is used. The results of the analysis suggest (i) a sign-based treatment of affixes, (ii) a vertical preference structure in the semantic structure of the head suffixes which takes into account the semantic make-up of the bases, and (iii) the integration of socio-expressive meaning into verb structures.