Over the past few decades, changes in market conditions such as the globalisation and deregulation of financial markets, as well as product innovation and technical advancements, have induced financial institutions to expand their business activities beyond their traditional boundaries and to engage in cross-sectoral operations. As combining different sectoral businesses offers opportunities for operational synergies and diversification benefits, financial groups comprising banks, insurance undertakings and/or investment firms, usually referred to as financial conglomerates, have emerged rapidly, providing a wide range of services and products in distinct financial sectors and oftentimes in different geographic locations. In the European Union (EU), financial conglomerates have become some of the biggest and most active financial market participants in recent years. Financial conglomerates generally pose new problems for financial authorities as they can raise new risks and exacerbate existing ones. In particular, their cross-sectoral business activities can involve substantial prudential risks such as the risk of regulatory arbitrage and contagion risk arising from intra-group transactions. Moreover, the generally large size of financial conglomerates as well as the high complexity and interconnectedness of their corporate structures and risk exposures can entail substantial systemic risk and can therefore threaten the stability of the financial system as a whole. Until a few years ago, there was no supervisory framework in place which addressed a financial conglomerate in its entirety as a group. Instead, each group entity within a financial conglomerate was subject to the supervisory rules of its pertinent sector only. Such a silo-based supervisory approach had the drawback of not taking account of risks which arise or are aggravated at the group level.
It also failed to consider how the risks from different business lines within the group interrelate with each other and affect the group as a whole. In order to address this lack of group-wide prudential supervision of financial conglomerates, the European legislator adopted the Financial Conglomerates Directive 2002/87/EC (‘FCD’) on 16 December 2002. The FCD was transposed into national law in the member states of the EU (‘Member States’) by 11 August 2004 for application to financial years beginning on or after 1 January 2005. The FCD primarily aims at supplementing the existing sectoral directives to address the additional risks of concentration, contagion and complexity presented by financial conglomerates. It therefore provides for a supervisory framework which applies in addition to sectoral supervision. Most importantly, the FCD has introduced additional capital requirements at the conglomerate level so as to prevent the multiple use of the same capital by different group entities. This paper examines to what extent the FCD provides for an adequate capital regulation of financial conglomerates in the EU, taking into account the underlying sectoral capital requirements and the risks inherent in financial conglomerates. In Part 1, the definition and the basic corporate models of financial conglomerates are presented (I), followed by an illustration of the core motives behind the phenomenon of financial conglomeration (II) and an overview of the development of the supervision of financial conglomerates in the EU (III). Part 2 begins with a brief elaboration on the role of regulatory capital (I) and gives a general overview of the EU capital requirements applicable to banks and insurance undertakings respectively, delineating the commonalities and differences between the banking and insurance capital requirements (II).
It then examines the need for a group-wide capital regulation of financial conglomerates and analyses the adequacy of the FCD capital requirements. In this context, the technical advice rendered by the Joint Committee on Financial Conglomerates (JCFC) as well as the ongoing legislative reforms at the EU level are discussed (III). The paper closes with a conclusion and an outlook on the remaining open issues (IV).
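The "multiple use of the same capital" that the FCD's conglomerate-level requirement targets is the classic double-gearing problem. The toy computation below is only an illustrative sketch in the spirit of a deduction-and-aggregation test, not the FCD's actual Annex I methods; all figures and names (`supplementary_capital`, the parent/subsidiary amounts) are hypothetical.

```python
# Hypothetical illustration of double gearing and a deduction-style
# group capital test. Figures are invented, not taken from the paper.

def supplementary_capital(own_funds, solo_requirements, participations):
    """Group capital surplus: sum of solo own funds, minus the sum of
    solo capital requirements, minus the book value of intra-group
    participations (so capital financing a subsidiary is not counted
    both at the parent and at the subsidiary)."""
    return sum(own_funds) - sum(solo_requirements) - sum(participations)

# A parent bank holds 100 of own funds, of which 40 is invested as the
# capital of an insurance subsidiary. Adding the solo positions naively
# counts that 40 twice ("double gearing").
naive_total = 100 + 40                # overstates group capital: 140
surplus = supplementary_capital(
    own_funds=[100, 40],              # parent, subsidiary
    solo_requirements=[60, 30],       # hypothetical sectoral requirements
    participations=[40],              # parent's stake in the subsidiary
)
print(naive_total, surplus)           # 140 10
```

On a solo-only view the group appears to hold 140 of capital; once the intra-group participation is deducted, only a surplus of 10 remains over the combined sectoral requirements, which is the kind of gap a purely sectoral, silo-based regime cannot see.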
The first part of this paper deals with various points of criticism leveled against Ordoliberalism. The aim here is not to directly falsify each argument on its own; rather, the author tries to give a precise overview of the spectrum of critique. The second section picks out one line of critique, namely that the ordoliberal concept of the state is somewhat elitist and grounded in intellectual experts. Building on the previous sections, the final part differentiates two kinds of genesis of norms: an evolutionary and an elitist one, both (latently) present within Ordoliberalism. In combination with the two-level differentiation between individual and regulatory ethics, the essay allows for a distinction between individual-ethical norms based on an evolutionary genesis of norms and regulatory-ethical norms based on an elitist understanding of norms. A by-product of the author’s argument is a (further) demarcation within neoliberalism.
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated U.S. recessions. We generate forecasts from six different models of the U.S. economy and compare them to professional forecasts from the Federal Reserve’s Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability to historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy, even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets and parameter estimates.
JEL Classification: C53, D84, E31, E32, E37
Keywords: Forecasting, Business Cycles, Heterogeneous Beliefs, Forecast Distribution, Model Uncertainty, Bayesian Estimation
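The accuracy comparison described above boils down to scoring each mean forecast against realized outcomes, typically by root-mean-squared error. The sketch below shows that computation on invented numbers; the series names and all values are hypothetical stand-ins, not data from the paper.

```python
import math

# Illustrative only: RMSE comparison of a mean model forecast and a mean
# survey forecast against realized outcomes. All numbers are invented.

def rmse(forecasts, actuals):
    """Root-mean-squared forecast error over matched periods."""
    return math.sqrt(
        sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)
    )

actual_growth = [2.1, -0.5, -1.8, 1.0, 2.4]   # hypothetical realized values
mean_model    = [1.8,  0.2, -1.0, 0.8, 2.0]   # hypothetical mean model forecast
mean_survey   = [1.9,  0.4, -0.9, 1.1, 2.2]   # hypothetical mean survey forecast

print(round(rmse(mean_model, actual_growth), 3))
print(round(rmse(mean_survey, actual_growth), 3))
```

Running the same scoring on real-time data vintages, as the paper does, is what makes model and professional forecasts comparable: both are evaluated only on information available at the forecast date.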
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
The well-known proof of termination of reduction in simply typed calculi is adapted to a monomorphically typed lambda-calculus with case and constructors and recursive data types. The proof differs at several places from the standard proof. Perhaps it is useful and can also be extended to more complex calculi.
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are: Meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on nonterminating expressions and functions, mutual recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefix could be allowed.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. It is designed to perform automatic induction proofs and can also deal with partial functions. This paper provides a reconstruction of the corresponding logic and semantics using the standard treatment of undefinedness which adapts and improves the VeriFun-logic by allowing reasoning on nonterminating expressions and functions. Equality of expressions is defined as contextual equivalence based on observing termination in all closing contexts. The reconstruction shows that several restrictions of the VeriFun framework can easily be removed, by natural generalizations: mutual recursive functions, abstractions in the data values, and formulas with arbitrary quantifier prefix can be formulated. The main results of this paper are: an extended set of deduction rules usable in VeriFun under the adapted semantics is proved to be correct, i.e. they respect the observational equivalence in all extensions of a program. We also show that certain classes of theorems are conservative under extensions, like universally quantified equations. Also other special classes of theorems are analyzed for conservativity.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. This paper provides a reconstruction of the corresponding logic when partial functions are permitted. Typing is polymorphic for the definition of functions but monomorphic for terms in formulas. Equality of terms is defined as contextual equivalence based on observing termination in all contexts. The reconstruction also allows several generalizations of the functional language like mutual recursive functions and abstractions in the data values. The main results are: correctness of several program transformations for all extensions of a program, which have a potential usage in a deduction system. We also prove that universally quantified equations are conservative, i.e. if a universally quantified equation is valid w.r.t. a program P, then it remains valid if the program is extended by new functions and/or new data types.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study of an application, we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
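The classical core that the paper's algorithm extends (with left-commutativity, context variables, sorts, and binding-chains) is plain first-order syntactic unification. The sketch below is that classical core only, under an assumed term representation (variables as strings, function applications as `(symbol, args)` tuples); it is not the paper's algorithm.

```python
# Robinson-style first-order unification sketch. Term representation
# (assumed for illustration): a variable is a string, an application is
# a tuple (function_symbol, [argument_terms]).

def substitute(t, s):
    """Apply substitution s (a dict) to term t, resolving chains."""
    if isinstance(t, str):
        return substitute(s[t], s) if t in s else t
    f, args = t
    return (f, [substitute(a, s) for a in args])

def occurs(v, t, s):
    """Occurs check: does variable v occur in t under substitution s?"""
    t = substitute(t, s)
    if isinstance(t, str):
        return t == v
    return any(occurs(v, a, s) for a in t[1])

def unify(eqs):
    """Unify a list of term pairs; return a substitution dict or None."""
    subst = {}
    work = list(eqs)
    while work:
        l, r = work.pop()
        l, r = substitute(l, subst), substitute(r, subst)
        if l == r:
            continue
        if isinstance(l, str):
            if occurs(l, r, subst):
                return None              # occurs-check failure
            subst[l] = r
        elif isinstance(r, str):
            work.append((r, l))          # flip so the variable is on the left
        else:
            (f, fa), (g, ga) = l, r
            if f != g or len(fa) != len(ga):
                return None              # symbol clash
            work.extend(zip(fa, ga))
    return subst

# Unifying f(x, b) with f(a, y) yields {x -> a, y -> b}.
print(unify([(("f", ["x", ("b", [])]), ("f", [("a", []), "y"]))]))
```

The extensions the paper develops, equational reasoning modulo left-commutativity, context variables standing for term contexts, and many-sorted signatures, each replace one of the purely syntactic steps above (decomposition, the occurs check, variable binding) with a theory-specific rule while keeping this overall worklist structure.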