We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation, and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal-order reduction. As equational theory we use contextual equivalence, i.e., two terms are equal if, plugged into an arbitrary program context, their termination behaviour is the same. We use a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We develop several proof tools for proving the correctness of program transformations. We provide a context lemma for may- as well as must-convergence, which restricts the set of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. In contrast to other approaches, neither our syntax nor our semantics makes use of a heap for sharing expressions; instead, we represent shared expressions explicitly via letrec-bindings.
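The calculus in this abstract represents sharing through letrec-bindings rather than a heap, but the operational core of call-by-need (evaluate a bound expression at most once, then share the result among all uses) can be sketched with memoized thunks. The following Python sketch is purely illustrative; the class and names are ours, not the paper's.

```python
class Thunk:
    """A suspended computation evaluated at most once (call-by-need):
    the first force() runs the computation, later calls share the result."""
    def __init__(self, compute):
        self.compute = compute
        self.done = False
        self.value = None

    def force(self):
        # Corresponds to demanding the value of a letrec-bound expression.
        if not self.done:
            self.value = self.compute()
            self.done = True
            self.compute = None  # drop the closure once the value is shared
        return self.value

calls = []
shared = Thunk(lambda: calls.append("evaluated") or 42)
assert shared.force() == 42
assert shared.force() == 42      # a second demand reuses the shared result
assert calls == ["evaluated"]    # the body ran exactly once
```

The seq-operator mentioned in the abstract would, in this picture, correspond to calling force for its evaluation effect while discarding the value.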
Static analysis of non-strict functional programming languages makes use of set constants like Top, Inf, and Bot, denoting all expressions, all lists that do not end with Nil, and all non-terminating programs, respectively. We use a set language that permits unions, constructors, and recursive definitions of set constants with a greatest-fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of the subset relationship between co-inductively defined sets, using algorithms and results from tree automata. This shows decidability of the set-inclusion test required by certain strictness-analysis algorithms for lazy functional programming languages.
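The paper's decision procedure goes through tree automata and is EXPTIME-complete; as a rough illustration of the greatest-fixpoint flavour only, a naive "assume and verify" subset check for a tiny set language over Nil and Cons can be sketched as below. The encoding in defs and the function subset are our own illustrative assumptions; the naive check is neither complete for arbitrary unions nor efficient.

```python
# Each set constant is defined as a union of constructor alternatives;
# constructor arguments are again set constants (recursion is allowed).
defs = {
    "Top": [("Nil",), ("Cons", "Top", "Top")],  # all lists
    "Inf": [("Cons", "Top", "Inf")],            # lists never ending in Nil
    "Bot": [],                                  # the empty set
}

def subset(x, y, assumed=frozenset()):
    """Coinductive subset test: assume x <= y holds, then check that every
    alternative of x is covered by a matching alternative of y."""
    if (x, y) in assumed:
        return True                              # coinductive hypothesis
    assumed = assumed | {(x, y)}
    return all(
        any(
            alt2[0] == alt[0]                    # same constructor ...
            and all(subset(a, b, assumed)        # ... argument-wise subset
                    for a, b in zip(alt[1:], alt2[1:]))
            for alt2 in defs[y]
        )
        for alt in defs[x]
    )

assert subset("Bot", "Inf")      # the empty set is below everything
assert subset("Inf", "Top")      # the cyclic definition needs the hypothesis
assert not subset("Top", "Inf")  # Top contains Nil, Inf does not
```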
Extending the method of Howe, we establish a large class of untyped higher-order calculi, in particular calculi with call-by-need evaluation, in which similarity, also called applicative simulation, can be used as a proof tool for showing contextual preorder. The paper also demonstrates that Mann’s approach using an intermediate “approximation” calculus scales up well from a basic call-by-need non-deterministic lambda calculus to more expressive lambda calculi. That is, we demonstrate that, after transferring the contextual preorder of a non-deterministic call-by-need lambda calculus to its corresponding approximation calculus, Howe’s method can be applied to show that similarity is a precongruence. The transfer itself is not treated in this paper. The paper also proposes an optimization of the similarity test that cuts off redundant computations. Our results also apply to deterministic and non-deterministic call-by-value lambda calculi, and improve upon previous work in that we prove that only closed values, rather than all closed expressions, are required as arguments for similarity testing.
The paper examines challenges in effectively implementing the lender-of-last-resort function in the EU single financial market. It briefly highlights features of the EU financial landscape that could increase EU systemic financial risk, and briefly describes the complexities of the EU’s financial-stability architecture for preventing and resolving financial problems, including lender-of-last-resort operations. The paper examines how the lender-of-last-resort function might materialize during a systemic financial disturbance affecting more than one EU Member State, and identifies challenges and possible ways of enhancing the effectiveness of the existing architecture.
At the beginning of 2005, the Swiss „Capital Efficiency Group“ was awarded the prize for the “Most Innovative Asset-Backed Deal of 2004” by the readers of the well-known publication „Structured Finance International“, and was honoured by „The Banker“ for one of its „Deals of the Year“. The award recognised the development of „Preferred Pooled Shares“ (PREPS). What lies behind this construct? PREPS is a financial product of the „Capital Efficiency Group“, which has registered the proper name „PREPS“ as a trademark. PREPS is thus merely the name of one particular financial instrument. The designation PREPS is, however, used as a stand-in for a whole range of financial products through which investments are made in equity-like financial instruments of small and medium-sized enterprises (the German Mittelstand). „ge/mit“ and „equiNotes“, as well as some less well-known products, should be mentioned in this context. Each of these financial products differs in its detailed structure, but all are fundamentally based on the same underlying idea. The following discussion presents the structure and functioning of these financial products. Further sections then examine the economic background and the legal framework.
The stock corporation (Aktiengesellschaft) is the classic legal form of the large enterprise; as a legal institution it was developed specifically for the purpose of founding and managing large enterprises. The same holds for its forms of financing (the stock corporation as a „Kapitalsammelbecken“, a pool of capital), and not only for external financing through equity. The forms and particular features of debt financing of the large stock corporation are likewise explained by the fact that large amounts of capital are to be raised not by a single investor or a small group of investors out of their own funds, but directly or indirectly from the public, either because individuals do not have the necessary own funds at their disposal, or because considerations of risk diversification rule out financing from an individual's own funds. In that case the public must be approached. In debt financing this happens in two ways: through the involvement of a financial intermediary, typically a credit institution, to which investors entrust their money as deposits and which transforms those deposits into corporate loans, or through the capital-seeking company addressing the capital market directly, for example by issuing a bond. The granting of corporate loans by a credit institution, however, has traditionally not been associated with corporate financing by the public. On the contrary, bank-based corporate financing is regarded as the very opposite of public financing. Yet as early as 1986 Hartmut Schmidt pointed out that equity markets and credit markets functionally fulfil the same tasks. This view has prevailed.
From today's institutional-economics perspective, loan financing through a financial intermediary, for instance a credit institution that refinances itself, alongside its shareholders' equity, above all through the deposits of its customers, i.e. the public, has the same function as direct (bond) financing by the public; we return to this point shortly. The historical legal survey that follows shows that the development and use of large deposit-refinanced loans and the development of bond financing of the stock corporation began in Germany at roughly the same time.
The assumption that mankind is able to influence global or regional climate through the emission of greenhouse gases is widely discussed. This assumption is both very important and very obscure. Consequently, it is necessary to clarify which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, to what extent, and in which regions of the world. In addition, to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example, due to volcanic or solar activity) and the magnitude of stochastic climate noise. The usual tools of climatologists, general circulation models (GCMs), suffer from the problem that they are at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast, statistical methods as used in this study have the advantage of testing hypotheses directly against observational data; we thus focus on climate variability as it has actually occurred in the past. We apply two strategies of time-series analysis to the observed climate variables under consideration. First, each time series is split into its variation components; this procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
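The two separation strategies can be caricatured in a few lines of Python. This is not the study's actual statistical machinery, which is far more elaborate; the function names, the moving-average/least-squares choices, and the toy numbers are our own illustrative assumptions.

```python
def structure_separation(series, window=3):
    """'Structure-oriented' sketch: split a series into a smooth component
    (centred moving average, standing in for low-frequency variation) and
    a residual (high-frequency variation plus noise)."""
    n, half = len(series), window // 2
    smooth = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        smooth.append(sum(series[lo:hi]) / (hi - lo))
    return smooth, [x - s for x, s in zip(series, smooth)]

def cause_separation(climate, forcing):
    """'Cause-oriented' sketch: regress the climate series on one forcing
    series (one-variable least squares) and split it into an explained
    part and an unexplained part."""
    mf = sum(forcing) / len(forcing)
    mc = sum(climate) / len(climate)
    beta = sum((f - mf) * (c - mc) for f, c in zip(forcing, climate)) \
        / sum((f - mf) ** 2 for f in forcing)
    explained = [mc + beta * (f - mf) for f in forcing]
    return explained, [c - e for c, e in zip(climate, explained)]

temp = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4]     # toy temperature anomalies
trend, resid = structure_separation(temp)
assert all(abs(t + r - x) < 1e-9 for t, r, x in zip(trend, resid, temp))

co2 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]        # toy forcing series
expl, unexpl = cause_separation(temp, co2)
assert all(abs(e + u - x) < 1e-9 for e, u, x in zip(expl, unexpl, temp))
```

In both sketches the components add back up to the original series, mirroring the idea that the observed variability is partitioned into explained and unexplained parts.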
This paper makes a case for the future development of European corporate law through regulatory competition rather than EC legislation. For the first time, it is becoming legally possible for firms within the EU to select the national company law that they wish to govern their activities. A significant number of firms can be expected to exercise this freedom, and national legislatures can be expected to respond by seeking to make their company laws more attractive to firms. Whilst the UK is likely to be the single most successful jurisdiction in attracting firms, the presence of different models of corporate governance within Europe makes it quite possible that competition will result in specialisation rather than convergence, and that no Member State will come to dominate as Delaware has done in the US. Procedural safeguards in the legal framework will direct the selection towards laws which increase social welfare, as opposed simply to the welfare of those making the choice. Given that European legislators cannot be sure of the ‘optimal’ model for company law, the future of European company law-making would better be left with the Member States than take the form of harmonized legislation.