For some time now the buzzword 'transparency' has been bandied about in the media almost daily. For example, calls were made for greater transparency in the financial system in connection with developments in the Asian financial markets. But the call for greater transparency goes far beyond the financial markets. It is now regarded as a necessary part of "good governance" demanded of all economic policy makers. As the World Bank's chief economist Joseph Stiglitz put it: 'No one would dare say that they were against transparency (...): It would be like saying you were against motherhood or apple pie.' This paper focuses on transparency in monetary policy, in particular with respect to the European System of Central Banks.
This study uses Markov-switching models to evaluate the informational content of the term structure as a predictor of recessions in eight OECD countries. The empirical results suggest that for all countries the term spread is sensibly modelled as a two-state regime-switching process. Moreover, our simple univariate model turns out to be a filter that accurately transforms term spread changes into turning point predictions. The term structure is confirmed to be a reliable recession indicator. However, the results of probit estimations show that the Markov-switching filter does not significantly improve the forecasting ability of the spread.
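As an illustration of the kind of specification described (the notation here is ours, not the paper's), a two-state model lets the mean and variance of term-spread changes switch with an unobserved regime, and a probit then maps the spread into a recession probability:

    \Delta spread_t = \mu_{S_t} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2_{S_t}), \qquad S_t \in \{1, 2\},
    \Pr(S_t = j \mid S_{t-1} = i) = p_{ij}, \qquad
    \Pr(recession_{t+k} = 1 \mid spread_t) = \Phi(\alpha + \beta\, spread_t).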
Modeling short-term interest rates as following regime-switching processes has become increasingly popular. Theoretically, regime-switching models are able to capture rational expectations of infrequently occurring discrete events. Technically, they allow for potential time-varying stationarity. After discussing both aspects with reference to the recent literature, this paper provides estimations of various univariate regime-switching specifications for the German three-month money market rate and bivariate specifications additionally including the term spread. However, the main contribution is a multi-step out-of-sample forecasting competition. It turns out that forecasts are improved substantially when allowing for state-dependence. Particularly, the informational content of the term spread for future short rate changes can be exploited optimally within a multivariate regime-switching framework.
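Schematically (our notation, assumed for illustration), such a bivariate regime-switching forecast lets the short rate's response to the spread depend on the regime and weights the regime-specific forecasts by predicted regime probabilities:

    \Delta r_{t+1} = \alpha_{S_t} + \beta_{S_t}\, spread_t + u_{t+1}, \qquad u_{t+1} \sim N(0, \sigma^2_{S_t}),
    \widehat{\Delta r}_{t+1} = \sum_{j} \Pr(S_{t+1} = j \mid \mathcal{I}_t)\,(\alpha_j + \beta_j\, spread_t).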
Collateral, default risk, and relationship lending : an empirical study on financial contracting
(2000)
This paper provides further insights into the nature of relationship lending by analyzing the link between relationship lending, borrower quality and collateral as a key variable in loan contract design. We used a unique data set based on the examination of credit files of five leading German banks, thus relying on information actually used in the process of bank credit decision-making and contract design. In particular, bank internal borrower ratings serve to evaluate borrower quality, and the bank's own assessment of its housebank status serves to identify information-intensive relationships. Additionally, we used data on workout activities for borrowers facing financial distress. We found no significant correlation between ex ante borrower quality and the incidence or degree of collateralization. Our results indicate that the use of collateral in loan contract design is mainly driven by aspects of relationship lending and renegotiations. We found that relationship lenders or housebanks do require more collateral from their debtors, thereby increasing the borrower's lock-in and strengthening the banks' bargaining power in future renegotiation situations. This result is strongly supported by our analysis of the correlation between ex post risk, collateral and relationship lending since housebanks do more frequently engage in workout activities for distressed borrowers, and collateralization increases workout probability. First version: March 12, 1999
We analyze the role of different kinds of primary and secondary market interventions for the government's goal of maximizing its revenues from public bond issuances. Some of these interventions can be thought of as characteristics of a "primary dealer system". Overall, we find that a primary dealer system with a restricted number of participants may be useful in the case of only limited competition among sufficiently heterogeneous market makers. We further show that minimum secondary market turnover requirements for primary dealers with respect to bond sales seem in general more adequate than the definition of maximum bid-ask spreads or minimum turnover requirements with respect to bond purchases. Moreover, official price management operations are not able to completely substitute for a system of primary dealers. Finally, it should be noted that there is in general no reason for monetary compensation of primary dealers, since they already possess some privileges with respect to public bond auctions.
This paper considers the desirability of the observed tendency of central banks to adjust interest rates only gradually in response to changes in economic conditions. It shows, in the context of a simple model of optimizing private-sector behavior, that such inertial behavior on the part of the central bank may indeed be optimal, in the sense of minimizing a loss function that penalizes inflation variations, deviations of output from potential, and interest-rate variability. Sluggish adjustment characterizes an optimal policy commitment, even though no such inertia would be present in the case of a reputationless (Markovian) equilibrium under discretion. Optimal interest-rate feedback rules are also characterized, and shown to involve substantial positive coefficients on lagged interest rates. This provides a theoretical explanation for the numerical results obtained by Rotemberg and Woodford (1998) in their quantitative model of the U.S. economy.
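For illustration (notation assumed here, not taken from the paper), the objective and the inertial feedback rule discussed are typically written as

    \min\; E_0 \sum_{t=0}^{\infty} \beta^t \left[ \pi_t^2 + \lambda x_t^2 + \nu\,(i_t - \bar{i})^2 \right],
    i_t = \rho\, i_{t-1} + (1-\rho)\,(\bar{i} + \phi_\pi \pi_t + \phi_x x_t),

where a substantial positive coefficient \rho on the lagged interest rate captures the gradual adjustment that the paper shows to be optimal under commitment.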
This paper analyses two reasons why inflation may interfere with price adjustment so as to create inefficiencies in resource allocation at low rates of inflation. The first argument is that the higher the rate of inflation the lower the likelihood that downward nominal rigidities are binding (the Tobin argument) which implies a non-linear Phillips-curve. The second argument is that low inflation strengthens nominal price rigidities and thus impairs the flexibility of the price system resulting in a less efficient resource allocation. It is argued that inflation can be too low from a welfare point of view due to the presence of nominal rigidities, but the quantitative importance is an open question.
As inflation rates in the United States decline, analysts are asking if there are economic reasons to hold the rates at levels above zero. Previous studies of whether inflation "greases the wheels" of the labor market ignore inflation's potential for disrupting wage patterns in the same market. This paper outlines an institutionally based model of wage-setting that allows the benefits of inflation (downward wage flexibility) to be separated from disruptive uncertainty about the inflation rate (undue variation in relative prices). Our estimates, using a unique 40-year panel of wage changes made by large mid-western employers, suggest that low rates of inflation do help the economy adjust to changes in labor supply and demand. However, when inflation's disruptive effects are balanced against this benefit, the labor market justification for pursuing a positive long-term inflation goal effectively disappears.
Since 1990, a number of countries have adopted inflation targeting as their declared monetary strategy. Interpretations of the significance of this movement, however, have differed widely. To some, inflation targeting mandates the single-minded, rule-like pursuit of price stability without regard for other policy objectives; to others, inflation targeting represents nothing more than the latest version of cheap talk by central banks unable to sustain monetary commitments. Advocates of inflation targeting, including the adopting central banks themselves, have expressed the view that the efforts at transparency and communication in the inflation targeting framework grant the central bank greater short-run flexibility in pursuit of its long-run inflation goal. This paper assesses whether the talk that inflation targeting central banks engage in matters to central bank behavior, and which interpretation of the strategy is consistent with that assessment. We identify five distinct interpretations of inflation targeting, consistent with various strands of the current literature, and characterize those interpretations as movements between various strategies in a conventional model of time-inconsistency in monetary policy. The empirical implications of these interpretations are then compared to the responses to movements in inflation of central banks in three countries that adopted inflation targets in the early 1990s: the United Kingdom, Canada, and New Zealand. For all three, the evidence shows a break in the behavior of inflation consistent with a strengthened commitment to price stability. In no case, however, is there evidence that the strategy entails a single-minded pursuit of the inflation target. For the U.K., the results are consistent with the successful implementation of the optimal state-contingent rule, thereby combining flexibility and credibility; similarly, New Zealand's improved inflation performance was achieved without a discernible increase in counter-inflationary conservatism. The results for Canada are less clear, perhaps reflecting the broader fiscal and international developments affecting the Canadian economy during this period.
Derivatives usage in risk management by U.S. and German non-financial firms : a comparative survey
(1998)
This paper is a comparative study of the responses to the 1995 Wharton School survey of derivative usage among US non-financial firms and a 1997 companion survey on German non-financial firms. It is not a mere comparison of the results of both studies but a comparative study, drawing a comparable subsample of firms from the US study to match the sample of German firms on both size and industry composition. We find that German firms are more likely to use derivatives than US firms, with 78% of German firms using derivatives compared to 57% of US firms. Aside from this higher overall usage, the general pattern of usage across industry and size groupings is comparable across the two countries. In both countries, foreign currency derivative usage is most common, followed closely by interest rate derivatives, with commodity derivatives a distant third. Usage rates across all three classes of derivatives are higher for German firms than US firms. In contrast to the similarities, firms in the two countries differ notably on issues such as the primary goal of hedging, their choice of instruments, and the influence of their market view when taking derivative positions. These differences appear to be driven by the greater importance of financial accounting statements in Germany than the US and stricter German corporate policies of control over derivative activities within the firm. German firms also indicate significantly less concern about derivative related issues than US firms, which appears to arise from a more basic and simple strategy for using derivatives. Finally, among the derivative non-users, German firms tend to cite reasons suggesting derivatives were not needed whereas US firms tend to cite reasons suggesting a possible role for derivatives, but a hesitation to use them for some reason.
The purpose of the paper is to survey and discuss inflation targeting in the context of monetary policy rules. The paper provides a general conceptual discussion of monetary policy rules, attempts to clarify the essential characteristics of inflation targeting, compares inflation targeting to other monetary policy rules, and draws some conclusions for the monetary policy of the European System of Central Banks.
Despite the relevance of credit financing for the profit and risk situation of commercial banks only little empirical evidence on the initial credit decision and monitoring process exists due to the lack of appropriate data on bank debt financing. The present paper provides a systematic overview of a data set generated during the Center for Financial Studies research project on "Credit Management" which was designed to fill this empirical void. The data set contains a broad list of variables taken from the credit files of five major German banks. It is a random sample drawn from all customers which have engaged in some form of borrowing from the banks in question between January 1992 and January 1997 and which meet a number of selection criteria. The sampling design and data collection procedure are discussed in detail. Additionally, the project's research agenda is described and some general descriptive statistics of the firms in our sample are provided.
We studied information and interaction processes in six lending relationships between a universal bank and medium sized firms. The study is based on the credit files of the respective firms. If no problems occur in these lending relationships, bank monitoring is based mainly on cheap, retrospective and internal data. In case of distress, more expensive, prospective and external information is used. The level of monitoring and the willingness to renegotiate the lending relationship depends on what the lending officers can learn about the future prospects of the firm from the behaviour of the debtors. We identify both signalling and bonding activities. Such learning from past behaviour seems to allow monitoring at low cost, whereas the direct observation of the firm's investment outlook seems to be very costly. Also, too much knowledge about the firm's investments might leave the bank in a very strong bargaining position and distort investment incentives. Therefore, the traditional view of credit assessment as observation of the quality of a borrower's investment programme needs to be reconsidered.
Shares trading on the Bolsa Mexicana de Valores do not seem to react to company news. Using a sample of Mexican corporate news announcements from the period July 1994 through June 1996, this paper finds that there is nothing unusual about returns, volatility of returns, volume of trade or bid-ask spreads in the event window. This suggests one of five possibilities: our sample size is small; or markets are inefficient; or markets are efficient but the corporate news announcements are not value-relevant; or markets are efficient and corporate news announcements are value-relevant, but they have been fully anticipated; or markets are efficient and corporate news announcements are value-relevant, but unrestricted insider trading has caused prices to fully incorporate the information. The evidence supports the last hypothesis. The paper thus points towards a methodology for ranking emerging stock markets in terms of their market integrity, an approach that can be used with the limited data available in such markets.
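For reference, the standard market-model event-study calculation that such tests typically build on (notation ours, for illustration): abnormal returns are the residuals from a market model estimated outside the event window, and the test asks whether they, their volatility, or trading volume behave unusually inside it:

    AR_{it} = R_{it} - (\hat{\alpha}_i + \hat{\beta}_i R_{mt}), \qquad CAR_i(\tau_1, \tau_2) = \sum_{t=\tau_1}^{\tau_2} AR_{it}.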
No one seems to be neutral about the effects of EMU on the German economy. Roughly speaking, there are two camps: those who see the euro as the advent of a newly open, large, and efficient regime which will lead to improvements in European and in particular in German competitiveness; and those who see the euro as a weakening of the German commitment to price stability. From a broader macroeconomic perspective, however, it is clear that EMU is unlikely to cause directly any meaningful change either for the better in Standort Deutschland or for the worse in German price stability. There is ample evidence that changes in monetary regimes (so long as they do not involve leaving hyperinflation) induce little change in real economic structures such as labor or financial markets. Regional asymmetries of the sort found in the EU do not tend to translate into monetary differences. Most importantly, there is no good reason to believe that the ECB will behave any differently than the Bundesbank.
Where do we stand in the theory of finance? : a selective overview with reference to Erich Gutenberg
(1998)
For the past 20 years, financial markets research has concerned itself with issues related to the evaluation and management of financial securities in efficient capital markets and with issues of management control in incomplete markets. The following selective overview focuses on key aspects of the theory and empirical experience of management control under conditions of asymmetric information. The objective is to examine the validity of the recently advanced hypothesis on the myths of corporate control. The present overview is based on Gutenberg's position that there exists a discrete corporate interest, distinct and separate from the interests of the shareholders or other stakeholders. In the third volume of Grundlagen der BWL: Die Finanzen, published in 1969, this position of Gutenberg's is coupled with an appeal for a so-called financial equilibrium to be maintained. Not until recently have models grounded in capital market theory been developed which also allow for a firm's management to exercise autonomy vis-à-vis its stakeholders. This paper was prepared for the Erich Gutenberg centenary conference on December 12 and 13, 1997 in Cologne.
This study examines the relation of bank loan terms such as interest rates, collateral, and lines of credit to borrower risk as defined by the banks' internal credit ratings. The analysis is not restricted to a static view; it also incorporates rating transitions and their implications for this relation. Money illusion and phenomena linked with relationship banking emerge as important factors. The results show that riskier borrowers pay higher loan rate premiums and rely more on bank finance. Housebanks obtain more collateral and provide more finance. Owing to money illusion, loan rate premiums are relatively small in times of high market interest rates, whereas in times of low market interest rates they are relatively high. There was no evidence of an appropriate adjustment of loan terms to rating changes. However, bank market power, represented by a weighted average of the credit rating before and after a rating transition, serves to compensate for low earlier profits caused by interest rate smoothing. Classification: G21.
Banks increasingly recognize the need to measure and manage the credit risk of their loans on a portfolio basis. We address the subportfolio "middle market". Due to their specific lending policy for this market segment it is an important task for banks to systematically identify regional and industrial credit concentrations and reduce the detected concentrations through diversification. In recent years, the development of markets for credit securitization and credit derivatives has provided new credit risk management tools. However, in the addressed market segment adverse selection and moral hazard problems are quite severe. A potential successful application of credit securitization and credit derivatives for managing credit risk of middle market commercial loan portfolios depends on the development of incentive-compatible structures which solve or at least mitigate the adverse selection and moral hazard problems. In this paper we identify a number of general requirements and describe two possible solution concepts.
In recent years the lending business has come under considerable competitive pressure, and bank managers often express concern regarding its profitability vis-a-vis other activities. This paper tries to empirically identify factors that are able to explain the financial performance of bank lending activities. The analysis is based on the CFS data set that was collected in 1997 for 200 medium-sized firms. Two regressions are performed: the first is directed towards relationships between the interest rate premiums and various determining factors, the second aims at detecting relationships between those factors and the occurrence of several types of problems during the course of a credit engagement. Furthermore, the results of both regressions are used to test theoretical hypotheses regarding the impact of certain parameters on credit terms and distress probabilities. The findings are somewhat "puzzling": first, the rating is not as significant as expected. Second, credit contracts seem to be priced lower for situations with greater risks. Finally, the results do not fully support any of the three hypotheses that are often advanced to describe the role of collateral and covenants in credit contracts.
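Schematically, the two regressions described take a form like the following (the variable names are assumed here for illustration): a pricing equation for the interest rate premium and a probit for the occurrence of problems during the engagement:

    premium_i = \beta_0 + \beta_1\, rating_i + \beta_2\, collateral_i + \beta_3\, housebank_i + \beta_4\, size_i + \varepsilon_i,
    \Pr(problem_i = 1) = \Phi(\gamma_0 + \gamma_1\, rating_i + \gamma_2\, collateral_i + \gamma_3\, covenants_i + \cdots).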
The German financial market is often characterized as a bank-based system with strong bank-customer relationships. The corresponding notion of a housebank is closely related to the theoretical idea of relationship lending. It is the objective of this paper to provide a direct comparison between housebanks and "normal" banks as to their credit policy. Therefore, we analyze a new data set, representing a random sample of borrowers drawn from the credit portfolios of five leading German banks over a period of five years. We use credit-file data rather than industry survey data and, thus, focus the analysis on information that is directly related to actual credit decisions. In particular, we use bank-internal borrower rating data to evaluate borrower quality, and the bank's own assessment of its housebank status to control for information-intensive relationships.
This paper reviews the factors that will determine the shape of financial markets under EMU. It argues that financial markets will not be unified by the introduction of the euro. National central banks have a vested interest in preserving local idiosyncrasies (e.g. the Wechsels in Germany) and they might be allowed to do so by promoting the use of so-called tier two assets under the common monetary policy. Moreover, a host of national regulations (prudential and fiscal) will make assets expressed in euro imperfect substitutes across borders. Prudential control will also continue to be handled differently from country to country. In the long run these national idiosyncrasies cannot survive competitive pressures in the euro area. The year 1999 will thus see the beginning of a process of unification of financial markets that will be irresistible in the long run, but might still take some time to complete.
In this paper we analyze the relation between fund performance and market share. Using three performance measures we first establish that significant differences in the risk-adjusted returns of the funds in the sample exist. Thus, investors may react to past fund performance when making their investment decisions. We estimated a model relating past performance to changes in market share and found that past performance has a significant positive effect on market share. The results of a specification test indicate that investors react to risk-adjusted returns rather than to raw returns. This suggests that investors may be more sophisticated than is often assumed.
From the mid-seventies on, the central banks of most major industrial countries switched to monetary targeting. The Bundesbank was the first central bank to take this step, making the switch at the end of 1974. This changeover to monetary targeting was due to the difficulties which the Bundesbank - like other central banks - was facing in pursuing its original strategy, and which came to a head in the early seventies, when inflation escalated. A second factor was the collapse of the Bretton Woods system of fixed exchange rates, which created the necessary scope for national monetary targeting. Finally, the advance of monetarist ideas fostered the explicit turn towards monetary targets, although the Bundesbank did not implement these in a mechanistic way. Whereas the Bundesbank has adhered to its policy of monetary targeting up to the present, nowadays monetary targeting plays only a minor role worldwide. Many central banks have switched to the strategy of direct inflation targeting. Others favour a more discretionary approach or a policy which is geared to the exchange rate. In the academic debate, monetary targeting is often presented as an outdated approach which has long since lost its basis of stable money demand. These findings give rise to a number of questions: Has monetary targeting actually become outdated? Which role is played by the concrete design of this strategy, and, against this background, how easily can it be transferred to European monetary union? This paper aims to answer these questions, drawing on the particular experience which the Bundesbank has gained of monetary targeting. It seems appropriate to discuss monetary targeting by using a specific example, since this notion is not very precise. This applies, for example, to the money definition used, the way the target is derived, the stringency applied in pursuing the target and the monetary management procedure.
In this speech (given at the CFS research conference on the Implementation of Price Stability held at the Bundesbank, Frankfurt am Main, 10-12 September 1998), John Vickers discusses theoretical and practical issues relating to inflation targeting as practised in the United Kingdom during the past six years. After outlining the role of the Bank's Monetary Policy Committee, he considers the Committee's task from a theoretical perspective, before discussing the concept and measurement of domestically generated inflation.
Credit Unions are cooperative financial institutions specializing in the basic financial needs of certain groups of consumers. A distinguishing feature of credit unions is the legal requirement that members share a common bond. This organizing principle recently became the focus of national attention as the Supreme Court and the U.S. Congress took opposite sides in a controversy regarding the number of common bonds that could co-exist within the membership of a single credit union. Despite its importance, little research has been done into how common bonds affect how credit unions actually operate. We frame the issues with a simple theoretical model of credit-union formation and consolidation. To provide intuition into the flexibility of multiple-group credit unions in serving members, we simulate the model and present some comparative-static results. We then apply a semi-parametric empirical model to a large dataset drawn from federally chartered occupational credit unions in 1996 to investigate the effects of common bonds. Our results suggest that credit unions with multiple common bonds have higher participation rates than credit unions that are otherwise similar but whose membership shares a single common bond.
"In this paper, I analyse the conduct of business rules included in the Directive on Markets in Financial Instruments (MiFID) which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise similar rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID s provisions on conduct of business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In sectionIV, I draw some general conclusions on the re-regulation made under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct of business rules to contract law, the ISD rules of conduct and the role of harmonisation."
This paper proves correctness of Nocker's method of strictness analysis, implemented for Clean, which is an effective way for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nocker's strictness analysis. We reformulate Nocker's strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof for correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture, if no abstract constants are used in the analysis.
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account on this topic in terms of so-called 'lazy computation systems'. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we will show how the original work of Howe can be extended to cope with sharing. Moreover, we will demonstrate the application of our approach to the call-by-need lambda-calculus lambda-ND which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND-calculus.
In this paper we demonstrate how to relate the semantics given by the nondeterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO-actions in Haskell. We sketch the investigations of [Sab03b] where a lot of program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to achieve a FUNDIO-compatible Haskell-compiler, by turning off not yet investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO-interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences is defined, and a context lemma is proved. This makes it possible to study the semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show that a considerable set of program transformations is correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and shown to be correct w.r.t. contextual equivalence. Correctness of strictness optimizations also justifies correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output and also to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable. However, the previous algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: (i) for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; (ii) the exponent of periodicity is of order 2^{O(n)}, which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
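A small illustrative instance (ours, not taken from the papers): a context variable stands for a term with exactly one hole, so solving the equation below means finding a one-hole context that maps the argument to the right-hand side:

    X(a) \doteq f(g(a), b) \quad \text{is solved by} \quad X \mapsto f(g(\cdot), b),

since plugging a into the hole yields f(g(a), b).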
It is well known that first-order unification is decidable, whereas second-order and higher-order unification are undecidable. Bounded second-order unification (BSOU) is second-order unification under the restriction that only a bounded number of holes in the instantiating terms for second-order variables is permitted; however, the size of the instantiation is not restricted. In this paper, a decision algorithm for bounded second-order unification is described. This is the first non-trivial decidability result for second-order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second-order unification (MSOU), a specialization of BSOU, is in Σ₂ᵖ. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second-order unification is easier than context unification, whose decidability is currently an open question.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TeX does with mathematical formulae. The implementation makes heavy use of higher order functions. It has been implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered as a status report of the implementation as well as a reference manual for the resulting system.
The extraction of strictness information marks an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, which is an extension of the G-machine that has been adapted to the needs of abstract reduction.
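As a toy illustration of strictness analysis by abstract evaluation (using a two-point domain and a made-up mini-language rather than the abstract-reduction machinery and G#-machine described above; all names are invented for this sketch):

    -- Two-point abstract domain: Bot = definitely diverges, Top = no information.
    data Abs = Bot | Top deriving (Eq, Show)

    -- A tiny expression language, invented for this sketch.
    data Expr = Var String | Num Int | Add Expr Expr | If Expr Expr Expr

    lub :: Abs -> Abs -> Abs          -- least upper bound
    lub Bot Bot = Bot
    lub _   _   = Top

    glb :: Abs -> Abs -> Abs          -- greatest lower bound
    glb Top Top = Top
    glb _   _   = Bot

    -- Abstract evaluation of an expression under abstract argument values.
    aeval :: [(String, Abs)] -> Expr -> Abs
    aeval env (Var x)    = maybe Top id (lookup x env)
    aeval _   (Num _)    = Top
    aeval env (Add a b)  = aeval env a `glb` aeval env b   -- (+) is strict in both arguments
    aeval env (If c t e) = case aeval env c of
                             Bot -> Bot                     -- strict in the condition
                             Top -> aeval env t `lub` aeval env e

    -- f x y = if x then y else y + 1  is strict in x:
    -- feeding Bot for x drives the whole body to Bot.
    main :: IO ()
    main = print (aeval [("x", Bot), ("y", Top)]
                        (If (Var "x") (Var "y") (Add (Var "y") (Num 1))))
    -- prints Bot

In a real analyser the domain is richer (for example Wadler's four-point domain for lists, or abstract reduction over program graphs), but the shape of the argument is the same: propagate an abstract 'undefined' through the body and check whether the result is necessarily undefined.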
This paper describes context analysis, an extension to strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented based on abstract reduction which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
A partial rehabilitation of side-effecting I/O : non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done such that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics is a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic, memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
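A minimal, hypothetical Haskell sketch of such a memoryless non-deterministic call (the example and names are ours; whether it is safe is exactly the transformation-correctness question raised above):

    import System.IO.Unsafe (unsafePerformIO)
    import System.Random (randomRIO)

    -- A pure-looking, memoryless non-deterministic call: each application may
    -- return a different number, with no state visible to the program.
    diceRoll :: () -> Int
    diceRoll _ = unsafePerformIO (randomRIO (1, 6))
    {-# NOINLINE diceRoll #-}

    main :: IO ()
    main = print (diceRoll (), diceRoll ())
    -- Whether the two calls may yield different numbers depends on which
    -- transformations (sharing, common-subexpression elimination, inlining)
    -- the compiler applies, precisely the correctness issue discussed above.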
Automatic termination proofs for functional programming languages are an often challenged problem. Most work in this area is done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point when sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable; this is shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem that can be decided using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, and that D-matching is NP-complete.
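A small worked instance (ours, for illustration): under the two axioms above, the equation

    x * y \doteq a*c + b*c

is solved by the substitution x ↦ a + b, y ↦ c, since (a + b)*c = a*c + b*c follows from the second axiom.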
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and furthermore using monadic second-order variables that may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable, the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. It is shown that unification of stratified second-order terms is decidable by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and describe conditions for termination in the general case.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Revised Draft: January 2005, First Draft: December 8, 2004. The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.