Document Type
- Article (15671)
- Part of Periodical (2814)
- Working Paper (2350)
- Doctoral Thesis (2052)
- Preprint (1948)
- Book (1736)
- Part of a Book (1071)
- Conference Proceeding (750)
- Report (471)
- Review (165)
Language
- English (29218)
Keywords
- taxonomy (738)
- new species (441)
- morphology (173)
- Deutschland (142)
- Syntax (125)
- Englisch (120)
- distribution (116)
- biodiversity (100)
- Deutsch (98)
- inflammation (97)
Institute
- Medizin (5321)
- Physik (3715)
- Wirtschaftswissenschaften (1906)
- Frankfurt Institute for Advanced Studies (FIAS) (1654)
- Biowissenschaften (1539)
- Center for Financial Studies (CFS) (1485)
- Informatik (1390)
- Biochemie und Chemie (1085)
- Sustainable Architecture for Finance in Europe (SAFE) (1065)
- House of Finance (HoF) (708)
"In this paper, I analyse the conduct of business rules included in the Directive on Markets in Financial Instruments (MiFID) which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise similar rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID s provisions on conduct of business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In sectionIV, I draw some general conclusions on the re-regulation made under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct of business rules to contract law, the ISD rules of conduct and the role of harmonisation."
In contrast to the class A heat stress transcription factors (Hsfs) of plants, a considerable number of Hsfs assigned to classes B and C have no evident function as transcription activators on their own. In the course of my PhD work I showed that tomato HsfB1, a heat stress-induced member of the class B Hsf family, is a novel type of transcriptional coactivator in plants. Together with class A Hsfs, e.g. tomato HsfA1, it plays an important role in efficient transcription initiation during heat stress by forming a type of enhanceosome on fragments of Hsp promoters. Characterization of the architecture of hsp promoters led to the identification of novel, complex heat stress element (HSE) clusters, which are required for optimal synergistic interactions of HsfA1 and HsfB1. In addition, HsfB1 showed synergistic activation of the expression of a subset of viral and housekeeping promoters. The CaMV35S promoter, the most widely used constitutive promoter, turned out to be the most interesting candidate to study this effect in detail, because for most housekeeping promoters tested during this study the activators responsible for constitutive expression are not known, whereas for the CaMV35S promoter they are quite well known (the bZIP proteins TGA1/2). These proteins belong to the acidic activators, similar to class A Hsfs. Thus, on heat stress-inducible promoters HsfA1 or other class A Hsfs are the synergistic partners of HsfB1, whereas on housekeeping or viral promoters HsfB1 shows synergistic transcriptional activation in cooperation with the promoter-specific acidic activators, e.g. with TGA proteins on the 35S promoter. In agreement with this, binding sites for HsfB1 were identified in both housekeeping and 35S promoters. This study suggests that HsfB1 acts in the maintenance of transcription of a subset of housekeeping and viral genes during heat stress. The coactivator function of HsfB1 depends on a single lysine residue in the GRGK motif in its CTD. Since this motif is highly conserved among histones as an acetylation motif, especially in histones H2A and H4, it was suggested that the GRGK motif acts as a recruitment motif and, together with the other acidic activator, is responsible for corecruitment of a histone acetyltransferase (HAT). Therefore, the effect of mammalian CBP (a well-known HAT) and its plant ortholog (HAC1) was tested on the stimulation of the synergistic reporter gene activation obtained with HsfA1 and HsfB1. Both in plant and mammalian cells, CBP/HAC1 further stimulated the HsfA1/B1 synergistic effect. Corecruitment of HAC1 was demonstrated by in vitro pull-down assays, in which the NTD of HAC1 interacted specifically with both HsfA1 and HsfB1. Formation of a ternary complex between HsfA1, HsfB1 and CBP/HAC1 was shown by coimmunoprecipitation and electrophoretic mobility shift assays (EMSA). In conclusion, the work presented in my thesis provides a new model for transcriptional regulation during ongoing heat stress.
In an attempt to identify potential candidate molecules involved in the pathogenesis of endometriosis, a novel 2910 bp cDNA encoding a putative 411 amino acid protein, shrew-1, was discovered. Computational analysis predicted it to be an integral membrane protein with an outside-in transmembrane domain, but no homology to any known protein or domain could be identified. Antibodies raised against the putative open-reading-frame peptide of shrew-1 labelled a protein of ca. 48 kDa in extracts of shrew-1 mRNA-positive tissues and also detected ectopically expressed shrew-1. In the course of my PhD work, I confirmed the prediction that shrew-1 is indeed a transmembrane protein by expressing epitope-tagged shrew-1 in epithelial cells and analysing the transfected cells by surface biotinylation and immunoblots. Additionally, I could show that shrew-1 is targeted to E-cadherin-mediated adherens junctions and interacts with the E-cadherin-catenin complex in polarised MCF7 and MDCK cells, but not with the N-cadherin-catenin complex in non-polarised epithelial cells. A direct interaction of shrew-1 with beta-catenin could be shown in an in vitro pull-down assay. From these data it can be assumed that shrew-1 might play a role in the function and/or regulation of the dynamics of E-cadherin-mediated junctional complexes. In the next part of my thesis, I showed that stable overexpression of shrew-1 in normal MDCK cells causes changes in the morphology of the cells and turns them invasive. Furthermore, beta-catenin-dependent transcription was activated in these MDCK cells stably overexpressing shrew-1. It was probably the imbalance of shrew-1 protein at the adherens junctions that led to the misregulation of adherens junction-associated proteins, i.e. E-cadherin and beta-catenin. Caveolin-1 is another integral membrane protein that forms complexes with E-cadherin-beta-catenin complexes and also plays a role in the endocytosis of E-cadherin during junctional disruption. By immunofluorescence and biochemical studies, caveolin-1 was identified as another interacting partner of shrew-1. However, the functional relevance of this interaction is still not clear. In conclusion, shrew-1 interacts with key players of invasion and metastasis, E-cadherin and caveolin-1, suggesting a possible role in these processes and making it an interesting candidate for unravelling other unknown mechanisms involved in the complex process of invasion.
This paper proves correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a non-deterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called "lazy computation systems". Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO actions in Haskell. We sketch the investigations of [Sab03b], where many of the program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to obtain a FUNDIO-compatible Haskell compiler by turning off the not yet investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences is defined, and a context lemma is proved. This makes it possible to study a semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show a considerable set of program transformations to be correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and shown to be correct w.r.t. contextual equivalence. Correctness of strictness optimizations also justifies correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output, and also to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
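As a minimal Haskell sketch (illustrative only, not taken from the paper) of the kind of program such a semantics is meant to justify, consider a non-deterministic value embedded in a pure context via unsafePerformIO; whether a compiler may duplicate or share the call is exactly the question a FUNDIO-style semantics has to answer:

    import System.IO.Unsafe (unsafePerformIO)
    import System.Random (randomRIO)

    -- Illustrative sketch: a non-deterministic, memoryless value in a pure context.
    -- Under call-by-need the shared 'coin' is evaluated at most once, so the sum
    -- below is always even; a transformation that duplicates the unsafePerformIO
    -- call (e.g. aggressive inlining) could instead yield an odd result.
    coin :: Int
    coin = unsafePerformIO (randomRIO (0, 9))
    {-# NOINLINE coin #-}

    main :: IO ()
    main = print (coin + coin)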
Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable. However, the previous algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: (i) for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; (ii) the exponent of periodicity is 2^O(n), which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
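A minimal illustration (not taken from the paper) of why power expressions arise: for a context variable X, a unary function symbol f and a constant a, the equation

    \[ X(f(a)) \doteq f(X(a)) \]

has the unifiers

    \[ X \mapsto f^{n}([\cdot]), \qquad n \ge 1, \]

under each of which both sides become \( f^{n+1}(a) \). Solution sets of this periodic shape are what sharing, power expressions and bounds on the exponent of periodicity are meant to represent compactly.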
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
It is well known that first-order unification is decidable, whereas second-order and higher-order unification is undecidable. Bounded second-order unification (BSOU) is second-order unification under the restriction that only a bounded number of holes in the instantiating terms for second-order variables is permitted; however, the size of the instantiation is not restricted. In this paper, a decision algorithm for bounded second-order unification is described. This is the first non-trivial decidability result for second-order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second-order unification (MSOU), a specialization of BSOU, is in Sigma_2^p. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second-order unification is easier than context unification, whose decidability is currently an open question.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TeX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered as a status report of the implementation as well as a reference manual for the resulting system.
The extraction of strictness information marks an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, which is an extension of the G-machine that has been adapted to the needs of abstract reduction.
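As a minimal illustration (not from the paper) of the kind of fact such an analyser derives: an accumulating sum is strict in its accumulator, so a compiler armed with this information may force the accumulator eagerly instead of building a long chain of thunks.

    -- Illustrative sketch: 'sumAcc' is strict in both arguments, since the
    -- accumulator is eventually demanded and the list is pattern-matched.
    -- Given that strictness fact, forcing the accumulator with seq is safe
    -- and avoids a chain of suspended additions.
    sumAcc :: Int -> [Int] -> Int
    sumAcc acc []       = acc
    sumAcc acc (x : xs) = let acc' = acc + x
                          in acc' `seq` sumAcc acc' xs

    main :: IO ()
    main = print (sumAcc 0 [1 .. 1000000])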
This paper describes context analysis, an extension of strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus based on abstract reduction is presented which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
A partial rehabilitation of side-effecting I/O: non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done in such a way that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics is a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
Automatic termination proofs for functional programming languages are a frequently tackled problem. Most work in this area is done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point at which sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions and a second-order unification problem that can be transformed into a word-unification problem, which can be decided using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, hence D-matching is NP-complete.
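A small worked instance (illustrative, not from the paper): the equation

    \[ x \cdot y \stackrel{?}{=}_{D} \; a\cdot b + a\cdot c \]

has the D-unifier \( \{\, x \mapsto a,\; y \mapsto b + c \,\} \), since \( a\cdot(b+c) =_D a\cdot b + a\cdot c \) by the first axiom. Deciding whether such factorisations exist in general is what the transformation algorithm reduces to AC1- and word-unification problems.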
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and furthermore using monadic second-order variables that may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable, the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. It is shown that unification of stratified second-order terms is decidable by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and describe conditions for termination in the general case.
Lavater was admired and detested for his unconventional approach to theology and his rediscovery of physiognomy. He was an avid communicator and through his correspondence became known to almost all leading personalities of eighteenth-century Europe, such as Goethe, Wieland and Rousseau. The more than 21,000 letters in Lavater's estate in the Zentralbibliothek Zürich display the enormous thematic variety produced during a remarkable forty years of correspondence. This unique source material is now being published for the first time. IDC Publishers makes this collection available for research in disciplines as varied as theology, history, literature, the arts, the humanities and, above all, the history of eighteenth-century culture. Scope: 9,121 letters from Lavater; 12,302 letters to Lavater; 1,850 correspondents.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Revised Draft: January 2005, First Draft: December 8, 2004
The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
Taking shareholder protection seriously? Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
With Council Regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with its 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor’s assets and to avoid forum shopping. The Insolvency Regulation therefore provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the ‘liquidator’ in the other Member States. The Regulation also deals with important choice-of-law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
Alternative investments via hedge funds are gaining increasing importance in Germany. Only recently has this subject been taken up in the legal literature as well, which has resulted in greater product transparency. However, German investment law and, in particular, the special field of hedge funds is still dominated by practitioners. First, the present situation is outlined. In addition, a description of the current development is given, drawing on the practical experience of the author. Finally, the hedge fund regulation intended by the legislator at the beginning of 2004 is legally evaluated against this background.
In response to recent developments in the financial markets and the stunning growth of the hedge fund industry in the United States, policy makers, most notably the Securities and Exchange Commission (“SEC”), are turning their attention to the regulation, or lack thereof, of hedge funds. U.S. regulators have scrutinized the hedge fund industry on several occasions in the recent past without imposing substantial regulatory constraints. Will this time be any different? The focus of the regulators’ interest has shifted. Traditionally, they approached the hedge fund industry by focusing on systemic risk to and integrity of the financial markets. The current inquiry is almost exclusively driven by investor protection concerns. What has changed? First, since 2000, new kinds of investors have poured capital into hedge funds in the United States, facilitated by the “retailization” of hedge funds through the development of funds of hedge funds and the dismal performance of the stock market. Second, in a post-Enron era, regulators and policy makers are increasingly sensitive to investor protection concerns. On May 14 and 15, 2003, the SEC held for the first time a public roundtable discussion on the single topic of hedge funds. Among the investor protection concerns highlighted were: an increase in incidents of fraud, inadequate suitability determinations by brokers who market hedge fund interests to individual investors, conflicts of interest of managers who manage mutual funds and hedge funds side-by-side, a lack of transparency that hinders investors from making informed investment decisions, layering of fees, and unbounded discretion by managers in pricing private hedge fund securities. Although there has been discussion about imposing wide-ranging restrictions on hedge funds, such as reining in short selling, requiring disclosure of long/short positions and limiting leverage, such a response would be heavy-handed and probably unnecessary. The existing regulatory regime is largely adequate to address the most flagrant abuses. Moreover, as the hedge fund market further matures, it is likely that institutional investors will continue to weed out weak performers and mediocre or dishonest hedge fund managers. What is likely to emerge from the newest regulatory focus on investor protection is a measured response that would enhance the SEC’s enforcement and inspection authority, while leaving hedge funds’ inherent investment flexibility largely unfettered. A likely scenario, for example, might be a requirement that some, or possibly all, hedge fund sponsors register with the SEC as investment advisers. Today, most are exempt from registration, although more and more are registering to provide advice to public hedge funds and attract institutions. Registration would make it easier for the SEC to ferret out potential fraudsters in advance by reviewing the professional history of hedge fund operators, allow the SEC to bring administrative proceedings against hedge fund advisers for statutory violations and give the agency access to books and records that it does not have today. Other possible initiatives, including additional disclosure requirements for publicly offered hedge funds, are discussed below. This article addresses the question whether U.S. regulation of hedge funds is really taking a new direction. It (i) provides a brief overview of the current U.S. regulatory scheme, from which hedge funds are generally exempt, (ii) describes recent events in the United States that have contributed to regulators’ anxiety, (iii) examines the investor protection rationale for hedge fund regulation and considers whether these concerns do, in fact, merit increased regulation of hedge funds at this time, and (iv) considers the likelihood and possible scope of a potential regulatory response, principally by the SEC.
In an ideal world all investment products, including hedge funds, would be marketable to all investors. In this ideal world, all investors would fully understand the nature of the products and would be able to make an informed choice whether to invest. Of course the ideal world does not exist – the retail investment market is characterised by asymmetries of information. Product providers know most about the products on offer (or at least they should do). Investment advisers often know rather less than the provider but much more than their retail customers. Providers and intermediary advisers are understandably motivated by the desire to sell their products. There is therefore a risk that investment products will be mis-sold by investment advisers or mis-bought by ill-informed investors. This asymmetry of information is dealt with in most countries through regulation. However, the regulatory response in different countries is not necessarily the same. There are various ways in which protections can be applied, and it is important to understand that the cultural background and regulatory histories of countries flavour the way regulation has developed. This means (as will be explained in greater detail later) that some countries are better able than others to admit hedge funds to the retail sector. Following this Introduction, Section II looks at some key background issues. Section III then looks at some important questions raised by the retail hedge fund issue. Many of these are questions of balance. Balance lies at the heart of regulation of course – regulation must always balance the needs of investors with market efficiency. Understanding the “retail hedge fund” question requires particular attention to balance. Section IV then looks at the UK regime and how the FSA has answered the balance question. Section V offers some international perspectives. Section VI concludes. It will be seen that there is no obviously right answer to the question whether hedge fund products should be marketed to retail investors. Each regulator in each jurisdiction needs to make up its own mind on how to deal with the various issues and balances. It is evident, however, that internationally there is a move towards a greater variety of retail funds. There is nothing wrong with that, provided the regulators, and the retail customers they protect, understand sufficiently what sort of protection is, or is not, being offered in the regulatory regime.
While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998, LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions. Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally.
Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, the President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the Fall of 1999 two bills were introduced before the U.S. Congress directed at increasing hedge fund disclosure (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined. First, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful. And second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The very different topics included in this book provide insight into several fields of ongoing research in semantics.
It has been claimed that compelling evidence for the creation of a new form of matter has been found in Pb+Pb collisions at the SPS. We discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that so far none of the proposed signals, such as J/psi meson production/suppression, strangeness enhancement, dileptons, and directed flow, unambiguously shows that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come closer to pinning down the properties of hot, dense QCD matter from data.
We calculate the Gaussian radius parameters of the pion-emitting source in high energy heavy ion collisions, assuming a first order phase transition from a thermalized quark-gluon plasma (QGP) to a gas of hadrons. Such a model leads to a very long-lived dissipative hadronic rescattering phase which dominates the properties of the two-pion correlation functions. The radii are found to depend only weakly on the thermalization time tau_i, the critical temperature T_c (and thus the latent heat), and the specific entropy of the QGP. The dissipative hadronic stage enforces large variations of the pion emission times around the mean. Therefore, the model calculations suggest a rapid increase of R_out/R_side as a function of K_T if a thermalized QGP were formed.
The equilibration of hot and dense nuclear matter produced in the central cell of central Au+Au collisions at RHIC energies (sqrt s = 200 AGeV) is studied within a microscopic transport model. The pressure in the cell becomes isotropic at t approx 5 fm/c after the beginning of the collision. Within the next 15 fm/c the expansion of matter in the cell proceeds almost isentropically with an entropy per baryon ratio S/A approx 150, and the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model of an ideal hadron gas indicates that the time t approx 20 fm/c may be too short to reach the fully equilibrated state. In particular, the creation of long-lived resonance-rich matter in the cell decelerates the relaxation to chemical equilibrium. This resonance-abundant state can be detected experimentally after the thermal freeze-out of particles.
The yields of strange particles are calculated with the UrQMD model for proton-induced and Pb(158 AGeV)+Pb collisions and compared to experimental data. The yields are enhanced in central collisions as compared to proton-induced or peripheral Pb+Pb collisions. The enhancement is due to secondary interactions. Nevertheless, only a reduction of the quark masses or, equivalently, an increase of the string tension provides an adequate description of the large observed enhancement factors (WA97 and NA49). Furthermore, the yields of unstable strange resonances such as the Lambda*(1520) resonance or the phi meson are considerably affected by hadronic rescattering of the decay products.
The equilibration of hot and dense nuclear matter produced in the central region in central Au+Au collisions at sqrt s = 200 AGeV is studied within the microscopic transport model UrQMD. The pressure here becomes isotropic at t approx 5 fm/c. Within the next 15 fm/c the expansion of the matter proceeds almost isentropically with the entropy per baryon ratio S/A approx 150. During this period the equation of state in the (P, epsilon)-plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model (SM) of an ideal hadron gas reveals that the time of approx 20 fm/c may be too short to attain the fully equilibrated state. Particularly, the fractions of resonances are overpopulated in contrast to the SM values. The creation of such a long-lived resonance-rich state slows down the relaxation to chemical equilibrium and can be detected experimentally.
Enhanced antiproton production in Pb(160 AGeV)+Pb reactions: evidence for quark gluon matter?
(2000)
The centrality dependence of the antiproton per participant ratio is studied in Pb(160 AGeV)+Pb reactions. Antiproton production in collisions of heavy nuclei at the CERN/SPS seems considerably enhanced as compared to conventional hadronic physics, as given by the antiproton production rates in p+p collisions and antiproton annihilation in pbar+p reactions. This enhancement is consistent with the observation of strong in-medium effects in other hadronic observables and may be an indication of partial restoration of chiral symmetry.
The relaxation of hot nuclear matter to an equilibrated state in the central zone of heavy-ion collisions at energies from AGS to RHIC is studied within the microscopic UrQMD model. It is found that the system reaches the (quasi)equilibrium stage for the period of 10-15 fm/c. Within this time the matter in the cell expands nearly isentropically with the entropy to baryon ratio S/A = 150 - 170. Thermodynamic characteristics of the system at AGS and at SPS energies at the endpoints of this stage are very close to the parameters of chemical and thermal freeze-out extracted from the thermal fit to experimental data. Predictions are made for the full RHIC energy sqrt s = 200 AGeV. The formation of a resonance-rich state at RHIC energies is discussed.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Effects of a density dependent pole of the rho-meson propagator on dilepton spectra are studied for different systems and centralities at CERN energies.
Dilepton spectra are calculated within the microscopic transport model UrQMD and compared to data from the CERES experiment. The invariant mass spectra in the region between 300 MeV and 600 MeV depend strongly on the mass dependence of the rho meson decay width which is not sufficiently determined by the Vector Meson Dominance model. A consistent explanation of both the recent Pb+Au data and the proton induced data can be given without additional medium effects.
The hypothesis of local equilibrium (LE) in relativistic heavy ion collisions at energies from AGS to RHIC is checked in the microscopic transport model. We find that kinetic, thermal, and chemical equilibration of the expanding hadronic matter is nearly reached in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing bombarding energies. The origin of these deviations is traced to the irreversible multiparticle decays of strings and many-body (N >= 3) decays of resonances. The violations of LE indicate that the matter in the cell reaches a steady state instead of idealized equilibrium. The entropy density in the cell is only about 6% smaller than that of the equilibrium state.
Local equilibrium in heavy ion collisions. Microscopic model versus statistical model analysis
(1999)
The assumption of local equilibrium in relativistic heavy ion collisions at energies from 10.7 AGeV (AGS) up to 160 AGeV (SPS) is checked in the microscopic transport model. Dynamical calculations performed for a central cell in the reaction are compared to the predictions of the thermal statistical model. We find that kinetic, thermal and chemical equilibration of the expanding hadronic matter are nearly approached late in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing energy, 40 AGeV and 160 AGeV. These violations of local equilibrium indicate that a fully equilibrated state is not reached, not even in the central cell of heavy ion collisions at energies above 10 AGeV. The origin of these findings is traced to the multiparticle decays of strings and many-body decays of resonances.
This thesis presents investigations of the applicability of four methods for the selective introduction of radicals into DNA. EPR spectroscopy (electron paramagnetic resonance) was used for this purpose. The selective introduction and generation of radicals in DNA is necessary in order to study J-couplings in DNA. These investigations are an important starting point towards the long-term goal of determining the exchange coupling constant J in biradical DNA and correlating it with the charge-transfer rate constant kCT. Stable aromatic nitroxides: Simulations of room-temperature CW X-band EPR spectra of five different aromatic nitroxides, which are potential DNA intercalators, were carried out. The aromatic nitroxides show resolved hyperfine couplings, leading to the conclusion that the spin density is delocalised to a large extent, which permits the use of these compounds for measuring J-couplings in biradical DNA. Transient guanine radicals: Transient guanine radicals are generated selectively in DNA by the flash-quench technique, in which optically excitable ruthenium intercalators are used. Transient thymyl radicals from UV-irradiated 4'-pivaloyl thymidine: Photoinduced processes are investigated that are generated by irradiation of thymine nucleosides carrying the optically cleavable pivaloyl group at the 4' position. This nucleoside was specifically designed to inject electron holes into DNA. In this work it is shown that this compound can be used to selectively reduce a thymine base. Transient thymyl radicals generated by a newly modified thymine after UV irradiation: Photoinduced processes generated by irradiation of a similar thymidine nucleoside are investigated here. This thymidine nucleoside was modified by attaching the optically cleavable pivaloyl group to a side chain located at the C6 position of the thymine base. The thymine base was specifically designed to inject electrons into DNA. In this work it was confirmed that an excess electron can be transferred selectively to a thymine base.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Dilepton spectra are calculated with and without shifting the rho pole. Except for S+Au collisions our calculations reproduce the CERES data.
Quantum Molecular Dynamics (QMD) calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the intermediate reaction stages show that the event shapes are more complex and that equilibrium is reached only in very special cases, but not in event samples which cover a wide range of impact parameters, as is the case in experiments. The basic features of a new molecular dynamics model (UQMD) for heavy ion collisions from the Fermi energy regime up to the highest presently available energies are outlined.
We study the thermodynamic properties of infinite nuclear matter with the Ultrarelativistic Quantum Molecular Dynamics (URQMD) model, a semiclassical transport model, running in a box with periodic boundary conditions. It appears that the energy density rises faster than T^4 at high temperatures of T approx. 200 - 300 MeV. This indicates an increase in the number of degrees of freedom. Moreover, we have calculated direct photon production in Pb+Pb collisions at 160 GeV/u within this model. The direct photon slope from the microscopic calculation equals that from a hydrodynamical calculation without a phase transition in the equation of state of the photon source.
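For orientation, the standard ideal-gas relation behind that argument (a textbook result, not specific to this calculation) is

    \[ \varepsilon = \frac{\pi^2}{30}\, g_{\mathrm{eff}}\, T^4 , \]

so at fixed g_eff the energy density scales exactly as T^4; a faster rise, as seen in the box calculation, signals that the effective number of degrees of freedom g_eff itself grows with temperature (roughly 3 for a massless pion gas versus about 37-47.5 for a two- to three-flavour quark-gluon plasma).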