Cold target recoil ion momentum spectroscopy (COLTRIMS) has been employed to image the momentum distributions of continuum electrons liberated in the impact of slow He2+ on He and H2. The distributions were measured for fully determined motion of the nuclei, that is, as a function of the impact parameter and in a well-defined scattering plane. The single ionization (SI) of H2 leading to H2+ recoil ions in nondissociative states (He2+ + H2 -> He2+ + H2+ + e-) and the transfer ionization (TI) of H2 leading to H2 dissociation into two free protons (He2+ + H2 -> He+ + H+ + H+ + e-) were investigated. Similar measurements have been carried out for the He target, the corresponding atomic two-electron system, i.e. the single ionization (SI) (He2+ + He -> He2+ + He+ + e-) and the transfer ionization (TI) (He2+ + He -> He+ + He2+ + e-). These measurements have been exploited to understand the results obtained for the H2 target. In comparing the continuum electron momentum distributions for H2 with those for He, a high degree of similarity is observed. In the case of transfer ionization of H2, the electron momentum distributions generated for parallel and perpendicular molecular orientations revealed no orientation dependence. The in-scattering-plane electron momentum distributions for the transfer ionization of H2 by He2+ and for the transfer ionization of He by He2+ showed that the salient feature of these distributions for both collision systems is the appearance of two groups of electrons with different structures. In addition to the group of saddle electrons, forming two jets separated by a valley along the projectile axis, we find a new group of electrons moving with a velocity higher than the projectile velocity. These new fast forward electrons result from a narrow range of impact parameters and appear as an image saddle in the projectile frame. In contrast to the transfer ionization of He, the fast forward electron group disappears in the in-scattering-plane electron momentum distribution generated for the single ionization of He. Instead, another new group of electrons appears; these electrons exhibit a degree of backscattering and appear as an image saddle in the target frame. The structures shown by the saddle electrons are due to the quasi-molecular nature of the collision process. For the TI of H2, the TI of He and the SI of He, a pi-orbital shape of the electron momentum distribution is observed. This indicates the importance of the rotational coupling 2p-sigma -> 2p-pi in the initial promotion of the ground state, followed by further promotions to the continuum. The backward electrons as well as the fast forward electrons are not discussed in the theoretical literature at all. However, a number of clear indications of the existence of the backward and fast forward electrons can be seen in the experimental work of Abdallah et al. as well as in the theoretical calculations of Sidky et al. One might speculate that electrons which are promoted on the saddle for some time during the collision could finally swing around the He+ ion on the way out of the collision, i.e. either around the projectile in the forward direction, as in the TI case, forming the fast forward electrons, or around the recoil ion in the backward direction, as in the SI case, forming the backward electrons. This might be a result of the strong gradient, and hence the large acceleration, of the screened He+ potential.
Alzheimer’s disease (AD) is the most common neurodegenerative disorder worldwide, causing presenile dementia and the death of millions of people. During AD, damage and massive loss of brain cells occur. Alzheimer’s disease is genetically heterogeneous and may therefore represent a common phenotype that results from various genetic and environmental influences and risk factors. In approximately 10% of patients, changes of the genetic information were detected (gene mutations). In these cases, Alzheimer’s disease is inherited as an autosomal dominant trait (familial Alzheimer’s disease, FAD). In rare cases of familial Alzheimer’s disease (about 1-3%), mutations have been detected in genes on chromosomes 14 and 1 (encoding Presenilin 1 and 2, respectively), and on chromosome 21 encoding the amyloid precursor protein (APP), which is responsible for the release of the cell-damaging protein amyloid-beta (ß-amyloid, Aß). Familial forms of early-onset Alzheimer’s disease are rare; however, their importance extends far beyond their frequency, because they allow the identification of some of the critical pathogenetic pathways of the disease. All familial Alzheimer mutations share a common feature: they lead to an enhanced production of Aß, which is the major constituent of senile plaques in the brains of AD patients. New data indicate that Aß promotes neuronal degeneration. Therefore, one aim of this thesis was to elucidate the neurotoxic biochemical pathways induced by Aß, investigating the effect of the FAD Swedish APP double mutation (APPsw) on oxidative stress-induced cell death mechanisms. This mutation results in a three- to sixfold increased Aß production compared to wild-type APP (APPwt). As cell models, the neuronal PC12 (rat pheochromocytoma) and the HEK (human embryonic kidney 293) cell lines were used, which had been transfected with human wild-type APP or human APP containing the Swedish double mutation. These cell models offer two important advantages. First, compared to experiments applying high, micromolar concentrations of Aß extracellularly to cells, PC12 APPsw cells secrete low Aß levels similar to the situation in FAD brains. Thus, this cell model represents a very suitable approach to elucidating the AD-specific cell death pathways under near-physiological conditions. Second, these two cell lines (PC12 and HEK APPwt and APPsw) with different production levels of Aß additionally allow the study of dose-dependent effects of Aß. The results obtained here provide evidence for the enhanced cell vulnerability caused by the Swedish APP mutation and elucidate the cell death mechanism probably initiated by intracellularly produced Aß. It seems likely that increased production of Aß at physiological levels primes APPsw PC12 cells to undergo cell death only after additional stress, while chronically high levels in HEK cells already lead to enhanced basal apoptotic levels. Crucial effects of the Swedish APP mutation include impairments of cellular energy metabolism, affecting mitochondrial membrane potential and ATP levels, as well as the additional activation of caspase 2, caspase 8 and JNK in response to oxidative stress. Thereby, the following model can be proposed: PC12 cells harboring the Swedish APP mutation have a reduced energy metabolism compared to APPwt or control cells. However, this effect does not lead to enhanced basal apoptotic levels of cultured cells.
An exposure of PC12 cells to oxidative stress leads to mitochondrial dysfunction, e.g., a decrease in mitochondrial membrane potential and a depletion of ATP. The consequence is the activation of the intrinsic apoptotic pathway, releasing cytochrome c and Smac and resulting in the activation of caspase 9. This effect is amplified by the overexpression of APP, since both APPsw and APPwt PC12 cells show enhanced cytochrome c and Smac release as well as enhanced caspase 9 activity compared to vector-transfected controls. In APPsw PC12 cells a parallel pathway is additionally engaged: due to reduced ATP levels or enhanced Aß production, JNK is activated. Furthermore, the extrinsic apoptotic pathway is enhanced, since caspase 8 and caspase 2 activation was clearly increased by the Swedish APP mutation. Both pathways may then converge by activating the effector enzyme, caspase 3, and executing cell death. In addition, caspase-independent effects also need to be considered. One possibility could be the involvement of AIF, since AIF expression was found to be induced by the Swedish APP mutation. In APPsw HEK cells, chronically high Aß levels lead to enhanced apoptotic levels and reduced mitochondrial membrane potential and ATP levels even under basal conditions. Summarizing, a hypothetical sequence of events is proposed for our cell model, linking FAD, Aß production, JNK activation and mitochondrial dysfunction with the caspase pathway and neuronal loss. The brain has a high metabolic rate and is exposed to gradually rising levels of oxidative stress during life. In Swedish FAD patients the levels of oxidative stress are increased in the temporal inferior cortex. This study, using a cell model mimicking the in vivo situation in AD brains, indicates that increased Aß production and the gradual rise of oxidative stress throughout life probably converge on a final common pathway of increased vulnerability of neurons from FAD patients to apoptotic cell death. Presenilin (PS) 1 is an aspartyl protease involved in the gamma-secretase-mediated proteolysis of the amyloid-ß protein (Aß), the major constituent of senile plaques in the brains of Alzheimer’s disease (AD) patients. Recent studies have suggested an additional role for presenilin proteins in the apoptotic cell death observed in AD. Since PS1 is proteolytically cleaved by caspase 3, it has been proposed that the resulting C-terminal fragment of PS1 (PSCas) could play a role in signal transduction during apoptosis. Moreover, it was shown that mutant presenilins causing early-onset familial Alzheimer's disease (FAD) may render cells vulnerable to apoptosis. The mechanism by which PS1 regulates apoptotic cell death is not yet understood. Therefore, one aim of the present study was to clarify the involvement of PS1 in the proteolytic cascade of apoptosis and whether the cleavage of PS1 by caspase 3 has a regulatory function. Here it is demonstrated that both PS1 and PSCas lead to a reduced vulnerability of PC12 and Jurkat cells to different apoptotic stimuli. However, a mutation at the caspase 3 recognition site (D345A/PSmut), which inhibits cleavage of PS1 by caspase 3, showed no difference in the effect of PS1 or PSCas towards apoptotic stimuli. This suggests that proteolysis of PS1 by caspase 3 is not a determinant, but only a secondary effect during apoptosis. Since several FAD mutations distributed throughout the whole PS1 gene lead to enhanced apoptosis, an abolishment of the antiapoptotic effect of PS1 might contribute to the massive neurodegeneration at an early age in FAD patients.
Here, the regulatory properties of PS1 in apoptosis may be mediated not through a caspase 3-dependent cleavage and the generation of PSCas, but rather through the interaction of PS1 with other proteins involved in apoptosis.
The German financial market is often characterized as a bank-based system with strong bank-customer relationships. The corresponding notion of a housebank is closely related to the theoretical idea of relationship lending. It is the objective of this paper to provide a direct comparison between housebanks and "normal" banks as to their credit policy. Therefore, we analyze a new data set, representing a random sample of borrowers drawn from the credit portfolios of five leading German banks over a period of five years. We use credit-file data rather than industry survey data and, thus, focus the analysis on information that is directly related to actual credit decisions. In particular, we use bank-internal borrower rating data to evaluate borrower quality, and the bank's own assessment of its housebank status to control for information-intensive relationships.
This paper reviews the factors that will determine the shape of financial markets under EMU. It argues that financial markets will not be unified by the introduction of the euro. National central banks have a vested interest in preserving local idiosyncrasies (e.g. the Wechsel in Germany), and they might be allowed to do so by promoting the use of so-called tier two assets under the common monetary policy. Moreover, a host of national regulations (prudential and fiscal) will make assets expressed in euro imperfect substitutes across borders. Prudential control will also continue to be handled differently from country to country. In the long run these national idiosyncrasies cannot survive competitive pressures in the euro area. The year 1999 will thus see the beginning of a process of unification of financial markets that will be irresistible in the long run, but might still take some time to complete.
In this paper we analyze the relation between fund performance and market share. Using three performance measures, we first establish that significant differences exist in the risk-adjusted returns of the funds in the sample. Thus, investors may react to past fund performance when making their investment decisions. We then estimate a model relating past performance to changes in market share and find that past performance has a significant positive effect on market share. The results of a specification test indicate that investors react to risk-adjusted returns rather than to raw returns. This suggests that investors may be more sophisticated than is often assumed.
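The abstract does not spell out the specification, but the kind of relation estimated can be sketched as follows (notation mine): a risk-adjusted performance measure such as Jensen's alpha is obtained for each fund i from

R_{i,t} - R_{f,t} = \alpha_i + \beta_i (R_{m,t} - R_{f,t}) + \varepsilon_{i,t},

and the change in market share is then related to lagged performance,

\Delta s_{i,t} = \gamma_0 + \gamma_1 \hat{\alpha}_{i,t-1} + u_{i,t}.

The reported finding corresponds to \gamma_1 > 0, with the risk-adjusted \hat{\alpha}, rather than the raw return, being the regressor favoured by the specification test.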
From the mid-seventies on, the central banks of most major industrial countries switched to monetary targeting. The Bundesbank was the first central bank to take this step, making the switch at the end of 1974. This changeover to monetary targeting was due to the difficulties which the Bundesbank - like other central banks - was facing in pursuing its original strategy, and which came to a head in the early seventies, when inflation escalated. A second factor was the collapse of the Bretton Woods system of fixed exchange rates, which created the necessary scope for national monetary targeting. Finally, the advance of monetarist ideas fostered the explicit turn towards monetary targets, although the Bundesbank did not implement these in a mechanistic way. Whereas the Bundesbank has adhered to its policy of monetary targeting up to the present, nowadays monetary targeting plays only a minor role worldwide. Many central banks have switched to the strategy of direct inflation targeting. Others favour a more discretionary approach or a policy which is geared to the exchange rate. In the academic debate, monetary targeting is often presented as an outdated approach which has long since lost its basis of stable money demand. These findings give rise to a number of questions: Has monetary targeting actually become outdated? What role is played by the concrete design of this strategy, and, against this background, how easily can it be transferred to European monetary union? This paper aims to answer these questions, drawing on the particular experience which the Bundesbank has gained of monetary targeting. It seems appropriate to discuss monetary targeting by using a specific example, since this notion is not very precise. This applies, for example, to the money definition used, the way the target is derived, the stringency applied in pursuing the target and the monetary management procedure.
In this speech (given at the CFS research conference on the Implementation of Price Stability, held at the Bundesbank, Frankfurt am Main, 10-12 September 1998), John Vickers discusses theoretical and practical issues relating to inflation targeting as used in the United Kingdom during the past six years. After outlining the role of the Bank's Monetary Policy Committee, he considers the Committee's task from a theoretical perspective, before discussing the concept and measurement of domestically generated inflation.
Credit Unions are cooperative financial institutions specializing in the basic financial needs of certain groups of consumers. A distinguishing feature of credit unions is the legal requirement that members share a common bond. This organizing principle recently became the focus of national attention as the Supreme Court and the U.S. Congress took opposite sides in a controversy regarding the number of common bonds that could co-exist within the membership of a single credit union. Despite its importance, little research has been done into how common bonds affect how credit unions actually operate. We frame the issues with a simple theoretical model of credit-union formation and consolidation. To provide intuition into the flexibility of multiple-group credit unions in serving members, we simulate the model and present some comparative-static results. We then apply a semi-parametric empirical model to a large dataset drawn from federally chartered occupational credit unions in 1996 to investigate the effects of common bonds. Our results suggest that credit unions with multiple common bonds have higher participation rates than credit unions that are otherwise similar but whose membership shares a single common bond.
"In this paper, I analyse the conduct of business rules included in the Directive on Markets in Financial Instruments (MiFID) which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise similar rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID s provisions on conduct of business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In sectionIV, I draw some general conclusions on the re-regulation made under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct of business rules to contract law, the ISD rules of conduct and the role of harmonisation."
In contrast to the class A heat stress transcription factors (Hsfs) of plants, a considerable number of Hsfs assigned to classes B and C have no evident function as transcription activators on their own. In the course of my PhD work I showed that tomato HsfB1, a heat stress-induced member of the class B Hsf family, is a novel type of transcriptional coactivator in plants. Together with class A Hsfs, e.g. tomato HsfA1, it plays an important role in efficient transcription initiation during heat stress by forming a type of enhanceosome on fragments of Hsp promoters. Characterization of the architecture of hsp promoters led to the identification of novel, complex heat stress element (HSE) clusters, which are required for optimal synergistic interactions of HsfA1 and HsfB1. In addition, HsfB1 showed synergistic activation of the expression of a subset of viral and house-keeping promoters. The CaMV35S promoter, the most widely used constitutive promoter, turned out to be the most interesting candidate for studying this effect in detail, because for most house-keeping promoters tested during this study the activators responsible for constitutive expression are not known, whereas in the case of the CaMV35S promoter they are quite well known (the bZip proteins TGA1/2). These proteins belong to the acidic activators, similar to class A Hsfs. On heat stress-inducible promoters, HsfA1 or other class A Hsfs are the synergistic partners of HsfB1, whereas on house-keeping or viral promoters HsfB1 shows synergistic transcriptional activation in cooperation with the promoter-specific acidic activators, e.g. with TGA proteins on the 35S promoter. In agreement with this, binding sites for HsfB1 were identified in both house-keeping promoters and the 35S promoter. This study suggests that HsfB1 acts in the maintenance of transcription of a subset of house-keeping and viral genes during heat stress. The coactivator function of HsfB1 depends on a single lysine residue in the GRGK motif in its CTD. Since this motif is highly conserved among histones as the acetylation motif, especially in histones H2A and H4, it was suggested that the GRGK motif acts as a recruitment motif and, together with the other acidic activator, is responsible for the corecruitment of a histone acetyl transferase (HAT). Therefore, the effect of mammalian CBP (a well-known HAT) and its plant ortholog HAC1 on the stimulation of the synergistic reporter gene activation obtained with HsfA1 and HsfB1 was tested. Both in plant and in mammalian cells, CBP/HAC1 further stimulated the HsfA1/B1 synergistic effect. Corecruitment of HAC1 was proven by in vitro pull-down assays, where the NTD of HAC1 interacted specifically with both HsfA1 and HsfB1. Formation of a ternary complex between HsfA1, HsfB1 and CBP/HAC1 was shown via coimmunoprecipitation and electrophoretic mobility shift assays (EMSA). In conclusion, the work presented in my thesis provides a new model for transcriptional regulation during ongoing heat stress.
In an attempt to search for potential candidate molecules involved in the pathogenesis of endometriosis, a novel 2910 bp cDNA encoding a putative 411 amino acid protein, shrew-1, was discovered. By computational analysis it was predicted to be an integral membrane protein with an outside-in transmembrane domain, but no homology with any known protein or domain could be identified. Antibodies raised against the putative open-reading-frame peptide of shrew-1 labelled a protein of ca. 48 kDa in extracts of shrew-1 mRNA-positive tissues and also detected ectopically expressed shrew-1. In the course of my PhD work, I confirmed the prediction that shrew-1 is indeed a transmembrane protein by expressing epitope-tagged shrew-1 in epithelial cells and analysing the transfected cells by surface biotinylation and immunoblots. Additionally, I could show that shrew-1 is able to target to E-cadherin-mediated adherens junctions and interacts with the E-cadherin-catenin complex in polarised MCF7 and MDCK cells, but not with the N-cadherin-catenin complex in non-polarised epithelial cells. A direct interaction of shrew-1 with beta-catenin could be shown in an in vitro pull-down assay. From these data it can be assumed that shrew-1 might play a role in the function and/or regulation of the dynamics of E-cadherin-mediated junctional complexes. In the next part of my thesis, I showed that stable overexpression of shrew-1 in normal MDCK cells causes changes in the morphology of the cells and turns them invasive. Furthermore, transcription by beta-catenin was activated in these MDCK cells stably overexpressing shrew-1. It was probably the imbalance of shrew-1 protein at the adherens junctions that led to the misregulation of adherens junction-associated proteins, i.e. E-cadherin and beta-catenin. Caveolin-1 is another integral membrane protein that forms complexes with E-cadherin-beta-catenin complexes and also plays a role in the endocytosis of E-cadherin during junctional disruption. By immunofluorescence and biochemical studies, caveolin-1 was identified as another interacting partner of shrew-1. However, the functional relevance of this interaction is still not clear. In conclusion, it can be said that shrew-1 interacts with key players of invasion and metastasis, E-cadherin and caveolin-1, suggesting a possible role in these processes and making it an interesting candidate for unravelling other unknown mechanisms involved in the complex process of invasion.
This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of the correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
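To give a flavour of strictness analysis by abstract reduction, here is a minimal Haskell sketch over a two-point abstract domain; it is purely illustrative and not Nöcker's algorithm, which works on abstract graph reduction with cycle detection and richer abstract constants such as Top and Inf.

-- Bot abstracts 'definitely undefined', Top abstracts 'any value'.
data AbsVal = Bot | Top
  deriving (Eq, Show)

-- Addition is strict in both arguments: Bot anywhere gives Bot.
absAdd :: AbsVal -> AbsVal -> AbsVal
absAdd Bot _ = Bot
absAdd _ Bot = Bot
absAdd _ _   = Top

-- A conditional is undefined if its condition is; otherwise either
-- branch may be taken, so the two branch values are joined.
absIf :: AbsVal -> AbsVal -> AbsVal -> AbsVal
absIf Bot _ _ = Bot
absIf _   t e = lub t e
  where lub Bot Bot = Bot
        lub _   _   = Top

-- Abstract version of: f x y = if x == 0 then y else x + y.
-- The condition x == 0 is exactly as defined as x itself.
absF :: AbsVal -> AbsVal -> AbsVal
absF x y = absIf x y (absAdd x y)

main :: IO ()
main = do
  print (absF Bot Top)  -- Bot: f is strict in its first argument
  print (absF Top Bot)  -- Bot: f is strict in its second argument too

A function is reported strict in an argument when plugging Bot into that position abstractly reduces to Bot, which licenses evaluating that argument before the call.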
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called "lazy computation systems". Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO actions in Haskell. We sketch the investigations of [Sab03b], where many of the program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to obtain a FUNDIO-compatible Haskell compiler by turning off the not yet investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO-interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences is defined, and a context lemma is proved. This makes it possible to study a semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams enables a considerable set of program transformations to be shown correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and shown to be correct w.r.t. contextual equivalence. Correctness of strictness optimizations also justifies the correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output, and also to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
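To fix intuitions, the shape of such a core syntax can be rendered as a Haskell data type; this is a hypothetical sketch of the ingredients named above (case, letrec, constructors, an IO-interface), not the paper's concrete grammar.

type Var = String
type Con = String

data Pat = Pat Con [Var]            -- flat constructor pattern
  deriving Show

data Expr
  = V Var                            -- variable
  | Lam Var Expr                     -- abstraction
  | App Expr Expr                    -- application
  | ConApp Con [Expr]                -- constructor application
  | Case Expr [(Pat, Expr)]          -- case analysis over constructors
  | Letrec [(Var, Expr)] Expr        -- recursive bindings (call-by-need sharing)
  | IOop Expr                        -- IO-interface: output a value, obtain an input
  deriving Show

Small-step normal order reduction then operates on such expressions, with IOop steps contributing the input-output behavior that the contextual equivalence observes.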
Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable. However, the previous algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: (i) for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; (ii) the exponent of periodicity is of order 2^O(n), which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows the decidability of stratified context unification as well as of the satisfiability of one-step rewrite constraints.
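As a small illustration of the problem (example mine, not from the paper): a context variable X denotes a term with exactly one hole \bullet, and the single equation

X(a) \doteq f(a, a)

has exactly the two unifiers X \mapsto f(\bullet, a) and X \mapsto f(a, \bullet). Context unification asks whether a system of such equations has a solution; stratification restricts, for each variable, the sequence of context variables above its occurrences to be the same everywhere, and it is this fragment that the algorithm SCU decides.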
It is well known that first-order unification is decidable, whereas second-order and higher-order unification are undecidable. Bounded second-order unification (BSOU) is second-order unification under the restriction that only a bounded number of holes in the instantiating terms for second-order variables is permitted; the size of the instantiation, however, is not restricted. In this paper, a decision algorithm for bounded second-order unification is described. This is the first non-trivial decidability result for second-order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second-order unification (MSOU), a specialization of BSOU, is in Sigma_2^p. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second-order unification is easier than context unification, whose decidability is currently an open question.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi file, just as TeX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report on the implementation as well as a reference manual for the resulting system.
The extraction of strictness information is an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction, we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, an extension of the G-machine that has been adapted to the needs of abstract reduction.
This paper describes context analysis, an extension of strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented based on abstract reduction which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
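For orientation, Wadler's four-point domain for lists over a flat element type can be written down directly; the following is a hypothetical Haskell rendering (constructor names are mine):

-- Ordered Bot < Inf < BotIn < TopIn via the derived Ord instance.
data FourPoint
  = Bot    -- the undefined list
  | Inf    -- infinite lists and lists ending in an undefined tail
  | BotIn  -- finite lists that may contain undefined elements
  | TopIn  -- all lists: no information
  deriving (Eq, Ord, Show)

data TwoPoint = B | T  -- flat result domain: undefined / anything
  deriving (Eq, Ord, Show)

-- length forces the whole spine but none of the elements:
absLength :: FourPoint -> TwoPoint
absLength v = if v <= Inf then B else T

-- sum forces the spine and every element:
absSum :: FourPoint -> TwoPoint
absSum v = if v <= BotIn then B else T

Context analysis generalizes this picture by allowing infinitely many such abstract values and by propagating a demanded result value backwards to the arguments.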
A partial rehabilitation of side-effecting I/O: non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done in such a way that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics is a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic, memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
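A hypothetical Haskell fragment of the kind of memoryless, non-deterministic call meant here (illustrative only; in standard Haskell this use of unsafePerformIO has no guaranteed semantics, and transformations such as common subexpression elimination may change its behaviour, which is exactly the correctness issue discussed above):

import System.IO.Unsafe (unsafePerformIO)
import System.Random (randomRIO)

-- Each (unshared) call may yield a different value: a memoryless,
-- non-deterministic function call.
coin :: () -> Int
coin () = unsafePerformIO (randomRIO (0, 1))

main :: IO ()
main = print (coin (), coin ())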
Automatic termination proofs for functional programming languages are an often-tackled problem. Most work in this area has been done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point where sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
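A tiny Haskell example (mine, not from the paper) of the phenomenon addressed: the subexpression ones has no normal form, yet the program as a whole terminates under lazy evaluation, so any useful termination ordering must cope with such non-terminating subexpressions.

-- 'ones' is an infinite list with no normal form; 'take' only forces
-- the first five cells of its spine, so 'main' terminates.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = print (take 5 ones)  -- prints [1,1,1,1,1]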
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem decidable by Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time; hence D-matching is NP-complete.
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and furthermore from monadic second-order variables, which may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable, the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. It is shown that unification of stratified second-order terms is decidable by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and we describe conditions for termination in the general case.
Lavater was admired and detested for his unconventional approach to theology and his rediscovery of physiognomy. He was an avid communicator and, through his correspondence, became known to almost all leading personalities of eighteenth-century Europe, such as Goethe, Wieland and Rousseau. The more than 21,000 letters in Lavater's estate in the Zentralbibliothek Zürich display the enormous thematic variety produced during a remarkable forty years of correspondence. This unique source material is now being published for the first time. IDC Publishers makes this collection available for research in such various disciplines as theology, history, literature, arts, humanities and, above all, the history of eighteenth-century culture. Scope: 9,121 letters from Lavater; 12,302 letters to Lavater; 1,850 correspondents.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Revised Draft: January 2005; First Draft: December 8, 2004. The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
Taking shareholder protection seriously? Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
With Council Regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor's assets and to avoid forum shopping. The Insolvency Regulation, therefore, provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the 'liquidator' in the other Member States. The Regulation also deals with important choice of law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
Increasingly, alternative investments via hedge funds are gaining importance in Germany. Just recently, this subject was taken up in the legal literature too, which resulted in higher product transparency. However, German investment law and, particularly, the special segment of hedge funds is still a field dominated by practitioners. First, the present situation is outlined. In addition, a description of the current development is given, into which the practical knowledge of the author is incorporated. Finally, the hedge fund regulation intended by the legislator at the beginning of the year 2004 is legally evaluated against this background.
In response to recent developments in the financial markets and the stunning growth of the hedge fund industry in the United States, policy makers, most notably the Securities and Exchange Commission (“SEC”), are turning their attention to the regulation, or lack thereof, of hedge funds. U.S. regulators have scrutinized the hedge fund industry on several occasions in the recent past without imposing substantial regulatory constraints. Will this time be any different? The focus of the regulators’ interest has shifted. Traditionally, they approached the hedge fund industry by focusing on systemic risk to and integrity of the financial markets. The current inquiry is almost exclusively driven by investor protection concerns. What has changed? First, since 2000, new kinds of investors have poured capital into hedge funds in the United States, facilitated by the “retailization” of hedge funds through the development of funds of hedge funds and the dismal performance of the stock market. Second, in a post-Enron era, regulators and policy makers are increasingly sensitive to investor protection concerns. On May 14 and 15, 2003, the SEC held for the first time a public roundtable discussion on the single topic of hedge funds. Among the investor protection concerns highlighted were: an increase in incidents of fraud, inadequate suitability determinations by brokers who market hedge fund interests to individual investors, conflicts of interest of managers who manage mutual funds and hedge funds side-by-side, a lack of transparency that hinders investors from making informed investment decisions, layering of fees, and unbounded discretion by managers in pricing private hedge fund securities. Although there has been discussion about imposing wide-ranging restrictions on hedge funds, such as reining in short selling, requiring disclosure of long/short positions and limiting leverage, such a response would be heavy-handed and probably unnecessary. The existing regulatory regime is largely adequate to address the most flagrant abuses. Moreover, as the hedge fund market further matures, it is likely that institutional investors will continue to weed out weak performers and mediocre or dishonest hedge fund managers. What is likely to emerge from the newest regulatory focus on investor protection is a measured response that would enhance the SEC’s enforcement and inspection authority, while leaving hedge funds’ inherent investment flexibility largely unfettered. A likely scenario, for example, might be a requirement that some, or possibly all, hedge fund sponsors register with the SEC as investment advisers. Today, most are exempt from registration, although more and more are registering to provide advice to public hedge funds and attract institutions. Registration would make it easier for the SEC to ferret out potential fraudsters in advance by reviewing the professional history of hedge fund operators, allow the SEC to bring administrative proceedings against hedge fund advisers for statutory violations and give the agency access to books and records that it does not have today. Other possible initiatives, including additional disclosure requirements for publicly offered hedge funds, are discussed below. This article addresses the question whether U.S. regulation of hedge funds is really taking a new direction. It (i) provides a brief overview of the current U.S.
regulatory scheme, from which hedge funds are generally exempt, (ii) describes recent events in the United States that have contributed to regulators’ anxiety, (iii) examines the investor protection rationale for hedge fund regulation and considers whether these concerns do, in fact, merit increased regulation of hedge funds at this time, and (iv) considers the likelihood and possible scope of a potential regulatory response, principally by the SEC.
In an ideal world all investment products, including hedge funds, would be marketable to all investors. In this ideal world, all investors would fully understand the nature of the products and would be able to make an informed choice whether to invest. Of course the ideal world does not exist – the retail investment market is characterised by asymmetries of information. Product providers know most about the products on offer (or at least they should do). Investment advisers often know rather less than the provider but much more than their retail customers. Providers and intermediary advisers are understandably motivated by the desire to sell their products. There is therefore a risk that investment products will be mis-sold by investment advisers or mis-bought by ill-informed investors. This asymmetry of information is dealt with in most countries through regulation. However, the regulatory response in different countries is not necessarily the same. There are various ways in which protections can be applied, and it is important to understand that the cultural background and regulatory histories of countries flavour the way regulation has developed. This means (as will be explained in greater detail later) that some countries are better able than others to admit hedge funds to the retail sector. Following this Introduction, Section II looks at some key background issues. Section III then looks at some important questions raised by the retail hedge fund issue. Many of these are questions of balance. Balance lies at the heart of regulation of course – regulation must always balance the needs of investors with market efficiency. Understanding the “retail hedge fund” question requires particular attention to balance. Section IV then looks at the UK regime and how the FSA has answered the balance question. Section V offers some international perspectives. Section VI concludes. It will be seen that there is no obviously right answer to the question whether hedge fund products should be marketed to retail investors. Each regulator in each jurisdiction needs to make up its own mind on how to deal with the various issues and balances. It is evident, however, that internationally there is a move towards a greater variety of retail funds. There is nothing wrong with that, provided the regulators, and the retail customers they protect, sufficiently understand what sort of protection is, or is not, being offered in the regulatory regime.
While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998 LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions. Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally.
Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, the President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the fall of 1999, two bills directed at increasing hedge fund disclosure were introduced before the U.S. Congress (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined. First, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful. And second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
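One stylized way to write down the mechanism (notation mine, not necessarily the paper's model): the principal values productive effort e, while pay is tied to a measure p that can also be moved by gaming effort g, whose effectiveness \theta(i) grows with the information i the agent acquires about the measure:

y = e, \qquad p = e + \theta(i)\, g, \qquad w = s + b\, p.

A stronger bonus rate b then raises not only e but also the private return to i and g, so the optimal incentive strength b and the quality of the performance measure have to be chosen jointly, which is the trade-off the paper analyzes.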
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany), in September 2003. During this conference, experts presented and discussed various aspects of semantics. The very different topics included in this book provide insight into fields of ongoing semantics research.
Compelling evidence for the creation of a new form of matter has been claimed to be found in Pb+Pb collisions at the SPS. We discuss the uniqueness of often-proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that so far none of the proposed signals, like J/psi meson production/suppression, strangeness enhancement, dileptons, and directed flow, unambiguously shows that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to pinning down the properties of hot, dense QCD matter from data.
We calculate the Gaussian radius parameters of the pion-emitting source in high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon-Plasma (QGP) to a gas of hadrons. Such a model leads to a very long-lived dissipative hadronic rescattering phase which dominates the properties of the two-pion correlation functions. The radii are found to depend only weakly on the thermalization time tau_i, the critical temperature T_c (and thus the latent heat), and the specific entropy of the QGP. The dissipative hadronic stage enforces large variations of the pion emission times around the mean. Therefore, the model calculations suggest a rapid increase of R_out/R_side as a function of K_T if a thermalized QGP were formed.
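For reference, the Gaussian radius parameters refer to the standard out-side-long parametrization of the two-pion correlation function (textbook form, not quoted from the paper):

C(q, K) = 1 + \lambda \exp( - R_{out}^2 q_{out}^2 - R_{side}^2 q_{side}^2 - R_{long}^2 q_{long}^2 ),

where the radii R_{out}, R_{side} and R_{long} depend on the transverse pair momentum K_T.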
The equilibration of hot and dense nuclear matter produced in the central cell of central Au+Au collisions at RHIC energies (sqrt(s) = 200 AGeV) is studied within a microscopic transport model. The pressure in the cell becomes isotropic at t approx 5 fm/c after the beginning of the collision. Within the next 15 fm/c the expansion of matter in the cell proceeds almost isentropically with an entropy per baryon ratio S/A approx 150, and the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model of an ideal hadron gas indicates that the time t approx 20 fm/c may be too short to reach the fully equilibrated state. In particular, the creation of long-lived resonance-rich matter in the cell decelerates the relaxation to chemical equilibrium. This resonance-abundant state can be detected experimentally after the thermal freeze-out of particles.
The yields of strange particles are calculated with the UrQMD model for p,Pb(158 AGeV)+Pb collisions and compared to experimental data. The yields are enhanced in central collisions compared to proton-induced or peripheral Pb+Pb collisions. The enhancement is due to secondary interactions. Nevertheless, only a reduction of the quark masses or, equivalently, an increase of the string tension provides an adequate description of the large observed enhancement factors (WA97 and NA49). Furthermore, the yields of unstable strange resonances such as the Lambda*(1520) resonance or the phi meson are considerably affected by hadronic rescattering of the decay products.
The equilibration of hot and dense nuclear matter produced in the central region of central Au+Au collisions at sqrt(s) = 200 AGeV is studied within the microscopic transport model UrQMD. The pressure here becomes isotropic at t approx 5 fm/c. Within the next 15 fm/c the expansion of the matter proceeds almost isentropically with an entropy per baryon ratio S/A approx 150. During this period the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model (SM) of an ideal hadron gas reveals that the time of approx 20 fm/c may be too short to attain the fully equilibrated state. In particular, the fractions of resonances are overpopulated in contrast to the SM values. The creation of such a long-lived resonance-rich state slows down the relaxation to chemical equilibrium and can be detected experimentally.
Enhanced antiproton production in Pb(160 AGeV)+Pb reactions: evidence for quark gluon matter?
(2000)
The centrality dependence of the antiproton-per-participant ratio is studied in Pb(160 AGeV)+Pb reactions. Antiproton production in collisions of heavy nuclei at the CERN/SPS seems considerably enhanced compared to conventional hadronic physics, as given by the antiproton production rates in pp collisions and antiproton annihilation in pbar-p reactions. This enhancement is consistent with the observation of strong in-medium effects in other hadronic observables and may be an indication of a partial restoration of chiral symmetry.
The relaxation of hot nuclear matter to an equilibrated state in the central zone of heavy-ion collisions at energies from AGS to RHIC is studied within the microscopic UrQMD model. It is found that the system reaches the (quasi)equilibrium stage for a period of 10-15 fm/c. Within this time the matter in the cell expands nearly isentropically with an entropy to baryon ratio S/A = 150-170. Thermodynamic characteristics of the system at AGS and SPS energies at the endpoints of this stage are very close to the parameters of chemical and thermal freeze-out extracted from thermal fits to experimental data. Predictions are made for the full RHIC energy sqrt(s) = 200 AGeV. The formation of a resonance-rich state at RHIC energies is discussed.