Refine
Year of publication
Document Type
- Article (15825)
- Part of Periodical (2818)
- Working Paper (2353)
- Preprint (2085)
- Doctoral Thesis (2065)
- Book (1736)
- Part of a Book (1071)
- Conference Proceeding (753)
- Report (471)
- Review (165)
Language
- English (29537)
Keywords
- taxonomy (744)
- new species (444)
- morphology (174)
- Deutschland (142)
- Syntax (125)
- Englisch (120)
- distribution (117)
- biodiversity (101)
- Deutsch (98)
- inflammation (97)
Institute
- Medizin (5347)
- Physik (3819)
- Wirtschaftswissenschaften (1921)
- Frankfurt Institute for Advanced Studies (FIAS) (1762)
- Biowissenschaften (1550)
- Center for Financial Studies (CFS) (1494)
- Informatik (1401)
- Biochemie und Chemie (1090)
- Sustainable Architecture for Finance in Europe (SAFE) (1071)
- House of Finance (HoF) (710)
This study uses Markov-switching models to evaluate the informational content of the term structure as a predictor of recessions in eight OECD countries. The empirical results suggest that for all countries the term spread is sensibly modelled as a two-state regime-switching process. Moreover, our simple univariate model turns out to be a filter that accurately transforms term spread changes into turning point predictions. The term structure is confirmed to be a reliable recession indicator. However, the results of probit estimations show that the Markov-switching filter does not significantly improve the forecasting ability of the spread.
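The filtering step behind such two-state models can be sketched as a plain Hamilton filter for a Gaussian mean-switching process. This is a minimal illustration, not the paper's estimated model: the state means, variances, and transition probabilities below are invented for the example, and the synthetic "spread" series simply drops from a normal to an inverted level.

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a two-state Gaussian Markov-switching
    mean model. y: observations; mu, sigma: per-state mean and std deviation;
    P[i, j] = Pr(s_t = j | s_{t-1} = i) is the transition matrix."""
    n, k = len(y), len(mu)
    xi = np.full(k, 1.0 / k)          # start from a uniform state distribution
    out = np.empty((n, k))
    for t in range(n):
        pred = xi @ P                 # one-step-ahead state probabilities
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        xi = pred * dens              # Bayes update with Gaussian likelihoods
        xi /= xi.sum()
        out[t] = xi
    return out

# Illustrative data: a spread near 2 that falls to about -0.5 (inverted curve).
rng = np.random.default_rng(0)
spread = np.concatenate([2.0 + 0.3 * rng.standard_normal(40),
                         -0.5 + 0.3 * rng.standard_normal(20)])
P = np.array([[0.95, 0.05], [0.10, 0.90]])
probs = hamilton_filter(spread, mu=np.array([2.0, -0.5]),
                        sigma=np.array([0.4, 0.4]), P=P)
recession_signal = probs[:, 1] > 0.5  # state 1 = "inverted spread" regime
```

Turning the filtered probability of the low-spread state into a binary signal, as in the last line, is one simple way to read such a filter as a turning-point predictor.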
Modeling short-term interest rates as following regime-switching processes has become increasingly popular. Theoretically, regime-switching models are able to capture rational expectations of infrequently occurring discrete events. Technically, they allow for potentially time-varying stationarity. After discussing both aspects with reference to the recent literature, this paper provides estimations of various univariate regime-switching specifications for the German three-month money market rate, and of bivariate specifications additionally including the term spread. The main contribution, however, is a multi-step out-of-sample forecasting competition. It turns out that forecasts are improved substantially when allowing for state dependence. In particular, the informational content of the term spread for future short rate changes can be exploited optimally within a multivariate regime-switching framework.
Collateral, default risk, and relationship lending : an empirical study on financial contracting
(2000)
This paper provides further insights into the nature of relationship lending by analyzing the link between relationship lending, borrower quality and collateral as a key variable in loan contract design. We used a unique data set based on the examination of credit files of five leading German banks, thus relying on information actually used in the process of bank credit decision-making and contract design. In particular, bank internal borrower ratings serve to evaluate borrower quality, and the bank's own assessment of its housebank status serves to identify information-intensive relationships. Additionally, we used data on workout activities for borrowers facing financial distress. We found no significant correlation between ex ante borrower quality and the incidence or degree of collateralization. Our results indicate that the use of collateral in loan contract design is mainly driven by aspects of relationship lending and renegotiations. We found that relationship lenders or housebanks do require more collateral from their debtors, thereby increasing the borrower's lock-in and strengthening the banks' bargaining power in future renegotiation situations. This result is strongly supported by our analysis of the correlation between ex post risk, collateral and relationship lending since housebanks do more frequently engage in workout activities for distressed borrowers, and collateralization increases workout probability. First version: March 12, 1999
We analyze the role of different kinds of primary and secondary market interventions for the government's goal of maximizing its revenues from public bond issuances. Some of these interventions can be thought of as characteristics of a "primary dealer system". Overall, we find that a primary dealer system with a restricted number of participants may be useful in the case of only restricted competition among sufficiently heterogeneous market makers. We further show that minimum secondary market turnover requirements for primary dealers with respect to bond sales seem in general more adequate than the definition of maximum bid-ask spreads or minimum turnover requirements with respect to bond purchases. Moreover, official price management operations are not able to completely substitute for a system of primary dealers. Finally, it should be noted that there is in general no reason for monetary compensation of primary dealers, since they already possess some privileges with respect to public bond auctions.
This paper considers the desirability of the observed tendency of central banks to adjust interest rates only gradually in response to changes in economic conditions. It shows, in the context of a simple model of optimizing private-sector behavior, that such inertial behavior on the part of the central bank may indeed be optimal, in the sense of minimizing a loss function that penalizes inflation variations, deviations of output from potential, and interest-rate variability. Sluggish adjustment characterizes an optimal policy commitment, even though no such inertia would be present in the case of a reputationless (Markovian) equilibrium under discretion. Optimal interest-rate feedback rules are also characterized, and shown to involve substantial positive coefficients on lagged interest rates. This provides a theoretical explanation for the numerical results obtained by Rotemberg and Woodford (1998) in their quantitative model of the U.S. economy.
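The gradual adjustment the abstract describes is commonly summarized by a partial-adjustment rule with a positive coefficient on the lagged rate. As a hedged illustration only — the coefficient values below are arbitrary placeholders, not the paper's estimates — such a smoothing rule can be written as:

```python
def policy_rate(pi, gap, i_prev, rho=0.8, pi_star=2.0, r_star=2.0, a=0.5, b=0.5):
    """Taylor-type rule with interest-rate smoothing.

    The notional target responds to inflation (pi) and the output gap (gap);
    the weight rho on the lagged rate i_prev makes the actual rate adjust
    only gradually toward that target (rho = 0 would mean no inertia).
    """
    target = r_star + pi + a * (pi - pi_star) + b * gap
    return rho * i_prev + (1 - rho) * target

# With inflation stuck at 4% and a closed output gap, the rate climbs
# step by step toward the implied target of 7% instead of jumping there.
path = [4.0]
for _ in range(30):
    path.append(policy_rate(pi=4.0, gap=0.0, i_prev=path[-1]))
```

The point of the sketch is the shape of the response: each period closes only a fraction (1 - rho) of the gap between the current rate and the target, which is the inertial behavior the paper rationalizes.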
This paper analyses two reasons why inflation may interfere with price adjustment so as to create inefficiencies in resource allocation at low rates of inflation. The first argument is that the higher the rate of inflation the lower the likelihood that downward nominal rigidities are binding (the Tobin argument) which implies a non-linear Phillips-curve. The second argument is that low inflation strengthens nominal price rigidities and thus impairs the flexibility of the price system resulting in a less efficient resource allocation. It is argued that inflation can be too low from a welfare point of view due to the presence of nominal rigidities, but the quantitative importance is an open question.
As inflation rates in the United States decline, analysts are asking if there are economic reasons to hold the rates at levels above zero. Previous studies of whether inflation "greases the wheels" of the labor market ignore inflation's potential for disrupting wage patterns in the same market. This paper outlines an institutionally-based model of wage-setting that allows the benefits of inflation (downward wage flexibility) to be separated from disruptive uncertainty about the inflation rate (undue variation in relative prices). Our estimates, using a unique 40-year panel of wage changes made by large Midwestern employers, suggest that low rates of inflation do help the economy adjust to changes in labor supply and demand. However, when inflation's disruptive effects are balanced against this benefit, the labor market justification for pursuing a positive long-term inflation goal effectively disappears.
Since 1990, a number of countries have adopted inflation targeting as their declared monetary strategy. Interpretations of the significance of this movement, however, have differed widely. To some, inflation targeting mandates the single-minded, rule-like pursuit of price stability without regard for other policy objectives; to others, inflation targeting represents nothing more than the latest version of cheap talk by central banks unable to sustain monetary commitments. Advocates of inflation targeting, including the adopting central banks themselves, have expressed the view that the efforts at transparency and communication in the inflation targeting framework grant the central bank greater short-run flexibility in pursuit of its long-run inflation goal. This paper assesses whether the talk that inflation targeting central banks engage in matters to central bank behavior, and which interpretation of the strategy is consistent with that assessment. We identify five distinct interpretations of inflation targeting, consistent with various strands of the current literature, and identify those interpretations as movements between various strategies in a conventional model of time-inconsistency in monetary policy. The empirical implications of these interpretations are then compared to the response of central banks to movements in inflation of three countries that adopted inflation targets in the early 1990s: The United Kingdom, Canada, and New Zealand. For all three, the evidence shows a break in the behavior of inflation consistent with a strengthened commitment to price stability. In no case, however, is there evidence that the strategy entails a single-minded pursuit of the inflation target. 
For the U.K., the results are consistent with the successful implementation of the optimal state-contingent rule, thereby combining flexibility and credibility; similarly, New Zealand's improved inflation performance was achieved without a discernible increase in counter-inflationary conservatism. The results for Canada are less clear, perhaps reflecting the broader fiscal and international developments affecting the Canadian economy during this period.
Derivatives usage in risk management by U.S. and German non-financial firms : a comparative survey
(1998)
This paper is a comparative study of the responses to the 1995 Wharton School survey of derivative usage among US non-financial firms and a 1997 companion survey on German non-financial firms. It is not a mere comparison of the results of both studies but a comparative study, drawing a comparable subsample of firms from the US study to match the sample of German firms on both size and industry composition. We find that German firms are more likely to use derivatives than US firms, with 78% of German firms using derivatives compared to 57% of US firms. Aside from this higher overall usage, the general pattern of usage across industry and size groupings is comparable across the two countries. In both countries, foreign currency derivative usage is most common, followed closely by interest rate derivatives, with commodity derivatives a distant third. Usage rates across all three classes of derivatives are higher for German firms than US firms. In contrast to the similarities, firms in the two countries differ notably on issues such as the primary goal of hedging, their choice of instruments, and the influence of their market view when taking derivative positions. These differences appear to be driven by the greater importance of financial accounting statements in Germany than the US and stricter German corporate policies of control over derivative activities within the firm. German firms also indicate significantly less concern about derivative related issues than US firms, which appears to arise from a more basic and simple strategy for using derivatives. Finally, among the derivative non-users, German firms tend to cite reasons suggesting derivatives were not needed whereas US firms tend to cite reasons suggesting a possible role for derivatives, but a hesitation to use them for some reason.
The purpose of the paper is to survey and discuss inflation targeting in the context of monetary policy rules. The paper provides a general conceptual discussion of monetary policy rules, attempts to clarify the essential characteristics of inflation targeting, compares inflation targeting to other monetary policy rules, and draws some conclusions for the monetary policy of the European System of Central Banks.
Despite the relevance of credit financing for the profit and risk situation of commercial banks, little empirical evidence on the initial credit decision and monitoring process exists, due to the lack of appropriate data on bank debt financing. The present paper provides a systematic overview of a data set generated during the Center for Financial Studies research project on "Credit Management", which was designed to fill this empirical void. The data set contains a broad list of variables taken from the credit files of five major German banks. It is a random sample drawn from all customers who engaged in some form of borrowing from the banks in question between January 1992 and January 1997 and who meet a number of selection criteria. The sampling design and data collection procedure are discussed in detail. Additionally, the project's research agenda is described and some general descriptive statistics of the firms in our sample are provided.
We studied information and interaction processes in six lending relationships between a universal bank and medium sized firms. The study is based on the credit files of the respective firms. If no problems occur in these lending relationships, bank monitoring is based mainly on cheap, retrospective and internal data. In case of distress, more expensive, prospective and external information is used. The level of monitoring and the willingness to renegotiate the lending relationship depends on what the lending officers can learn about the future prospects of the firm from the behaviour of the debtors. We identify both signalling and bonding activities. Such learning from past behaviour seems to allow monitoring at low cost, whereas the direct observation of the firm's investment outlook seems to be very costly. Also, too much knowledge about the firm's investments might leave the bank in a very strong bargaining position and distort investment incentives. Therefore, the traditional view of credit assessment as observation of the quality of a borrower's investment programme needs to be reconsidered.
Shares trading on the Bolsa Mexicana de Valores do not seem to react to company news. Using a sample of Mexican corporate news announcements from the period July 1994 through June 1996, this paper finds that there is nothing unusual about returns, volatility of returns, volume of trade or bid-ask spreads in the event window. This suggests one of five possibilities: our sample size is small; or markets are inefficient; or markets are efficient but the corporate news announcements are not value-relevant; or markets are efficient and corporate news announcements are value-relevant, but they have been fully anticipated; or markets are efficient and corporate news announcements are value-relevant, but unrestricted insider trading has caused prices to fully incorporate the information. The evidence supports the last hypothesis. The paper thus points towards a methodology for ranking emerging stock markets in terms of their market integrity, an approach that can be used with the limited data available in such markets.
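The event-window comparison this abstract relies on — returns around an announcement versus what a market model predicts — can be sketched in a few lines. The data, window lengths, and the 5% "announcement-day" jump below are synthetic placeholders, not the paper's sample or estimates.

```python
import numpy as np

def abnormal_returns(stock, market, est_end, ev_start, ev_end):
    """Market-model event study: fit R = alpha + beta * Rm on the estimation
    window, then compute abnormal returns AR_t = R_t - (alpha + beta * Rm_t)
    and their cumulative sum (CAR) over the event window."""
    beta, alpha = np.polyfit(market[:est_end], stock[:est_end], 1)
    ar = stock[ev_start:ev_end] - (alpha + beta * market[ev_start:ev_end])
    return ar, ar.sum()

# Synthetic example: the stock tracks the market exactly, plus a 5% jump
# on a hypothetical announcement day (index 100).
rng = np.random.default_rng(1)
market = 0.01 * rng.standard_normal(120)
stock = 0.001 + 1.2 * market
stock[100] += 0.05
ar, car = abnormal_returns(stock, market, est_end=100, ev_start=100, ev_end=110)
```

A finding of "nothing unusual", as in the paper, corresponds to abnormal returns in the event window that are statistically indistinguishable from zero.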
No one seems to be neutral about the effects of EMU on the German economy. Roughly speaking, there are two camps: those who see the euro as the advent of a newly open, large, and efficient regime which will lead to improvements in European and in particular German competitiveness, and those who see the euro as a weakening of the German commitment to price stability. From a broader macroeconomic perspective, however, it is clear that EMU is unlikely to cause directly any meaningful change, either for the better in Standort Deutschland or for the worse in German price stability. There is ample evidence that changes in monetary regimes (short of exits from hyperinflation) induce little change in real economic structures such as labor or financial markets. Regional asymmetries of the sort found in the EU do not tend to translate into monetary differences. Most importantly, there is no good reason to believe that the ECB will behave any differently than the Bundesbank.
Where do we stand in the theory of finance? : a selective overview with reference to Erich Gutenberg
(1998)
For the past 20 years, financial markets research has concerned itself with issues related to the evaluation and management of financial securities in efficient capital markets and with issues of management control in incomplete markets. The following selective overview focuses on key aspects of the theory and empirical experience of management control under conditions of asymmetric information. The objective is to examine the validity of the recently advanced hypothesis on the myths of corporate control. The present overview is based on Gutenberg's position that there exists a discrete corporate interest, distinct and separate from the interests of the shareholders or other stakeholders. In the third volume of Grundlagen der BWL: Die Finanzen, published in 1969, this position of Gutenberg's is coupled with an appeal for a so-called financial equilibrium to be maintained. Not until recently have models grounded in capital market theory been developed which also allow for a firm's management to exercise autonomy vis-à-vis its stakeholders. This paper was prepared for the Erich Gutenberg centenary conference on December 12 and 13, 1997 in Cologne.
This study examines the relation of bank loan terms like interest rates, collateral, and lines of credit to borrower risk as defined by the banks' internal credit rating. The analysis is not restricted to a static view; it also incorporates rating transitions and their implications for this relation. Money illusion and phenomena linked with relationship banking emerge as important factors. The results show that riskier borrowers pay higher loan rate premiums and rely more on bank finance. Housebanks obtain more collateral and provide more finance. Owing to money illusion, loan rate premiums are relatively small in times of high market interest rates, whereas in times of low market interest rates they are relatively high. There was no evidence of an appropriate adjustment of loan terms to rating changes. However, bank market power, represented by a weighted average of the credit rating before and after a rating transition, serves to compensate for low earlier profits caused by interest rate smoothing. Classification: G21.
Banks increasingly recognize the need to measure and manage the credit risk of their loans on a portfolio basis. We address the subportfolio "middle market". Due to their specific lending policy for this market segment it is an important task for banks to systematically identify regional and industrial credit concentrations and reduce the detected concentrations through diversification. In recent years, the development of markets for credit securitization and credit derivatives has provided new credit risk management tools. However, in the addressed market segment adverse selection and moral hazard problems are quite severe. A potential successful application of credit securitization and credit derivatives for managing credit risk of middle market commercial loan portfolios depends on the development of incentive-compatible structures which solve or at least mitigate the adverse selection and moral hazard problems. In this paper we identify a number of general requirements and describe two possible solution concepts.
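Systematically identifying regional and industry concentrations, as described above, usually starts from a simple concentration measure over exposure shares. A generic sketch — not the paper's own method — using the Herfindahl-Hirschman index:

```python
def herfindahl(exposures):
    """Herfindahl-Hirschman index of a portfolio: the sum of squared exposure
    shares. 1/HHI can be read as an 'effective number' of equally sized
    segments; a rising HHI flags regional or industry concentration."""
    total = float(sum(exposures))
    return sum((e / total) ** 2 for e in exposures)

# Four equally sized industry buckets versus one dominant bucket:
balanced = herfindahl([25, 25, 25, 25])   # 0.25, i.e. 4 effective segments
concentrated = herfindahl([97, 1, 1, 1])  # near 1, effectively one segment
```

Detected concentrations would then be the candidates for the securitization or credit-derivative transactions the paper discusses.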
The goal of studying ultra-relativistic heavy-ion collisions is the search for the quark-gluon plasma (QGP), a state of highly dense, strongly interacting matter in which the confinement of quarks and gluons in hadrons is lifted. The experimental evidence obtained so far suggests that in heavy-ion collisions at the currently highest available energies of 158 GeV/nucleon in Pb+Pb reactions at the CERN SPS, the conditions for a phase transition from hadronic matter to a partonic phase are met. The exact phase structure of strongly interacting matter, however, is not yet fully understood. Since inclusive hadronic observables and "penetrating probes" are not directly sensitive to the existence and nature of the phase transition, the analysis of event-by-event fluctuations has been proposed. The fluctuation behavior of event-by-event observables should be directly sensitive to the nature of the phase transition under observation. In this work, fluctuations in the "chemical" composition of the particle source were investigated, and first results are presented.
In recent years the lending business has come under considerable competitive pressure, and bank managers often express concern regarding its profitability vis-à-vis other activities. This paper tries to empirically identify factors that can explain the financial performance of bank lending activities. The analysis is based on the CFS data set collected in 1997 from 200 medium-sized firms. Two regressions are performed: the first is directed towards relationships between interest rate premiums and various determining factors; the second aims at detecting relationships between those factors and the occurrence of several types of problems during the course of a credit engagement. Furthermore, the results of both regressions are used to test theoretical hypotheses regarding the impact of certain parameters on credit terms and distress probabilities. The findings are somewhat "puzzling": first, the rating is not as significant as expected; second, credit contracts seem to be priced lower in situations with greater risks; finally, the results do not fully support any of the three hypotheses that are often advanced to describe the role of collateral and covenants in credit contracts.
The mammalian retina contains around 30 morphological varieties of amacrine cell types. These interneurons receive excitatory glutamatergic input from bipolar cells and provide GABA- and glycinergic inhibition to other cells in the retina. Amacrine cells exhibit widely varying light-evoked responses, in large part defined by their presynaptic partners. We wondered whether amacrine functional diversity is based on a differential expression of glutamate receptors among cell populations and types. In whole-cell patch-clamp experiments on mouse retinal slices, we used selective agonists and antagonists to discriminate responses mediated by NMDA/non-NMDA (NBQX) and AMPA/KA receptors (cyclothiazide, GYKI 52466, GYKI 53655, SYM 2081). We sampled a large variety of individual cell types, which were classified by their dendritic field size into either narrow-field or wide-field cells after filling with Lucifer yellow or neurobiotin. In addition, we used transgenic GlyT2-EGFP mice, whose glycinergic neurons express EGFP. This allowed us to classify amacrines on the basis of their neurotransmitter into either glycinergic or GABAergic cells. All cells (n = 300) had good responses to non-NMDA agonists. Specific AMPA receptor responses could be obtained from almost all cells recorded: 94% of the AII (n = 17), 87% of the narrow-field (n = 45), 81% of the wide-field (n = 21), 85% of the glycinergic (n = 20) and 78% of the GABAergic cells (n = 9). KA receptor selective drugs were also effective on the majority of the AII (79%, n = 14), narrow-field (93%, n = 43), wide-field (85%, n = 26), glycinergic (94%, n = 16) and GABAergic amacrine cells (100%, n = 6). Among the cells tested for the two receptors (n = 65), we encountered both exclusive expression of AMPA or KA receptors and co-expression of the two types. Most narrow-field (70%, n = 27), glycinergic (81%, n = 16) and GABAergic cells (67%, n = 6) were found to have both AMPA and KA receptors.
In contrast, fewer than half of the wide-field cells (43%, n = 14) were found to co-express AMPA and KA receptors, most of them expressing exclusively AMPA (36%) or KA receptors (21%). We could elicit small NMDA responses from most of the wide-field (75%, n = 13) and GABAergic cells (67%, n = 3), whereas only 47% of the narrow-field (n = 15), 14% of the AII (n = 22) and no glycinergic cell (n = 2) reacted to NMDA. Our data suggest that AMPA, KA and NMDA receptors are differentially expressed among different types of amacrine cells rather than among populations with different neurotransmitters or different dendritic coverage of the retina. Selective expression of kinetically different glutamate receptors among amacrine types may be involved in generating transient and sustained inhibitory pathways in the retina. Since AMPA and KA receptors are not generally clustered at the same postsynaptic sites, a single amacrine cell expressing both AMPA and KA receptors may provide inhibition with different temporal characteristics to individual synaptic partners.
The life of Varroa destructor (Anderson and Trueman), an ectoparasitic mite of honeybees, is divided into a reproductive phase in the bee brood and a phoretic phase during which the mite is attached to the adult bee. Phoretic mites leave the colony with workers involved in foraging tasks. Little information is available on the mortality of mites outside the colony. Mites may or may not return to the colony as a result of death of the infested foragers, host change by drifting of foragers, or removal of mites outside the colony. That mites do not return to the colony was indicated by substantially higher infestation of outflying workers compared to the infestation of returning workers (Kutschker, 1999). The main objective of the study was to provide information on whether V. destructor influences the flight behaviour of foragers and consequently the frequency with which foragers return to the colony. I first repeated the experiment of Kutschker (1999), examining the infestation of outflying and returning workers. Further, I recorded the flight duration of foragers using a video method. In this experiment I also compared the infestation and flight duration of bees of different genetic origin, Carnica from Oberursel and bees from the Primorsky region. I investigated the returning time of workers, returning frequency until evening, drifting to other colonies, and orientation toward the nest entrance in experiments in which workers were released in close vicinity of the colony. Finally, I measured the loss of foragers in relation to colony infestation using a Bee Scan. The results of this study, listed below, showed a considerable influence of V. destructor on the flight behaviour of foragers, translating into a loss of mites. The loss of mites with foragers adds a substantial component to mite mortality and was underestimated in previous studies. Such loss might be viewed as a mechanism of resistance against V. destructor.
a) The mean infestation of outflying workers (0.019±0.018) was twice the mean infestation of returning workers (0.009±0.018). The difference in infestation between outflying and returning workers was more marked in highly infested colonies. b) Investigation of individually tagged workers using a two-camera video recording device showed significantly higher infestation of outflying workers compared to returning workers. Mites were lost through the non-return of infested foragers (22%) and through loss of mites from foragers that returned to the colony without the mite (20%). A small portion of mites (1.8%) was gained. The loss of mites significantly exceeded the mite gain. c) The flight duration, determined using the same two-camera video system, was significantly higher in infested workers than in uninfested workers of the same age that flew closest in time. The median flight duration of infested workers (214 s) was 1.7 times higher than the median duration of uninfested workers (128 s). d) Infested workers took 2.3 times longer to return to the colony than uninfested workers of the same age when released from the same locations closest in time. The returning time increased with the distance of release. In a group of bees released simultaneously, the infestation was higher in bees returning later and in those that did not return within the observation period of 15 min. e) Released infested workers failed to return to the colony by evening 1.5 times more frequently than uninfested workers. The difference in returning was significant for release locations 20 and 50 m from the colony. No difference in returning between infested and uninfested workers was observed for the most distant location of 400 m. f) No significant difference was found in returning time or in returning frequency until evening between workers artificially infested overnight and naturally infested workers.
Artificially infested workers returned later and less frequently than a control group, indicating a rapid influence of V. destructor on the flight behaviour of foragers. g) The orientation ability of infested workers toward the nest entrance was impaired. Infested workers approached a dummy entrance twice as often as uninfested workers before finding the nest entrance. h) No significant differences were found in drifting between infested and uninfested workers. Drifting into a neighbouring nucleus colony occurred on about 1% of occasions after the release of marked workers. Similarly, more infested workers (2.6%), though not significantly more, entered a hive of a different colour than a hive of the same colour (1.9%). However, the number of drifting bees was too low to make the results conclusive. i) The comparison between Carnica and Primorsky workers revealed higher infestation in Carnica than in Primorsky. Further, Primorsky workers lost more mites during foraging, due to mite loss from foragers and the non-return of infested workers. No significant differences in flight duration were observed between the two bee stocks. j) The loss of foragers, as determined by Bee Scan counts of outflying and returning foragers, and the infestation of outflying bees increased significantly over a period of 70 days. A colony with a 7.7 times higher infestation of outflying foragers lost 2.2 times more bees per flight per day compared to a colony with low infestation. k) Estimates of the daily loss of mites with foragers, at up to 3.1% of the mite population, exceed the approximately 1% mite mortality within the colony, as measured by counting dead mites on bottom-board inserts.
Cold target recoil ion momentum spectroscopy (COLTRIMS) has been employed to image the momentum distributions of continuum electrons liberated in the impact of slow He2+ on He and H2. The distributions were measured for fully determined motion of the nuclei, that is, as a function of the impact parameter and in a well-defined scattering plane. The single ionization (SI) of H2 leading to H2+ recoil ions in nondissociative states (He2+ + H2 -> He2+ + H2+ + e-) and the transfer ionization (TI) of H2 leading to H2 dissociation into two free protons (He2+ + H2 -> He+ + H+ + H+ + e-) were investigated. Similar measurements have been carried out for the He target, the corresponding atomic two-electron system, i.e. the single ionization (SI) (He2+ + He -> He2+ + He+ + e-) and the transfer ionization (TI) (He2+ + He -> He+ + He2+ + e-). These measurements have been exploited to understand the results obtained for the H2 target. In comparing the continuum electron momentum distributions for H2 with those for He, a high degree of similarity is observed. In the case of transfer ionization of H2, the electron momentum distributions generated for parallel and perpendicular molecular orientations revealed no orientation dependence. The in-scattering-plane electron momentum distributions for the transfer ionization of H2 by He2+ and for the transfer ionization of He by He2+ showed that the salient feature of these distributions for both collision systems consists in the appearance of two groups of electrons with different structures. In addition to the group of saddle electrons forming two jets separated by a valley along the projectile axis, we find a new group of electrons moving with a velocity higher than the projectile velocity. These new fast forward electrons result from a narrow range of impact parameters and appear as an image saddle in the projectile frame.
In contrast to the transfer ionization of He, the fast forward electron group disappears in the in-scattering-plane electron momentum distribution generated for the single ionization of He. Instead, another new group of electrons appears. These electrons exhibit a degree of backscattering; these backward electrons appear as an image saddle in the target frame. The structures that the saddle electrons show are due to the quasi-molecular nature of the collision process. For the TI of H2, the TI of He and the SI of He, a pi-orbital shape of the electron momentum distribution is observed. This indicates the importance of the rotational coupling 2p-sigma -> 2p-pi in the initial promotion of the ground state, followed by further promotions to the continuum. The backward electrons as well as the fast forward electrons are not discussed in the theoretical literature at all. However, a number of clear indications of the existence of the backward and fast forward electrons can be seen in the experimental works of Abdallah et al. as well as in the theoretical calculations of Sidky et al. One might speculate that electrons which are promoted on the saddle for some time during the collision could finally swing around the He+ ion on the way out of the collision, i.e. either around the projectile in the forward direction, as in the TI case, forming the fast forward electrons, or around the recoil ion in the backward direction, as in the SI case, forming the backward electrons. This might be a result of the strong gradient, and hence the large acceleration, of the screened He+ potential.
Alzheimer’s disease (AD) is the most common neurodegenerative disorder worldwide, causing presenile dementia and the death of millions of people. During AD, damage and massive loss of brain cells occur. Alzheimer’s disease is genetically heterogeneous and may therefore represent a common phenotype that results from various genetic and environmental influences and risk factors. In approximately 10% of patients, changes of the genetic information were detected (gene mutations). In these cases, Alzheimer’s disease is inherited as an autosomal dominant trait (familial Alzheimer’s disease, FAD). In rare cases of familial Alzheimer’s disease (about 1-3%), mutations have been detected in genes on chromosomes 14 and 1 (encoding Presenilin 1 and 2, respectively), and in a gene on chromosome 21 encoding the amyloid precursor protein (APP), which is responsible for the release of the cell-damaging protein amyloid-beta (ß-amyloid, Aß). Familial forms of early-onset Alzheimer’s disease are rare; however, their importance extends far beyond their frequency, because they allow the identification of some of the critical pathogenetic pathways of the disease. All familial Alzheimer mutations share a common feature: they lead to an enhanced production of Aß, the major constituent of senile plaques in the brains of AD patients. New data indicate that Aß promotes neuronal degeneration. Therefore, one aim of this thesis was to elucidate the neurotoxic biochemical pathways induced by Aß by investigating the effect of the FAD Swedish APP double mutation (APPsw) on oxidative stress-induced cell death mechanisms. This mutation results in a three- to sixfold increased Aß production compared to wild-type APP (APPwt). As cell models, the neuronal PC12 (rat pheochromocytoma) and the HEK (human embryonic kidney 293) cell lines were used, which had been transfected with human wild-type APP or human APP containing the Swedish double mutation. The cell models used offer two important advantages.
First, compared to experiments in which high concentrations of Aß at micromolar levels are applied extracellularly to cells, PC12 APPsw cells secrete low Aß levels similar to the situation in FAD brains. Thus, this cell model represents a very suitable approach for elucidating the AD-specific cell death pathways while mimicking physiological conditions. Second, these two cell lines (PC12 and HEK APPwt and APPsw), with their different levels of Aß production, may additionally allow the study of dose-dependent effects of Aß. The results obtained here provide evidence for the enhanced cell vulnerability caused by the Swedish APP mutation and elucidate the cell death mechanism probably initiated by intracellularly produced Aß. It seems likely that increased production of Aß at physiological levels primes APPsw PC12 cells to undergo cell death only after additional stress, while chronic high levels in HEK cells already lead to enhanced basal apoptotic levels. Crucial effects of the Swedish APP mutation include impairments of cellular energy metabolism, affecting mitochondrial membrane potential and ATP levels, as well as the additional activation of caspase 2, caspase 8 and JNK in response to oxidative stress. Thereby, the following model can be proposed: PC12 cells harboring the Swedish APP mutation have a reduced energy metabolism compared to APPwt or control cells. However, this effect does not lead to enhanced basal apoptotic levels in cultured cells. Exposure of PC12 cells to oxidative stress leads to mitochondrial dysfunction, e.g., a decrease in mitochondrial membrane potential and a depletion of ATP. The consequence is the activation of the intrinsic apoptotic pathway, releasing cytochrome c and Smac and resulting in the activation of caspase 9. This effect is amplified by the overexpression of APP, since both APPsw and APPwt PC12 cells show enhanced cytochrome c and Smac release as well as enhanced caspase 9 activity compared to vector-transfected controls.
In APPsw PC12 cells a parallel pathway is additionally emphasized. Due to reduced ATP levels or enhanced Aß production, JNK is activated. Furthermore, the extrinsic apoptotic pathway is enhanced, since caspase 8 and caspase 2 activation was clearly increased by the Swedish APP mutation. Both pathways may then converge by activating the effector enzyme, caspase 3, and executing cell death. In addition, caspase-independent effects also need to be considered. One possibility could be the involvement of AIF, since AIF expression was found to be induced by the Swedish APP mutation. In APPsw HEK cells, high chronic Aß levels lead to enhanced apoptotic levels and reduced mitochondrial membrane potential and ATP levels even under basal conditions. Summarizing, a hypothetical sequence of events is proposed for our cell model, linking FAD, Aß production, JNK activation and mitochondrial dysfunction with the caspase pathway and neuronal loss. The brain has a high metabolic rate and is exposed to gradually rising levels of oxidative stress during life. In Swedish FAD patients, the levels of oxidative stress are increased in the temporal inferior cortex. This study, using a cell model mimicking the in vivo situation in AD brains, indicates that probably both increased Aß production and the gradual rise of oxidative stress throughout life converge on a final common pathway of increased vulnerability of neurons from FAD patients to apoptotic cell death. Presenilin (PS) 1 is an aspartyl protease involved in the gamma-secretase-mediated proteolysis of the amyloid-ß protein (Aß), the major constituent of senile plaques in the brains of Alzheimer’s disease (AD) patients. Recent studies have suggested an additional role for presenilin proteins in the apoptotic cell death observed in AD. Since PS1 is proteolytically cleaved by caspase 3, it has been proposed that the resulting C-terminal fragment of PS1 (PSCas) could play a role in signal transduction during apoptosis.
Moreover, it was shown that mutant presenilins causing early-onset familial Alzheimer's disease (FAD) may render cells vulnerable to apoptosis. The mechanism by which PS1 regulates apoptotic cell death is not yet understood. Therefore, one aim of the present study was to clarify the involvement of PS1 in the proteolytic cascade of apoptosis and whether the cleavage of PS1 by caspase 3 has a regulatory function. Here it is demonstrated that both PS1 and PS1Cas lead to a reduced vulnerability of PC12 and Jurkat cells to different apoptotic stimuli. However, a mutation at the caspase 3 recognition site (D345A/PSmut), which inhibits cleavage of PS1 by caspase 3, showed no difference in the effect of PS1 or PSCas towards apoptotic stimuli. This suggests that proteolysis of PS1 by caspase 3 is not a determinant, but only a secondary effect during apoptosis. Since several FAD mutations distributed throughout the whole PS1 gene lead to enhanced apoptosis, abolition of the antiapoptotic effect of PS1 might contribute to the massive neurodegeneration at an early age in FAD patients. Hence, the regulatory properties of PS1 in apoptosis may be mediated not through a caspase 3-dependent cleavage and generation of PSCas, but rather through the interaction of PS1 with other proteins involved in apoptosis.
The German financial market is often characterized as a bank-based system with strong bank-customer relationships. The corresponding notion of a housebank is closely related to the theoretical idea of relationship lending. It is the objective of this paper to provide a direct comparison between housebanks and "normal" banks as to their credit policy. Therefore, we analyze a new data set, representing a random sample of borrowers drawn from the credit portfolios of five leading German banks over a period of five years. We use credit-file data rather than industry survey data and, thus, focus the analysis on information that is directly related to actual credit decisions. In particular, we use bank-internal borrower rating data to evaluate borrower quality, and the bank's own assessment of its housebank status to control for information-intensive relationships.
This paper reviews the factors that will determine the shape of financial markets under EMU. It argues that financial markets will not be unified by the introduction of the euro. National central banks have a vested interest in preserving local idiosyncrasies (e.g. the Wechsel in Germany) and they might be allowed to do so by promoting the use of so-called tier-two assets under the common monetary policy. Moreover, a host of national regulations (prudential and fiscal) will make assets denominated in euro imperfect substitutes across borders. Prudential control will also continue to be handled differently from country to country. In the long run these national idiosyncrasies cannot survive competitive pressures in the euro area. The year 1999 will thus see the beginning of a process of unification of financial markets that will be irresistible in the long run, but might still take some time to complete.
In this paper we analyze the relation between fund performance and market share. Using three performance measures we first establish that significant differences in the risk-adjusted returns of the funds in the sample exist. Thus, investors may react to past fund performance when making their investment decisions. We estimated a model relating past performance to changes in market share and found that past performance has a significant positive effect on market share. The results of a specification test indicate that investors react to risk-adjusted returns rather than to raw returns. This suggests that investors may be more sophisticated than is often assumed.
From the mid-seventies on, the central banks of most major industrial countries switched to monetary targeting. The Bundesbank was the first central bank to take this step, making the switch at the end of 1974. This changeover to monetary targeting was due to the difficulties which the Bundesbank - like other central banks - was facing in pursuing its original strategy, and which came to a head in the early seventies, when inflation escalated. A second factor was the collapse of the Bretton Woods system of fixed exchange rates, which created the necessary scope for national monetary targeting. Finally, the advance of monetarist ideas fostered the explicit turn towards monetary targets, although the Bundesbank did not implement these in a mechanistic way. Whereas the Bundesbank has adhered to its policy of monetary targeting up to the present, nowadays monetary targeting plays only a minor role worldwide. Many central banks have switched to the strategy of direct inflation targeting. Others favour a more discretionary approach or a policy which is geared to the exchange rate. In the academic debate, monetary targeting is often presented as an outdated approach which has long since lost its basis of stable money demand. These findings give rise to a number of questions: Has monetary targeting actually become outdated? What role is played by the concrete design of this strategy, and, against this background, how easily can it be transferred to European monetary union? This paper aims to answer these questions, drawing on the particular experience which the Bundesbank has gained with monetary targeting. It seems appropriate to discuss monetary targeting by using a specific example, since this notion is not very precise. This applies, for example, to the definition of money used, the way the target is derived, the stringency applied in pursuing the target and the monetary management procedure.
In this speech (given at the CFS research conference on the Implementation of Price Stability held at the Bundesbank, Frankfurt am Main, 10-12 September 1998), John Vickers discusses theoretical and practical issues relating to inflation targeting as used in the United Kingdom during the past six years. After outlining the role of the Bank's Monetary Policy Committee, he considers the Committee's task from a theoretical perspective, before discussing the concept and measurement of domestically generated inflation.
Credit Unions are cooperative financial institutions specializing in the basic financial needs of certain groups of consumers. A distinguishing feature of credit unions is the legal requirement that members share a common bond. This organizing principle recently became the focus of national attention as the Supreme Court and the U.S. Congress took opposite sides in a controversy regarding the number of common bonds that could co-exist within the membership of a single credit union. Despite its importance, little research has been done into how common bonds affect how credit unions actually operate. We frame the issues with a simple theoretical model of credit-union formation and consolidation. To provide intuition into the flexibility of multiple-group credit unions in serving members, we simulate the model and present some comparative-static results. We then apply a semi-parametric empirical model to a large dataset drawn from federally chartered occupational credit unions in 1996 to investigate the effects of common bonds. Our results suggest that credit unions with multiple common bonds have higher participation rates than credit unions that are otherwise similar but whose membership shares a single common bond.
In this paper, I analyse the conduct-of-business rules included in the Directive on Markets in Financial Instruments (MiFID), which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise similar rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID's provisions on conduct-of-business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In section IV, I draw some general conclusions on the re-regulation carried out under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct-of-business rules to contract law, the ISD rules of conduct and the role of harmonisation.
In contrast to the class A heat stress transcription factors (Hsfs) of plants, a considerable number of Hsfs assigned to classes B and C have no evident function as transcription activators on their own. In the course of my PhD work I showed that tomato HsfB1, a heat stress-induced member of the class B Hsf family, is a novel type of transcriptional coactivator in plants. Together with class A Hsfs, e.g. tomato HsfA1, it plays an important role in efficient transcription initiation during heat stress by forming a type of enhanceosome on fragments of Hsp promoters. Characterization of the architecture of hsp promoters led to the identification of novel, complex heat stress element (HSE) clusters, which are required for optimal synergistic interactions of HsfA1 and HsfB1. In addition, HsfB1 showed synergistic activation of the expression of a subset of viral and house-keeping promoters. The CaMV35S promoter, the most widely used constitutive promoter, turned out to be the most interesting candidate for studying this effect in detail, because for most house-keeping promoters tested during this study the activators responsible for constitutive expression are not known, whereas in the case of the CaMV35S promoter they are quite well known (the bZip proteins TGA1/2). These proteins belong to the acidic activators, similar to class A Hsfs. Thus, on heat stress-inducible promoters HsfA1 or other class A Hsfs are the synergistic partners of HsfB1, whereas on house-keeping or viral promoters HsfB1 shows synergistic transcriptional activation in cooperation with the promoter-specific acidic activators, e.g. with TGA proteins on the 35S promoter. In agreement with this, binding sites for HsfB1 were identified in both house-keeping and 35S promoters. This study suggests that HsfB1 acts in the maintenance of transcription of a subset of house-keeping and viral genes during heat stress.
The coactivator function of HsfB1 depends on a single lysine residue in the GRGK motif in its CTD. This motif is highly conserved among histones as the acetylation motif, especially in histones H2A and H4. It was therefore suggested that the GRGK motif acts as a recruitment motif and, together with the acidic activator, is responsible for the corecruitment of a histone acetyl transferase (HAT). Therefore, the effect of mammalian CBP (a well-known HAT) and its plant ortholog (HAC1) on the stimulation of the synergistic reporter gene activation obtained with HsfA1 and HsfB1 was tested. In both plant and mammalian cells, CBP/HAC1 further stimulated the HsfA1/B1 synergistic effect. Corecruitment of HAC1 was proven by in vitro pull-down assays, in which the NTD of HAC1 interacted specifically with both HsfA1 and HsfB1. Formation of a ternary complex between HsfA1, HsfB1 and CBP/HAC1 was shown via coimmunoprecipitation and electrophoretic mobility shift assays (EMSA). In conclusion, the work presented in my thesis provides a new model for transcriptional regulation during ongoing heat stress.
In an attempt to search for potential candidate molecules involved in the pathogenesis of endometriosis, a novel 2910 bp cDNA encoding a putative 411 amino acid protein, shrew-1, was discovered. By computational analysis it was predicted to be an integral membrane protein with an outside-in transmembrane domain, but no homology with any known protein or domain could be identified. Antibodies raised against the putative open-reading-frame peptide of shrew-1 labelled a protein of ca. 48 kDa in extracts of shrew-1 mRNA-positive tissues and also detected ectopically expressed shrew-1. In the course of my PhD work, I confirmed the prediction that shrew-1 is indeed a transmembrane protein by expressing epitope-tagged shrew-1 in epithelial cells and analysing the transfected cells by surface biotinylation and immunoblots. Additionally, I could show that shrew-1 is able to target to E-cadherin-mediated adherens junctions and interacts with the E-cadherin-catenin complex in polarised MCF7 and MDCK cells, but not with the N-cadherin-catenin complex in non-polarised epithelial cells. A direct interaction of shrew-1 with beta-catenin could be shown in an in vitro pull-down assay. From these data, it can be assumed that shrew-1 might play a role in the function and/or regulation of the dynamics of E-cadherin-mediated junctional complexes. In the next part of my thesis, I showed that stable overexpression of shrew-1 in normal MDCK cells causes changes in the morphology of the cells and turns them invasive. Furthermore, beta-catenin-dependent transcription was activated in these MDCK cells stably overexpressing shrew-1. It was probably the imbalance of shrew-1 protein at the adherens junctions that led to the misregulation of adherens junction-associated proteins, i.e. E-cadherin and beta-catenin. Caveolin-1 is another integral membrane protein that forms complexes with E-cadherin-beta-catenin complexes and also plays a role in the endocytosis of E-cadherin during junctional disruption.
By immunofluorescence and biochemical studies, caveolin-1 was identified as another interacting partner of shrew-1. However, the functional relevance of this interaction is still not clear. In conclusion, it can be said that shrew-1 interacts with the key players of invasion and metastasis, E-cadherin and caveolin-1, suggesting its possible role in these processes and making it an interesting candidate to unravel other unknown mechanisms involved in the complex process of invasion.
This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
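The core intuition behind strictness analysis by abstract reduction can be sketched in a few lines. The following Python fragment is a hypothetical illustration, not the Clean implementation or the paper's calculus: it uses only the two abstract values Bot (definitely undefined) and Top (no information), and checks strictness of a function by reducing it abstractly with Bot as one argument.

```python
# Minimal sketch of strictness detection by abstract reduction.
# Abstract values: BOT = definitely undefined, TOP = no information.
BOT, TOP = "Bot", "Top"

def abs_if(cond, then_val, else_val):
    """Abstract conditional: if the condition is Bot the result is Bot;
    otherwise the result is Bot only if both branches are Bot."""
    if cond == BOT:
        return BOT
    return BOT if (then_val == BOT and else_val == BOT) else TOP

def abs_plus(x, y):
    """(+) is strict in both arguments."""
    return BOT if BOT in (x, y) else TOP

# f x y = if x == 0 then y else f (x - 1) (y + 1), abstractly
# (one unfolding of the recursion suffices for this example):
def abs_f(x, y):
    return abs_if(x, y, abs_plus(x, y))

print(abs_f(BOT, TOP))  # -> Bot : f is strict in x
print(abs_f(TOP, BOT))  # -> Bot : f is strict in y
print(abs_f(TOP, TOP))  # -> Top
```

The paper's calculus additionally handles abstract constants such as Top and Inf as first-class expressions and proves each such abstract reduction step correct w.r.t. the contextual semantics; this sketch only conveys the "reduce with Bot and see whether Bot comes out" idea.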
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called 'lazy computation systems'. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we will show how the original work of Howe can be extended to cope with sharing. Moreover, we will demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO-actions in Haskell. We sketch the investigations of [Sab03b], where a large number of the program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to obtain a FUNDIO-compatible Haskell compiler by turning off the not-yet-investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language, FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO-interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences are defined, and a context lemma is proved. This makes it possible to study the semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show a considerable set of program transformations to be correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and are shown to be correct w.r.t. contextual equivalence. The correctness of strictness optimizations also justifies the correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output, and also to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable. However, the previous algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: i. for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; ii. the exponent of periodicity is 2^O(n), which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
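To make the notion of a context variable concrete, the following toy Python fragment (an illustration with ad hoc names, not part of the paper's algorithm) represents terms as nested tuples and shows what it means for a context — a term with exactly one hole — to solve a context-unification equation.

```python
# Toy illustration of a context-unification problem.
# A context is a term with exactly one hole; applying it plugs the hole.
HOLE = "[]"

def plug(context, term):
    """Replace the unique hole in `context` by `term`."""
    if context == HOLE:
        return term
    if isinstance(context, tuple):
        head, *args = context
        return (head, *[plug(a, term) for a in args])
    return context  # constants and first-order variables contain no hole

# Equation with context variable X:  X(a) = f(g(a), b).
# Candidate solution: X := f(g([]), b).
X = ("f", ("g", HOLE), "b")
lhs = plug(X, "a")
rhs = ("f", ("g", "a"), "b")
print(lhs == rhs)  # -> True: the substitution solves the equation
```

The hard part, which the paper addresses, is deciding solvability in general when several context variables occur, possibly nested; the stratification condition restricts how context variables may be layered above occurrences of other variables.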
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
It is well known that first-order unification is decidable, whereas second-order and higher-order unification are undecidable. Bounded second-order unification (BSOU) is second-order unification under the restriction that only a bounded number of holes is permitted in the instantiating terms for second-order variables; the size of the instantiation, however, is not restricted. In this paper, a decision algorithm for bounded second-order unification is described. This is the first non-trivial decidability result for second-order unification in which the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second-order unification (MSOU), a specialization of BSOU, is in Sigma_2^p. Since MSOU is related to word unification, this compares favourably with the best known upper bound NEXPTIME (and also with the announced upper bound PSPACE) for word unification. This supports the claim that bounded second-order unification is easier than context unification, whose decidability is currently an open question.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TEX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report on the implementation as well as a reference manual for the resulting system.
The extraction of strictness information is an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction, we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, an extension of the G-machine that has been adapted to the needs of abstract reduction.
This paper describes context analysis, an extension of strictness analysis for lazy functional languages. In particular it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented, based on abstract reduction, which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
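For readers unfamiliar with Wadler's four-point domain: it refines the two-point strictness domain for lists by distinguishing how much of a list a function demands. The following Python fragment is a rough, hypothetical illustration of that domain (not the paper's calculus, which generalizes it to infinitely many abstract values).

```python
# Wadler's four abstract values for lists, ordered BOT < INF < BOT_MEM < TOP:
BOT = 0      # undefined list
INF = 1      # infinite or partial list (spine never ends)
BOT_MEM = 2  # finite spine, but elements may be undefined
TOP = 3      # finite list of defined elements

def abs_length(xs):
    """length forces the spine but never the elements: it diverges on
    partial/infinite lists and is defined otherwise."""
    return BOT if xs <= INF else TOP  # result in the two-point domain {BOT, TOP}

def abs_sum(xs):
    """sum forces the spine and every element, so any undefined part
    of the list makes the result undefined."""
    return TOP if xs == TOP else BOT

print(abs_length(BOT_MEM))  # defined: elements are never forced
print(abs_sum(BOT_MEM))     # undefined: an undefined element poisons the sum
```

The "context" in context analysis runs this kind of reasoning backwards: given a demand on the result, the calculus derives the weakest abstract values the arguments must satisfy.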
A partial rehabilitation of side-effecting I/O: non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done in such a way that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics is a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
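Why do some standard compiler transformations break in the presence of non-determinism? A small sketch makes it tangible. Modelling a non-deterministic expression by its set of possible outcomes (an illustration in Python, not the paper's graph semantics), inlining a shared binding changes the set of observable results:

```python
# Model a non-deterministic expression by the set of its possible results.
def pick():
    """Non-deterministic choice between 0 and 1."""
    return {0, 1}

def add(xs, ys):
    """All possible sums of independently chosen values."""
    return {x + y for x in xs for y in ys}

# Call-by-need with sharing: `let x = pick() in x + x`.
# x is evaluated once and the result is shared:
shared = {x + x for x in pick()}
print(shared)   # -> {0, 2}

# After inlining x (a transformation valid in a deterministic language):
# `pick() + pick()` makes two independent choices:
inlined = add(pick(), pick())
print(inlined)  # -> {0, 1, 2}
```

Since the two outcome sets differ, inlining shared non-deterministic expressions is not a correct transformation; this is exactly the kind of equivalence the operational semantics lets one check.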
Automatic termination proofs for functional programming languages are an often-challenged problem. Most work in this area is done on strict languages: orderings for the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form, or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point when sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. Analysis is performed on a higher-order polymorphic typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
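As a rough illustration of the kind of ordering relation such proofs rest on, here is a deliberately crude structural-decrease check: it accepts a recursive definition only when every recursive call receives a strict subterm of the original pattern. The paper's abstract-reduction calculus goes far beyond this (lazy evaluation, non-terminating subexpressions, higher-order functions); the term encoding below is invented for the sketch.

```python
# Toy termination check (a simplified stand-in, not the paper's calculus):
# a definition passes if every recursive-call argument is a strict subterm
# of the pattern it matched, e.g. length (h:t) = 1 + length t.

def subterms(t):
    """Yield all strict subterms of a tuple-encoded term such as
    ('cons', ('var', 'h'), ('var', 't'))."""
    for child in t[1:]:
        if isinstance(child, tuple):
            yield child
            yield from subterms(child)

def terminates_structurally(pattern, rec_args):
    """Every recursive-call argument must be a strict subterm of `pattern`."""
    subs = set(subterms(pattern))
    return all(a in subs for a in rec_args)

# length (h:t) = 1 + length t  -- the recursive call receives t,
# a strict subterm of the pattern (h:t), so the check succeeds.
pattern = ('cons', ('var', 'h'), ('var', 't'))
```

A call that passes the whole pattern back unchanged is rejected, since a term is not a strict subterm of itself; lazily evaluated programs, where arguments may never reach a normal form, are exactly what this naive check cannot handle.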
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem decidable using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, and hence D-matching is NP-complete.
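The two axioms of D are easy to state as left-to-right rewrite rules that push products below sums. The sketch below merely applies the axioms directionally; it is emphatically not the paper's decision procedure, which transforms D-unification problems into AC1 problems with linear constant restrictions plus word equations handled by Makanin's algorithm.

```python
# The two D axioms as rewrite rules, applied left to right:
#   x*(y+z) -> x*y + x*z     and     (x+y)*z -> x*z + y*z
# Terms are variable names (strings) or triples (op, left, right).

def distribute(t):
    """Rewrite a term until no product has a sum directly below it."""
    if isinstance(t, str):
        return t
    op, a, b = t
    a, b = distribute(a), distribute(b)
    if op == '*':
        if isinstance(a, tuple) and a[0] == '+':   # (x+y)*z -> x*z + y*z
            return ('+', distribute(('*', a[1], b)),
                         distribute(('*', a[2], b)))
        if isinstance(b, tuple) and b[0] == '+':   # x*(y+z) -> x*y + x*z
            return ('+', distribute(('*', a, b[1])),
                         distribute(('*', a, b[2])))
    return (op, a, b)
```

Rewriting to a sum-of-products form like this is a correct use of the axioms, but deciding unifiability modulo D requires the full transformation described in the abstract.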
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and furthermore using monadic second-order variables that may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable, the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. It is shown that unification of stratified second-order terms is decidable by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and describe conditions for termination in the general case.
Lavater was admired and detested for his unconventional approach to theology and his rediscovery of physiognomy. He was an avid communicator and through his correspondence became known to almost all leading personalities of eighteenth century Europe, such as Goethe, Wieland and Rousseau. The more than 21,000 letters in Lavater's estate in the Zentralbibliothek Zürich display the enormous thematic variety produced during a remarkable forty years of correspondence. This unique source material is now being published for the first time. IDC Publishers makes this collection available for research to such various disciplines as theology, history, literature, arts, humanities and above all, the history of eighteenth century culture. Scope: * 9,121 letters from Lavater * 12,302 letters to Lavater * 1,850 correspondents
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. 
In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Revised Draft: January 2005, First Draft: December 8, 2004 The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
With the Council regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect May 31, 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor’s assets and to avoid forum shopping. The Insolvency Regulation, therefore, provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the ‘liquidator’ in the other Member States. The Regulation also deals with important choice of law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
Increasingly, alternative investments via hedge funds are gaining importance in Germany. Only recently has this subject been taken up in the legal literature as well, which has resulted in greater product transparency. However, German investment law, and particularly the special segment of hedge funds, is still a field dominated by practitioners. First, the present situation is outlined. In addition, a description of the current development is given, into which the practical knowledge of the author is incorporated. Finally, the hedge fund regulation intended by the legislator at the beginning of the year 2004 is legally evaluated against this background.
In response to recent developments in the financial markets and the stunning growth of the hedge fund industry in the United States, policy makers, most notably the Securities and Exchange Commission (“SEC”), are turning their attention to the regulation, or lack thereof, of hedge funds. U.S. regulators have scrutinized the hedge fund industry on several occasions in the recent past without imposing substantial regulatory constraints. Will this time be any different? The focus of the regulators’ interest has shifted. Traditionally, they approached the hedge fund industry by focusing on systemic risk to and integrity of the financial markets. The current inquiry is almost exclusively driven by investor protection concerns. What has changed? First, since 2000, new kinds of investors have poured capital into hedge funds in the United States, facilitated by the “retailization” of hedge funds through the development of funds of hedge funds and the dismal performance of the stock market. Second, in a post-Enron era, regulators and policy makers are increasingly sensitive to investor protection concerns. On May 14 and 15, 2003, the SEC held for the first time a public roundtable discussion on the single topic of hedge funds. Among the investor protection concerns highlighted were: an increase in incidents of fraud, inadequate suitability determinations by brokers who market hedge fund interests to individual investors, conflicts of interest of managers who manage mutual funds and hedge funds side-by-side, a lack of transparency that hinders investors from making informed investment decisions, layering of fees, and unbounded discretion by managers in pricing private hedge fund securities. Although there has been discussion about imposing wide-ranging restrictions on hedge funds, such as reining in short selling, requiring disclosure of long/short positions and limiting leverage, such a response would be heavy-handed and probably unnecessary.
The existing regulatory regime is largely adequate to address the most flagrant abuses. Moreover, as the hedge fund market further matures, it is likely that institutional investors will continue to weed out weak performers and mediocre or dishonest hedge fund managers. What is likely to emerge from the newest regulatory focus on investor protection is a measured response that would enhance the SEC’s enforcement and inspection authority, while leaving hedge funds’ inherent investment flexibility largely unfettered. A likely scenario, for example, might be a requirement that some, or possibly all, hedge fund sponsors register with the SEC as investment advisers. Today, most are exempt from registration, although more and more are registering to provide advice to public hedge funds and attract institutions. Registration would make it easier for the SEC to ferret out potential fraudsters in advance by reviewing the professional history of hedge fund operators, allow the SEC to bring administrative proceedings against hedge fund advisers for statutory violations and give the agency access to books and records that it does not have today. Other possible initiatives, including additional disclosure requirements for publicly offered hedge funds, are discussed below. This article addresses the question whether U.S. regulation of hedge funds is really taking a new direction. It (i) provides a brief overview of the current U.S. regulatory scheme, from which hedge funds are generally exempt, (ii) describes recent events in the United States that have contributed to regulators’ anxiety, (iii) examines the investor protection rationale for hedge fund regulation and considers whether these concerns do, in fact, merit increased regulation of hedge funds at this time, and (iv) considers the likelihood and possible scope of a potential regulatory response, principally by the SEC.
In an ideal world all investment products, including hedge funds, would be marketable to all investors. In this ideal world, all investors would fully understand the nature of the products and would be able to make an informed choice whether to invest. Of course the ideal world does not exist – the retail investment market is characterised by asymmetries of information. Product providers know most about the products on offer (or at least they should do). Investment advisers often know rather less than the provider but much more than their retail customers. Providers and intermediary advisers are understandably motivated by the desire to sell their products. There is therefore a risk that investment products will be mis-sold by investment advisers or mis-bought by ill-informed investors. This asymmetry of information is dealt with in most countries through regulation. However, the regulatory response in different countries is not necessarily the same. There are various ways in which protections can be applied, and it is important to understand that the cultural background and regulatory histories of countries flavour the way regulation has developed. This means (as will be explained in greater detail later) that some countries are better able than others to admit hedge funds to the retail sector. Following this Introduction, Section II looks at some key background issues. Section III then looks at some important questions raised by the retail hedge fund issue. Many of these are questions of balance. Balance lies at the heart of regulation of course – regulation must always balance the needs of investors with market efficiency. Understanding the “retail hedge fund” question requires particular attention to balance. Section IV then looks at the UK regime and how the FSA has answered the balance question. Section V offers some international perspectives. Section VI concludes.
It will be seen that there is no obviously right answer to the question whether hedge fund products should be marketed to retail investors. Each regulator in each jurisdiction needs to make up its own mind on how to deal with the various issues and balances. It is evident, however, that internationally there is a move towards a greater variety of retail funds. There is nothing wrong with that, provided the regulators, and the retail customers they protect, understand sufficiently what sort of protection is, or is not, being offered in the regulatory regime.
While hedge funds have been around at least since the 1940's, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998 LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions. 
Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally. Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, The President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the Fall of 1999 two bills were introduced before the U.S.
Congress directed at increasing hedge fund disclosure (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined: first, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful; and second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The very different topics included in this book provide insight into fields of ongoing semantics research.
Compelling evidence for the creation of a new form of matter has been claimed in Pb+Pb collisions at the SPS. We discuss the uniqueness of often-proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that so far none of the proposed signals, like J/psi meson production/suppression, strangeness enhancement, dileptons, and directed flow, unambiguously shows that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to pinning down the properties of hot, dense QCD matter from data.
We calculate the Gaussian radius parameters of the pion-emitting source in high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon-Plasma (QGP) to a gas of hadrons. Such a model leads to a very long-lived dissipative hadronic rescattering phase which dominates the properties of the two-pion correlation functions. The radii are found to depend only weakly on the thermalization time tau_i, the critical temperature T_c (and thus the latent heat), and the specific entropy of the QGP. The dissipative hadronic stage enforces large variations of the pion emission times around the mean. Therefore, the model calculations suggest a rapid increase of R_out/R_side as a function of K_T if a thermalized QGP were formed.
The equilibration of hot and dense nuclear matter produced in the central cell of central Au+Au collisions at RHIC (sqrt s = 200 A GeV) energies is studied within a microscopic transport model. The pressure in the cell becomes isotropic at t approx 5 fm/c after the beginning of the collision. Within the next 15 fm/c the expansion of matter in the cell proceeds almost isentropically with the entropy per baryon ratio S/A approx 150, and the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model of an ideal hadron gas indicates that the time t approx 20 fm/c may be too short to reach the fully equilibrated state. In particular, the creation of long-lived resonance-rich matter in the cell decelerates the relaxation to chemical equilibrium. This resonance-abundant state can be detected experimentally after the thermal freeze-out of particles.
The yields of strange particles are calculated with the UrQMD model for p,Pb(158 AGeV)Pb collisions and compared to experimental data. The yields are enhanced in central collisions compared to proton-induced or peripheral Pb+Pb collisions. The enhancement is due to secondary interactions. Nevertheless, only a reduction of the quark masses, or equivalently an increase of the string tension, provides an adequate description of the large observed enhancement factors (WA97 and NA49). Furthermore, the yields of unstable strange resonances, such as the Lambda*(1520) resonance or the phi meson, are considerably affected by hadronic rescattering of the decay products.
The equilibration of hot and dense nuclear matter produced in the central region in central Au+Au collisions at square root s = 200A GeV is studied within the microscopic transport model UrQMD. The pressure here becomes isotropic at t approx 5 fm/c. Within the next 15 fm/c the expansion of the matter proceeds almost isentropically with the entropy per baryon ratio S/A approx 150. During this period the equation of state in the (P, epsilon)-plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model (SM) of an ideal hadron gas reveals that the time of approx 20 fm/c may be too short to attain the fully equilibrated state. Particularly, the fractions of resonances are overpopulated in contrast to the SM values. The creation of such a long-lived resonance-rich state slows down the relaxation to chemical equilibrium and can be detected experimentally.
Enhanced antiproton production in Pb(160 AGeV)+Pb reactions: evidence for quark gluon matter?
(2000)
The centrality dependence of the antiproton per participant ratio is studied in Pb(160 AGeV)+Pb reactions. Antiproton production in collisions of heavy nuclei at the CERN/SPS seems considerably enhanced as compared to conventional hadronic physics, given by the antiproton production rates in pp reactions and antiproton annihilation in antiproton-proton reactions. This enhancement is consistent with the observation of strong in-medium effects in other hadronic observables and may be an indication of partial restoration of chiral symmetry.
The relaxation of hot nuclear matter to an equilibrated state in the central zone of heavy-ion collisions at energies from AGS to RHIC is studied within the microscopic UrQMD model. It is found that the system reaches the (quasi)equilibrium stage for a period of 10-15 fm/c. Within this time the matter in the cell expands nearly isentropically with the entropy to baryon ratio S/A = 150 - 170. Thermodynamic characteristics of the system at AGS and at SPS energies at the endpoints of this stage are very close to the parameters of chemical and thermal freeze-out extracted from the thermal fit to experimental data. Predictions are made for the full RHIC energy sqrt s = 200 AGeV. The formation of a resonance-rich state at RHIC energies is discussed.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Effects of a density dependent pole of the rho-meson propagator on dilepton spectra are studied for different systems and centralities at CERN energies.
Dilepton spectra are calculated within the microscopic transport model UrQMD and compared to data from the CERES experiment. The invariant mass spectra in the region between 300 MeV and 600 MeV depend strongly on the mass dependence of the rho meson decay width which is not sufficiently determined by the Vector Meson Dominance model. A consistent explanation of both the recent Pb+Au data and the proton induced data can be given without additional medium effects.
The hypothesis of local equilibrium (LE) in relativistic heavy ion collisions at energies from AGS to RHIC is checked in the microscopic transport model. We find that kinetic, thermal, and chemical equilibration of the expanding hadronic matter is nearly reached in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing bombarding energies. The origin of these deviations is traced to the irreversible multiparticle decays of strings and many-body (N >= 3) decays of resonances. The violations of LE indicate that the matter in the cell reaches a steady state instead of idealized equilibrium. The entropy density in the cell is only about 6% smaller than that of the equilibrium state.
Local equilibrium in heavy ion collisions. Microscopic model versus statistical model analysis
(1999)
The assumption of local equilibrium in relativistic heavy ion collisions at energies from 10.7 AGeV (AGS) up to 160 AGeV (SPS) is checked in the microscopic transport model. Dynamical calculations performed for a central cell in the reaction are compared to the predictions of the thermal statistical model. We find that kinetic, thermal and chemical equilibration of the expanding hadronic matter are nearly approached late in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing energy, 40 AGeV and 160 AGeV. These violations of local equilibrium indicate that a fully equilibrated state is not reached, not even in the central cell of heavy ion collisions at energies above 10 AGeV. The origin of these findings is traced to the multiparticle decays of strings and many-body decays of resonances.
This work presents investigations into the applicability of four methods for the selective introduction of radicals into DNA, using EPR (electron paramagnetic resonance) spectroscopy. The selective introduction and generation of radicals in DNA is necessary in order to study J-couplings in DNA. These investigations form an important starting point towards the long-term goal of determining the exchange coupling constant J in biradical DNA and correlating it with the charge-transfer rate constant kCT. Stable aromatic nitroxides: Simulations of room-temperature CW X-band EPR spectra of five different aromatic nitroxides, which are potential DNA intercalators, were carried out. The aromatic nitroxides show resolved hyperfine couplings, which lead to the conclusion that the spin density is delocalised to a high degree, permitting the use of these compounds for measuring J-couplings in biradical DNA. Transient guanine radicals: Transient guanine radicals are generated selectively in DNA by the flash-quench technique, in which optically excitable ruthenium intercalators are used. Transient thymyl radicals from UV-irradiated 4'-pivaloyl thymidine: Photoinduced processes are investigated that are generated by irradiation of thymine nucleosides carrying the optically cleavable pivaloyl group at the 4' position. This nucleoside was specifically designed to inject electron holes into DNA. This work shows that the compound can be used to selectively reduce a thymine base. Transient thymyl radicals generated by a novel modified thymine after UV irradiation: Photoinduced processes generated by irradiation of a similar thymidine nucleoside are investigated here.
This thymidine nucleoside was modified by attaching the optically cleavable pivaloyl group to a side chain at the C6 position of the thymine base. The thymine base was specifically designed to inject electrons into DNA. This work confirms that an excess electron can be transferred selectively onto a thymine base.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Dilepton spectra are calculated with and without shifting the rho pole. Except for S+Au collisions our calculations reproduce the CERES data.
Quantum Molecular Dynamics (QMD) calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the intermediate reaction stages show that the event shapes are more complex and that equilibrium is reached only in very special cases, but not in event samples which cover a wide range of impact parameters, as is the case in experiments. The basic features of a new molecular dynamics model (UrQMD) for heavy ion collisions from the Fermi energy regime up to the highest presently available energies are outlined.
We study the thermodynamic properties of infinite nuclear matter with the Ultrarelativistic Quantum Molecular Dynamics (URQMD) model, a semiclassical transport model, running in a box with periodic boundary conditions. It appears that the energy density rises faster than T^4 at high temperatures of T approx. 200 - 300 MeV. This indicates an increase in the number of degrees of freedom. Moreover, we have calculated direct photon production in Pb+Pb collisions at 160 GeV/u within this model. The direct photon slope from the microscopic calculation equals that from a hydrodynamical calculation without a phase transition in the equation of state of the photon source.
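The claim that the energy density rises faster than T^4 can be made concrete with the Stefan-Boltzmann relation eps = g_eff * (pi^2/30) * T^4: the effective number of degrees of freedom g_eff extracted from eps(T) grows whenever eps outpaces T^4. A minimal sketch, assuming natural units with a (hbar*c)^3 conversion between GeV^4 and GeV/fm^3 (function names and inputs are illustrative, not from the paper):

```python
import math

HBARC = 0.1973  # GeV*fm, converts between GeV^4 and GeV/fm^3

def eps_from_dof(g_eff, T):
    """Stefan-Boltzmann energy density (GeV/fm^3) for g_eff massless
    bosonic degrees of freedom at temperature T (GeV)."""
    return g_eff * math.pi**2 / 30.0 * T**4 / HBARC**3

def effective_dof(eps, T):
    """Effective degrees of freedom implied by an energy density
    eps (GeV/fm^3) at temperature T (GeV); the inverse relation."""
    return 30.0 * eps * HBARC**3 / (math.pi**2 * T**4)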
Die in Englisch verfasste Dissertation, die unter der Betreuung von Herrn Prof. Dr. H. F. de Groote, Fachbereich Mathematik, entstand, ist der Mathematischen Physik zuzuordnen. Sie behandelt Stonesche Spektren von Neumannscher Algebren, observable Funktionen sowie einige Anwendungen in der Physik. Das abschließende Kapitel liefert eine Verallgemeinerung des Kochen-Specker-Theorems. Stonesche Spektren und observable Funktionen wurden von de Groote eingeführt. Das Stonesche Spektrum einer von Neumann-Algebra ist eine Verallgemeinerung des Gelfand-Spektrums, die observablen Funktionen verallgemeinern die Gelfand-Transformierten. Da de Grootes Ergebnisse zum großen Teil unveröffentlicht sind, folgt nach dem Einleitungskapitel im zweiten Kapitel eine Übersichtsdarstellung dieser Ergebnisse. Das dritte Kapitel behandelt die Stoneschen Spektren endlicher von Neumann-Algebren. Für Algebren vom Typ In wird eine vollständige Charakterisierung des Stoneschen Spektrums entwickelt. Zu Typ-II1-Algebren werden einige Resultate vorgestellt. Das vierte Kapitel liefert. einige einfache Anwendungen des Formalismus auf die Physik. Das fünfte Kapitel gibt erstmals einen funktionalanalytischen Beweis des Kochen-Specker-Theorems und liefert die Verallgemeinerung dieses Satzes, wobei die Situation für alle von Neumann-Algebren geklärt wird.
The centrality dependence of (multi-)strange hadron abundances is studied for Pb(158 AGeV)Pb reactions and compared to p(158 GeV)Pb collisions. The microscopic transport model UrQMD is used for this analysis. The predicted Lambda/pi-, Xi-/pi- and Omega-/pi- ratios are enhanced due to rescattering in central Pb-Pb collisions as compared to peripheral Pb-Pb or p-Pb collisions. A reduction of the constituent quark masses to the current quark masses m_s ~ 230 MeV, m_q ~ 10 MeV, as motivated by chiral symmetry restoration, enhances the hyperon yields to the experimentally observed high values. Similar results are obtained by an ad hoc overall increase of the color electric field strength (effective string tension of kappa = 3 GeV/fm). The enhancement depends strongly on the kinematical cuts. The maximum enhancement is predicted around midrapidity. For Lambda's, strangeness suppression is predicted at projectile/target rapidity. For Omega's, the predicted enhancement can be as large as one order of magnitude. Comparisons of Pb-Pb data to proton induced asymmetric (p-A) collisions are hampered due to the predicted strong asymmetry in the various rapidity distributions of the different (strange) particle species. In p-Pb collisions, strangeness is locally (in rapidity) not conserved. The present comparison to the data of the WA97 and NA49 collaborations clearly supports the suggestion that conventional (free) hadronic scenarios are unable to describe the observed high (anti-)hyperon yields in central collisions. The doubling of the strangeness-to-nonstrange suppression factor, gamma_s ~ 0.65, might be interpreted as a signal of a phase of nearly massless particles.
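The quark-mass and string-tension dependence invoked here follows the Schwinger pair-creation picture, in which the probability of producing a quark pair of mass m from string breaking scales as exp(-pi m^2 / kappa). A hedged sketch of the resulting strangeness suppression factor (the constituent masses 0.5 and 0.3 GeV are standard illustrative values, not quoted in the abstract; the current masses and kappa = 3 GeV/fm are taken from it):

```python
import math

HBARC = 0.1973  # GeV*fm; converts a string tension in GeV/fm to GeV^2

def schwinger_suppression(m_s, m_q, kappa_gev_per_fm):
    """Relative probability of producing an s-sbar vs. a light q-qbar
    pair in string breaking: gamma_s = exp(-pi (m_s^2 - m_q^2) / kappa).
    Masses in GeV, string tension in GeV/fm."""
    kappa = kappa_gev_per_fm * HBARC  # GeV^2
    return math.exp(-math.pi * (m_s**2 - m_q**2) / kappa)
```

Both modifications described in the abstract raise gamma_s in the same way: reducing the masses to current-quark values at the default tension, or keeping constituent masses while raising kappa to 3 GeV/fm, gives nearly the same suppression factor.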
Directed and elliptic flow
(1999)
We compare microscopic transport model calculations to recent data on the directed and elliptic flow of various hadrons in 2 - 10 A GeV Au+Au and Pb (158 A GeV) Pb collisions. For the Au+Au excitation function, a transition from squeeze-out to an in-plane enhanced emission is consistently described with mean field potentials corresponding to a single incompressibility. For the Pb (158 A GeV) Pb system the elliptic flow prefers in-plane emission both for protons and pions; the directed flow of protons is opposite to that of the pions, which exhibit anti-flow. Strong directed transverse flow is present for protons and Lambdas in Au (6 A GeV) Au collisions as well. Both for the SPS and the AGS energies the agreement between data and calculations is remarkable.
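Directed and elliptic flow are the first two Fourier coefficients of the azimuthal distribution relative to the reaction plane, v_n = <cos(n*phi)>. A minimal sketch of extracting them from a sample of azimuthal angles (function name and samples are illustrative):

```python
import math

def flow_coefficients(phis):
    """Directed (v1) and elliptic (v2) flow from azimuthal angles phi
    (radians) measured relative to the reaction plane, v_n = <cos(n*phi)>.
    v2 > 0 signals in-plane enhanced emission, v2 < 0 signals
    out-of-plane 'squeeze-out'."""
    n = len(phis)
    v1 = sum(math.cos(p) for p in phis) / n
    v2 = sum(math.cos(2.0 * p) for p in phis) / n
    return v1, v2
```

For example, particles clustered around phi = 0 and phi = pi (in-plane) give v2 > 0, while clustering around phi = +-pi/2 gives v2 < 0.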
Microscopic calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the reaction stages of highest density and during the expansion show that the system does not reach global equilibrium. Even if a considerable amount of equilibration is assumed, the connection of the measurable final state to the macroscopic parameters, e.g. the temperature, of the transient "equilibrium" state remains ambiguous.
The determination of protein structures by NMR spectroscopy is a complex process in which resonance frequencies and signal intensities are assigned to the atoms of the protein. Determining the three-dimensional protein structure requires the following steps: sample preparation and 15N/13C isotope enrichment, acquisition of the NMR experiments, processing of the spectra, identification of the signal resonances ('peak picking'), assignment of the chemical shifts, assignment of the NOESY spectra and collection of conformational structure parameters, structure calculation, and structure refinement. Current methods for automated structure calculation use a set of computer algorithms that couple NOESY assignment and structure calculation in an iterative process. Although new types of structural parameters such as dipolar couplings, orientational information from cross-correlated relaxation rates, or structural information arising in the presence of paramagnetic centers in proteins represent important advances for protein structure calculation, distance information from NOESY spectra remains the most important basis for NMR structure determination. The large amount of time required for peak picking in NOESY spectra is mainly due to spectral overlap, noise signals, and artifacts in the NOESY spectra. More efficient automated peak picking therefore requires reliable filters to select the relevant signals. This thesis describes a new algorithm for automated protein structure calculation that includes automated peak picking of NOESY spectra denoised with wavelets. The crucial point of this algorithm is the generation of incremental peak lists from NOESY spectra processed with different wavelet-based denoising procedures.
Denoised NOESY spectra yield peak lists with different confidence ranges, which are used at different stages of the combined NOE assignment/structure calculation. The first structural model is based on strongly denoised spectra, which give the most conservative peak list containing signals that can be considered largely reliable. At later stages, peak lists from less strongly denoised spectra with a larger number of signals are used. The effect of the different denoising procedures on the completeness and correctness of the NOESY peak lists was examined in detail. By combining wavelet denoising with a new algorithm for signal integration, together with additional filters that check the consistency of the peak list (network anchoring of the spin systems and symmetrization of the peak list), fast convergence of the automated structure calculation is achieved. The new algorithm was integrated into ARIA, a widely used computer program for automated NOE assignment and structure calculation. The algorithm was validated on the monomer unit of the polysulfide-sulfur transferase (Sud) from Wolinella succinogenes, whose high-resolution solution structure had previously been determined conventionally. Besides the determination of protein solution structures, NMR spectroscopy is also a powerful tool for studying protein-ligand and protein-protein interactions. Both NMR spectra of isotope-labeled proteins and spectra of ligands can be used for inhibitor screening. In the first case, the sensitivity of the backbone 1H and 15N chemical shifts to small geometric or electrostatic changes upon ligand binding is used as an indicator.
Several screening methods that observe ligand signals are available: transfer NOEs, saturation transfer difference (STD) experiments, ePHOGSY, and diffusion-edited and NOE-based methods. Most of these techniques can be used for the rational design of inhibitory compounds. For evaluating studies involving a large number of inhibitors, efficient pattern-recognition methods such as PCA (principal component analysis) are used. PCA is suited to visualizing similarities and differences between spectra recorded with different inhibitors. The experimental data are first processed with a series of filters that, among other things, reduce artifacts caused by only small changes in chemical shifts. The most widely used filter is so-called 'bucketing', in which neighboring points are summed into one 'bucket'. To avoid the typical drawbacks of the bucketing procedure, this thesis examines the effect of wavelet denoising for preparing NMR data for PCA, using existing series of HSQC spectra of proteins with different ligands. The combination of wavelet denoising and PCA is most efficient when PCA is applied directly to the wavelet coefficients. Thresholding the wavelet coefficients in a multiscale analysis yields a compressed representation of the data that minimizes noise artifacts. Unlike bucketing, this compression is not 'blind' but adapted to the properties of the data. The new algorithm combines the advantages of a data representation in wavelet space with data visualization by PCA.
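The wavelet thresholding used here can be illustrated with a single-level Haar transform and soft thresholding of the detail coefficients. This is a toy sketch of the general technique, not the ARIA implementation or the specific wavelet basis used in the thesis:

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform of an
    even-length signal: returns (approximation, detail) coefficients."""
    s = 2.0 ** -0.5
    a = [(x[i] + x[i + 1]) * s for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) * s for i in range(0, len(x), 2)]
    return a, d

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (soft thresholding)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    """Threshold the detail coefficients of one Haar level and invert;
    small pairwise fluctuations (likely noise) are averaged away."""
    a, d = haar_step(x)
    d = soft_threshold(d, t)
    s = 2.0 ** -0.5
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) * s, (ai - di) * s]
    return out
```

With threshold zero the transform is inverted exactly; with a threshold larger than a detail coefficient, the corresponding pair of points collapses to its mean, which is the compression-plus-denoising effect described above.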
This thesis shows that PCA in wavelet space permits optimized clustering while eliminating typical artifacts. In addition, this thesis describes a de novo structure determination of the periplasmic polysulfide-sulfur transferase (Sud) from the anaerobic gram-negative bacterium Wolinella succinogenes. The Sud protein is a polysulfide-binding and -transferring enzyme that catalyzes fast polysulfide-sulfur reduction at low polysulfide concentrations. Sud is a 30 kDa homodimer that contains no prosthetic groups or heavy metal ions. Each monomer contains one cysteine, which covalently binds up to ten polysulfide sulfur (Sn2-) ions. Sud is thought to transfer the polysulfide chain to a catalytic molybdenum ion located in the active site of the membrane-bound enzyme polysulfide reductase (Psr) on its periplasm-facing side, where reductive cleavage of the chain is catalyzed. The solution structure of the Sud homodimer was determined using heteronuclear multidimensional NMR techniques. The structure is based on distance restraints derived from NOESY spectra, backbone hydrogen bonds, and torsion angles, as well as on residual dipolar couplings, which were important for refining the structure and for the relative orientation of the monomer units. In the NMR spectra of homodimers, all symmetry-related nuclei have equivalent magnetic environments, so their chemical shifts are degenerate. This symmetric degeneracy simplifies the resonance assignment problem, since only half of the nuclei need to be assigned. NOESY assignment and structure calculation, however, are complicated by the fact that intra-monomer, inter-monomer, and co-monomer (mixed) NOESY signals cannot be distinguished.
Two approaches are available to resolve the symmetry degeneracy of the NOESY data: (I) asymmetric labeling experiments to distinguish intra- from intermolecular NOESY signals, and (II) special structure-calculation methods that can handle ambiguous distance restraints. The structure presented in this thesis was calculated using the symmetry-ADR ('ambiguous distance restraints') method in combination with data from asymmetrically isotope-labeled dimers. The coordinates of the Sud dimer, together with the NMR-based structural data, were deposited in the RCSB Protein Data Bank under PDB entry 1QXN. The Sud protein shows little primary-sequence homology to other proteins with similar function and known three-dimensional structure. Known proteins are the sulfurtransferase and the rhodanese enzyme, both of which catalyze the transfer of a sulfur atom from a suitable donor to a nucleophilic acceptor (e.g., from thiosulfate to cyanide). The three-dimensional structures of these proteins show a typical alpha/beta topology and have a similar active-site environment with respect to the backbone conformation. The active-site loop surrounds the catalytic cysteine, which is present in all rhodanese enzymes, and appears to be flexible in the Sud protein (missing resonance assignments for residues 89-94). The polysulfide end protrudes from a positively charged binding pocket (residues R46, R67, K90, R94), where Sud probably makes contact with the polysulfide reductase. The structural result was confirmed by mutagenesis experiments, which showed that all active-site residues are essential for the sulfurtransferase activity of the Sud protein.
Substrate binding had previously been studied by comparing [15N,1H]-TROSY-HSQC spectra of the Sud protein in the presence and absence of the polysulfide ligand. Upon substrate binding, the local geometry of the polysulfide binding site and the dimer interface appears to change. The conformational changes and slow dynamics induced by ligand binding may trigger further polysulfide-sulfur activity. A second polysulfide-sulfur transferase protein (Str, 40 kDa), with a fivefold higher native concentration than Sud, was discovered in the periplasm of Wolinella succinogenes. Both proteins are assumed to form a polysulfide-sulfur complex, in which Str collects aqueous polysulfide and passes it to Sud, which carries out the sulfur transfer to the catalytic molybdenum ion in the active site on the periplasm-facing side of the polysulfide reductase. Chemical shift changes in [15N,1H]-TROSY-HSQC spectra show that polysulfide-sulfur transfer between Str and Sud takes place, and a possible protein-protein interaction surface could be identified. In the absence of the polysulfide substrate, no interaction between Sud and Str was observed, supporting the assumption that the two proteins interact and enable polysulfide-sulfur transfer only when polysulfide is present as the driving force.
We analyze the reaction dynamics of central Pb+Pb collisions at 160 GeV/nucleon. First we estimate the energy density pile-up at mid-rapidity and calculate its excitation function: the energy density is decomposed into hadronic and partonic contributions. A detailed analysis of the collision dynamics in the framework of a microscopic transport model shows the importance of partonic degrees of freedom and rescattering of leading (di)quarks in the early phase of the reaction for E >= 30 GeV/nucleon. The energy density reaches up to 4 GeV/fm^3, 95% of which is contained in partonic degrees of freedom. It is shown that cells of hadronic matter, after the early reaction phase, can be viewed as nearly chemically equilibrated. This matter never exceeds energy densities of 0.4 GeV/fm^3, i.e. a density above which the notion of separated hadrons loses its meaning. The final reaction stage is analyzed in terms of hadron ratios, freeze-out distributions and a source analysis for final state pions.
Thermodynamical variables and their time evolution are studied for central relativistic heavy ion collisions from 10.7 to 160 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits drastic deviations from equilibrium during the early high density phase of the collision. Local thermal and chemical equilibration of the hadronic matter seems to be established only at later stages of the quasi-isentropic expansion in the central reaction cell with volume 125 fm^3. Baryon energy spectra in this cell are reproduced by Boltzmann distributions at all collision energies for t > 10 fm/c with a unique, rapidly dropping temperature. At these times the equation of state has a simple form: P = (0.12 - 0.15) Epsilon. At SPS energies a strong deviation from chemical equilibrium is found for mesons, especially for pions, even at the late stage of the reaction. The final enhancement of pions is supported by experimental data.
Equilibrium properties of infinite relativistic hadron matter are investigated using the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model. The simulations are performed in a box with periodic boundary conditions. Equilibration times depend critically on energy and baryon densities. Energy spectra of various hadronic species are shown to be isotropic and consistent with a single temperature in equilibrium. The variation of energy density versus temperature shows a Hagedorn-like behavior with a limiting temperature of 130 +/- 10 MeV. Comparison of abundances of different particle species to ideal hadron gas model predictions shows good agreement only if detailed balance is implemented for all channels. At low energy densities, high mass resonances are not relevant; however, their importance rises with increasing energy density. The relevance of these different conceptual frameworks for any interpretation of experimental data is questioned.
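The periodic boundary conditions used in such box calculations amount to wrapping particle coordinates back into [0, L) and taking minimum-image separations when computing distances. A minimal one-dimensional sketch (not the UrQMD implementation; function names are illustrative):

```python
def wrap(pos, box_length):
    """Map a coordinate into the periodic box [0, L); a particle
    leaving one face re-enters through the opposite face."""
    return pos % box_length

def min_image(dx, box_length):
    """Minimum-image separation along one axis: the shortest of the
    periodic copies of dx, lying in (-L/2, L/2]."""
    dx = dx % box_length
    return dx - box_length if dx > box_length / 2.0 else dx
```

In three dimensions the same wrapping is applied per coordinate; collision criteria then use the minimum-image distance so that the box mimics infinite matter.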
Local kinetic and chemical equilibration is studied for Au+Au collisions at 10.7 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits dramatic deviations from equilibrium during the high density phase of the collision. Thermal and chemical equilibration of the hadronic matter seems to be established in the later stages during a quasi-isentropic expansion, observed in the central reaction cell with volume 125 fm^3. For t > 10 fm/c the hadron energy spectra in the cell are nicely reproduced by Boltzmann distributions with a common, rapidly dropping temperature. Hadron yields change drastically and at the late expansion stage follow closely those of an ideal gas statistical model. The equation of state seems to be simple at late times: P = 0.12 Epsilon. The time evolution of other thermodynamical variables in the cell is also presented.
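For on-shell particles in a cell, the kinetic pressure follows from the virial expression P = (1/3V) * sum_i p_i^2/E_i and the energy density from eps = (1/V) * sum_i E_i, so the quoted EOS coefficient is simply the ratio P/eps (1/3 for massless particles, smaller for a resonance-rich hadron gas). A sketch with illustrative inputs (not extracted from UrQMD):

```python
def eos_ratio(momenta_sq, energies, volume):
    """Effective EOS coefficient P/eps for particles in a cell, using
    the ideal-gas kinetic pressure P = (1/3V) sum p_i^2 / E_i and the
    energy density eps = (1/V) sum E_i. Units cancel in the ratio."""
    pressure = sum(p2 / e for p2, e in zip(momenta_sq, energies)) / (3.0 * volume)
    eps = sum(energies) / volume
    return pressure / eps
```

For massless particles (p^2 = E^2) the ratio is exactly 1/3; the value P ~ 0.12 eps quoted above reflects the mass stored in hadrons and resonances, which contributes to eps but not to the kinetic pressure.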
In this paper, the concepts of microscopic transport theory are introduced and the features and shortcomings of the most commonly used ansatzes are discussed. In particular, the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport model is described in great detail. Based on the same principles as QMD and RQMD, it incorporates a vastly extended collision term with full baryon-antibaryon symmetry, 55 baryon and 32 meson species. Isospin is explicitly treated for all hadrons. The range of applicability stretches from E_lab < 100 MeV/nucleon up to E_lab > 200 GeV/nucleon, allowing for a consistent calculation of excitation functions from the intermediate energy domain up to ultrarelativistic energies. The main physics topics under discussion are stopping, particle production and collective flow.
Ratios of hadronic abundances are analyzed for pp and nucleus-nucleus collisions at sqrt(s) = 20 GeV using the microscopic transport model UrQMD. Secondary interactions significantly change the primordial hadronic cocktail of the system. A comparison to data shows a strong dependence on rapidity. Without assuming thermal and chemical equilibrium, the predicted hadron yields and ratios agree with many of the data; the few observed discrepancies are discussed.
We present calculations of two-pion and two-kaon correlation functions in relativistic heavy ion collisions from a relativistic transport model that includes explicitly a first-order phase transition from a thermalized quark-gluon plasma to a hadron gas. We compare the obtained correlation radii with recent data from RHIC. The predicted R_side radii agree with data while the R_out and R_long radii are overestimated. We also address the impact of in-medium modifications, for example, a broadening of the rho-meson, on the correlation radii. In particular, the longitudinal correlation radius R_long is reduced, improving the comparison to data.
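The correlation radii discussed here parametrize a Gaussian fit to the two-particle correlation function, C2(q) = 1 + lambda * exp(-(q_out R_out)^2 - (q_side R_side)^2 - (q_long R_long)^2). A minimal sketch of that parametrization (the hbar*c factor assumes q in GeV/c and R in fm; values are illustrative):

```python
import math

HBARC = 0.1973  # GeV*fm, since q*R must be dimensionless

def hbt_correlation(q_out, q_side, q_long, r_out, r_side, r_long, lam=1.0):
    """Gaussian parametrization of the two-particle correlation
    function in the out-side-long frame; q in GeV/c, radii in fm,
    lam is the intercept (chaoticity) parameter."""
    arg = ((q_out * r_out) ** 2 + (q_side * r_side) ** 2
           + (q_long * r_long) ** 2) / HBARC ** 2
    return 1.0 + lam * math.exp(-arg)
```

Fitting this form to measured pair distributions yields the R_side, R_out and R_long values that the transport calculation is compared against; larger radii make the correlation fall off faster in the corresponding q direction.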
We calculate the kaon HBT radius parameters for high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon-Plasma to a gas of hadrons. At high transverse momenta K_T ~ 1 GeV/c direct emission from the phase boundary becomes important; the emission duration signal, i.e., the R_out/R_side ratio, and its sensitivity to T_c (and thus to the latent heat of the phase transition) are enlarged. Moreover, the QGP+hadronic rescattering transport model calculations do not yield unusually large radii (R_i < 9 fm). Finite momentum resolution effects have a strong impact on the extracted HBT parameters (R_i and lambda) as well as on the ratio R_out/R_side.
We investigate transverse hadron spectra from relativistic nucleus-nucleus collisions which reflect important aspects of the dynamics - such as the generation of pressure - in the hot and dense zone formed in the early phase of the reaction. Our analysis is performed within two independent transport approaches (HSD and UrQMD) that are based on quark, diquark, string and hadronic degrees of freedom. Both transport models show their reliability for elementary pp as well as light-ion (C+C, Si+Si) reactions. However, for central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV the measured K+/- transverse mass spectra have a larger inverse slope parameter than expected from the calculation. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding shows that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - is generated by strong partonic interactions in the early phase of central Au+Au (Pb+Pb) collisions.
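The inverse slope parameter T refers to an exponential transverse-mass spectrum, (1/m_T) dN/dm_T ~ exp(-m_T/T); given any two sampled points of such a spectrum, T can be read off directly. A minimal sketch (function name and sample points are illustrative, not data from the paper):

```python
import math

def inverse_slope(mt1, y1, mt2, y2):
    """Inverse slope parameter T (GeV) of an exponential transverse-mass
    spectrum y(m_T) = (1/m_T) dN/dm_T ~ exp(-m_T/T), extracted from two
    sampled points (mt1, y1) and (mt2, y2) with mt2 > mt1."""
    return (mt2 - mt1) / math.log(y1 / y2)
```

A "harder" measured spectrum (flatter fall-off) yields a larger T than the calculated one, which is the slope discrepancy interpreted above as missing pressure.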
We calculate the antibaryon-to-baryon ratios, anti-p/p, anti-Lambda/Lambda, anti-Xi/Xi, and anti-Omega/Omega for Au+Au collisions at RHIC (sqrt{s}_{NN}=200 GeV). The effects of strong color fields associated with an enhanced strangeness and diquark production probability and with an effective decrease of formation times are investigated. Antibaryon-to-baryon ratios increase with the color field strength. The ratios also increase with the strangeness content |S|. The netbaryon number at midrapidity considerably increases with the color field strength while the netproton number remains roughly the same. This shows that the enhanced baryon transport involves a conversion into the hyperon sector (hyperonization) which can be observed in the (Lambda - anti-Lambda)/(p - anti-p) ratio.
We make predictions for the kaon interferometry measurements in Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC). A first order phase transition from a thermalized Quark-Gluon-Plasma (QGP) to a gas of hadrons is assumed for the transport calculations. The fraction of kaons that are directly emitted from the phase boundary is considerably enhanced at large transverse momenta K_T ~ 1 GeV/c. In this kinematic region, the sensitivity of the R_out/R_side ratio to the QGP properties is enlarged. Here, the results of the 1-dimensional correlation analysis are presented. The extracted interferometry radii, depending on K_T, are not unusually large and are strongly affected by momentum resolution effects.
The disappearance of flow
(1995)
We investigate the disappearance of collective flow in the reaction plane in heavy-ion collisions within a microscopic model (QMD). A systematic study of the impact parameter dependence is performed for the system Ca+Ca. The balance energy strongly increases with impact parameter. Momentum-dependent interactions reduce the balance energies for intermediate impact parameters b ~ 4.5 fm. Dynamical negative flow is not visible in the laboratory frame but does exist in the contact frame for the heavy system Au+Au. For semi-peripheral collisions of Ca+Ca with b ~ 6.5 fm a new two-component flow is discussed. Azimuthal distributions exhibit strong collective flow signals, even at the balance energy.
We investigate hadron production as well as transverse hadron spectra in nucleus-nucleus collisions from 2 A.GeV to 21.3 A.TeV within two independent transport approaches (UrQMD and HSD) that are based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data demonstrates that both approaches agree quite well with each other and with the experimental data on hadron production. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is well described by both approaches without involving any phase transition. However, the maximum in the K+/Pi+ ratio at 20 to 30 A.GeV (the 'horn') is missed by ~ 40%. A comparison to the transverse mass spectra from pp and C+C (or Si+Si) reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV, however, the measured K+/- transverse mass spectra have a larger inverse slope parameter than expected from the calculations. The approximately constant slope of the K+/- spectra at SPS (the 'step') is not reproduced either. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding suggests that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions.
Report no. UFTP-492/1999; journal ref.: Phys. Rev. C61 (2000) 024909. We investigate flow in semi-peripheral nuclear collisions at AGS and SPS energies within macroscopic as well as microscopic transport models. The hot and dense zone assumes the shape of an ellipsoid which is tilted by an angle Theta with respect to the beam axis. If matter is close to the softest point of the equation of state, this ellipsoid expands predominantly orthogonal to the direction given by Theta. This antiflow component is responsible for the previously predicted reduction of the directed transverse momentum around the softest point of the equation of state.