We consider unification of terms under the equational theory of two-sided distributivity D, with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem decidable using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, and hence that D-matching is NP-complete.
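As an illustrative sketch of the role the distributivity axioms play (a hedged toy, not the paper's transformation algorithm), the following Python snippet applies the two axioms of D as left-to-right rewrite rules to put a term into sum-of-products form; the tuple encoding of terms is an assumption made for this example:

```python
# Assumed representation: terms are nested tuples ('*', a, b) for products
# and ('+', a, b) for sums; variables and constants are plain strings.
# distribute() applies the two axioms of D exhaustively:
#   x*(y+z) -> x*y + x*z   and   (x+y)*z -> x*z + y*z

def distribute(t):
    if isinstance(t, str):
        return t
    op, a, b = t
    a, b = distribute(a), distribute(b)
    if op == '*':
        if isinstance(b, tuple) and b[0] == '+':      # x*(y+z)
            return ('+', distribute(('*', a, b[1])),
                         distribute(('*', a, b[2])))
        if isinstance(a, tuple) and a[0] == '+':      # (x+y)*z
            return ('+', distribute(('*', a[1], b)),
                         distribute(('*', a[2], b)))
    return (op, a, b)

# Example: x*(y+z) normalizes to x*y + x*z
print(distribute(('*', 'x', ('+', 'y', 'z'))))
# -> ('+', ('*', 'x', 'y'), ('*', 'x', 'z'))
```

Terms equal modulo D share such a normal form, which is what makes the theory amenable to the AC1/word-equation decomposition described above.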
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and additionally from monadic second-order variables, which may stand for a term with one hole, and from parametric terms. We consider stratified systems, in which for every first-order and second-order variable the string of second-order variables on the path from the root of a term to every occurrence of that variable is always the same. It is shown that unification of stratified second-order terms is decidable, by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and we describe conditions for termination in the general case.
Lavater was admired and detested for his unconventional approach to theology and his rediscovery of physiognomy. He was an avid communicator and through his correspondence became known to almost all leading personalities of eighteenth-century Europe, such as Goethe, Wieland and Rousseau. The more than 21,000 letters in Lavater's estate in the Zentralbibliothek Zürich display the enormous thematic variety produced during a remarkable forty years of correspondence. This unique source material is now being published for the first time. IDC Publishers makes this collection available for research to such varied disciplines as theology, history, literature, arts, humanities and, above all, the history of eighteenth-century culture. Scope: 9,121 letters from Lavater; 12,302 letters to Lavater; 1,850 correspondents.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. 
In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Revised Draft: January 2005, First Draft: December 8, 2004. The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of ownership and control of public corporations remain true.
Taking shareholder protection seriously?: Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
With Council Regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with its 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor’s assets and to avoid forum shopping. The Insolvency Regulation therefore provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the ‘liquidator’ in the other Member States. The Regulation also deals with important choice-of-law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
Increasingly, alternative investments via hedge funds are gaining importance in Germany. Just recently, this subject was taken up in the legal literature as well, which resulted in higher product transparency. However, German investment law and, in particular, the special field of hedge funds is still dominated by practitioners. First, the present situation is outlined. In addition, a description of the current development is given, drawing on the practical knowledge of the author. Finally, the hedge fund regulation intended by the legislator at the beginning of 2004 is legally evaluated against this background.
In response to recent developments in the financial markets and the stunning growth of the hedge fund industry in the United States, policy makers, most notably the Securities and Exchange Commission (“SEC”), are turning their attention to the regulation, or lack thereof, of hedge funds. U.S. regulators have scrutinized the hedge fund industry on several occasions in the recent past without imposing substantial regulatory constraints. Will this time be any different? The focus of the regulators’ interest has shifted. Traditionally, they approached the hedge fund industry by focusing on systemic risk to and integrity of the financial markets. The current inquiry is almost exclusively driven by investor protection concerns. What has changed? First, since 2000, new kinds of investors have poured capital into hedge funds in the United States, facilitated by the “retailization” of hedge funds through the development of funds of hedge funds and the dismal performance of the stock market. Second, in a post-Enron era, regulators and policy makers are increasingly sensitive to investor protection concerns. On May 14 and 15, 2003, the SEC held for the first time a public roundtable discussion on the single topic of hedge funds. Among the investor protection concerns highlighted were: an increase in incidents of fraud, inadequate suitability determinations by brokers who market hedge fund interests to individual investors, conflicts of interest of managers who manage mutual funds and hedge funds side-by-side, a lack of transparency that hinders investors from making informed investment decisions, layering of fees, and unbounded discretion by managers in pricing private hedge fund securities. Although there has been discussion about imposing wide-ranging restrictions on hedge funds, such as reining in short selling, requiring disclosure of long/short positions and limiting leverage, such a response would be heavy-handed and probably unnecessary.
The existing regulatory regime is largely adequate to address the most flagrant abuses. Moreover, as the hedge fund market further matures, it is likely that institutional investors will continue to weed out weak performers and mediocre or dishonest hedge fund managers. What is likely to emerge from the newest regulatory focus on investor protection is a measured response that would enhance the SEC’s enforcement and inspection authority, while leaving hedge funds’ inherent investment flexibility largely unfettered. A likely scenario, for example, might be a requirement that some, or possibly all, hedge fund sponsors register with the SEC as investment advisers. Today, most are exempt from registration, although more and more are registering to provide advice to public hedge funds and attract institutions. Registration would make it easier for the SEC to ferret out potential fraudsters in advance by reviewing the professional history of hedge fund operators, allow the SEC to bring administrative proceedings against hedge fund advisers for statutory violations and give the agency access to books and records that it does not have today. Other possible initiatives, including additional disclosure requirements for publicly offered hedge funds, are discussed below. This article addresses the question whether U.S. regulation of hedge funds is really taking a new direction. It (i) provides a brief overview of the current U.S. regulatory scheme, from which hedge funds are generally exempt, (ii) describes recent events in the United States that have contributed to regulators’ anxiety, (iii) examines the investor protection rationale for hedge fund regulation and considers whether these concerns do, in fact, merit increased regulation of hedge funds at this time, and (iv) considers the likelihood and possible scope of a potential regulatory response, principally by the SEC.
In an ideal world all investment products, including hedge funds, would be marketable to all investors. In this ideal world, all investors would fully understand the nature of the products and would be able to make an informed choice whether to invest. Of course the ideal world does not exist – the retail investment market is characterised by asymmetries of information. Product providers know most about the products on offer (or at least they should do). Investment advisers often know rather less than the provider but much more than their retail customers. Providers and intermediary advisers are understandably motivated by the desire to sell their products. There is therefore a risk that investment products will be mis-sold by investment advisers or mis-bought by ill-informed investors. This asymmetry of information is dealt with in most countries through regulation. However, the regulatory response in different countries is not necessarily the same. There are various ways in which protections can be applied, and it is important to understand that the cultural background and regulatory histories of countries flavour the way regulation has developed. This means (as will be explained in greater detail later) that some countries are better able than others to admit hedge funds to the retail sector. Following this Introduction, Section II looks at some key background issues. Section III then looks at some important questions raised by the retail hedge fund issue. Many of these are questions of balance. Balance lies at the heart of regulation of course – regulation must always balance the needs of investors with market efficiency. Understanding the “retail hedge fund” question requires particular attention to balance. Section IV then looks at the UK regime and how the FSA has answered the balance question. Section V offers some international perspectives. Section VI concludes.
It will be seen that there is no obviously right answer to the question whether hedge fund products should be marketed to retail investors. Each regulator in each jurisdiction needs to make up its own mind on how to deal with the various issues and balances. It is evident, however, that internationally there is a move towards a greater variety of retail funds. There is nothing wrong with that, provided the regulators, and the retail customers they protect, understand sufficiently what sort of protection is, or is not, being offered in the regulatory regime.
While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998 LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions.
Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally. Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, The President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the Fall of 1999 two bills were introduced before the U.S.
Congress directed at increasing hedge fund disclosure (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined. First, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful. And second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
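A minimal numeric toy, not the authors' formal model, can illustrate the gaming channel: suppose the agent splits one unit of effort between productive work e and measure-inflating activity g = 1 - e, and the performance measure is m = e + k*g, where k is how strongly gaming inflates the measure. An agent paid proportionally to m simply maximizes m, while the principal values only e. All names and the functional form here are assumptions made for illustration:

```python
# Toy sketch (assumed setup, not the paper's model): the agent chooses
# productive effort e in [0, 1]; gaming effort is g = 1 - e; the imperfect
# performance measure is m = e + k * g. Paid proportionally to m, the
# agent picks e to maximize m, ignoring the principal's true output e.

def agent_choice(k, grid=101):
    # grid search over e in [0, 1] for the measure-maximizing choice
    return max((i / (grid - 1) for i in range(grid)),
               key=lambda e: e + k * (1 - e))

for k in (0.5, 1.5):
    e = agent_choice(k)
    print(f"k={k}: productive effort e={e:.2f}, principal's output={e:.2f}")
```

When k < 1 real work moves the measure more and the agent works (e = 1); once k > 1 the agent games exclusively (e = 0) and measured performance decouples completely from what the principal cares about, which is the sense in which a better-informed agent can be worse for the principal.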
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The very different topics included in this book provide insight into fields of ongoing semantics research.
Compelling evidence for the creation of a new form of matter has been claimed to be found in Pb+Pb collisions at SPS. We discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that so far none of the proposed signals, like J/psi meson production/suppression, strangeness enhancement, dileptons, and directed flow, unambiguously shows that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to pinning down the properties of hot, dense QCD matter from data.
We calculate the Gaussian radius parameters of the pion-emitting source in high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon Plasma (QGP) to a gas of hadrons. Such a model leads to a very long-lived dissipative hadronic rescattering phase which dominates the properties of the two-pion correlation functions. The radii are found to depend only weakly on the thermalization time tau_i, the critical temperature T_c (and thus the latent heat), and the specific entropy of the QGP. The dissipative hadronic stage enforces large variations of the pion emission times around the mean. Therefore, the model calculations suggest a rapid increase of R_out/R_side as a function of K_T if a thermalized QGP were formed.
The equilibration of hot and dense nuclear matter produced in the central cell of central Au+Au collisions at RHIC energies (sqrt(s) = 200 A GeV) is studied within a microscopic transport model. The pressure in the cell becomes isotropic at t approx 5 fm/c after the beginning of the collision. Within the next 15 fm/c the expansion of matter in the cell proceeds almost isentropically with the entropy per baryon ratio S/A approx 150, and the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model of an ideal hadron gas indicates that the time t approx 20 fm/c may be too short to reach the fully equilibrated state. In particular, the creation of long-lived resonance-rich matter in the cell decelerates the relaxation to chemical equilibrium. This resonance-abundant state can be detected experimentally after the thermal freeze-out of particles.
The yields of strange particles are calculated with the UrQMD model for p+Pb and Pb+Pb collisions at 158 AGeV and compared to experimental data. The yields are enhanced in central collisions compared to proton-induced or peripheral Pb+Pb collisions. The enhancement is due to secondary interactions. Nevertheless, only a reduction of the quark masses, or equivalently an increase of the string tension, provides an adequate description of the large observed enhancement factors (WA97 and NA49). Furthermore, the yields of unstable strange resonances such as the Lambda*(1520) resonance or the phi meson are considerably affected by hadronic rescattering of the decay products.
The equilibration of hot and dense nuclear matter produced in the central region in central Au+Au collisions at sqrt(s) = 200 A GeV is studied within the microscopic transport model UrQMD. The pressure here becomes isotropic at t approx 5 fm/c. Within the next 15 fm/c the expansion of the matter proceeds almost isentropically with the entropy per baryon ratio S/A approx 150. During this period the equation of state in the (P, epsilon)-plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model (SM) of an ideal hadron gas reveals that the time of approx 20 fm/c may be too short to attain the fully equilibrated state. In particular, the fractions of resonances are overpopulated in contrast to the SM values. The creation of such a long-lived resonance-rich state slows down the relaxation to chemical equilibrium and can be detected experimentally.
Enhanced antiproton production in Pb(160 AGeV)+Pb reactions: evidence for quark gluon matter?
(2000)
The centrality dependence of the antiproton per participant ratio is studied in Pb(160 AGeV)+Pb reactions. Antiproton production in collisions of heavy nuclei at the CERN/SPS seems considerably enhanced as compared to conventional hadronic physics, given by the antiproton production rates in pp collisions and the antiproton annihilation rates in pbar-p reactions. This enhancement is consistent with the observation of strong in-medium effects in other hadronic observables and may be an indication of partial restoration of chiral symmetry.
The relaxation of hot nuclear matter to an equilibrated state in the central zone of heavy-ion collisions at energies from AGS to RHIC is studied within the microscopic UrQMD model. It is found that the system reaches the (quasi)equilibrium stage for the period of 10-15 fm/c. Within this time the matter in the cell expands nearly isentropically with the entropy to baryon ratio S/A = 150 - 170. Thermodynamic characteristics of the system at AGS and at SPS energies at the endpoints of this stage are very close to the parameters of chemical and thermal freeze-out extracted from the thermal fit to experimental data. Predictions are made for the full RHIC energy sqrt(s) = 200 AGeV. The formation of a resonance-rich state at RHIC energies is discussed.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Effects of a density dependent pole of the rho-meson propagator on dilepton spectra are studied for different systems and centralities at CERN energies.
Dilepton spectra are calculated within the microscopic transport model UrQMD and compared to data from the CERES experiment. The invariant mass spectra in the region between 300 MeV and 600 MeV depend strongly on the mass dependence of the rho meson decay width which is not sufficiently determined by the Vector Meson Dominance model. A consistent explanation of both the recent Pb+Au data and the proton induced data can be given without additional medium effects.
The hypothesis of local equilibrium (LE) in relativistic heavy ion collisions at energies from AGS to RHIC is checked in the microscopic transport model. We find that kinetic, thermal, and chemical equilibration of the expanding hadronic matter is nearly reached in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing bombarding energies. The origin of these deviations is traced to the irreversible multiparticle decays of strings and many-body (N >= 3) decays of resonances. The violations of LE indicate that the matter in the cell reaches a steady state instead of idealized equilibrium. The entropy density in the cell is only about 6% smaller than that of the equilibrium state.
Local equilibrium in heavy ion collisions. Microscopic model versus statistical model analysis
(1999)
The assumption of local equilibrium in relativistic heavy ion collisions at energies from 10.7 AGeV (AGS) up to 160 AGeV (SPS) is checked in the microscopic transport model. Dynamical calculations performed for a central cell in the reaction are compared to the predictions of the thermal statistical model. We find that kinetic, thermal and chemical equilibration of the expanding hadronic matter are nearly approached late in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing energy, 40 AGeV and 160 AGeV. These violations of local equilibrium indicate that a fully equilibrated state is not reached, not even in the central cell of heavy ion collisions at energies above 10 AGeV. The origin of these findings is traced to the multiparticle decays of strings and many-body decays of resonances.
This thesis presents investigations into the applicability of four methods for the selective introduction of radicals into DNA, using EPR (electron paramagnetic resonance) spectroscopy. The selective introduction and generation of radicals in DNA is necessary in order to study J-couplings in DNA. These investigations are an important starting point towards the long-term goal of determining the exchange coupling constant J in biradical DNA and correlating it with the charge-transfer rate constant kCT. Stable aromatic nitroxides: Simulations of room-temperature CW X-band EPR spectra of five different aromatic nitroxides, which are potential DNA intercalators, were carried out. The aromatic nitroxides show resolved hyperfine couplings, leading to the conclusion that the spin density is highly delocalized, which permits the use of these compounds for measuring J-couplings in biradical DNA. Transient guanine radicals: Transient guanine radicals are generated selectively in DNA by the flash-quench technique, using optically excitable ruthenium intercalators. Transient thymyl radicals from UV-irradiated 4'-pivaloyl thymidine: Photoinduced processes are investigated that are generated by irradiation of thymine nucleosides carrying the optically cleavable pivaloyl group at the 4' position. This nucleoside was specifically designed to inject electron holes into DNA. In this work it is shown that this compound can be used to selectively reduce a thymine base. Transient thymyl radicals generated by a novel modified thymine after UV irradiation: Photoinduced processes generated by irradiation of a similar thymidine nucleoside are investigated here. This thymidine nucleoside was modified by attaching the optically cleavable pivaloyl group to a side chain at the C6 position of the thymine base. The thymine base was specifically designed to inject electrons into DNA. In this work it was confirmed that an excess electron can be selectively transferred to a thymine base.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Dilepton spectra are calculated with and without shifting the rho pole. Except for S+Au collisions our calculations reproduce the CERES data.
Quantum Molecular Dynamics (QMD) calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the intermediate reaction stages show that the event shapes are more complex and that equilibrium is reached only in very special cases, but not in event samples which cover a wide range of impact parameters, as is the case in experiments. The basic features of a new molecular dynamics model (UrQMD) for heavy ion collisions from the Fermi energy regime up to the highest presently available energies are outlined.
We study the thermodynamic properties of infinite nuclear matter with the Ultrarelativistic Quantum Molecular Dynamics (URQMD) model, a semiclassical transport model, running in a box with periodic boundary conditions. It appears that the energy density rises faster than T^4 at high temperatures of T ≈ 200 - 300 MeV. This indicates an increase in the number of degrees of freedom. Moreover, we have calculated direct photon production in Pb+Pb collisions at 160 GeV/u within this model. The direct photon slope from the microscopic calculation equals that from a hydrodynamical calculation without a phase transition in the equation of state of the photon source.
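The claim that the energy density rises faster than T^4 can be read against the Stefan-Boltzmann form eps = g (pi^2/30) T^4 for g effective massless degrees of freedom: inverting it gives an effective g(T), and a growing g signals new degrees of freedom. A minimal sketch with illustrative numbers (a pion-gas consistency check, not the paper's data):

```python
import math

def effective_dof(energy_density_gev_fm3: float, temperature_mev: float) -> float:
    """Invert the Stefan-Boltzmann relation eps = g * pi^2/30 * T^4
    (natural units, hbar*c = 197.327 MeV fm) for the effective
    number of massless degrees of freedom g."""
    hbarc = 197.327  # MeV fm
    T = temperature_mev
    # convert eps from GeV/fm^3 to MeV^4
    eps = energy_density_gev_fm3 * 1000.0 * hbarc**3
    return eps / (math.pi**2 / 30.0 * T**4)

# Consistency check: an ideal gas of massless pions (3 charge states)
# at T = 200 MeV should return g = 3 exactly.
eps_mev4 = 3 * math.pi**2 / 30 * 200.0**4
eps_gev_fm3 = eps_mev4 / 197.327**3 / 1000.0
print(effective_dof(eps_gev_fm3, 200.0))  # 3.0
```

Applied to a measured eps(T) curve, a rise of the extracted g with T would indicate the opening of additional degrees of freedom.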
This dissertation, written in English under the supervision of Prof. Dr. H. F. de Groote, Department of Mathematics, belongs to mathematical physics. It treats Stone spectra of von Neumann algebras, observable functions, and some applications in physics. The concluding chapter provides a generalization of the Kochen-Specker theorem. Stone spectra and observable functions were introduced by de Groote. The Stone spectrum of a von Neumann algebra is a generalization of the Gelfand spectrum, and observable functions generalize the Gelfand transforms. Since de Groote's results are largely unpublished, the introductory chapter is followed in the second chapter by a survey of these results. The third chapter treats the Stone spectra of finite von Neumann algebras. For algebras of type I_n, a complete characterization of the Stone spectrum is developed; for type II_1 algebras, some results are presented. The fourth chapter gives some simple applications of the formalism to physics. The fifth chapter gives, for the first time, a functional-analytic proof of the Kochen-Specker theorem and provides the generalization of this theorem, clarifying the situation for all von Neumann algebras.
The centrality dependence of (multi-)strange hadron abundances is studied for Pb(158 AGeV)Pb reactions and compared to p(158 GeV)Pb collisions. The microscopic transport model UrQMD is used for this analysis. The predicted Lambda/pi-, Xi-/pi- and Omega-/pi- ratios are enhanced due to rescattering in central Pb-Pb collisions as compared to peripheral Pb-Pb or p-Pb collisions. A reduction of the constituent quark masses to the current quark masses, m_s ~ 230 MeV, m_q ~ 10 MeV, as motivated by chiral symmetry restoration, enhances the hyperon yields to the experimentally observed high values. Similar results are obtained by an ad hoc overall increase of the color electric field strength (effective string tension of kappa = 3 GeV/fm). The enhancement depends strongly on the kinematical cuts. The maximum enhancement is predicted around midrapidity. For Lambda's, strangeness suppression is predicted at projectile/target rapidity. For Omega's, the predicted enhancement can be as large as one order of magnitude. Comparisons of Pb-Pb data to proton induced asymmetric (p-A) collisions are hampered by the predicted strong asymmetry in the various rapidity distributions of the different (strange) particle species. In p-Pb collisions, strangeness is locally (in rapidity) not conserved. The present comparison to the data of the WA97 and NA49 collaborations clearly supports the suggestion that conventional (free) hadronic scenarios are unable to describe the observed high (anti-)hyperon yields in central collisions. The doubling of the strangeness to nonstrange suppression factor, gamma_s ≈ 0.65, might be interpreted as a signal of a phase of nearly massless particles.
Directed and elliptic flow
(1999)
We compare microscopic transport model calculations to recent data on the directed and elliptic flow of various hadrons in 2 - 10 A GeV Au+Au and Pb (158 A GeV) Pb collisions. For the Au+Au excitation function, a transition from squeeze-out to in-plane enhanced emission is consistently described with mean field potentials corresponding to a single incompressibility. For the Pb (158 A GeV) Pb system the elliptic flow prefers in-plane emission both for protons and pions; the directed flow of protons is opposite to that of the pions, which exhibit anti-flow. Strong directed transverse flow is present for protons and Lambdas in Au (6 A GeV) Au collisions as well. Both for the SPS and the AGS energies the agreement between data and calculations is remarkable.
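Directed and elliptic flow are, in the standard definition, the first two Fourier coefficients of the azimuthal distribution relative to the reaction plane, v1 = <cos phi> and v2 = <cos 2phi>. A minimal sketch of this extraction on a toy event sample (illustrative flow values, not a transport calculation):

```python
import math
import random

def flow_coefficients(phis):
    """Directed (v1) and elliptic (v2) flow: the first two Fourier
    coefficients of dN/dphi ~ 1 + 2 v1 cos(phi) + 2 v2 cos(2 phi),
    with phi measured relative to the reaction plane."""
    n = len(phis)
    v1 = sum(math.cos(p) for p in phis) / n
    v2 = sum(math.cos(2 * p) for p in phis) / n
    return v1, v2

def sample(v1, v2, n):
    """Toy particle sample: accept/reject from the flow-modulated
    azimuthal distribution."""
    out = []
    envelope = 1 + 2 * abs(v1) + 2 * abs(v2)
    while len(out) < n:
        phi = random.uniform(-math.pi, math.pi)
        w = 1 + 2 * v1 * math.cos(phi) + 2 * v2 * math.cos(2 * phi)
        if random.uniform(0, envelope) < w:
            out.append(phi)
    return out

random.seed(0)
v1, v2 = flow_coefficients(sample(0.10, 0.05, 200_000))
print(round(v1, 2), round(v2, 2))  # expected near 0.10 and 0.05
```

Positive v2 corresponds to in-plane enhanced emission, negative v2 to squeeze-out, which is the transition discussed in the Au+Au excitation function above.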
Microscopic calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the reaction stages of highest density and during the expansion show that the system does not reach global equilibrium. Even if a considerable amount of equilibration is assumed, the connection of the measurable final state to the macroscopic parameters, e.g. the temperature, of the transient "equilibrium" state remains ambiguous.
The determination of protein structures by NMR spectroscopy is a complex process in which resonance frequencies and signal intensities are assigned to the atoms of the protein. Determining the three-dimensional protein structure requires the following steps: sample preparation and 15N/13C isotope enrichment, recording of the NMR experiments, processing of the spectra, identification of the signal resonances (peak picking), assignment of the chemical shifts, assignment of the NOESY spectra and collection of conformational structure parameters, structure calculation, and structure refinement. Current methods for automated structure calculation use a series of computer algorithms that combine NOESY assignment and structure calculation in an iterative process. Although new types of structural parameters such as dipolar couplings, orientational information from cross-correlated relaxation rates, or structural information arising in the presence of paramagnetic centers in proteins constitute important innovations for protein structure calculation, distance information from NOESY spectra remains the most important basis for NMR structure determination. The large amount of time required for peak picking of NOESY spectra is mainly due to spectral overlap, noise signals, and artifacts in NOESY spectra. More efficient automated peak picking therefore requires reliable filters to select the relevant signals. This thesis describes a new algorithm for automated protein structure calculation that incorporates automated peak picking of NOESY spectra denoised using wavelets. The critical point of this algorithm is the generation of incremental peak lists from NOESY spectra processed with different wavelet-based denoising procedures. Denoised NOESY spectra yield peak lists with different confidence ranges, which are used at different stages of the combined NOE assignment/structure calculation. The first structural model is based on strongly denoised spectra, which yield the most conservative peak list containing signals that can be regarded as largely reliable. At later stages, peak lists from less strongly denoised spectra with a larger number of signals are used. The effect of the various denoising procedures on the completeness and correctness of the NOESY peak lists was investigated in detail. By combining wavelet denoising with a new algorithm for signal integration, together with additional filters that check the consistency of the peak list (network anchoring of the spin systems and symmetrization of the peak list), rapid convergence of the automated structure calculation is achieved. The new algorithm was integrated into ARIA, a widely used computer program for automated NOE assignment and structure calculation. The algorithm was validated on the monomer unit of the polysulfide-sulfur transferase (Sud) from Wolinella succinogenes, whose high-resolution solution structure had previously been determined by conventional means. Besides the determination of protein solution structures, NMR spectroscopy is also a powerful tool for studying protein-ligand and protein-protein interactions. Both NMR spectra of isotope-labeled proteins and spectra of ligands can be used for inhibitor screening. In the first case, the sensitivity of the 1H and 15N chemical shifts of the protein backbone to small geometric or electrostatic changes upon ligand binding is used as an indicator. Several screening methods that observe ligand signals are available: transfer NOEs, saturation transfer difference (STD) experiments, ePHOGSY, and diffusion-edited and NOE-based methods. Most of these techniques can be used for the rational design of inhibitory compounds. For the evaluation of studies involving a large number of inhibitors, efficient pattern-recognition methods such as principal component analysis (PCA) are used. PCA is suitable for visualizing similarities and differences between spectra recorded with different inhibitors. The experimental data are first processed with a series of filters that, among other things, reduce artifacts arising from only small changes in chemical shifts. The most widely used filter is so-called bucketing, in which neighboring points are summed into one bucket. To avoid the typical drawbacks of the bucketing procedure, this thesis investigates the effect of wavelet denoising in preparing NMR data for PCA, using existing series of HSQC spectra of proteins with different ligands as an example. The combination of wavelet denoising and PCA is most efficient when PCA is applied directly to the wavelet coefficients. Thresholding of the wavelet coefficients in a multiscale analysis yields a compressed representation of the data that minimizes noise artifacts. Unlike bucketing, the compression is not 'blind' but adapted to the properties of the data. The new algorithm combines the advantages of a data representation in wavelet space with data visualization by PCA.
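The combination of wavelet thresholding and PCA on the coefficients can be sketched in a few lines. This is an illustrative toy (hand-rolled Haar transform, synthetic "spectra", hypothetical parameter values), not the thesis implementation or ARIA code:

```python
import numpy as np

def haar_coeffs(x):
    """Full Haar multiscale decomposition (length must be a power of 2).
    Returns all detail coefficients plus the final approximation."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail
        details.append(d)
        x = a
    return np.concatenate(details + [x])

def soft_threshold(c, t):
    """Soft thresholding: shrink coefficients toward zero, zero out |c| < t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def pca_scores(X, k=2):
    """Project rows of X onto the first k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy "spectra": two ligand classes differing by one peak position, plus noise.
rng = np.random.default_rng(1)
grid = np.arange(64)
def spectrum(shift):
    return np.exp(-0.5 * ((grid - 20 - shift) / 2.0) ** 2) \
           + 0.05 * rng.standard_normal(64)

X = np.array([haar_coeffs(spectrum(0)) for _ in range(10)]
             + [haar_coeffs(spectrum(8)) for _ in range(10)])
X = soft_threshold(X, 0.1)          # denoise/compress in wavelet space
scores = pca_scores(X, k=1)
# The first principal component separates the two classes by sign.
print((scores[:10, 0] > 0).all() != (scores[10:, 0] > 0).all())
```

Thresholding in wavelet space plays the role bucketing plays in the conventional workflow, but, as the text notes, it adapts the compression to the data rather than applying a fixed binning.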
This thesis shows that PCA in wavelet space enables optimized clustering while eliminating typical artifacts. Furthermore, this thesis describes a de novo structure determination of the periplasmic polysulfide-sulfur transferase (Sud) from the anaerobic gram-negative bacterium Wolinella succinogenes. The Sud protein is a polysulfide-binding and -transferring enzyme that catalyzes rapid polysulfide-sulfur reduction at low polysulfide concentrations. Sud is a 30 kDa homodimer containing no prosthetic groups or heavy metal ions. Each monomer contains one cysteine, which covalently binds up to ten polysulfide sulfur (Sn2-) ions. Sud is thought to transfer the polysulfide chain to a catalytic molybdenum ion located in the active site of the membrane-bound enzyme polysulfide reductase (Psr) on its periplasm-facing side, thereby catalyzing a reductive cleavage of the chain. The solution structure of the Sud homodimer was determined using heteronuclear multidimensional NMR techniques. The structure is based on distance restraints derived from NOESY spectra, backbone hydrogen bonds and torsion angles, as well as residual dipolar couplings, which were important for refining the structure and for the relative orientation of the monomer units. In the NMR spectra of homodimers, all symmetry-related nuclei have equivalent magnetic environments, so their chemical shifts are degenerate. The symmetric degeneracy simplifies the resonance assignment problem, since only half of the nuclei need to be assigned. NOESY assignment and structure calculation, however, are complicated by the impossibility of distinguishing between intra-monomer, inter-monomer, and co-monomer (mixed) NOESY signals. Two approaches are available to resolve the symmetry degeneracy of the NOESY data: (I) asymmetric labeling experiments to distinguish intra- from intermolecular NOESY signals, and (II) special structure calculation methods that can handle ambiguous distance restraints. The structure presented in this thesis was calculated using the symmetry-ADR (ambiguous distance restraints) method in combination with data from asymmetrically isotope-labeled dimers. The coordinates of the Sud dimer, together with the NMR-based structural data, were deposited in the RCSB Protein Data Bank under PDB entry 1QXN. The Sud protein shows little primary-sequence homology to other proteins with similar function and known three-dimensional structure. Known proteins are the sulfur transferase and the rhodanese enzyme, both of which catalyze the transfer of a sulfur atom from a suitable donor to a nucleophilic acceptor (e.g. from thiosulfate to cyanide). The three-dimensional structures of these proteins show a typical alpha/beta topology and have a similar active-site environment with respect to the backbone conformation. The active-site loop surrounds the catalytic cysteine, which is present in all rhodanese enzymes, and appears to be flexible in the Sud protein (missing resonance assignments for residues 89-94). The polysulfide end protrudes from a positively charged binding pocket (residues R46, R67, K90, R94), where Sud presumably makes contact with polysulfide reductase. The structural result was confirmed by mutagenesis experiments, which showed that all amino acid residues in the active site are essential for the sulfur transferase activity of the Sud protein. Substrate binding had previously been investigated by comparing [15N,1H]-TROSY-HSQC spectra of the Sud protein in the presence and absence of the polysulfide ligand. Upon substrate binding, the local geometry of the polysulfide binding site and of the dimer interface appears to change. The conformational changes and the slow dynamics induced by ligand binding may trigger further polysulfide-sulfur activity. A second polysulfide-sulfur transferase protein (Str, 40 kDa), with a fivefold higher native concentration than Sud, was discovered in the bacterial periplasm of Wolinella succinogenes. The two proteins are assumed to form a polysulfide-sulfur complex, with Str collecting aqueous polysulfide and passing it on to Sud, which carries out the sulfur transfer to the catalytic molybdenum ion in the active site on the periplasm-facing side of polysulfide reductase. Chemical shift changes in [15N,1H]-TROSY-HSQC spectra show that polysulfide-sulfur transfer between Str and Sud takes place. A possible protein-protein interaction surface could be identified. In the absence of the polysulfide substrate, no interactions between Sud and Str were observed, confirming the assumption that the two proteins interact and enable polysulfide-sulfur transfer only when polysulfide is present as the driving force.
We analyze the reaction dynamics of central Pb+Pb collisions at 160 GeV/nucleon. First we estimate the energy density pile-up at mid-rapidity and calculate its excitation function: The energy density is decomposed into hadronic and partonic contributions. A detailed analysis of the collision dynamics in the framework of a microscopic transport model shows the importance of partonic degrees of freedom and rescattering of leading (di)quarks in the early phase of the reaction for E >= 30 GeV/nucleon. The energy density reaches up to 4 GeV/fm^3, 95% of which are contained in partonic degrees of freedom. It is shown that cells of hadronic matter, after the early reaction phase, can be viewed as nearly chemically equilibrated. This matter never exceeds energy densities of 0.4 GeV/fm^3, i.e. a density above which the notion of separated hadrons loses its meaning. The final reaction stage is analyzed in terms of hadron ratios, freeze-out distributions and a source analysis for final state pions.
Thermodynamical variables and their time evolution are studied for central relativistic heavy ion collisions from 10.7 to 160 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits drastic deviations from equilibrium during the early high density phase of the collision. Local thermal and chemical equilibration of the hadronic matter seems to be established only at later stages of the quasi-isentropic expansion in the central reaction cell with volume 125 fm^3. Baryon energy spectra in this cell are reproduced by Boltzmann distributions at all collision energies for t > 10 fm/c with a unique rapidly dropping temperature. At these times the equation of state has a simple form: P = (0.12 - 0.15) Epsilon. At SPS energies the strong deviation from chemical equilibrium is found for mesons, especially for pions, even at the late stage of the reaction. The final enhancement of pions is supported by experimental data.
Equilibrium properties of infinite relativistic hadron matter are investigated using the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model. The simulations are performed in a box with periodic boundary conditions. Equilibration times depend critically on energy and baryon densities. Energy spectra of various hadronic species are shown to be isotropic and consistent with a single temperature in equilibrium. The variation of energy density versus temperature shows a Hagedorn-like behavior with a limiting temperature of 130 +/- 10 MeV. Comparison of abundances of different particle species to ideal hadron gas model predictions shows good agreement only if detailed balance is implemented for all channels. At low energy densities, high mass resonances are not relevant; however, their importance rises with increasing energy density. The relevance of these different conceptual frameworks for any interpretation of experimental data is questioned.
Local kinetic and chemical equilibration is studied for Au+Au collisions at 10.7 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits dramatic deviations from equilibrium during the high density phase of the collision. Thermal and chemical equilibration of the hadronic matter seems to be established in the later stages during a quasi-isentropic expansion, observed in the central reaction cell with volume 125 fm^3. For t > 10 fm/c the hadron energy spectra in the cell are nicely reproduced by Boltzmann distributions with a common rapidly dropping temperature. Hadron yields change drastically and at the late expansion stage follow closely those of an ideal gas statistical model. The equation of state seems to be simple at late times: P = 0.12 Epsilon. The time evolution of other thermodynamical variables in the cell is also presented.
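The temperature extraction from cell energy spectra can be illustrated with a toy estimator. For a massless Boltzmann gas the energy spectrum is dN/dE ~ E^2 exp(-E/T), so <E> = 3T; note that such an ideal massless gas would give P = Epsilon/3, and the softer P = 0.12 Epsilon found in the cell reflects the massive hadronic degrees of freedom. A minimal sketch with illustrative numbers (not UrQMD output):

```python
import random

def boltzmann_temperature(energies):
    """For a massless Boltzmann gas, dN/dE ~ E^2 exp(-E/T),
    so <E> = 3T and T can be estimated as <E>/3."""
    return sum(energies) / (3 * len(energies))

# Consistency check: E^2 exp(-E/T) is a Gamma(k=3, theta=T) distribution,
# so we can sample it directly and recover the input temperature.
random.seed(42)
T_true = 0.130  # GeV, an illustrative value
sample = [random.gammavariate(3, T_true) for _ in range(100_000)]
T_fit = boltzmann_temperature(sample)
print(round(T_fit, 3))  # expected close to 0.130
```

For massive hadrons one would instead fit the Boltzmann slope of the spectrum, but the moment estimator above conveys the idea of reading a single temperature off an equilibrated spectrum.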
In this paper, the concepts of microscopic transport theory are introduced and the features and shortcomings of the most commonly used ansatzes are discussed. In particular, the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport model is described in great detail. Based on the same principles as QMD and RQMD, it incorporates a vastly extended collision term with full baryon-antibaryon symmetry, 55 baryon and 32 meson species. Isospin is explicitly treated for all hadrons. The range of applicability stretches from E_lab < 100 MeV/nucleon up to E_lab > 200 GeV/nucleon, allowing for a consistent calculation of excitation functions from the intermediate energy domain up to ultrarelativistic energies. The main physics topics under discussion are stopping, particle production and collective flow.
Ratios of hadronic abundances are analyzed for pp and nucleus-nucleus collisions at sqrt(s)=20 GeV using the microscopic transport model UrQMD. Secondary interactions significantly change the primordial hadronic cocktail of the system. A comparison to data shows a strong dependence on rapidity. Without assuming thermal and chemical equilibrium, predicted hadron yields and ratios agree with many of the data, the few observed discrepancies are discussed.
We present calculations of two-pion and two-kaon correlation functions in relativistic heavy ion collisions from a relativistic transport model that includes explicitly a first-order phase transition from a thermalized quark-gluon plasma to a hadron gas. We compare the obtained correlation radii with recent data from RHIC. The predicted R_side radii agree with data while the R_out and R_long radii are overestimated. We also address the impact of in-medium modifications, for example, a broadening of the rho-meson, on the correlation radii. In particular, the longitudinal correlation radius R_long is reduced, improving the comparison to data.
We calculate the kaon HBT radius parameters for high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon-Plasma to a gas of hadrons. At high transverse momenta K_T ~ 1 GeV/c direct emission from the phase boundary becomes important; the emission duration signal, i.e., the R_out/R_side ratio, and its sensitivity to T_c (and thus to the latent heat of the phase transition) are enlarged. Moreover, the QGP+hadronic rescattering transport model calculations do not yield unusually large radii (R_i < 9 fm). Finite momentum resolution effects have a strong impact on the extracted HBT parameters (R_i and lambda) as well as on the ratio R_out/R_side.
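The radii quoted here are conventionally extracted by fitting the measured two-particle correlation function with a Gaussian (Bertsch-Pratt) parametrization, C(q) = 1 + lambda exp(-R_out^2 q_out^2 - R_side^2 q_side^2 - R_long^2 q_long^2). A minimal sketch of that form, with hypothetical radius and lambda values:

```python
import math

def bertsch_pratt(q_out, q_side, q_long, lam, r_out, r_side, r_long):
    """Gaussian (Bertsch-Pratt) parametrization of the two-particle
    correlation function; q in GeV/c, R in fm, hbar*c = 0.19733 GeV fm."""
    hbarc = 0.19733
    arg = ((r_out * q_out) ** 2 + (r_side * q_side) ** 2
           + (r_long * q_long) ** 2) / hbarc ** 2
    return 1.0 + lam * math.exp(-arg)

# At q = 0 the correlator reaches its intercept 1 + lambda; the width
# of the falloff in each q direction encodes the corresponding radius.
# Radii and lambda below are illustrative, not the paper's fit values.
print(bertsch_pratt(0, 0, 0, 0.5, 6.0, 5.0, 7.0))  # 1.5
```

The R_out/R_side ratio discussed in the abstract is read directly off the fitted widths in the out and side directions; finite momentum resolution smears C(q) near q = 0 and thus biases both lambda and the radii.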
We investigate transverse hadron spectra from relativistic nucleus-nucleus collisions which reflect important aspects of the dynamics - such as the generation of pressure - in the hot and dense zone formed in the early phase of the reaction. Our analysis is performed within two independent transport approaches (HSD and UrQMD) that are based on quark, diquark, string and hadronic degrees of freedom. Both transport models show their reliability for elementary pp as well as light-ion (C+C, Si+Si) reactions. However, for central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV the measured K+- transverse mass spectra have a larger inverse slope parameter than expected from the calculation. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding shows that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - is generated by strong partonic interactions in the early phase of central Au+Au (Pb+Pb) collisions.
We calculate the antibaryon-to-baryon ratios, anti-p/p, anti-Lambda/Lambda, anti-Xi/Xi, and anti-Omega/Omega for Au+Au collisions at RHIC (sqrt{s}_{NN} = 200 GeV). The effects of strong color fields associated with an enhanced strangeness and diquark production probability and with an effective decrease of formation times are investigated. Antibaryon-to-baryon ratios increase with the color field strength. The ratios also increase with the strangeness content |S|. The net-baryon number at midrapidity considerably increases with the color field strength while the net-proton number remains roughly the same. This shows that the enhanced baryon transport involves a conversion into the hyperon sector (hyperonization) which can be observed in the (Lambda - anti-Lambda)/(p - anti-p) ratio.
We make predictions for the kaon interferometry measurements in Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC). A first order phase transition from a thermalized Quark-Gluon-Plasma (QGP) to a gas of hadrons is assumed for the transport calculations. The fraction of kaons that are directly emitted from the phase boundary is considerably enhanced at large transverse momenta K_T ~ 1 GeV/c. In this kinematic region, the sensitivity of the R_out/R_side ratio to the QGP-properties is enlarged. Here, the results of the 1-dimensional correlation analysis are presented. The extracted interferometry radii, depending on K-Theta, are not unusually large and are strongly affected by momentum resolution effects.
The disappearance of flow
(1995)
We investigate the disappearance of collective flow in the reaction plane in heavy-ion collisions within a microscopic model (QMD). A systematic study of the impact parameter dependence is performed for the system Ca+Ca. The balance energy strongly increases with impact parameter. Momentum dependent interactions reduce the balance energies for intermediate impact parameters b ~ 4.5 fm. Dynamical negative flow is not visible in the laboratory frame but does exist in the contact frame for the heavy system Au+Au. For semi-peripheral collisions of Ca+Ca with b ~ 6.5 fm a new two-component flow is discussed. Azimuthal distributions exhibit strong collective flow signals, even at the balance energy.