While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998 LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions.
Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgment that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally. Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, the President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties, and investors. In the Fall of 1999 two bills were introduced before the U.S.
Congress directed at increasing hedge fund disclosure (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined: first, whether hedge funds pose a systemic threat to the stability of financial markets and, if so, whether additional government regulation would be useful; and second, whether existing regulation provides sufficient protection for hedge fund investors and, if not, what additional regulation is needed.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The wide range of topics included in this book provides insight into fields of ongoing semantics research.
Compelling evidence for the creation of a new form of matter has been claimed to be found in Pb+Pb collisions at the SPS. We discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that so far none of the proposed signals, such as J/psi meson production/suppression, strangeness enhancement, dileptons, and directed flow, unambiguously shows that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to pinning down the properties of hot, dense QCD matter from data.
We calculate the Gaussian radius parameters of the pion-emitting source in high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon Plasma (QGP) to a gas of hadrons. Such a model leads to a very long-lived dissipative hadronic rescattering phase which dominates the properties of the two-pion correlation functions. The radii are found to depend only weakly on the thermalization time tau_i, the critical temperature T_c (and thus the latent heat), and the specific entropy of the QGP. The dissipative hadronic stage induces large variations of the pion emission times around the mean. The model calculations therefore suggest a rapid increase of R_out/R_side as a function of K_T if a thermalized QGP were formed.
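The K_T dependence predicted above follows from the standard Gaussian-source HBT relation R_out^2 ≈ R_side^2 + beta_t^2 (Delta t)^2: a long emission duration inflates R_out but not R_side. A minimal sketch of that relation (the formula is textbook HBT interferometry; all numerical inputs are illustrative, not values from the paper):

```python
import math

def r_out_over_r_side(r_side, beta_t, delta_t):
    """Gaussian-source HBT estimate: a finite emission duration delta_t
    contributes (beta_t * delta_t)^2 to R_out^2 but not to R_side^2
    (r_side in fm, delta_t in fm/c, beta_t the transverse pair velocity)."""
    r_out = math.sqrt(r_side ** 2 + (beta_t * delta_t) ** 2)
    return r_out / r_side
```

Since the transverse pair velocity beta_t grows with the pair momentum K_T, a long-lived dissipative source makes the ratio R_out/R_side rise with K_T, which is the signature discussed above.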
The equilibration of hot and dense nuclear matter produced in the central cell of central Au+Au collisions at RHIC energies (sqrt(s) = 200 AGeV) is studied within a microscopic transport model. The pressure in the cell becomes isotropic at t approx 5 fm/c after the beginning of the collision. Within the next 15 fm/c the expansion of matter in the cell proceeds almost isentropically with an entropy per baryon ratio S/A approx 150, and the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model of an ideal hadron gas indicates that the time t approx 20 fm/c may be too short to reach the fully equilibrated state. In particular, the creation of long-lived resonance-rich matter in the cell decelerates the relaxation to chemical equilibrium. This resonance-abundant state can be detected experimentally after the thermal freeze-out of particles.
The yields of strange particles are calculated with the UrQMD model for p+Pb and Pb(158 AGeV)+Pb collisions and compared to experimental data. The yields are enhanced in central collisions compared to proton-induced or peripheral Pb+Pb collisions. The enhancement is due to secondary interactions. Nevertheless, only a reduction of the quark masses or, equivalently, an increase of the string tension provides an adequate description of the large observed enhancement factors (WA97 and NA49). Furthermore, the yields of unstable strange resonances such as the Lambda(1520) resonance or the phi meson are considerably affected by hadronic rescattering of the decay products.
The equilibration of hot and dense nuclear matter produced in the central region of central Au+Au collisions at sqrt(s) = 200 AGeV is studied within the microscopic transport model UrQMD. The pressure there becomes isotropic at t approx 5 fm/c. Within the next 15 fm/c the expansion of the matter proceeds almost isentropically with an entropy per baryon ratio S/A approx 150. During this period the equation of state in the (P, epsilon) plane has a very simple form, P = 0.15 epsilon. Comparison with the statistical model (SM) of an ideal hadron gas reveals that the time of approx 20 fm/c may be too short to attain the fully equilibrated state. In particular, the fractions of resonances are overpopulated relative to the SM values. The creation of such a long-lived resonance-rich state slows down the relaxation to chemical equilibrium and can be detected experimentally.
Enhanced antiproton production in Pb(160 AGeV)+Pb reactions: evidence for quark gluon matter?
(2000)
The centrality dependence of the antiproton-per-participant ratio is studied in Pb(160 AGeV)+Pb reactions. Antiproton production in collisions of heavy nuclei at the CERN/SPS appears considerably enhanced compared to conventional hadronic physics, as given by antiproton production rates in pp collisions and antiproton annihilation in anti-p p reactions. This enhancement is consistent with the observation of strong in-medium effects in other hadronic observables and may be an indication of partial restoration of chiral symmetry.
The relaxation of hot nuclear matter to an equilibrated state in the central zone of heavy-ion collisions at energies from AGS to RHIC is studied within the microscopic UrQMD model. It is found that the system reaches the (quasi)equilibrium stage for a period of 10-15 fm/c. Within this time the matter in the cell expands nearly isentropically with the entropy to baryon ratio S/A = 150 - 170. Thermodynamic characteristics of the system at AGS and at SPS energies at the endpoints of this stage are very close to the parameters of chemical and thermal freeze-out extracted from the thermal fit to experimental data. Predictions are made for the full RHIC energy sqrt(s) = 200 AGeV. The formation of a resonance-rich state at RHIC energies is discussed.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Effects of a density dependent pole of the rho-meson propagator on dilepton spectra are studied for different systems and centralities at CERN energies.
Dilepton spectra are calculated within the microscopic transport model UrQMD and compared to data from the CERES experiment. The invariant mass spectra in the region between 300 MeV and 600 MeV depend strongly on the mass dependence of the rho meson decay width which is not sufficiently determined by the Vector Meson Dominance model. A consistent explanation of both the recent Pb+Au data and the proton induced data can be given without additional medium effects.
The hypothesis of local equilibrium (LE) in relativistic heavy ion collisions at energies from AGS to RHIC is checked in a microscopic transport model. We find that kinetic, thermal, and chemical equilibration of the expanding hadronic matter is nearly reached in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing bombarding energies. The origin of these deviations is traced to the irreversible multiparticle decays of strings and many-body (N >= 3) decays of resonances. The violations of LE indicate that the matter in the cell reaches a steady state instead of idealized equilibrium. The entropy density in the cell is only about 6% smaller than that of the equilibrium state.
Local equilibrium in heavy ion collisions. Microscopic model versus statistical model analysis
(1999)
The assumption of local equilibrium in relativistic heavy ion collisions at energies from 10.7 AGeV (AGS) up to 160 AGeV (SPS) is checked in a microscopic transport model. Dynamical calculations performed for a central cell in the reaction are compared to the predictions of the thermal statistical model. We find that kinetic, thermal, and chemical equilibration of the expanding hadronic matter is nearly reached late in central collisions at AGS energy for t >= 10 fm/c in a central cell. At these times the equation of state may be approximated by a simple dependence P ~= (0.12-0.15) epsilon. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing energies, 40 AGeV and 160 AGeV. These violations of local equilibrium indicate that a fully equilibrated state is not reached, not even in the central cell of heavy ion collisions at energies above 10 AGeV. The origin of these findings is traced to the multiparticle decays of strings and many-body decays of resonances.
This thesis presents investigations of the applicability of four methods for the selective introduction of radicals into DNA, using EPR (electron paramagnetic resonance) spectroscopy. The selective introduction and generation of radicals in DNA is necessary in order to study J-couplings in DNA. These investigations are an important starting point toward the long-term goal of determining the exchange coupling constant J in biradical DNA and correlating it with the charge-transfer rate constant kCT. Stable aromatic nitroxides: Simulations of room-temperature CW X-band EPR spectra of five different aromatic nitroxides, which are potential DNA intercalators, were carried out. The aromatic nitroxides show resolved hyperfine couplings, which lead to the conclusion that the spin density is highly delocalized, permitting the use of these compounds for measuring J-couplings in biradical DNA. Transient guanine radicals: Transient guanine radicals are generated selectively in DNA by the flash-quench technique, which uses optically excitable ruthenium intercalators. Transient thymyl radicals from UV-irradiated 4'-pivaloyl thymidine: Photoinduced processes are investigated that are generated by irradiation of thymine nucleosides carrying the optically cleavable pivaloyl group at the 4' position. This nucleoside was specifically designed to inject electron holes into DNA. This thesis shows that the compound can be used to selectively reduce a thymine base. Transient thymyl radicals generated by a novel modified thymine after UV irradiation: Photoinduced processes generated by irradiation of a similar thymidine nucleoside are investigated here.
This thymidine nucleoside was modified by attaching the optically cleavable pivaloyl group to a side chain at the C6 position of the thymine base. The thymine base was specifically designed to inject electrons into DNA. This thesis confirms that an excess electron can be transferred selectively to a thymine base.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Dilepton spectra are calculated with and without shifting the rho pole. Except for S+Au collisions our calculations reproduce the CERES data.
Quantum Molecular Dynamics (QMD) calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the intermediate reaction stages show that the event shapes are more complex and that equilibrium is reached only in very special cases, but not in event samples which cover a wide range of impact parameters, as is the case in experiments. The basic features of a new molecular dynamics model (UrQMD) for heavy ion collisions from the Fermi energy regime up to the highest presently available energies are outlined.
We study the thermodynamic properties of infinite nuclear matter with Ultrarelativistic Quantum Molecular Dynamics (URQMD), a semiclassical transport model, running in a box with periodic boundary conditions. It appears that the energy density rises faster than T^4 at high temperatures of T approx 200 - 300 MeV. This indicates an increase in the number of degrees of freedom. Moreover, we have calculated direct photon production in Pb+Pb collisions at 160 GeV/u within this model. The direct photon slope from the microscopic calculation equals that from a hydrodynamical calculation without a phase transition in the equation of state of the photon source.
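The degrees-of-freedom argument above rests on the standard Stefan-Boltzmann relation for an ultrarelativistic ideal gas, epsilon = g_eff (pi^2/30) T^4 in natural units: growth of epsilon faster than T^4 translates directly into a rising effective g_eff. A minimal sketch of that bookkeeping (the formula is textbook; any numbers plugged in are illustrative, not from the calculation):

```python
import math

def g_eff(energy_density, temperature):
    """Effective number of degrees of freedom of an ultrarelativistic
    ideal gas, from epsilon = g_eff * (pi^2 / 30) * T^4
    (natural units: T in MeV, epsilon in MeV^4)."""
    return 30.0 * energy_density / (math.pi ** 2 * temperature ** 4)

# If epsilon(T) rises faster than T^4 between two temperatures,
# g_eff comes out larger at the higher temperature.
```

A single massless boson species gives g_eff = 1 by construction; a pion gas gives 3, and liberated quarks and gluons push g_eff far higher, which is why the faster-than-T^4 rise is read as an increase in the number of degrees of freedom.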
This dissertation, written in English under the supervision of Prof. Dr. H. F. de Groote, Department of Mathematics, belongs to mathematical physics. It treats Stone spectra of von Neumann algebras, observable functions, and some applications in physics. The final chapter provides a generalization of the Kochen-Specker theorem. Stone spectra and observable functions were introduced by de Groote. The Stone spectrum of a von Neumann algebra is a generalization of the Gelfand spectrum; the observable functions generalize the Gelfand transforms. Since de Groote's results are largely unpublished, the introductory chapter is followed in the second chapter by a survey of these results. The third chapter treats the Stone spectra of finite von Neumann algebras. For algebras of type In, a complete characterization of the Stone spectrum is developed. Some results on type II1 algebras are presented. The fourth chapter gives some simple applications of the formalism to physics. The fifth chapter gives, for the first time, a functional-analytic proof of the Kochen-Specker theorem and provides the generalization of this theorem, clarifying the situation for all von Neumann algebras.
The centrality dependence of (multi-)strange hadron abundances is studied for Pb(158 AGeV)+Pb reactions and compared to p(158 GeV)+Pb collisions. The microscopic transport model UrQMD is used for this analysis. The predicted Lambda/pi-, Xi-/pi- and Omega-/pi- ratios are enhanced due to rescattering in central Pb-Pb collisions as compared to peripheral Pb-Pb or p-Pb collisions. A reduction of the constituent quark masses to the current quark masses m_s \sim 230 MeV, m_q \sim 10 MeV, as motivated by chiral symmetry restoration, enhances the hyperon yields to the experimentally observed high values. Similar results are obtained by an ad hoc overall increase of the color electric field strength (effective string tension of kappa=3 GeV/fm). The enhancement depends strongly on the kinematical cuts. The maximum enhancement is predicted around midrapidity. For Lambdas, strangeness suppression is predicted at projectile/target rapidity. For Omegas, the predicted enhancement can be as large as one order of magnitude. Comparisons of Pb-Pb data to proton-induced asymmetric (p-A) collisions are hampered by the predicted strong asymmetry in the various rapidity distributions of the different (strange) particle species. In p-Pb collisions, strangeness is locally (in rapidity) not conserved. The present comparison to the data of the WA97 and NA49 collaborations clearly supports the suggestion that conventional (free) hadronic scenarios are unable to describe the observed high (anti-)hyperon yields in central collisions. The doubling of the strangeness to nonstrange suppression factor, gamma_s \approx 0.65, might be interpreted as a signal of a phase of nearly massless particles.
Directed and elliptic flow
(1999)
We compare microscopic transport model calculations to recent data on the directed and elliptic flow of various hadrons in 2 - 10 A GeV Au+Au and Pb (158 A GeV) Pb collisions. For the Au+Au excitation function a transition from the squeeze-out to an in-plane enhanced emission is consistently described with mean field potentials corresponding to one incompressibility. For the Pb (158 A GeV) Pb system the elliptic flow prefers in-plane emission both for protons and pions, the directed flow of protons is opposite to that of the pions, which exhibit anti-flow. Strong directed transverse flow is present for protons and Lambdas in Au (6 A GeV) Au collisions as well. Both for the SPS and the AGS energies the agreement between data and calculations is remarkable.
Microscopic calculations of central collisions between heavy nuclei are used to study fragment production and the creation of collective flow. It is shown that the final phase space distributions are compatible with the expectations from a thermally equilibrated source, which in addition exhibits a collective transverse expansion. However, the microscopic analyses of the transient states in the reaction stages of highest density and during the expansion show that the system does not reach global equilibrium. Even if a considerable amount of equilibration is assumed, the connection of the measurable final state to the macroscopic parameters, e.g. the temperature, of the transient "equilibrium" state remains ambiguous.
The determination of protein structures by NMR spectroscopy is a complex process in which resonance frequencies and signal intensities are assigned to the atoms of the protein. Determining the three-dimensional protein structure requires the following steps: sample preparation and 15N/13C isotope enrichment, acquisition of the NMR experiments, processing of the spectra, determination of the signal resonances (peak picking), assignment of the chemical shifts, assignment of the NOESY spectra and collection of conformational structure parameters, structure calculation, and structure refinement. Current methods for automated structure calculation use a set of computer algorithms that combine NOESY assignment and structure calculation in an iterative process. Although new types of structural parameters, such as dipolar couplings, orientational information from cross-correlated relaxation rates, or structural information arising in the presence of paramagnetic centers in proteins, represent important innovations for protein structure calculation, distance information from NOESY spectra remains the most important basis for NMR structure determination. The high time cost of peak picking in NOESY spectra is mainly due to spectral overlap, noise signals, and artifacts in NOESY spectra. More efficient automated peak picking therefore requires reliable filters for selecting the relevant signals. This thesis describes a new algorithm for automated protein structure calculation that includes automated peak picking of NOESY spectra denoised with wavelets. The critical point of this algorithm is the generation of incremental peak lists from NOESY spectra processed with different wavelet-based denoising procedures.
Denoised NOESY spectra yield peak lists with different confidence ranges, which are used in different steps of the combined NOE assignment/structure calculation. The first structure model is based on strongly denoised spectra, which yield the most conservative peak list, containing signals that can be assumed to be largely reliable. In later stages, peak lists from less strongly denoised spectra, containing a larger number of signals, are used. The effect of the different denoising procedures on the completeness and correctness of the NOESY peak lists was examined in detail. By combining wavelet denoising with a new algorithm for signal integration, together with additional filters that check the consistency of the peak list (network anchoring of the spin systems and symmetrization of the peak list), fast convergence of the automated structure calculation is achieved. The new algorithm was integrated into ARIA, a widely used computer program for automated NOE assignment and structure calculation. The algorithm was verified on the monomer unit of the polysulfide-sulfur transferase (Sud) from Wolinella succinogenes, whose high-resolution solution structure had previously been determined by conventional means. Besides the determination of protein solution structures, NMR spectroscopy is also a powerful tool for studying protein-ligand and protein-protein interactions. Both NMR spectra of isotope-labeled proteins and spectra of ligands can be used for inhibitor screening. In the first case, the sensitivity of the backbone 1H and 15N chemical shifts to small geometric or electrostatic changes upon ligand binding is used as an indicator.
Several ligand-observed screening methods are available: transfer NOEs, saturation transfer difference (STD) experiments, ePHOGSY, and diffusion-edited and NOE-based methods. Most of these techniques can be used for the rational design of inhibitory compounds. For evaluating studies with a large number of inhibitors, efficient pattern-recognition methods such as principal component analysis (PCA) are used. PCA is suitable for visualizing similarities and differences between spectra recorded with different inhibitors. The experimental data are first processed with a series of filters that, among other things, reduce artifacts arising from only small changes in chemical shifts. The most widely used filter is so-called bucketing, in which neighboring points are summed into a bucket. To avoid typical drawbacks of the bucketing procedure, this thesis investigates the effect of wavelet denoising for preparing NMR data for PCA, using existing series of HSQC spectra of proteins with different ligands as an example. The combination of wavelet denoising and PCA is most efficient when PCA is applied directly to the wavelet coefficients. Thresholding the wavelet coefficients in a multiscale analysis yields a compressed representation of the data that minimizes noise artifacts. Unlike bucketing, this compression is not a 'blind' compression but is adapted to the properties of the data. The new algorithm combines the advantages of a data representation in wavelet space with data visualization by PCA.
This thesis shows that PCA in wavelet space allows optimized clustering while eliminating typical artifacts. Furthermore, this thesis describes a de novo structure determination of the periplasmic polysulfide-sulfur transferase (Sud) from the anaerobic gram-negative bacterium Wolinella succinogenes. The Sud protein is a polysulfide-binding and -transferring enzyme that catalyzes fast polysulfide-sulfur reduction at low polysulfide concentration. Sud is a 30 kDa homodimer containing no prosthetic groups or heavy metal ions. Each monomer contains one cysteine, which covalently binds up to ten polysulfide-sulfur (Sn2-) ions. Sud is believed to transfer the polysulfide chain to a catalytic molybdenum ion located in the active site of the membrane-bound enzyme polysulfide reductase (Psr) on its periplasmic side, catalyzing a reductive cleavage of the chain. The solution structure of the Sud homodimer was determined using heteronuclear, multidimensional NMR techniques. The structure is based on distance restraints derived from NOESY spectra, backbone hydrogen bonds, and torsion angles, as well as residual dipolar couplings, which were important for refining the structure and for the relative orientation of the monomer units. In the NMR spectra of homodimers, all symmetry-related nuclei have equivalent magnetic environments, so their chemical shifts are degenerate. The symmetric degeneracy simplifies the resonance assignment problem, since only half of the nuclei need to be assigned. NOESY assignment and structure calculation are complicated by the impossibility of distinguishing between intra-monomer, inter-monomer, and co-monomer (mixed) NOESY signals.
Two approaches are available to resolve the symmetry degeneracy of the NOESY data: (I) asymmetric labeling experiments to distinguish intra- from intermolecular NOESY signals, and (II) special structure-calculation methods that can handle ambiguous distance restraints. The structure presented in this thesis was calculated using the symmetry-ADR (ambiguous distance restraints) method in combination with data from asymmetrically isotope-labeled dimers. The coordinates of the Sud dimer, together with the NMR-based structural data, were deposited in the RCSB Protein Data Bank under PDB entry 1QXN. The Sud protein shows little homology to the primary sequences of other proteins with similar function and known three-dimensional structure. Known proteins are the sulfurtransferase or rhodanese enzymes, both of which catalyze the transfer of a sulfur atom from a suitable donor to a nucleophilic acceptor (e.g., from thiosulfate to cyanide). The three-dimensional structures of these proteins show a typical alpha/beta topology and have a similar active-site environment with respect to the backbone conformation. The active-site loop surrounds the catalytic cysteine, which is present in all rhodanese enzymes, and appears to be flexible in the Sud protein (missing resonance assignments for residues 89-94). The polysulfide end protrudes from a positively charged binding pocket (residues R46, R67, K90, R94), where Sud probably makes contact with the polysulfide reductase. This structural result was confirmed by mutagenesis experiments, which showed that all active-site residues are essential for the sulfurtransferase activity of the Sud protein.
Substrate binding was previously studied by comparing [15N,1H]-TROSY-HSQC spectra of the Sud protein in the presence and absence of the polysulfide ligand. Upon substrate binding, the local geometry of the polysulfide binding site and of the dimer interface appears to change. The conformational changes and the slow dynamics induced by ligand binding may trigger the subsequent polysulfide-sulfur transferase activity. A second polysulfide-sulfur transferase protein (Str, 40 kDa), with a fivefold higher native concentration than Sud, was discovered in the bacterial periplasm of Wolinella succinogenes. The two proteins are assumed to form a polysulfide-sulfur complex, with Str collecting aqueous polysulfide and passing it on to Sud, which carries out the sulfur transfer to the catalytic molybdenum ion in the active site on the periplasm-facing side of the polysulfide reductase. Chemical shift changes in [15N,1H]-TROSY-HSQC spectra show that polysulfide-sulfur transfer takes place between Str and Sud, and a possible protein-protein interaction surface could be identified. In the absence of the polysulfide substrate, no interactions between Sud and Str were observed, supporting the assumption that the two proteins interact and enable polysulfide-sulfur transfer only when polysulfide is present as the driving force.
We analyze the reaction dynamics of central Pb+Pb collisions at 160 GeV/nucleon. First we estimate the energy density pile-up at mid-rapidity and calculate its excitation function: the energy density is decomposed into hadronic and partonic contributions. A detailed analysis of the collision dynamics in the framework of a microscopic transport model shows the importance of partonic degrees of freedom and rescattering of leading (di)quarks in the early phase of the reaction for E >= 30 GeV/nucleon. The energy density reaches up to 4 GeV/fm^3, 95% of which is contained in partonic degrees of freedom. It is shown that cells of hadronic matter, after the early reaction phase, can be viewed as nearly chemically equilibrated. This matter never exceeds energy densities of 0.4 GeV/fm^3, i.e. a density above which the notion of separated hadrons loses its meaning. The final reaction stage is analyzed in terms of hadron ratios, freeze-out distributions and a source analysis for final-state pions.
Thermodynamical variables and their time evolution are studied for central relativistic heavy ion collisions from 10.7 to 160 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits drastic deviations from equilibrium during the early high density phase of the collision. Local thermal and chemical equilibration of the hadronic matter seems to be established only at later stages of the quasi-isentropic expansion in the central reaction cell with volume 125 fm^3. Baryon energy spectra in this cell are reproduced by Boltzmann distributions at all collision energies for t > 10 fm/c with a unique, rapidly dropping temperature. At these times the equation of state has a simple form: P = (0.12 - 0.15) Epsilon. At SPS energies a strong deviation from chemical equilibrium is found for mesons, especially for pions, even at the late stage of the reaction. The final enhancement of pions is supported by experimental data.
Equilibrium properties of infinite relativistic hadron matter are investigated using the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model. The simulations are performed in a box with periodic boundary conditions. Equilibration times depend critically on energy and baryon densities. Energy spectra of various hadronic species are shown to be isotropic and consistent with a single temperature in equilibrium. The variation of energy density versus temperature shows a Hagedorn-like behavior with a limiting temperature of 130 +/- 10 MeV. Comparison of abundances of different particle species to ideal hadron gas model predictions shows good agreement only if detailed balance is implemented for all channels. At low energy densities, high mass resonances are not relevant; however, their importance rises with increasing energy density. The relevance of these different conceptual frameworks for any interpretation of experimental data is questioned.
Local kinetic and chemical equilibration is studied for Au+Au collisions at 10.7 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits dramatic deviations from equilibrium during the high density phase of the collision. Thermal and chemical equilibration of the hadronic matter seems to be established in the later stages during a quasi-isentropic expansion, observed in the central reaction cell with volume 125 fm^3. For t > 10 fm/c the hadron energy spectra in the cell are nicely reproduced by Boltzmann distributions with a common, rapidly dropping temperature. Hadron yields change drastically and at the late expansion stage follow closely those of an ideal gas statistical model. The equation of state seems to be simple at late times: P = 0.12 Epsilon. The time evolution of other thermodynamical variables in the cell is also presented.
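The temperature extraction via Boltzmann fits to cell energy spectra described above can be illustrated with a small numpy sketch. The spectrum here is synthetic and noise-free, not UrQMD output, and the function names are ours:

```python
import numpy as np

def boltzmann_spectrum(E, T, m):
    # Unnormalized relativistic Boltzmann energy spectrum:
    # dN/dE ~ E * p * exp(-E/T), with momentum p = sqrt(E^2 - m^2)
    p = np.sqrt(E**2 - m**2)
    return E * p * np.exp(-E / T)

def fit_temperature(E, dNdE, m):
    # Log-linear fit: log(dN/dE / (E*p)) = -E/T + const,
    # so the slope of a straight-line fit gives -1/T
    p = np.sqrt(E**2 - m**2)
    y = np.log(dNdE / (E * p))
    slope, _ = np.polyfit(E, y, 1)
    return -1.0 / slope

m_N = 0.938                          # nucleon mass in GeV
E = np.linspace(1.0, 3.0, 50)        # energy grid in GeV
spec = boltzmann_spectrum(E, 0.130, m_N)
T_fit = fit_temperature(E, spec, m_N)   # recovers T = 0.130 GeV
```

With noisy spectra the same log-linear fit would be applied to binned data; the rapidly dropping temperature in the abstract corresponds to repeating this fit at successive times t.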
In this paper, the concepts of microscopic transport theory are introduced and the features and shortcomings of the most commonly used ansatzes are discussed. In particular, the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport model is described in great detail. Based on the same principles as QMD and RQMD, it incorporates a vastly extended collision term with full baryon-antibaryon symmetry, 55 baryon and 32 meson species. Isospin is explicitly treated for all hadrons. The range of applicability stretches from E_lab < 100 MeV/nucleon up to E_lab > 200 GeV/nucleon, allowing for a consistent calculation of excitation functions from the intermediate energy domain up to ultrarelativistic energies. The main physics topics under discussion are stopping, particle production and collective flow.
Ratios of hadronic abundances are analyzed for pp and nucleus-nucleus collisions at sqrt(s) = 20 GeV using the microscopic transport model UrQMD. Secondary interactions significantly change the primordial hadronic cocktail of the system. A comparison to data shows a strong dependence on rapidity. Without assuming thermal and chemical equilibrium, predicted hadron yields and ratios agree with many of the data; the few observed discrepancies are discussed.
We present calculations of two-pion and two-kaon correlation functions in relativistic heavy ion collisions from a relativistic transport model that includes explicitly a first-order phase transition from a thermalized quark-gluon plasma to a hadron gas. We compare the obtained correlation radii with recent data from RHIC. The predicted R_side radii agree with data while the R_out and R_long radii are overestimated. We also address the impact of in-medium modifications, for example, a broadening of the rho-meson, on the correlation radii. In particular, the longitudinal correlation radius R_long is reduced, improving the comparison to data.
We calculate the kaon HBT radius parameters for high energy heavy ion collisions, assuming a first order phase transition from a thermalized Quark-Gluon-Plasma to a gas of hadrons. At high transverse momenta K_T ~ 1 GeV/c direct emission from the phase boundary becomes important; the emission duration signal, i.e., the R_out/R_side ratio, and its sensitivity to T_c (and thus to the latent heat of the phase transition) are enlarged. Moreover, the QGP+hadronic rescattering transport model calculations do not yield unusually large radii (R_i < 9 fm). Finite momentum resolution effects have a strong impact on the extracted HBT parameters (R_i and lambda) as well as on the ratio R_out/R_side.
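The radius parameters R_out, R_side and R_long refer to the standard Gaussian (Bertsch-Pratt) parameterization of the two-particle correlation function. A minimal sketch with illustrative numbers, not fit results from this study:

```python
import numpy as np

HBARC = 0.197  # GeV*fm; converts q*R to a dimensionless argument

def c2_gaussian(q_out, q_side, q_long, lam, R_out, R_side, R_long):
    # Gaussian (Bertsch-Pratt) two-particle correlation function:
    # C2(q) = 1 + lambda * exp(-sum_i (q_i * R_i / hbar*c)^2),
    # with q components in GeV/c and radii in fm
    arg = (q_out * R_out)**2 + (q_side * R_side)**2 + (q_long * R_long)**2
    return 1.0 + lam * np.exp(-arg / HBARC**2)

# at zero relative momentum the correlation reaches 1 + lambda
c2_max = c2_gaussian(0.0, 0.0, 0.0, 0.5, 6.0, 6.0, 6.0)

# emission-duration signal: R_out/R_side > 1 indicates prolonged emission
ratio = 8.0 / 6.0
```

In an experimental analysis lam and the three radii would be obtained by fitting this form to the measured correlation function, which is where the momentum resolution effects mentioned above enter.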
We investigate transverse hadron spectra from relativistic nucleus-nucleus collisions which reflect important aspects of the dynamics - such as the generation of pressure - in the hot and dense zone formed in the early phase of the reaction. Our analysis is performed within two independent transport approaches (HSD and UrQMD) that are based on quark, diquark, string and hadronic degrees of freedom. Both transport models show their reliability for elementary pp as well as light-ion (C+C, Si+Si) reactions. However, for central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV the measured K± transverse mass spectra have a larger inverse slope parameter than expected from the calculation. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding shows that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - is generated by strong partonic interactions in the early phase of central Au+Au (Pb+Pb) collisions.
We calculate the antibaryon-to-baryon ratios, anti-p/p, anti-Lambda/Lambda, anti-Xi/Xi, and anti-Omega/Omega for Au+Au collisions at RHIC (sqrt{s}_{NN} = 200 GeV). The effects of strong color fields associated with an enhanced strangeness and diquark production probability and with an effective decrease of formation times are investigated. Antibaryon-to-baryon ratios increase with the color field strength. The ratios also increase with the strangeness content |S|. The net baryon number at midrapidity increases considerably with the color field strength while the net proton number remains roughly the same. This shows that the enhanced baryon transport involves a conversion into the hyperon sector (hyperonization), which can be observed in the (Lambda - anti-Lambda)/(p - anti-p) ratio.
We make predictions for the kaon interferometry measurements in Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC). A first order phase transition from a thermalized Quark-Gluon-Plasma (QGP) to a gas of hadrons is assumed for the transport calculations. The fraction of kaons that are directly emitted from the phase boundary is considerably enhanced at large transverse momenta K_T ~ 1 GeV/c. In this kinematic region, the sensitivity of the R_out/R_side ratio to the QGP properties is enlarged. Here, the results of the 1-dimensional correlation analysis are presented. The extracted interferometry radii, depending on K_T, are not unusually large and are strongly affected by momentum resolution effects.
The disappearance of flow
(1995)
We investigate the disappearance of collective flow in the reaction plane in heavy-ion collisions within a microscopic model (QMD). A systematic study of the impact parameter dependence is performed for the system Ca+Ca. The balance energy strongly increases with impact parameter. Momentum dependent interactions reduce the balance energies for intermediate impact parameters b ~ 4.5 fm. Dynamical negative flow is not visible in the laboratory frame but does exist in the contact frame for the heavy system Au+Au. For semi-peripheral collisions of Ca+Ca with b ~ 6.5 fm a new two-component flow is discussed. Azimuthal distributions exhibit strong collective flow signals, even at the balance energy.
We investigate hadron production as well as transverse hadron spectra in nucleus-nucleus collisions from 2 A.GeV to 21.3 A.TeV within two independent transport approaches (UrQMD and HSD) that are based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data demonstrates that both approaches agree quite well with each other and with the experimental data on hadron production. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is well described by both approaches without involving any phase transition. However, the maximum in the K+/pi+ ratio at 20 to 30 A.GeV (the 'horn') is missed by ~ 40%. A comparison to the transverse mass spectra from pp and C+C (or Si+Si) reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV, however, the measured K± m_T spectra have a larger inverse slope parameter than expected from the calculations. The approximately constant slope of the K± spectra at SPS (the 'step') is not reproduced either. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding suggests that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions.
Report-no: UFTP-492/1999 Journal-ref: Phys.Rev. C61 (2000) 024909 We investigate flow in semi-peripheral nuclear collisions at AGS and SPS energies within macroscopic as well as microscopic transport models. The hot and dense zone assumes the shape of an ellipsoid which is tilted by an angle Theta with respect to the beam axis. If matter is close to the softest point of the equation of state, this ellipsoid expands predominantly orthogonal to the direction given by Theta. This antiflow component is responsible for the previously predicted reduction of the directed transverse momentum around the softest point of the equation of state.
Journal-ref: Phys.Rev. C62 (2000) 064906. We study the local equilibrium in the central V = 125 fm^3 cell in heavy-ion collisions at energies from 10.7 A GeV (AGS) to 160 A GeV (SPS) calculated in the microscopic transport model. In the present paper the hadron yields and energy spectra in the cell are compared with those of infinite nuclear matter, as calculated within the same model. The agreement between the spectra in the two systems is established for times t >= 10 fm/c in the central cell. The cell results do not deviate noticeably from the infinite matter calculations with rising incident energy, in contrast to the apparent discrepancy with predictions of the statistical model (SM) of an ideal hadron gas. The entropy of this state is found to be very close to the maximum entropy, while hadron abundances and energy spectra differ significantly from those of the SM.
To be published in J. Phys. G - Proceedings of SQM 2004: We review the results from the various hydrodynamical and transport models on the collective flow observables from AGS to RHIC energies. A critical discussion of the present status of the CERN experiments on hadron collective flow is given. We emphasize the importance of the flow excitation function from 1 to 50 A.GeV: here the hydrodynamic model has predicted the collapse of the v2-flow at ~ 10 A.GeV; at 40 A.GeV it has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as evidence for a first order phase transition at high baryon density rho_B. Moreover, the connection of the elliptic flow v2 to jet suppression is examined. It is proven experimentally that the collective flow is not faked by minijet fragmentation. Additionally, detailed transport studies show that the away-side jet suppression can only partially (< 50%) be due to hadronic rescattering. Furthermore, the change in sign of v1, v2 closer to beam rapidity is related to the occurrence of a high density first order phase transition in the RHIC data at 62.5, 130 and 200 A.GeV.
We investigate hadron production and transverse hadron spectra in nucleus-nucleus collisions from 2 A·GeV to 21.3 A·TeV within two independent transport approaches (UrQMD and HSD) based on quark, diquark, string and hadronic degrees of freedom. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is described well by both approaches without involving a phase transition. However, the maximum in the K+/pi+ ratio at 20 to 30 A·GeV (the 'horn') is missed by ~ 40%. Also, at energies above ~ 5 A·GeV, the measured K± m_T spectra have a larger inverse slope than expected from the models. Thus the pressure generated by hadronic interactions in the transport models at high energies is too low. This finding suggests that the additional pressure - as expected from lattice QCD at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central heavy-ion collisions. Finally, we discuss the emergence of density perturbations in a first-order phase transition and why they might affect relative hadron multiplicities, collective flow, and hadron mean-free paths at decoupling. A minimum in the collective flow v2 excitation function was discovered experimentally at 40 A·GeV - such a behavior had been predicted long ago as a signature for a first order phase transition.
We investigate hadron production as well as transverse hadron spectra from proton-proton, proton-nucleus and nucleus-nucleus collisions from 2 A·GeV to 21.3 A·TeV within two independent transport approaches (HSD and UrQMD) that are based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data on transverse mass spectra from pp, pA and C+C (or Si+Si) reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A·GeV, furthermore, the measured K± transverse mass spectra have a larger inverse slope parameter than expected from the default calculations. We investigate various scenarios to explore their potential effects on the K± spectra. In particular the initial-state Cronin effect is found to play a substantial role at top SPS and RHIC energies. However, the maximum in the K+/pi+ ratio at 20 to 30 A·GeV is missed by 40%, and the approximately constant slope of the K± spectra at SPS energies is not reproduced either. Our systematic analysis suggests that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential µ_q and temperature T - should be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions.
We investigate the effects of strong color fields and of the associated enhanced intrinsic transverse momenta on the phi-meson production in ultrarelativistic heavy ion collisions at RHIC. The observed consequences include a change of the spectral slopes, varying particle ratios, and also modified mean transverse momenta. In particular, the composition of the production processes of phi-mesons, that is, direct production vs. coalescence-like production, depends strongly on the strength of the color fields and intrinsic transverse momenta and thus represents a sensitive probe for their measurement.
In this paper, I investigate more closely the contribution of modal operators to the semantics of comparatives and I show that there is no need for a maximality or minimality operator. Following Kratzer's (1981, 1991) analysis of modal elements, I assume that the meaning of a modal sentence depends on a conversational background and an ordering source. For comparative environments, I demonstrate that the ordering source reduces a set of possible degrees to a single degree that is most (or least) wanted or expected, i.e., maximality and minimality readings of comparative constructions are an effect of the pragmatic meaning of the modal.
This dissertation investigates the photophysics and electronic structure of a class of novel donor-acceptor charge-transfer complexes. These compounds essentially consist of a ferrocene donor (Fc) and organic acceptors bridged by B-N bonds, which form spontaneously in this type of macromolecular system. The central subject of this work was the spectroscopic investigation of the metal-to-ligand charge transfer (MLCT) in the electronically excited state of these cationic complexes, referred to below as 'Fc-B-bpy' compounds. The present work analyzes a variety of related Fc-B-bpy derivatives. It is organized into 1) the analysis of absorption spectra from the UV to the near-infrared spectral range (250-1000 nm) of solutions, doped polymer thin films and single crystals, 2) time-resolved optical spectroscopy of the excited state on the picosecond timescale, 3) the analysis of electrochemical measurements on solutions, and 4) the evaluation of quantum-chemical calculations. For the time-resolved measurements, a complex optical spectroscopy setup with broadband femtosecond pulses and corresponding time-resolved detection methods (spectrally filtered white-light detection) was built. The results of this work demonstrate the existence of an MLCT transition with nearly complete transfer of an Fc donor electron to the B-bpy acceptor upon optical excitation. The comparative investigations of the spectroscopic properties of different derivatives provide important information for the development of novel derivatives, including related polymers, with improved spectroscopic properties.
Transient absorption measurements of selected Fc-B-bpy derivatives in solution were performed after pulsed excitation of the MLCT band (at 500 nm) over a time range of 0.1-1000 ps and a wavelength range of 460-760 nm. The results show that relaxation from the excited MLCT state to the ground state can occur on different timescales, ranging between ~18 and 900 ps. A comparison of derivatives with different conformational flexibility shows that the rigidity of the bonds between donors and acceptors is a key factor for the excited-state lifetime. If the acceptor groups can rotate relatively freely, the compound can adopt a geometry from which an efficient nonradiative transition to the ground state occurs. This finding points to a route for synthesizing novel related compounds with longer excited-state lifetimes: ensuring a rigid molecular architecture between donor and acceptor.
Invited talk at the International Workshop XXX on Gross Properties of Nuclei and Nuclear Excitations - Ultrarelativistic Heavy-Ion Collisions, Jan. 13-19, 2002, Hirschegg, Austria. Report-no: LBNL-49674. We discuss predictions for the pion and kaon interferometry measurements in relativistic heavy ion collisions at SPS and RHIC energies. In particular, we confront relativistic transport model calculations that include explicitly a first-order phase transition from a thermalized quark-gluon plasma to a hadron gas with recent data from the RHIC experiments. We critically examine the "HBT-puzzle" both from the theoretical as well as from the experimental point of view. Alternative scenarios are briefly explained.
Invited talk at the XXXIII International Symposium on Multiparticle Dynamics, Krakow, Poland, 5-11 Sept, 2003. Journal-ref: Acta Phys.Polon. B35 (2004) 23-28. We review the recent developments on microscopic transport calculations for two-particle correlations at low relative momenta in ultrarelativistic heavy ion collisions at RHIC.
Invited talk at the 7th International Conference on Strangeness in Quark Matter, SQM 2003, Atlantic Beach, North Carolina, USA, 12-17 Mar, 2003. Journal-ref: J.Phys. G30 (2004) S139-S150. We review recent developments in the field of microscopic transport model calculations for ultrarelativistic heavy ion collisions. In particular, we focus on strangeness production, for example the phi-meson and its role as a messenger of the early phase of the system evolution. Moreover, we discuss the important effects of the (soft) field properties on the multiparticle system. We outline some current problems of the models as well as possible solutions to them.
The wide-area deployment of WiFi hot spots challenges IP access providers. While providers are still searching for new profit models, both the profitability and the logistics of large-scale deployment of 802.11 wireless technology remain to be proven. Expenditure for hardware, locations, maintenance, connectivity, marketing, billing and customer care must be considered. Even for large carriers with existing infrastructure, the deployment of a large-scale WiFi infrastructure may be risky. This paper proposes a multi-level scheme for hot spot distribution and customer acquisition that reduces the financial risk, the cost of marketing and the cost of maintenance for the large-scale deployment of WiFi hot spots.
Despite the apparent stability of the wage bargaining institutions in West Germany, aggregate union membership has been declining dramatically since the early 1990s. However, aggregate gross membership numbers do not distinguish by employment status, and it is impossible to disaggregate them sufficiently. This paper uses four waves of the German Socioeconomic Panel (1985, 1989, 1993, and 1998) to perform a panel analysis of net union membership among employees. We estimate a correlated random effects probit model suggested in Chamberlain (1984) to take proper account of individual-specific effects. Our results suggest that at the individual level the propensity to be a union member has not changed considerably over time. Thus, the aggregate decline in membership is due to composition effects. We also use the estimates to predict net union density at the industry level based on the IAB employment subsample for the period 1985 to 1997. JEL classification: J5.
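One standard way to implement a Chamberlain-style correlated random effects specification is the Mundlak device: the individual-specific time means of the covariates are added as extra regressors to a pooled probit. A minimal numpy sketch of the design-matrix construction (the function name is illustrative, and this is not the paper's exact estimator):

```python
import numpy as np

def chamberlain_design(X, ids):
    # Augment time-varying covariates X (rows = person-year observations)
    # with the person-specific time means of those covariates, so that
    # a random effect correlated with X is absorbed by observed regressors.
    means = np.empty_like(X)
    for i in np.unique(ids):
        mask = ids == i
        means[mask] = X[mask].mean(axis=0)
    return np.hstack([X, means])

# two persons, two years each, one covariate
X = np.array([[1.0], [3.0], [2.0], [4.0]])
ids = np.array([0, 0, 1, 1])
Z = chamberlain_design(X, ids)   # columns: covariate, person mean
```

The augmented matrix Z would then be passed to a pooled probit estimator; the coefficients on the mean columns absorb the correlation between the random effect and the covariates.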
The paper analyses the financial structure of German inward FDI. From a tax perspective, intra-company loans granted by the parent should be all the more strongly preferred over equity the lower the tax rate of the parent and the higher the tax rate of the German affiliate. From our study of a panel of more than 8,000 non-financial affiliates in Germany, we find only small effects of the tax rate of the foreign parent. However, our empirical results show that subsidiaries that are on average profitable react more strongly to changes in the German corporate tax rate than less profitable firms do. This gives support to the frequent concern that high German taxes are partly responsible for the high levels of intra-company loans. Taxation, however, does not fully explain the high levels of intra-company borrowing. Roughly 60% of the cross-border intra-company loans turn out to be held by firms that are running losses. JEL classification: H25, F23.
This paper is a draft of the chapter "German banks and banking structure" of the forthcoming book "The German financial system". As such, the paper starts out with a description of past and present structural features of the German banking industry. Given the presented empirical evidence, it then argues that great care has to be taken when generalising structural trends from one financial system to another. Whilst conventional commercial banking is clearly in decline in the US, it is far from clear whether the dominance of banks in the German financial system has been significantly eroded over the last decades. We interpret the immense stability in intermediation ratios and financing patterns of firms between 1970 and 2000 as strong evidence for our view that the way in which and the extent to which German banks fulfil the central functions for the financial system are still consistent with the overall logic of the German financial system. In spite of the current dire business environment for financial intermediaries, we do not expect the German financial system, and its banking industry as an integral part of this system, to converge to the institutional arrangements typical of a market-oriented financial system. This version: March 25, 2003.
Initiated by the seminal work of Diamond/Dybvig (1983) and Diamond (1984), advances in the theory of financial intermediation have sharpened our understanding of the theoretical foundations of banks as special financial institutions. What makes them "unique" is the combination of accepting deposits and issuing loans. However, in recent years the notion of "disintermediation" has gained tremendous popularity, especially among American observers. These observers argue that deregulation, globalisation and advances in information technology have been eroding the role of banks as intermediaries and thus their alleged uniqueness. It is even assumed that ever more efficiently organised capital markets and specialised financial institutions that take advantage of these markets, such as mutual funds or finance companies, will lead to the demise of banks. Using a novel measurement concept based on intermediation and securitisation ratios, the present article provides evidence which shows that banking disintermediation is indeed a reality for the US financial system. This seems to indicate that American banks are not all that "unique"; they can be replaced to a considerable extent. Moreover, many observers seem to believe that what has happened in the US reflects a universal trend. However, empirical results reported in this paper indicate that such a trend has not manifested itself in other financial systems, and in particular, not in Germany or Japan. Evidence on the enormous structural differences between financial systems and the lack of unequivocal signs of convergence render any inferences from the American experience to other financial systems very problematic.
Abstract: It is commonplace in the debate on Germany's labor market problems to argue that high unemployment and low wage dispersion are related. This paper analyses the relationship between unemployment and residual wage dispersion for individuals with comparable attributes. In the conventional neoclassical point of view, wages are determined by the marginal product of the workers. Accordingly, increases in union minimum wages result in a decline of residual wage dispersion and higher unemployment. A competing view regards wage dispersion as the outcome of search frictions and the associated monopsony power of the firms. Accordingly, an increase in search frictions causes both higher unemployment and higher wage dispersion. The empirical analysis attempts to discriminate between the two hypotheses for West Germany analyzing the relationship between wage dispersion and both the level of unemployment as well as the transition rates between different labor market states. The findings are not completely consistent with either theory. However, as predicted by search theory, one robust result is that unemployment by cells is not negatively correlated with the within cell wage dispersion.
This paper evaluates the effects of Public Sponsored Training in East Germany in the context of reiterated treatments. Selection bias based on observed characteristics is corrected for by applying kernel matching based on the propensity score. We control for further selection and the presence of Ashenfelter's Dip before the program with conditional difference-in-differences estimators. Training as a first treatment shows insignificant effects on the transition rates. The effect of program sequences and the incremental effect of a second program on the reemployment probability are insignificant. However, the incremental effect on the probability to remain employed is slightly positive. JEL classification: H43, C23, J6, J64, C14.
Central wage bargaining and local wage flexibility : evidence from the entire wage distribution
(1998)
We argue that in labor markets with central wage bargaining wage flexibility varies systematically across the wage distribution: local wage flexibility is more relevant for the upper part of the wage distribution, and flexibility of wages negotiated under central wage bargaining affects the lower part of the wage distribution. Using a random sample of German social-security accounts, we estimate wage flexibility across the wage distribution by means of quantile regressions. The results support our hypothesis, as employees with low wages have significantly lower local wage flexibility than high wage employees. This effect is particularly relevant for the lower educational groups. On the other hand, employees with low wages tend to have a higher wage flexibility with respect to national unemployment.
The Box-Cox quantile regression model, using the two-stage method introduced by Chamberlain (1994) and Buchinsky (1995), provides an attractive extension of linear quantile regression techniques. However, a major numerical problem, which has not been addressed so far in the literature, arises when implementing this method. We suggest a simple solution that modifies the estimator slightly. This modification is easy to implement. The modified estimator is still √n-consistent and its asymptotic distribution can easily be derived. A simulation study confirms that the modified estimator works well.
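For reference, the underlying Box-Cox transform and its inverse can be sketched as follows. The domain restriction in the inverse illustrates the kind of numerical problem that can arise in a two-stage implementation; the paper's exact problem and its modification are not reproduced here:

```python
import math

def box_cox(y, lam):
    """Standard Box-Cox transform; requires y > 0."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def box_cox_inverse(x, lam):
    """Inverse transform; undefined when lam * x + 1 <= 0, so fitted
    values from a first-stage regression can fall outside the
    admissible range."""
    if lam == 0:
        return math.exp(x)
    base = lam * x + 1.0
    if base <= 0:
        raise ValueError("transformed value outside the admissible range")
    return base ** (1.0 / lam)
```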
This paper investigates the magnitude and the main determinants of share price reactions to buy-back announcements by German corporations. For our comprehensive sample of 224 announcements between May 1998 and April 2003 we find average cumulative abnormal returns of around -7.5% for the thirty days preceding the announcement and around +7.0% for the ten days following it. We regress post-announcement abnormal returns on multiple firm characteristics and provide evidence which supports the undervaluation-signaling hypothesis but not the excess-cash hypothesis or the tax-efficiency hypothesis. Extending prior empirical work, we also analyze price effects from initial statements by firms that they intend to seek shareholder approval for a buy-back plan. Observed cumulative abnormal returns on this initial date are in excess of 5%, implying a total average price effect of between 12% and 15% from implementing a buy-back plan. We conjecture that the German regulatory environment is the main reason why market reactions to buy-back announcements are much stronger in Germany than in other countries and conclude that initial statements by managers to seek shareholders' approval for a buy-back plan should also be subject to legal ad-hoc disclosure requirements.
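The cumulative abnormal return measure used in such event studies can be sketched in a minimal form. Here alpha and beta stand for market-model parameters that would normally be estimated over a pre-event window; the defaults below are purely illustrative:

```python
def cumulative_abnormal_return(stock_returns, market_returns,
                               alpha=0.0, beta=1.0):
    """Market-model CAR over an event window: sum of the differences
    between observed returns and the returns predicted by the market
    model alpha + beta * r_market."""
    abnormal = [r_s - (alpha + beta * r_m)
                for r_s, r_m in zip(stock_returns, market_returns)]
    return sum(abnormal)
```

Averaging CARs across events then yields the average cumulative abnormal returns reported for the pre- and post-announcement windows.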
This paper shows that abnormal stock price returns around open market repurchase announcements are about four times higher in Germany than in the US (12% versus 3%). We hypothesize that this observation can be explained by country differences in repurchase regulation. Our empirical evidence indicates that German managers primarily buy back shares to signal an undervaluation of their firm. We demonstrate that the stringent repurchase process prescribed by German law attributes a higher credibility to such a signal than lax US regulations and thereby corroborate our hypothesis.
This paper analyzes empirically the distribution of unemployment durations in West Germany before and after the mid-1980s changes in the maximum entitlement periods for unemployment benefits for elderly unemployed. The analysis is based on the comprehensive IAB employment subsample containing register panel data for about 500,000 individuals in West Germany. We analyze two proxies for unemployment since the data do not precisely measure unemployment in an economic sense. We provide a theoretical analysis of the link between the durations of nonemployment and of unemployment between jobs. Our empirical analysis finds significant changes in the distributions of nonemployment durations for older unemployed individuals. At the same time, the distribution of unemployment durations between jobs did not change in response to the reforms. Our findings are consistent with an interpretation that many firms and workers used the more beneficial laws as part of early retirement packages, but those workers who were still looking for a job did not reduce their search effort in response to the extension of the maximum entitlement periods. This interpretation is consistent with our theoretical model under plausible assumptions. JEL: C24, J64, J65
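Distributions of (possibly right-censored) unemployment durations of the kind analyzed here are commonly summarised with the Kaplan-Meier estimator; the sketch below is a generic illustration, not the paper's estimator:

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve for right-censored durations.

    `durations` are observed spell lengths; `events[i]` is 1 if an exit
    from the spell was observed and 0 if the spell was censored.
    Returns a list of (time, survival probability) pairs.
    """
    event_times = sorted({t for t, e in zip(durations, events) if e})
    surv = 1.0
    curve = []
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)
        exits = sum(1 for d, e in zip(durations, events) if d == t and e)
        surv *= 1.0 - exits / at_risk  # product-limit update
        curve.append((t, surv))
    return curve
```

Comparing such curves before and after a reform is one simple way to assess whether the duration distribution shifted.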
This paper examines intraday stock price effects and trading activity caused by ad hoc disclosures in Germany. The evidence suggests that stock prices react within 90 minutes of the ad hoc disclosures, while trading volumes take even longer to adjust. We find no evidence of abnormal price reactions or abnormal trading volume before announcements. The bigger the company making an ad hoc disclosure, the less severe is the abnormal price effect following the announcement. The number of analysts is negatively correlated with the trading volume effect before the ad hoc disclosure. The higher the trading volume on the last trading day before the announcement, the greater are both the price effect and the trading volume effect after the ad hoc disclosure. Keywords: ad hoc disclosure rules, intraday stock price adjustments, market efficiency.
We show that multi-bank loan pools improve the risk-return profile of banks' loan business. Banks write simple contracts on the proceeds from pooled loan portfolios, taking into account the free-rider problems in joint loan production. Thus, banks benefit greatly from diversifying credit risk while limiting the efficiency loss due to adverse incentives. We present calibration results showing that the formation of loan pools reduces the volatility of default rates (a proxy for credit risk) in participating banks' loan portfolios by roughly 70% in our sample. Under reasonable assumptions, the gain in return on equity (in certainty-equivalent terms) is around 20 basis points annually.
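The diversification effect behind loan pools can be illustrated with the textbook variance formula for an equally weighted average of correlated portfolios; the parameter values below are hypothetical and not the paper's calibration:

```python
import math

def pooled_default_volatility(sigma, n, rho=0.0):
    """Std. dev. of the default rate of an equally weighted pool of n
    bank portfolios, each with default-rate volatility sigma and
    pairwise correlation rho.

    Var(mean) = sigma^2 * (1/n + rho * (n - 1) / n)
    """
    variance = sigma ** 2 * (1.0 / n + rho * (n - 1.0) / n)
    return math.sqrt(variance)
```

With independent portfolios (rho = 0), cutting volatility by roughly 70% would require on the order of a dozen participating banks, since the volatility falls with 1/sqrt(n).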
Global reserves of coal, oil and natural gas are diminishing, while global energy requirements are increasing dramatically. Renewable energy sources lower the threat to the earth's climate but cannot meet the energy consumption of major urban areas. In the opinion of many experts, the future will be dominated by hydrogen. However, this gas is currently manufactured almost entirely from fossil fuels and is hence of limited abundance - not to mention the hazards involved in its utilisation. A novel energy concept involving solar, and thus carbon-independent, hydrogen-based technology necessitates an intermediate storage vehicle for renewable energy. This future energy carrier should be simple to manufacture, be available to an unlimited degree or at least be suitable for recycling, store and transport energy without hazards, exhibit a high energy density and release no carbon dioxide or other climatically detrimental substances. Silicon can function as a tailor-made intermediate linking decentrally operating renewable energy-generation technology with an equally decentrally organised hydrogen-based infrastructure at any location of choice. In contrast to oil and, in particular, hydrogen, the transport and storage of silicon are free from potential hazards and require a simple infrastructure similar to that needed for coal.
This paper compares the accuracy of the credit ratings of Moody's and Standard & Poor's. Based on 11,428 issuer ratings and 350 defaults in several datasets from 1999 to 2003, a slight advantage for the rating system of Moody's is detected. Compared to former research, the robustness of the results is increased by using nonparametric bootstrap approaches. Furthermore, robustness checks are made to control for the impact of Watchlist entries, the staleness of ratings and the effect of unsolicited ratings on the results.
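A paired nonparametric bootstrap of the kind mentioned here can be sketched generically as follows; the statistic, sample sizes and confidence level are illustrative, not the paper's setup:

```python
import random

def bootstrap_diff(scores_a, scores_b, stat, n_boot=1000, seed=0):
    """Paired nonparametric bootstrap for the difference of an accuracy
    statistic between two rating systems scored on the same issuers.

    Resamples issuer indices with replacement, recomputes the statistic
    for both systems on each resample, and returns an approximate
    95% percentile interval for the difference.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample issuers
        diffs.append(stat([scores_a[i] for i in idx]) -
                     stat([scores_b[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]
```

If the interval excludes zero, the advantage of one system over the other is robust under resampling.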
National borders in Europe have been opening since 1992 and the Union is expanding to embrace more countries, prompting enterprises to consider alternative and more attractive locations outside their home country to handle part of their activities (Van Dijk and Pellenbarg, 2000; Cantwell and Iammarino, 2002). International relocation is becoming more and more popular even for small and medium-sized firms that are involved in a growing internationalisation process, mirroring the path of multinational enterprises. Italy, like other industrialised countries, is experiencing a fragmentation of the production chain: firms tend to shift highly labour-intensive manufacturing activities to areas characterised by an abundance of low-cost labour (i.e. Central Eastern Europe, India, South East Asia, Latin America, Russia and Central Asia). The internationalisation process of Italian district SMEs has assumed significant dimensions. It has become a relevant topic in recent economic debate because of its consequences for the local context and, in particular, the implications for the survival of the Italian district model (see, among others, Becattini, 2002; Rullani, 1998 and Cor, 2000). The purpose of the paper is twofold: it aims (i) to identify the managerial approaches to the internationalisation process adopted by Italian district SMEs and by the Industrial District (ID) itself and (ii) to investigate whether international delocalisation to the South Eastern European countries (SEECs) constitutes a threat or an opportunity for the Italian district model. The paper is organised as follows. The general introduction is followed by a description of the evolution of internationalisation processes in Italy over the last three decades. Section three presents a discussion of the internationalisation strategies adopted by Italian SMEs. Section four focuses on the internationalisation process of Italian industrial district SMEs. 
A review of the studies on the subject is offered in section five. Section six presents a qualitative study of the internationalisation process undergone by sports shoe manufacturers in the Montebelluna district in north-east Italy. This study shows different managerial strategies toward the internationalisation process and emphasises that motivations can evolve over time, from originally cost-saving to increasingly market-oriented or global strategies. On the basis of a literature review, section seven investigates whether internationalisation constitutes a threat (i.e. loss of jobs and knowledge) or an opportunity (i.e. enlargement of the ID, updating the district's competitiveness) for the district model. Finally, some summarising remarks in section eight conclude the paper.
Over the past decade, a variety of studies have shown that sectors other than high technology industries can provide a basis for regional growth and income and employment opportunities. In addition, design-intensive, craft-based, creative industries which operate in frequently changing, fashion-oriented markets have established regional concentrations. Such industries focus on the production of products and services with a particular cultural and social content and frequently integrate new information technologies into their operations and outputs. Among these industries, the media and, more recently, multimedia industries have received particular attention (Brail/Gertler 1999; Egan/Saxenian 1999). In particular, the film (motion picture) and TV industries have been the focus of a number of studies (e.g. Storper/Christopherson 1987; Scott 1996). For the purpose of this paper, cultural products industries are defined as those industries which are involved in the commodification of culture, especially those operations that depend for their success on the commercialization of objects and services that transmit social and cultural messages (Scott 1996, p. 306). Empirical studies on the size, structure and organizational attributes of firms in media-related industry clusters have revealed a number of common characteristics (Scott 1996; Brail/Gertler 1999; Egan/Saxenian 1999). Most firms in these industries are fairly young, often existing for only a few years. They also tend to be small in terms of employment. Often, regional clusters of specialized industries are the product of a local growth process driven by innovative local start-ups. In their early stages, many firms have been established by teams of persons rather than by individual entrepreneurs and have relied heavily on owner capital. 
Another important feature which distinguishes these industries from others is that they concentrate in inner-city rather than suburban locations (Storper/Christopherson 1987; Eberts/Norcliffe 1998; Brail/Gertler 1999). In this study, I provide evidence that the Leipzig media industry shows tendencies and characteristics similar to those displayed by the multimedia and cultural products industry clusters in Los Angeles, San Francisco and Toronto, albeit at a much smaller scale. Cultural products industries are characterized by a strong tendency towards the formation of regional clusters despite the fact that in some sectors, such as the multimedia industry, technological opportunities (i.e. internet technologies) have seemingly reduced the necessity of proximity in operations between interlinked firms. In fact, it seems that regional concentration tendencies are even more dominant in cultural products industries than in many industries of the 'old economy'. Cultural products industries have formed particular regional clusters of suppliers, producers and customers which are interlinked within the same commodity chains (Scott 1996; Leslie/Reimer 1999). These clusters are characterized by a deep social division of labor between vertically-linked firms and patterns of interaction and cooperation in production and innovation. Within close networks of social relations and reflexive collective action, they have developed a strong tendency towards product- and process-related specialization (Storper 1997; Maskell/Malmberg 1999; Porter 2000). In the context of the rise of a new media industry cluster in Leipzig, Germany, I discuss in the next section of this paper those approaches which provide an understanding of complex industrial clustering processes, in which socio-institutional settings, inter-firm communication and interactive learning play a decisive role in generating regional innovation and growth. 
However, I will also emphasize that interfirm networks can have a negative impact on competitiveness if social relations and linkages are too close, too exclusive and too rigid. Leipzig's historical role as a location of media-related businesses will be presented in section 3. As part of this, I will argue for the need to view the present cluster of media firms as an independent phenomenon which is not a mere continuation of tradition. In section 4, the start-up and location processes are analyzed which have contributed to the rise of a new media industry cluster in Leipzig during the 1990s. Related to this, section 5 will discuss the role and variety of institutions which have developed in Leipzig and how they support specialization processes. This will be interpreted as a process of re-embedding into a local context. In section 6, I will discuss how media firms have become over-embedded due to their strong orientation towards regional markets. This will be followed by some brief conclusions regarding the growth potential of the Leipzig media industry.
There are few changes in the history of human existence comparable to urbanization in scope and potential to bring about biologic change. The transition in the developed world from an agricultural to an industrial-urban society has already produced substantial changes in human health, morphology and growth (Schell, Smith and Bilsborough, 1993, p.1). By the year 2000, about 50% of the world's total population will be living crowded in urban areas, and soon thereafter, by the year 2025, as the global urban population reaches the 5 billion mark, more of the world's population will be living in urban areas. This has enormous health consequences. By the close of the twenty-first century, more people will be packed into the urban areas of the developing world than are alive on the planet today (UNCHS (Habitat), 1996, p.xxi). Africa presents a particularly poignant example of the problems involved, as it has the fastest population and urban growth in the world as well as the lowest economic development and growth and many of the poorest countries, especially in Tropical Africa. Thus it exemplifies in stark reality many of the worst difficulties of urban health and ecology (Clarke, 1993, p.260). This essay therefore analyses the trends of urbanization in Africa. This is followed by an overview of the environmental conditions of Africa's towns and cities. The subsequent section explores the links between the urban environment and health. Although the focus is on physical hazards, it is important to note that the social milieu is also vital in the reproduction of health. The paper concludes by providing some policy recommendations.
Characterised as the mighty capital of the eurozone (Sassen 1999, 83), Frankfurt is said to be a rising world city primarily due to its financial centre. This is reflected in the use of such common catchphrases as Bankfurt and Mainhattan for the city, as well as in scientific publications. As Ronneberger and Keil (1995, 305) state, for instance, a service economy [...] mastered by the finance sector forms the basis for the continuing integration of Frankfurt into the international market. Frankfurt is the most important German as well as European financial centre. Thirteen of the 30 largest German banks and about two thirds of Germany's foreign banks are seated here. Frankfurt's stock exchange (ranked 4th in the world) is by far the biggest in Germany with a turnover share of more than 80%. Its derivatives exchange (Eurex) aims to become the biggest in the world. As the host city of the European Central Bank, it is also the centre of European monetary policy. As a major node in the global financial network today, Frankfurt's specific functions within this network will be investigated in this paper. Unlike most other predominant national financial centres, Frankfurt has not continuously held this position in Germany since the Middle Ages: it re-gained its position from Berlin only after World War II. In contrast to the static phenomenon of the financial centre, which is well covered in the literature, the emergence and development of financial centres is not as well understood. The study of the development of the financial centre Frankfurt after World War II gives insights into the dynamics of the self-reinforcing mechanisms within financial centres, the second topic covered in the paper. The paper is organised as follows: the remainder of this chapter looks at the method used in this study and the theory of financial centres, with an emphasis on the basic approaches to the emergence of financial centres. 
The paper then asks whether Frankfurt meets the basic requirements of the concept of path dependence, i.e. whether there are self-reinforcing mechanisms. After a positive answer to that question, chapter two discusses the development of Frankfurt as a financial centre as well as its role as a node in today's world (financial) system. Chapter three offers some more or less speculative remarks about Frankfurt's future; the last chapter briefly summarises the findings of the paper.
One of the most important but less well understood phenomena at the beginning of the 21st century has been a shift toward knowledge-based economic activity in the comparative advantage of modern industrialized countries. Two broad trends have been observed in the global economy: the output from the world's science and technology system has been growing rapidly, and the nature of investment has changed (MILLER, 1996). The relative proportions of physical and intangible investment have changed considerably, with a relative increase in intangible investments since the 1980s. In addition, there has been increased complementarity between physical and intangible investments and a more important role for high technology in both kinds of investment (MILLER, 1996). Even in the newly industrialized countries, the growth of technology-intensive industries, the increase of R&D activities and the growth of knowledge-intensive producer services have been common features in recent years. In this change of the structure of productive assets, knowledge has come to be recognized as the most fundamental resource (OECD, 1996; WORLD BANK, 1998). The development of information and communication technology (ICT) and the trend toward globalisation have promoted this shift toward a knowledge-based economy.
The globalisation of contemporary capitalism has at least two important implications for the emergence and significance of business services. First, the social division of labour steadily increases (ILLERIS 1996). Within the complex organisation of production and trade, new intermediate actors emerge either from the externalisation of existing functions in the course of corporate restructuring or from the fragmentation of the production chain into newly defined functions. Second, the competitive advantages of firms increasingly rest on their ability to innovate and learn. As global communication erodes knowledge advantages more quickly, product life cycles shorten and permanent organisational learning proves crucial for the creation and maintenance of competitiveness. Intra- and interorganisational relations of firms are now the key assets for learning and reflexivity (STORPER 1997). These two aspects of globalisation help to understand why management consulting - as only one among other knowledge-intensive business services (KIBS) - has experienced such a boost throughout the last two decades. Over the last ten years, the business has grown by 10% annually on average in Europe. Management consulting can be seen, first, as a new organisational intermediary and, second, as an agent of change and reflexivity for business organisations. Although the KIBS industry may not account for a great share of national GDP, its impact on national economies should not be underestimated. Estimates suggest that today up to 80% of the value added to industrial products stems from business services (ILLERIS 1996). Economic geographers have been paying more attention to KIBS since the late 1970s, focusing on the transformation of the spatial economy through the emerging business services. 
This market survey is conceived as a first step in a research programme on the internationalisation of management consulting and as a contribution to the lively debate in economic geography. The management consulting industry is unbounded in many ways: there are only scarce institutional boundaries, low barriers to entry, a very heterogeneous supply structure and multiple forms of transaction. Official statistics have not yet provided means of grasping this market, which may be why research and literature on this business are rather sparse. The following survey is an attempt to selectively compile existing material, empirical studies and statistics in order to draw a sketch of the European market, its institutional constraints, agents and dynamics. German examples will be employed to pursue arguments in more depth.
During the 1980s and early 1990s, the importance of small firm growth and industrial districts in Italy became the focus of a large number of regional development studies. According to this literature, successful industrial districts are characterized by intensive cooperation and market producer-user interaction between small and medium-sized, flexibly specialized firms (Piore and Sabel, 1984; Scott, 1988). In addition, specialized local labor markets develop which are complemented by a variety of supportive institutions and a tradition of collaboration based on trust relations (Amin and Robins, 1990; Amin and Thrift, 1995). It has also been emphasized that industrial districts are deeply embedded in the socio-institutional structures of their particular regions (Grabher, 1993). Many case studies have attempted to find evidence that the regional patterns identified in Italy reflect a general trend in industrial development rather than being historical exceptions. Silicon Valley, which is focused on high technology production, has been identified as one such production complex similar to those in Italy (see, for instance, Hayter, 1997). However, some remarkable differences do exist in the institutional context of this region, as well as in its particular social division of labor (Markusen, 1996). Even though critics, such as Amin and Robins (1990), emphasized quite early that the Italian experience could not easily be applied to other socio-cultural settings, many studies have classified other high technology regions in the U.S. as industrial districts, such as Boston's Route 128 area. Too much attention has been paid to the performance of small and medium-sized firms and the regional level of industrial production in the ill-fated debate regarding industrial districts (Martinelli and Schoenberger, 1991). Harrison (1997) has provided substantial evidence that large firms continue to dominate the global economy. 
This does not, however, imply that a de-territorialization of economic growth is necessarily taking place as globalization tendencies continue (Storper, 1997; Maskell and Malmberg, 1998). In the case of Boston, it has been misleading to define its regional economy as an industrial district. Neither have small and medium-sized firms been decisive in the development of the Route 128 area, nor has the region developed a tradition of close communication between vertically-disintegrated firms (Dorfman, 1983; Bathelt, 1991a). Saxenian (1994) found that Boston's economy contrasted sharply with that of an industrial district. Specifically, the region has been dominated by large, vertically-integrated high technology firms which rely on proprietary technologies and autarkic firm structures. Several studies have tried to compare the development of the Route 128 region to Silicon Valley. These studies have shown that both regions developed into major agglomerations of high technology industries in the post-World War II period. Due to their different traditions, structures and practices, Silicon Valley and Route 128 have followed divergent development paths which have resulted in different regional specializations (Dorfman, 1983; Saxenian, 1985; Kenney and von Burg, 1999). In the mid-1970s, both regions were almost equally important in terms of the size of their high technology sectors. Since then, however, Silicon Valley has become more important and now has the largest agglomeration of leading-edge technologies in the U.S. (Saxenian, 1994). Saxenian (1994) argues that the superior performance of high technology industries in Silicon Valley over those in Boston is based on different organizational patterns and manufacturing cultures which are embedded in the socio-institutional traditions particular to each region. Despite the fact that Saxenian (1994) has been criticized for basing her conclusions on weak empirical research (i.e. 
Harrison, 1997; Markusen, 1998), she offers a convincing explanation as to why the development paths of both regions have differed.1 Saxenian s (1994) study does not, however, identify which structures and processes have enabled both regions to overcome economic crises. In the case of the Boston economy, high technology industries have proven that they are capable of readjusting and rejuvenating their product and process structures in such a way that further innovation and growth is stimulated. This is also exemplified by the region s recent economic development. In the late 1980s, Boston experienced an economic decline when the minicomputer industry lost its competitive basis and defense expenditures were drastically reduced. The number of high technology manufacturing jobs decreased by more than 45,000 between 1987 and 1995. By the mid 1990s, however, the regional economy began to recover. The rapidly growing software sector compensated for some of the losses experienced in manufacturing. In this paper, I aim to identify the forces behind this economic recovery. I will investigate whether high technology firms have uncovered new ways to overcome the crisis and the extent to which they have given up their focus on self-reliance and autarkic structures. The empirical findings will also be discussed in the context of the recent debate about the importance of regional competence and collective learning (Storper, 1997; Maskell and Malmberg, 1998). There is a growing body of literature which suggests that some regional economies During the 1980s and early 1990s, the importance of small firm growth and industrial districts in Italy became the focus of a large number of regional development studies. According to this literature, successful industrial districts are characterized by intensive cooperation and market producer-user interaction between small and medium-sized, flexibly specialized firms (Piore and Sabel, 1984; Scott, 1988). 
In addition, specialized local labor markets develop which are complemented by a variety of supportive institutions and a tradition of collaboration based on trust relations (Amin and Robins, 1990; Amin and Thrift, 1995). It has also been emphasized that industrial districts are deeply embedded into the socio-institutional structures within their particular regions (Grabher, 1993). Many case studies have attempted to find evidence that the regional patterns identified in Italy are a reflection of a general trend in industrial development rather than just being historical exceptions. Silicon Valley, which is focused on high technology production, has been identified as being one such production complex similar to those in Italy (see, for instance, Hayter, 1997). However, some remarkable differences do exist in the institutional context of this region, as well as its particular social division of labor (Markusen, 1996). Even though critics, such as Amin and Robins (1990), emphasized quite early that the Italian experience could not easily be applied to other socio-cultural settings, many studies have classified other high technology regions in the U.S. as being industrial districts, such as Boston s Route 128 area. Too much attention has been paid to the performance of small and medium-sized firms and the regional level of industrial production in the ill-fated debate regarding industrial districts (Martinelli and Schoenberger, 1991). Harrison (1997) has provided substantial evidence that large firms continue to dominate the global economy. This does not, however, imply that a de-territorialization of economic growth is necessarily taking place as globalization tendencies continue (Storper, 1997; Maskell and Malmberg, 1998). In the case of Boston, it has been misleading to define its regional economy as being an industrial district. 
Neither have small and medium-sized firms been decisive in the development of the Route 128 area nor has the region developed a tradition of close communication between vertically-disintegrated firms (Dorfman, 1983; Bathelt, 1991a). Saxenian (1994) found that Boston s economy contrasted sharply with that of an industrial district. Specifically, the region has been dominated by large, vertically-integrated high technology firms which are reliant on proprietary technologies and autarkic firm structures. Several studies have tried to compare the development of the Route 128 region to Silicon Valley. These studies have shown that both regions developed into major 2 agglomerations of high technology industries in the post-World War II period. Due to their different traditions, structures and practices, Silicon Valley and Route 128 have followed divergent development paths which have resulted in a different regional specialization (Dorfman, 1983; Saxenian, 1985; Kenney and von Burg, 1999). In the mid 1970s, both regions were almost equally important in terms of the size of their high technology sectors. Since then, however, Silicon Valley has become more important and has now the largest agglomeration of leading-edge technologies in the U.S. (Saxenian, 1994). Saxenian (1994) argues that the superior performance of high technology industries in Silicon Valley over those in Boston is based on different organizational patterns and manufacturing cultures which are embedded in those socio-institutional traditions which are particular to each region. Despite the fact that Saxenian (1994) has been criticized for basing her conclusions on weak empirical research (i.e. Harrison, 1997; Markusen, 1998), she offers a convincing explanation as to why the development paths of both regions have differed.1 Saxenian s (1994) study does not, however, identify which structures and processes have enabled both regions to overcome economic crises. 
In the case of the Boston economy, high technology industries have proven that they are capable of readjusting and rejuvenating their product and process structures in such a way that further innovation and growth is stimulated. This is also exemplified by the region's recent economic development. In the late 1980s, Boston experienced an economic decline when the minicomputer industry lost its competitive basis and defense expenditures were drastically reduced. The number of high technology manufacturing jobs decreased by more than 45,000 between 1987 and 1995. By the mid 1990s, however, the regional economy began to recover. The rapidly growing software sector compensated for some of the losses experienced in manufacturing. In this paper, I aim to identify the forces behind this economic recovery. I will investigate whether high technology firms have uncovered new ways to overcome the crisis and the extent to which they have given up their focus on self-reliance and autarkic structures. The empirical findings will also be discussed in the context of the recent debate about the importance of regional competence and collective learning (Storper, 1997; Maskell and Malmberg, 1998). There is a growing body of literature which suggests that some regional economies can develop into learning economies which are based on intra-regional production linkages, interactive technological learning processes, flexibility and proximity (Storper, 1992; Lundvall and Johnson, 1994; Gregersen and Johnson, 1997). In the next section of this paper, I will discuss some of the theoretical issues regarding localized learning processes, learning economies and learning regions (see, also, Bathelt, 1999). I will then describe the methodology used. What follows is a brief overview of how Boston's economy has specialized in high technology production. The main part of the paper will then focus on recent trends in Boston's high technology industries. 
It will be shown that the high technology economy consists of different subsectors which are not tied to a single technological development path. The various subsectors are, at least partially, dependent on different forces and unrelated processes. There is, however, tentative evidence which suggests that cooperative behavior and collective learning in supplier-producer-user relations have become important factors in securing the reproduction of the regional structure. The importance of these trends will be discussed in the conclusions.
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The paper undertakes a comparative study of the set of laws affecting corporate governance in the United States and Germany, and an evaluation of their design if one assumes that their objective were the protection of the interests of minority outside shareholders. The rationale for such an objective is reviewed, in terms of agency cost theory, and then the institutions that serve to bound agency costs are examined and critiqued. In particular, there is discussion of the applicable legal rules in each country, the role of the board of directors, the functioning of the market for corporate control, and (briefly) the use of incentive compensation. The paper concludes with the authors' views on what taking shareholder protection seriously, in each country's legal system, would require.
This memorandum describes the approach of the U.S. Securities and Exchange Commission (the "SEC") in monitoring and, where appropriate, regulating the use of research reports by investment banking firms in connection with securities transactions. The memorandum addresses the historical system of regulation, which continues in large measure to apply. It also examines the new initiatives taken, following a number of prominent corporate, accounting and banking scandals and a significant decline in U.S. and international capital markets, to supplement the current system in what some have dubbed the "post-Enron era".
Recent empirical work shows that a better legal environment leads to lower expected rates of return in an international cross-section of countries. This paper investigates whether differences in firm-specific corporate governance also help to explain expected returns in a cross-section of firms within a single jurisdiction. Constructing a corporate governance rating (CGR) for German firms, we document a positive relationship between the CGR and firm value. In addition, there is strong evidence that expected returns are negatively correlated with the CGR, if dividend yields and price-earnings ratios are used as proxies for the cost of capital. Most results are robust to endogeneity, with causation running from corporate governance practices to firm fundamentals. Finally, an investment strategy that bought high-CGR firms and shorted low-CGR firms would have earned abnormal returns of around 12 percent on an annual basis during the sample period. We rationalize the empirical evidence with lower agency costs and/or the removal of certain governance malfunctions for the high-CGR firms.
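The long-short portfolio described in this abstract can be illustrated with a minimal sketch. All firm names, ratings, and returns below are invented for illustration; the paper's actual CGR construction and abnormal-return estimation are more involved.

```python
# Sketch of a zero-cost governance strategy: go long the highest-rated
# firms and short the lowest-rated ones, equal-weighted within each leg.
# The data are hypothetical, not taken from the paper.

def long_short_spread(ratings, returns, quantile=0.3):
    """Return of a long-top / short-bottom portfolio sorted by rating."""
    ranked = sorted(ratings, key=ratings.get)   # firms, ascending by CGR
    k = max(1, int(len(ranked) * quantile))     # firms per leg
    low, high = ranked[:k], ranked[-k:]
    long_leg = sum(returns[f] for f in high) / len(high)
    short_leg = sum(returns[f] for f in low) / len(low)
    return long_leg - short_leg

# Toy example: five firms with CGR scores and annual returns
cgr = {"A": 9, "B": 7, "C": 5, "D": 3, "E": 1}
ret = {"A": 0.14, "B": 0.10, "C": 0.06, "D": 0.04, "E": 0.02}
spread = long_short_spread(cgr, ret)  # long A, short E: 0.14 - 0.02 = 0.12
```

In this toy setup the spread happens to equal the roughly 12 percent annual abnormal return the abstract reports; in the paper that figure comes from a proper risk-adjusted backtest over the sample period.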
The corporate convergence debate is usually presented in terms of competing efficiency and political claims. Convergence optimists assert that an economic logic will promote convergence on the most efficient form of economic organization, usually taken to be the public corporation governed under rules designed to maximize shareholder value. Convergence skeptics counterclaim that organizational diversity is possible, even probable, because of path dependent development of institutional complementarities whose abandonment is likely to be inefficient. The skeptics also assert that existing elites will use their political and economic advantages to block reform; the optimists counterclaim that the spread of shareholding will reshape politics.
The venture capital market and firms whose creation and early stages were financed by venture capital are among the crown jewels of the American economy. Beyond representing an important engine of macroeconomic growth and job creation, these firms have been a major force in commercializing cutting edge science, whether through their impact on existing industries, as with the radical changes in pharmaceuticals catalyzed by venture-backed firms' commercialization of biotechnology, or by their role in developing entirely new industries, as with the emergence of the internet and world wide web. The venture capital market thus provides a unique link between finance and innovation, providing start-up and early stage firms - organizational forms particularly well suited to innovation - with capital market access that is tailored to the special task of financing these high risk, high return activities.
This article presents a structural overview of corporate disclosure in Germany against the background of a rapidly evolving European market. Professor Baums first makes the theoretical case for mandatory disclosure and outlines the standard, regulatory elements of market transparency. He then turns to German law and illustrates both how it attempts to meet the principal theoretical demands of disclosure and how it should be improved. The article also presents in some detail the actual channels of corporate disclosure used in Germany and the manner in which German law now fits into the overall development of the broader, European Community scheme, as well as the contemplated changes and improvements both at the national and the supranational level.
The paper was submitted to the conference on company law reform at the University of Cambridge, July 4th, 2002. Since the introduction of corporation laws in the individual German states during the first half of the 19th century, Germany has repeatedly amended and reformed its company law. Such reforms and amendments were prompted in part by stock exchange fraud and the collapse of large corporations, but also by a routine adjustment of law to changing commercial and societal conditions. During the last ten years, a series of significant changes to German company law led one commentator to speak of a "company law in permanent reform". Two years ago, the German Federal Chancellor established a Regierungskommission Corporate Governance ("Government Commission on Corporate Governance") and instructed it to examine the German corporate governance system and German company law as a whole, and formulate recommendations for reform.
On April 24, 2001 the European Commission presented a proposal for a Directive introducing supplementary supervision of financial conglomerates (the Proposed Directive). The Proposed Directive requires a closer coordination among supervisory authorities of different sectors of the financial industry and leads to changes in a number of existing Directives relating to the supervision of credit institutions, insurance undertakings and investment firms.
It is an established policy in the United States to separate commercial banking (the business of taking deposits and making commercial loans) from other commercial activities. The separation of banking and commercial activities is achieved by federal and state banking laws, which enumerate the powers that banks may exercise, the activities that banks may engage in, and the investments that banks may lawfully make, and expressly exclude banks from certain activities or relationships. Some of these provisions could be circumvented if a nonbank company could carry on banking activities through a banking subsidiary and nonbanking activities either itself or through a nonbanking subsidiary.
The institutionalization and internationalization of shareholdings, the globalization of capital markets and the rapid development of information technologies have placed our corporate law system under increasing pressure to adapt to the ever changing requirements of the market. For this reason, in May 2000, the German government called together a group of industrialists, representatives of shareholder associations and institutional investors, trade unionists, politicians and scholars to form an expert Panel with the task of reviewing the German corporate governance system. This Government Panel on Corporate Governance prepared a questionnaire on key issues in the field, and solicited responses and input from numerous national and international experts and institutions. In July 2001, the Panel presented its 320 page report (available at www.ottoschmidt.de/corporate_governance.htm) to the German Chancellor. The Report made nearly 150 recommendations for amendments or changes to existing provisions of German law and also set forth proposals on how the German corporate governance system should be further developed in order to maintain a normative framework that is suitable and attractive not only for companies, but also for domestic and foreign investors. In order that the Panel's proposals may receive careful consideration from a diverse audience, it seems very useful to keep a wider public informed of the Panel's recommendations. Therefore, also on behalf of the Panel, I very much appreciate that the international law firm Shearman & Sterling has taken the initiative to have the summary of the Panel's recommendations translated into English.
The road to shareowner power
(1999)
A dramatic rise in shareowner power and improvements in corporate governance can be achieved in the next few years by expanding the role of proxy advisory firms. This will require changing the way such firms are paid. They are now paid directly by investors who buy their advice; but this arrangement suffers from a free-rider problem. Instead, they should be paid by each corporation about which they are advising, in accordance with a shareholder vote so as to preclude management influence. This arrangement would make it economically feasible for advisory firms to expand their services, becoming proactive like relational investors. Any proxy advisor other than the market leader stands to gain tremendously by initiating this new system. It would eliminate the natural monopoly feature of the current system, and spread the cost more equitably across all shareowners. It would also enable proxy advisory firms to market their services to individual investors via the internet.
Shareholder voting is back on the agenda of public debate for several reasons. One is the investors’ internationalization of capital investments and the raising of funds globally by companies. It can be predicted that, as capital markets grow together, the trend toward international investments will increase, not least because the introduction of the Euro will create a uniform European stock market. This leads to the question of how the law deals with this development and its problems. The EU Commission has commissioned a comparative study dealing, inter alia, with shareholders’ representation at general meetings in the EU member states. The aim is to simplify the operating regulations for public limited companies in the EU. Furthermore, the internationalization of shareholdings leads the investors to ask how their interests are protected abroad. Are the mechanisms of shareholder protection sufficient for foreign investors? In particular the formation of transnational companies like Daimler-Chrysler will change corporate governance systems. It remains to be seen whether and how foreign institutional investors will use measures of - in this case - German corporate law to control the management. From a microeconomic point of view the question is what specific features of a given corporate governance system might contribute to better performance of firms. The following remarks will, however, be confined to one specific aspect of corporate governance only, the exercise of shareholders’ voting rights at the general meeting.
I analyze the most powerful shareholders in Germany to illustrate the concentration of control over listed corporations. Compared to other developed economies, the German stock market is dominated by large shareholders. I show that 77% of the median firm’s voting rights are controlled by large blockholders. This corresponds to 47% of the market value of all firms listed in Germany’s official markets. About two thirds of this amount is controlled by banks, industrial firms, holdings, and insurance companies. I show that due to current legislation, for neither group is it clear who ultimately exerts control over the shareholding firm itself. For the remaining blockholders, only blocks controlled by voting pools and individuals can be traced back to the highest level of ownership. In the aggregate, both groups control only 5.6% of all reported blocks. The German government controls 8%, and it is not clear who ultimately is responsible for the consequences of decisions.
We first analyze legal provisions relating to corporate transparency in Germany. We show that despite the new securities trading law (WpHG) of 1995, the practical efficacy of disclosure regulation is very low. On the one hand, the formation of business groups involving less regulated legal forms as intermediate layers can substantially reduce transparency. On the other hand, the implementation of the law is not practical and not very effective. We illustrate these arguments using several examples of WpHG filings. To illustrate the importance of transparency, we show next that German capital markets are dominated by few large firms accounting for most of the market’s capitalization and trading volume. Moreover, the concentration of control is very high. First, 85% of all officially listed AGs have a dominant shareholder (controlling more than 25% of the voting rights). Second, few large blockholders control several deciding voting blocks in listed corporations, while the majority controls only one block.
The article describes the legal structure of the Daimler-Chrysler merger. It asks why this specific structure rather than another cheaper way was chosen. This leads to the more general question of the pros and cons of mandatory corporate law as a regulatory device. The article advocates an "optional" approach: The legislator should offer various menus or sets of binding rules among which the parties may choose. (JEL: ...)
The previous proposal for a company law directive on takeovers in 1990 was rejected in Germany almost unanimously for several different reasons. The new "slimmed down" draft proposal, in the light of the subsidiarity principle, takes the different approaches to investor protection in the various member states better into account. Notably, the most controversial principle of the previous draft, viz. the mandatory bid rule as the only means of investor protection in case of a change of control, has been given up. Therefore a much higher degree of acceptance seems likely. The Bundesrat (upper house) and the industry associations have already expressed their consent; the Bundestag (Federal Parliament) will deal with the proposal shortly. The technique of a "frame directive" leaves ample leeway for the member states. That will shift the discussion back to the national level and there will lead to the question as to how to make use of this leeway (cf. II, III, below) rather than to a debate about principles as in the past. It seems likely that criticism will confine itself to more technical questions (cf. IV, below).
The corporate governance systems in Europe differ markedly. Economists tend to use stylized models and distinguish between the Anglo-American, the German and the Latinist model. In this view, for instance, the Austrian, Dutch, German, and Swiss systems are said to be variations of one model. For lawyers the picture is, of course, much more detailed, as particular rules may vary even where common principles prevail. Many comparative studies on these differences have meanwhile been undertaken. I do not want to add another study but to treat a different question. Are there, as a consequence of growing internationalization, globalization of markets and technological change, also tendencies of convergence of our corporate governance systems? My answer will be in two parts. As corporate governance systems are traditionally mainly shaped by legislation, the first part will analyze the influence of the economic and technological change on the rule-setting process itself. How does this process react to the fundamental environmental change? That includes a short analysis of the solution of centralized harmonizing of company law within the EU as well as the question of whether EU-wide competition between national corporate law legislators can be observed or be expected in the future. The second part will then turn to the national level. It deals with actual tendencies of convergence or, more correctly, of approach by the German corporate governance system to the Anglo-American one.
Universal banking means that banks are permitted to offer all of the various kinds of financial services. This includes classical banking activities like the credit and deposit business, as well as investment services, placement and brokerage of securities, and even insurance activities, trading in real estate and others. German universal banks also hold stock in nonfinancial firms and offer to vote their clients' shares in other firms. This paper deals with universal banks and their role in the investment business, more specifically, their links with investment companies and their various roles as shareholders and providers of financial services to such companies. Banks and investment companies have, as financial intermediaries, one trait in common: they both transform capital of investors (depositors and shareholders of investment funds, respectively) into funds (loans and equity or debt securities, respectively) that are channeled to other firms. So why should a regulation forbid combining these transformation tasks in one institution or group, and why should the law not allow banks to establish investment companies and provide all kinds of financial services to them in addition to their banking services? German banking and investment company law have answered these questions in the affirmative. This paper argues that the existing regulation is not a sound and recommendable one. The paper is organized as follows: Sections II - V identify four areas where the combination of banking and investment might either harm the shareholders of the investment funds and/or negatively affect other constituencies such as the shareholders of the banking institution. These sections will at the same time explore whether there are institutional or regulatory provisions in place or market forces at work that adequately protect investors and the other constituencies in question. Concluding remarks follow (VI.).
For the German observer, the idea of a company repurchasing its own shares seems to resemble the picture of a snake eating its own tail. It appears to be highly unnatural, and one wonders how the tail can possibly be edible for the snake. Not in the United States. Although repurchases were once subject to the most stubbornly fought conflict in US company law, only some modest disclosure requirements and safeguards against overt market manipulation exist today. Large repurchases are an almost everyday event, and the tendency is increasing. The aggregate value of shares repurchased by NYSE-listed companies increased from $1.1 billion in 1975 to $6.3 billion in 1982 to $37.1 billion in 1985. A few examples may illustrate this practice further: Within three years, Ford Motor Corp. repurchased 30 million shares for $1.2 billion. In 1985, Phillips Petroleum Corp. was faced with two hostile bids and took several defensive steps, one of which was to tender for 20 million of its own shares at a total cost of $1 billion. And by the end of 1988, Exxon Corp. had retired 28 percent of its once-outstanding shares at an aggregate cost of $14.5 billion. The situation in Germany is completely different. As will be shown, under German law repurchases are severely restricted and do not take place to any appreciable extent. In contrast to German law, the United Kingdom does not prohibit repurchases but requires companies to comply with rules so complex that US companies would regard them simply as limiting their economic freedom. Therefore, UK companies, too, very seldom repurchase their own shares. This paper deals with repurchases by quoted companies, in particular the UK public company and the more or less equivalent German Aktiengesellschaft (AG). It seeks to ascertain the reasons why companies might want to engage in those activities. 
Moreover, it tries to analyse the problems which may arise from repurchases and the safeguards which the UK and German legal systems provide for these problems.
Until the late 1980s, asset securitisation was a US-American finance technique. Meanwhile this technique has also been used in some European countries, although to a much lesser extent. While some of them have adopted or developed their legal and regulatory framework, others remain at earlier stages. That may be because of the lack of economic incentives, but also because of remaining regulatory or legal impediments. The following overview deals with the legal and regulatory environment in five selected European countries. It is structured as follows: First, this finance technique will be described in outline to the benefit of the reader who might not be familiar with it. A further part will report the recent development and the underlying economic reasons that drive this development. The main part will then deal with international aspects and give an overview of some legal and regulatory issues in five European jurisdictions. Tax and accounting questions are, however, excluded. Concluding remarks follow.
The following descriptive overview of the German corporate governance system and the current debate is structured as follows. Part II will give some information on the empirical background. Part III will describe the formal legal setting as well as actual practices in some key areas. Part IV will then deal with some issues of the current debate.