Taxation and tax policy reform appears on the political agenda in most advanced welfare states in Europe and North America. Of course, studies of taxation and tax policy are nothing new and have existed ever since people have paid taxes. The current work is situated in the context of the future of the welfare state and the reinforced international economic and political integration referred to as "globalization." The purpose of this paper is to analyze how globalization is affecting tax policy in advanced welfare states. In comparing the evolution of tax policy in Canada with that in the United States, Germany and Sweden from 1960 to 1995, I will try to review the conventional antiglobalization thesis, i.e., that globalization leads to a "race to the bottom" in revenue and expenditure policies, or, as others have called it, a "beggar the neighbour policy" (Tanzi and Bovenberg 1990, 187). ...

Conclusion: The empirical data and theoretical models clearly show that globalization is one relatively minor factor among many that explain tax policy reforms. And even that limited influence is mediated by domestic political systems, institutions and constellations of actors. As the data have shown, the conventional globalization thesis of a race to the bottom is not borne out. Tax rates and tax revenues are still increasing, despite the ongoing trend toward international trade integration. Countervailing pressures such as the high cost of welfare programs, different parties in government, strong labour unions, and institutional veto players counteract the pressure of globalization on tax policy. As for the future of taxation in Canada, it is more likely to be one of gradual evolution than radical change. Although the data do not show any downward pressure on tax rates and tax revenues, comparatively speaking, there are at least four key factors in Canada that are likely to put pressure on future tax rates, although regional political dynamics and the workings of fiscal federalism suggest that tax reductions will be a higher priority in some provinces than in others (Hale 2002). First, neoliberalism will continue to shape fiscal and tax policy, including the role of the tax system in delivering social policies and programs, in most parts of Canada. Second, governments that seek to define their own economic and social priorities rather than simply react to events beyond their borders will have to exercise centralized control over budgetary policies and spending levels if they hope to foster the economic growth needed to finance social services in the context of Canada's changing demographics. Third, the ability of governments to promote economic growth and higher living standards will be closely linked to their ability to develop a workable division of responsibilities among federal and provincial governments and with other national governments. Finally, the diffusion of new technologies will continue to transform national and regional economies while giving individuals greater opportunity to avoid government and tax regulations that run contrary to their perceived interests and values. This discussion of the determinants that shape tax policy reform has shown that successful management of fiscal and tax policy requires a capacity to set priorities; adapt to changing circumstances; and build a consensus that enables competing economic, social, regional and ideological interests to identify their own well-being in the broader political and economic environment.
Tax policy is shaped by many political, economic and social determinants. As Geoffrey Hale correctly concludes, "it should not be surprising if the tax system stubbornly refuses to confirm either economic theories or political ideologies, but reflects past decisions and the policy tradeoffs of the political process" (2002, 71). The notion of tax policy being driven by globalization and forces associated with globalization (both positive and negative) is simply not borne out by the facts.
Despite a legal framework being in place for several years, the market share of qualified electronic signatures is disappointingly low. Mobile Signatures provide a new and promising opportunity for the deployment of an infrastructure for qualified electronic signatures. We argue that SIM-based signatures are the most secure and convenient solution. However, using the SIM card as a secure signature creation device (SSCD) raises new challenges, because it would contain the user's private key as well as the subscriber identification. Combining both functions in one card raises the question of who will have control over the keys and certificates. We propose a protocol called Certification on Demand (COD) that separates certification services from subscriber identification information and allows consumers to choose appropriate certification services and service providers based on their needs. This infrastructure could be used to enable secure mobile brokerage services that eliminate the need for TAN lists and therefore allow a better integration of information and transaction services.
Chemically modified bases are frequently used to stabilize nucleic acids, to study the driving forces for nucleic acid structure formation and to tune DNA and RNA hybridization conditions. In particular, fluorobenzene and fluorobenzimidazole base analogues can act as universal bases able to pair with any natural base and to stabilize RNA duplex formation. Although these base analogues are compatible with an A-form RNA geometry, little is known about their influence on the fine structure and conformational dynamics of RNA. In the present study, nanosecond molecular dynamics (MD) simulations have been performed to characterize the dynamics of RNA duplexes containing either a central 1'-deoxy-1'-(2,4-difluorophenyl)-β-D-ribofuranose base pair or this analogue placed opposite an adenine base. For comparison, RNA with a central uridine:adenine pair and with a 1'-deoxy-1'-(phenyl)-β-D-ribofuranose opposite an adenine was also investigated. The MD simulations indicate a stable overall A-form geometry for the RNAs with base analogues. However, the presence of the base analogues caused a locally enhanced mobility of the central bases, inducing mainly base pair shear and opening motions. No stable 'base-paired' geometry was found for the base analogue pair or the base analogue:adenine pairs, which explains in part the universal base character of these analogues. Instead, the conformational fluctuations of the base analogues lead to an enhanced accessibility of the bases in the major and minor grooves of the helix compared with a regular base pair.
RDF is widely used to catalogue the chaos of data across the internet. But these descriptions must be stored, evaluated, analyzed and verified. This creates the need for an environment that realizes these aspects and strengthens RDF's influence. InterSystems' post-relational database Caché offers many features that are similar to RDF and provides persistence with a semantic component. Some models for relational databases exist, but these lack features like object-oriented data structures and multidimensional variables. The aim of this thesis is to develop an RDF model for Caché that stores RDF data in an object-oriented form. Furthermore, an interface for importing RDF data will be presented and implemented.
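To make the idea of storing RDF triples in object-oriented form concrete, the following sketch uses Python and the rdflib library rather than Caché ObjectScript; the Statement class, the import_rdf helper and the sample graph are hypothetical illustrations and are not taken from the thesis.

```python
# Illustrative sketch only: a minimal object-oriented representation of RDF
# triples, parsed with rdflib. Names (Statement, import_rdf) are hypothetical.
from dataclasses import dataclass
from rdflib import Graph

@dataclass
class Statement:
    """One RDF triple stored as an object with subject, predicate and object."""
    subject: str
    predicate: str
    object: str

def import_rdf(data: str, fmt: str = "turtle") -> list[Statement]:
    """Parse serialized RDF data and return it as a list of Statement objects."""
    g = Graph()
    g.parse(data=data, format=fmt)
    return [Statement(str(s), str(p), str(o)) for s, p, o in g]

if __name__ == "__main__":
    sample = """
    @prefix ex: <http://example.org/> .
    ex:thesis ex:topic "RDF storage in an object database" .
    """
    for st in import_rdf(sample):
        print(st.subject, st.predicate, st.object)
```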
Background: Cancer gene therapy will benefit from vectors that are able to replicate in tumor tissue and cause a bystander effect. Replication-competent murine leukemia virus (MLV) has been described to have potential as a cancer therapeutic; however, MLV infection does not cause a cytopathic effect in the infected cell, and viral replication can only be studied by immunostaining or measurement of reverse transcriptase activity. Results: We inserted the coding sequence for green fluorescent protein (GFP) into the proline-rich region (PRR) of the ecotropic envelope protein (Env) and were able to fluorescently label MLV. This allowed us to directly monitor viral replication and attachment to target cells by flow cytometry. We used this method to study viral replication of recombinant MLVs and split viral genomes, which were generated by replacing the MLV env gene with the red fluorescent protein (RFP) gene and separately cloning GFP-Env into a retroviral vector. Co-transfection of both plasmids into target cells resulted in the generation of semi-replicative vectors, and the two-color labeling allowed us to determine the distribution of the individual genomes in the target cells and was indicative of recombination events. Conclusions: Fluorescently labeled MLVs are excellent tools for the study of factors that influence viral replication and can be used to optimize MLV-based replication-competent viruses or vectors for gene therapy.
Left dislocation in Zulu
(2004)
This paper examines left dislocation constructions in Zulu, a Southern Bantu language belonging to the Nguni group (Zone S 40). In Zulu left dislocation configurations, a topic phrase at the beginning of the sentence is linked to a resumptive element within the associated clause. Typically, the resumptive element is an incorporated pronoun (cf. Bresnan & Mchombo 1987), as illustrated by the examples in (1) and (2). In these examples, the object pronoun (in italics) is part of the verbal morphology and agrees with the noun class (gender) of the dislocate. This situation is schematically illustrated in (3), where co-indexation represents agreement: ...
Multiplayer games have become very popular in the PC market. Almost none of the current games are shipped without some support for multiplayer gaming. At the same time, mobile devices are becoming more powerful, and the popularity of games on these platforms increases. However, there are almost no games that support multiplayer gaming, despite the many options these devices have to connect with each other and build mobile ad hoc networks. Reasons for this lack of multiplayer support are the high diversity of mobile devices as well as the different protocols, with differing properties, that these devices support. With "SmartBlaster" we developed a multiplayer game for several different platforms that uses several different channels (Bluetooth, IrDA, 802.11 and other networks supporting TCP/IP) to communicate between devices.
In the present study, possible sources and pathways of the gasoline additive methyl tertiary-butyl ether (MTBE) in the aquatic environment in Germany were investigated. The objective of the present study was to clarify some of the questions raised by a previous study on the MTBE situation in Germany. In the USA and Europe, 12 million t and 3 million t of MTBE, respectively, are used as a gasoline additive. The detection of MTBE in the aquatic environment and the potential risk for drinking water resources led to a phase-out of MTBE as a gasoline additive in individual states of the USA. Meanwhile, there is also an ongoing discussion about the substitution of MTBE in Europe and Germany. The annual usage of MTBE in Germany is about 600,000 t. However, compared to the USA, significantly less data exist on the occurrence of MTBE in the aquatic environment in Europe. Because of its physico-chemical properties, MTBE readily vaporizes from gasoline, is water soluble, adsorbs only weakly to the subsurface matrix and is largely resistant to biological degradation. The toxicity of MTBE has not yet been completely investigated, but MTBE in drinking water has low taste and odor thresholds of 20-40 microgram/L. The present study was conducted by collecting water samples and analyzing them for their MTBE concentrations through a combination of headspace solid-phase microextraction (HS-SPME) and gas chromatography-mass spectrometry (GC-MS). The detection limit was 10 ng/L. The method was successfully tested in the framework of an interlaboratory study and showed recoveries of reference values of 89% (74 ng/L) and 104% (256 ng/L). The relative standard deviations were 12% and 6%. The investigation of 83 water samples from 50 community water systems (CWSs) in Germany revealed a detection frequency of 40% and a concentration range of 17-712 ng/L. The detection of MTBE in the drinking water samples could be explained by groundwater pollution and by the pathway river - riverbank filtration - waterworks. Rivers are important drinking water sources. MTBE is emitted into rivers through a variety of sources. In the present study, potential point sources were investigated, i.e. MTBE production sites/refineries/tank farms and groundwater contaminations. For this purpose, the spatial distribution of MTBE in three German rivers with the named potential emission sources located close to the rivers was investigated by analyzing 49 corresponding river water samples. The influence of the potential emission sources groundwater contamination and refinery/tank farm was successfully demonstrated in certain parts of the River Saale and the River Rhine. Increasing MTBE concentrations from 24 ng/L to 379 ng/L and from 73 ng/L to 5 microgram/L, respectively, could be observed in the stretches investigated in these two rivers. The identification of such emission sources is important for future modeling. Further sources of MTBE emission into surface water are industrial (non-petrochemical) and municipal sewage plant effluents. In the present study, long-term monitoring of water from the River Main (n=67 samples), precipitation (n=89) and industrial (n=34) and municipal sewage plant effluents (n=66) was conducted. The comparison of the data sets revealed that maximum MTBE concentrations in the River Main of up to 1 microgram/L were most probably due to single industrial effluents with MTBE concentrations of up to 28 microgram/L (measured in this study).
The average MTBE content of 66 ng/L in the River Main most probably originated from municipal sewage plant effluents and further industrial effluents. Background concentrations of <30 ng/L could be related to the direct atmospheric input via precipitation. A particular aspect of the atmospheric MTBE input is the input of MTBE into river water or groundwater through snow. In the present study, 43 snow samples from 13 different locations were analyzed for their MTBE content. MTBE could be detected in 65% of the urban and rural samples. The concentrations ranged from 11 to 613 ng/L and were higher than the concentrations in rainwater samples analyzed previously. Furthermore, a temperature dependency and wash-out effects could be observed. The atmospheric input of MTBE was in part also visible in the analyzed groundwater samples (n=170). The detection frequencies in non-urban and urban wells were 24% and 63%, respectively. The median concentrations were 177 ng/L and 57 ng/L. In wells located in the vicinity of sites with gasoline-contaminated groundwater, MTBE concentrations of up to 42 mg/L could be observed. The MTBE emission sources and the different pathways of MTBE in the aquatic environment demonstrated in the present study and other works raise the question of whether the use of MTBE in a bulk product like gasoline should be continued in the future. Currently, possible substitutes like ethyl tertiary-butyl ether (ETBE) or ethanol are being discussed.
This dissertation investigates developments in the performance of J. S. Bach's music in the second half of the 20th century, as reflected in recordings of the Mass in B Minor, BWV 232. It places particular emphasis on issues relating to concepts of expression through performance. Between the 1950s and the 1980s, most Bach performers shared a partial consensus as to what constitutes expression in performance (e.g., intense sound; wide dynamic range; rubato). Arguments against the application of such techniques to Bach's works were often linked with the view that his music is more "objective" than later repertoires; or, alternatively, that expressive elements in Bach's music are self-sufficient, and should not be intensified in performance. Historically-informed performance (HIP), from the late 1960s onwards, has been characterised by greater attention to the inflection of local details (i.e., individual figures and motifs). In terms of expressive intensity, this led to contradictory results. On the one hand, several HIP performances were characterised by a narrow overall dynamic range, light textures, fast tempi and few contrasts; these performances were often considered lightweight. On the other hand, HIP also promoted renewed interest in the practical application of Baroque theories of musical rhetoric, inspiring performances which projected varied intensity within movements. More recently, traditional means of expression have enjoyed renewed prominence. Ostensibly "romantic" features such as broad legati, long-range crescendi and diminuendi, and organic shaping of movements as wholes have been increasingly adopted by HIP musicians. In order to substantiate the narrative outlined above, the significance of the evidence preserved in sound recordings had to be checked against other sources of information. This dissertation is divided into two main parts. The first part focuses on specific "schools" of prominent Bach performers. Complete recordings of the Mass are examined in relation to the biographical and intellectual backgrounds of the main representatives of these schools, their verbally-expressed views on Bach's music and on their own role as performers, and their style as documented in recordings of other works. The second part examines the performance history of specific movements within the Mass, comparing the interpretations preserved in sound recordings with relevant verbal analyses and commentaries. The dissertation as a whole therefore combines the resources of reception and performance studies. Beyond its specific historical conclusions concerning Bach performance in the post-war era, it also provides specific insights into Bach's music, its meaning and its role in contemporary culture.
Calcium-activated potassium channels are fundamental regulators of neuron excitability. SK channels are activated by an intracellular increase of Ca2+ (such as occurs during an action potential). They have a small single-channel conductance (less than 20 pS) and show no voltage dependence of activation. To date, there are only a few examples of high-resolution structures of eukaryotic membrane proteins. All of them were purified from natural sources. Since no abundant natural sources of eukaryotic K+ channels are available, we overexpressed rSK2 in order to produce the quantities necessary for structural analysis. Unfortunately, the Pichia pastoris expression system did not yield a sufficient amount of pure protein, mainly because most of the protein was retained in the ER and was only partially soluble. Subsequently, two constructs were expressed: SK2-FCYENE (containing a specific sequence that promotes surface expression), and SK2-q-CaM, a concatemer of SK2 and calmodulin. Although these proved an improvement in terms of solubilisation, little improvement was found in terms of the amounts of purified material obtained. For this reason, we tested the Semliki Forest virus expression system, since the protein is expressed in a mammalian system, where we hoped that it would be trafficked in the same way as in vivo. Using this system it was possible to express rSK2, to solubilise it with several detergents and to achieve much better purification. However, the levels were still not sufficient for high-resolution structural studies, although they were sufficient for single-particle electron microscopy analysis.
This paper evaluates the effects of job creation schemes on the participating individuals in Germany. Since previous empirical studies of these measures have been based on relatively small datasets and focussed on East Germany, this is the first study that allows policy-relevant conclusions to be drawn. The very informative and exhaustive dataset at hand not only justifies the application of a matching estimator but also allows threefold heterogeneity to be taken into account. The recently developed multiple treatment framework is used to evaluate the effects with respect to regional, individual and programme heterogeneity. The results show considerable differences with respect to these sources of heterogeneity, but the overall finding is very clear. At the end of our observation period, that is, two years after the start of the programmes, participants in job creation schemes have a significantly lower success probability on the labour market in comparison to matched non-participants.
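The abstract refers to a matching estimator without spelling it out. As a rough illustration of the general idea only (not the multiple-treatment estimator used in the paper), the following Python sketch performs one-to-one nearest-neighbour matching on estimated propensity scores using synthetic, hypothetical data.

```python
# Illustrative sketch only: generic propensity score matching on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                             # observed covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment assignment
y = X[:, 0] + 0.5 * treated + rng.normal(size=n)        # outcome (true effect 0.5)

# 1) Estimate propensity scores P(D = 1 | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated unit to the nearest control on the propensity score.
controls = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# 3) Average treatment effect on the treated: mean outcome difference.
att = y[treated == 1].mean() - y[matched_controls].mean()
print(f"estimated ATT: {att:.3f}")
```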
We propose a new framework for modelling the time dependence in duration processes observed on financial markets. The pioneering ACD model introduced by Engle and Russell (1998) is extended so that the duration process is accompanied by an unobservable stochastic process. The Discrete Mixture ACD framework provides us with a general methodology which puts this idea into practice. It is established by introducing a discrete-valued latent regime variable, which can be justified in the light of recent market microstructure theories. The empirical application demonstrates its ability to capture specific characteristics of intraday transaction durations while alternative approaches fail. JEL classification: C41, C22, C25, C51, G14.
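For readers unfamiliar with the ACD framework, the basic Engle and Russell (1998) specification for durations x_i can be written as below; the discrete-mixture extension in the second line is only schematic, and the exact parameterization used in the paper may differ.

\[
x_i = \psi_i \,\varepsilon_i, \qquad
\psi_i = \mathrm{E}[x_i \mid \mathcal{F}_{i-1}] = \omega + \alpha x_{i-1} + \beta \psi_{i-1},
\qquad \varepsilon_i \ \text{i.i.d.},\ \mathrm{E}[\varepsilon_i] = 1,
\]
\[
\varepsilon_i \mid R_i = k \sim D_k, \qquad \Pr(R_i = k) = \pi_k, \qquad k = 1,\dots,K,
\]

where R_i is the discrete-valued latent regime variable and the D_k are the component distributions of the mixture.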
In recent methodological work, the well-known ACD approach, originally introduced by Engle and Russell (1998), has been supplemented by an unobservable stochastic process which accompanies the underlying duration process via a discrete mixture of distributions. The Mixture ACD model, emanating from the specialized proposal of De Luca and Gallo (2004), has proved to be a moderate tool for the description of financial duration data. The use of one and the same family of ordinary distributions has been common practice until now. Our contribution advocates the use of a richly parameterized, comprehensive family of distributions which allows different distributional idiosyncrasies to interact. JEL classification: C41, C22, C25, C51, G14
Information literacy is a mosaic of attitudes, understandings, capabilities and knowledge, about which there are three myths. The first myth is that it is about the ability to use ICTs to access a wealth of information. The second is that students entering higher education are information literate because student-centred, resource-based, and ICT-focused learning are now pervasive in secondary education. The third myth is that information literacy development can be addressed by library-centric generic approaches. This paper addresses those myths and emphasises the need for information literacy to be recognised as a critical whole-of-education and societal issue, fundamental to an information-enabled and better world. In formal education, information literacy can only be developed by infusion into curriculum design, pedagogies, and assessment.
Navigating information, facilitating knowledge: the library, the academy, and student learning
(2004)
Understanding the nature and complementarity of the phenomena of information and knowledge lends not only epistemological clarity to their relationship, but also reaffirms the place of the library in the academic mission of knowledge transfer, acquisition, interpretation, and creation. These in turn reassert the legitimacy of the academic library as a necessary participant in the teaching enterprise of colleges and universities. Such legitimacy induces an obligation to teach, and that obligation needs to be explored and implemented with adequate vigor and reach. Librarians and the academy must, however, concede that the scope of the task calls for a solution that goes beyond shared responsibilities. Academic libraries should assume a full teaching function even as they continue their exploration and design of activities and programs aimed at reinforcing information literacy in the various disciplines on campus. All must concede that the need for collaboration cannot provide grounds for questioning the desirability of autonomous teaching status for the academic library in information literacy education.
Abstract: The medium modification of kaon and antikaon masses, compatible with low-energy KN scattering data, is studied in a chiral SU(3) model. The mutual interactions with baryons in hot hadronic matter and the effects from the baryonic Dirac sea on the K (K-bar) masses are examined. The in-medium masses from the chiral SU(3) effective model are compared to those from chiral perturbation theory. Furthermore, the influence of these in-medium effects on kaon rapidity distributions and transverse energy spectra, as well as the K, K-bar flow pattern in heavy-ion collision experiments at 1.5 to 2 A·GeV, is investigated within the HSD transport approach. Detailed predictions on the transverse momentum and rapidity dependence of the directed flow v1 and the elliptic flow v2 are provided for Ni+Ni at 1.93 A·GeV within the various models, which can be used to determine the in-medium K± properties from the experimental side in the near future.
Antibaryons bound in nuclei
(2004)
We study the possibility of producing a new kind of nuclear system which in addition to ordinary nucleons contains a few antibaryons (B-bar = p-bar, etc.). The properties of such systems are described within the relativistic mean-field model by employing G-parity transformed interactions for the antibaryons. Calculations are first done for infinite systems and then for finite nuclei from 4He to 208Pb. It is demonstrated that the presence of a real antibaryon leads to a strong rearrangement of the target nucleus, resulting in a significant increase of its binding energy and local compression. Noticeable effects remain even after the antibaryon coupling constants are reduced by a factor of 3-4 compared to G-parity motivated values. We have performed detailed calculations of the antibaryon annihilation rates in the nuclear environment by applying a kinetic approach. It is shown that, due to the significant reduction of the reaction Q values, the in-medium annihilation rates should be strongly suppressed, leading to relatively long-lived antibaryon-nucleus systems. Multi-nucleon annihilation channels are analyzed too. We have also estimated formation probabilities of bound B-bar + A systems in p-bar A reactions and have found that their observation will be feasible at the future GSI antiproton facility. Several observable signatures are proposed. The possibility of producing multi-quark-antiquark clusters is discussed. PACS numbers: 25.43.+t, 21.10.-k, 21.30.Fe, 21.80.+a
We study the phase diagram of a generalized chiral SU(3)-flavor model in mean-field approximation. In particular, the influence of the baryon resonances, and their couplings to the scalar and vector fields, on the characteristics of the chiral phase transition as a function of temperature and baryon-chemical potential is investigated. Present and future finite-density lattice calculations might constrain the couplings of the fields to the baryons. The results are compared to recent lattice QCD calculations, and it is shown that it is non-trivial to obtain, simultaneously, stable cold nuclear matter.
A critical discussion of the present status of the CERN experiments on charm dynamics and hadron collective flow is given. We emphasize the importance of the flow excitation function from 1 to 50 A·GeV: here the hydrodynamic model has predicted the collapse of the v1-flow and of the v2-flow at 10 A·GeV; at 40 A·GeV it has been recently observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as potential evidence for a first-order phase transition at high baryon density rho_B. A detailed discussion of the collective flow as a barometer for the equation of state (EoS) of hot dense matter at RHIC follows. Here, hadronic rescattering models can explain < 30% of the observed elliptic flow, v2, for pT > 2 GeV/c. This is interpreted as evidence for the production of superdense matter at RHIC with initial pressure far above hadronic pressure, p > 1 GeV/fm3. We suggest that the fluctuations in the flow, v1 and v2, should be measured in future, since ideal hydrodynamics predicts that they are larger than 50% due to initial-state fluctuations. Furthermore, the QGP coefficient of viscosity may be determined experimentally from the fluctuations observed. The connection of v2 to jet suppression is examined. It is proven experimentally that the collective flow is not faked by minijet fragmentation. Additionally, detailed transport studies show that the away-side jet suppression can only partially (< 50%) be due to hadronic rescattering. We, finally, propose upgrades and second-generation experiments at RHIC which inspect the first-order phase transition in the fragmentation region, i.e. at µB ≈ 400 MeV (y ≈ 4-5), where the collapse of the proton flow should be seen in analogy to the 40 A·GeV data. The study of jet-wake-riding potentials and bow shocks caused by jets in the QGP formed at RHIC can give further information on the equation of state (EoS) and transport coefficients of the Quark Gluon Plasma (QGP).
A scenario of heavy resonances, called massive Hagedorn states, is proposed which exhibits a fast (t_H ~ 1 fm/c) chemical equilibration of (strange) baryons and anti-baryons at the QCD critical temperature Tc. For relativistic heavy-ion collisions this scenario predicts that hadronization is followed by a brief expansion phase during which the equilibration rate is higher than the expansion rate, so that baryons and antibaryons reach chemical equilibrium before chemical freeze-out occurs. PACS No.: 12.38.Mh
The wave function of a spheroidal harmonic oscillator without spin-orbit interaction is expressed in terms of associated Laguerre and Hermite polynomials. The pairing gap and the Fermi energy are found by solving the BCS system of two equations. Analytical relationships for the matrix elements of inertia are obtained as functions of the main quantum numbers and the potential derivative. They may be used to test the complex computer codes one should develop in a realistic approach to the fission dynamics. The results given for the 240Pu nucleus are compared with a hydrodynamical model. The importance of taking into account the correction term due to the variation of the occupation number is stressed.
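The "BCS system of two equations" is not written out in the abstract; in the standard constant-pairing-strength treatment it consists of the gap equation and the particle-number equation, solved simultaneously for the pairing gap Delta and the Fermi energy lambda given the single-particle energies epsilon_k:

\[
\frac{2}{G} = \sum_k \frac{1}{\sqrt{(\epsilon_k - \lambda)^2 + \Delta^2}},
\qquad
N = \sum_k \left[ 1 - \frac{\epsilon_k - \lambda}{\sqrt{(\epsilon_k - \lambda)^2 + \Delta^2}} \right].
\]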
Complex fission phenomena
(2004)
Complex fission phenomena are studied in a unified way. Very general reflection-asymmetric equilibrium (saddle point) nuclear shapes are obtained by solving an integro-differential equation without it being necessary to specify a certain parametrization. The mass asymmetry in binary cold fission of Th and U isotopes is explained as the result of adding a phenomenological shell correction to the liquid drop model deformation energy. Applications to binary, ternary, and quaternary fission are outlined.
A very general saddle point nuclear shape may be found as a solution of an integro-differential equation without giving a priori any shape parametrization. By introducing phenomenological shell corrections one obtains minima of the deformation energy for binary fission of parent nuclei at a finite (non-zero) mass asymmetry. Results are presented for reflection-asymmetric saddle point shapes of thorium and uranium even-mass isotopes with A=226-238 and A=230-238, respectively.
We show that an unambiguous way of determining the universal limiting fragmentation region is to consider the derivative d^2n/d eta^2 of the pseudorapidity distribution per participant pair. In addition, we find that the transition region between the fragmentation and the central plateau regions exhibits a second kind of universal behavior that is only apparent in d^2n/d eta^2. The sqrt[s] dependence of the height of the central plateau (dn/d eta) at eta=0 and the total charged-particle multiplicity n_total critically depend on the behavior of this universal transition curve. Analyzing available RHIC data, we show that (dn/d eta) at eta=0 can be bounded by ln^2 s and n_total can be bounded by ln^3 s. We also show that the deuteron-gold data from RHIC have exactly the same features as the gold-gold data, indicating that these universal behaviors are a feature of the initial-state parton-nucleus interactions and not a consequence of final-state interactions. Predictions for LHC energy are also given.
We discuss modifications of the gyromagnetic moment of electrons and muons due to a minimal length scale combined with a modified fundamental scale Mf. First-order deviations from the theoretical standard model value for g-2 due to these String Theory-motivated effects are derived. Constraints for the new fundamental scale Mf are given.
String theory suggests modifications of our spacetime such as extra dimensions and the existence of a minimal length scale. In models with additional dimensions, the Planck scale can be lowered to values accessible by future colliders. Effective theories which extend beyond the standard model by including extra dimensions and a minimal length allow computation of observables and can be used to make testable predictions. Expected effects that arise within these models are the production of gravitons and black holes. Furthermore, the Planck length is a lower bound to the possible resolution of spacetime, which might be reached soon.
We compare multiplicities as well as rapidity and transverse momentum distributions of protons, pions and kaons calculated within presently available transport approaches for heavy ion collisions around 1 AGeV. For this purpose, three reactions have been selected: Au+Au at 1 and 1.48 AGeV and Ni+Ni at 1.93 AGeV.
Course management software: supporting the university's teaching with technology initiatives
(2004)
An increasingly important element of the teaching with technology activities at Northwestern University is the course management system, a web-based class communication and administration environment. The usage growth of the system is substantial and amplifies the need for integration with other web services and resources. Integration is particularly material in the area of library services. This presentation contains a case study of Northwestern University's implementation of its course management system software and highlights examples of how the system is being used to enhance teaching and learning. A description of the integration efforts with library resources is provided. The goal of the presentation is to equip librarians with the basic knowledge required to engage with their colleagues in conversations surrounding the nature of integration of these systems within the teaching and learning landscapes of their home institutions.
The key hypothesis is that the IT industry lures us into the IT world with a promise to solve our information problems. If we sign the contract, we will recognise that the IT industry cannot keep the promise. One reason: they themselves have lost sight of their own game. Therefore they have to invent new tools continuously. LIS professionals should not leave the field to IT professionals. LIS professionals should rather put stress on revealing the difference in the value chain between data - information - knowledge. Information and knowledge are brainware and are not produced by hardware and software in the sense of the IT philosophy. Against the background of the language game of Jean-François Lyotard, the author explains the information and knowledge society as a language game invented by the IT industry. Furthermore, his views on postmodern LIS professionals and the consequences involved for LIS training will be presented.
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
Evidence for an exotic S=-2, Q=-2 baryon resonance in proton-proton collisions at the CERN SPS
(2004)
Results of resonance searches in the Xi - pi -, Xi - pi +, Xi -bar+ pi -, and Xi -bar+ pi + invariant mass spectra in proton-proton collisions at sqrt[s]=17.2 GeV are presented. Evidence is shown for the existence of a narrow Xi - pi - baryon resonance with mass of 1.862±0.002 GeV/c2 and width below the detector resolution of about 0.018 GeV/c2. The significance is estimated to be above 4.2 sigma . This state is a candidate for the hypothetical exotic Xi --3/2 baryon with S=-2, I=3 / 2, and a quark content of (dsdsu-bar). At the same mass, a peak is observed in the Xi - pi + spectrum which is a candidate for the Xi 03/2 member of this isospin quartet with a quark content of (dsusd-bar). The corresponding antibaryon spectra also show enhancements at the same invariant mass.
Results on high transverse momentum charged particle emission with respect to the reaction plane are presented for Au+Au collisions at sqrt[sNN]=200 GeV. Two- and four-particle correlation results are presented, as well as a comparison of azimuthal correlations in Au+Au collisions to those in p+p at the same energy. The elliptic anisotropy v2 is found to reach its maximum at pt~3 GeV/c, then decrease slowly and remain significant up to pt ~ 7-10 GeV/c. Stronger suppression is found in the back-to-back high-pt particle correlations for particles emitted out of plane compared to those emitted in plane. The centrality dependence of v2 at intermediate pt is compared to simple models based on jet quenching.
Azimuthally sensitive Hanbury Brown-Twiss interferometry in Au+Au collisions at sqrt[sNN]=200 GeV
(2004)
We present the results of a systematic study of the shape of the pion distribution in coordinate space at freeze-out in Au+Au collisions at BNL RHIC using two-pion Hanbury Brown-Twiss (HBT) interferometry. Oscillations of the extracted HBT radii versus emission angle indicate sources elongated perpendicular to the reaction plane. The results indicate that the pressure and expansion time of the collision system are not sufficient to completely quench its initial shape.
The pseudorapidity asymmetry and centrality dependence of charged hadron spectra in d+Au collisions at sqrt[sNN ]=200 GeV are presented. The charged particle density at midrapidity, its pseudorapidity asymmetry, and centrality dependence are reasonably reproduced by a multiphase transport model, by HIJING, and by the latest calculations in a saturation model. Ratios of transverse momentum spectra between backward and forward pseudorapidity are above unity for pT below 5 GeV/c . The ratio of central to peripheral spectra in d+Au collisions shows enhancement at 2< pT <6 GeV/c , with a larger effect at backward rapidity than forward rapidity. Our measurements are in qualitative agreement with gluon saturation and in contrast to calculations based on incoherent multiple partonic scatterings.
Transverse energy ( ET ) distributions have been measured for Au+Au collisions at sqrt[sNN ]=200 GeV by the STAR Collaboration at RHIC. ET is constructed from its hadronic and electromagnetic components, which have been measured separately. ET production for the most central collisions is well described by several theoretical models whose common feature is large energy density achieved early in the fireball evolution. The magnitude and centrality dependence of ET per charged particle agrees well with measurements at lower collision energy, indicating that the growth in ET for larger collision energy results from the growth in particle production. The electromagnetic fraction of the total ET is consistent with a final state dominated by mesons and independent of centrality.
We report inclusive photon measurements about midrapidity ( |y| <0.5 ) from 197 Au + 197 Au collisions at sqrt[sNN ]=130 GeV at RHIC. Photon pair conversions were reconstructed from electron and positron tracks measured with the Time Projection Chamber (TPC) of the STAR experiment. With this method, an energy resolution of Delta E/E ~ 2% at 0.5 GeV has been achieved. Reconstructed photons have also been used to measure the transverse momentum ( pt ) spectra of pi 0 mesons about midrapidity ( |y| <1 ) via the pi 0 --> gamma gamma decay channel. The fractional contribution of the pi 0 --> gamma gamma decay to the inclusive photon spectrum decreases by 20%±5% between pt =1.65 GeV/c and pt =2.4 GeV/c in the most central events, indicating that relative to pi 0 --> gamma gamma decay the contribution of other photon sources is substantially increasing.
We present STAR measurements of charged hadron production as a function of centrality in Au+Au collisions at sqrt[sNN ]=130 GeV . The measurements cover a phase space region of 0.2< pT <6.0 GeV/c in transverse momentum and -1< eta <1 in pseudorapidity. Inclusive transverse momentum distributions of charged hadrons in the pseudorapidity region 0.5< | eta | <1 are reported and compared to our previously published results for | eta | <0.5 . No significant difference is seen for inclusive pT distributions of charged hadrons in these two pseudorapidity bins. We measured dN/d eta distributions and truncated mean pT in a region of pT > pcutT , and studied the results in the framework of participant and binary scaling. No clear evidence is observed for participant scaling of charged hadron yield in the measured pT region. The relative importance of hard scattering processes is investigated through binary scaling fraction of particle production.
We report on the rapidity and centrality dependence of proton and antiproton transverse mass distributions from 197Au + 197Au collisions at sqrt[sNN ]=130 GeV as measured by the STAR experiment at the Relativistic Heavy Ion Collider (RHIC). Our results are from the rapidity and transverse momentum range of |y| <0.5 and 0.35< pt <1.00 GeV/c . For both protons and antiprotons, transverse mass distributions become more convex from peripheral to central collisions demonstrating characteristics of collective expansion. The measured rapidity distributions and the mean transverse momenta versus rapidity are flat within |y| <0.5 . Comparisons of our data with results from model calculations indicate that in order to obtain a consistent picture of the proton (antiproton) yields and transverse mass distributions the possibility of prehadronic collective expansion may have to be taken into account.
We present data on e+ e- pair production accompanied by nuclear breakup in ultraperipheral gold-gold collisions at a center of mass energy of 200 GeV per nucleon pair. The nuclear breakup requirement selects events at small impact parameters, where higher-order diagrams for pair production should be enhanced. We compare the data with two calculations: one based on the equivalent photon approximation, and the other using lowest-order quantum electrodynamics (QED). The data distributions agree with both calculations, except that the pair transverse momentum spectrum disagrees with the equivalent photon approach. We set limits on higher-order contributions to the cross section.
The transverse mass spectra and midrapidity yields for Xi s and Omega s are presented. For the 10% most central collisions, the Xi -bar+/h- ratio increases from the Super Proton Synchrotron to the Relativistic Heavy Ion Collider energies while the Xi -/h- stays approximately constant. A hydrodynamically inspired model fit to the Xi spectra, which assumes a thermalized source, seems to indicate that these multistrange particles experience a significant transverse flow effect, but are emitted when the system is hotter and the flow is smaller than values obtained from a combined fit to pi , K, p, and Lambda s.
Measurements of the production of forward high-energy pi 0 mesons from transversely polarized proton collisions at sqrt[s]=200 GeV are reported. The cross section is generally consistent with next-to-leading order perturbative QCD calculations. The analyzing power is small at xF below about 0.3, and becomes positive and large at higher xF, similar to the trend in data at sqrt[s] <= 20 GeV. The analyzing power is in qualitative agreement with perturbative QCD model expectations. This is the first significant spin result seen for particles produced with pT>1 GeV/c at a polarized proton collider.
Transverse mass and rapidity distributions for charged pions, charged kaons, protons, and antiprotons are reported for sqrt[sNN]=200 GeV pp and Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC). Chemical and kinetic equilibrium model fits to our data reveal strong radial flow and a long duration from chemical to kinetic freeze-out in central Au+Au collisions. The chemical freeze-out temperature appears to be independent of initial conditions at RHIC energies.
We report results on rho (770)0--> pi + pi - production at midrapidity in p+p and peripheral Au+Au collisions at sqrt[sNN]=200 GeV. This is the first direct measurement of rho (770)0--> pi + pi - in heavy-ion collisions. The measured rho 0 peak in the invariant mass distribution is shifted by ~40 MeV/c2 in minimum bias p+p interactions and ~70 MeV/c2 in peripheral Au+Au collisions. The rho 0 mass shift is dependent on transverse momentum and multiplicity. The modification of the rho 0 meson mass, width, and shape due to phase space and dynamical effects are discussed.
We report the first observations of the first harmonic (directed flow, v1) and the fourth harmonic (v4), in the azimuthal distribution of particles with respect to the reaction plane in Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC). Both measurements were done taking advantage of the large elliptic flow (v2) generated at RHIC. From the correlation of v2 with v1 it is determined that v2 is positive, or in-plane. The integrated v4 is about a factor of 10 smaller than v2. For the sixth (v6) and eighth (v8) harmonics upper limits on the magnitudes are reported.
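For reference, the flow coefficients v1, v2, v4, ... used in this and the neighbouring abstracts are, by the standard convention in heavy-ion physics, the Fourier coefficients of the particle azimuthal distribution with respect to the reaction plane angle Psi_RP:

\[
\frac{dN}{d\phi} \propto 1 + 2\sum_{n\ge 1} v_n \cos\!\big[n(\phi - \Psi_{\mathrm{RP}})\big],
\qquad
v_n = \left\langle \cos\!\big[n(\phi - \Psi_{\mathrm{RP}})\big] \right\rangle .
\]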
We present STAR measurements of the azimuthal anisotropy parameter v2 and the binary-collision scaled centrality ratio RCP for kaons and lambdas ( Lambda + Lambda -bar) at midrapidity in Au+Au collisions at sqrt[sNN]=200 GeV. In combination, the v2 and RCP particle-type dependencies contradict expectations from partonic energy loss followed by standard fragmentation in vacuum. We establish pT ~ 5 GeV/c as the value where the centrality dependent baryon enhancement ends. The K0S and Lambda + Lambda -bar v2 values are consistent with expectations of constituent-quark-number scaling from models of hadron formation by parton coalescence or recombination.
Hackethal and Schmidt (2003) criticize a large body of literature on the financing of corporate sectors in different countries that questions some of the distinctions conventionally drawn between financial systems. Their criticism is directed against the use of net flows of finance and they propose alternative measures based on gross flows which they claim re-establish conventional distinctions. This paper argues that their criticism is invalid and that their alternative measures are misleading. There are real issues raised by the use of aggregate data but they are not the ones discussed in Hackethal and Schmidt’s paper. JEL Classification: G30
In contrast to the United States and the United Kingdom, little empirical work exists on the distributional characteristics of appraisal-based real estate returns outside these countries. The purpose of this study is to fill this gap by focusing on Germany. In line with other studies, this paper offers an extensive investigation into the distribution of German real estate returns and compares them with U.S. and U.K. data for the same period. Furthermore, the comovements with bonds and stocks are also examined. At the core, the distributional characteristics for German real estate are comparable to those for the U.S. and U.K.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers will nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that more mediocre programmers cannot achieve. Through setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. With this incentive they will forgo the immediate rewards they could earn in software companies producing proprietary software by restricting the access to the source code of their product. Whenever institutional arrangements are in place that enable the acquisition of such a signal and the subsequent substitution into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
In this paper, we calculate a transaction-based price index for apartments in Paris (France). The heterogeneous character of real estate is taken into account using a hedonic model. The functional form is specified using a general Box-Cox function. The database covers 84,686 transactions of the housing market in 1990:01-1999:12, which is one of the largest samples ever used in comparable studies. Low correlations of the price index with stock and bond indices (first differences) indicate diversification benefits from the inclusion of real estate in a mixed-asset portfolio. JEL classification: C43, C51, O18, R20.
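As a reminder of what a "general Box-Cox function" means in a hedonic setting, a generic specification is shown below; the variables shown and the exact transformation used in the paper may differ.

\[
P_i^{(\theta)} = \beta_0 + \sum_k \beta_k X_{ik}^{(\lambda_k)} + \varepsilon_i,
\qquad
z^{(\lambda)} =
\begin{cases}
\dfrac{z^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt]
\ln z, & \lambda = 0,
\end{cases}
\]

where P_i is the transaction price, the X_{ik} are hedonic characteristics of apartment i, and theta and the lambda_k are the estimated Box-Cox transformation parameters.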
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. As a consequence tests reported in the literature may suffer from the problem that in a time-series framework the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13 Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
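To illustrate the discretization issue discussed above, the following Python sketch simulates the terminal error of a discretely rebalanced Black-Scholes delta hedge of a short European call, with the stock drifting at a physical rate mu above the risk-free rate. All parameter values are hypothetical, and this is not the test procedure of the paper.

```python
# Illustrative sketch only: discrete-time delta hedging error in a BS world.
import numpy as np
from scipy.stats import norm

def bs_price_delta(S, K, r, sigma, tau):
    """Black-Scholes call price and delta for remaining maturity tau > 0."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2), norm.cdf(d1)

def hedging_error(n_steps=50, S0=100.0, K=100.0, r=0.02, mu=0.08, sigma=0.2,
                  T=0.25, rng=None):
    """Sell one call, delta-hedge n_steps times, return hedge value minus payoff."""
    if rng is None:
        rng = np.random.default_rng()
    dt = T / n_steps
    S = S0
    price, delta = bs_price_delta(S, K, r, sigma, T)
    cash = price - delta * S                       # premium received minus stock bought
    for i in range(1, n_steps + 1):
        S *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal())
        cash *= np.exp(r * dt)                     # cash account earns the risk-free rate
        if i < n_steps:
            _, new_delta = bs_price_delta(S, K, r, sigma, T - i * dt)
            cash -= (new_delta - delta) * S        # rebalance the stock position
            delta = new_delta
    return cash + delta * S - max(S - K, 0.0)      # hedge portfolio minus option payoff

rng = np.random.default_rng(1)
errors = [hedging_error(rng=rng) for _ in range(5000)]
print("mean discrete hedging error:", np.mean(errors))
```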
In a framework closely related to Diamond and Rajan (2001), we characterize different financial systems and analyze the welfare implications of different LOLR policies in these financial systems. We show that in a bank-dominated financial system it is less likely that a LOLR policy that follows the Bagehot rules is preferable. In financial systems with rather illiquid assets, discretionary individual liquidity assistance might be welfare-improving, while in market-based financial systems, with rather liquid assets in the banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences between financial systems into account is misleading. JEL classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just "the" market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed-form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us a lot about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending and the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL Classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) have presumably engaged in coordinated actions to manage distress mergers. The data comprise 3640 banks and 1484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of the economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without adversely affecting systemic stability.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which pre-adopted such standards before 2005, but investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital and therefore measurable economic benefits for firms applying IAS/IFRS or US-GAAP. Accordingly, caution is warranted in concluding that reporting under internationally accepted standards, per se, lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm's expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only publicly available information at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
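As a concrete illustration of an implied cost of equity estimate of the kind referred to above, the following sketch implements the simple PEG-ratio and modified PEG-ratio estimators commonly associated with Easton (2004); whether this matches the exact specification used in the study is an assumption, and all input numbers are purely illustrative.

    import math

    def implied_r_peg(price, eps1, eps2):
        # PEG-ratio estimate: r solves P0 = (eps2 - eps1) / r^2 (zero dividends assumed)
        return math.sqrt((eps2 - eps1) / price)

    def implied_r_mpeg(price, eps1, eps2, dps1):
        # Modified PEG estimate: r solves P0 = (eps2 + r * dps1 - eps1) / r^2,
        # i.e. the positive root of price * r^2 - dps1 * r - (eps2 - eps1) = 0
        return (dps1 + math.sqrt(dps1**2 + 4.0 * price * (eps2 - eps1))) / (2.0 * price)

    # Illustrative inputs: price 50, one- and two-year-ahead consensus EPS, forecasted DPS
    print(implied_r_peg(50.0, 3.00, 3.45))          # roughly 9.5% expected cost of equity
    print(implied_r_mpeg(50.0, 3.00, 3.45, 1.20))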
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us a lot about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending and the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flow or high interim liquidation values of assets tend to finance without a relationship bank.
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It will be argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures it supports.
Tractable hedging - an implementation of robust hedging strategies : [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion-type models, including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies, which are defined as sums of Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge, which consists of two Gaussian hedges for one long and one short position in convex claims that have to be chosen optimally.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30 A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. The empirical results, which use an estimation technique for determining gross flows of funds in those cases in which empirical data are not available, are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way, and moreover in a way which is largely in line with the general view of the differences between the financial systems of the countries covered in the present paper.
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the strangeness production as a function of centre of mass energy and of the parameters of the source. We have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed strangeness hadronic phase space under-saturation. We show that, in this energy range, the use of hadron yields at midrapidity instead of in full phase space artificially enhances strangeness production and could lead to incorrect conclusions as far as the occurrence of full chemical equilibrium is concerned. In addition to the basic model with an extra strange quark non-equilibrium parameter, we have tested three more schemes: a two-component model superimposing hadrons coming out of single nucleon-nucleon interactions to those emerging from large fireballs at equilibrium, a model with local strangeness neutrality and a model with strange and light quark non-equilibrium parameters. The behaviour of the source parameters as a function of colliding system and collision energy is studied. The description of strangeness production entails a non-monotonic energy dependence of strangeness saturation parameter gamma_S with a maximum around 30A GeV. We also present predictions of the production rates of still unmeasured hadrons including the newly discovered Theta^+(1540) pentaquark baryon.
We suggest that the fluctuations of strange hadron multiplicity could be sensitive to the equation of state and microscopic structure of strongly interacting matter created at the early stage of high energy nucleus-nucleus collisions. They may serve as an important tool in the study of the deconfinement phase transition. We predict, within the statistical model of the early stage, that the ratio of properly filtered fluctuations of strange to non-strange hadron multiplicities should have a non-monotonic energy dependence with a minimum in the mixed phase region.
The data on mT spectra of K0S, K+ and K- mesons produced in all inelastic p+p and p+pbar interactions in the energy range sqrt(s_NN) = 4.7-1800 GeV are compiled and analyzed. The spectra are parameterized by a single exponential function, dN/(mT dmT) = C exp(-mT/T), and the inverse slope parameter T is the main object of study. The T parameter is found to be similar for K0S, K+ and K- mesons. It increases monotonically with collision energy from T ~ 30 MeV at sqrt(s_NN) = 4.7 GeV to T ~ 220 MeV at sqrt(s_NN) = 1800 GeV. The T parameter measured in p+p and p+pbar interactions is significantly lower than the corresponding parameter obtained for central Pb+Pb collisions at all studied energies. Also the shape of the energy dependence of T is different for central Pb+Pb collisions and p+p(pbar) interactions.
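For readers who want to reproduce the type of fit described here, the sketch below fits the single exponential dN/(mT dmT) = C exp(-mT/T) to a synthetic spectrum and extracts the inverse slope parameter T; the data points and uncertainties are made up for illustration and are not NA49 measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def spectrum(mT, C, T):
        # dN/(mT dmT) = C * exp(-mT / T), with mT and T in GeV
        return C * np.exp(-mT / T)

    rng = np.random.default_rng(1)
    mT = np.linspace(0.5, 1.5, 25)                   # illustrative kaon mT range, GeV
    y_true = spectrum(mT, 50.0, 0.16)                # assumed inverse slope of 160 MeV
    y_err = 0.05 * y_true                            # assumed 5% point-to-point errors
    y_obs = y_true + y_err * rng.standard_normal(mT.size)

    (C_fit, T_fit), pcov = curve_fit(spectrum, mT, y_obs, p0=(40.0, 0.2),
                                     sigma=y_err, absolute_sigma=True)
    T_err = np.sqrt(pcov[1, 1])
    print(f"inverse slope T = {1e3 * T_fit:.0f} +/- {1e3 * T_err:.0f} MeV")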
We propose a method to experimentally study the equation of state of strongly interacting matter created at the early stage of nucleus-nucleus collisions. The method exploits the relation between relative entropy and energy fluctuations, on the one hand, and the equation of state, on the other. As a measurable quantity, the ratio of properly filtered multiplicity to energy fluctuations is proposed. Within a statistical approach to the early stage of nucleus-nucleus collisions, this fluctuation ratio exhibits a non-monotonic collision energy dependence with a maximum in the domain where the onset of deconfinement occurs.
Production of Lambda and anti-Lambda hyperons was measured in central Pb-Pb collisions at 40, 80, and 158 A GeV beam energy on a fixed target. Transverse mass spectra and rapidity distributions are given for all three energies. The Lambda/pi ratio at mid-rapidity and in full phase space shows a pronounced maximum between the highest AGS and 40 A GeV SPS energies, whereas the anti-Lambda/pi ratio exhibits a monotonic increase. PACS numbers: 25.75.-q
Fluctuations of charged particle number are studied in the canonical ensemble. In the infinite volume limit the fluctuations in the canonical ensemble are different from the fluctuations in the grand canonical one. Thus, the well-known equivalence of both ensembles for average quantities does not extend to the fluctuations. In view of the possible relevance of these results for the analysis of fluctuations in nuclear collisions at high energies, the role of limited kinematical acceptance is also studied.
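The difference between the two ensembles can be checked numerically. In a toy system of positively and negatively charged Boltzmann particles with net charge Q = 0, the canonical distribution of N+ is P(N+ = n) proportional to z^(2n)/(n!)^2, while the grand canonical ensemble gives a Poisson distribution with scaled variance 1; the sketch below (an illustration, not the paper's calculation) shows the canonical scaled variance approaching 1/2 as the volume parameter z grows.

    import numpy as np
    from scipy.special import gammaln

    def ce_scaled_variance(z, nmax=500):
        # Canonical ensemble with net charge Q = 0: P(N+ = n) proportional to z^(2n) / (n!)^2.
        # z is the single-particle partition function and plays the role of the system volume.
        n = np.arange(nmax)
        logw = 2.0 * n * np.log(z) - 2.0 * gammaln(n + 1.0)
        p = np.exp(logw - logw.max())
        p /= p.sum()
        mean = np.dot(n, p)
        var = np.dot((n - mean) ** 2, p)
        return var / mean

    for z in (1.0, 5.0, 20.0, 100.0):
        print(z, round(ce_scaled_variance(z), 3))   # tends to 0.5; the GCE (Poisson) value is 1.0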
Report from NA49
(2004)
The most recent data of NA49 on hadron production in nuclear collisions at CERN SPS energies are presented. Anomalies in the energy dependence of pion and kaon production in central Pb+Pb collisions are observed. They suggest that the onset of deconfinement is located at about 30 AGeV. Large multiplicity and transverse momentum fluctuations are measured for collisions of intermediate mass systems at 158 AGeV. The need for a new experimental programme at the CERN SPS is underlined.
The transverse mass mt distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity and in the range 0 < mt - m < 1.0 (1.5) GeV/c^2 for minimum bias collisions at 158 A GeV and for central collisions at 40 and 80 A GeV beam energies. The rapidity density dn/dy, inverse slope parameter T and mean transverse mass <mt> derived from the mt distributions, as well as the coalescence parameter B2, are studied as a function of the incident energy and the collision centrality. The deuteron mt spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality, reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mt. Finally, B2 shows an increase with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
Preliminary results on pion-pion Bose-Einstein correlations in central Pb+Pb collisions measured by the NA49 experiment are presented. The rapidity as well as the transverse momentum dependence of the HBT radii is shown for collisions at 20, 30, 40, 80, and 158 AGeV beam energy. Including results from AGS and RHIC experiments, only a weak energy dependence of the radii is observed. Based on hydrodynamical models, parameters such as the lifetime and geometrical radius of the source are derived from the dependence of the radii on transverse momentum.
Event-by-event fluctuations of particle ratios in central Pb + Pb collisions at 20 to 158 AGeV
(2004)
In the vicinity of the QCD phase transition, critical fluctuations have been predicted to lead to non-statistical fluctuations of particle ratios, depending on the nature of the phase transition. Recent results of the NA49 energy scan program show a sharp maximum of the ratio of K+ to Pi+ yields in central Pb+Pb collisions at beam energies of 20-30 AGeV. This observation has been interpreted as an indication of a phase transition at low SPS energies. We present first results on event-by-event fluctuations of the kaon to pion and proton to pion ratios at beam energies close to this maximum.
Results are presented on event-by-event electric charge fluctuations in central Pb+Pb collisions at 20, 30, 40, 80 and 158 AGeV. The observed fluctuations are close to those expected for a gas of pions correlated by global charge conservation only. These fluctuations are considerably larger than those calculated for an ideal gas of deconfined quarks and gluons. The present measurements do not necessarily exclude reduced fluctuations from a quark-gluon plasma because these might be masked by contributions from resonance decays.
System size and centrality dependence of the balance function in A + A collisions at √sNN = 17.2 GeV
(2004)
Electric charge correlations were studied for p+p, C+C, Si+Si and centrality selected Pb+Pb collisions at sqrt(s_NN) = 17.2 GeV with the NA49 large acceptance detector at the CERN-SPS. In particular, long range pseudo-rapidity correlations of oppositely charged particles were measured using the Balance Function method. The width of the Balance Function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
The system size dependence of multiplicity fluctuations of charged particles produced in nuclear collisions at 158 A GeV was studied in the NA49 CERN experiment. The results indicate a non-monotonic dependence of the scaled variance of the multiplicity distribution, with a maximum for semi-peripheral Pb+Pb interactions with a number of projectile participants of about 35. This effect is not observed in the string-hadronic model of nuclear collisions HIJING.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to transform the hot and dense nuclear matter into a deconfined phase.
In the early Nineties the Hague Conference on Private International Law, on the initiative of the United States, started negotiations on a Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters (the "Hague Convention"). In October 1999 the Special Commission on duty presented a preliminary text, which was modelled quite closely on the European Convention on Jurisdiction and Enforcement of Judgments in Civil and Commercial Matters (the "Brussels Convention"). The latter was concluded between the then six Member States of the EEC in Brussels in 1968 and amended several times on the occasion of the accession of new Member States. In 2000, after the Treaty of Amsterdam altered the legal basis for judicial co-operation in civil matters in Europe, it was transformed into an EC Regulation (the "Brussels I Regulation"). The 1999 draft of the Hague Convention was heavily criticized by the USA and other states for its European approach of a double convention, regulating not only the recognition and enforcement of judgments, but at the same time the extent of and the limits to jurisdiction to adjudicate in international cases. During a diplomatic conference in June 2001 a second draft was presented which contained alternative versions of several articles and thus documented the existing dissent rather than resembling a draft convention. Difficulties in reaching a consensus remained, especially with regard to activity-based jurisdiction, intellectual property, consumer rights and employee rights. In addition, the appropriateness of the whole draft was questioned in light of the problems posed by the de-territorialization of relevant conduct through the advent of the Internet. In April 2002 it was decided to continue negotiations on an informal level on the basis of a nucleus approach. The core consensus as identified by a working group, however, was not very broad. The experts involved came to the conclusion that the project should be limited to choice of court agreements. In March 2004 a draft was presented which sets out its aims as follows: "The objective of the Convention is to make exclusive choice of court agreements as effective as possible in the context of international business. The hope is that the Convention will do for choice of court agreements what the New York Convention of 1958 has done for arbitration agreements." In April 2004 the Special Commission of the Hague Conference adopted a Draft "Convention on Exclusive Choice of Court Agreements", which according to its Art. 2 No. 1 a) is not applicable to choice of court agreements "to which a natural person acting primarily for personal, family or household purposes (a consumer) is a party". The broader project of a global judgments convention thus seems to have been abandoned, or at least postponed indefinitely. There are - of course - several reasons why the Hague Judgments project failed. Samuel Baumgartner has described an important one as the "Justizkonflikt" between the United States and Europe or, more specifically, Germany. Within the context of the general topic of this conference, that is, (international) jurisdiction for human rights, in the remainder of this presentation I shall elaborate on the socio-cultural aspects of the impartiality of judgments and their enforcement on a global scale.
In April 2003 I commented on the European Commission's Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final]. While the main argument of that paper, i.e. the common neglect of the inherent interrelation between the further harmonisation of substantive contract law by directives or through an optional European Civil Code on the one hand and the modernisation of the conflict rules for consumer contracts in Art. 5 Rome Convention on the other hand, remains a pressing issue, and as the German Law Journal continues its efforts in offering timely and critical analysis of consumer law issues, there is a variety of recent developments worth noting.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in-situ observations from the high altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, the denitrification has only a moderate impact (<= 10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background: In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to such complex structures as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. signal transduction) or spatial effects in metabolic processes, it would be helpful to use simulations of such reaction-diffusion systems. Pattern formation has been extensively studied in two dimensions. However, the extension to three-dimensional reaction-diffusion systems poses some challenges to the visualization of the processes being simulated. Scope of the Thesis: The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of the existing literature on 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver), based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro, has to be developed. In a succeeding step, this algorithm has to be optimized for high performance. A prototypic 3D configuration tool for the initial state of the system has to be developed. This basic tool should enable the user to define and store the location of molecules, membranes and channels within a reaction space of user-defined size. A suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypic implementation of a suitable reaction space visualization component for the display of the simulation results. In particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests. The simulation, configuration and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA. Due to this international cooperation the thesis will be written in English.
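A minimal example of the kind of PDE solver described in the scope section is an explicit finite-difference step for a two-species reaction-diffusion system on a 3D grid. The sketch below uses Gray-Scott kinetics purely as a stand-in for "a small number of molecules and reactions"; it is not the thesis's Systems Biology Workbench-compliant implementation, and all parameters are illustrative.

    import numpy as np

    def laplacian(a):
        # 7-point stencil with periodic boundaries; the grid spacing is absorbed into D
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) +
                np.roll(a, 1, 2) + np.roll(a, -1, 2) - 6.0 * a)

    def react_diffuse_3d(n=32, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=0.5):
        # Explicit Euler integration of du/dt = Du*Lap(u) - u*v^2 + F*(1 - u),
        #                                dv/dt = Dv*Lap(v) + u*v^2 - (F + k)*v
        u = np.ones((n, n, n))
        v = np.zeros((n, n, n))
        c = n // 2
        u[c-2:c+2, c-2:c+2, c-2:c+2] = 0.5        # perturb a small central region
        v[c-2:c+2, c-2:c+2, c-2:c+2] = 0.25       # to seed pattern formation
        for _ in range(steps):
            uvv = u * v * v
            u += dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
            v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
        return u, v                                # 3D concentration fields for visualization

The returned concentration fields can be handed to any volume or isosurface renderer, which is where the visualization component discussed in the thesis would take over.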
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the chemical equilibration of the system as a function of center of mass energy and of the parameters of the source. Additionally, we have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed strangeness hadronic phase space under-saturation.
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4Pi yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision robust knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends on the type of pronominal anaphor as well as on the text genre.
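The precision-over-recall idea behind a salience threshold approach can be sketched as an abstaining resolver: a candidate antecedent is returned only when its salience score is both high and clearly separated from the runner-up, and the anaphor is otherwise left for a later, richer module. The feature weights, thresholds and data structure below are placeholders, not the paper's actual system.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Candidate:
        mention: str
        salience: float        # aggregated score, e.g. from recency, grammatical role, agreement

    def resolve_high_precision(candidates: List[Candidate],
                               threshold: float = 0.7,
                               margin: float = 0.2) -> Optional[str]:
        # Resolve only the 'competence cases'; abstain (return None) on unclear ones,
        # trading recall for precision as a competence-restricted module requires.
        if not candidates:
            return None
        ranked = sorted(candidates, key=lambda c: c.salience, reverse=True)
        best = ranked[0]
        runner_up = ranked[1].salience if len(ranked) > 1 else 0.0
        if best.salience >= threshold and best.salience - runner_up >= margin:
            return best.mention
        return None

    # Example: a clear case is resolved, a close call is passed on to the next module.
    print(resolve_high_precision([Candidate("the report", 0.9), Candidate("the minister", 0.4)]))
    print(resolve_high_precision([Candidate("the report", 0.9), Candidate("the minister", 0.8)]))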
Assessing enhanced knowledge discovery systems (eKDSs) constitutes an intricate issue that is as yet only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data. A qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions of future research are outlined.
This study analyses the labour market effects of fixed-term contracts (FTCs) in West Germany by microeconometric methods using individual and establishment level data. In the first part of the study the role of FTCs in firms’ labour demand is analysed. An econometric investigation of the firms’ reasons for using FTCs focussing on the identification of the link between dismissal protection for permanent contract workers and the firms’ use of FTCs is presented. Furthermore, a descriptive analysis of the role of FTCs in worker and job flows at the firm level is provided. The second part of the study evaluates the short-run effects of being employed on an FTC on working conditions and wages using a large cross-sectional dataset of employees. The final part of the study analyses whether taking up an FTC increases the (permanent contract) employment opportunities in the long-run (stepping stone effect) and whether FTCs affect job finding behaviour of unemployed job searchers. Firstly, an econometric unemployment duration analysis distinguishing between both types of contracts as destination states is performed. Secondly, the effects of entering into FTCs from unemployment on future (permanent contract) employment opportunities are evaluated attempting to account for the sequential decision problem of job searchers.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovasz [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^O(n), SLLL-reduction runs in O(n^(5+epsilon)) bit operations for every epsilon > 0, compared to O(n^(7+epsilon)) for the original LLL and to O(n^(6+epsilon)) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n^3 log n) arithmetic steps.
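For orientation, the sketch below is the textbook LLL algorithm that the segment approach accelerates; it is not the SLLL variant of the paper, it recomputes the Gram-Schmidt data naively after every change, and it uses floating point, so it is only suitable for small, well-conditioned bases.

    import numpy as np

    def gram_schmidt(B):
        # Gram-Schmidt orthogonalization of the rows of B: returns B* and the mu coefficients
        n = B.shape[0]
        Bs = np.zeros(B.shape, dtype=float)
        mu = np.zeros((n, n))
        for i in range(n):
            Bs[i] = B[i].astype(float)
            for j in range(i):
                mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    def lll(basis, delta=0.75):
        # Plain LLL reduction of an integer lattice basis given by the rows of 'basis'
        B = np.array(basis, dtype=np.int64)
        n = B.shape[0]
        k = 1
        while k < n:
            Bs, mu = gram_schmidt(B)
            for j in range(k - 1, -1, -1):                  # size-reduce b_k against b_j
                q = int(round(float(mu[k, j])))
                if q != 0:
                    B[k] -= q * B[j]
                    Bs, mu = gram_schmidt(B)                # naive recomputation keeps mu consistent
            if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
                k += 1                                      # Lovasz condition holds
            else:
                B[[k - 1, k]] = B[[k, k - 1]]               # swap b_{k-1} and b_k
                k = max(k - 1, 1)
        return B

    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))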
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and alpha K alpha^(-1) are conjugate in PSL(2,R), but in general the conjugating element alpha cannot be taken in G or in a finite index Fuchsian extension of G. We will show that in the case of a normal inclusion in a triangle group G these alpha can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
The large conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha-subunit (BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. Most surprisingly, BKalpha–/– mice also did not show any obvious hearing deficits during the first 4 postnatal weeks. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a phenotype similar to that elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, the data already suggest mutation of the human BK-coding slo1 gene as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Dendritic cells (DC) are known to present exogenous protein Ag effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and DC derived from bone marrow, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to Th1 and Th2 cell clones. Pepstatin A also selectively inhibited cathepsin D/E activity among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) did not impair XS52 DC presentation of PPD, nor did they inhibit cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM) the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.