Transverse energy (ET) distributions have been measured for Au+Au collisions at sqrt[s_NN] = 200 GeV by the STAR Collaboration at RHIC. ET is constructed from its hadronic and electromagnetic components, which have been measured separately. ET production for the most central collisions is well described by several theoretical models whose common feature is a large energy density achieved early in the fireball evolution. The magnitude and centrality dependence of ET per charged particle agree well with measurements at lower collision energy, indicating that the growth in ET for larger collision energy results from the growth in particle production. The electromagnetic fraction of the total ET is consistent with a final state dominated by mesons and independent of centrality.
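For reference, the standard back-of-the-envelope link between the measured transverse energy at midrapidity and the early energy density referred to in this abstract is the Bjorken estimate; the relation below is a commonly used general formula, not a result quoted from the abstract.

```latex
% Bjorken estimate linking the measured dE_T/dy at midrapidity to the initial energy
% density (standard relation, shown for orientation only); \tau_0 is the formation
% time and A_\perp the transverse overlap area of the colliding nuclei.
\varepsilon_{\mathrm{Bj}} = \frac{1}{\tau_0 A_\perp}\,\frac{dE_T}{dy}
```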
We report inclusive photon measurements about midrapidity (|y| < 0.5) from 197Au+197Au collisions at sqrt[s_NN] = 130 GeV at RHIC. Photon pair conversions were reconstructed from electron and positron tracks measured with the Time Projection Chamber (TPC) of the STAR experiment. With this method, an energy resolution of Delta E/E ~ 2% at 0.5 GeV has been achieved. Reconstructed photons have also been used to measure the transverse momentum (pt) spectra of pi0 mesons about midrapidity (|y| < 1) via the pi0 -> gamma gamma decay channel. The fractional contribution of the pi0 -> gamma gamma decay to the inclusive photon spectrum decreases by 20% ± 5% between pt = 1.65 GeV/c and pt = 2.4 GeV/c in the most central events, indicating that, relative to pi0 -> gamma gamma decay, the contribution of other photon sources is substantially increasing.
We present STAR measurements of charged hadron production as a function of centrality in Au+Au collisions at sqrt[s_NN] = 130 GeV. The measurements cover a phase space region of 0.2 < pT < 6.0 GeV/c in transverse momentum and -1 < eta < 1 in pseudorapidity. Inclusive transverse momentum distributions of charged hadrons in the pseudorapidity region 0.5 < |eta| < 1 are reported and compared to our previously published results for |eta| < 0.5. No significant difference is seen for inclusive pT distributions of charged hadrons in these two pseudorapidity bins. We measured dN/d eta distributions and truncated mean pT in a region of pT > pT^cut, and studied the results in the framework of participant and binary scaling. No clear evidence is observed for participant scaling of charged hadron yield in the measured pT region. The relative importance of hard scattering processes is investigated through the binary scaling fraction of particle production.
We report on the rapidity and centrality dependence of proton and antiproton transverse mass distributions from 197Au+197Au collisions at sqrt[s_NN] = 130 GeV as measured by the STAR experiment at the Relativistic Heavy Ion Collider (RHIC). Our results are from the rapidity and transverse momentum range of |y| < 0.5 and 0.35 < pt < 1.00 GeV/c. For both protons and antiprotons, transverse mass distributions become more convex from peripheral to central collisions, demonstrating characteristics of collective expansion. The measured rapidity distributions and the mean transverse momenta versus rapidity are flat within |y| < 0.5. Comparisons of our data with results from model calculations indicate that, in order to obtain a consistent picture of the proton (antiproton) yields and transverse mass distributions, the possibility of prehadronic collective expansion may have to be taken into account.
We present data on e+ e- pair production accompanied by nuclear breakup in ultraperipheral gold-gold collisions at a center of mass energy of 200 GeV per nucleon pair. The nuclear breakup requirement selects events at small impact parameters, where higher-order diagrams for pair production should be enhanced. We compare the data with two calculations: one based on the equivalent photon approximation, and the other using lowest-order quantum electrodynamics (QED). The data distributions agree with both calculations, except that the pair transverse momentum spectrum disagrees with the equivalent photon approach. We set limits on higher-order contributions to the cross section.
The transverse mass spectra and midrapidity yields for Xi and Omega hyperons are presented. For the 10% most central collisions, the anti-Xi+/h- ratio increases from Super Proton Synchrotron to Relativistic Heavy Ion Collider energies while the Xi-/h- ratio stays approximately constant. A hydrodynamically inspired model fit to the Xi spectra, which assumes a thermalized source, seems to indicate that these multistrange particles experience a significant transverse flow effect, but are emitted when the system is hotter and the flow is smaller than the values obtained from a combined fit to pi, K, p, and Lambda.
Measurements of the production of forward high-energy pi 0 mesons from transversely polarized proton collisions at sqrt[s]=200 GeV are reported. The cross section is generally consistent with next-to-leading order perturbative QCD calculations. The analyzing power is small at xF below about 0.3, and becomes positive and large at higher xF, similar to the trend in data at sqrt[s] <= 20 GeV. The analyzing power is in qualitative agreement with perturbative QCD model expectations. This is the first significant spin result seen for particles produced with pT>1 GeV/c at a polarized proton collider.
Transverse mass and rapidity distributions for charged pions, charged kaons, protons, and antiprotons are reported for sqrt[s_NN] = 200 GeV p+p and Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC). Chemical and kinetic equilibrium model fits to our data reveal strong radial flow and a long duration from chemical to kinetic freeze-out in central Au+Au collisions. The chemical freeze-out temperature appears to be independent of initial conditions at RHIC energies.
We report results on rho(770)0 -> pi+ pi- production at midrapidity in p+p and peripheral Au+Au collisions at sqrt[s_NN] = 200 GeV. This is the first direct measurement of rho(770)0 -> pi+ pi- in heavy-ion collisions. The measured rho0 peak in the invariant mass distribution is shifted by ~40 MeV/c2 in minimum bias p+p interactions and ~70 MeV/c2 in peripheral Au+Au collisions. The rho0 mass shift is dependent on transverse momentum and multiplicity. The modifications of the rho0 meson mass, width, and shape due to phase space and dynamical effects are discussed.
We report the first observations of the first harmonic (directed flow, v1) and the fourth harmonic (v4), in the azimuthal distribution of particles with respect to the reaction plane in Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC). Both measurements were done taking advantage of the large elliptic flow (v2) generated at RHIC. From the correlation of v2 with v1 it is determined that v2 is positive, or in-plane. The integrated v4 is about a factor of 10 smaller than v2. For the sixth (v6) and eighth (v8) harmonics upper limits on the magnitudes are reported.
We present STAR measurements of the azimuthal anisotropy parameter v2 and the binary-collision scaled centrality ratio RCP for kaons and lambdas (Lambda + anti-Lambda) at midrapidity in Au+Au collisions at sqrt[s_NN] = 200 GeV. In combination, the v2 and RCP particle-type dependencies contradict expectations from partonic energy loss followed by standard fragmentation in vacuum. We establish pT ~ 5 GeV/c as the value where the centrality dependent baryon enhancement ends. The K0S and Lambda + anti-Lambda v2 values are consistent with expectations of constituent-quark-number scaling from models of hadron formation by parton coalescence or recombination.
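The constituent-quark-number scaling mentioned in the last sentence is usually written as the ansatz below (a standard relation quoted for reference, not a result from this paper):

```latex
% Constituent-quark-number scaling ansatz (standard form): a hadron carrying n
% constituent quarks inherits the quark-level anisotropy evaluated at p_T/n.
v_2^{h}(p_T) \approx n \, v_2^{q}\!\left(\frac{p_T}{n}\right),
\qquad n = 2 \ (\text{mesons}), \quad n = 3 \ (\text{baryons}).
```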
Hackethal and Schmidt (2003) criticize a large body of literature on the financing of corporate sectors in different countries that questions some of the distinctions conventionally drawn between financial systems. Their criticism is directed against the use of net flows of finance and they propose alternative measures based on gross flows which they claim re-establish conventional distinctions. This paper argues that their criticism is invalid and that their alternative measures are misleading. There are real issues raised by the use of aggregate data but they are not the ones discussed in Hackethal and Schmidt’s paper. JEL Classification: G30
In contrast to the United States and the United Kingdom, little empirical work exists about the distributional characteristics of appraisal-based real estate returns outside these countries. The purpose of this study is to fill this gap by focusing on Germany. In line with other studies, this paper offers an extensive investigation into the distribution of German real estate returns and compares them with U.S. and U.K. data for the same period. Furthermore, the comovements with bonds and stocks are also examined. In essence, the distributional characteristics of German real estate are comparable to those for the U.S. and U.K.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed, and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that less able programmers cannot achieve. By setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. With this incentive they will forgo the immediate rewards they could earn in software companies that produce proprietary software and restrict access to the source code of their product. Whenever institutional arrangements are in place that enable the acquisition of such a signal and its subsequent conversion into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
In this paper, we calculate a transaction-based price index for apartments in Paris (France). The heterogeneous character of real estate is taken into account using a hedonic model. The functional form is specified using a general Box-Cox function. The database covers 84,686 transactions in the housing market from 1990:01 to 1999:12, which is one of the largest samples ever used in comparable studies. Low correlations of the price index with stock and bond indices (first differences) indicate diversification benefits from the inclusion of real estate in a mixed-asset portfolio. JEL C43, C51, O18, R20.
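To make the hedonic time-dummy idea concrete, the sketch below estimates a price index from a log-linear hedonic regression (the log-log special case of the Box-Cox form mentioned above). The data, characteristics, and column layout are invented for illustration and are not the authors' specification.

```python
# Minimal sketch of a hedonic time-dummy price index (log-linear special case of a
# Box-Cox hedonic model). All data below are simulated; this is not the paper's model.
import numpy as np

def hedonic_index(log_price, chars, period):
    """log_price: (n,) log transaction prices; chars: (n,k) hedonic characteristics
    (e.g. log living area); period: (n,) integer period labels 0..T-1.
    Returns a price index (base period = 100) built from the time-dummy coefficients."""
    n = len(log_price)
    T = int(period.max()) + 1
    dummies = np.zeros((n, T - 1))
    for t in range(1, T):                       # base period 0 is omitted
        dummies[period == t, t - 1] = 1.0
    X = np.column_stack([np.ones(n), chars, dummies])
    beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    delta = beta[-(T - 1):]                     # time-dummy coefficients
    return 100.0 * np.exp(np.concatenate([[0.0], delta]))

# toy usage with simulated transactions
rng = np.random.default_rng(0)
n = 500
period = rng.integers(0, 4, n)
chars = np.column_stack([np.log(rng.uniform(20, 120, n))])   # log living area
log_price = 8.0 + 1.0 * chars[:, 0] + 0.03 * period + rng.normal(0, 0.1, n)
print(hedonic_index(log_price, chars, period))               # roughly 100, 103, 106, 109
```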
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. As a consequence tests reported in the literature may suffer from the problem that in a time-series framework the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13 Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
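The discretization effect discussed in this abstract can be illustrated with a small Monte Carlo experiment: delta-hedge a European call at a finite number of rebalancing dates under Black-Scholes dynamics and look at the mean terminal hedging error. The sketch below is only an illustration of that effect under arbitrary example parameters; it is not the test procedure analyzed in the paper.

```python
# Monte Carlo sketch of the discrete delta-hedging error for a European call under
# Black-Scholes dynamics. Parameter values are arbitrary illustrations.
import numpy as np
from scipy.stats import norm

def bs_delta(S, K, tau, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def bs_price(S, K, tau, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def mean_discrete_hedging_error(S0=100.0, K=100.0, T=0.5, r=0.03, sigma=0.2,
                                mu=0.10, n_steps=12, n_paths=50_000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    V = np.full(n_paths, bs_price(S0, K, T, r, sigma))    # value of the hedge portfolio
    delta = np.full(n_paths, bs_delta(S0, K, T, r, sigma))
    for i in range(n_steps):
        cash = V - delta * S                              # self-financing rebalancing
        z = rng.standard_normal(n_paths)
        S = S * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        V = cash * np.exp(r * dt) + delta * S
        tau = T - (i + 1) * dt
        if tau > 0:
            delta = bs_delta(S, K, tau, r, sigma)         # rebalance until maturity
    payoff = np.maximum(S - K, 0.0)
    return float(np.mean(V - payoff))                     # mean terminal hedging error

# the abstract argues this mean is positive for a discrete hedge in the BS model
print(mean_discrete_hedging_error())
```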
In a framework closely related to Diamond and Rajan (2001) we characterize different financial systems and analyze the welfare implications of different LOLR-policies in these financial systems. We show that in a bank-dominated financial system it is less likely that a LOLR-policy that follows the Bagehot rules is preferable. In financial systems with rather illiquid assets a discretionary individual liquidity assistance might be welfare improving, while in market-based financial systems, with rather liquid assets in the banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences between financial systems into account is misguided. JEL Classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size, not just a single market price of jump risk. Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed-form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us a lot about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending and the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL Classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) presumably engaged in coordinated actions to manage distress mergers. The data comprise 3640 banks and 1484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of the economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without affecting systemic stability adversely.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union’s decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which pre-adopted such standards before 2005, but investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against the conclusion that reporting under internationally accepted standards, per se, lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm’s expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only publicly available information at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
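One simple member of the class of implied cost-of-capital estimators this line of work builds on is the PEG-ratio estimate attributed to Easton (2004): back the expected return out of the current price and two consecutive analyst EPS forecasts. The sketch below illustrates only that idea with hypothetical numbers; it is not the customized estimation method developed in the study.

```python
# PEG-ratio implied cost of equity (illustrative; attributed to Easton 2004):
# r = sqrt((eps2 - eps1) / P0), requiring eps2 > eps1 > 0.

def peg_implied_cost_of_equity(price, eps1, eps2):
    """price: current share price P0; eps1, eps2: consensus EPS forecasts for t+1, t+2."""
    if not (eps2 > eps1 > 0):
        raise ValueError("PEG estimate requires eps2 > eps1 > 0")
    return ((eps2 - eps1) / price) ** 0.5

# hypothetical example: price 40, forecast EPS 2.00 and 2.36 -> roughly 9.5%
print(f"{peg_implied_cost_of_equity(40.0, 2.00, 2.36):.3f}")
```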
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It will be argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures supported.
Tractable hedging - an implementation of robust hedging strategies: [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion–type models including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies which are defined as the sum over Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge which consists of two Gaussian hedges for one long and one short position in convex claims which have to be chosen optimally.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30 A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. The empirical results, which use an estimation technique for determining gross flows of funds in those cases in which empirical data are not available, are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way, and moreover in a way which is largely in line with the general view of the differences between the financial systems of the countries covered in the present paper.
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the strangeness production as a function of centre of mass energy and of the parameters of the source. We have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed strangeness hadronic phase space under-saturation. We show that, in this energy range, the use of hadron yields at midrapidity instead of in full phase space artificially enhances strangeness production and could lead to incorrect conclusions as far as the occurrence of full chemical equilibrium is concerned. In addition to the basic model with an extra strange quark non-equilibrium parameter, we have tested three more schemes: a two-component model superimposing hadrons coming out of single nucleon-nucleon interactions to those emerging from large fireballs at equilibrium, a model with local strangeness neutrality and a model with strange and light quark non-equilibrium parameters. The behaviour of the source parameters as a function of colliding system and collision energy is studied. The description of strangeness production entails a non-monotonic energy dependence of strangeness saturation parameter gamma_S with a maximum around 30A GeV. We also present predictions of the production rates of still unmeasured hadrons including the newly discovered Theta^+(1540) pentaquark baryon.
We suggest that the fluctuations of strange hadron multiplicity could be sensitive to the equation of state and microscopic structure of strongly interacting matter created at the early stage of high energy nucleus-nucleus collisions. They may serve as an important tool in the study of the deconfinement phase transition. We predict, within the statistical model of the early stage, that the ratio of properly filtered fluctuations of strange to non-strange hadron multiplicities should have a non-monotonic energy dependence with a minimum in the mixed phase region.
The data on mT spectra of K0S, K+ and K- mesons produced in all inelastic p+p and p+pbar interactions in the energy range sqrt(s_NN) = 4.7-1800 GeV are compiled and analyzed. The spectra are parameterized by a single exponential function, dN/(m_T*dm_T) = C exp(-m_T/T), and the inverse slope parameter T is the main object of study. The T parameter is found to be similar for K0S, K+ and K- mesons. It increases monotonically with collision energy from T ~ 30 MeV at sqrt(s_NN) = 4.7 GeV to T ~ 220 MeV at sqrt(s_NN) = 1800 GeV. The T parameter measured in p+p and p+pbar interactions is significantly lower than the corresponding parameter obtained for central Pb+Pb collisions at all studied energies. Also the shape of the energy dependence of T is different for central Pb+Pb collisions and p+p(pbar) interactions.
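The single-exponential parameterization quoted above can be fitted to a measured spectrum by a simple linear fit in log space. The sketch below uses made-up data points purely to make the example runnable; it is not the paper's fitting procedure.

```python
# Sketch of extracting the inverse slope parameter T by fitting
# dN/(m_T dm_T) = C exp(-m_T/T) to an m_T spectrum (toy data only).
import numpy as np

def fit_inverse_slope(m_t, spectrum):
    """Least-squares fit of log[dN/(m_T dm_T)] = log C - m_T/T; returns (C, T in GeV)."""
    slope, intercept = np.polyfit(m_t, np.log(spectrum), 1)
    return np.exp(intercept), -1.0 / slope

# toy spectrum generated with T = 0.160 GeV and 5% scatter
rng = np.random.default_rng(0)
m_t = np.linspace(0.5, 1.5, 11)                      # GeV
spectrum = 100.0 * np.exp(-m_t / 0.160) * rng.normal(1.0, 0.05, m_t.size)
C, T = fit_inverse_slope(m_t, spectrum)
print(f"T = {1000 * T:.0f} MeV")                     # recovers roughly 160 MeV
```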
We propose a method to experimentally study the equation of state of strongly interacting matter created at the early stage of nucleus-nucleus collisions. The method exploits the relation between relative entropy and energy fluctuations and the equation of state. As a measurable quantity, the ratio of properly filtered multiplicity to energy fluctuations is proposed. Within a statistical approach to the early stage of nucleus-nucleus collisions, the fluctuation ratio manifests a non-monotonic collision energy dependence with a maximum in the domain where the onset of deconfinement occurs.
Production of Lambda and anti-Lambda hyperons was measured in central Pb-Pb collisions at 40, 80, and 158 A GeV beam energy on a fixed target. Transverse mass spectra and rapidity distributions are given for all three energies. The Lambda/pi ratio at mid-rapidity and in full phase space shows a pronounced maximum between the highest AGS and 40 A GeV SPS energies, whereas the anti-Lambda/pi ratio exhibits a monotonic increase. PACS numbers: 25.75.-q
Fluctuations of charged particle number are studied in the canonical ensemble. In the infinite volume limit the fluctuations in the canonical ensemble are different from the fluctuations in the grand canonical one. Thus, the well-known equivalence of both ensembles for the average quantities does not extend to the fluctuations. In view of the possible relevance of the results for the analysis of fluctuations in nuclear collisions at high energies, the role of the limited kinematical acceptance is studied.
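The difference the abstract points to can be illustrated with a toy Monte Carlo in which exact charge conservation is imposed by conditioning independent Poisson multiplicities on zero net charge; the mean multiplicity and event count below are arbitrary choices for the illustration, not values from the paper.

```python
# Toy Monte Carlo contrasting grand-canonical and canonical multiplicity fluctuations.
# N+ and N- are independent Poisson variables (grand-canonical picture); the canonical
# ensemble with exactly conserved net charge is mimicked by keeping only events with
# N+ - N- = 0, which suppresses the scaled variance of N-.
import numpy as np

rng = np.random.default_rng(0)
mean = 20.0
n_events = 500_000
n_plus = rng.poisson(mean, n_events)
n_minus = rng.poisson(mean, n_events)

def scaled_variance(x):
    return x.var() / x.mean()

gce = scaled_variance(n_minus)             # ~1 for a Poisson distribution
mask = n_plus == n_minus                   # enforce net charge Q = 0
ce = scaled_variance(n_minus[mask])        # noticeably below 1
print(f"GCE: {gce:.2f}   CE (Q=0): {ce:.2f}")
```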
Report from NA49
(2004)
The most recent data of NA49 on hadron production in nuclear collisions at CERN SPS energies are presented. Anomalies in the energy dependence of pion and kaon production in central Pb+Pb collisions are observed. They suggest that the onset of deconfinement is located at about 30 AGeV. Large multiplicity and transverse momentum fluctuations are measured for collisions of intermediate mass systems at 158 AGeV. The need for a new experimental programme at the CERN SPS is underlined.
The transverse mass mt distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity and in the range 0 < mt-m < 1.0 (1.5) GeV/c2 for minimum bias collisions at 158 A GeV and for central collisions at 40 and 80 A GeV beam energies. The rapidity density dn/dy, inverse slope parameter T and mean transverse mass <mt> derived from the mt distributions, as well as the coalescence parameter B2, are studied as a function of the incident energy and the collision centrality. The deuteron mt spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality, reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mt. Finally, B2 shows an increase with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
Preliminary results on pion-pion Bose-Einstein correlations in central Pb+Pb collisions measured by the NA49 experiment are presented. Rapidity as well as transverse momentum dependence of the HBT-radii are shown for collisions at 20, 30, 40, 80, and 158 AGeV beam energy. Including results from AGS and RHIC experiments only a weak energy dependence of the radii is observed. Based on hydrodynamical models parameters like lifetime and geometrical radius of the source are derived from the dependence of the radii on transverse momentum.
Event-by-event fluctuations of particle ratios in central Pb + Pb collisions at 20 to 158 AGeV
(2004)
In the vicinity of the QCD phase transition, critical fluctuations have been predicted to lead to non-statistical fluctuations of particle ratios, depending on the nature of the phase transition. Recent results of the NA49 energy scan program show a sharp maximum of the ratio of K+ to Pi+ yields in central Pb+Pb collisions at beam energies of 20-30 AGeV. This observation has been interpreted as an indication of a phase transition at low SPS energies. We present first results on event-by-event fluctuations of the kaon to pion and proton to pion ratios at beam energies close to this maximum.
Results are presented on event-by-event electric charge fluctuations in central Pb+Pb collisions at 20, 30, 40, 80 and 158 AGeV. The observed fluctuations are close to those expected for a gas of pions correlated by global charge conservation only. These fluctuations are considerably larger than those calculated for an ideal gas of deconfined quarks and gluons. The present measurements do not necessarily exclude reduced fluctuations from a quark-gluon plasma because these might be masked by contributions from resonance decays.
System size and centrality dependence of the balance function in A + A collisions at √sNN = 17.2 GeV
(2004)
Electric charge correlations were studied for p+p, C+C, Si+Si and centrality selected Pb+Pb collisions at sqrt(s_NN) = 17.2 GeV with the NA49 large acceptance detector at the CERN-SPS. In particular, long range pseudo-rapidity correlations of oppositely charged particles were measured using the Balance Function method. The width of the Balance Function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
The system size dependence of multiplicity fluctuations of charged particles produced in nuclear collisions at 158 A GeV was studied in the NA49 experiment at CERN. The results indicate a non-monotonic dependence of the scaled variance of the multiplicity distribution on system size, with a maximum for semi-peripheral Pb+Pb interactions with a number of projectile participants of about 35. This effect is not observed in the string-hadronic model of nuclear collisions HIJING.
In the early Nineties the Hague Conference on International Private Law, on the initiative of the United States, started negotiations on a Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters (the "Hague Convention"). In October 1999 the Special Commission on duty presented a preliminary text, which was modelled quite closely on the European Convention on Jurisdiction and Enforcement of Judgments in Civil and Commercial Matters (the "Brussels Convention"). The latter was concluded between the then 6 Member States of the EEC in Brussels in 1968 and amended several times on occasion of the entry of new Member States. In 2000, after the Treaty of Amsterdam altered the legal basis for judicial co-operation in civil matters in Europe, it was transformed into an EC Regulation (the "Brussels I Regulation"). The 1999 draft of the Hague Convention was heavily criticized by the USA and other states for its European approach of a double convention, regulating not only the recognition and enforcement of judgments, but at the same time the extent of and the limits to jurisdiction to adjudicate in international cases. During a diplomatic conference in June 2001 a second draft was presented which contained alternative versions of several articles and thus reflected the existing dissent more than it resembled a draft convention. Difficulties in reaching a consensus remained, especially with regard to activity based jurisdiction, intellectual property, consumer rights and employee rights. In addition, the appropriateness of the whole draft was questioned in light of the problems posed by the de-territorialization of relevant conduct through the advent of the Internet. In April 2002 it was decided to continue negotiations on an informal level on the basis of a nucleus approach. The core consensus as identified by a working group, however, was not very broad. The experts involved came to the conclusion that the project should be limited to choice of court agreements. In March 2004 a draft was presented which sets out its aims as follows: "The objective of the Convention is to make exclusive choice of court agreements as effective as possible in the context of international business. The hope is that the Convention will do for choice of court agreements what the New York Convention of 1958 has done for arbitration agreements." In April 2004 the Special Commission of the Hague Conference adopted a Draft "Convention on Exclusive Choice of Court Agreements", which according to its Art. 2 No. 1 a) is not applicable to choice of court agreements "to which a natural person acting primarily for personal, family or household purposes (a consumer) is a party". The broader project of a global judgments convention thus seems to be abandoned, or at least to be postponed for an unlimited time period. There are - of course - several reasons why the Hague Judgments project failed. Samuel Baumgartner has described an important one as the "Justizkonflikt" between the United States and Europe or, more specifically, Germany. Within the context of the general topic of this conference, that is (international) jurisdiction for human rights, in the remainder of this presentation I shall elaborate on the socio-cultural aspects of the impartiality of judgments and their enforcement on a global scale.
In April 2003 I commented on the European Commission’s Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final]. While the main argument of that paper, i.e. the common neglect of the inherent interrelation between the further harmonisation of substantive contract law by directives or through an optional European Civil Code on the one hand and the modernisation of conflict rules for consumer contracts in Art. 5 Rome Convention on the other hand, remains a pressing issue, and as the German Law Journal continues its efforts in offering timely and critical analysis of consumer law issues, there is a variety of recent developments worth noting.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in-situ observations from the high altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, the denitrification has only a moderate impact (<=10%) on the simulated vortex average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background: In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to such complex structures as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. signal transduction) or spatial effects in metabolic processes, it would be helpful to use simulations of such reaction-diffusion systems. Pattern formation has been extensively studied in two dimensions. However, the extension to three-dimensional reaction-diffusion systems poses some challenges to the visualization of the processes being simulated. Scope of the thesis: The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of existing literature about 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver), based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro, has to be developed. In a subsequent step, this algorithm has to be optimized for high performance. A prototypic 3D configuration tool for the initial state of the system has to be developed. This basic tool should enable the user to define and store the location of molecules, membranes and channels within the reaction space of user-defined size. A suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypic implementation of a suitable reaction space visualization component for the display of the simulation results. In particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests. The simulation, configuration and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA. Due to this international cooperation the thesis will be written in English.
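As an illustration of the kind of explicit PDE solver the thesis outline describes, a minimal 3D reaction-diffusion step for two species coupled by a single reaction might look as follows. The grid size, rate and diffusion constants, and boundary handling are assumptions made for this sketch, not the thesis implementation.

```python
# Minimal sketch of an explicit finite-difference step for a 3D reaction-diffusion
# system (two species consumed in an A + B -> C style reaction). Illustrative only.
import numpy as np

def laplacian_3d(u, dx):
    """7-point discrete Laplacian with zero-flux (reflecting) boundaries."""
    p = np.pad(u, 1, mode="edge")
    return (p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1] +
            p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1] +
            p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2] - 6.0 * u) / dx**2

def step(a, b, c, dt=0.01, dx=1.0, Da=1.0, Db=0.5, Dc=0.1, k=0.2):
    """One explicit Euler step of da/dt = Da*Lap(a) - k*a*b, etc. (k: reaction rate)."""
    rate = k * a * b
    a = a + dt * (Da * laplacian_3d(a, dx) - rate)
    b = b + dt * (Db * laplacian_3d(b, dx) - rate)
    c = c + dt * (Dc * laplacian_3d(c, dx) + rate)
    return a, b, c

# toy run on a 20^3 grid with A and B initially separated along the x axis
n = 20
a = np.zeros((n, n, n)); a[: n // 2] = 1.0
b = np.zeros((n, n, n)); b[n // 2 :] = 1.0
c = np.zeros((n, n, n))
for _ in range(200):
    a, b, c = step(a, b, c)
print("total C produced:", c.sum())
```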
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the chemical equilibration of the system as a function of center of mass energy and of the parameters of the source. Additionally, we have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed strangeness hadronic phase space under-saturation.
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4Pi yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high precision robust knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
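To make the salience threshold idea concrete, the toy sketch below scores candidate antecedents with a few salience factors and abstains when the best score does not clear a threshold, trading recall for precision. The factors, weights, and threshold are invented for illustration and are not those of the systems compared in the paper.

```python
# Toy sketch of threshold-based abstention for pronominal anaphor resolution.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    sentence_distance: int     # sentences between candidate and pronoun
    is_subject: bool           # grammatical role of the candidate
    mention_count: int         # prior mentions of the entity

def salience(c: Candidate) -> float:
    score = 2.0 if c.is_subject else 0.0
    score += 1.0 * min(c.mention_count, 3)
    score -= 1.5 * c.sentence_distance
    return score

def resolve(pronoun: str, candidates: list[Candidate], threshold: float = 2.0):
    """Return the highest-salience candidate, or None (abstain) below the threshold."""
    best = max(candidates, key=salience, default=None)
    if best is None or salience(best) < threshold:
        return None
    return best.text

cands = [Candidate("the parser", 0, True, 2), Candidate("the corpus", 1, False, 1)]
print(resolve("it", cands))    # -> "the parser"
```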
Assessing enhanced knowledge discovery systems (eKDSs) is an intricate issue that is, as yet, only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data; a qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions for future research are outlined.
This study analyses the labour market effects of fixed-term contracts (FTCs) in West Germany by microeconometric methods using individual and establishment level data. In the first part of the study the role of FTCs in firms’ labour demand is analysed. An econometric investigation of the firms’ reasons for using FTCs focussing on the identification of the link between dismissal protection for permanent contract workers and the firms’ use of FTCs is presented. Furthermore, a descriptive analysis of the role of FTCs in worker and job flows at the firm level is provided. The second part of the study evaluates the short-run effects of being employed on an FTC on working conditions and wages using a large cross-sectional dataset of employees. The final part of the study analyses whether taking up an FTC increases the (permanent contract) employment opportunities in the long-run (stepping stone effect) and whether FTCs affect job finding behaviour of unemployed job searchers. Firstly, an econometric unemployment duration analysis distinguishing between both types of contracts as destination states is performed. Secondly, the effects of entering into FTCs from unemployment on future (permanent contract) employment opportunities are evaluated attempting to account for the sequential decision problem of job searchers.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovasz [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^(O(n)), SLLL-reduction runs in O(n^(5+epsilon)) bit operations for every epsilon > 0, compared to O(n^(7+epsilon)) for the original LLL and to O(n^(6+epsilon)) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n^3 log n) arithmetic steps.
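For orientation, the classical (unsegmented) LLL procedure that SLLL accelerates can be sketched as below. This is the textbook algorithm with delta = 0.75, using floating point and full Gram-Schmidt recomputation purely for readability; it is not the segment-based variant proposed in the paper.

```python
# Textbook LLL reduction (delta = 0.75), shown only as a point of reference for SLLL.
import numpy as np

def gram_schmidt(B):
    """Return GSO vectors B* and coefficients mu with B[i] = B*[i] + sum_j mu[i,j] B*[j]."""
    n = len(B)
    Bs = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll(basis, delta=0.75):
    B = np.array(basis, dtype=float)
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size reduction of b_k against b_j
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1                               # Lovasz condition holds, move on
        else:
            B[[k, k - 1]] = B[[k - 1, k]]        # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))   # small classical test basis
```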
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and alpha K alpha^(-1) are conjugate in PSL(2,R), but in general the conjugating element alpha cannot be taken in G or in a finite-index Fuchsian extension of G. We will show that in the case of a normal inclusion in a triangle group G these alpha can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
The large conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha-(BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. During the first 4 postnatal weeks also, BKalpha–/– mice most surprisingly did not show any obvious hearing deficits. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a similar phenotype as elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, data already suggest human BK-coding slo1 gene mutation as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Dendritic cells (DC) are known to present exogenous protein Ag effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and DC derived from bone marrow, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to Th1 and Th2 cell clones. Pepstatin A also inhibited cathepsin D/E activity selectively among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) did not impair XS52 DC presentation of PPD, nor did they inhibit cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM) the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background: In rat, deafferentation of one labyrinth (unilateral labyrinthectomy) results in a characteristic syndrome of ocular and motor postural disorders (e.g., barrel rotation, circling behavior, and spontaneous nystagmus). Behavioral recovery (e.g., diminished symptoms), encompassing 1 week after unilateral labyrinthectomy, has been termed vestibular compensation. Evidence suggesting that the histamine H3 receptor plays a key role in vestibular compensation comes from studies indicating that betahistine, a histamine-like drug that acts as both a partial histamine H1 receptor agonist and an H3 receptor antagonist, can accelerate the process of vestibular compensation. Results: Expression levels for the histamine H3 receptor (total) as well as three isoforms which display variable lengths of the third intracellular loop of the receptor were analyzed using in situ hybridization on brain sections containing the rat medial vestibular nucleus after unilateral labyrinthectomy. We compared these expression levels to H3 receptor binding densities. Total H3 receptor mRNA levels (detected by oligo probe H3X) as well as mRNA levels of the three receptor isoforms studied (detected by oligo probes H3A, H3B, and H3C) showed a pattern of increase, which was bilaterally significant at 24 h post-lesion for both H3X and H3C, followed by significant bilateral decreases in medial vestibular nuclei occurring 48 h (H3X and H3B) and 1 week post-lesion (H3A, H3B, and H3C). Expression levels of H3B were an exception to the aforementioned pattern, with significant decreases already detected at 24 h post-lesion. Coinciding with the decreasing trends in H3 receptor mRNA levels was an observed increase in H3 receptor binding densities occurring in the ipsilateral medial vestibular nuclei 48 h post-lesion. Conclusion: Progressive recovery of the resting discharge of the deafferentated medial vestibular nuclei neurons results in functional restoration of the static postural and oculomotor deficits, usually occurring within a time frame of 48 hours in rats. Our data suggest that the H3 receptor may be an essential part of pre-synaptic mechanisms required for reestablishing resting activities 48 h after unilateral labyrinthectomy.
Western cultures have witnessed a tremendous cultural and social transformation of sexuality in the years since the sexual revolution. Apart from a few public debates and scandals, the process has moved along gradually and quietly. Yet its real and symbolic effects are probably much more consequential than those generated by the sexual revolution of the sixties. Sigusch refers to the broad-based recoding and reassessment of the sexual sphere during the eighties and nineties as the "neosexual revolution". The neosexual revolution is dismantling the old patterns of sexuality and reassembling them anew. In the process, dimensions, intimate relationships, preferences and sexual fragments emerge, many of which had previously been submerged, unnamed or simply nonexistent. In general, sexuality has lost much of its symbolic meaning as a cultural phenomenon. Sexuality is no longer the great metaphor for pleasure and happiness, nor is it so greatly overestimated as it was during the sexual revolution. It is now widely taken for granted, much like egotism or motility. Whereas sex was once mystified in a positive sense, as ecstasy and transgression, it has now taken on a negative mystification characterized by abuse, violence and deadly infection. While the old sexuality was based primarily upon sexual instinct, orgasm and the heterosexual couple, neosexualities revolve predominantly around gender difference, thrills, self-gratification and prosthetic substitution. From the vast number of interrelated processes from which neosexualities emerge, three empirically observable phenomena have been selected for discussion here: the dissociation of the sexual sphere, the dispersion of sexual fragments and the diversification of intimate relationships. The outcome of the neosexual revolution may be described as "lean sexuality" and "self-sex".
Background: Common warts (verrucae vulgares) are human papilloma virus (HPV) infections with a high incidence and prevalence, most often affecting hands and feet, and able to impair quality of life. About 30 different therapeutic regimens described in the literature reveal the lack of a single standout strategy. Recent publications showed positive results of photodynamic therapy (PDT) with 5-aminolevulinic acid (5-ALA) in the treatment of HPV-induced skin diseases, especially warts, using visible light (VIS) to stimulate an absorption band of endogenously formed protoporphyrin IX. Additional experience with adding water-filtered infrared A (wIRA) during 5-ALA-PDT revealed positive effects. Aim of the study: First prospective randomised controlled blind study including PDT and wIRA in the treatment of recalcitrant common hand and foot warts. Comparison of "5-ALA cream (ALA) vs. placebo cream (PLC)" and "irradiation with visible light and wIRA (VIS+wIRA) vs. irradiation with visible light alone (VIS)". Methods: Pre-treatment with keratolysis (salicylic acid) and curettage. PDT treatment: topical application of 5-ALA (Medac) in "unguentum emulsificans aquosum" vs. placebo; irradiation: combination of VIS and a large amount of wIRA (Hydrosun® radiator type 501, 4 mm water cuvette, water-filtered spectrum 590-1400 nm, contact-free, typically painless) vs. VIS alone. Post-treatment with retinoic acid ointment. One to three therapy cycles every 3 weeks. Main variable of interest: "percent change of total wart area of each patient over time" (18 weeks). Global judgement by patient and by physician and subjective rating of feeling/pain (visual analogue scales). 80 patients with therapy-resistant common hand and foot warts were assigned randomly to one of the four therapy groups, with comparable numbers of warts at comparable sites in all groups. Results: The individual total wart area decreased during 18 weeks in group 1 (ALA+VIS+wIRA) and in group 2 (PLC+VIS+wIRA) significantly more than in both groups without wIRA (group 3 (ALA+VIS) and group 4 (PLC+VIS)): medians and interquartile ranges: -94% (-100%/-84%) vs. -99% (-100%/-71%) vs. -47% (-75%/0%) vs. -73% (-92%/-27%). After 18 weeks the two groups with wIRA differed remarkably from the two groups without wIRA: 42% vs. 7% completely cured patients; 72% vs. 34% vanished warts. Global judgement by patient and by physician and subjective rating of feeling were much better in the two groups with wIRA than in the two groups without wIRA. Conclusions: The complete treatment scheme for hand and foot warts described above (keratolysis, curettage, PDT treatment, irradiation with VIS+wIRA, retinoic acid ointment; three therapy cycles every 3 weeks) proved to be effective. Within this treatment scheme, wIRA as a non-invasive and painless treatment modality proved to be an important, effective factor, while photodynamic therapy with 5-ALA in the described form did not contribute recognisably to clinical improvement, neither alone (without wIRA) nor in combination with wIRA. For the future treatment of warts an improved scheme is proposed: one treatment cycle (keratolysis, curettage, wIRA, without PDT) once a week for six to nine weeks. © 2004 Fuchs et al; licensee German Medical Science. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL: http://www.egms.de/en/gms/volume2.shtml
We present an overview of the mathematics underlying the quantum Zeno effect. Classical functional-analytic results are put into perspective and compared with more recent ones. This yields some new insights into the mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated with the question of existence of the Zeno dynamics. On the grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS-Klassifikation: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T. See the corresponding presentation: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
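For orientation, the heuristic mechanism that the surveyed theorems make rigorous can be sketched in one line (my addition, a textbook-level, formal computation, not a statement from the paper): projecting back onto the initial state after free evolution over intervals t/n, repeated n times, freezes the evolution as n grows.

```latex
% Heuristic short-time sketch of the quantum Zeno effect (illustrative only):
% survival probability after n projective measurements at times t/n, 2t/n, ..., t.
P_n(t) \;=\; \bigl|\langle\psi\,|\,e^{-iHt/n}\,|\,\psi\rangle\bigr|^{2n}
       \;=\; \Bigl(1 - \tfrac{t^2}{n^2}\,(\Delta H)_\psi^2 + O(n^{-3})\Bigr)^{\!n}
       \;\xrightarrow[\;n\to\infty\;]{}\; 1,
```

where $(\Delta H)_\psi^2$ denotes the energy variance in $\psi$; making such expansions legitimate (domain questions for $H$ and $H^2$, existence of the limiting Zeno dynamics) is exactly the kind of precondition analysed in the paper.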
We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS-Klassifikation: 03.65.Xp, 05.30.-d, 02.30. See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" - http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1167/
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Terahertz (THz) waves could not be generated except by very large systems, such as free-electron lasers, until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1]. The first method, using the Auston switch, could generate frequencies up to 1 THz [2]. Since then, efforts to extend the frequency limit by combining antennas for generation and detection reached several THz [3, 4]. The technique has continued to develop, gradually filling in the so-called "THz gap". At the same time, much research has aimed at increasing the output power [5-7]. In the 1990s, a major advance in the accessible frequency band came from non-linear optical methods [8-11]. These techniques drastically expanded the frequency region and recently enabled measurements up to 41 THz [12]. In parallel, other approaches have yielded new generation and detection methods, for CW-THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at low temperatures of about 60 K [20-22]. This research attracts much attention because, owing to its low cost and ease of operation, it could be the breakthrough that makes THz techniques widespread in industry as well as in research. The development of the THz field has naturally been helped by short-pulse laser technology: with the appearance of stable Ti:sapphire lasers and high-power chirped pulse amplification (CPA) lasers in place of dye lasers, much effort has been concentrated on pulse compression and amplification techniques [23]. Viewed from the application side, THz techniques have come into the limelight as a promising measurement method. The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, been promoting the practical use of the technique in medicine and pharmaceutical science [24-27]. It is also known that absorption lines of light polar molecules lie in this region; therefore, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, of the refractive index of thin films, and of object shapes as radar, indicate that the technique has a wide range of applications [33-37]. I believe it is worth attempting to apply it to the steel-making industry because of its unique advantages. The THz wavelength range of 30-300 µm is long enough to be insensitive to the surface roughness of steel products yet short enough to allow detection with sub-millimetre precision, as required for remote surface inspection. There is also the possibility of measuring the thickness or dielectric constants of relatively highly conductive materials, thanks to the high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, THz measurements could be applicable at high temperature, since they are less influenced by thermal radiation than visible and infrared light. These ideas motivated me to start this THz work.
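As a quick, purely illustrative check of the quoted wavelength range (my addition, not part of the thesis), converting 30-300 µm to frequency shows that it corresponds to roughly 1-10 THz:

```python
# Convert the quoted THz wavelength range (30-300 micrometres) to frequency.
# Illustrative only; c is the speed of light in vacuum.
c = 299_792_458.0  # m/s

for wavelength_um in (30.0, 300.0):
    wavelength_m = wavelength_um * 1e-6
    freq_thz = c / wavelength_m / 1e12
    print(f"{wavelength_um:5.0f} um  ->  about {freq_thz:4.1f} THz")
```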
The Kochen-Specker theorem has been discussed intensely ever since its original proof in 1967. It is one of the central no-go theorems of quantum theory, showing the non-existence of a certain kind of hidden-state model. In this paper, we first offer a new, non-combinatorial proof for quantum systems with a type I_n factor as algebra of observables, including I_infinity. Afterwards, we give a proof of the Kochen-Specker theorem for an arbitrary von Neumann algebra R without summands of types I_1 and I_2, using a known result on two-valued measures on the projection lattice P(R). Some connections with presheaf formulations as proposed by Isham and Butterfield are made.
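For orientation, here is the standard Hilbert-space formulation that the paper generalises (my addition, a textbook statement only; the algebraic version for von Neumann algebras is the paper's contribution):

```latex
% Kochen-Specker theorem, standard Hilbert-space form (dim >= 3).
\textbf{Theorem (Kochen--Specker).} Let $\mathcal{H}$ be a Hilbert space with
$\dim\mathcal{H}\ge 3$. There is no map $v:\mathcal{P}(\mathcal{H})\to\{0,1\}$ on the
projection lattice such that for every finite family $(P_1,\dots,P_k)$ of mutually
orthogonal projections with $P_1+\dots+P_k=\mathbf{1}$ exactly one $P_i$ satisfies
$v(P_i)=1$. Equivalently, there is no valuation of observables obeying the functional
composition principle $v(f(A))=f(v(A))$ for Borel functions $f$.
```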
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Klassifikation: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1 .
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB-banks are well-capitalized or gain little from taking risks, then they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL Klassifikation: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL Klassifikation: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely with real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL Klassifikation: E5, E61, D78
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market and the magnitude of this sensitivity increases when credit quality becomes worse. Finally, the CDS market plays a more important role for price discovery than the corporate bond market. JEL Klassifikation: G10, G14, C32.
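To make the econometric machinery concrete, here is a rough sketch of a lead-lag/Granger-causality check between stock returns and CDS spread changes (my illustration on synthetic data using statsmodels; the variable names and lag lengths are assumptions, not the authors' specification):

```python
# Sketch: VAR lead-lag structure and Granger causality between stock returns
# and CDS spread changes (synthetic data, illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 150
stock_ret = rng.normal(0, 1, n)
# Let CDS spread changes respond (negatively) to lagged stock returns.
cds_chg = -0.4 * np.roll(stock_ret, 1) + rng.normal(0, 1, n)
data = pd.DataFrame({"cds_chg": cds_chg[1:], "stock_ret": stock_ret[1:]})

# Estimate a small VAR to inspect the lead-lag coefficients.
var_res = VAR(data).fit(maxlags=2)
print(var_res.summary())

# Test H0: stock returns do NOT Granger-cause CDS spread changes.
gc_res = grangercausalitytests(data[["cds_chg", "stock_ret"]], maxlag=2)
```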
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over-and-above the direct news announcement effects. JEL Klassifikation: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, firms are less profitable, and poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL Klassifikation: D82; G21; G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication and a lower probability of potentially harmful co-investment, and facilitate board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which goes hand in hand with an increased probability of investment in high-tech companies. Klassifikation: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Klassifikation: C1, G1
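To fix notation (my sketch, not the authors' code): the realized beta for a period is simply the realized covariance of an asset with the market divided by the realized market variance, both built from high-frequency returns.

```python
# Sketch: realized beta from one day of intraday returns (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_intraday = 78  # e.g. 5-minute returns over a 6.5-hour trading day

market = rng.normal(0.0, 0.001, n_intraday)
stock = 1.2 * market + rng.normal(0.0, 0.002, n_intraday)  # true beta about 1.2

realized_var_mkt = np.sum(market ** 2)           # realized market variance
realized_cov = np.sum(stock * market)            # realized covariance with the market
realized_beta = realized_cov / realized_var_mkt  # realized beta for the day

print(f"realized beta = {realized_beta:.2f}")
```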
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Klassifikation: C62, D83, D84, E31
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. Klassifikation: E31, E52, E58, E61
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without a lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL Klassifikation: C63, E31, E52.
In this article, we investigate risk return characteristics and diversification benefits when private equity is used as a portfolio component. We use a unique dataset describing 642 US-American portfolio companies with 3620 private equity investments. Information about precisely dated cash flows at the company level enables for the first time a cash flow equivalent and simultaneous investment simulation in stocks, as well as the construction of stock portfolios for benchmarking purposes. With respect to the methodology involved, we construct private equity, stock-benchmark and mixed-asset portfolios using bootstrap simulations. For the late 1990s we find a dramatic increase in the extent to which private equity outperforms stock investment. In earlier years private equity was underperforming its stock benchmarks. Within the overall class of private equity, returns on earlier private equity investment categories, like venture capital, show on average higher variations and even higher rates of failure. It is in this category in particular that high average portfolio returns are generated solely by the ability to select a few extremely well performing companies, thus compensating for lost investments. There is a high marginal diversifiable risk reduction of about 80% when the portfolio size is increased to include 15 investments. When the portfolio size is increased from 15 to 200 there are few marginal risk diversification effects on the one hand, but a large increase in managing expenditure on the other, so that an actual average portfolio size between 20 and 28 investments seems to be well balanced. We provide empirical evidence that the non-diversifiable risk that a constrained investor, who is exclusively investing in private equity, has to hold exceeds that of constrained stock investors and also the market risk. From the viewpoint of unconstrained investors with complete investment freedom, risk can be optimally reduced by constructing mixed asset portfolios. According to the various private equity subcategories analyzed, there are big differences in optimal allocations to this asset class for minimizing mixed-asset portfolio variance or maximizing performance ratios. We observe optimal portfolio weightings to be between 3% and 65%.
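A stylised version of the portfolio-size bootstrap described above (my sketch with made-up, skewed return draws, not the authors' cash-flow data) illustrates how the dispersion of equally weighted portfolio returns shrinks as more investments are added:

```python
# Sketch: bootstrap simulation of diversification across portfolio sizes
# (synthetic, highly skewed "private-equity-like" returns; illustrative only).
import numpy as np

rng = np.random.default_rng(2)
pool = rng.lognormal(mean=-0.3, sigma=1.0, size=5000) - 1.0  # many losses, a few big wins

def portfolio_risk(size, n_draws=2000):
    """Standard deviation of equally weighted portfolio returns for a given size."""
    draws = rng.choice(pool, size=(n_draws, size), replace=True)
    return draws.mean(axis=1).std()

base = portfolio_risk(1)
for size in (1, 5, 15, 50, 200):
    risk = portfolio_risk(size)
    print(f"size {size:4d}: std {risk:.3f}  (reduction vs. single deal: {1 - risk / base:.0%})")
```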
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
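A minimal numpy sketch of the modelling idea, a seasonal conditional mean plus persistent conditional variance, on simulated rather than actual city temperature data (my illustration; the paper's specification is richer):

```python
# Sketch: seasonal mean for daily average temperature and a check for
# conditional variance dynamics (volatility clustering) in the residuals.
# Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(3650)
seasonal_mean = 15 + 10 * np.sin(2 * np.pi * days / 365.25)
seasonal_vol = 2 + 1.5 * np.abs(np.sin(2 * np.pi * days / 365.25))  # volatility varies over the year
temp = seasonal_mean + seasonal_vol * rng.normal(size=days.size)

# Fit the seasonal (conditional mean) part by least squares on sine/cosine terms.
X = np.column_stack([np.ones(days.size),
                     np.sin(2 * np.pi * days / 365.25),
                     np.cos(2 * np.pi * days / 365.25)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ coef

# Conditional variance dynamics show up as autocorrelation in squared residuals.
sq = resid ** 2
lag1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print(f"lag-1 autocorrelation of squared residuals: {lag1:.2f}")
```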
Despite powerful advances in yield curve modeling in the last twenty years, comparatively little attention has been paid to the key practical problem of forecasting the yield curve. In this paper we do so. We use neither the no-arbitrage approach, which focuses on accurately fitting the cross section of interest rates at any given time but neglects time-series dynamics, nor the equilibrium approach, which focuses on time-series dynamics (primarily those of the instantaneous rate) but pays comparatively little attention to fitting the entire cross section at any given time and has been shown to forecast poorly. Instead, we use variations on the Nelson-Siegel exponential components framework to model the entire yield curve, period-by-period, as a three-dimensional parameter evolving dynamically. We show that the three time-varying parameters may be interpreted as factors corresponding to level, slope and curvature, and that they may be estimated with high efficiency. We propose and estimate autoregressive models for the factors, and we show that our models are consistent with a variety of stylized facts regarding the yield curve. We use our models to produce term-structure forecasts at both short and long horizons, with encouraging results. In particular, our forecasts appear much more accurate at long horizons than various standard benchmark forecasts. JEL Code: G1, E4, C5
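For reference, the Nelson-Siegel curve underlying this framework can be written as follows (standard textbook form; the paper's exact parameterisation, e.g. the treatment of the decay parameter, may differ in detail):

```latex
% Nelson-Siegel yield curve: yield at maturity \tau as a function of three factors.
y_t(\tau) \;=\; \beta_{1t}
\;+\; \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau}
\;+\; \beta_{3t}\!\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right),
```

where $\beta_{1t}$, $\beta_{2t}$ and $\beta_{3t}$ play the roles of level, slope and curvature factors and $\lambda$ governs how fast the loadings decay with maturity; forecasting the curve then reduces to forecasting the three factor time series.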
We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that: (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
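Point (a) can be made precise with a one-line calculation (my restatement of the standard argument, assuming conditional normality purely for illustration):

```latex
% Why volatility dependence implies sign dependence when the mean return is nonzero.
% Assume r_{t+1} | \Omega_t ~ N(\mu, \sigma_{t+1}^2) with constant mean \mu.
\Pr\!\left(r_{t+1} > 0 \mid \Omega_t\right)
  \;=\; 1 - \Phi\!\left(\frac{-\mu}{\sigma_{t+1}}\right)
  \;=\; \Phi\!\left(\frac{\mu}{\sigma_{t+1}}\right),
```

which varies over time whenever $\sigma_{t+1}$ is forecastable and $\mu \neq 0$, even though the conditional mean itself is constant.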
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
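One concrete way to see how ranges can deliver covariances (my illustration, combining Parkinson's classical range-based variance estimator with the polarization identity; the estimator proposed in the paper is motivated by no-arbitrage relations among traded assets and may differ in form):

```latex
% Parkinson (1980) range-based variance estimator, with H_i and L_i the log high
% and log low of asset i over the trading period:
\hat{\sigma}_i^2 \;=\; \frac{(H_i - L_i)^2}{4\ln 2}.
% If the range of a traded combination A+B (e.g. a portfolio or a cross rate) is also
% observed, a covariance estimate follows from the polarization identity:
\widehat{\operatorname{Cov}}(A,B) \;=\; \tfrac{1}{2}\Bigl(\hat{\sigma}_{A+B}^2 - \hat{\sigma}_A^2 - \hat{\sigma}_B^2\Bigr).
```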
Financial theory creates a puzzle. Some authors argue that high-risk entrepreneurs choose debt contracts instead of equity contracts since risky but high returns are of relatively more value for a loan-financed firm. On the contrary, authors who focus explicitly on start-up finance predict that entrepreneurs are the more likely to seek equity-like venture capital contracts, the more risky their projects are. Our paper makes a first step to resolve this puzzle empirically. We present microeconometric evidence on the determinants of debt and equity financing in young and innovative SMEs. We pay special attention to the role of risk for the choice of the financing method. Since risk is not directly observable we use different indicators for financial and project risk. It turns out that our data generally confirms the hypothesis that the probability that a young high-tech firm receives equity financing is an increasing function of the financial risk. With regard to the intrinsic project risk, our results are less conclusive, as some of our indicators of a risky project are found to have a negative effect on the likelihood to be financed by private equity.
We study the returns to venture capital and private equity investment using data from 221 venture capital and private equity funds that are part of 72 venture capital and private equity firms, 5040 entrepreneurial firms (3826 venture capital and 1214 private equity), spanning 32 years (1971-2003) and 39 countries from North and South America, Europe and Asia. We make use of four main categories of variables to proxy for value-added activities and risks that explain venture capital and private equity returns: market and legal environment, VC characteristics, entrepreneurial firm characteristics, and the characteristics and structure of the investment. We show that Heckman sample selection issues with regard to both unrealized and partially realized investments are important to consider when analysing the determinants of realized returns. We further compare the actual unrealized returns, as reported to investment managers, to the predicted unrealized returns based on the estimates of realized returns from the sample selection models. We show that there exist significant systematic biases in the reporting of unrealized investments to institutional investors, depending on the level of the earnings aggressiveness and disclosure indices in a country, as well as on proxies for the degree of information asymmetry between investment managers and venture capital and private equity fund managers. Klassifikation: G24, G28, G31, G32, G35
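A bare-bones two-step Heckman correction of the kind referenced here looks roughly as follows (my sketch on simulated data; the variables, instrument and error structure are invented for illustration and are not the paper's specification):

```python
# Sketch: Heckman two-step sample-selection correction (simulated data only).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000
z = rng.normal(size=n)                      # instrument affecting selection (exit) only
x = rng.normal(size=n)                      # determinant of returns
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)  # correlated errors

selected = (0.5 + 1.0 * z + u[:, 0]) > 0    # investment realized/exited -> return observed
returns = 1.0 + 0.8 * x + u[:, 1]           # latent return equation (true slope 0.8)
returns_obs = np.where(selected, returns, np.nan)

# Step 1: probit for selection, then the inverse Mills ratio.
exog1 = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(selected.astype(float), exog1).fit(disp=0)
xb = exog1 @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
exog2 = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
ols = sm.OLS(returns_obs[selected], exog2).fit()
print(ols.params)  # slope on x near 0.8; last coefficient captures the selection effect
```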
We analyze welfare maximizing monetary policy in a dynamic two-country model with price stickiness and imperfect competition. In this context, a typical terms of trade externality affects policy interaction between independent monetary authorities. Unlike the existing literature, we remain consistent with a public finance approach by explicitly considering all the distortions that are relevant to the Ramsey planner. This strategy entails two main advantages. First, it allows an accurate characterization of optimal policy in an economy that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe a full range of alternative dynamic equilibria when price setters in both countries are completely forward-looking and households' preferences are not restricted. In this context, we study optimal policy both in the long run and along a dynamic path, and we compare optimal commitment policy under Nash competition and under cooperation. By deriving a second order accurate solution to the policy functions, we also characterize the welfare gains from international policy cooperation. Klassifikation: E52, F41. This version: January, 2004. First draft: October 2003.
This paper considers a theoretical model of n asymmetric firms that reduce their initial unit costs by spending on R&D activities. In accordance with Schumpeterian hypotheses we obtain that more efficient (bigger) firms spend more in R&D and this leads to a more concentrated market structure. We also find a positive relationship between innovation and market concentration. This calls for a corrective tax on R&D activities to curtail strategic incentives to over-invest in R&D trying to achieve a higher market share. Klassifikation: L11, L52, O31 . February, 2004.
This paper aims to analyze the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We thereby investigate the hypothesis that different governance structures, objectives and track record of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis by using a data set embracing all IPOs which occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO compared to all other IPOs and their share prices fluctuate less than those of their counterparts in this period of time. Obviously, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value by leading to less post-IPO idiosyncratic risk and more return (after controlling for all other effects). On the contrary, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. Klassifikation: G10, G14, G24 . 29th January 2004 .
How might retirees consider deploying the retirement assets accumulated in a defined contribution pension plan? One possibility would be to purchase an immediate annuity. Another approach, called the "phased withdrawal" strategy in the literature, would have the retiree invest his funds and then withdraw some portion of the account annually. Using this second tactic, the withdrawal rate might be determined according to a fixed benefit level payable until the retiree dies or the funds run out, or it could be set using a variable formula, where the retiree withdraws funds according to a rule linked to life expectancy. Using a range of data consistent with the German experience, we evaluate several alternative designs for phased withdrawal strategies, allowing for endogenous asset allocation patterns, and also allowing the worker to make decisions both about when to retire and when to switch to an annuity. We show that one particular phased withdrawal rule is appealing since it offers relatively low expected shortfall risk, good expected payouts for the retiree during his life, and some bequest potential for the heirs. We also find that unisex mortality tables if used for annuity pricing can make women's expected shortfalls higher, expected benefits higher, and bequests lower under a phased withdrawal program. Finally, we show that delayed annuitization can be appealing since it provides higher expected benefits with lower expected shortfalls, at the cost of somewhat lower anticipated bequests. Klassifikation: G22, G23, J26, J32, H55 . January 2004.
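To fix ideas, a phased-withdrawal rule can be evaluated by Monte Carlo on the three quantities discussed, shortfall risk, expected payout and expected bequest (my toy simulation with invented parameters, not the German calibration used in the paper):

```python
# Sketch: Monte Carlo evaluation of a fixed-fraction phased-withdrawal rule
# (toy parameters, illustrative only).
import numpy as np

rng = np.random.default_rng(5)
n_paths, horizon = 10_000, 30       # retirement years considered
mu, sigma = 0.05, 0.12              # assumed portfolio return distribution
withdraw_frac = 0.06                # withdraw 6% of remaining wealth each year
benchmark = 5.0                     # annuity-like benchmark payment (initial wealth = 100)

wealth = np.full(n_paths, 100.0)
shortfall_years = np.zeros(n_paths)
payouts = np.zeros(n_paths)

for _ in range(horizon):
    payment = withdraw_frac * wealth
    payouts += payment
    shortfall_years += payment < benchmark           # payment fell short of the benchmark
    wealth = (wealth - payment) * (1 + rng.normal(mu, sigma, n_paths))
    wealth = np.maximum(wealth, 0.0)

print(f"P(at least one shortfall year): {np.mean(shortfall_years > 0):.2%}")
print(f"mean cumulative payout        : {payouts.mean():.1f}")
print(f"mean bequest at horizon       : {wealth.mean():.1f}")
```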
The life of Varroa destructor (Anderson and Trueman), an ectoparasitic mite of honeybees, is divided into a reproductive phase in the bee brood and a phoretic phase during which the mite is attached to the adult bee. Phoretic mites leave the colony with workers involved in foraging tasks. Little information is available on the mortality of mites outside the colony. Mites may or may not return to the colony as a result of death of the infested foragers, host change by drifting of foragers, or removal of mites outside the colony. That mites do not return to the colony was indicated by the substantially higher infestation of outflying workers compared to that of returning workers (Kutschker, 1999). The main objective of this study was to determine whether V. destructor influences the flight behaviour of foragers and, consequently, the frequency with which foragers return to the colony. I first repeated the experiment of Kutschker (1999), examining the infestation of outflying and returning workers. Further, I recorded the flight duration of foragers using a video method. In this experiment I also compared the infestation and flight duration of bees of different genetic origin, Carnica from Oberursel and bees from the Primorsky region. I investigated the returning time of workers, the returning frequency until evening, drifting to other colonies, and orientation toward the nest entrance in experiments in which workers were released in close vicinity of the colony. Finally, I measured the loss of foragers in relation to colony infestation using a Bee Scan. The results of this study, listed below, show a considerable influence of V. destructor on the flight behaviour of foragers, which translates into a loss of mites. The loss of mites with foragers adds a substantial component to mite mortality and was underestimated in previous studies. Such loss might be viewed as a mechanism of resistance against V. destructor. a) The mean infestation of outflying workers (0.019±0.018) was twice the mean infestation of returning workers (0.009±0.018). The difference in infestation between outflying and returning workers was more marked in highly infested colonies. b) Investigation of individually tagged workers with a two-camera video recording device showed a significantly higher infestation of outflying workers compared to returning workers. Mites were lost through the non-return of infested foragers (22%) and through loss of mites from foragers that returned to the colony without the mite (20%). A small portion of mites (1.8%) was gained. The loss of mites significantly exceeded the mite gain. c) Flight duration, determined using the same two-camera video system, was significantly longer for infested workers than for uninfested workers of the same age that flew closest in time. The median flight duration of infested workers (214 s) was 1.7 times the median duration of uninfested workers (128 s). d) Infested workers took 2.3 times longer to return to the colony than uninfested workers of the same age when released from the same locations closest in time. The returning time increased with the distance of release. In a group of bees released simultaneously, infestation was higher in bees returning later and in those that did not return within the observation period of 15 min. e) Released infested workers failed to return to the colony by evening 1.5 times more frequently than uninfested workers. The difference in returning was significant for release locations 20 and 50 m from the colony.
No difference in returning between infested and uninfested workers was observed for the most distant release location of 400 m. f) No significant difference was found in returning time or in returning frequency until evening between workers artificially infested overnight and naturally infested workers. Artificially infested workers returned later and less frequently than a control group, indicating a rapid influence of V. destructor on the flight behaviour of foragers. g) The orientation ability of infested workers toward the nest entrance was impaired. Compared to uninfested workers, infested workers approached a dummy entrance twice as often before finding the nest entrance. h) No significant differences in drifting were found between infested and uninfested workers. Drifting into the neighbouring nucleus colony occurred on about 1% of occasions after the release of marked workers. Similarly, more infested workers entered a differently coloured hive (2.6%) than a hive of the same colour (1.9%), but not significantly more. However, the number of drifting bees was too low to make the results conclusive. i) The comparison between Carnica and Primorsky workers revealed a higher infestation in Carnica than in Primorsky. Further, Primorsky workers lost more mites during foraging through the loss of mites from foragers and the non-return of infested workers. No significant differences in flight duration were observed between the two bee stocks. j) The loss of foragers, as determined by Bee Scan counts of outflying and returning foragers, and the infestation of outflying bees increased significantly over a period of 70 days. A colony with a 7.7 times higher infestation of outflying foragers lost 2.2 times more bees per flight per day compared to a colony with low infestation. k) Estimated daily mite loss with foragers of up to 3.1% of the mite population clearly exceeds the approximately 1% mite mortality within the colony, as determined by counting dead mites on bottom-board inserts.
Alzheimer’s disease (AD) is the most common neurodegenerative disorder worldwide, causing presenile dementia and the death of millions of people. During AD, damage to and massive loss of brain cells occur. Alzheimer’s disease is genetically heterogeneous and may therefore represent a common phenotype that results from various genetic and environmental influences and risk factors. In approximately 10% of patients, changes of the genetic information (gene mutations) were detected. In these cases, Alzheimer’s disease is inherited as an autosomal dominant trait (familial Alzheimer’s disease, FAD). In rare cases of familial Alzheimer’s disease (about 1-3%), mutations have been detected in genes on chromosomes 14 and 1 (encoding Presenilin 1 and 2, respectively), and on chromosome 21 encoding the amyloid precursor protein (APP), which is responsible for the release of the cell-damaging protein amyloid-beta (ß-amyloid, Aß). Familial forms of early-onset Alzheimer’s disease are rare; however, their importance extends far beyond their frequency, because they allow the identification of some of the critical pathogenetic pathways of the disease. All familial Alzheimer mutations share a common feature: they lead to an enhanced production of Aß, which is the major constituent of senile plaques in the brains of AD patients. New data indicate that Aß promotes neuronal degeneration. Therefore, one aim of this thesis was to elucidate the neurotoxic biochemical pathways induced by Aß by investigating the effect of the FAD Swedish APP double mutation (APPsw) on oxidative stress-induced cell death mechanisms. This mutation results in a three- to sixfold increased Aß production compared to wild-type APP (APPwt). As cell models, the neuronal PC12 (rat pheochromocytoma) and the HEK (human embryonic kidney 293) cell lines were used, which had been transfected with human wild-type APP or human APP containing the Swedish double mutation. The cell models used offer two important advantages. First, compared to experiments applying high, micromolar concentrations of Aß extracellularly to cells, PC12 APPsw cells secrete low Aß levels similar to the situation in FAD brains. Thus, this cell model represents a very suitable approach for elucidating the AD-specific cell death pathways under near-physiological conditions. Second, these two cell lines (PC12 and HEK APPwt and APPsw), with different production levels of Aß, may additionally allow the study of dose-dependent effects of Aß. The results obtained here provide evidence for the enhanced cell vulnerability caused by the Swedish APP mutation and elucidate the cell death mechanism probably initiated by intracellularly produced Aß. It seems likely that increased production of Aß at physiological levels primes APPsw PC12 cells to undergo cell death only after additional stress, while chronically high levels in HEK cells already lead to enhanced basal apoptotic levels. Crucial effects of the Swedish APP mutation include impairment of cellular energy metabolism, affecting mitochondrial membrane potential and ATP levels, as well as the additional activation of caspase 2, caspase 8 and JNK in response to oxidative stress. Thereby, the following model can be proposed: PC12 cells harboring the Swedish APP mutation have a reduced energy metabolism compared to APPwt or control cells. However, this effect does not lead to enhanced basal apoptotic levels of cultured cells.
Exposure of PC12 cells to oxidative stress leads to mitochondrial dysfunction, e.g., a decrease in mitochondrial membrane potential and depletion of ATP. The consequence is the activation of the intrinsic apoptotic pathway, releasing cytochrome c and Smac and resulting in the activation of caspase 9. This effect is amplified by the overexpression of APP, since both APPsw and APPwt PC12 cells show enhanced cytochrome c and Smac release as well as enhanced caspase 9 activity compared to the vector-transfected control. In APPsw PC12 cells a parallel pathway is additionally engaged. Due to reduced ATP levels or enhanced Aß production, JNK is activated. Furthermore, the extrinsic apoptotic pathway is enhanced, since caspase 8 and caspase 2 activation was clearly enhanced by the Swedish APP mutation. Both pathways may then converge by activating the effector enzyme, caspase 3, and the execution of cell death. In addition, caspase-independent effects also need to be considered. One possibility could be the implication of AIF, since AIF expression was found to be induced by the Swedish APP mutation. In APPsw HEK cells, chronically high Aß levels lead to enhanced apoptotic levels and reduced mitochondrial membrane potential and ATP levels even under basal conditions. In summary, a hypothetical sequence of events is proposed linking FAD, Aß production, JNK activation, mitochondrial dysfunction, the caspase pathway and neuronal loss in our cell model. The brain has a high metabolic rate and is exposed to gradually rising levels of oxidative stress during life. In Swedish FAD patients the levels of oxidative stress are increased in the temporal inferior cortex. This study, using a cell model mimicking the in vivo situation in AD brains, indicates that increased Aß production and the gradual rise of oxidative stress throughout life probably converge on a final common pathway of increased vulnerability of neurons from FAD patients to apoptotic cell death. Presenilin (PS) 1 is an aspartyl protease involved in the gamma-secretase-mediated proteolysis of amyloid-ß protein (Aß), the major constituent of senile plaques in the brains of Alzheimer’s disease (AD) patients. Recent studies have suggested an additional role for presenilin proteins in the apoptotic cell death observed in AD. Since PS1 is proteolytically cleaved by caspase 3, it has been proposed that the resulting C-terminal fragment of PS1 (PSCas) could play a role in signal transduction during apoptosis. Moreover, it was shown that mutant presenilins causing early-onset familial Alzheimer's disease (FAD) may render cells vulnerable to apoptosis. The mechanism by which PS1 regulates apoptotic cell death is not yet understood. Therefore, one aim of the present study was to clarify the involvement of PS1 in the proteolytic cascade of apoptosis and whether the cleavage of PS1 by caspase 3 has a regulatory function. It is demonstrated here that both PS1 and PS1Cas lead to a reduced vulnerability of PC12 and Jurkat cells to different apoptotic stimuli. However, a mutation at the caspase 3 recognition site (D345A/PSmut), which inhibits cleavage of PS1 by caspase 3, showed no differences in the effect of PS1 or PSCas towards apoptotic stimuli. This suggests that proteolysis of PS1 by caspase 3 is not a determinant, but only a secondary effect during apoptosis. Since several FAD mutations distributed throughout the PS1 gene lead to enhanced apoptosis, abolition of the antiapoptotic effect of PS1 might contribute to the massive neurodegeneration at an early age in FAD patients.
Thus, the regulatory role of PS1 in apoptosis may be exerted not through caspase 3-dependent cleavage and generation of PSCas, but rather through the interaction of PS1 with other proteins involved in apoptosis.
This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a non-deterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
Work on proving congruence of bisimulation in functional programming languages often refers to [How89, How96], where Howe gave a highly general account of this topic in terms of so-called "lazy computation systems". Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda-calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naïve bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND-calculus.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The very different topics included in this book provide insight into fields of ongoing Semantics research.