We suggest that the fluctuations of strange hadron multiplicity could be sensitive to the equation of state and microscopic structure of strongly interacting matter created at the early stage of high energy nucleus-nucleus collisions. They may serve as an important tool in the study of the deconfinement phase transition. We predict, within the statistical model of the early stage, that the ratio of properly filtered fluctuations of strange to non-strange hadron multiplicities should have a non-monotonic energy dependence with a minimum in the mixed phase region.
The data on mT spectra of K0S, K+, and K- mesons produced in all inelastic p+p and p+pbar interactions in the energy range sqrt(s)NN = 4.7-1800 GeV are compiled and analyzed. The spectra are parameterized by a single exponential function, dN/(mT dmT) = C exp(-mT/T), and the inverse slope parameter T is the main object of study. The T parameter is found to be similar for K0S, K+, and K- mesons. It increases monotonically with collision energy, from T ~ 30 MeV at sqrt(s)NN = 4.7 GeV to T ~ 220 MeV at sqrt(s)NN = 1800 GeV. The T parameter measured in p+p and p+pbar interactions is significantly lower than the corresponding parameter obtained for central Pb+Pb collisions at all studied energies. The shape of the energy dependence of T also differs between central Pb+Pb collisions and p+p(pbar) interactions.
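Extracting the inverse slope parameter from a spectrum of this form amounts to a weighted exponential fit. A minimal sketch in Python, using synthetic points in place of the compiled data (all values below are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def spectrum(mT, C, T):
    # Single-exponential parameterization: dN/(mT dmT) = C * exp(-mT / T)
    return C * np.exp(-mT / T)

# Hypothetical kaon spectrum: transverse mass (GeV) vs. dN/(mT dmT),
# generated with T = 0.160 GeV plus 5% Gaussian scatter for illustration.
rng = np.random.default_rng(0)
mT = np.linspace(0.5, 1.5, 11)
y = spectrum(mT, 10.0, 0.160) * rng.normal(1.0, 0.05, mT.size)
yerr = 0.05 * y

# Weighted least-squares fit; popt[1] is the inverse slope parameter T.
popt, pcov = curve_fit(spectrum, mT, y, p0=(1.0, 0.1), sigma=yerr)
T_fit, T_err = popt[1], np.sqrt(pcov[1, 1])
print(f"T = {1000 * T_fit:.0f} +/- {1000 * T_err:.0f} MeV")
```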
We propose a method to experimentally study the equation of state of strongly interacting matter created at the early stage of nucleus-nucleus collisions. The method exploits the relation between relative entropy and energy fluctuations on the one hand and the equation of state on the other. As a measurable quantity, the ratio of properly filtered multiplicity fluctuations to energy fluctuations is proposed. Within a statistical approach to the early stage of nucleus-nucleus collisions, the fluctuation ratio exhibits a non-monotonic collision energy dependence with a maximum in the domain where the onset of deconfinement occurs.
Production of Lambda and Anti-Lambda hyperons was measured in central Pb+Pb collisions at 40, 80, and 158 A GeV beam energy on a fixed target. Transverse mass spectra and rapidity distributions are given for all three energies. The Lambda/pi ratio at mid-rapidity and in full phase space shows a pronounced maximum between the highest AGS and 40 A GeV SPS energies, whereas the Anti-Lambda/pi ratio exhibits a monotonic increase. PACS numbers: 25.75.-q
Fluctuations of charged particle number are studied in the canonical ensemble. In the infinite volume limit the fluctuations in the canonical ensemble differ from the fluctuations in the grand canonical one. Thus, the well-known equivalence of the two ensembles for average quantities does not extend to fluctuations. In view of the possible relevance of these results for the analysis of fluctuations in nuclear collisions at high energies, the role of limited kinematical acceptance is also studied.
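For orientation, the fluctuation measure at issue is the scaled variance of the multiplicity distribution; the canonical suppression can be summarized as follows (a sketch of the standard result for an ideal Boltzmann gas, with the 1/2 limit as quoted in the canonical-ensemble literature):

```latex
% Scaled variance of the multiplicity distribution of, e.g., positive particles
\omega \;\equiv\; \frac{\langle N^2\rangle - \langle N\rangle^2}{\langle N\rangle}
% Grand canonical ensemble (Poissonian ideal Boltzmann gas):
\omega_{\mathrm{g.c.e.}} = 1
% Canonical ensemble with exact charge conservation, infinite volume limit:
\omega_{\mathrm{c.e.}} \;\longrightarrow\; \tfrac{1}{2}
```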
Report from NA49
(2004)
The most recent data of NA49 on hadron production in nuclear collisions at CERN SPS energies are presented. Anomalies in the energy dependence of pion and kaon production in central Pb+Pb collisions are observed. They suggest that the onset of deconfinement is located at about 30 AGeV. Large multiplicity and transverse momentum fluctuations are measured for collisions of intermediate mass systems at 158 AGeV. The need for a new experimental programme at the CERN SPS is underlined.
The transverse mass mT distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity and in the range 0 < mT - m < 1.0 (1.5) GeV/c^2 for minimum bias collisions at 158 A GeV and for central collisions at 40 and 80 A GeV beam energies. The rapidity density dn/dy, inverse slope parameter T and mean transverse mass <mT> derived from the mT distributions, as well as the coalescence parameter B2, are studied as a function of the incident energy and the collision centrality. The deuteron mT spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality, reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mT. Finally, B2 increases with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
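The coalescence parameter mentioned here is conventionally defined through the coalescence ansatz relating the invariant deuteron yield to the square of the proton yield at half the deuteron momentum (the standard definition, not spelled out in the abstract itself):

```latex
% Coalescence ansatz defining B_2:
E_d\,\frac{d^3 N_d}{dp_d^3}
  \;=\; B_2 \left( E_p\,\frac{d^3 N_p}{dp_p^3} \right)^{\!2}
  \Bigg|_{\;\vec p_p \,=\, \vec p_d/2}
% A smaller B_2 corresponds to a larger effective coalescence volume.
```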
Preliminary results on pion-pion Bose-Einstein correlations in central Pb+Pb collisions measured by the NA49 experiment are presented. The rapidity and transverse momentum dependence of the HBT radii is shown for collisions at 20, 30, 40, 80, and 158 AGeV beam energy. Including results from AGS and RHIC experiments, only a weak energy dependence of the radii is observed. Based on hydrodynamical models, parameters such as the lifetime and geometrical radius of the source are derived from the dependence of the radii on transverse momentum.
Event-by-event fluctuations of particle ratios in central Pb + Pb collisions at 20 to 158 AGeV
(2004)
In the vicinity of the QCD phase transition, critical fluctuations have been predicted to lead to non-statistical fluctuations of particle ratios, depending on the nature of the phase transition. Recent results of the NA49 energy scan program show a sharp maximum of the ratio of K+ to Pi+ yields in central Pb+Pb collisions at beam energies of 20-30 AGeV. This observation has been interpreted as an indication of a phase transition at low SPS energies. We present first results on event-by-event fluctuations of the kaon to pion and proton to pion ratios at beam energies close to this maximum.
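A measure commonly used to isolate the non-statistical part of such event-by-event ratio fluctuations, by comparison with mixed events that carry no correlations, is the following (quoted here as the standard definition; the abstract does not specify NA49's exact estimator):

```latex
% Dynamical fluctuations of an event-wise ratio such as K/pi:
\sigma_{\mathrm{dyn}} \;=\;
  \operatorname{sign}\!\left(\sigma_{\mathrm{data}}^{2}-\sigma_{\mathrm{mixed}}^{2}\right)
  \sqrt{\left|\,\sigma_{\mathrm{data}}^{2}-\sigma_{\mathrm{mixed}}^{2}\,\right|}
% sigma_data, sigma_mixed: relative widths of the event-by-event ratio
% distribution in real and in mixed events, respectively.
```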
Results are presented on event-by-event electric charge fluctuations in central Pb+Pb collisions at 20, 30, 40, 80 and 158 AGeV. The observed fluctuations are close to those expected for a gas of pions correlated by global charge conservation only. These fluctuations are considerably larger than those calculated for an ideal gas of deconfined quarks and gluons. The present measurements do not necessarily exclude reduced fluctuations from a quark-gluon plasma because these might be masked by contributions from resonance decays.
System size and centrality dependence of the balance function in A + A collisions at √sNN = 17.2 GeV
(2004)
Electric charge correlations were studied for p+p, C+C, Si+Si and centrality-selected Pb+Pb collisions at √sNN = 17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudo-rapidity correlations of oppositely charged particles were measured using the Balance Function method. The width of the Balance Function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
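The Balance Function used here is, in the form going back to Bass, Danielewicz and Pratt, a charge-pair correlation in relative pseudo-rapidity:

```latex
B(\Delta\eta) \;=\; \frac{1}{2}\left[
  \frac{\langle N_{+-}(\Delta\eta)\rangle-\langle N_{++}(\Delta\eta)\rangle}{\langle N_{+}\rangle}
  \;+\;
  \frac{\langle N_{-+}(\Delta\eta)\rangle-\langle N_{--}(\Delta\eta)\rangle}{\langle N_{-}\rangle}
\right]
% N_{ab}(\Delta\eta): number of particle pairs with charges (a,b) whose
% pseudo-rapidities differ by \Delta\eta; N_{+}, N_{-}: charged multiplicities.
% A narrower B(\Delta\eta) means charges are balanced at smaller separations,
% as expected for delayed hadronization.
```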
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
The system size dependence of multiplicity fluctuations of charged particles produced in nuclear collisions at 158 A GeV was studied in the NA49 CERN experiment. The results indicate a non-monotonic dependence of the scaled variance of the multiplicity distribution, with a maximum for semi-peripheral Pb+Pb interactions with about 35 projectile participants. This effect is not observed in the string-hadronic model of nuclear collisions HIJING.
In the early 1990s the Hague Conference on Private International Law, on the initiative of the United States, started negotiations on a Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters (the "Hague Convention"). In October 1999 the Special Commission in charge presented a preliminary text, which was drafted quite closely along the lines of the European Convention on Jurisdiction and Enforcement of Judgments in Civil and Commercial Matters (the "Brussels Convention"). The latter was concluded between the then six Member States of the EEC in Brussels in 1968 and amended several times on the occasion of the entry of new Member States. In 2000, after the Treaty of Amsterdam altered the legal basis for judicial co-operation in civil matters in Europe, it was transformed into an EC Regulation (the "Brussels I Regulation"). The 1999 draft of the Hague Convention was heavily criticized by the USA and other states for its European approach of a double convention, regulating not only the recognition and enforcement of judgments but at the same time the extent of and the limits to jurisdiction to adjudicate in international cases. During a diplomatic conference in June 2001 a second draft was presented which contained alternative versions of several articles and thus documented the existing dissent more than it resembled a draft convention. Difficulties in reaching a consensus remained, especially with regard to activity-based jurisdiction, intellectual property, consumer rights and employee rights. In addition, the appropriateness of the whole draft was questioned in light of the problems posed by the de-territorialization of relevant conduct through the advent of the Internet. In April 2002 it was decided to continue negotiations on an informal level on the basis of a nucleus approach. The core consensus identified by a working group, however, was not very broad. The experts involved came to the conclusion that the project should be limited to choice of court agreements. In March 2004 a draft was presented which sets out its aims as follows: "The objective of the Convention is to make exclusive choice of court agreements as effective as possible in the context of international business. The hope is that the Convention will do for choice of court agreements what the New York Convention of 1958 has done for arbitration agreements." In April 2004 the Special Commission of the Hague Conference adopted a Draft "Convention on Exclusive Choice of Court Agreements", which according to its Art. 2 No. 1 a) is not applicable to choice of court agreements "to which a natural person acting primarily for personal, family or household purposes (a consumer) is a party". The broader project of a global judgments convention thus seems to have been abandoned, or at least postponed indefinitely. There are, of course, several reasons why the Hague judgments project failed. Samuel Baumgartner has described an important one as the "Justizkonflikt" between the United States and Europe or, more specifically, Germany. Within the context of the general topic of this conference, namely (international) jurisdiction for human rights, in the remainder of this presentation I shall elaborate on the socio-cultural aspects of the impartiality of judgments and their enforcement on a global scale.
In April 2003 I commented on the European Commission's Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final]. The main argument of that paper, namely the common neglect of the inherent interrelation between the further harmonisation of substantive contract law by directives or through an optional European Civil Code on the one hand and the modernisation of the conflict rules for consumer contracts in Art. 5 Rome Convention on the other, remains a pressing issue. As the German Law Journal continues its efforts to offer timely and critical analysis of consumer law issues, there is a variety of recent developments worth noting.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in-situ observations from the high altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, the denitrification has only a moderate impact (<=10%) on the simulated vortex average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background: In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to complex structures such as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. signal transduction) or spatial effects in metabolic processes, it would be helpful to use simulations of such reaction-diffusion systems. Pattern formation has been extensively studied in two dimensions. However, the extension to three-dimensional reaction-diffusion systems poses some challenges to the visualization of the processes being simulated. Scope of the Thesis: The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of existing literature on 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver), based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro, has to be developed. In a succeeding step, this algorithm has to be optimized for high performance. A prototypic 3D configuration tool for the initial state of the system has to be developed. This basic tool should enable the user to define and store the location of molecules, membranes and channels within a reaction space of user-defined size. A suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypic implementation of a suitable reaction space visualization component for the display of the simulation results. In particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests. The simulation, configuration and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA. Due to this international cooperation the thesis will be written in English.
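To make the simulation task concrete: a minimal sketch of an explicit finite-difference step for a two-species reaction-diffusion system on a 3D grid. Gray-Scott kinetics and all parameter values are chosen purely for illustration; the thesis's actual solver, species and parameters are not specified in this summary.

```python
import numpy as np

def laplacian_3d(u, dx):
    """Discrete 3D Laplacian with periodic boundaries (np.roll on each axis)."""
    lap = -6.0 * u
    for axis in range(3):
        lap += np.roll(u, +1, axis=axis) + np.roll(u, -1, axis=axis)
    return lap / dx**2

def step(u, v, dt=1.0, dx=1.0, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    # One explicit Euler step of Gray-Scott reaction-diffusion kinetics.
    uvv = u * v * v
    u_new = u + dt * (Du * laplacian_3d(u, dx) - uvv + F * (1.0 - u))
    v_new = v + dt * (Dv * laplacian_3d(v, dx) + uvv - (F + k) * v)
    return u_new, v_new

# Initial state: uniform u with a small perturbed cube of v in the centre.
n = 32
u = np.ones((n, n, n))
v = np.zeros((n, n, n))
c = slice(n // 2 - 2, n // 2 + 2)
u[c, c, c], v[c, c, c] = 0.5, 0.25
for _ in range(100):
    u, v = step(u, v)
print(u.mean(), v.mean())
```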
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158 A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the chemical equilibration of the system as a function of center-of-mass energy and of the parameters of the source. Additionally, we have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed under-saturation of the strangeness hadronic phase space.
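In the statistical hadronization approach, primary hadron multiplicities follow from the temperature, volume and chemical potentials of the source, and the strangeness under-saturation is usually encoded in a factor γS. A sketch of the standard Boltzmann-approximation formula (quantum statistics and feed-down corrections omitted):

```latex
% Primary multiplicity of hadron species i with g_i internal degrees of
% freedom, mass m_i, chemical potential mu_i, n_s^i valence strange quarks:
\langle N_i \rangle \;=\; \frac{g_i V}{2\pi^2}\;\gamma_S^{\,n_s^i}\;
  m_i^2\,T\,K_2\!\left(\frac{m_i}{T}\right)\,e^{\mu_i/T}
% gamma_S < 1 quantifies the under-saturation of strange-particle phase space.
```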
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4Pi yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision robust knowledge-poor anaphor resolution are designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
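The precision/recall trade-off invoked here is the usual one for a resolver that is allowed to abstain; a sketch of the definitions as they apply to anaphor resolution:

```latex
\mathrm{precision} \;=\; \frac{\#\ \text{correctly resolved anaphors}}
                              {\#\ \text{anaphors the system attempts}},
\qquad
\mathrm{recall} \;=\; \frac{\#\ \text{correctly resolved anaphors}}
                           {\#\ \text{anaphors in the corpus}}
% Giving precision precedence over recall means abstaining on low-confidence
% cases, so that the attempted subset (the competence cases) is resolved
% correctly almost always, at the price of leaving other anaphors unresolved.
```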
Assessing enhanced knowledge discovery systems (eKDSs) constitutes an intricate issue that is so far only partly understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data. A qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best-practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions of future research are outlined.
This study analyses the labour market effects of fixed-term contracts (FTCs) in West Germany by microeconometric methods using individual and establishment level data. In the first part of the study the role of FTCs in firms’ labour demand is analysed. An econometric investigation of the firms’ reasons for using FTCs focussing on the identification of the link between dismissal protection for permanent contract workers and the firms’ use of FTCs is presented. Furthermore, a descriptive analysis of the role of FTCs in worker and job flows at the firm level is provided. The second part of the study evaluates the short-run effects of being employed on an FTC on working conditions and wages using a large cross-sectional dataset of employees. The final part of the study analyses whether taking up an FTC increases the (permanent contract) employment opportunities in the long-run (stepping stone effect) and whether FTCs affect job finding behaviour of unemployed job searchers. Firstly, an econometric unemployment duration analysis distinguishing between both types of contracts as destination states is performed. Secondly, the effects of entering into FTCs from unemployment on future (permanent contract) employment opportunities are evaluated attempting to account for the sequential decision problem of job searchers.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovasz [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^O(n), SLLL-reduction runs in O(n^(5+epsilon)) bit operations for every epsilon > 0, compared to O(n^(7+epsilon)) for the original LLL algorithm and O(n^(6+epsilon)) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments, running in O(n^3 log n) arithmetic steps.
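For reference, a basis b_1, ..., b_n with Gram-Schmidt orthogonalization b_1*, ..., b_n* is LLL-reduced when the two classical conditions below hold; per the abstract, SLLL organizes the enforcement of such conditions over segments of the basis rather than globally (the segment bookkeeping is beyond this summary):

```latex
% Size reduction:
|\mu_{i,j}| \;\le\; \tfrac{1}{2}
  \qquad\text{for all } 1 \le j < i \le n,
  \qquad \mu_{i,j} = \frac{\langle b_i, b_j^{*}\rangle}{\langle b_j^{*}, b_j^{*}\rangle}
% Lovasz condition, with parameter 1/4 < \delta < 1:
\delta\,\lVert b_i^{*}\rVert^2 \;\le\; \lVert b_{i+1}^{*}\rVert^2
  + \mu_{i+1,i}^{2}\,\lVert b_i^{*}\rVert^2
```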
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and alpha K alpha^(-1) are conjugate in PSL(2,R), but in general the conjugating element alpha cannot be taken in G or in a finite-index Fuchsian extension of G. We show that in the case of a normal inclusion in a triangle group G these alpha can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
The large conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha- (BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. Most surprisingly, BKalpha–/– mice also did not show any obvious hearing deficits during the first 4 postnatal weeks. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a phenotype similar to that elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, the data already suggest mutation of the human BK-coding slo1 gene as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Dendritic cells (DC) are known to present exogenous protein Ag effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and DC derived from bone marrow, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to Th1 and Th2 cell clones. Pepstatin A also selectively inhibited cathepsin D/E activity among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) did not impair XS52 DC presentation of PPD, nor did they inhibit cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM) the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background: In rat, deafferentation of one labyrinth (unilateral labyrinthectomy) results in a characteristic syndrome of ocular and motor postural disorders (e.g., barrel rotation, circling behavior, and spontaneous nystagmus). Behavioral recovery (e.g., diminished symptoms), encompassing 1 week after unilateral labyrinthectomy, has been termed vestibular compensation. Evidence suggesting that the histamine H3 receptor plays a key role in vestibular compensation comes from studies indicating that betahistine, a histamine-like drug that acts as both a partial histamine H1 receptor agonist and an H3 receptor antagonist, can accelerate the process of vestibular compensation. Results: Expression levels for the histamine H3 receptor (total) as well as three isoforms which display variable lengths of the third intracellular loop of the receptor were analyzed using in situ hybridization on brain sections containing the rat medial vestibular nucleus after unilateral labyrinthectomy. We compared these expression levels to H3 receptor binding densities. Total H3 receptor mRNA levels (detected by oligo probe H3X) as well as mRNA levels of the three receptor isoforms studied (detected by oligo probes H3A, H3B, and H3C) showed a pattern of increase, which was bilaterally significant at 24 h post-lesion for both H3X and H3C, followed by significant bilateral decreases in medial vestibular nuclei occurring 48 h (H3X and H3B) and 1 week post-lesion (H3A, H3B, and H3C). Expression levels of H3B were an exception to the aforementioned pattern, with significant decreases already detected at 24 h post-lesion. Coinciding with the decreasing trends in H3 receptor mRNA levels was an observed increase in H3 receptor binding densities occurring in the ipsilateral medial vestibular nuclei 48 h post-lesion. Conclusion: Progressive recovery of the resting discharge of the deafferentated medial vestibular nuclei neurons results in functional restoration of the static postural and oculomotor deficits, usually occurring within a time frame of 48 hours in rats. Our data suggest that the H3 receptor may be an essential part of the pre-synaptic mechanisms required for reestablishing resting activities 48 h after unilateral labyrinthectomy.
Western cultures have witnessed a tremendous cultural and social transformation of sexuality in the years since the sexual revolution. Apart from a few public debates and scandals, the process has moved along gradually and quietly. Yet its real and symbolic effects are probably much more consequential than those generated by the sexual revolution of the sixties. Sigusch refers to the broad-based recoding and reassessment of the sexual sphere during the eighties and nineties as the "neosexual revolution". The neosexual revolution is dismantling the old patterns of sexuality and reassembling them anew. In the process, dimensions, intimate relationships, preferences and sexual fragments emerge, many of which had submerged, were unnamed or simply did not exist before. In general, sexuality has lost much of its symbolic meaning as a cultural phenomenon. Sexuality is no longer the great metaphor for pleasure and happiness, nor is it so greatly overestimated as it was during the sexual revolution. It is now widely taken for granted, much like egotism or motility. Whereas sex was once mystified in a positive sense - as ecstasy and transgression, it has now taken on a negative mystification characterized by abuse, violence and deadly infection. While the old sexuality was based primarily upon sexual instinct, orgasm and the heterosexual couple, neosexualities revolve predominantly around gender difference, thrills, self-gratification and prosthetic substitution. From the vast number of interrelated processes from which neosexualities emerge, three empirically observable phenomena have been selected for discussion here: the dissociation of the sexual sphere, the dispersion of sexual fragments and the diversification of intimate relationships. The outcome of the neosexual revolution may be described as "lean sexuality" and "self-sex".
Background: Common warts (verrucae vulgares) are human papilloma virus (HPV) infections with a high incidence and prevalence, most often affecting hands and feet, and able to impair quality of life. The roughly 30 different therapeutic regimens described in the literature reveal the lack of a single striking strategy. Recent publications showed positive results of photodynamic therapy (PDT) with 5-aminolevulinic acid (5-ALA) in the treatment of HPV-induced skin diseases, especially warts, using visible light (VIS) to stimulate an absorption band of endogenously formed protoporphyrin IX. Additional experience with adding water-filtered infrared A (wIRA) during 5-ALA-PDT revealed positive effects. Aim of the study: First prospective randomised controlled blind study including PDT and wIRA in the treatment of recalcitrant common hand and foot warts, comparing "5-ALA cream (ALA) vs. placebo cream (PLC)" and "irradiation with visible light and wIRA (VIS+wIRA) vs. irradiation with visible light alone (VIS)". Methods: Pre-treatment with keratolysis (salicylic acid) and curettage. PDT treatment: topical application of 5-ALA (Medac) in "unguentum emulsificans aquosum" vs. placebo; irradiation: combination of VIS and a large amount of wIRA (Hydrosun® radiator type 501, 4 mm water cuvette, water-filtered spectrum 590-1400 nm, contact-free, typically painless) vs. VIS alone. Post-treatment with retinoic acid ointment. One to three therapy cycles every 3 weeks. Main variable of interest: "percent change of total wart area of each patient over time" (18 weeks). Global judgement by patient and by physician and subjective rating of feeling/pain (visual analogue scales). 80 patients with therapy-resistant common hand and foot warts were randomly assigned to one of the four therapy groups, with comparable numbers of warts at comparable sites in all groups. Results: The individual total wart area decreased during 18 weeks in group 1 (ALA+VIS+wIRA) and in group 2 (PLC+VIS+wIRA) significantly more than in both groups without wIRA, group 3 (ALA+VIS) and group 4 (PLC+VIS): medians and interquartile ranges -94% (-100%/-84%) vs. -99% (-100%/-71%) vs. -47% (-75%/0%) vs. -73% (-92%/-27%). After 18 weeks the two groups with wIRA differed remarkably from the two groups without wIRA: 42% vs. 7% completely cured patients; 72% vs. 34% vanished warts. Global judgement by patient and by physician and subjective rating of feeling were much better in the two groups with wIRA than in the two groups without wIRA. Conclusions: The complete treatment scheme for hand and foot warts described above (keratolysis, curettage, PDT treatment, irradiation with VIS+wIRA, retinoic acid ointment; three therapy cycles every 3 weeks) proved to be effective. Within this treatment scheme, wIRA as a non-invasive and painless treatment modality proved to be an important, effective factor, while photodynamic therapy with 5-ALA in the described form did not contribute recognisably to clinical improvement, neither alone (without wIRA) nor in combination with wIRA. For future treatment of warts an improved scheme is proposed: one treatment cycle (keratolysis, curettage, wIRA, without PDT) once a week for six to nine weeks. © 2004 Fuchs et al; licensee German Medical Science. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL: http://www.egms.de/en/gms/volume2.shtml
We present an overview of the mathematics underlying the quantum Zeno effect. Classical functional-analytic results are put into perspective and compared with more recent ones. This yields some new insights into the mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated with the issue of existence of the Zeno dynamics. On the grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T. See the corresponding presentations: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
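The object at the heart of Misra's and Sudarshan's theorem is the limit of repeated projective measurements interleaved with unitary evolution; schematically (H the Hamiltonian, P the measurement projection):

```latex
% Zeno dynamics as the strong limit of measurement-interrupted evolution:
W(t) \;=\; \operatorname*{s-lim}_{n\to\infty}\,
  \bigl[\,P\,e^{-\mathrm{i}Ht/n}\,P\,\bigr]^{n}
% When the limit exists and is suitably continuous at t = 0, the evolution
% remains confined to the subspace P\mathcal{H}: the Zeno paradox.
```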
We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS classification: 03.65.Xp, 05.30.-d, 02.30. See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" - http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1167/
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Terahertz (THz) waves could not be generated except by huge systems, such as free electron lasers, until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1]. The first method, using the Auston switch, could generate frequencies up to 1 THz [2]. Subsequent efforts to extend the frequency limit, combining antennas for generation and detection, reached several THz [3, 4]. The technique has since developed, gradually filling up the so-called "THz gap". At the same time, much research has aimed at increasing the output power as well [5-7]. In the 1990s, a major advance in the accessible frequency band was brought by non-linear optical methods [8-11]. These techniques drastically expanded the frequency region and recently made possible measurements up to 41 THz [12]. In parallel, other approaches have yielded new generation and detection methods, for CW THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at a low temperature of 60 K [20-22]. This research attracts a lot of attention because it could be the breakthrough that lets the THz technique spread into industry as well as research, thanks to low costs and easier operation. Short-pulse laser technology has naturally helped the THz field to develop: with the appearance of stable Ti:sapphire lasers and high-power chirped pulse amplification (CPA) lasers in place of dye lasers, much work has concentrated on pulse compression and amplification techniques [23]. Viewed from the application side, the THz technique has come into the limelight as a promising measurement method. The discovery of absorption peaks of proteins and DNA in the THz region has, for several years now, been pushing the technique toward practical use in medicine and pharmaceutical science [24-27]. It is also known that light polar molecules have absorption lines in this region; therefore, ideas for gas and water content monitoring in the chemical and food industries have been proposed [28-32]. Furthermore, many reports, such as measurements of carrier distribution in semiconductors, the refractive index of thin films, and object shapes as radar, indicate that this technique has a wide range of applications [33-37]. I believe that it is worth the challenge to apply it to the steel-making industry, due to its unique advantages. The THz wavelength range of 30-300 µm offers both insensitivity to the surface roughness of steel products and detection with sub-millimeter precision, for remote surface inspection. There is also a possibility that it can measure the thickness or dielectric constants of relatively highly conductive materials, because of its high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, it could be applicable to measurements at high temperature, since it is less influenced by thermal radiation than visible and infrared light. These ideas have motivated me to start this THz work.
The Kochen-Specker theorem has been discussed intensely ever since its original proof in 1967. It is one of the central no-go theorems of quantum theory, showing the non-existence of a certain kind of hidden-states model. In this paper, we first offer a new, non-combinatorial proof for quantum systems with a type I_n factor as algebra of observables, including I_infinity. Afterwards, we give a proof of the Kochen-Specker theorem for an arbitrary von Neumann algebra R without summands of types I_1 and I_2, using a known result on two-valued measures on the projection lattice P(R). Some connections with the presheaf formulations proposed by Isham and Butterfield are made.
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new "securitisation framework" to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB banks are well capitalized or gain little from taking risks, they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL Classification: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in the macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot easily be reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL Classification: E32, E62.
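In this class of models, the rule-of-thumb households are typically assumed to consume their current labor income period by period, while the remaining households behave in the standard forward-looking way; a sketch of the consumption block under that assumption (λ the rule-of-thumb share; notation illustrative):

```latex
% Rule-of-thumb (non-Ricardian) consumers spend current labor income:
C_t^{r} \;=\; \frac{W_t}{P_t}\,N_t^{r}
% Ricardian households choose C_t^{o} from the usual Euler equation;
% aggregate consumption mixes the two groups:
C_t \;=\; \lambda\,C_t^{r} \;+\; (1-\lambda)\,C_t^{o}
```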
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL Classification: E5, E61, D78
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger-cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market is, and the magnitude of this sensitivity increases when credit quality becomes worse. Finally, the CDS market plays a more important role in price discovery than the corporate bond market. JEL Classification: G10, G14, C32.
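A lead-lag finding of this kind is what a bivariate Granger-causality test formalizes. A minimal sketch with synthetic data standing in for the paper's market series (statsmodels' grangercausalitytests checks whether the second column helps predict the first; all series and parameters here are illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic weekly series standing in for stock returns and CDS spread changes:
# CDS changes lag stock returns by one week, so stock returns should
# Granger-cause CDS changes but not vice versa.
rng = np.random.default_rng(42)
n = 200
stock = rng.normal(0, 1, n)
cds = np.empty(n)
cds[0] = rng.normal()
for t in range(1, n):
    cds[t] = -0.5 * stock[t - 1] + rng.normal(0, 0.5)

# Test whether lagged stock returns help predict CDS spread changes
# beyond the CDS series' own lags; small F-test p-values say yes.
data = pd.DataFrame({"d_cds": cds, "r_stock": stock})
results = grangercausalitytests(data[["d_cds", "r_stock"]], maxlag=2)
```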
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over and above the direct news announcement effects. JEL Classification: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, when firms are less profitable, and when poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL Classification: D82, G21, G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia, spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication and a lower probability of potentially harmful co-investment, and facilitate board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, in conjunction with an increased probability of investment in high-tech companies. Classification: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Classification: C1, G1
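Concretely, the realized beta is the ratio of the realized covariance with the market to the realized market variance, both built from intraday returns; a sketch of the construction (notation mine, not the paper's):

```latex
% Intraday returns r_{i,j}, r_{m,j}, j = 1..J, within period t:
\widehat{\beta}_{i,t} \;=\;
  \frac{\sum_{j=1}^{J} r_{i,j}\,r_{m,j}}{\sum_{j=1}^{J} r_{m,j}^{2}}
% Numerator and denominator may each be highly persistent (fractionally
% integrated) while their ratio behaves like a stationary I(0) process.
```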
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Classification: C62, D83, D84, E31
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. Classification: E31, E52, E58, E61