The phase diagram of strongly interacting matter is discussed within an exactly solvable statistical model of quark-gluon bags. The model predicts two phases of matter: the hadron gas at low temperature T and baryonic chemical potential muB, and the quark-gluon gas at high T and/or muB. The nature of the phase transition depends on the form of the bag mass-volume spectrum (its pre-exponential factor), which is expected to change with the muB/T ratio. It is therefore likely that the line of the 1st order transition at high muB/T is followed by the line of the 2nd order phase transition at intermediate muB/T, and then by the lines of "higher order transitions" at low muB/T.
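Schematically (a generic exponential mass-volume spectrum of the Hagedorn type, used here only to illustrate the statement above, not the paper's exact form), such bag spectra have the structure

\[
\rho(m, v) \sim v^{\gamma}\, m^{\delta} \exp(m / T_H),
\]

where T_H is a limiting (Hagedorn-like) temperature and the pre-exponential powers gamma and delta govern the order of the transition; their variation with muB/T is what can turn a first-order line into second- and then higher-order transitions.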
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l'Adour, France. For both flights a ClO daylight and nighttime vertical profile could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l'Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, similarly good agreement is found, except around 650 K potential temperature (~26 km altitude). However, for solar zenith angles greater than 86°–87° the simulated ClO mixing ratios substantially overestimate measured ClO, by approximately a factor of 2.5 or more, for both flights. We therefore conclude that, except at solar zenith angles greater than 86°–87°, where the model simulations substantially overestimate ClO observations, the presented ClO measurements give no indication of substantial uncertainties in midlatitude stratospheric chlorine chemistry.
Results are presented from a search for the decays D0 -> K^- pi^+ and D0-bar -> K^+ pi^- in a sample of 3.8×10^6 central Pb-Pb events collected at a beam energy of 158A GeV by NA49 at the CERN SPS. No signal is observed. An upper limit on D0 production is derived and compared to predictions from several models.
Particle production in central Pb+Pb collisions was studied with the NA49 large acceptance spectrometer at the CERN SPS at beam energies of 20, 30, 40, 80, and 158 GeV per nucleon. A change of the energy dependence is observed around 30A GeV for the yields of pions and strange particles as well as for the shapes of the transverse mass spectra. At present only a reaction scenario with onset of deconfinement is able to reproduce the measurements.
Despite considerable restructuring and many innovations in recent years, the securities transaction industry in the European Union remains a highly inefficient and inconsistently configured system for cross-border transactions. This paper analyzes the functions performed, the institutions involved, and the parameters that shape market and ownership structure in the industry. Of particular interest are the microeconomic incentives of the main players, which can conflict with social welfare. We develop a framework and analyze three consistent systems for the securities transaction industry in the EU that offer superior efficiency to the current, inefficient arrangement. Some policy advice is given on selecting the 'best' system for the Single European Financial Market.
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely to be driven by the attractiveness of these investment opportunities, it remains an open question whether certain integration activities are also efficient, both from a social welfare perspective and from the exchanges' perspective. Academic contributions so far have analyzed different business models primarily from the social welfare perspective, whereas there is little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity, which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models. This finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) A strong in-house IT competence seems to be beneficial in overcoming this operational complexity.
Academic contributions on the demutualization of stock exchanges have so far been predominantly devoted to social welfare issues, whereas there is scarce empirical literature on the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the governance arrangement best suited to improve an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we first calculate individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that - amongst others - map the institutional arrangement of the exchanges, in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals. However, they perform relatively poorly as far as productivity growth is concerned. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership possibly lie in other areas, such as solving conflicts of interest between overly heterogeneous members.
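For reference, the first-step efficiency scores are typically obtained by solving, for each exchange under evaluation, a small linear program; the input-oriented CCR formulation below is the textbook variant (the paper's exact DEA specification is not reproduced here):

\[
\min_{\theta, \lambda}\; \theta
\quad \text{s.t.} \quad
\sum_j \lambda_j x_j \le \theta\, x_0, \qquad
\sum_j \lambda_j y_j \ge y_0, \qquad
\lambda_j \ge 0,
\]

where x_j and y_j are the input and output vectors of exchange j, (x_0, y_0) are those of the exchange being evaluated, and the optimal theta <= 1 is its technical efficiency score.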
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
We argue that the shape of the system-size dependence of strangeness production in nucleus-nucleus collisions can be understood in a picture that is based on the formation of clusters of overlapping strings. A string percolation model combined with a statistical description of the hadronization yields a quantitative agreement with the data at sqrt s_NN = 17.3 GeV. The model is also applied to RHIC energies.
We investigate the sensitivity of several observables to the density dependence of the symmetry potential within the microscopic transport model UrQMD (ultrarelativistic quantum molecular dynamics model). The same systems are used to probe the symmetry potential at both low and high densities. The influence of the symmetry potentials on the yields of pi-, pi+, the pi-/pi+ ratio, the n/p ratio of free nucleons and the t/3He ratio is studied for neutron-rich heavy ion collisions (208Pb+208Pb, 132Sn+124Sn, 96Zr+96Zr) at E_b=0.4A GeV. We find that these multiple probes provide comprehensive information on the density dependence of the symmetry potential.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alarm the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and contains a novel protein domain, which is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD, because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and the rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role of these proteins in phytohormone response, embryo development, and programmed cell death induced by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline-oriented approaches aimed at improving pain treatment and increasing physical activity have proven effective in several contexts outside the primary care setting, for instance the Arthritis Self-Management Programs (ASMPs). But it remains unclear whether these comprehensive evidence-based approaches can improve patients' quality of life when provided in a primary care setting. Methods/Design: PraxArt is a cluster randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive evidence-based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the ACR criteria. In intervention group I, GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II the same is provided, but in addition a practice nurse will be trained to monitor, via monthly telephone calls, adherence to GPs' prescriptions and advice, and to ask about increasing pain and possible side effects of medication. In the control group no intervention will be applied at all. The main outcome measurement for patients' QoL is the GERMAN-AIMS2-SF questionnaire. In addition, data about patients' satisfaction (using a modified EUROPEP tool), medication, health care utilization, comorbidity, physical activity and depression (using PHQ-9) will be retrieved. Measurements (pre data collection) will take place in months I-III, starting in June 2005. Post data collection will be performed after 6 months. Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence-based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany. If the evaluation of the presented approach reveals a clear benefit, it is planned to provide this GP-centred intervention on a much larger scale.
Cancer has become one of the most fatal diseases. The Heidelberg Heavy Ion Cancer Therapy (HICAT) has the potential to become an important and efficient treatment method because of its excellent "Bragg peak" characteristics and on-line irradiation control by PET diagnostics. The dedicated Heidelberg Heavy Ion Cancer Therapy Project includes two ECR ion sources, an RF linear injector, a synchrotron and three treatment rooms. It will deliver 4×10^10 protons, 1×10^10 He ions, 1×10^9 carbon ions, or 5×10^8 oxygen ions per synchrotron cycle at beam energies of 50–430 AMeV for the treatments. The RF linear injector consists of a 400 AkeV RFQ and a very compact 7 AMeV IH-DTL accelerator operated at 216.816 MHz. The development of the IH-DTL within the HICAT project is a great challenge with respect to the present state of the DTL art for the following reasons:
• the highest operating frequency (216.816 MHz) of all IH-DTL cavities;
• an extremely large cavity length-to-diameter ratio of about 11;
• an IH-DTL with three internal triplets;
• the highest effective voltage gain per meter (5.5 MV/m);
• a very short MEBT design for the beam matching.
The following achievements have been reached during the development of the IH-DTL injector for HICAT: The KONUS beam dynamics design with the LORASR code fulfills the beam requirement of the HICAT synchrotron at the injection point. The simulations for the IH-DTL injector have been performed not only with a homogeneous input beam, but also with the actual particle distribution from the exit of the HICAT RFQ accelerator as delivered by the PARMTEQ code. The output longitudinal normalized emittance for 95% of all particles is 2.00 AkeV ns, the emittance growth is less than 24%, while the X-X' and Y-Y' normalized emittances are 0.77 mm mrad and 0.62 mm mrad, respectively. The emittance growth in X-X' is less than 18%, and the emittance growth in Y-Y' is less than 5%. Based on the transverse envelopes of the transported particles, the buncher drift tubes at the RFQ high-energy end were redesigned to obtain a higher transit time factor for this novel RFQ internal buncher. An optimized effective buncher gap voltage of 45.4 kV has been calculated to deliver a minimized longitudinal beam emittance, while the influence of the effective buncher voltage on the transverse emittance can be neglected. Six different tuning concepts were investigated in detail while tuning the 1:2 scaled HICAT IH model cavity. 'Volume tuning' by a variation of the cavity cross-sectional area can compensate the unbalanced capacitance distribution in case of an extreme beta-lambda variation along an IH cavity. 'Additional capacitance plates', copper sheets clamped on drift tube stems, are a fast way of checking the tuning sensitivity, but they will finally be replaced by massive copper blocks mounted on the drift tube girders. 'Lens coupling' is an important tuning to stabilize the operation mode and to increase or decrease the coupling between neighboring sections. 'Tube tuning' is the fine tuning concept and also the standard tuning method to reach the needed field distributions as well as the gap voltage distributions. 'Undercut tuning' is a very sensitive tuning for the end sections and with respect to the voltage distribution balance along the structure. The different types of 'plungers' in the 3rd and 4th sections have different effects on the resonance frequency and on the field distribution.
The different triplet stems and the geometry of the cavity end have also been investigated to reach the design field and voltage distributions. Finally, the needed uniform field distribution along the IH-DTL cavity and the corresponding effective voltage distribution were realized; the remaining maximum gap voltage difference was less than 5% for the model cavity. Several important higher order modes were also measured. The RF tuning of the IH-DTL model cavity delivers the final geometry parameters of the IH-DTL power cavity. A rectangular cavity cross section was adopted for the first time for this IH-DTL cavity. This eases the realization of the volume tuning concept in the 1st and 2nd sections. Lens coupling determines the final distance between the triplet and the girder. The triplets are mounted on the lower cavity half shell. Microwave Studio simulations have been carried out not only for the HICAT model cavity, but also for the final geometry of the IH-DTL power cavity. The field distribution for the operation mode H110 fits the model cavity measurement, as do the higher order modes. The simulations confirm the IH-DTL geometrical design. On the other hand, the precision of one simulation with 2.3 million mesh points for the full cross-sectional area, and a CPU time of more than 15 hours on a DELL PC with a 2.4 GHz Intel Pentium 4 and 2.096 GB RAM, were exploited to their limit when calculating the real parameters for the two final machining iterations during production. The shunt impedance of the IH-DTL power cavity is estimated by comparison with existing tanks to be about 195.8 MΩ/m, which fits the simulation result of 200.3 MΩ/m when the conductivity is reduced to 5.0×10^7 Ω^-1 m^-1. The effective shunt impedance is 153 MΩ/m. The needed RF power is 755 kW. The expected quality factor of the IH-DTL cavity is about 15600. The IH-DTL power cavity tuning measurements before cavity copper plating have been performed. The results are within the specifications. There is no doubt that the needed accuracy of the voltage distribution will be reached with the foreseen fine tuning concepts in the last steps.
Fluctuations and NA49
(2005)
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation" a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. JEL Classification: E31, E52, E58, E61. July 2005.
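As a stylized illustration of the contrast described above (generic textbook forms, not the paper's exact specifications), a linear rule and one opportunistic variant can be written as

\[
i_t = r^* + \pi_t + \alpha(\pi_t - \pi^*) + \beta y_t
\qquad \text{(linear)},
\]
\[
i_t = r^* + \pi_t + \alpha\, g(\pi_t - \pi^*) + \beta y_t,
\qquad
g(x) = \begin{cases} 0, & |x| \le \bar{b}, \\ x, & |x| > \bar{b}, \end{cases}
\qquad \text{(opportunistic)},
\]

so the opportunistic rule leaves inflation untreated inside a band of half-width \(\bar{b}\) around the target and responds aggressively only outside it.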
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. JEL Classification: C6, D9, E2. July 28, 2005.
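A minimal sketch of how such a rootfinding-free backward step can work, in the spirit of an endogenous-gridpoints scheme for a consumption-savings problem (the CRRA specification, parameter values, and shock grid below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper)
rho, beta, R = 2.0, 0.96, 1.03         # CRRA curvature, discount factor, gross return
a_grid = np.linspace(0.01, 10.0, 100)  # exogenous grid over end-of-period assets
shocks = np.array([0.9, 1.0, 1.1])     # discrete income shocks (assumed)
probs = np.array([0.25, 0.50, 0.25])   # shock probabilities

u_prime = lambda c: c ** (-rho)              # marginal utility u'(c)
u_prime_inv = lambda v: v ** (-1.0 / rho)    # closed-form inverse of u'

def backward_step(m_grid_next, c_next):
    """One backward-induction step with no rootfinding: the Euler equation
    u'(c) = beta * R * E[u'(c')] is solved by inverting u' in closed form."""
    m_prime = R * a_grid[:, None] + shocks[None, :]     # next-period resources
    c_prime = np.interp(m_prime, m_grid_next, c_next)   # next-period consumption
    emu = (probs[None, :] * u_prime(c_prime)).sum(axis=1)
    c_today = u_prime_inv(beta * R * emu)               # invert instead of rootfind
    m_today = a_grid + c_today                          # endogenous grid of resources
    return m_today, c_today

# Terminal rule "consume everything", then iterate backwards to convergence.
m_grid = c_vals = np.linspace(0.01, 20.0, 100)
for _ in range(200):
    m_grid, c_vals = backward_step(m_grid, c_vals)
```

Because the Euler equation is inverted analytically on a grid of end-of-period assets, each consumption value is obtained in one arithmetic step, which is exactly what removes the rootfinding loop of conventional solution methods.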
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate, in a spatially resolved way, global-scale groundwater recharge. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model) which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the model results before tuning, and the new estimates are more similar to country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
Prion diseases, also called transmissible spongiform encephalopathies, are a group of fatal neurodegenerative conditions that affect humans and a wide variety of animals. To date no therapeutic or prophylactic approach against prion diseases is available. The causative infectious agent is the prion, also termed PrPSc, which is a pathological conformer of a cellular protein named prion protein PrPc. Prions are thought to multiply upon conversion of PrPc to PrPSc in a self-propagating manner. Immunotherapeutic strategies directed against PrPc represent a possible approach to preventing or curing prion diseases. Accordingly, it has already been shown in animal models that passive immunization delays the onset of prion diseases. The present thesis aimed at the development of a candidate vaccine for active immunization against prion diseases, an immune response that requires circumventing host tolerance to the self-antigen PrPc. The vaccine development was approached using virus-like particles (retroparticles) derived from either the murine leukemia virus (MLV) or the human immunodeficiency virus (HIV). The display of PrP on the surface of such particles was addressed for both the cellular and the pathogenic form of PrP. The display of PrPc was achieved by fusion either to the transmembrane domain of the platelet-derived growth factor receptor (PDGFR) or to the N-terminal part of the viral envelope protein (Env). In both cases, the corresponding PrPD- and PrPE-retroparticles were successfully produced and analyzed via immunofluorescence, Western blot analysis, immunogold electron microscopy, and ELISA methods. Both PrPD- and PrPE-retroparticles showed effective incorporation of N-terminally truncated forms of PrPc, but not of the complete protein. The displayed PrPc showed the typical glycosylation pattern, which was specifically removed by a glycosidase enzyme. Upon display on retroparticles, PrPc remained detectable by PrP-specific antibodies under native conditions. Electron microscopy analysis of the PrPc variants revealed no alteration of the characteristic retroviral morphology of the generated particles. MLV-derived PrPD-retroparticles were successfully used in immunization studies. In contrast to approaches using bacterially expressed PrPc, the immunization of mice resulted in a specific antibody response. The display of the pathogenic isoform was attempted by two different strategies. The first was directed at converting the proteinase K (PK) sensitive form of PrP on the surface of PrPD-retroparticles into the PK-resistant form. Despite specific adaptation of the PK digestion assay for detecting resistant PrP, no PrP conversion was observed for PrPD-retroparticles. The second approach utilized a replication-competent variant of the ecotropic MLV displaying PrPc on the viral Env protein. This MLV variant was stable in cell culture for six passages but did not replicate on scrapie-infected, PrPSc-propagating neuroblastoma cells. Thus, besides PrPc-displaying virus-like particles, a replication-competent MLV variant was obtained that stably incorporated PrPc at the N-terminus of the viral Env protein. The incorporation of cell-surface-located PrPc into particles was expected from previously obtained data on protein display in the context of retrovirus-derived particles. Thus, the lack of incorporation observed for the complete PrPc sequence was rather unexpected; incorporation was inhibited both upon fusion to the PDGFR and upon fusion to the viral Env.
In contrast to N-terminally truncated PrPc, the complete PrPc was shown to exhibit increased cell-surface internalization rates and half-life times, which may have contributed to the observed results. The PrP-vaccination approach described in this work represents the first system to successfully induce PrP-specific antibody responses against the prion protein in wild-type mice. Possible explanations are the induction of specific T cell help or effects of innate immunity, respectively. The MLV- and HIV-derived particles bearing the PrP-coding sequence, or the replication-competent variants generated during this thesis, might help to further improve the PrP-specific immune response.
Using CORSIKA for simulating extensive air showers, we study the relation between the shower characteristics and features of hadronic multiparticle production at low energies. We report on investigations of the typical energies and phase space regions of secondary particles that are important for muon production in extensive air showers. Possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.
Globalized justice - fragmented justice. Human rights violations by "private" transnational actors
(2005)
Plenary lecture, World Congress on Philosophy of Law and Social Philosophy, 24-29 May, Granada 2005. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch "private" transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 529-546.
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
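A minimal sketch of this hybrid paradigm (all class, attribute, and function names below are hypothetical illustrations, not ROSANA-ML's actual API): hand-coded, universally valid filters prune the candidate set, after which a learned scorer performs the potentially genre-specific selection.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Mention:                 # hypothetical mention representation
    text: str
    person: str                # e.g. "3"
    number: str                # e.g. "sg" or "pl"
    gender: str                # e.g. "fem", "masc", "neut"

def agrees(anaphor: Mention, cand: Mention) -> bool:
    """Hand-coded, once-for-all filter: agreement in person, number, gender.
    (A full system would also apply syntactic disjoint-reference filters.)"""
    return (anaphor.person == cand.person
            and anaphor.number == cand.number
            and anaphor.gender == cand.gender)

def resolve(anaphor: Mention,
            candidates: list[Mention],
            scorer: Callable[[Mention, Mention], float]) -> Optional[Mention]:
    """Manual filtering followed by machine-learned antecedent selection."""
    viable = [c for c in candidates if agrees(anaphor, c)]   # filtering stage
    if not viable:
        return None
    # Selection stage: `scorer` stands in for a trained model (e.g. a
    # decision tree) ranking candidates by learned preference strategies.
    return max(viable, key=lambda c: scorer(anaphor, c))
```

The division of labor mirrors the argument above: the filters never need retraining, while the scorer can be re-learned per corpus or genre.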
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as inputs, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms in the balance are compared to results of other global models, and uncertainties are described. The total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emission Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, the total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas of the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
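Made explicit, the surface balance described above is simply inputs minus outputs, using the paper's own term abbreviations:

\[
N_{\mathrm{surplus}}
= \underbrace{\left(N_{\mathrm{anm}} + N_{\mathrm{fert}} + N_{\mathrm{dep}} + N_{\mathrm{fix}}\right)}_{\text{inputs}}
- \underbrace{\left(N_{\mathrm{exp}} + N_{\mathrm{vol}}\right)}_{\text{outputs}},
\]

computed per 0.5° by 0.5° grid cell and summed globally to give the Tg N yr-1 totals quoted above.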
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4,428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation.
Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. However, they lack efficiency, as they are hard to enforce in practice. In this paper, we discuss a supportive infrastructure aimed at the distribution channel: as soon as a picture is publicly available, the exposed individual has a chance to find it and take proper action.
We consider Schwarz maps for triangles whose angles are rather general rational multiples of pi. Under which conditions can they have algebraic values at algebraic arguments? The answer is based mainly on considerations of complex multiplication of certain Prym varieties in Jacobians of hypergeometric curves. The paper can serve as an introduction to transcendence techniques for hypergeometric functions, but contains also new results and examples.
The main subjects of this survey are Belyi functions and dessins d'enfants on Riemann surfaces. Dessins are certain bipartite graphs on 2-manifolds defining there a conformal and even an algebraic structure. In principle, all deeper properties of the resulting Riemann surfaces or algebraic curves should be encoded in these dessins, but the decoding turns out to be difficult and leads to many open problems. We emphasize arithmetical aspects like Galois actions, the relation to the ABC theorem in function fields, and arithmetic questions in the uniformization theory of algebraic curves defined over number fields.
Presentation at the AMS Southeastern Sectional Meeting, 14-16 March 2003, and the Workshop 'Asymptotic Analysis, Stability, and Generalized Functions', 17-19 March 2003, Louisiana State University, Baton Rouge, Louisiana. See the corresponding papers "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra".
Background: Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is performed mainly in patients with high-risk or advanced hematologic malignancies and congenital or acquired aplastic anemias. Given the significant risk of graft failure after allo-HSCT from alternative donors and the risk of relapse in recipients transplanted for malignancy, the precise monitoring of posttransplant hematopoietic chimerism is of utmost interest. Useful molecular methods for chimerism quantification after allogeneic transplantation, aimed at distinguishing precisely between donor and recipient cells, are PCR-based analyses of polymorphic DNA markers. Such analyses can be performed regardless of donor's and recipient's sex. Additionally, in patients after sex-mismatched allo-HSCT, fluorescent in situ hybridization (FISH) can be applied. Methods: We compared different techniques for the analysis of posttransplant chimerism, namely FISH and PCR-based molecular methods with automated detection of fluorescent products in an ALFExpress DNA Sequencer (Pharmacia) or an ABI 310 Genetic Analyzer (PE). We used the Spearman correlation test. Results: We found a high correlation between the results obtained from the PCR/ALFExpress and the PCR/ABI 310 Genetic Analyzer. Lower, but still positive, correlations were found between the results of the FISH technique and the results obtained using automated DNA sizing technology. Conclusions: All the methods applied enable a rapid and accurate detection of post-HSCT chimerism.
Background: To investigate the occupational risk of tuberculosis (TB) infection in a low-incidence setting, data from a prospective study of patients with culture-confirmed TB conducted in Hamburg, Germany, from 1997 to 2002 were evaluated. Methods: M. tuberculosis isolates were genotyped by IS6110 RFLP analysis. Results of contact tracing and additional patient interviews were used for further epidemiological analyses. Results: Out of 848 cases included in the cluster analysis, 286 (33.7%) were classified into 76 clusters comprising 2 to 39 patients. In total, two patients in the non-cluster and eight patients in the cluster group were health-care workers. Logistic regression analysis confirmed work in the health-care sector as the strongest predictor for clustering (OR 17.9). However, only two of the eight transmission links among the eight clusters involving health-care workers had been detected previously. Overall, conventional contact tracing performed before genotyping had identified only 26 (25.2%) of the 103 contact persons with the disease among the clustered cases whose transmission links were epidemiologically verified. Conclusion: Recent transmission was found to be strongly associated with health-care work in a setting with low incidence of TB. Conventional contact tracing alone was shown to be insufficient to discover recent transmission chains. The data presented also indicate the need for establishing improved TB control strategies in health-care settings.
Introduction: ScFv(FRP5)-ETA is a recombinant antibody toxin with binding specificity for ErbB2 (HER2). It consists of an N-terminal single-chain antibody fragment (scFv), genetically linked to truncated Pseudomonas exotoxin A (ETA). Potent antitumoral activity of scFv(FRP5)-ETA against ErbB2-overexpressing tumor cells was previously demonstrated in vitro and in animal models. Here we report the first systemic application of scFv(FRP5)-ETA in human cancer patients.
Methods: We performed a phase I dose-finding study with the objective of assessing the maximum tolerated dose and the dose-limiting toxicity of intravenously injected scFv(FRP5)-ETA. Eighteen patients suffering from ErbB2-expressing metastatic breast cancers, prostate cancers, head and neck cancer, non-small cell lung cancer, or transitional cell carcinoma were treated. Dose levels of 2, 4, 10, 12.5, and 20 μg/kg scFv(FRP5)-ETA were administered as five daily infusions each for two consecutive weeks.
Results: No hematologic, renal, and/or cardiovascular toxicities were noted in any of the patients treated. However, transient elevation of liver enzymes was observed, and considered dose limiting, in one of six patients at the maximum tolerated dose of 12.5 μg/kg, and in two of three patients at 20 μg/kg. Fifteen minutes after injection, peak concentrations of more than 100 ng/ml scFv(FRP5)-ETA were obtained at a dose of 10 μg/kg, indicating that predicted therapeutic levels of the recombinant protein can be applied without inducing toxic side effects. Induction of antibodies against scFv(FRP5)-ETA was observed 8 days after initiation of therapy in 13 patients investigated, but only in five of these patients could neutralizing activity be detected. Two patients showed stable disease and in three patients clinical signs of activity in terms of signs and symptoms were observed (all treated at doses ≥ 10 μg/kg). Disease progression occurred in 11 of the patients.
Conclusion: Our results demonstrate that systemic therapy with scFv(FRP5)-ETA can be safely administered up to a maximum tolerated dose of 12.5 μg/kg in patients with ErbB2-expressing tumors, justifying further clinical development.
First paragraph (this article has no abstract): Persistent stimulation of nociceptors results in sensitization of nociceptive sensory neurons, which is associated with hyperalgesia and allodynia. The release of NO and subsequent synthesis of cGMP in the spinal cord are involved in this process. cGMP-dependent protein kinase I (PKG-I) has been suggested to act as a downstream target of cGMP, but its exact role in nociception had not yet been characterized. To further evaluate the NO/cGMP/PKG-I pathway in nociception, we assessed the effects of PKG-I inhibition and activation in the rat formalin assay and analyzed the nociceptive behavior of PKG-I-/- mice. Open access article.
Background: In general, shell-less slugs are considered to be slimy animals with a rather dull appearance and a pest to garden plants. But marine slugs usually are beautifully coloured animals belonging to the less-known Opisthobranchia. They are characterized by a large array of interesting biological phenomena, usually related to foraging and/or defence. In this paper our knowledge of shell reduction, correlated with the evolution of different defensive and foraging strategies, is reviewed, and new results on the histology of different glandular systems are included. Results: Based on a phylogeny obtained from morphological and histological data, the parallel reduction of the shell within the different groups is outlined. Major food sources are given, and glandular structures are described as possible defensive structures in the external epithelia and as internal glands. Conclusion: According to phylogenetic analyses, the reduction of the shell correlates with the evolution of defensive strategies. Many different kinds of defence structures, like cleptocnides, mantle dermal formations (MDFs), and acid glands, are only present in shell-less slugs. In several cases, it is not clear whether the defensive devices were a prerequisite for the reduction of the shell, or whether reduction occurred before. Reduction of the shell and acquisition of different defensive structures had an implication on the exploration of new food sources and therefore likely enhanced the adaptive radiation of several groups.
Background: Tumor development remains one of the major obstacles following organ transplantation. Immunosuppressive drugs such as cyclosporine and tacrolimus directly contribute to enhanced malignancy, whereas the influence of the novel compound mycophenolate mofetil (MMF) on tumor cell dissemination has not been explored. We therefore investigated the adhesion capacity of colon, pancreas, prostate and kidney carcinoma cell lines to endothelium, as well as their beta1 integrin expression profile before and after MMF treatment. Methods: Tumor cell adhesion to endothelial cell monolayers was evaluated in the presence of 0.1 and 1 μM MMF and compared to unstimulated controls. beta1 integrin analysis included alpha1beta1 (CD49a), alpha2beta1 (CD49b), alpha3beta1 (CD49c), alpha4beta1 (CD49d), alpha5beta1 (CD49e), and alpha6beta1 (CD49f) receptors, and was carried out by reverse transcriptase-polymerase chain reaction, confocal microscopy and flow cytometry. Results: Adhesion of the colon carcinoma cell line HT-29 was strongly reduced in the presence of 0.1 μM MMF. This effect was accompanied by down-regulation of alpha3beta1 and alpha6beta1 surface expression and of alpha3beta1 and alpha6beta1 coding mRNA. Adhesion of the prostate tumor cell line DU-145 was blocked dose-dependently by MMF. In contrast to MMF's effects on HT-29 cells, MMF dose-dependently up-regulated alpha1beta1, alpha2beta1, alpha3beta1, and alpha5beta1 on DU-145 tumor cell membranes. Conclusion: We conclude that MMF possesses distinct anti-tumoral properties, particularly in colon and prostate carcinoma cells. Adhesion blockage of HT-29 cells was due to the loss of alpha3beta1 and alpha6beta1 surface expression, which might contribute to a reduced invasive behaviour of this tumor entity. The enhancement of integrin beta1 subtypes observed in DU-145 cells possibly causes re-differentiation towards a low-invasive phenotype.
Apparent contradiction between negative effects of UV radiation and positive effects of sun exposure
(2005)
We would like to comment on the three contributions in the Journal of the National Cancer Institute, Vol. 97, No. 3, February 2, 2005: Kathleen M. Egan, Jeffrey A. Sosman, William J. Blot: Editorial: Sunlight and Reduced Risk of Cancer: Is the Real Story Vitamin D? (pp. 161-163) ; Marianne Berwick, Bruce K. Armstrong, Leah Ben-Porat, Judith Fine, Anne Kricker, Carey Eberle, Raymond Barnhill: Sun Exposure and Mortality From Melanoma. (pp. 195-199) ; Karin Ekström Smedby, Henrik Hjalgrim, Mads Melbye, Anna Torrång, Klaus Rostgaard, Lars Munksgaard, et al.: Ultraviolet Radiation Exposure and Risk of Malignant Lymphomas. (pp. 199-209).
Drug target 5-lipoxygenase: a link between cellular enzyme regulation and molecular pharmacology
(2005)
Leukotrienes (LT) are bioactive lipid mediators involved in a variety of inflammatory diseases such as asthma, psoriasis, arthritis and allergic rhinitis. Furthermore, LT play a role in the pathogenesis of diseases such as cancer, osteoarthritis and atherosclerosis. 5-Lipoxygenase (5-LO) is the enzyme responsible for the formation of LT. Because of the physiological properties of LT, the development of potential drugs targeting 5-LO is of considerable interest. In vitro, the activity of 5-LO is determined by Ca2+, ATP, phosphatidylcholine and lipid hydroperoxides (LOOH), and by the p38-dependent kinases MK-2/3. Inhibitor studies indicate that the MEK1/2 pathway is also involved in 5-LO activation in vivo. The main aim of this work was to investigate the role of the MEK1/2 pathway in the activation of 5-LO and the influence of the 5-LO activation route on the efficacy of potential inhibitors. In-gel kinase and in vitro kinase assays showed that 5-LO is a substrate for the extracellular signal-regulated kinases (ERK) and for MK-2/3. The addition of unsaturated fatty acids (UFA) such as AA or oleic acid increased the degree of 5-LO phosphorylation by both ERK1/2 and MK-2/3. These kinases are therefore also responsible for 5-LO activation by natural stimuli that barely affect cellular Ca2+ levels. Hence, phosphorylation of 5-LO by ERK1/2 and/or MK-2/3 represents an alternative activation mechanism besides Ca2+. Nonredox-type 5-LO inhibitors were originally developed as competitive agents that compete with AA for binding to the catalytic domain of 5-LO. Representatives of this class of inhibitors, such as ZM230487 and L-739,010, potently inhibit LT biosynthesis in various test systems, yet they failed in clinical trials. In this work we were able to show that the efficacy of these inhibitors depends on the activation route of 5-LO. Compared with 5-LO activity induced by the non-physiological stimulus Ca2+ ionophore, inhibition of cell-stress-induced activity requires 10- to 100-fold higher concentrations of the nonredox-type inhibitors. The non-phosphorylatable 5-LO mutant (Ser271Ala/Ser663Ala) was considerably more sensitive to nonredox-type inhibitors than the wild type when the enzyme was activated via the 5-LO kinases. These results show that, in contrast to Ca2+, 5-LO activation by phosphorylation markedly reduces the efficacy of the nonredox-type inhibitors. Furthermore, the pharmacological profile of the novel 5-LO inhibitor CJ-13,610 was characterized in various in vitro test systems. In intact PMNL stimulated with Ca2+ ionophore, the compound inhibited 5-LO product formation with an IC50 of 70 nM. Upon addition of exogenous AA, its potency decreased and the IC50 of the inhibitor rose, indicating a competitive mode of action. Like the established nonredox-type inhibitors, CJ-13,610 loses potency at elevated cellular peroxide levels; however, its efficacy does not depend on the activation route of 5-LO. In the development of new drugs it is thus of fundamental importance to understand the cellular context, in particular the regulation of enzyme activity.
As shown in this work, the phosphorylation of 5-LO has a strong influence on the regulation of 5-LO activity and a fundamental effect on the inhibition of the enzyme by various drugs.
This paper has shown that some of the principal arguments against shareholder voice are unfounded. It has shown that shareholders do own corporations, and that the nature of their property interest is structured to meet the needs of the relationships found in stock corporations. The paper has explained that fiduciary and other duties restrain the actions of shareholders just as they do those of management, and that critics cannot reasonably expect court-imposed fiduciary duties to extend beyond the actual powers of shareholders. It has also illustrated how, although corporate statutes give shareholders complete power to structure governance as they will, the default governance structures of U.S. corporations leave shareholders almost powerless to initiate any sort of action, and the interaction between state and federal law makes it almost impossible for shareholders to elect directors of their choice. Lastly, the paper has recalled how the percentage of U.S. corporate equities owned by institutional investors has increased dramatically in recent decades, and it has outlined some of the major developments in shareholder rights that followed this increase. I hope that this paper has deflated some of the strong rhetoric used against shareholder voice by contrasting rhetoric with law, and that it has illustrated why the picture of weak owners painted in the early 20th century should be updated to new circumstances. This will help avoid projecting an old description as a current normative model that perpetuates the inevitability of "managerialism", perhaps better known as "dirigisme".
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective technique for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Taking into account potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL Classification: C63, E31, E52.
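As a stylized illustration of the constraint at issue (a generic truncated interest-rate rule, not the paper's exact specification):

\[
i_t = \max\bigl\{0,\; r^* + \pi^* + \phi_\pi(\pi_t - \pi^*) + \phi_y y_t\bigr\},
\]

so a higher inflation target \(\pi^*\) raises the average nominal rate and thereby acts as a buffer that makes the zero bound bind less often.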
Ignoring the existence of the zero lower bound on nominal interest rates, one considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with a lower bound, calibrated to the U.S. economy, suggests that low values for the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and the deflation are much larger than in the case with policy commitment, and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL Classification: E31, E52
Using data from the Consumer Expenditure Survey, we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL Classification: E21, D91, D63, D31, G22
In this paper, we examine the cost of insurance against model uncertainty for the Euro area, considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with the highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL Classification: E52, E58, E61
This paper studies an overlapping generations model with stochastic production and incomplete markets to assess whether the introduction of an unfunded social security system leads to a Pareto improvement. When returns to capital and wages are imperfectly correlated, a system that endows retired households with claims to labor income enhances the sharing of aggregate risk between generations. Our quantitative analysis shows that, abstracting from the capital crowding-out effect, the introduction of social security represents a Pareto improving reform, even when the economy is dynamically efficient. However, the severity of the crowding-out effect in general equilibrium tends to overturn these gains. JEL Classification: E62, H55, H31, D91, D58. April 2005.
While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining the two distributions and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and, in fact, also nests the Gaussian and Laplace. JEL Classification: C16, C50. March 2005.
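For reference, the two building blocks being combined are the Gaussian and Laplace densities in their standard forms (the paper's specific combination schemes are not reproduced here):

\[
f_{\mathrm{Gauss}}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)},
\qquad
f_{\mathrm{Laplace}}(x) = \frac{1}{2b}\, e^{-|x-\mu|/b}.
\]

The Laplace density's heavier, double-exponential tails are what make it attractive for asset-return data.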
This paper computes the optimal progressivity of the income tax code in a dynamic general equilibrium model with household heterogeneity in which uninsurable labor productivity risk gives rise to a nontrivial income and wealth distribution. A progressive tax system serves as a partial substitute for missing insurance markets and enhances an equal distribution of economic welfare. These beneficial effects of a progressive tax system have to be traded off against the efficiency loss arising from distorting endogenous labor supply and capital accumulation decisions. Using a utilitarian steady state social welfare criterion we find that the optimal US income tax is well approximated by a flat tax rate of 17.2% and a fixed deduction of about $9,400. The steady state welfare gains from a fundamental tax reform towards this tax system are equivalent to 1.7% higher consumption in each state of the world. An explicit computation of the transition path induced by a reform of the current towards the optimal tax system indicates that a majority of the population currently alive (roughly 62%) would experience welfare gains, suggesting that such fundamental income tax reform is not only desirable, but may also be politically feasible. JEL Classification: E62, H21, H24.
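To make the reported tax system concrete, a short worked example (the functional form is the natural reading of a flat rate with a fixed deduction; the two illustrative incomes are my own):

\[
T(y) = 0.172 \cdot \max\{0,\; y - 9{,}400\}.
\]

For y = $20,000 this gives T = 0.172 × 10,600 ≈ $1,823, an average tax rate of about 9.1%; for y = $100,000 it gives T = 0.172 × 90,600 ≈ $15,583, an average rate of about 15.6%. Although the marginal rate is flat, the deduction makes the average rate rise with income, which is what makes such a system progressive.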
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL Classification: G12, G14, F31.
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.
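As a minimal illustration of the first of these paradigms, a one-step-ahead GARCH(1,1) variance forecast (the parameter values below are illustrative assumptions, not estimates from the chapter):

```python
import numpy as np

# GARCH(1,1) recursion: sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
omega, alpha, beta = 0.05, 0.08, 0.90   # illustrative parameters (alpha + beta < 1)

def garch_one_step_forecast(returns):
    """Filter the variance recursion through the observed returns and
    return the one-step-ahead conditional variance forecast."""
    sigma2 = np.var(returns)            # initialize at the sample variance
    for r in returns:
        sigma2 = omega + alpha * r**2 + beta * sigma2
    return sigma2

rng = np.random.default_rng(0)
demo_returns = rng.normal(0.0, 1.0, size=250)   # placeholder daily return series
print(garch_one_step_forecast(demo_returns))    # forecast of tomorrow's variance
```

The volatility forecast itself is the square root of the variance forecast; multi-step forecasts follow by iterating the same recursion with expected squared returns.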
This paper analyzes dynamic equilibrium risk sharing contracts between profit-maximizing intermediaries and a large pool of ex-ante identical agents that face idiosyncratic income uncertainty that makes them heterogeneous ex-post. In any given period, after having observed her income, the agent can walk away from the contract, while the intermediary cannot, i.e. there is one-sided commitment. We consider the extreme scenario in which the agents face no costs of walking away, and can sign up with any competing intermediary without any reputational losses. We demonstrate that not only autarky, but also partial and full insurance can obtain, depending on the relative patience of agents and financial intermediaries. Insurance can be provided because in an equilibrium contract an up-front payment effectively locks in the agent with an intermediary. We then show that our contract economy is equivalent to a consumption-savings economy with one-period Arrow securities and a short-sale constraint, similar to Bulow and Rogoff (1989). From this equivalence and our characterization of dynamic contracts it immediately follows that without costs of switching financial intermediaries debt contracts are not sustainable, even though a risk allocation superior to autarky can be achieved. JEL Classification: G22, E21, D11, D91.
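A stylized statement of the equivalent consumption-savings economy mentioned above (generic notation; the paper's precise formulation may differ): each period the agent trades one-period Arrow securities \(a_{t+1}(s')\) at prices \(q_t(s')\) subject to a short-sale constraint,

\[
c_t + \sum_{s'} q_t(s')\, a_{t+1}(s') \;\le\; y_t(s_t) + a_t(s_t),
\qquad a_{t+1}(s') \ge 0 .
\]

The no-short-sale restriction mirrors costless walking away under one-sided commitment: since claims against the agent cannot be enforced, negative asset positions, i.e. debt, are not sustainable.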