Credit card debt puzzles
(2005)
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully specified accountant-shopper model to show that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers. Classification: E210, G110
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when they face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare. It can lead to contagion between the two sectors and increase the risk of crises. Classification: G21, G22
How do markets spread risk when events are unknown or unknowable and were not anticipated in an insurance contract? While the policyholder can "hold up" the insurer for extra-contractual payments, the continuing gains from trade on a single contract are often too small to yield useful coverage. By acting as a repository of the reputations of the parties, we show that brokers provide a coordinating mechanism to leverage the collective hold-up power of policyholders. This extends the degree of both implicit and explicit coverage. This role is reflected in the terms of broker engagement, specifically in the broker's ownership of the renewal rights. Finally, we argue that brokers can be motivated to play this role when they receive commissions that are contingent on insurer profits. This last feature calls into question a recent, well-publicized attack on broker compensation by New York attorney general Eliot Spitzer. Classification: G22, G24, L14
Biophysical investigation of the ligand-induced assembling of the human type I interferon receptor
(2005)
Type I interferons (IFNs) elicit antiviral, antiproliferative and immunomodulatory responses through binding to a shared receptor consisting of the transmembrane proteins ifnar1 and ifnar2. Differential signaling by different interferons, in particular the IFNalphas and IFNbeta, suggests different modes of receptor engagement. In this work, both single ligand-receptor interactions and the formation of the extracellular part of a signaling complex were investigated with respect to thermodynamics, kinetics, stoichiometry and structural organization. Initially, an expression and purification strategy for the extracellular domain of ifnar1 (ifnar1-EC) using Sf9 insect cells was established, yielding mg amounts of glycosylated protein. Using reflectometric interference spectroscopy (RIfS), the interactions between IFNalpha2/beta and ifnar1-EC and ifnar2-EC were studied in order to understand the individual energetic contributions within the ternary complex. For IFNalpha2, a Kd of 5 µM for the interaction with ifnar1-EC was determined. Substantially tighter binding of IFNbeta to both ifnar2-EC and ifnar1-EC compared to IFNalpha2 was observed. For neither IFNalpha2 nor IFNbeta was stabilization of the complex with ifnar1-EC in the presence of soluble ifnar2-EC detectable. In addition, no direct interaction between ifnar2 and ifnar1 could be shown. Thus, stem-stem interactions between the extracellular domains of ifnar1 and ifnar2 do not seem to play a role in ternary complex formation. Furthermore, ligand-induced cross-talk between ifnar1-EC and ifnar2-EC tethered onto solid-supported, fluid lipid bilayers was investigated by RIfS and total internal reflection fluorescence spectroscopy. Very stable binding of IFNalpha2 at high receptor surface concentrations was observed, with an apparent dissociation rate constant (kd) approximately 200 times lower than for ifnar2-EC alone.
This apparent kd was strongly dependent on the surface concentration of the receptor components, suggesting kinetic rather than static stabilization, which was corroborated by competition experiments. These results indicate that signaling is activated by transient cross-talk between ifnar1 and ifnar2, which is engaged several orders of magnitude more efficiently by IFNbeta than by IFNalpha2. To probe the differential recognition of different IFNs, ifnar1-EC was dissected into sub-fragments containing different subsets of the four Ig-like domains. The appropriate folding and glycosylation of these proteins, also purified in mg amounts, were confirmed by SDS-PAGE, size exclusion chromatography and CD spectroscopy. Surprisingly, only one construct, containing all three N-terminal Ig-like domains, was active in terms of ligand binding, indicating that these domains were required. Competitive binding of IFNalpha2 and IFNbeta to both this fragment and ifnar1-EC was demonstrated. Cellular binding assays with different fragments, however, highlight the key role of the membrane-proximal Ig-like domain for the formation of an in situ IFN-receptor complex and the ensuing signal activation. Even substitution with Ig-like domains from homologous cytokine receptors did not restore high-affinity ligand binding. Receptor assembly analysis on supported lipid bilayers revealed that appropriate orientation of the receptor is required, which is controlled by the membrane-proximal Ig-like domain. All results indicate that differential signaling is encoded by the efficiency of signaling complex formation, which is controlled by the binding affinity of the IFNs to the extracellular domains of ifnar1 and ifnar2.
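The equilibrium dissociation constants (Kd) quoted above translate directly into fractional receptor occupancy via the standard single-site binding isotherm θ = [L] / ([L] + Kd). A minimal sketch; apart from the 5 µM Kd reported for IFNalpha2/ifnar1-EC, the concentrations are hypothetical:

```python
def fraction_bound(ligand_conc_uM, kd_uM):
    """Equilibrium fractional occupancy of a single-site receptor:
    theta = [L] / ([L] + Kd)."""
    return ligand_conc_uM / (ligand_conc_uM + kd_uM)

# With the Kd of 5 uM reported for IFNalpha2 / ifnar1-EC, a ligand
# concentration equal to Kd gives half-maximal occupancy.
print(fraction_bound(5.0, 5.0))    # -> 0.5
print(fraction_bound(45.0, 5.0))   # nine-fold excess over Kd: 90% occupancy
```

At [L] = Kd the receptor is half-saturated; a nine-fold excess over Kd gives 90% occupancy, which is why binding assays are typically run over a concentration series bracketing the expected Kd.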
Here I analyse 23 populations of D. galeata, a large-lake cladoceran, distributed mainly across the Palaearctic. I detected high levels of clonal diversity and population differentiation using variation at six microsatellite loci across Europe. Most populations were characterised by deviations from Hardy-Weinberg equilibrium and significant heterozygote deficiencies. The observed heterozygote deficiencies might be a consequence of the simultaneous hatching of individuals produced during different times of the year, or of the coexistence of ecologically and genetically differentiated subpopulations. Significant isolation by distance was only found over large geographic distances (> 700 km). This pattern is mainly due to the high genetic differentiation among neighbouring populations. My results suggest that historic populations of Daphnia were once interconnected by gene flow but that current populations are now largely isolated. Thus, local ecological conditions, which determine the level of biparental sexual reproduction and local adaptation, are the main factors mediating the population structure of D. galeata. The population genetic structure and diversity of D. galeata were investigated at a European scale using six microsatellite loci and 12S rDNA sequence data to infer and compare historical and contemporary patterns of gene flow. D. galeata has the potential for long-distance dispersal via ephippial resting eggs by wind and other dispersal vectors (waterfowl), but in general shows strong population differentiation even among neighbouring populations. A total of 427 individuals were analysed for microsatellite data and 85 individuals for mitochondrial (mtDNA) sequence data from 12 populations across Europe. I detected genetic differentiation among populations across Europe and among locations within sampling regions for both genetic marker systems (average values: mtDNA FST = 0.574; microsatellite FST = 0.389), resulting in a lack of isolation by distance.
Furthermore, several microsatellite alleles and one haplotype were shared across populations. The partitioning of molecular variance was inconsistent between the two marker systems. Microsatellite variation was higher within than among populations, whereas the mtDNA data yielded the inverse pattern. Relatively high levels of nuclear DNA diversity were found across Europe. The amount of mitochondrial diversity was low in Spain, Hungary and Denmark. Gene flow analysis at a European scale did not reveal the typical patterns of population recolonization predicted by postglacial colonization hypotheses. Populations that recently experienced an expansion or a population bottleneck were observed in both middle and northern Europe. Since these populations revealed high genetic diversity in both marker systems, I suggest that these areas represent postglacial zones of secondary contact among divergent lineages of D. galeata. In order to reveal the relationship between the population genetic structure of D. galeata and the relative contribution of environmental factors, I used a statistical framework based on canonical correspondence analysis. Although I detected no single ecological gradient mediating the genetic differentiation in either lake region, it is noteworthy that the same ecological factors were significantly correlated with intra- and interspecific genetic variation of D. galeata. For example, I found a relationship between genetic variation and differentiation of D. galeata and higher and lower trophic levels (phytoplankton, submerged macrophytes and fish), and a relationship between clonal variation and species diversity within the Cladocera. Variance partitioning attributed only a minor contribution of each environmental category (abiotic, biomass/density and diversity) to the genetic diversity of D. galeata, while the largest proportion of variation was explained by shared components.
My work illustrates the important role of ecological differentiation and adaptation in structuring genetic variation, and it highlights the need for approaches incorporating a landscape context for population divergence.
This thesis deals with the characterization of the ALTRO chip (ALICE TPC Readout), an integral and important component of the readout chain of the TPC (Time Projection Chamber) detector of ALICE (A Large Ion Collider Experiment). ALICE is an experiment at the LHC (Large Hadron Collider) at CERN, still under construction, whose central aim is the study of heavy-ion collisions. These are of particular interest because they provide experimental access to the QGP (Quark Gluon Plasma), the only phase transition predicted by the Standard Model that can be reached under laboratory conditions. In 2004, measurements were carried out at a test beam at the CERN PS (Proton Synchrotron). The prototype was fully equipped with FECs, corresponding to 5400 channels, and filled with a different gas mixture (Ne/N2/CO2 90%/5%/5%). For optimal performance of the ALICE TPC, the digital processor in the ALTRO, consisting of four processing units, must be configured with suitable values. The data flow begins with the BCS1 (Baseline Correction and Subtraction 1) module, which removes systematic perturbations and the baseline. Since the ALTRO samples the incoming signal continuously, it automatically removes slow baseline drifts, which can be caused, for example, by temperature changes. It is followed by the TCF (Tail Cancellation Filter), which removes the tail of the slowly decaying signal generated by the PASA. To remove non-systematic baseline perturbations, the BCS2 (Baseline Correction and Subtraction 2) follows, which is based on a moving-average calculation that excludes detector signals by means of a double threshold. The final signal-processing unit is the ZSU (Zero Suppression Unit), which removes samples below a defined threshold. This thesis describes how the TCF and BCS1 parameters can be extracted from existing detector data.
During the analysis of cosmic-ray data, an additional structure in the tail was noticed for signals of high amplitude (>700 ADC). The monitor was therefore extended with a moving-average filter, whereupon this structure also became visible in smaller signals (>200 ADC). This signal is produced by ions drifting to the cathode or to the pads; so far, however, neither the spread of the electron avalanche at the anode nor the variation among the generated electron avalanches has been understood or measured. A successful measurement and characterization is described in this thesis. In the summer of 2005, the installation of the TPC gas chambers in ALICE begins, with the electronics following at the end of the year. In parallel, the TPC prototype was put back into operation, and in spring a complete sector will be equipped with the detector electronics. On these two setups, the ALTRO characterization will be continued, refined and completed.
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions are studied within the HSD and UrQMD transport models. The scaled variances of negative, positive, and all charged hadrons in Pb+Pb at 158 AGeV are analyzed in comparison to the data from the NA49 Collaboration. We find a dominant role of the fluctuations in the nucleon participant number for the final hadron multiplicity fluctuations. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the final multiplicity fluctuations as a function of collision centrality. The analysis reveals surprising effects in the recent NA49 data which indicate a rather strong mixing of the projectile and target hadron production sources even in peripheral collisions. PACS numbers: 25.75.-q, 25.75.Gz, 24.60.-k
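The scaled variance used here is ω = Var(N) / ⟨N⟩, with ω = 1 for Poissonian fluctuations. A minimal sketch of the event-by-event computation on a hypothetical simulated sample (not NA49 data); for a binomial source with acceptance probability p, ω = 1 − p is expected:

```python
import random

def scaled_variance(multiplicities):
    """Scaled variance omega = Var(N) / <N> of an event-by-event
    multiplicity sample (omega = 1 for Poissonian fluctuations)."""
    n = len(multiplicities)
    mean = sum(multiplicities) / n
    var = sum((m - mean) ** 2 for m in multiplicities) / n
    return var / mean

# Hypothetical events: each of 200 "particles" is accepted with p = 0.25,
# so the multiplicity is binomial and omega should be close to 1 - p = 0.75.
random.seed(0)
events = [sum(1 for _ in range(200) if random.random() < 0.25)
          for _ in range(5000)]
print(round(scaled_variance(events), 2))
```

ω > 1 signals fluctuations beyond the statistical (Poisson) baseline, which is what makes it a useful centrality-dependent observable.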
Mitochondrial NADH:ubiquinone oxidoreductase (complex I), the largest multiprotein enzyme of the respiratory chain, catalyses the transfer of two electrons from NADH to ubiquinone, coupled to the translocation of four protons across the membrane. In addition to the 14 strictly conserved central subunits, it contains a variable number of accessory subunits. At present, the best characterized enzyme is complex I from bovine heart, with a molecular mass of about 980 kDa and 32 accessory proteins. In this study, the subunit composition of mitochondrial complex I from the aerobic yeast Y. lipolytica has been analysed by a combination of proteomic and genomic approaches. The sequences of 37 complex I subunits were identified. The sum of their individual molecular masses (about 930 kDa) was consistent with the native molecular mass of approximately 900 kDa obtained for Y. lipolytica complex I by BN-PAGE. A genomic analysis of the Y. lipolytica and other eukaryotic databases in search of homologues of complex I subunits revealed 31 proteins conserved among the examined species. A novel protein named "X" was found in purified Y. lipolytica complex I by MALDI-MS. This protein exhibits homology to the thiosulfate sulfurtransferase enzyme referred to as rhodanese. The finding of a rhodanese-like protein in isolated complex I of Y. lipolytica suggests a special regulatory mechanism of complex I activity through control of the status of its iron-sulfur clusters. The second part of this study was aimed at investigating the possible role of one of these accessory subunits, the 39 kDa (NUEM) subunit, which is related to the SDR enzyme family. The members of this family function in different redox and isomerization reactions and contain a conserved NAD(P)H-binding site. It has been proposed that the 39 kDa subunit may be involved in a biosynthetic pathway, but the role of this subunit in complex I is unknown. In contrast to the situation in N. crassa, deletion of the gene encoding the 39 kDa subunit in Y. lipolytica led to the absence of fully assembled complex I. This result might indicate different pathways of complex I assembly in the two organisms. Several site-directed mutations were generated in the nucleotide-binding motif. These had either no effect on enzyme activity and NADPH binding, or prevented complex I assembly. Mutations of arginine-65, which is located at the end of the second β-strand and is responsible for the selective interaction with the 2'-phosphate group of NADPH, retained complex I activity in mitochondrial membranes, but the affinity for the cofactor was markedly decreased. Purification of complex I from these mutants resulted in a decrease or loss of ubiquinone reductase activity. It is very likely that replacement of R65 not only led to a decrease in affinity for NADPH but also caused instability of the enzyme due to steric changes in the 39 kDa subunit. These data indicate that the NADPH bound to the 39 kDa subunit (NUEM) is not essential for complex I activity, but is probably involved in complex I assembly in Y. lipolytica.
The thesis entitled "Investigations on the significance of nucleo-cytoplasmic transport for the biological function of cellular proteins" aimed to unravel molecular mechanisms in order to improve our understanding of the impact of nucleo-cytoplasmic transport on cellular functions. Within the scope of this work, it could be shown that regulated nucleo-cytoplasmic transport of a subfamily of homeobox transcription factors controlled their intra- and intercellular transport and thereby also influenced their transcriptional activity. This study describes a novel regulatory mechanism which could, in general, play an important role in the ordered differentiation of complex organisms. Besides cis-active transport signals, post-translational modifications can also influence the localization and biological activity of proteins in trans. In addition to the known impact of phosphorylation on the transport and activity of STAT1, experimental evidence was provided demonstrating that acetylation affected the interaction of STAT1 with NF-kB p65 and subsequently modulated the expression of apoptosis-inducing NF-kB target genes. The impact of nucleo-cytoplasmic transport on the regulation of apoptosis was underlined by showing that the evolutionarily conserved NES within the anti-apoptotic protein survivin plays an essential role in its dual function in the inhibition of apoptosis and ordered cell division. Since survivin is considered a bona fide cancer therapy target, these results strongly encourage future work to identify molecular decoys that specifically inhibit the nuclear export of survivin as novel therapeutics. In order to further dissect the regulation of nuclear transport and to efficiently identify transport inhibitors, cell-based assays are urgently required.
Therefore, the cellular assay systems developed in this work may not only serve to identify synthetic nuclear export and import inhibitors but may also be applied in systematic RNAi screening approaches to identify novel components of the transport machinery. In addition, the translocation-based protease and protein-interaction biosensors can be applied in various biological systems, in particular to identify protein-protein interaction inhibitors of cancer-relevant proteins. In summary, this work not only underlines the general significance of nucleo-cytoplasmic transport for cell biology, but also demonstrates its potential for the development of novel therapies against diseases like cancer and viral infections.
Plural semantics for natural language understanding: a computational proof-theoretic approach
(2005)
The semantics of natural language plurals poses a number of intricate problems, both from a formal and from a computational perspective. In this thesis I investigate problems of representing, disambiguating and reasoning with plurals from a computational perspective. The work defines a computationally suitable representation for important plural constructions, proposes a tractable resolution algorithm for semantic plural ambiguities, and integrates an automatic reasoning component for plurals. My solution combines insights from formal semantics, computational linguistics and automated theorem proving and is based on the following main ideas. Whereas many existing approaches to plural semantics work on a model-theoretic basis using higher-order representation languages, I propose a proof-theoretic approach to plural semantics based on a flat first-order semantic representation language, thus showing that a trade-off between expressive power and logical tractability can be found. The problem of automatic disambiguation of plurals is tackled by a deliberate decision to drastically reduce recourse to contextual knowledge for disambiguation and to rely instead on structurally available and thus computationally manageable information. A further central aspect of the solution lies in carefully drawing the borderline between real ambiguity and mere indeterminacy in the interpretation of plural noun phrases. As a practical result of my computational proof-theoretic approach to plural semantics, I can use my methods to perform automated reasoning with plurals by applying advanced first-order theorem provers and model generators available off-the-shelf. The results are prototypically implemented within the two logic-oriented natural language understanding applications DRoPs and Attempto. DRoPs provides an automatic plural disambiguation component for uncontrolled natural language, whereas Attempto works with a constructive disambiguation strategy for controlled natural language.
Both systems provide tools for the automated analysis of technical texts, allowing users, for example, to automatically detect inconsistencies, to perform question answering, to check whether a conjecture follows from a text, or to find equivalences and redundancies.
Molecular dynamics (MD) simulation serves as an important and widely used computational tool to study molecular systems at atomic resolution. No experimental technique is capable of generating a complete description of the dynamical structure of biomolecules in their native solution environment. MD simulations allow us to study the dynamics and structure of the system and, moreover, help in the interpretation of experimental observations. MD simulation was first introduced and applied by Alder and Wainwright in 1957 [Alder57]. However, the first MD simulation of a macromolecule of biological interest was published 28 years ago [McCammon77]. That simulation was concerned with the bovine pancreatic trypsin inhibitor (BPTI), a protein which has served as the "hydrogen molecule" of protein dynamics because of its small size, high stability, and the relatively accurate X-ray structure available in 1977 [Deisenhofer75]. This method is now widely used to tackle larger and more complex biological systems [Groot01, Roux02] and has been facilitated by the development of fast and efficient methods for treating the long-range electrostatic interactions [Essmann95], the availability of faster parallel computers, and the continuous development of empirical molecular mechanical force fields [Langley98, Cheatham99, Foloppe00]. It took several years until the first MD simulations of nucleic acid systems were performed [Levitt83, Tidor83, Prabhakaran83, Nilsson86]. These investigations, which were also performed in vacuo, clearly demonstrated the importance of a proper handling of electrostatics in a highly charged nucleic acid system, and different approaches, such as reduction of the phosphate charges and addition of hydrated counterions, have been applied to remedy this shortcoming and to maintain stable DNA structures.
A few years later, the first MD simulation of a DNA molecule including explicit water molecules and counterions was published [Seibel85]. Various MD simulations of fully solvated RNA molecules with explicit inclusion of mobile ions indicated the importance of a proper treatment of the environment of highly charged nucleic acids [Lee95, Zichi95, Auffinger97, Auffinger99]. Given the central roles of RNA in the life of cells, it is important to understand the mechanism by which RNA forms three-dimensional structures endowed with properties such as catalysis, ligand binding, and recognition of proteins. Furthermore, the increasing awareness of the essential role of RNA in controlling viral replication and in bacterial protein synthesis emphasizes the potential of ribonucleic acids as targets for developing new antibacterial and antiviral drugs. Driven by fruitful collaborations within the Sonderforschungsbereich "RNA-ligand interactions", the model RNA systems in this study include various RNA tetraloops and HIV-1 TAR RNA. For the latter system, the binding sites of heteroaromatic compounds have been studied employing automated docking calculations [Goodsell90]. The results show that it is possible to use this tool to dock small rigid ligands to an RNA molecule, while large and flexible molecules are clearly problematic. The main part of this work is focused on MD simulations of RNA tetraloops.
This analysis examines the employment effects of placement vouchers (Vermittlungsgutscheine) and personnel service agencies (Personal-Service-Agenturen) by means of a macroeconometric evaluation. Unlike a microeconometric evaluation, which examines effects at the individual level, a macroeconometric analysis can make statements about the economy-wide effects of the measures. Structural multiplier effects within the macroeconomic circular flow are, however, not taken into account. The econometric model for the analysis of the two measures is based on a matching function that describes the search process of firms and workers for an employment relationship. The empirical analyses are carried out separately for East and West Germany as well as for the strategy types of the Federal Employment Agency (Bundesagentur für Arbeit). They show that the issuing of placement vouchers has a significantly positive effect on the search process only in "predominantly West German metropolitan districts with high unemployment" (strategy type II). For the personnel service agencies, significantly positive effects are found for both East and West Germany. However, owing to the relatively small number of participants, a comparison with microeconometric analyses is still needed for a conclusive assessment of the results for the personnel service agencies.
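The matching function underlying such macroeconometric evaluations is commonly specified in Cobb-Douglas form, m = A · U^a · V^(1−a), with a programme such as placement vouchers modelled as shifting the matching efficiency A. A minimal sketch with hypothetical numbers (the Cobb-Douglas form is the standard textbook choice, not necessarily the paper's exact specification):

```python
def matches(unemployed, vacancies, efficiency=0.5, elasticity=0.5):
    """Cobb-Douglas matching function m = A * U^a * V^(1-a).
    A programme effect on the search process enters as a shift in the
    matching-efficiency parameter A (here `efficiency`)."""
    return efficiency * unemployed ** elasticity * vacancies ** (1 - elasticity)

# Hypothetical figures: 400 job seekers, 100 vacancies.
print(round(matches(400, 100), 1))   # 0.5 * 20 * 10 -> 100.0
```

Estimating the effect of a programme then amounts to testing whether A is significantly higher in regions or periods where the instrument is used.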
Serial correlation in dynamic panel data models with weakly exogenous regressor and fixed effects
(2005)
This paper presents and compares two estimation methodologies for dynamic panel data models in the presence of serially correlated errors and weakly exogenous regressors. The first is the first-difference GMM estimator as proposed by Arellano and Bond (1991), and the second is the transformed Maximum Likelihood Estimator as proposed by Hsiao, Pesaran, and Tahmiscioglu (2002). We consider the fixed-effects case with weakly exogenous regressors. The finite sample properties of both estimation methodologies are analysed within a simulation experiment. Furthermore, we present an empirical example to assess the performance of both estimators on real data. JEL Classification: C23, J64
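The first-difference idea behind the Arellano-Bond estimator can be illustrated with the simplest member of that GMM family, the Anderson-Hsiao IV estimator: differencing y_it = ρ·y_{i,t−1} + η_i + ε_it removes the fixed effect η_i, and the level y_{i,t−2} instruments the differenced lag (the full Arellano-Bond estimator uses all deeper lags as instruments). A sketch on hypothetical simulated data:

```python
import random

def fd_iv_rho(panel):
    """First-difference IV estimate of rho in y_it = rho*y_{i,t-1} + eta_i + eps_it.
    Differencing the model removes eta_i; y_{i,t-2} is uncorrelated with the
    differenced error (eps_it - eps_{i,t-1}) and serves as instrument."""
    num = den = 0.0
    for y in panel:                        # y: one individual's time series
        for t in range(2, len(y)):
            z = y[t - 2]                   # instrument
            num += z * (y[t] - y[t - 1])   # z * delta(y_t)
            den += z * (y[t - 1] - y[t - 2])  # z * delta(y_{t-1})
    return num / den

# Hypothetical panel: N = 2000 individuals, T = 10, true rho = 0.5.
random.seed(1)
rho = 0.5
panel = []
for _ in range(2000):
    eta = random.gauss(0, 1)               # individual fixed effect
    y = [eta + random.gauss(0, 1)]
    for _ in range(9):
        y.append(rho * y[-1] + eta + random.gauss(0, 1))
    panel.append(y)
print(round(fd_iv_rho(panel), 2))
```

Because only one instrument per period is used, this sketch is less efficient than the full Arellano-Bond GMM estimator, which stacks all available lags into the instrument matrix.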
In this paper we evaluate the employment effects of job creation schemes on the participating individuals in Germany. Job creation schemes are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcomes of participants and non-participants for nearly three years after programme start and can therefore analyse mid- and long-term effects. Third, we test the sensitivity of the results with respect to various decisions which have to be made during implementation of the matching estimator, e.g. choosing the matching algorithm or estimating the propensity score. Finally, we check whether a possible occurrence of 'unobserved heterogeneity' distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One notable exception is long-term unemployed individuals, who benefit from participation. Hence, one policy implication is to target programmes more tightly at this problem group. JEL Classification: J68, H43, C13
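The matching step described above can be sketched in its simplest form, 1-nearest-neighbour matching on the propensity score with replacement; the data, score model and effect size below are hypothetical, and a real evaluation would estimate the score (e.g. by probit) and add balancing checks:

```python
import random

def att_nn_match(treated, controls):
    """Average treatment effect on the treated (ATT) via 1-nearest-neighbour
    propensity-score matching with replacement.
    treated / controls: lists of (propensity_score, outcome) pairs."""
    gaps = []
    for p_t, y_t in treated:
        # closest control unit in terms of the propensity score
        p_c, y_c = min(controls, key=lambda c: abs(c[0] - p_t))
        gaps.append(y_t - y_c)
    return sum(gaps) / len(gaps)

# Hypothetical data: a confounder x raises both the treatment probability
# and the outcome; the true treatment effect is 2.
random.seed(2)
def draw(treat):
    while True:
        x = random.random()
        p = 0.2 + 0.6 * x                  # true propensity score
        if random.random() < (p if treat else 1 - p):
            y = 3 * x + (2 if treat else 0) + random.gauss(0, 0.1)
            return (p, y)

treated = [draw(True) for _ in range(500)]
controls = [draw(False) for _ in range(2000)]
print(round(att_nn_match(treated, controls), 1))
```

A raw comparison of group means would be biased upward here, because treated units have systematically larger x; matching on the score removes that imbalance.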
Job creation schemes (JCS) have been one important programme of active labour market policy in Germany, aiming at the re-integration of hard-to-place unemployed individuals into regular employment. In contrast to earlier evaluation studies of these programmes based on survey data, we use administrative data containing more than 11,000 participants for our analysis and hence can take effect heterogeneity explicitly into account. We focus on effect heterogeneity caused by differences in the implementation of programmes (economic sector, types of support and implementing institutions). The results are rather discouraging and show that in general, JCS are unable to improve the re-integration chances of participants into regular employment.
The effects of vocational training programmes on the duration of unemployment in Eastern Germany
(2005)
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory situation of the labour market has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Agency for the period from October 1999 to December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs the unemployment duration in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying individual characteristics that are responsible for the effect heterogeneity and using this information for a better allocation of individuals therefore offers some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL: C13, J68, H43
Innovations are a key factor in ensuring the competitiveness of establishments as well as in enhancing the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. In response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect, and that there are considerable differences with regard to size class and between West and East German establishments.
In recent methodological work, the well-known ACD approach, originally introduced by Engle and Russell (1998), has been supplemented by an unobservable stochastic process which accompanies the underlying duration process via a discrete mixture of distributions. The Mixture ACD model, emanating from the specialized proposal of De Luca and Gallo (2004), has proved to be a serviceable tool for describing financial duration data. Until now, the common practice has been to use one and the same family of ordinary distributions. Our contribution advocates using a richly parameterized, comprehensive family of distributions which allows different distributional idiosyncrasies to interact. JEL classification: C41, C22, C25, C51, G14.
We propose a new framework for modelling the time dependence in duration processes observed on financial markets. The pioneering ACD model introduced by Engle and Russell (1998) is extended such that the duration process is accompanied by an unobservable stochastic process. The Discrete Mixture ACD framework provides a general methodology which puts this idea into practice. It is established by introducing a discrete-valued latent regime variable, which can be justified in the light of recent market microstructure theories. The empirical application demonstrates its ability to capture specific characteristics of intraday transaction durations where alternative approaches fail. JEL classification: C41, C22, C25, C51, G14.
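The baseline ACD recursion of Engle and Russell (1998) and a discrete-mixture extension of the kind described can be sketched as follows; the parameter names and the exact mixing scheme are illustrative, not taken from the papers:

```latex
% Baseline ACD: durations x_i with conditional expectation psi_i,
x_i = \psi_i\,\varepsilon_i , \qquad
\psi_i = \omega + \alpha\, x_{i-1} + \beta\, \psi_{i-1} ,
\quad \varepsilon_i \;\text{i.i.d.},\; \mathbb{E}[\varepsilon_i] = 1 .
% Discrete mixture: a latent regime R_i selects the innovation law,
\varepsilon_i \mid R_i = k \;\sim\; F_k , \qquad
\Pr(R_i = k) = \pi_k , \quad k = 1, \dots, K .
```

With \(K = 1\) this collapses to the ordinary ACD model; the latent regime \(R_i\) is what the abstracts motivate via market microstructure theory.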
We discuss that hadron-induced atmospheric air showers from ultra-high energy cosmic rays are sensitive to QCD interactions at very small momentum fractions x where nonlinear effects should become important. The leading partons from the projectile acquire large random transverse momenta as they pass through the strong field of the target nucleus, which breaks up their coherence. This leads to a steeper x_F-distribution of leading hadrons as compared to low energy collisions, which in turn reduces the position of the shower maximum Xmax. We argue that high-energy hadronic interaction models should account for this effect, caused by the approach to the black-body limit, which may shift fits of the composition of the cosmic ray spectrum near the GZK cutoff towards lighter elements. We further show that present data on Xmax(E) exclude that the rapid ~ 1/x^0.3 growth of the saturation boundary (which is compatible with RHIC and HERA data) persists up to GZK cutoff energies. Measurements of pA collisions at LHC could further test the small-x regime and advance our understanding of high density QCD significantly.
Sharing of substructures like subterms and subcontexts is a common method for the space-efficient representation of terms: it allows, for example, exponentially large terms to be represented in polynomial space, or terms with iterated substructures to be stored in compact form. We present singleton tree grammars as a general formalism for the treatment of sharing in terms. Singleton tree grammars (STGs) are recursion-free context-free tree grammars without alternatives for non-terminals and with at most unary second-order nonterminals. STGs generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals of an STG generate the same term can be done in polynomial time, which implies that the equality test for terms with shared subterms and contexts, where composition of contexts is permitted, can be done in polynomial time in the size of the representation. This allows polynomial-time algorithms for terms exploiting sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
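The polynomial-time STG equality test itself is more involved, since it must handle context nonterminals and their composition. As a minimal illustration of the underlying idea only — that maximal sharing compresses exponentially large terms and turns structural equality into an identity test — here is a sketch of plain DAG sharing (hash-consing), the first-order special case without contexts; all names are illustrative:

```python
class TermDag:
    """Hash-consed term store: structurally equal subterms share one node id."""

    def __init__(self):
        self._table = {}   # (symbol, child ids) -> node id
        self._nodes = []   # node id -> (symbol, child ids)

    def make(self, symbol, *children):
        # Return the existing node for this (symbol, children) pair,
        # or create a fresh one; equal subterms are stored only once.
        key = (symbol, children)
        if key not in self._table:
            self._table[key] = len(self._nodes)
            self._nodes.append(key)
        return self._table[key]

    def equal(self, s, t):
        # With maximal sharing, structural equality is an id comparison.
        return s == t

    def size(self, node):
        # Number of positions in the fully expanded term; this may be
        # exponential in the number of DAG nodes.
        _symbol, children = self._nodes[node]
        return 1 + sum(self.size(c) for c in children)


dag = TermDag()
a = dag.make("a")
t = a
for _ in range(10):          # t_{k+1} = f(t_k, t_k): 2^10 leaves, 11 DAG nodes
    t = dag.make("f", t, t)
u = a
for _ in range(10):          # build the same term independently
    u = dag.make("f", u, u)
assert dag.equal(t, u)       # constant-time equality thanks to sharing
```

The compressed representation has 11 nodes, while the expanded term has 2047 positions; STGs extend this kind of compression to shared contexts with composition, where equality testing is no longer a mere pointer comparison.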
Plenary lecture, World Congress of Legal and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch 'private' transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 547-562, and in "Anales de la Cátedra Francisco Suárez 2005". See also Teubner, Gunther: Globalized Justice - Fragmented Justice. Human Rights Violations by "Private" Transnational Actors
Charmonium production and suppression in heavy-ion collisions at relativistic energies is investigated within different models, i.e. the comover absorption model, the threshold suppression model, the statistical coalescence model and the HSD transport approach. In HSD the charmonium dissociation cross sections with mesons are described by a simple phase-space parametrization including an effective coupling strength |M_i|^2 for the charmonium states i = chi_c, J/psi, psi'. Employing detailed balance, this allows the inclusion of the backward channels for charmonium regeneration via D-Dbar channels -- which are missed in the comover absorption and threshold suppression models -- without introducing any new parameters. It is found that all approaches yield a reasonable description of J/psi suppression in S+U and Pb+Pb collisions at SPS energies. However, they differ significantly in the psi'/J/psi ratio versus centrality at SPS and especially at RHIC energies. These pronounced differences can be exploited in future measurements at RHIC to distinguish the hadronic rescattering scenarios from quark coalescence close to the QGP phase boundary.
The quinol:fumarate reductase (QFR) is the terminal reductase of anaerobic fumarate respiration, the most commonly occurring type of anaerobic respiration. This membrane protein complex couples the oxidation of menaquinol to menaquinone to the reduction of fumarate to succinate. The three-dimensional crystal structure of the QFR from Wolinella succinogenes has previously been solved at 2.2 Å resolution. Although the diheme-containing QFR from W. succinogenes is known to catalyze an electroneutral process, structural and functional characterization of parental and variant enzymes has revealed active site locations which indicate electrogenic catalysis across the membrane. A solution to this apparent controversy was proposed with the so-called "E-pathway hypothesis". According to this, transmembrane electron transfer via the heme groups is strictly coupled to a parallel, compensatory transfer of protons via a transiently established pathway, which is inactive in the oxidized state of the enzyme. Proposed constituents of the E-pathway are the side chain of Glu C180 and the ring C propionate of the distal heme. Previous experimental evidence strongly supports such a role for the former constituent. One aim of this thesis is to investigate, by a combination of specific 13C-heme propionate labeling and FTIR difference spectroscopy, whether the ring C propionate of the distal heme is involved in redox-coupled proton transfer in the QFR from W. succinogenes. In addition to W. succinogenes, the primary structures of the QFR enzymes of two other ε-proteobacteria are known. These are Campylobacter jejuni and Helicobacter pylori, which unlike W. succinogenes are human pathogens. The QFR from H. pylori has previously been established to be a potential drug target, and the same is likely for the QFR from C. jejuni. The two pathogenic species colonize mucosal surfaces, causing several diseases.
The possibility of studying the QFRs from these bacteria and of creating more efficient drugs specifically active against this enzyme depends substantially on the availability of large amounts of high-quality protein. Furthermore, biochemical and structural studies on QFR enzymes from ε-proteobacterial species other than W. succinogenes can be valuable to illuminate new aspects of, or corroborate, the current understanding of this class of membrane proteins.
We study the collective flow of open charm mesons and charmonia in Au + Au collisions at sqrt(s_NN) = 200 GeV within the hadron-string-dynamics (HSD) transport approach. The detailed studies show that the coupling of D and Dbar mesons to the light hadrons leads to directed and elliptic flow comparable to that of the light mesons. This also holds approximately for J/psi mesons, since more than 50% of the final charmonia for central and midcentral collisions stem from D+Dbar induced reactions in the transport calculations. The transverse momentum spectra of D, Dbar mesons and J/psi's are only very moderately changed by the (pre-)hadronic interactions in HSD, which can be traced back to the collective flow generated by elastic interactions with the light hadrons. PACS: 25.75.-q, 13.60.Le, 14.40.Lb, 14.65.Dw
The study of hidden charm production is an important part of the heavy-ion program. The standard approach to this problem [1] assumes that ccbar bound states are created only at the initial stage of the reaction and are then partially destroyed at later stages due to interactions with the medium [2, 3, 4].
Nuclear collisions at intermediate, relativistic, and ultra-relativistic energies offer unique opportunities to study in detail manifold fragmentation and clustering phenomena in dense nuclear matter. At intermediate energies, the well-known processes of nuclear multifragmentation -- the disintegration of bulk nuclear matter into clusters of a wide range of sizes and masses -- allow the study of the critical point of the equation of state of nuclear matter. At very high energies, ultra-relativistic heavy-ion collisions offer a glimpse at the substructure of hadronic matter by crossing the phase boundary to the quark-gluon plasma. The hadronization of the quark-gluon plasma created in the fireball of an ultra-relativistic heavy-ion collision can be considered, again, as a clustering process. We will present two models which allow the simulation of nuclear multifragmentation and of hadronization via the formation of clusters in an interacting gas of quarks, and will discuss the importance of clustering to our understanding of hadronization in ultra-relativistic heavy-ion collisions.
We study Mach shocks generated by fast partonic jets propagating through deconfined, strongly interacting matter. Our main goal is to take into account different types of collective motion during the formation and evolution of this matter. We predict a significant deformation of Mach shocks in central Au+Au collisions at RHIC and LHC energies as compared to the case of jet propagation in a static medium. The observed broadening of the near-side two-particle correlations in pseudorapidity space is explained by the Bjorken-like longitudinal expansion. Three-particle correlation measurements are proposed for a more detailed study of the Mach shock waves.
We study the effects of the isovector-scalar meson delta on the equation of state (EOS) of neutron-star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, which is solved within the mean-field approximation. The numerical results show that the delta field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the delta field decreases with increasing magnetic field and becomes small at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. After including the delta field, the EOS turns out to be stiffer at B < 10^15 G but becomes softer at stronger magnetic fields. The AMM terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range of 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar to those without magnetic fields.
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c (2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass for finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications for the D-mesons in nucleus-nucleus collisions at FAIR (GSI) will be dominantly on the width and not, as previously expected, on the mass.
Potential energy surfaces are calculated using the most advanced asymmetric two-center shell model, which allows us to obtain shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy-ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained using the macroscopic-microscopic method.
In this increasingly complex world of learned information delivery and discovery - is it possible that the "free lunch" the Publishing world worries about could come true? Although Open Access and Institutional Repositories have not (yet) created the "scorched earth" effect many were predicting, they are slowly and inevitably gaining momentum. Broader access to top-level information via Google (and others) does indeed appear to be "good enough" for many in their search for content. But you rarely get food for free in a good quality restaurant. You pay for the selection, preparation, speed and expertise of the delivery. At the soup kitchen the food can often be filling - but the queue will be long, the wait even longer and there is no chance of silver service or à la carte. If you are unfortunate enough to have little choice then this may be a great solution. Others will be willing to pay for a more satisfactory meal. As in all aspects of life, diversification and specialisation are fundamental forces. The publishing community in the years to come will continue to develop its offerings for a variety of needs that require more than just broth. To stretch the analogy, the ongoing presence of tap water in our lives has done little to halt the extraordinary rise of bottled water as part of our staple diet. Business reality will continue to settle these types of debate; my bet is that the commercial publishers see a role as providing information that commands an intrinsic value proposition to enough customers to remain economically viable for some time to come. Inspired by the comments and ideas expounded by Dr. James O'Donnell of Georgetown University on the liblicense listserv on 20th July this year, this paper will look to expand on the analogy and identify the good, the bad - but importantly the difference in information quality and access that will result in the radically changed (but still co-existent) information landscape of tomorrow.
The economical and organizational debates about open access have mostly been concerned with journals. This is not surprising since the open access movement can be seen largely as a response to the serials crisis. Recently the open access debate has been extended to include access to government produced data in different forms. In this presentation I'll critically look at some economic and organizational issues pertaining to the open access provision of bibliographical data.
In keeping with the views of its guru, Stevan Harnad, the open access movement is prepared to discuss only the two models of the "green road" and the "golden road" as alternatives for the future of scientific publishing. The "golden road" is put forward as the royal road for solving the journals crisis. However, no one has drawn attention to the fact that the golden road represents a purely socialist solution to a free-market problem and thus continues the "samizdat" tradition of underground literature in the former Eastern bloc. The present paper reveals the alarmingly low level at which the open access movement intends to publish top-class results from science and research, and the low degree of professionalism with which they are satisfied.
This lecture was given at the 5th Frankfurt Scientific Symposium (22-23 October 2005). Viewing the video is (unfortunately) only possible with Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac (see document 1.html). All conference contributions are available at http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1992/.
Within the scenario of large extra dimensions, the Planck scale is lowered to values that will soon be experimentally accessible. Among the predicted effects, the production of TeV-mass black holes at the LHC is one of the most exciting possibilities. Though the final phase of the black hole's evaporation is still unknown, the formation of a black hole remnant is a theoretically well-motivated expectation. We analyze the observables emerging from a black hole evaporation with a remnant instead of a final decay. We show that the formation of a black hole remnant yields a signature which differs substantially from a final decay. We find the total transverse momentum of the black hole event to be significantly dominated by the presence of a remnant mass, providing a strong experimental signature for black hole remnant formation.
Probing the density dependence of the symmetry potential in intermediate energy heavy ion collisions
(2005)
Based on the ultrarelativistic quantum molecular dynamics (UrQMD) model, the effects of the density-dependent symmetry potential for baryons and of the Coulomb potential for produced mesons are investigated for neutron-rich heavy-ion collisions at intermediate energies. The calculated Delta-/Delta++ and pi-/pi+ production ratios show a clear, beam-energy-dependent sensitivity to the density-dependent symmetry potential, which is stronger for the pi-/pi+ ratio close to the pion production threshold. The Coulomb potential of the mesons changes the transverse momentum distribution of the pi-/pi+ ratio significantly, though it alters the pi- and pi+ total yields only slightly. The pi- yields, especially at midrapidity or at low transverse momenta, and the pi-/pi+ ratios at low transverse momenta are shown to be sensitive probes of the density-dependent symmetry potential in dense nuclear matter. The effect of the density-dependent symmetry potential on the production of both K0 and K+ mesons is also investigated.