Bayesian learning provides the core concept of processing noisy information. In standard Bayesian frameworks, assessing the price impact of information requires perfect knowledge of news’ precision. In practice, however, precision is rarely disclosed. Therefore, we extend standard Bayesian learning, suggesting traders infer news’ precision from magnitudes of surprises and from external sources. We show that interactions of the different precision signals may result in highly nonlinear price responses. Empirical tests based on intra-day T-bond futures price reactions to employment releases confirm the model’s predictions and show that the effects are statistically and economically significant.
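The mechanism can be illustrated with a minimal sketch under simplifying Gaussian assumptions. This is not the paper's model; in particular, the precision-inference rule below (large surprises read as less precise news) is a hypothetical functional form chosen only to show how inferred precision makes the response nonlinear in the surprise.

```python
# Illustrative sketch (not the paper's model): Gaussian Bayesian updating.
# With known signal precision, the posterior mean is a linear,
# precision-weighted average of the prior mean and the signal.

def bayes_update(prior_mean, prior_prec, signal, signal_prec):
    """Posterior mean/precision for a Gaussian prior and a Gaussian signal."""
    post_prec = prior_prec + signal_prec
    post_mean = (prior_prec * prior_mean + signal_prec * signal) / post_prec
    return post_mean, post_prec

def inferred_precision(surprise, base_prec=4.0, decay=0.5):
    """Hypothetical inference rule: precision attributed to a news release
    declines with the squared magnitude of the surprise."""
    return base_prec / (1.0 + decay * surprise ** 2)

# With equal precisions the posterior mean sits halfway between prior
# and signal; with inferred precision the weight on the signal, and hence
# the price response, varies nonlinearly with the surprise itself.
m, p = bayes_update(0.0, 1.0, 2.0, 1.0)
```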
The popular Nelson-Siegel (1987) yield curve is routinely fit to cross sections of intra-country bond yields, and Diebold and Li (2006) have recently proposed a dynamized version. In this paper we extend Diebold-Li to a global context, modeling a potentially large set of country yield curves in a framework that allows for both global and country-specific factors. In an empirical analysis of term structures of government bond yields for Germany, Japan, the U.K. and the U.S., we find that global yield factors do indeed exist and are economically important, generally explaining significant fractions of country yield curve dynamics, with interesting differences across countries.
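The Nelson-Siegel curve underlying this framework can be sketched as follows. The three factor loadings (level, slope, curvature) are the standard ones; the parameter values in the example are purely illustrative.

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel (1987) yield at maturity tau.
    beta0: level, beta1: slope, beta2: curvature, lam: decay parameter."""
    x = lam * tau
    loading = (1 - math.exp(-x)) / x          # slope loading, -> 1 as tau -> 0
    return beta0 + beta1 * loading + beta2 * (loading - math.exp(-x))

# As tau -> infinity the yield approaches the level factor beta0;
# as tau -> 0 it approaches beta0 + beta1 (the short end).
long_end = nelson_siegel(1000.0, 0.05, -0.02, 0.01, 0.6)
short_end = nelson_siegel(0.001, 0.05, -0.02, 0.01, 0.6)
```

Diebold and Li treat the three betas as time-varying factors; the global extension described above adds common factors shared across countries.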
Measuring financial asset return and volatility spillovers, with application to global equity markets
(2008)
We provide a simple and intuitive measure of interdependence of asset returns and/or volatilities. In particular, we formulate and examine precise and separate measures of return spillovers and volatility spillovers. Our framework facilitates study of both non-crisis and crisis episodes, including trends and bursts in spillovers, and both turn out to be empirically important. In particular, in an analysis of nineteen global equity markets from the early 1990s to the present, we find striking evidence of divergent behavior in the dynamics of return spillovers vs. volatility spillovers: Return spillovers display a gently increasing trend but no bursts, whereas volatility spillovers display no trend but clear bursts.
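A spillover measure of this kind can be sketched as follows. Given a forecast-error variance decomposition matrix from a vector autoregression (each row gives the shares of one market's forecast-error variance attributable to shocks in each market), the index is the cross-market share of total variance. The decomposition matrix below is hypothetical; fitting the VAR itself is omitted.

```python
def spillover_index(decomp):
    """Spillover index from a forecast-error variance decomposition matrix:
    decomp[i][j] is the share of market i's forecast-error variance
    attributable to shocks originating in market j. The index is the
    cross-market (off-diagonal) share of total forecast-error variance."""
    n = len(decomp)
    total = sum(sum(row) for row in decomp)
    cross = sum(decomp[i][j] for i in range(n) for j in range(n) if i != j)
    return cross / total

# Hypothetical 2-market decomposition; each row sums to 1.
D = [[0.8, 0.2],
     [0.3, 0.7]]
idx = spillover_index(D)   # (0.2 + 0.3) / 2 = 0.25
```

Computing the index separately from return and volatility VARs, and over rolling windows, yields the trend and burst dynamics described in the abstract.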
Research with Keynesian-style models has emphasized the importance of the output gap for policies aimed at controlling inflation while declaring monetary aggregates largely irrelevant. Critics, however, have argued that these models need to be modified to account for observed money growth and inflation trends, and that monetary trends may serve as a useful cross-check for monetary policy. We identify an important source of monetary trends in the form of persistent central bank misperceptions regarding potential output. Simulations with historical output gap estimates indicate that such misperceptions may induce persistent errors in monetary policy and sustained trends in money growth and inflation. If interest rate prescriptions derived from Keynesian-style models are augmented with a cross-check against money-based estimates of trend inflation, inflation control is improved substantially.
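A stylized sketch of such a cross-checked interest rate rule follows. The coefficients, the deviation threshold, and the adjustment term are all assumptions for illustration, not the paper's specification.

```python
def taylor_rate(r_star, pi, pi_target, gap, a=0.5, b=0.5):
    """Standard Taylor-type prescription (illustrative coefficients):
    equilibrium real rate + inflation + responses to the inflation gap
    and the (possibly misperceived) output gap."""
    return r_star + pi + a * (pi - pi_target) + b * gap

def cross_checked_rate(base_rate, money_trend_infl, pi_target,
                       threshold=1.0, kappa=0.5):
    """Hypothetical cross-check: shift the prescription only when the
    money-based estimate of trend inflation deviates from target beyond
    a fixed threshold (standing in for 'persistently')."""
    dev = money_trend_infl - pi_target
    if abs(dev) > threshold:
        return base_rate + kappa * dev
    return base_rate

base = taylor_rate(2.0, 3.0, 2.0, 1.0)          # 2 + 3 + 0.5 + 0.5 = 6.0
adjusted = cross_checked_rate(base, 4.0, 2.0)   # deviation 2 > 1: rate raised
```

The point of the cross-check is that a persistent output-gap misperception biases `gap`, and hence the base rate, in a sustained direction; the money-based term corrects exactly this kind of drift.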
Religious conversion has become a dangerous social and individual problem. In Latin America, a traditional Catholic area, Protestant sects are successfully converting more and more Catholics into their own communities. The Pope therefore demands strict control of these activities. In India, for example, the Catholic hierarchy is criticising the Indian governments which have forbidden conversion for non-spiritual reasons. Hindu organizations have even begun, very successfully, to re-convert Indian Christians, particularly those of Dalit and tribal background. Buddhists are very successful in the indirect and even direct conversion of many Westerners. Wahhabi missionaries spread their Neo-Islam in Muslim societies and win more and more converts, even among non-Muslims. To these we should add the forcible and sometimes extremely cruel conversions carried out by atheistic states since the last century. ...
In the course of the ME period, HAVE began to encroach on territory previously held by BE. According to Rydén and Brorström (1987) and Kytö (1997), this occurred especially in iterative and durational contexts, in the perfect infinitive and in modal constructions. In Early Modern English (henceforth EModE), BE was increasingly restricted to the most common intransitives come and go, before disappearing entirely in the 18th and 19th centuries. This development raises a number of questions, both historical and theoretical. First, why did HAVE start spreading at the expense of BE in the first place? Second, why was the change conditioned by the factors mentioned by Rydén and Brorström (1987) and Kytö (1997)? Third, why did the change take on the order of 800 years to go to completion? Fourth, what implications does the change have for general theories of auxiliary selection? In this paper we will try to answer the first question by focusing on one of the earliest clearly identifiable advances of HAVE onto BE territory: its first appearance with the verb come, which for a number of reasons is an ideal verb to focus on. First, come is by far the most common intransitive verb, so we get large enough numbers for statistical analysis. Second, clauses containing the past participle of come with a form of BE are unambiguous perfects: they cannot be passives, and they did not continue into modern English with a stative reading like he is gone. Third, and perhaps most importantly, come selected BE categorically in the early stages of English, so the first examples we find with HAVE are clear evidence for innovation. We will present evidence from a corpus study showing that the first spread of HAVE was due to a ban on auxiliary BE in certain types of counterfactual perfects, and will propose an account for that ban in terms of Iatridou’s (2000) Exclusion theory of counterfactuals.
Verbs, nouns and affixation
(2008)
What explains the rich patterns of deverbal nominalization? Why do some nouns have argument structure, while others do not? We seek a solution in which properties of deverbal nouns are composed from properties of verbs, properties of nouns, and properties of the morphemes that relate them. The theory of each, plus the theory of how they combine, should give the explanation. In exploring this, we investigate properties of two theories of nominalization. In one, the verb-like properties of deverbal nouns result from verbal syntactic structure (a “structural model”; see, for example, van Hout & Roeper 1998, Fu, Roeper and Borer 1993, 2001, to appear, Alexiadou 2001, to appear). According to the structural hypothesis, some nouns contain VPs and/or verbal functional layers. In the other theory, the verbal properties of deverbal nouns result from the event structure and argument structure of the DPs that they head. By “event structure” we mean a representation of the elements and structure of a linguistic event, not a representation of the world. We refer to this view as the “event model”. According to the event model hypothesis, all derived nouns are represented with the same syntactic structure, the difference lying in argument structure – which in turn is critically related to event structure, in the way sketched in Grimshaw (1990), Siloni (1997) among others. In pursuing these lines of analysis, and at least to some extent disentangling their properties, we reach the conclusion that, with respect to a core set of phenomena, the two theories are remarkably similar – specifically, they achieve success with the same problems, and must resort to the same stipulations to address the remaining issues that we discuss (although the stipulations are couched in different forms).
Class features as probes
(2008)
In this article, we address (i) the form and (ii) the function of inflection class features in minimalist grammar. The empirical evidence comes from noun inflection systems involving fusional markers in German, Greek, and Russian. As for (i), we argue (based on instances of transparadigmatic syncretism) that class features are not privative; rather, class information must be decomposed into more abstract, binary features. Concerning (ii), we propose that class features qualify as the very device that brings about fusional inflection: They are uninterpretable in syntax and act as probes on stems, with matching inflection markers as goals, and thus trigger morphological Agree operations that merge stem and inflection marker before syntax is reached.
Background: Polymorphisms within the insulin gene can influence insulin expression in the pancreas and especially in the thymus, where self-antigens are processed, shaping the T cell repertoire into self-tolerance, a process that protects against β-cell autoimmunity.
Methods: We investigated the role of the -2221Msp(C/T) and -23HphI(A/T) polymorphisms within the insulin gene in patients with a monoglandular autoimmune endocrine disease [patients with isolated type 1 diabetes (T1D, n = 317), Addison's disease (AD, n = 107) or Hashimoto's thyroiditis (HT, n = 61)], those with a polyglandular autoimmune syndrome type II (combination of T1D and/or AD with HT or GD, n = 62) as well as in healthy controls (HC, n = 275).
Results: T1D patients carried the homozygous genotypes "CC" of the -2221Msp(C/T) and "AA" of the -23HphI(A/T) polymorphisms significantly more often than the HC (78.5% vs. 66.2%, p = 0.0027 and 75.4% vs. 52.4%, p = 3.7 × 10^-8, respectively). The distribution of insulin gene polymorphisms did not show significant differences between patients with AD, HT, or APS-II and HC.
Conclusion: We demonstrate that allele "C" of the -2221Msp(C/T) and allele "A" of the -23HphI(A/T) insulin gene polymorphisms confer susceptibility to T1D, but not to isolated AD or HT, nor to these diseases as part of APS-II.
Poster presentation A central problem in neuroscience is to bridge local synaptic plasticity and the global behavior of a system. It has been shown that Hebbian learning of connections in a feedforward network performs PCA on its inputs [1]. In a recurrent Hopfield network with binary units, the Hebbian-learnt patterns form the attractors of the network [2]. Starting from a random recurrent network, Hebbian learning reduces system complexity from chaotic to fixed point [3]. In this paper, we investigate the effect of Hebbian plasticity on the attractors of a continuous dynamical system. In a Hopfield network with binary units, it can be shown that Hebbian learning of an attractor stabilizes it, with a deepened energy landscape and a larger basin of attraction. We are interested in how these properties carry over to continuous dynamical systems. Consider a system of the form dx_i/dt = -x_i + Σ_j T_ij f_j(x_j) (1), where x_i is a real variable and f_i a nondecreasing nonlinear function with range [-1,1]. T is the synaptic matrix, which is assumed to have been learned from orthogonal binary ({1,-1}) patterns ξ^μ by the Hebbian rule T = (1/N) Σ_μ ξ^μ(ξ^μ)^T. Similar to the continuous Hopfield network [4], the ξ^μ are no longer attractors unless the gains g_i are big. Assume that the system settles down to an attractor X*, and undergoes Hebbian plasticity: T' = T + εX*(X*)^T, where ε > 0 is the learning rate. We study how the attractor dynamics change following this plasticity. We show that, in system (1) under certain general conditions, Hebbian plasticity makes the attractor move towards its corner of the hypercube. Linear stability analysis around the attractor shows that the maximum eigenvalue becomes more negative with learning, indicating a deeper landscape. This in a way improves the system's ability to retrieve the corresponding stored binary pattern, although the attractor itself is no longer stabilized the way it is in binary Hopfield networks.
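A minimal numerical sketch of this setup (illustrative, not the authors' code): a four-unit network storing one binary pattern with tanh nonlinearities; after the system settles to an attractor, the Hebbian update T' = T + εX*(X*)^T is applied, and the attractor moves outward, so the unit outputs approach the corner of the hypercube.

```python
import math

def simulate(T, x, gain=2.0, dt=0.1, steps=2000):
    """Euler-integrate dx_i/dt = -x_i + sum_j T_ij * tanh(gain * x_j)."""
    n = len(x)
    for _ in range(steps):
        f = [math.tanh(gain * xj) for xj in x]
        x = [x[i] + dt * (-x[i] + sum(T[i][j] * f[j] for j in range(n)))
             for i in range(n)]
    return x

n = 4
xi = [1, -1, 1, -1]                                   # stored binary pattern
T = [[xi[i] * xi[j] / n for j in range(n)] for i in range(n)]  # Hebbian matrix

# Attractor lies near c * xi with |c| < 1 (the pattern corner is not reached).
x_star = simulate(T, [0.2 * v for v in xi])

# Hebbian plasticity on the attained attractor: T' = T + eps * X* X*^T.
eps = 0.2
T2 = [[T[i][j] + eps * x_star[i] * x_star[j] for j in range(n)]
      for i in range(n)]
x_new = simulate(T2, x_star)   # attractor moves toward the hypercube corner
```

After the update, each |x_new[i]| exceeds |x_star[i]|, so tanh(gain * x) lies closer to ±1: the behavior the abstract describes.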
Introduction To investigate the predictive value of clinical and biological markers for a pathological complete remission after a preoperative dose-dense regimen of doxorubicin and docetaxel, with or without tamoxifen, in primary operable breast cancer. Methods Patients with a histologically confirmed diagnosis of previously untreated, operable, and measurable primary breast cancer (tumour (T), nodes (N) and metastases (M) score: T2-3(>= 3 cm) N0-2 M0) were treated in a prospectively randomised trial with four cycles of dose-dense (bi-weekly) doxorubicin and docetaxel (ddAT) chemotherapy, with or without tamoxifen, prior to surgery. Clinical and pathological parameters (menopausal status, clinical tumour size and nodal status, grade, and clinical response after two cycles) and a panel of biomarkers (oestrogen and progesterone receptors, Ki-67, human epidermal growth factor receptor 2 (HER2), p53, bcl-2, all detected by immunohistochemistry) were correlated with the detection of a pathological complete response (pCR). Results A pCR was observed in 9.7% of the 248 patients randomised in the study and in 8.6% of the subset of 196 patients with available tumour tissue. Clinically negative axillary lymph nodes, poor tumour differentiation, negative oestrogen receptor status, negative progesterone receptor status, and loss of bcl-2 were significantly predictive for a pCR in a univariate logistic regression model, whereas in a multivariate analysis only the clinical nodal status and hormonal receptor status provided significantly independent information. Backward stepwise logistic regression revealed a response after two cycles, with hormone receptor status and lymph-node status as significant predictors. Patients with a low percentage of cells stained positive for Ki-67 showed a better response when treated with tamoxifen, whereas patients with a high percentage of Ki-67 positive cells did not have an additional benefit when treated with tamoxifen.
Tumours overexpressing HER2 showed a similar response to that in HER2-negative patients when treated without tamoxifen, but when HER2-positive tumours were treated with tamoxifen, no pCR was observed. Conclusion Reliable prediction of a pathological complete response after preoperative chemotherapy is not possible with clinical and biological factors routinely determined before the start of treatment. The response after two cycles of chemotherapy is a strong but dependent predictor. The only independent factor in this subset of patients was bcl-2. Trial registration number NCT00543829
Background This study was carried out to compare the HRQoL of patients in general practice with differing chronic diseases with the HRQoL of patients without chronic conditions, to evaluate the HRQoL of general practice patients in Germany compared with the HRQoL of the general population, and to explore the influence of different chronic diseases on patients' HRQoL, independently of the effects of multiple confounding variables. Methods A cross-sectional questionnaire survey including the SF-36, the EQ-5D and demographic questions was conducted in 20 general practices in Germany. 1009 consecutive patients aged 15–89 participated. The SF-36 scale scores of general practice patients with differing chronic diseases were compared with those of patients without chronic conditions. Differences in the SF-36 scale/summary scores and proportions in the EQ-5D dimensions between patients and the general population were analyzed. Independent effects of chronic conditions and demographic variables on the HRQoL were analyzed using multivariable linear regression and polynomial regression models. Results The HRQoL for general practice patients with differing chronic diseases tended to show more physical than mental health impairments compared with the reference group of patients without chronic conditions. Patients in general practice in Germany had considerably lower SF-36 scores than the general population (P < 0.001 for all) and showed significantly higher proportions of problems in all EQ-5D dimensions except for the self-care dimension (P < 0.001 for all). The mean EQ VAS for general practice patients was lower than that for the general population (69.2 versus 77.4, P < 0.001). The HRQoL for general practice patients in Germany seemed to be more strongly affected by diseases like depression, back pain, OA of the knee, and cancer than by hypertension and diabetes.
Conclusion General practice patients with differing chronic diseases in Germany had impaired quality of life, especially in terms of physical health. The independent impacts on the HRQoL differed depending on the type of chronic disease. Findings from this study might help health professionals pay greater attention to the more burdensome diseases in primary care from the patient's perspective.
Background: This article reports on the relationship between cultural influences on life style, coping style, and sleep in a sample of female Portuguese immigrants living in Germany. Sleep quality is known to be poorer in women than in men, yet little is known about mediating psychological and sociological variables such as stress and coping with stressful life circumstances. Migration constitutes a particularly difficult life circumstance for women if it involves differing role conceptions in the country of origin and the emigrant country.
Methods: The study investigated sleep quality, coping styles and level of integration in a sample of Portuguese (N = 48) and Moroccan (N = 64) immigrant women who took part in a structured personal interview.
Results: Sleep quality was poor in 54% of Portuguese and 39% of Moroccan women, rates that far exceed those reported in epidemiologic studies of sleep quality in German women. Reports of poor sleep were associated with the degree of adoption of a German life style. Women who had integrated more into German society slept worse than less integrated women in both samples, suggesting that non-integration serves a protective function. An unusually large proportion of women preferred an information-seeking (monitoring) coping style and adaptive coping. Poor sleep was related to high monitoring in the Portuguese but not the Moroccan sample.
Conclusion: Sleep quality appears to be severely affected in women with a migration background. Our data suggest that non-integration may be less stressful than integration. This result points to possible benefits of non-integration. The high preference for an information-seeking coping style may be related to the process of migration, representing the attempt at regaining control over an uncontrollable and stressful life situation.
In this paper we compare the distribution of PPs introducing external arguments in nominalizations with PPs introducing external arguments in the verbal domain. We show that several mismatches exist between the behavior of PPs in nominalizations and PPs in the verbal domain. This leads us to suggest that while PPs in the verbal domain are licensed by functional structure alone, within the nominal domain, PPs can also be licensed via an interplay of the encyclopaedic meaning of the root involved and the properties of the preposition itself. This second mechanism kicks in in the absence of functional structure.
Structuring participles
(2008)
In this paper we discuss three types of adjectival participles in Greek, ending in -tos and -menos, and provide a further argument for the view that finer distinctions are necessary in the domain of participles (Kratzer 2001, Embick 2004). We further compare Greek stative participles to their German (and English) counterparts. We propose that a number of semantic as well as syntactic differences shown by these participles derive from differences in their respective morpho-syntactic composition.
In this paper we investigate the distribution of PPs related to external arguments (agent, causer, instrument, causing event) in Greek. We argue that their distribution supports an analysis according to which agentive/instrument PPs and causer PPs are licensed by distinct functional heads. We argue against a conceivable alternative analysis, which links agentivity and causation to the prepositions themselves. We furthermore identify a particular type of Voice head in Greek anticausatives, realised by non-active Voice morphology.
On the role of syntactic locality in morphological processes : the case of (Greek) derived nominals
(2008)
The paper is structured as follows. In section 2, I briefly summarize the facts on English and Greek nominalizations. In section 3, I discuss English nominal derivation in some detail. In section 4, I turn to the question of the licensing of AS in nominals. In section 5, I address the optionality of the licensing of AS in the nominal system.
This paper deals with the variable position of adjectives in the Romanian DP. Like all other Romance languages, Romanian allows adjectives to appear in both prenominal and postnominal position. In addition, however, Romanian has a third pattern: the so-called cel construction, in which the adjective in postnominal position is preceded by a determiner-like element, cel. This pattern is superficially similar to Determiner Spreading in Greek. In this paper we contrast the cel construction to Greek DS and discuss the similarities and differences between the two. We then present an analysis of cel as involving an appositive specification clause, building on de Vries (2002). We argue that the same structure is also involved in the context of nominal ellipsis, the second environment in which cel is found.
Pulsed electron-electron double resonance (PELDOR) is a well-established method for nanometer-range distance measurements between two nitroxide spin labels. In this thesis the applicability of this method to counting the number of spins is tested. Furthermore, this work explored the limits up to which PELDOR data obtained on copper(II)-nitroxide complexes can be quantitatively interpreted. Spin counting provides access to oligomerization studies – monitoring the assembly of homo- or hetero-oligomers from singly labeled compounds. The experimental calibration was performed using model systems, which contain one to four nitroxide radicals. The results show that monomers, dimers, trimers, and tetramers can be distinguished within an error of 5% in the number of spins. Moreover, a detailed analysis of the distance distributions in model complexes revealed that more than one distance can be extracted from complexes bearing several spins; for example, three different distances were resolved in a model tetramer – the other three possible distances being symmetry related. Furthermore, systems exhibiting mixtures of oligomeric states complicate the analysis of the data, because the average number of spin centers contributes nonlinearly to the signal and the different relaxation behavior of the oligomers has to be treated explicitly. Experiments solving these problems are proposed in the thesis. Thus, for the first time spin counting has been experimentally calibrated using fully characterized test systems bearing up to four spins. Moreover, the behavior of mixtures was quantitatively interpreted. In addition, it has been shown that several spin-spin distances within a molecule can be extracted from a single dataset. In the second part of the thesis PELDOR experiments on a spin-labeled copper(II)-porphyrin have been quantitatively analyzed. Metal-nitroxide distance measurements are a valuable tool for the triangulation of paramagnetic metal ions.
Therefore, X-band PELDOR experiments at different frequencies have been performed. The data exhibits only weak orientation selection, but a fast damping of the oscillation. The experimental data has been interpreted based upon quantitative simulations. The influence of orientation selection, conformational flexibility, spin-density distribution, exchange interaction J, as well as anisotropy and strains of the g-tensor has been examined. An estimate of the spin-density delocalization has been obtained by density functional theory calculations. The dipolar interaction tensor was calculated from the point-charge model, the extension of the point-dipole approximation to several spin bearing centers. Even assuming asymmetric spin distributions induced by an ensemble of asymmetrically distorted porphyrins the effect of delocalization on the PELDOR time trace is weak. The observed damping of dipolar oscillations has been only reproduced by simulations, if a small distribution in J was assumed. It has been shown that the experimental damping of dipolar modulations is not solely due to conformational heterogeneity. In conclusion the quantitative interpretation of PELDOR data is extended to copper-nitroxide- and multi-spin-systems. The influence of the mean distance, of the number of coupled spins, of the conformational flexibility, of spin-density distribution and of the electronic structure of the spin centers has been analyzed using model systems. The insights on model compounds mimicking spin-labeled biomacromolecules – in oligomeric or metal bound states – calibrate the method with respect to the information that can be deduced from the experimental data. The resulting in-depth understanding allows correlating experimental results (from for example biological systems) with models of structure and dynamics. It also opens new fields for PELDOR as for example triangulation of metal centers and oligomerization studies. 
In general, this thesis has demonstrated that modern pulsed electron paramagnetic resonance techniques in combination with quantitative data analysis can contribute to a detailed insight into molecular structure and dynamics.
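Two rule-of-thumb relations from the PELDOR literature illustrate the quantities involved (a sketch with approximate constants, not the thesis's analysis code): the perpendicular dipolar frequency for a pair of electron spins with g ≈ 2, nu ≈ 52.04 MHz / (r/nm)^3, and the scaling of the modulation depth with the number of coupled spins that underlies spin counting.

```python
def dipolar_frequency_mhz(r_nm):
    """Perpendicular dipolar frequency (MHz) for two g ~ 2 electron spins
    separated by r_nm nanometers; constant is approximate."""
    return 52.04 / r_nm ** 3

def distance_nm(nu_mhz):
    """Inverse relation: distance (nm) from the dipolar frequency (MHz)."""
    return (52.04 / nu_mhz) ** (1.0 / 3.0)

def modulation_depth(n_spins, lam):
    """Spin-counting relation: total modulation depth of an n-spin system,
    where lam is the depth observed for an isolated pair. Larger oligomers
    give deeper modulation, which is what lets one count spins."""
    return 1.0 - (1.0 - lam) ** (n_spins - 1)

nu = dipolar_frequency_mhz(2.0)   # a 2 nm separation gives roughly 6.5 MHz
```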
Extracts of Boswellia serrata, also known as Indian frankincense, have been used to treat inflammatory diseases in Indian ayurvedic medicine and traditional Chinese medicine (TCM) for over 3000 years, but the molecular mechanisms of the anti-inflammatory effects are still not well understood. It is evident that the boswellic acids, the major compounds in the extracts, are responsible for the efficacy. This work employed a protein fishing technique to identify putative targets of boswellic acids at different stages within the inflammatory cascade. For fishing experiments, boswellic acids were immobilized on sepharose and incubated with cell lysates. After washing and boiling, fished proteins were separated by SDS-PAGE and analysed by MALDI-TOF-MS. CatG, DNA-PK and the protein kinase Akt were identified by protein pulldowns with immobilised BAs and characterised as selective and important targets for BAs, with an IC50 in the range of physiologically achievable plasma levels up to 5 microM. In addition, the influence of BAs on several signal transduction pathways was tested. Calcium influx, arachidonic acid release, platelet aggregation and TNFalpha release were assayed to reveal further pharmacological effects of BAs. Celecoxib is a well-known selective COX-2 inhibitor that is in clinical use. In this work, it is demonstrated that celecoxib is also a highly potent direct 5-LO inhibitor. Celecoxib is used in arthritis and its gastro-intestinal side effects are reduced compared to non-selective NSAIDs. In patients with a familial disposition to polyp formation, celecoxib reduced polyps and the incidence of colon cancer. Because of lowered leukotriene levels in patients under celecoxib therapy, it was plausible to test whether celecoxib interferes with 5-LO. Here it is shown that the activity of 5-LO is inhibited in PMNL and cell-free assays with IC50 values of 8 microM in intact cells, 20 microM with supplemented arachidonic acid and 30 microM in cell-free systems.
Thus, celecoxib is a dual inhibitor of COX-2 and 5-LO. Since 2006, celecoxib has been approved as an orphan drug for the treatment of familial adenomatous polyposis. Aside from this indication, it could be useful for treatment of asthma and other diseases where 5-LO is implicated.
A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures
(2008)
As an alternative approach to lattices and space groups, this work explores graph theory as a means to model crystal structures. The approach uses quotient graphs and nets - the graph theoretical equivalent of cells and lattices - to represent crystal structures. After a short review of related work, new classes of cycles in nets are introduced, and their ability to distinguish between non-isomorphic nets and their computational complexity are evaluated. Then, two methods to estimate a structure’s density from the corresponding net are proposed. The first uses coordination sequences to estimate the number of nodes in a sphere, whereas the second method determines the maximal volume of a unit cell. Based on the quotient graph only, methods are proposed to determine whether nets consist of islands, chains, planes, or penetrating, disconnected sub-nets. An algorithm for the enumeration of crystal structures is revised and extended to a search for structures possessing certain properties. Particular attention is given to the exclusion of redundant nets and those which, by the nature of their connectivity, cannot correspond to a crystal structure. Nets with four four-coordinated nodes, corresponding to sp3 hybridised carbon polymorphs with four atoms per unit cell, are completely enumerated in order to demonstrate the approach. In order to render quotient graphs and nets independent from crystal structures, they are reintroduced in a purely graph-theoretical way. Based on this, the issue of iso- and automorphism of nets is reexamined. It is shown that the topology of a net (that is, the bonds in a crystal) severely constrains the symmetry of the embedding (that is, the crystal) and, in the case of connected nets, the space group except for the setting. Several examples are studied and conclusions on phases are drawn (pseudo-cubic FeS2 versus pyrite; α- versus β-quartz; marcasite- versus rutile-like phases).
As the automorphisms of certain quotient graphs stipulate a translational symmetry higher than an arbitrary embedding of the corresponding net would show, they are examined in more detail and a method to reduce the size of such quotient graphs is proposed. Besides two instructional examples with 2-dimensional graphs, the halite, calcite, magnesite, barytocalcite, and strontium feldspar structures are discussed. For some of the structures it is shown that the quotient graph equivalent to a centred cell can be reduced to a quotient graph equivalent to the primitive cell. For the partially disordered strontium feldspar, it is shown that even if it could be annealed to an ordered structure, the unit cell would likely remain unchanged. For the calcite and barytocalcite structures it is shown that the equivalent nets are not isomorphic.
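The coordination-sequence idea can be sketched with a breadth-first search over a net, here the two-dimensional square lattice generated on the fly (an illustrative toy, not the thesis's quotient-graph implementation). The sequence counts nodes shell by shell, which is what makes it usable as a density estimate.

```python
def coordination_sequence(origin, neighbors, shells):
    """Number of nodes at graph distance 1..shells from origin,
    computed as successive BFS shells."""
    seen = {origin}
    frontier = [origin]
    seq = []
    for _ in range(shells):
        nxt = []
        for node in frontier:
            for nb in neighbors(node):
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        seq.append(len(nxt))
        frontier = nxt
    return seq

# The (infinite) square lattice as a net: each node has four neighbors.
def square_neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

cs = coordination_sequence((0, 0), square_neighbors, 4)   # [4, 8, 12, 16]
```

The cumulative sum of the sequence counts the nodes within a given graph-distance sphere; comparing its growth rate with the maximal unit-cell volume gives the two density estimates mentioned above.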
In this work, data of the NA49 experiment at the CERN SPS on the energy dependence of multiplicity fluctuations in central Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV, as well as on the system size dependence at 158A GeV, are analysed for positively, negatively and all charged hadrons. Furthermore, the rapidity and transverse momentum dependence of multiplicity fluctuations is studied. The experimental results are compared to predictions of statistical hadron-gas and string-hadronic models. It is expected that multiplicity fluctuations are sensitive to the phase transition to the quark-gluon plasma (QGP) and to the critical point of strongly interacting matter. It is predicted that both the onset of deconfinement, the lowest energy where QGP is created, and the critical point are located in the SPS energy range. Furthermore, since the predictions of statistical and string-hadronic models for multiplicity fluctuations differ, the experimental data might make it possible to distinguish between them. The measure of multiplicity fluctuations used here is the scaled variance omega, defined as the ratio of the variance and the mean of the multiplicity distribution. In the NA49 experiment the tracks of charged particles are detected in four large-volume time projection chambers (TPCs). To remove possible detector effects, a detailed study of event and track selection criteria is performed. Naively one would expect Poisson fluctuations in central heavy ion collisions. A suppression of fluctuations compared to a Poisson distribution is observed for positively and negatively charged hadrons at forward rapidity in Pb+Pb collisions. At midrapidity and for all charged hadrons the fluctuations are larger than the Poisson ones. The fluctuations seem to increase with decreasing system size. It is suggested that this is due to increased relative fluctuations in the number of participants. Furthermore, it was discovered that omega increases with decreasing rapidity and transverse momentum.
A hadron-gas model predicts different values of omega for different statistical ensembles. In the grand-canonical ensemble, where all conservation laws are fulfilled only on average and not on an event-by-event basis, the predicted fluctuations are the largest. In the canonical ensemble the charges, namely the electric charge, the baryon number and the strangeness, are conserved in each event; the scaled variance in this ensemble is smaller than in the grand-canonical ensemble. In the micro-canonical ensemble not only the charges but also the energy and the momentum are conserved in each event, and the predicted omega is the smallest. The grand-canonical and canonical formulations of the hadron-gas model over-predict the fluctuations in the forward acceptance. In contrast to the experimental data, no dependence of omega on rapidity and transverse momentum is expected. For the micro-canonical formulation, which predicts small fluctuations in the total phase space, no quantitative calculation is available yet for the limited experimental acceptance. The increase of fluctuations at low rapidities and transverse momenta can be qualitatively understood in a micro-canonical ensemble as an effect of energy and momentum conservation. The string-hadronic model UrQMD significantly over-predicts the mean multiplicities but approximately reproduces the scaled variance of the multiplicity distributions at all measured collision energies, systems and phase-space intervals. String-hadronic models predict a monotonic increase of omega with collision energy for Pb+Pb collisions, similar to the observations for p+p interactions. This is in contrast to the predictions of the hadron-gas model, where omega shows no energy dependence at higher energies. At SPS energies the predictions of the string-hadronic and hadron-gas models are of the same order of magnitude, but at RHIC and LHC energies the difference in omega in the full phase space is much larger.
Experimental data should thus be able to distinguish between them rather easily. The narrower-than-Poissonian (omega < 1) multiplicity fluctuations measured in the forward kinematic region (1 < y(pi) < y_beam) can be related to the reduced fluctuations predicted for relativistic gases with imposed conservation laws. This general feature of relativistic gases may be preserved also for some non-equilibrium systems as modelled by the string-hadronic approaches. A quantitative estimate shows that the predicted maximum in fluctuations due to a first-order phase transition from hadron gas to QGP is smaller than the experimental errors of the present experiment and can therefore neither be confirmed nor disproved. No sign of the increased fluctuations expected for a freeze-out near the critical point of strongly interacting matter is observed.
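The scaled variance omega described above (the ratio of the variance to the mean of the event-by-event multiplicity distribution) is straightforward to compute. The following is a minimal illustrative sketch; the sample multiplicities are invented, not NA49 data:

```python
import statistics

def scaled_variance(multiplicities):
    """Scaled variance omega = Var[N] / <N> of a multiplicity distribution.

    omega = 1 for a Poisson distribution; omega < 1 signals suppressed
    fluctuations (as observed at forward rapidity), omega > 1 enhanced ones.
    """
    mean = statistics.fmean(multiplicities)
    # Population variance: each entry is one event's multiplicity.
    var = statistics.pvariance(multiplicities)
    return var / mean

# Toy event sample (hypothetical multiplicities per event):
events = [412, 398, 405, 391, 420, 403, 397, 410]
omega = scaled_variance(events)
```

A fixed multiplicity in every event gives omega = 0, the limiting case of full suppression by conservation laws.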
In the first part of this study, we identified the two steroid hormones progesterone and norgestimate as novel TRPC channel blockers. Both substances blocked TRPC-mediated Ca2+ influx with micromolar potency in fluorometric measurements. TRPC channel inhibition did not seem to be a general steroid effect, since another progestin, the norgestimate metabolite levonorgestrel, was not effective. Norgestimate was 4- to 5-fold more active on the TRPC3/6/7 subfamily than on TRPC4/5, whereas progesterone was similarly potent on both. This selectivity of norgestimate was confirmed by patch-clamp recordings. As norgestimate blocked channels directly gated by DAG with fast kinetics, we assume the compound acts on the channel protein itself. This view was further substantiated by the lack of effects on IP3R-mediated Ca2+ release from the endoplasmic reticulum, which is activated in parallel with TRPCs by stimulation of Gq/11-coupled receptors. Norgestimate blocked not only ectopically expressed TRPC channels but also native, TRPC-mediated currents in rat aortic smooth muscle cells with similar potency. The usefulness of norgestimate as a tool compound for the investigation of physiological TRPC functions was tested in isolated vessel rings. Consistent with TRPC6 being an essential component of the alpha-1-adrenoceptor-activated cation channel, we demonstrated a direct, endothelium-independent vasorelaxant effect of norgestimate on rat aortic rings precontracted with phenylephrine. Thus, our results provide further experimental support for a role of TRPC6 in alpha-1-adrenergic vessel constriction. In the second part of this study, we screened a human aorta cDNA library for novel TRPC4-interacting proteins with a modified yeast two-hybrid (Y2H) system in which the TRPC4 C-terminus was expressed as a tetrameric bait protein, thereby mimicking the native channel conformation.
Of the eleven interacting proteins found, SESTD1 was chosen for further analysis since it contains a phospholipid-binding Sec14p-like domain and could thus be involved in the regulation of TRPC channels by phospholipids. After biochemical validation of the interaction, the first spectrin domain of SESTD1 was identified in directed Y2H tests as the region that interacts with the CIRB domain of TRPC4. SESTD1 also co-immunoprecipitated with the closely related TRPC5 protein, in which the SESTD1-binding domain is highly conserved. Independent of the CIRB site, co-immunoprecipitation with TRPC6 and with the distantly related TRPM8 channel was observed, indicating the existence of other sites in these channel proteins that mediate the interaction with SESTD1. Analysis of SESTD1 gene expression in human tissues showed that its transcripts are ubiquitously expressed, and tissues with significant co-expression with TRPC4 and -5 were identified. We generated two polyclonal antisera directed against SESTD1 that consistently detected SESTD1 protein in brain, aorta, heart, and in smooth muscle and endothelial cells. The functional consequences of the interaction were investigated by examining TRPC5-mediated Ca2+ influx in a clonal HM1 cell line stably expressing the channel. Since SESTD1 overexpression had no detectable effect on TRPC5-mediated Ca2+ influx, most likely due to expression of endogenous SESTD1, we knocked down the native protein with specific siRNA. This procedure reduced TRPC5-mediated Ca2+ influx following receptor stimulation by 50%. Parallel biotinylation experiments did not reveal any differences in cell-surface-expressed TRPC5 protein, suggesting that the reduction of TRPC5 activity resulted from the loss of a direct SESTD1 effect on the channel. In addition, in immunofluorescence experiments we observed that reduced SESTD1 protein levels resulted in a redistribution of the multifunctional protein β-catenin from the plasma membrane to the cytosol.
This result may point to an involvement of SESTD1 in the formation and maintenance of adherens junctions. SESTD1 contains a phospholipid-binding Sec14p-like domain, and we were the first to demonstrate its Ca2+-dependent binding to phosphatidic acid and to all physiological phosphatidylinositol mono- and bisphosphates in vitro. The physiological function of this binding activity is not known at present, but it could play a role in the regulation of associated TRPC channels. TRPC4 and -5 channels are activated by phospholipid hydrolysis and also bind phospholipids directly. The identification of SESTD1 as a novel TRPC-interacting protein could thus be an important step forward in the investigation and better understanding of the complex molecular mechanisms of TRP channel regulation by lipids.
Writing against the odds : the South’s cultural and literary struggle against progress and modernity
(2008)
The literature and culture of the American South are decisively shaped by their orientation toward the region's own history and past. A dark past that overshadows the present and determines the future is a Southern theme par excellence, omnipresent in the South's culture and literature. After the Civil War and the Reconstruction era, the South was culturally and economically drained, prostrate and isolated. After the war, the gulf between the Northern and Southern states widened ever further, a process, however, that is as old as the United States itself and had its beginnings as early as the opening of the eighteenth century. This isolation was at once intended and unintended, conscious and unconscious. The shame of the lost war and the region's marginalization were the catalysts for the cultivation, and the striving for preservation, of the South's distinctive features, with its supposedly superior culture and morality. There began the commercialized, highly ideologized construction of Southern history and identity, which radiates into every sphere of life. The melancholy gaze into the past, as the most important reference point and cultural vanishing point, took the place of an entry into modernity, with its fast pace, its interchangeability, and its surrender of tradition to a rapid present. Instead of facing a flood of choices and options, the Southern individual was confronted with a constricting society that left little room for deviation and maintained a harsh system of control. It is a unique mixture of pride, shame, and a feeling of simultaneous inferiority and superiority that produces a particularly fertile literary soil.
This study traces the historical, cultural, and literary roots of Southern literature since the Southern Renaissance in order to lay out the constantly perpetuated structures of form and content, which have undergone little change. This perpetuation results from the unique situation of the South, from a historical burden that remains undiminishedly topical and is far from worked through. Southern authors could not, and still cannot, discard the traditional forms and themes as long as these remain constitutive components of Southern culture and identity. The South refuses modernity and perceives progress and modern mass society not only as a threat, but as a Northern influence that endangers its own culture, an interference from outside that must be warded off. Tradition-conscious, reactionary tendencies and elements run even through supposedly progressive, modern developments and phenomena. I combine identity-constituting, isolating, and melancholic elements and examine them historically, culturally, and literarily in order to arrive at a multilayered perspective. Understanding this historical burden and its undiminished significance for, and effect on, the literature and culture of the South is essential for a deeper insight into their structures and meaning.
The German Working Group on Vegetation Databanks has held annual meetings since 2002 with financial support from the German Federal Agency for Nature Conservation. About 215 members are regularly informed through a mailing list. The 2008 meeting was hosted by the University of Oldenburg's Landscape Ecology Group and was attended by 72 participants from 15 countries. Software demonstrations of the vegetation databanks Turboveg and VegetWeb as well as the plant trait databanks LEDA and BiolFlor opened the workshop. There were lecture sessions on trait databanks, on the recalibration of ecological indicator values and on new developments in the field of vegetation databanks. Working groups were devoted to an initiative to build a meta-databank of existing vegetation databanks in Germany and to the mathematical modelling of species habitats. In 2009 the 8th workshop, on "Vegetation Databanks and Bioindication", will be held at the University of Greifswald.
Naturalness is one of the most important criteria in nature conservation. This paper examines the fundamental concepts underlying the definition and assessment of naturalness. Its role in nature conservation and forest management under conditions of global change is also discussed. The degree of naturalness may be defined in ordinal classes. The "static" concept of the potential natural vegetation (pnV), developed in the 1950s, is mostly used as the reference state. In other cases its reverse concept, the hemeroby (degree of artificiality), is assessed, based on the intensity and frequency of human impacts. Since the 1970s, more attention has been given to natural dynamics than in earlier approaches, e.g. in forest succession models. From the end of the 1980s, the earlier importance of natural browsing by large herbivores, and of the role of predators, was increasingly stressed. These large herbivores are extinct today in most European cultural landscapes. It is assumed that they open up the canopy and create park-like forest structures which contain a diversity of habitats for other types of organisms (birds, insects). Changed and permanently changing environments and altered patterns of competition between species continue to modify natural processes today. Some of the more conspicuous effects are the extinction of native species and the immigration of species into new regions. Long-lived ecosystems like forests are, however, not able to adapt quickly to such changes and may be unable to find a new balance with the environment. Today such changes occur very rapidly and are reducing the original naturalness of ecosystems. Because of this, the criterion "naturalness" must be given less weight. Conversely, more importance should be attached to other criteria, particularly originality (= original naturalness) and restorability. Forestry is contributing to this accelerated change of biocoenoses by increasing disturbances and introducing exotic tree species.
The naturalisation of some exotic tree species modifies natural processes and creates a "new allochthonous naturalness". Because of this, forest planning should try to preserve or restore stands with attributes of the "original forest". Exotic species should not be planted, or only to a very limited extent.
Objectives: To examine the dose-response relationship between cumulative exposure to kneeling and squatting, as well as to lifting and carrying of loads, and symptomatic knee osteoarthritis (OA) in a population-based case-control study. Methods: In five orthopedic clinics and five practices we recruited 295 male patients aged 25 to 70 with radiographically confirmed knee osteoarthritis associated with chronic complaints. A total of 327 male control subjects were recruited. Data were gathered in a structured personal interview. To calculate cumulative exposure, the self-reported durations of kneeling and squatting as well as of lifting and carrying of loads were summed over the entire working life. Results: The results of our study support a dose-response relationship between kneeling/squatting and symptomatic knee osteoarthritis. For a cumulative exposure to kneeling and squatting of more than 10,800 hours, the risk of having radiographically confirmed knee osteoarthritis, as measured by the odds ratio (adjusted for age, region, weight, jogging/athletics, and lifting or carrying of loads), is 2.4 (95% CI 1.1-5.0) compared to unexposed subjects. Lifting and carrying of loads is significantly associated with knee osteoarthritis independently of kneeling or similar activities. Conclusions: As the knee osteoarthritis risk is strongly elevated in occupations that involve both kneeling/squatting and heavy lifting/carrying, preventive efforts should particularly focus on these "high-risk occupations".
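The odds ratio reported in the study is adjusted for covariates via regression; as a reminder of what the underlying measure is, the following sketch computes a crude (unadjusted) odds ratio with a Woolf 95% confidence interval from a 2x2 exposure table. The counts are purely hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by Woolf's method.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(a=40, b=30, c=255, d=297)
```

If the resulting confidence interval excludes 1 (as the study's adjusted interval of 1.1-5.0 does), the association is statistically significant at the 5% level.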
The moderate halophile Halobacillus halophilus is the paradigm for chloride-dependent growth in prokaryotes. Recent experiments that shed light on the molecular basis of this chloride dependence are reviewed here. At moderate salinities, Halobacillus halophilus mainly accumulates glutamine and glutamate to adjust turgor. The transcription of glnA2 (encoding a glutamine synthetase) as well as the glutamine synthetase activity were identified as chloride-dependent steps. Halobacillus halophilus switches its osmolyte strategy and produces proline as the main compatible solute at high salinities. Furthermore, Halobacillus halophilus also shifts its osmolyte strategy at the transition from the exponential to the stationary phase, where proline is replaced by ectoine. Glutamate was found to act as a "second messenger" essential for proline production. This observation leads to a new model of salinity sensing based on the physico-chemical properties of different anions.
Increasingly, individuals are in charge of their own financial security and are confronted with ever more complex financial instruments. However, there is evidence that many individuals are not well-equipped to make sound saving decisions. This paper demonstrates widespread financial illiteracy among the U.S. population, particularly among specific demographic groups. Those with low education, women, African-Americans, and Hispanics display particularly low levels of literacy. Financial literacy impacts financial decision-making. Failure to plan for retirement, lack of participation in the stock market, and poor borrowing behavior can all be linked to ignorance of basic financial concepts. While financial education programs can result in improved saving behavior and financial decision-making, much can be done to improve these programs’ effectiveness.
Traditionally, aggregate liquidity shocks are modelled as exogenous events. Extending our previous work (Cao & Illing, 2007), this paper analyses the adequate policy response to endogenous systemic liquidity risk. We analyse the feedback between lender of last resort policy and the incentives of private banks, which determine the aggregate amount of liquidity available. We show that imposing minimum liquidity standards for banks ex ante is a crucial requirement for a sensible lender of last resort policy. In addition, we analyse the impact of equity requirements and narrow banking, in the sense that banks are required to hold sufficient liquid funds so as to pay out in all contingencies. We show that such a policy is strictly inferior to imposing minimum liquidity standards ex ante combined with lender of last resort policy.
Modern macroeconomics empirically addresses economy-wide incentives behind economic actions by using insights from the way a single representative household would behave. This analytical approach requires that the incentives of the poor and the rich are strictly aligned. In empirical analysis, a challenging complication is that consumption and income data are typically available at the household level, and individuals living in multimember households have the potential to share goods within the household. The analytical approach of modern macroeconomics would require that intra-household sharing is also strictly aligned across the rich and the poor. Here we have designed a survey method that allows the testing of this stringent property of intra-household sharing, and we find that it holds: once expenditures for basic needs are subtracted from disposable household income, the household-size economies implied by the remaining household incomes are the same for the rich and the poor.
Antibiotic resistance of pathogenic bacteria is a major worldwide problem. Bacteria can resist antibiotics by active efflux due to multidrug efflux pumps. The focus of this study has been the mycobacterial multidrug transporter TBsmr, because it belongs to the small multidrug resistance (SMR) family, whose members are a paradigm for studying multidrug efflux due to their small size. SMR proteins are typically 11-12 kDa in size and have a four-transmembrane-helix topology. They bind cationic, lipophilic antibiotics such as ethidium bromide (EtBr) and TPP+, and transport them across the membrane in exchange for protons. To understand the molecular mechanism of multidrug resistance, we have to gain information about the structure and function of these proteins. The research described in this thesis aimed to deduce details of the topology, transport cycle and key residues of TBsmr using biophysical techniques. Solid-state NMR (ssNMR) can provide detailed insight into the structural organization and dynamical properties of these systems. However, a major bottleneck is the preparation of mg amounts of isotope-labelled protein. In the case of proteoliposomes, the problem is compounded by the presence of lipids, which have to fit into the small active volume of the ssNMR rotor. In Chapter 3, an enhanced protein preparation is described which yields large amounts of TBsmr reconstituted in a native lipid environment suitable for further functional and structural studies. The high protein-to-lipid ratios achieved made a further characterization by ssNMR feasible. The transport activity and oligomeric state of the reconstituted protein in different types of lipid were studied, as shown in Chapter 4. The exact oligomeric state of native SMR proteins is still uncertain, but a number of biochemical and biophysical studies in detergent suggest that the minimal functional unit capable of binding substrate is a dimer.
However, binding assays are not ideal, since a protein may bind substrate without completing the transport cycle, which can only be shown for reconstituted protein in transport assays. By combining functional data of a TPP+ transport assay with information about the oligomeric state of reconstituted TBsmr obtained by freeze-fracture electron microscopy, it could be shown that lipids affect the function and the oligomeric state of the protein, and that the TBsmr dimer is the minimal functional unit necessary for transport. The transport cycle must involve various conformational states of the protein needed for substrate binding, translocation and release. A fluorescent substrate will therefore experience a significant change of environment while being transported, which influences its fluorescence properties. Thus the substrate itself can report intermediate states that form during the transport cycle. In Chapter 5, the existence of such a substrate-transporter complex for TBsmr and its substrate EtBr could be shown. The pH gradient needed for antiport was generated by co-reconstituting TBsmr with bacteriorhodopsin. The measurements have shown the formation of a pH-dependent, transient substrate-protein complex between binding and release of EtBr. This state was further characterized by determining the Kd, by inhibiting EtBr transport through titration with non-fluorescent substrate, and by fluorescence anisotropy measurements. The findings support a model with a single occluded intermediate state in which the substrate is highly immobile. Liquid-state NMR is a useful tool to monitor protein-ligand interactions by chemical shift mapping and thus to identify and characterize important residues in the protein which are involved in substrate binding.
In agreement with previous studies (Krueger-Koplin et al., 2004), the detergent LPPG was found to be highly suitable for liquid-state NMR studies of the membrane protein TBsmr, and 42% of the residues could be assigned, as reported in Chapter 6. However, no specific interactions with EtBr were found. This observation was confirmed by LILBID mass spectrometry, which showed that TBsmr was predominantly in the non-functional monomeric state. Functional protein was prepared in proteoliposomes, which can be investigated by solid-state NMR (Chapter 7). Besides the essential E13, the aromatic residues W63, Y40, and Y60 have been shown to be directly involved in drug binding and transport. Different isotope labeling strategies were evaluated to improve the quality of the NMR spectra in order to identify and characterize these key residues. In the reconstituted single-tryptophan mutant TBsmr W30A, the binding of ethidium bromide could be detected by 13C solid-state NMR. The measurements revealed two populations of the conserved W63 residue with distinct backbone structures in the presence of substrate. There is a controversy about the parallel or anti-parallel arrangement of the protomers in the EmrE dimer (Schuldiner, 2007), but this structural asymmetry is consistent with both a parallel and an anti-parallel topology.
After the pioneering German "Aktiengesetz" of 1965 and the Brazilian "Lei das Sociedades Anónimas" of 1976, Portugal became the third country in the world to enact a specific regulation on groups of companies. The Code of Commercial Companies ("Código das Sociedades Comerciais", hereinafter abbreviated CSC), enacted in 1986, contains a unitary set of rules regulating the relationships between companies in general, and the groups of companies in particular (arts. 481° to 508°-E CSC). With this set of rules, the Portuguese legislator has dealt with one of the major topics of modern company law. While this branch of law is traditionally conceived as the law of the individual company, modern economic reality is characterized by the massive emergence of large-scale enterprise networks, where parts of a whole business are allocated and insulated in several legally independent companies submitted to a unified economic direction. As Tom HADDEN put it: "Company lawyers still write and talk as if the single independent company, with its shareholders, directors and employees, was the norm. In reality, the individual company ceased to be the most significant form of organization in the 1920s and 1930s. The commercial world is now dominated both nationally and internationally by complex groups of companies". This trend, which is now observable in any of the largest economies in the world, also holds true for small markets such as Portugal. Although the Portuguese economy is still dominated by small and medium-sized enterprises, the organizational structure of the group has always been extremely common. During the 70s, it was estimated that the seven largest groups of companies owned about 50% of the equity capital of all domestic enterprises and were alone responsible for 3/4 of the internal national product.
Such a trend has continued and even intensified in the following decades, surviving different political and economic scenarios: during the 80s, due to the process of state nationalization of these groups, an enormous public group with more than one thousand controlled companies was created ("IPE - Instituto de Participações do Estado"); and from the 90s until today, thanks to the reprivatisation movement and the opening of our national market, we have witnessed the re-emergence of some large private groups, composed of several hundred subsidiaries each, some of which are listed on foreign stock exchanges (e.g., in the banking sector, "BCP – Banco Comercial Português", in the industrial area, "SONAE", and in the media and communication area, "Portugal-Telecom").
Reform of the securities class action is once again the subject of national debate. The impetus for this debate is the reports of three different groups – the Committee on Capital Market Regulation, the Commission on the Regulation of U.S. Capital Markets in the 21st Century, and McKinsey & Company. Each of the reports focuses on a single theme: how the contemporary regulatory culture places U.S. capital markets at a competitive disadvantage to foreign markets. While multiple regulatory forces are targeted by each report's call for reform, each of the reports singles out securities class actions as one of the prime villains that place U.S. capital markets at a competitive disadvantage. The reports' recommendations range from insignificant changes to drastic curtailments of private class actions. Surprisingly, these current-day cries echo calls for reform heeded by Congress in the not too distant past. Major reform of the securities class action occurred with the Private Securities Litigation Reform Act of 1995. Among the PSLRA's contributions is the introduction of procedures by which the court chooses from among competing petitioners a lead plaintiff for the class. The statute commands that the petitioner with the largest financial loss suffered as a consequence of the defendant's alleged misrepresentation is presumed to be the most adequate plaintiff. Thus, the lead plaintiff provision supplants the traditional "first to file" rule for selecting the suit's plaintiff with a mechanism that seeks to harness the plaintiff's economic self-interest to the suit's prosecution. Also, by eliminating the race to be the first to file, the lead plaintiff provision seeks to avoid "hair trigger" filings by overly eager plaintiffs' counsel, which Congress believed too frequently gave rise to incomplete and insubstantially pled causes of action.
The PSLRA also introduced for securities class actions a heightened pleading requirement as well as a bar to the plaintiff obtaining any discovery prior to the district court disposing of the defendants' motions to dismiss. By introducing the requirement that allegations involving fraud must be pled not only with particularity, but also so that the pled facts establish a "strong inference" of fraud, the PSLRA cast aside, albeit only for securities actions, the much lower notice pleading requirement that had been a fixture of American civil procedure for decades. Substantive changes to the law were also introduced by the PSLRA. With few exceptions, joint and several liability was replaced by proportionate liability, so that a particular defendant's liability is capped by that defendant's relative degree of fault. Similarly, contribution rights among co-violators are also based on the proportionate fault of each defendant. Three years after the PSLRA, Congress returned to the topic again by enacting the Securities Litigation Uniform Standards Act (SLUSA); this provision was prompted by aggressive efforts of plaintiff lawyers to bypass the limitations of the PSLRA, most notably the bar to discovery and the higher pleading requirement, by bringing suit in state court. Post-SLUSA, securities fraud class actions are exclusively the domain of the federal courts. In this paper, we examine the impact of the PSLRA, and more particularly the impact of the type of lead plaintiff, on the size of settlements in securities fraud class actions. We thus provide insight into whether the type of plaintiff that heads the class action affects the overall outcome of the case. Furthermore, we explore possible indicia that may explain why some suits settle for extremely small sums – small relative to the "provable losses" suffered by the class, small relative to the asset size of the defendant company, and small relative to other settlements in our sample.
This evidence bears heavily on the debate over "strike suits." Part I of this paper sets forth the contemporary debate surrounding the need for further reforms of securities class actions. In this section, we set forth the insights advanced in three prominent reports focused on the competitiveness of U.S. capital markets. In Part II we first provide descriptive statistics of our extensive data set, and then use multivariate regression analysis to explore the underlying relationships. In Part III, we closely examine small settlements for clues to whether they reflect evidence of strike suits. We conclude in Part IV with a set of policy recommendations based on our analysis of the data. Our goals in this paper are more modest than those of the Committee Report, the Chamber Report and the McKinsey Report, each of which called for wide-ranging reforms: we focus on how the PSLRA changed securities fraud settlements so as to determine whether the reforms it introduced accomplished at least some of the Act's important goals. If the PSLRA was successful, and we think it was, then one must be somewhat skeptical of the need for further cutbacks in private securities class actions so soon after the Act was passed.
The market reaction to legal shocks and their antidotes : lessons from the sovereign debt market
(2008)
This Article examines the market reaction to a series of legal events concerning the judicial interpretation of the pari passu clause in sovereign debt instruments. More generally, the Article provides insights into the reactions of investors (predominantly financial institutions), issuers (sovereigns), and those who draft bond covenants (lawyers), to unanticipated changes in the judicial interpretation of certain covenant terms.
How do fiscal and technology shocks affect real exchange rates? New evidence for the United States
(2008)
Using vector autoregressions on U.S. time series relative to an aggregate of industrialized countries, this paper provides new evidence on the dynamic effects of government spending and technology shocks on the real exchange rate and the terms of trade. To achieve identification, we derive robust restrictions on the sign of several impulse responses from a two-country general equilibrium model. We find that both the real exchange rate and the terms of trade – whose responses are left unrestricted – depreciate in response to expansionary government spending shocks and appreciate in response to positive technology shocks.
Motivated by the prominent role of electronic limit order book (LOB) markets in today’s stock market environment, this paper provides the basis for understanding, reconstructing and adapting the methodology of Hollifield, Miller, Sandas, and Slive (2006) (henceforth HMSS) for estimating the gains from trade, applying it to the Xetra LOB market at the Frankfurt Stock Exchange (FSE) in order to evaluate its performance in this respect. To this end, the paper examines HMSS’s base model in depth and provides a structured recipe for the planned implementation with Xetra LOB data. The contribution of this paper lies in modifying HMSS’s methodology to account for particularities of the Xetra trading system that are not yet considered in HMSS’s base model. The necessary modifications, expressed as empirical caveats, are essential for ultimately deriving unbiased market efficiency measures for Xetra.
We explore the pattern of elderly homeownership using microeconomic surveys of 15 OECD countries, merging 60 national household surveys on about 300,000 individuals. In all countries the survey is repeated over time, permitting construction of an international dataset of repeated cross-sectional data. We find that ownership rates decline considerably after age 60 in all countries. However, a large part of the decline depends on cohort effects. Adjusting for them, we find that ownership rates start falling after age 70 and reach a percentage point per year decline after age 75. We find that differences across country ownership trajectories are correlated with indicators measuring the degree of market regulations.
This paper introduces adaptive learning and endogenous indexation into the New-Keynesian Phillips curve and studies disinflation under inflation targeting policies. The analysis is motivated by the disinflation performance of many inflation-targeting countries, in particular the gradual Chilean disinflation with temporary annual targets. At the start of the disinflation episode, price-setting firms expect inflation to be highly persistent and opt for backward-looking indexation. As the central bank acts to bring inflation under control, price-setting firms revise their estimates of the degree of persistence. Such adaptive learning lowers the cost of disinflation. This reduction can be exploited by a gradual approach to disinflation. Firms that choose the rate for indexation also re-assess the likelihood that announced inflation targets determine steady-state inflation and adjust the indexation of contracts accordingly. A strategy of announcing and pursuing short-term targets for inflation is found to influence the likelihood that firms switch from backward-looking indexation to the central bank’s targets. As firms abandon backward-looking indexation, the costs of disinflation decline further. We show that an inflation targeting strategy that employs temporary targets can benefit from lower disinflation costs due to the reduction in backward-looking indexation.
Monetary policy analysts often rely on rules-of-thumb, such as the Taylor rule, to describe historical monetary policy decisions and to compare current policy to historical norms. Analysis along these lines also permits evaluation of episodes where policy may have deviated from a simple rule and examination of the reasons behind such deviations. One interesting question is whether such rules-of-thumb should draw on policymakers’ forecasts of key variables such as inflation and unemployment or on observed outcomes. Importantly, deviations of policy from the prescriptions of a Taylor rule that relies on outcomes may be due to systematic responses to information captured in policymakers’ own projections. We investigate this proposition in the context of FOMC policy decisions over the past 20 years using publicly available FOMC projections from the biannual monetary policy reports to the Congress (Humphrey-Hawkins reports). Our results indicate that FOMC decisions can indeed be predominantly explained in terms of the FOMC’s own projections rather than observed outcomes. Thus, a forecast-based rule-of-thumb better characterizes FOMC decision-making. We also confirm that many of the apparent deviations of the federal funds rate from an outcome-based Taylor-style rule may be considered systematic responses to information contained in FOMC projections.
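A Taylor-style rule as discussed here has a simple closed form; a minimal sketch (using the illustrative coefficients of Taylor (1993), not the responses estimated in the paper) shows how a forecast-based and an outcome-based prescription can diverge when projections anticipate conditions that outcomes do not yet show:

```python
def taylor_rule(pi, gap, pi_star=2.0, r_star=2.0, a_pi=0.5, a_gap=0.5):
    """Classic Taylor (1993) prescription for the nominal funds rate.

    pi  : inflation (forecast or outcome), in percent
    gap : output gap (or a similar activity measure), in percent
    """
    return r_star + pi + a_pi * (pi - pi_star) + a_gap * gap

# Feeding the same rule outcomes vs. projections yields different rates
# when projections foresee disinflation and a slowdown (numbers made up):
outcome_rate  = taylor_rule(pi=3.0, gap=1.0)   # 6.0
forecast_rate = taylor_rule(pi=2.0, gap=-0.5)  # 3.75
```

The gap between the two prescriptions is exactly the kind of "deviation" that an outcome-based rule would flag but a forecast-based rule would explain.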
Risk transfer with CDOs
(2008)
Modern bank management comprises both classical lending business and transfer of asset risk to capital markets through securitization. Sound knowledge of the risks involved in securitization transactions is a prerequisite for solid risk management. This paper aims to resolve a part of the opaqueness surrounding credit-risk allocation to tranches that represent claims of different seniority on a reference portfolio. In particular, this paper analyzes the allocation of credit risk to different tranches of a CDO transaction when the underlying asset returns are driven by a common macro factor and an idiosyncratic component. Junior and senior tranches are found to be nearly orthogonal, motivating a search for the whereabouts of systematic risk in CDO transactions. We propose a metric for capturing the allocation of systematic risk to tranches. First, in contrast to a widely-held claim, we show that (extreme) tail risk in standard CDO transactions is held by all tranches. While junior tranches take on all types of systematic risk, senior tranches take on almost no non-tail risk. This is in stark contrast to an untranched bond portfolio of the same rating quality, which on average suffers substantial losses for all realizations of the macro factor. Second, given tranching, a shock to the risk of the underlying asset portfolio (e.g. a rise in asset correlation or in mean portfolio loss) has the strongest impact, in relative terms, on the exposure of senior tranche CDO-investors. Our findings can be used to explain major stylized facts observed in credit markets.
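The seniority structure described here can be made concrete with the standard attachment/detachment payoff of a tranche; the attachment points below (0-3%, 3-7%, 7-100%) are illustrative choices, not the capital structure analyzed in the paper:

```python
def tranche_loss(portfolio_loss, attach, detach):
    """Loss absorbed by a tranche with attachment/detachment points,
    all expressed as fractions of the reference portfolio notional."""
    return min(max(portfolio_loss - attach, 0.0), detach - attach)

# Equity (0-3%), mezzanine (3-7%) and senior (7-100%) tranches:
# a 5% portfolio loss wipes out equity and eats into mezzanine only.
for a, d in [(0.00, 0.03), (0.03, 0.07), (0.07, 1.00)]:
    print(tranche_loss(0.05, a, d))   # prints 0.03, then 0.02, then 0.0
```

Because the senior tranche only loses once the junior pieces are exhausted, its losses are concentrated in the extreme realizations of the common macro factor, which is the "tail risk" allocation the paper quantifies.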
We show that the use of correlations for modeling dependencies may lead to counterintuitive behavior of risk measures, such as Value-at-Risk (VaR) and Expected Shortfall (ES), when the risk of very rare events is assessed via Monte-Carlo techniques. The phenomenon is demonstrated for mixture models adapted from credit risk analysis as well as for common Poisson-shock models used in reliability theory. An obvious implication of this finding pertains to the analysis of operational risk. The alleged incentive suggested by the New Basel Capital Accord (Basel II), namely decreasing minimum capital requirements by allowing for less than perfect correlation, may not necessarily be attainable.
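A minimal sketch of the kind of Monte-Carlo exercise described: a one-factor Gaussian mixture model of correlated defaults (a standard credit-risk construction; all parameter values below are hypothetical), with VaR and ES read off the simulated loss distribution:

```python
import random
import statistics

def simulate_losses(n_obligors=100, pd=0.01, rho=0.2, n_sims=20000, seed=1):
    """One-factor Gaussian mixture: obligor i defaults when
    sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(pd), Z the common factor."""
    rng = random.Random(seed)
    thresh = statistics.NormalDist().inv_cdf(pd)
    losses = []
    for _ in range(n_sims):
        z = rng.gauss(0, 1)                      # common (macro) factor
        defaults = sum(
            1 for _ in range(n_obligors)
            if (rho ** 0.5) * z + ((1 - rho) ** 0.5) * rng.gauss(0, 1) < thresh
        )
        losses.append(defaults / n_obligors)     # fractional portfolio loss
    return sorted(losses)

def var_es(losses, level=0.999):
    """Empirical VaR and Expected Shortfall at the given confidence level."""
    k = int(level * len(losses))
    tail = losses[k:]
    return losses[k], sum(tail) / len(tail)
```

With few simulations relative to the rarity of the tail event, the empirical VaR/ES estimates become unstable, which is the setting in which the counterintuitive behavior discussed in the paper can appear.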
The paper proposes a panel cointegration analysis of the joint development of government expenditures and economic growth in 23 OECD countries. The empirical evidence indicates a structural positive correlation between public spending and per-capita GDP, consistent with the so-called Wagner's law. A long-run elasticity larger than one suggests a more than proportional increase of government expenditures with respect to economic activity. In addition, in keeping with the spirit of the law, we find that the correlation is usually higher in countries with lower per-capita GDP, suggesting that the catching-up period is characterized by a stronger development of government activities relative to economies at a more advanced stage of development.
Natural killer (NK) cells are white blood lymphocytes of the innate immune system that have diverse biological functions, including recognition and destruction of certain microbial infections and neoplasms [1]. NK cells comprise ~10% of all circulating lymphocytes and are also found in peripheral tissues including the liver, peritoneal cavity and placenta. Resting NK cells circulate in the blood, but, following activation by cytokines, they are capable of extravasation and infiltration into most tissues that contain pathogen-infected or malignant cells [2-5]. NK cells discriminate between normal and abnormal cells (infected or transformed) through engagement and dynamic integration of multiple signaling pathways, which are initiated by germline-encoded receptors [6-8]. Healthy cells are protected from NK cell-mediated lysis by expression of major histocompatibility complex (MHC) class I ligands for NK cell inhibitory receptors [6, 9]. The MHC is a group of highly polymorphic glycoproteins that are expressed by every nucleated cell of vertebrates and that are encoded by the MHC gene cluster. The human MHC molecules are termed human leucocyte antigen (HLA)-A, B and C molecules. Every NK cell expresses at least one inhibitory receptor that recognizes a self-MHC class I molecule. Thus, normal cells that express MHC class I molecules are protected from self-NK cells, whereas transformed or infected cells that have down-regulated MHC class I expression are attacked by NK cells [10]. There are two distinct subsets of human NK cells, identified mainly by the cell surface density of CD56. The majority (approximately 90%) of human NK cells are CD56dimCD16bright and express high levels of FcγRIII (CD16), whereas a minority (approximately 10%) are CD56brightCD16dim/- [11]. Resting CD56dim NK cells are more cytotoxic against NK-sensitive targets than CD56bright NK cells [12].
However, after activation with interleukin (IL)-2 or IL-12, CD56bright cells exhibit similar or enhanced cytotoxicity against NK targets compared to CD56dim cells [12-14]. The functions of NK cells are regulated by a balance of signals (Fig. 1.1). These are transmitted by inhibitory receptors, which bind MHC class I molecules, and activating receptors, which bind ligands on tumors and virus-infected cells [15]. These receptors are entirely encoded in the genome, rather than being generated by somatic recombination as T- and B-cell receptors are.
In this work the nuclear structure of exotic and superheavy nuclei is studied in a relativistic framework. In the relativistic mean-field (RMF) approximation, the nucleons interact with each other through the exchange of various effective mesons (scalar, vector, isovector-vector). Ground-state properties of exotic and superheavy nuclei are studied in RMF theory with three different parameter sets (ChiM, NL3, NL-Z2). Axially deformed calculations for nuclei between the two drip lines are performed with the parameter set ChiM. The positions of the drip lines are investigated with the three parameter sets (ChiM, NL3, NL-Z2) and compared with the experimentally known drip-line nuclei. In addition, the structure of hypernuclei is studied, and for a certain isotope a hyperon halo nucleus is predicted.
In this work we study the non-equilibrium dynamics of a quark-gluon plasma, as created in heavy-ion collisions. We investigate how large a role plasma instabilities can play in the isotropization and equilibration of a quark-gluon plasma. In particular, we determine, among other things, how much collisions between the particles can reduce the growth rate of unstable modes. This is done both in a model calculation using the hard-loop approximation and in a real-time lattice simulation combining classical Yang-Mills fields with inter-particle collisions. The new extended version of the simulation is also used to investigate jet transport in isotropic media, leading to a cutoff-independent result for the transport coefficient $\hat{q}$. The precise determination of such transport coefficients is essential, since they can provide important information about the medium created in heavy-ion collisions. In anisotropic media, the effect of instabilities on jet transport is studied, leading to a possible explanation for the experimental observation that high-energy jets traversing the plasma perpendicular to the beam axis experience much stronger broadening in rapidity than in azimuth. The investigation of collective modes in the hard-loop limit is extended to fermionic modes, which are shown to be all stable. Finally, we study the possibility of using high-energy photon production as a tool to experimentally determine the anisotropy of the created system. Knowledge of the degree of local momentum-space anisotropy reached in a heavy-ion collision is essential for the study of instabilities and their role in isotropization and thermalization, because their growth rate depends strongly on the anisotropy.
Methodology and Objects: From a diachronic-linguistics perspective on the concept of the shin (spirits in folk belief in China and neighbouring cultures), we compare texts with respect to their meanings (a) historically within the local language and (b) against the meanings of equivalent terms in the languages of other cultures. Comparing the sources of this belief, we examine whether and how the shin belief, including practical forms of worship, can serve as an example of communication across cultural borders. Argumentation: We argue that the concept of the shin, found across cultural and national borders, is a product of folk culture that transcends political and cultural boundaries and was transmitted through the migration of ethnic groups. Although similar, the mind concepts of different cultures and groups never merged; evidence for this independence is the distinctive Islamic separation between shin and jinn in the Chinese Quran and other spiritual Chinese writings. The practice of worship, on the other hand, is similar. Conclusions: A spiritual concept such as shin varies in practice across regions. Central Asia, as the melting pot of Chinese and Middle Eastern culture, exhibits the cultural practice of Shamanism with shin belief, complex mind concepts as in Daoism, and religions that incorporate shin belief (Islam). Observed changes in the particular local languages show the continuity of the local set of meanings. Multilingual and multicultural areas such as Central Asia tend to integrate new words, enlarging their lexicon with new meanings, rather than changing the set of pre-existing meanings in their languages. Arabic, as the language of conquerors in Central Asia, is a typical example of a language serving as a tool to establish new meanings.
Background: The EGF receptor has been shown to internalize via clathrin-independent endocytosis (CIE) in a ligand-concentration-dependent manner. From a modeling point of view, this resembles an ultrasensitive response, i.e. the ability of signaling networks to suppress a response for low input values and to increase it to a pre-defined level for inputs exceeding a certain threshold. Several mechanisms to generate this behaviour have been described theoretically, but their underlying assumptions have not been experimentally demonstrated for the EGF receptor internalization network. Results: Here, we present a mathematical model of receptor sorting into alternative pathways that explains the EGF-concentration-dependent response of CIE. The described mechanism involves a saturation effect of the dominant clathrin-dependent endocytosis pathway and implies distinct steady states into which the system is forced for low vs. high EGF stimulation. The model is minimal, since no experimentally unjustified reactions or parameter assumptions are imposed. We demonstrate the robustness of the sorting effect for large parameter variations and give an analytic derivation for the alternative steady states that are reached. Further, we describe the extensibility of the model to more than two pathways, which might play a role in contexts other than receptor internalization. Conclusions: Our main result is that a scenario where different endocytosis routes consume the same form of receptor corroborates the observation of a clear-cut, stimulus-dependent sorting. This is especially important since a receptor modification discriminating between the pathways has not been found. The model is not restricted to EGF receptor internalization and might account for ultrasensitivity in other cellular contexts.
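The saturation mechanism invoked here can be sketched in a few lines: if the clathrin route follows saturable Michaelis-Menten kinetics while the alternative route is linear in receptor load, the flux fraction through CIE switches from negligible to dominant as stimulation grows. This is only an illustration of the general saturable-vs-linear sorting principle; the rate constants are hypothetical, not fitted values from the paper's model.

```python
def cie_fraction(receptor, vmax=10.0, km=1.0, k_cie=0.5):
    """Fraction of internalization flux through the clathrin-independent
    route when the clathrin route saturates (Michaelis-Menten) while the
    CIE route is linear in active receptor. Parameters are illustrative."""
    cme = vmax * receptor / (km + receptor)   # saturable clathrin pathway
    cie = k_cie * receptor                    # linear alternative pathway
    return cie / (cme + cie)

# Low stimulation: CIE carries a negligible share of the flux;
# high stimulation: the clathrin route saturates and CIE dominates.
print(round(cie_fraction(0.1), 3))    # small fraction
print(round(cie_fraction(100.0), 3))  # large fraction
```

The threshold-like switch arises without any receptor modification discriminating between pathways, which is the point of the minimal-model argument.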
The fear that, with an unconditional basic income sufficient for living, many people would cease to engage in a productive life and would merely relax, consume and devote themselves to having fun can be addressed from different perspectives. One of these is the sociology of religion, which allows one to argue that with such a way of life the question of the meaning of life cannot be answered. Yet this "meaning question", as the whole body of research in the sociology of religion suggests, must compellingly be answered by every life praxis. It cannot remain unanswered, as the Bible already says: "Man does not live on bread alone, [but on every word that comes from the mouth of God.]" (Deuteronomy 8:3, Matthew 4:4, Luke 4:4). The paper examines the reasons for this and its consequences for a life with an unconditional basic income sufficient for living.
Crohn's disease (CD) and ulcerative colitis (UC) are idiopathic inflammatory disorders. Environmental factors, infectious microbes, ethnic origin, genetic susceptibility, and a dysregulated immune system can result in mucosal inflammation. However, the etiology of both CD and UC still remains largely unclear. Inflammatory bowel disease-related animal models suggest that a combination of genetic susceptibility factors and an altered immune response driven by microbial factors in the enteric environment may contribute to the initiation and chronification of the disease. The intestinal immune system represents a complex network of different lymphoid and non-lymphoid cell populations as well as humoral factors. In inflammatory bowel disease, the controlled balance of the intestinal immune system is disturbed at all levels. In CD, naïve T cells preferably differentiate into Th1 or Th17 producing cells, while in UC, these cells differentiate into aberrant Th2 cells. Overall, in active inflammatory bowel disease effector T cell activity (Th1, Th17, Th2) predominates over regulatory T cells. Animal models of intestinal inflammation are indispensable for our understanding of the pathogenesis of CD and UC. When chosen appropriately, these models have proved to be a helpful tool to investigate pathophysiological mechanisms, as well as to test emerging therapeutic options in the preclinical phase. 2,4,6-Trinitrobenzene sulfonic acid (TNBS) and oxazolone are the two major chemicals applied to induce Th1- and Th2-skewed intestinal inflammation, respectively. Colitis can be induced in susceptible strains of mice by intrarectal instillation of the haptenating substances TNBS or oxazolone in ethanol, which is necessary for an initial disintegration of the epithelial barrier. TNBS or oxazolone are believed to haptenize colonic autologous or microbiotic proteins, rendering them immunogenic to the host immune system.
While TNBS administration in the presence of ethanol results in a transmural infiltrative disease of the entire colon based on an IL-12/IL-23-driven, Th1- or Th17-mediated response, oxazolone instillation leads to a colitis caused by a polarized, IL-13-dominated Th2 lymphocyte response. Rectal oxazolone instillation in ethanol produces a more superficial inflammation that affects the distal half of the colon rather than the whole colon. Therapeutic modulation of the disturbed immune response in patients with inflammatory bowel disease still represents a complex challenge in the clinic. Currently, none of the therapeutic measures is disease specific, and they generally target the pathophysiology downstream of the driving immunopathology. There is thus still a need to develop a tailored approach that prevents the initiation and perpetuation of the inflammatory cascade before tissue injury occurs. One important aspect of this approach might involve the induction or re-establishment of immunological tolerance. FTY720, following rapid phosphorylation to FTY-P by endogenous sphingosine kinases, acts as a sphingosine-1-phosphate (S1P) receptor agonist and represents the prototype of a new generation of S1P receptor modulators. Although still evolving, its currently proposed mode of action focuses on the fact that FTY720 effectively inhibits the egress of T cells from lymph nodes, thereby reducing the number of antigen-primed/restimulated cells that re-circulate to peripheral inflammatory tissues. Recent studies, however, indicate that its immunomodulatory properties might be more complex and exerted not only via interactions with other S1P receptor subtypes but also via a direct modulation of the inflammatory capacity of dendritic cells (DC), resulting in a modified regulation of T cell effector functions as well as in an induction of regulatory T cells and their function.
1,25(OH)2D3, the active form of vitamin D, is a secosteroid hormone that, in addition to its central function in calcium and bone metabolism, has pronounced immune-regulatory properties. The biological effects of calcitriol are mediated by the vitamin D receptor (VDR), a member of the superfamily of nuclear hormone receptors. A number of studies identified calcitriol/VDR as a prominent negative regulator of Th1-type immune responses, whereas Th2 responses are not affected or are even augmented. While these effects have mainly been explained by direct activities on lymphocytes, subsequent studies clearly supported a role of calcitriol in modulating monocyte differentiation and DC maturation. However, to translate the immunosuppressive capacities of calcitriol into an effective immunointervention, a great challenge was the design of structural analogs of calcitriol that are devoid of adverse effects related to hypercalcemic activity. The intense study of the 25-oxa series generated a large number of calcitriol analogs exhibiting substantial dissociation between possible immunomodulatory capacities and undesired hypercalcemia. In particular, the combination of the 22-ene modification with the 25-oxa element, as realized in ZK156979, yielded a very promising set of new analogs for further characterization in animal models resembling human autoimmune diseases. Thus, the overall aim of the studies presented here was to evaluate strategies for enhancing regulatory immunity in mouse models of Th1- and Th2-mediated colitis as a new therapeutic approach. To this end we used FTY720, 22-ene-25-oxa vitamin D (ZK156979), as well as the combination of calcitriol and dexamethasone, to evaluate the respective pro-tolerogenic potential in intestinal inflammation models in mice. First, to induce Th1-mediated colitis, a rectal enema of TNBS was given to Balb/c mice. FTY720 was administered i.p. from day 0-3 or 3-5.
FTY720 substantially reduced all clinical, histopathologic, macroscopic, and microscopic parameters of colitis analyzed. The therapeutic effects of FTY720 were associated with a down-regulation of IL-12p70 and subsequent Th1 cytokines. Importantly, FTY720 treatment resulted in a prominent up-regulation of FoxP3, IL-10, TGFβ and CTLA4. Moreover, we observed a significant increase of CD25 and FoxP3 expression in isolated lamina propria CD4+ T cells of FTY720-treated mice. The impact of FTY720 on regulatory T cell induction was further confirmed by concomitant in vivo blockade of CTLA4 or IL-10R, which significantly abrogated its therapeutic activity. Thus, our data provide new and strong evidence that besides its well-established migratory properties FTY720 down-regulates proinflammatory signals while simultaneously inducing the functional activity of CD4+CD25+ regulatory T cells. In a second approach, the rectal instillation of oxazolone yielded a Th2-mediated colitis. Treatment with FTY720 prominently reduced the clinical and histopathologic severity of oxazolone-induced colitis, abrogating body weight loss, diarrhea, and macroscopic and microscopic intestinal inflammation. The therapeutic effects of FTY720 were associated with a prominent reduction of the key Th2 effector cytokines IL-13, IL-4 and IL-5. Moreover, FTY720 inhibited GATA3 and T1/ST2 expression, which represent distinct markers for Th2 differentiation and Th2 effector function. Thus, our data support the view that FTY720 exhibits beneficial prophylactic as well as therapeutic effects in Th2-mediated experimental colitis by directly affecting Th2 cytokine profiles, probably by reducing GATA3 and T1/ST2. Recently, we described 22-ene-25-oxa-vitamin D (ZK156979) as a representative of a novel class of low-calcemic vitamin D analogs showing prominent immunomodulative capacities. Here, we used the Th1-mediated TNBS colitis to test its anti-inflammatory properties in vivo.
We found that treatment with ZK156979 clearly inhibited the severity of TNBS-induced colitis without exhibiting calcemic effects. Both early and late treatment abrogated all the clinical, macroscopic and microscopic parameters of colitis severity; in addition, we observed a clear down-regulation of the relevant Th1 cytokine pattern including the T-box transcription factor T-bet. On the other hand, application of ZK156979 increased local tissue IL-10 and IL-4. Finally, as a new approach, we evaluated the pro-tolerogenic potential of calcitriol and dexamethasone in acute Th1-mediated colitis. Calcitriol and/or dexamethasone were administered i.p. from day 0-3 or from day 3-5 following the instillation of the haptenating agent. The combination of these steroids most effectively reduced the clinical and histopathologic severity of TNBS colitis. Th1-related parameters were down-regulated, while Th2 markers like IL-4 and GATA3 were up-regulated. Clearly distinguishable from known steroid effects, calcitriol in particular promoted regulatory T cell profiles, as indicated by a marked increase of IL-10, TGFβ, FoxP3 and CTLA4. Furthermore, analysis of DC mediators responsible for a pro-inflammatory differentiation of T cells revealed a clear reduction of IL-12p70 and IL-23p19 as well as of IL-6 and IL-17. Thus, our data suggest the concept of a steroid-sparing application of calcitriol derivatives in inflammatory bowel disease. Furthermore, the data presented suggest that early markers of inflammatory DC and Th17 differentiation might qualify as new target molecules both for calcitriol and for selective immune-modulating vitamin D analogs. In conclusion, the data of these published investigations add to the substantial progress made in the past years in understanding the biology of tolerogenic DC and regulatory T cells with respect to their roles in health and disease.
This has led to an increasing interest in the possibility of using DC and regulatory T cells as biological therapeutics to preserve and restore tolerance to self antigens and alloantigens. DC in particular may be helpful in directing tolerance and immunity by modulating subpopulations of effector T cells and regulatory T cells. The data demonstrated in the present studies may help to define the divergent implications of new therapeutic concepts for the treatment of inflammatory bowel disease, especially with regard to a possibly auspicious impact on pro-tolerogenic DC and regulatory T cell functions. However, further studies are needed to complete our understanding of the complex immunomodulatory profiles of FTY720 as well as of calcitriol and its low-calcemic analog ZK156979, thus accelerating their entry into the clinic as new therapeutic options for the cure of inflammatory bowel disease.
The µ-opioid receptor is the primary target structure of most opioid analgesics and is thus responsible for the predominant part of their wanted and unwanted effects. Carriers of the frequent genetic µ-opioid receptor variant N40D (allelic frequency 8.2-17%), coded by the single-nucleotide polymorphism A>G at position 118 of the µ-opioid receptor gene OPRM1 (OPRM1 118A>G SNP), show decreased opioid potency and require higher doses of opioid analgesics to reach adequate analgesia. The aim of the present work was to identify the mechanism by which the OPRM1 118A>G SNP decreases opioid potency and to quantify its effects on the analgesic potency and therapeutic range of opioid analgesics.
To elucidate the consequences of the OPRM1 118A>G SNP for the effects of opioid analgesics, brain regions of healthy homozygous carriers of the OPRM1 118A>G SNP in which the variant alters the response to opioid analgesics after painful stimulation were identified by means of functional magnetic resonance imaging (fMRI). Afterwards, the µ-opioid receptor function was analyzed on a molecular level in post mortem samples of these brain regions. Finally, the consequences of the OPRM1 118A>G SNP for the analgesic and respiratory-depressive effects of opioids were quantified in healthy carriers and non-carriers of the OPRM1 118A>G SNP by means of experimental pain and respiratory-depression models.
To identify pain processing brain regions, where the variant alters the response to opioid analgesics after painful stimulation, we investigated the effects of different alfentanil concentration levels (0, 25, 50 and 75 ng/ml) on pain-related brain activation achieved by short pulses (300 msec) of gaseous CO2 (66% v/v) delivered to the nasal mucosa using a 3.0 T magnetic head scanner in 16 non-carriers and nine homozygous carriers of the µ-opioid receptor gene variant OPRM1 118A>G. In brain regions associated with the processing of the sensory dimension of pain (pain intensity), such as the primary and secondary somatosensory cortices and the posterior insular cortex, the activation decreased linearly in relation to alfentanil concentrations, which was significantly less pronounced in OPRM1 118G carriers. In contrast, in brain regions known to process the affective dimension of pain (emotional dimension), such as the parahippocampal gyrus, amygdala and anterior insula, the pain-related activation disappeared already at the lowest alfentanil dose, without genotype differences.
Subsequently, we investigated µ-opioid receptor expression ([3H]-DAMGO saturation experiments, OPRM1 mRNA analysis by means of RT-PCR), µ-opioid receptor affinity ([3H]-DAMGO saturation and competition experiments) and µ-opioid receptor signaling ([35S]-GTPγS binding experiments) in post mortem samples of the human SII region, as a cortical projection region coding for pain intensity, and of the lateral thalamus, as an important region for nociceptive transmission. Samples of 22 non-carriers, 21 heterozygous and three homozygous carriers of the OPRM1 118A>G SNP were included in the analysis. Receptor expression and receptor affinity in both brain regions did not differ between non-carriers and carriers of the variant N40D. In non-carriers, the µ-opioid receptors of the SII region activated the receptor-bound G-protein more efficiently than those of the thalamus (factor 1.55–2.27). This regional difference was absent in heterozygous (factor 0.78–1.66) and homozygous (factor 0.66–1.15) carriers of the N40D variant, indicating reduced receptor-G-protein coupling in the SII region.
Finally, the consequences of the altered µ-opioid receptor function were investigated in carriers and non-carriers of the genetic variant using pain and respiratory depression models. To this end, 10 healthy non-carriers, four heterozygous and six homozygous carriers of the µ-opioid receptor variant N40D received an infusion of four different concentrations of alfentanil (0, 33.33, 66.66 and 100 ng/ml). At each concentration level, analgesia was assessed by means of electrically (5 Hz sine, 0 to 20 mA) and chemically (200 ms gaseous CO2 pulses applied to the nasal mucosa) induced pain, and respiratory depression was quantified by means of a hypercapnic challenge according to Read and by recording the breathing frequency. The results showed that, depending on the pain model used, both heterozygous and homozygous carriers of the variant N40D needed 2–4 times higher alfentanil concentrations to achieve the same analgesia as non-carriers. This increase appears to be unproblematic, at least for homozygous carriers, because they needed 10–12 times higher alfentanil concentrations to reach a respiratory depression comparable to that of non-carriers.
The results of this work demonstrate that the µ-opioid receptor variant N40D causes a regionally limited reduction of the signal transduction efficiency of µ-opioid receptors in brain regions involved in pain processing. Thus, in carriers of the variant N40D, the painful activation of sensory brain regions coding for pain intensity is not sufficiently suppressed by opioid analgesics. Due to this insufficient suppression, the opioid concentration has to be increased by a factor of 2–4 in hetero- and homozygous carriers of the variant N40D in order to achieve the same analgesia as in non-carriers. At the same time, the respiratory depressive effects are reduced to an even greater extent in homozygous carriers of the N40D variant, as they need a 10–12 times higher opioid concentration to suffer the same degree of respiratory depression as non-carriers. Given this increased therapeutic range of opioid analgesics, an increase of the opioid dose appears to be safe, at least for homozygous carriers of the N40D variant.
High field strength element systematics and Lu-Hf & Sm-Nd garnet geochronology of orogenic eclogites
(2008)
With respect to the Bulk Silicate Earth (BSE), the depleted mantle and the continental crust are thought to balance the budget of refractory lithophile elements, resulting in complementary trace element patterns. However, the two high field strength elements (HFSE) niobium (Nb) and tantalum (Ta) appear to contradict this mass balance. All reservoirs of the silicate Earth exhibit subchondritic Nb/Ta ratios, possibly as a result of Nb depletion. The two HFSE Zr and Hf, on the other hand, seem not to be fractionated between the silicate reservoirs; they show more or less chondritic Zr/Hf ratios. In this study, a series of orogenic eclogites from different localities was analyzed to determine their HFSE concentrations and to address the question of whether eclogites could form a hidden reservoir that accounts for the mass imbalance of the BSE. The results show that the orogenic eclogites have subchondritic Nb/Ta ratios and near-chondritic Zr/Hf ratios. The investigated eclogites show no fractionation of Nb/Ta ratios and no enrichment of Nb compared to, e.g., MOR-basalts, the likely precursor of these rocks. With an average Nb/Ta ratio of 14.9, these eclogites cannot balance the differences between BSE and chondrite. Additionally, with an average Nb/Ta ≈ MORB, they also cannot balance the small differences in the Nb/Ta of the crust and the mantle. LA-ICPMS analyses of rutiles in these eclogites reveal a zonation of Nb/Ta ratios in this mineral, with rutile cores having higher Nb/Ta than rutile rims. As a consequence, laser ablation data on rutiles have to be evaluated carefully and do not necessarily reflect the bulk rock Nb and Ta composition, even though over 90% of these elements reside in rutile.
Disruption of the complex gastrointestinal ecosystem between the resident microflora and the colonic epithelial cells has been associated with increased inflammation and altered cell growth. Possible endpoints of this disturbance are IBD and CRC. The data presented in this thesis, entitled "PPARgamma as molecular target of epithelial functions in the gastrointestinal tract", shed further light on the underlying molecular mechanisms contributing to the well-ordered homeostasis of this gastrointestinal ecosystem. Besides elucidating important roles for mesalazine and the dietary HDAC inhibitors butyrate and SFN in a) the modulation of cellular growth, b) the induction of APs, and c) the control of NFkappaB signalling in CRC cells, the involvement of the nuclear hormone receptors PPARgamma and VDR as "gatekeepers" in these intricate regulatory mechanisms was established. Future work will analyse whether these in vitro findings are also physiologically relevant with regard to the prevention and therapy of gastrointestinal diseases. Within the scope of this work, Papers I and II demonstrated that butyrate and mesalazine act via PPARgamma to induce their anti-proliferative and pro-apoptotic actions along the caspase signalling pathway. Activation of the intrinsic and extrinsic signalling trails and the down-regulation of anti-apoptotic proteins are responsible for the increased caspase-3 activity caused by butyrate. In contrast, mesalazine activates this cascade merely via the extrinsic trail and the IAPs. Moreover, a signal transduction pathway leading to increased cell death via p38 MAPK - PPARgamma - caspase-3 in response to butyrate was unveiled.
In addition, there is strong evidence that mesalazine-mediated pro-apoptotic and growth-inhibitory abilities are controlled by PPARgamma-dependent and -independent mechanisms, which appear to be triggered at least in part by the modulation of the tumor suppressor gene PTEN and the oncoprotein c-myc, respectively. Papers III and IV pinpointed the induction of the APs HBD-2 and LL-37 in response to the dietary HDAC inhibitors butyrate and SFN. Regarding the molecular events of this regulation, the data presented in this thesis provide strong evidence for the involvement of VDR in the induced expression of the HBD-2 and LL-37 genes, while the participation of PPARgamma was excluded. Moreover, a role for p38 MAPK and TGF-beta1 in the up-regulation of LL-37 caused by butyrate was established. In contrast, SFN-mediated induction of HBD-2 is modulated via ERK1/2 signalling. The findings in Paper V clearly point to the involvement of the nuclear hormone receptors PPARgamma and VDR in butyrate-mediated suppression of inducible NFkappaB activation, depending on whether the signalling pathway was stimulated by LPS or TNFalpha. Moreover, an inhibitory role for VDR in the regulation of basal NFkappaB activation was revealed; in contrast, a modulating role for PPARgamma on basal NFkappaB could be ruled out. Altogether, the data presented in this thesis not only provide new insights into the fundamental gastrointestinal physiology regulated by nuclear hormone receptors, but may also offer opportunities for the development of potential drug targets and therapeutic strategies in the treatment of IBD and CRC.
Background Over the past years a variety of host restriction genes have been identified in humans and other mammals that modulate retrovirus infectivity, replication, assembly, and/or cross-species transmission. Among these host-encoded restriction factors, the APOBEC3 (A3; apolipoprotein B mRNA-editing catalytic polypeptide 3) proteins are potent inhibitors of retroviruses and retrotransposons. While primates encode seven of these genes (A3A to A3H), rodents carry only a single A3 gene. Results Here we identified and characterized several A3 genes in the genome of the domestic cat (Felis catus) by analyzing the genomic A3 locus. The cat genome contains one A3H gene and three very similar A3C genes (a-c), probably generated after two consecutive gene duplications. In addition to these four one-domain A3 proteins, a fifth A3, designated A3CH, is expressed by read-through alternative splicing. Specific feline A3 proteins selectively inactivated only defined genera of feline retroviruses: Bet-deficient feline foamy virus was mainly inactivated by feA3Ca, feA3Cb, and feA3Cc, while feA3H and feA3CH were only weakly active. The infectivity of Vif-deficient feline immunodeficiency virus and feline leukemia virus was reduced only by feA3H and feA3CH, but not by any of the feA3Cs. Within Felidae, A3C sequences show significant adaptive selection, but unexpectedly, the A3H sequences present more sites that are under purifying selection. Conclusion Our data support a complex evolutionary history of expansion, divergence, selection and individual extinction of antiviral A3 genes that parallels the early evolution of Placentalia, becoming more intricate in taxa in which the arms race between host and retroviruses is harsher.
Ancient coins are among the objects most widely collected and sought after by American collectors of antiquities. A vocal lobby of ancient coin dealers/collectors has arisen to protect the importation of undocumented material into the United States and also seeks to draw a distinction between trafficking in antiquities and trafficking in ancient coins. Yet coins are an equally important historical source and are no less important 'antiquities' than a Greek painted vase. I examine the scale of the trade in ancient coins in North America and address some points made by proponents of a continued unfettered ancient coin trade.
Background Medical students come into contact with infectious diseases early in their careers. Immunity against vaccine-preventable diseases is therefore vital for both medical students and the patients with whom they come into contact. Methods The purpose of this study was to compare the medical history and serological status regarding selected vaccine-preventable diseases of medical students in Germany. Results The overall correlation between medical history statements and serological findings among the 150 students studied was 86.7%, 66.7%, 78% and 93.3% for measles, mumps, rubella and varicella, respectively, conditional on sufficient immunity being achieved after one vaccination. Conclusions Although 81.2% of the students' medical history data correlated with serological findings, significant gaps in immunity were found. Our findings indicate that medical history alone is not a reliable screening tool for immunity against the vaccine-preventable diseases studied.
In the context of information theory, the term mutual information was first formulated by Claude Elwood Shannon. Information theory is the consistent mathematical description of technical communication systems; to this day, it is the basis of numerous applications in modern communications engineering and has become indispensable in this field. This work is concerned with the development of a concept for nonlinear feature selection from scalar, multivariate data on the basis of the mutual information. From the viewpoint of modelling, the successful construction of a realistic model depends highly on the quality of the employed data. In the ideal case, high-quality data consist solely of the features relevant for deriving the model. In this context, it is important to possess a suitable method for measuring the degree of the (mostly nonlinear) dependencies between input and output variables. By means of such a measure, the relevant features can be selected specifically. Over the course of this work, it will become evident that the mutual information is a valuable and feasible measure for this task and hence the method of choice for practical applications. Basically, and without the claim of being exhaustive, there are two constellations that recommend the application of feature selection. On the one hand, feature selection plays an important role if the computability of a derived system model cannot be guaranteed due to a multitude of available features. On the other hand, the existence of very few data points with a significant number of features also recommends the employment of feature selection. The latter constellation is closely related to the so-called "Curse of Dimensionality". The actual statement behind this is the necessity to reduce the dimensionality in order to obtain an adequate coverage of the data space.
In other words, it is important to reduce the dimensionality of the data, since for a constant number of data points the coverage of the data space decreases exponentially with the dimensionality of the available data. In the context of mapping between input and output space, this goal is ideally reached by selecting only the relevant features from the available data set. The basic idea for this work has its origin in the rather practical field of automotive engineering. It was motivated by the goals of a complex research project in which the nonlinear, dynamic dependencies among a multitude of sensor signals were to be identified. The final goal of these activities was to derive so-called virtual sensors from the identified dependencies among the installed automotive sensors. This enables the real-time computation of the required variable without the expense of additional hardware. The prospect of doing without additional computing hardware is a strong motivating force, particularly in automotive engineering. In this context, the major problem was to find a feasible method to capture the linear as well as the nonlinear dependencies. As mentioned before, the goal of this work is the development of a flexibly applicable system for nonlinear feature selection. The important point here is to guarantee the practical computability of the developed method even for the high-dimensional data spaces that are common in technical environments. The employed measure for the feature selection process is based on the concept of mutual information. The high sensitivity and specificity of the mutual information to linear and nonlinear statistical dependencies make it the method of choice for the development of a highly flexible, nonlinear feature selection framework.
In addition to the mere selection of relevant features, the developed framework is also applicable to the nonlinear analysis of the temporal influences of the selected features. Hence, a subsequent dynamic modelling can be performed more efficiently, since the proposed feature selection algorithm additionally provides information about the temporal dependencies between input and output variables. In contrast to feature extraction techniques, the feature selection algorithm developed in this work has another considerable advantage: in the case of cost-intensive measurements, the variables with the highest information content can be selected in a prior feasibility study. Hence, the developed method can also be employed to avoid redundancy in the acquired data and thus prevent additional costs.
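The core selection step described here, ranking candidate features by their mutual information with the output, can be sketched for discrete-valued samples as follows (a minimal illustration, not the framework developed in this work; continuous signals would additionally require binning or density estimation):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired samples of discrete variables."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_features(features, target, k):
    """Rank candidate features by their mutual information with the target
    and keep the top k (a filter-type feature selection)."""
    ranked = sorted(features,
                    key=lambda name: mutual_information(features[name], target),
                    reverse=True)
    return ranked[:k]

# Toy data: `relevant` determines the target through a nonlinear mapping,
# `noise` is statistically independent of it.
relevant = [i % 3 for i in range(90)]
noise = [i % 2 for i in range(90)]
target = [(v * v) % 3 for v in relevant]
print(select_features({"relevant": relevant, "noise": noise}, target, 1))
```

Here the relevant feature carries about 0.92 bits of information about the target, while the independent noise feature carries exactly zero, so the selection returns `['relevant']`.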
Extract from the minutes of the Secretariat of State at the imperial headquarters at St Pölten, 22 Brumaire, year 14. Napoleon, Emperor of the French and King of Italy, upon the report of our Minister of the Interior, we have decreed and do decree as follows. General provisions. Art. 1: The school existing on the premises of the former Gymnase Laurentien at Cologne, Department of the Roer, shall henceforth bear the title of communal secondary school of the first degree. II. Independently of this school, another shall be established under the name of communal secondary school of the second degree. The building and outbuildings of the Jesuit college of the former monastery of St. Maximin are granted to the City of Cologne for the use of this school. III. All capital assets and revenues of the foundations and study scholarships of the former Gymnasien, and all capital assets and revenues deriving from the suppressed Jesuits that were specifically and originally assigned to the public educational establishments of Cologne, are dedicated to the maintenance of the schools of the first and second degree of this city.
A new global crop water model was developed to compute blue (irrigation) water requirements and crop evapotranspiration from green (precipitation) water at a spatial resolution of 5 arc minutes by 5 arc minutes for 26 different crop classes. The model is based on soil water balances performed for each crop and each grid cell. For the first time, a new global data set was applied that consists of monthly growing areas of irrigated crops and related cropping calendars. Crop water use was computed for irrigated land and the period 1998–2002. In this documentation report, the data sets used as model input and the methods used in the model calculations are described, followed by a presentation of first results for blue and green water use at the global scale, for countries and for specific crops. Additionally, the simulated seasonal distribution of water use on irrigated land is presented. The computed model results are compared to census-based statistical information on irrigation water use and to results of another crop water model developed at FAO.
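The soil water balance at the heart of such a model can be illustrated with a deliberately simplified daily "bucket" scheme (a sketch under assumed units and parameters, not the actual model): crop evapotranspiration met from precipitation-fed soil moisture counts as green water use, and the unmet remainder is the blue (irrigation) water requirement.

```python
def soil_water_balance(precip, crop_et, capacity):
    """Daily bucket model for one crop in one grid cell.

    precip, crop_et -- daily precipitation and crop evapotranspiration (mm);
    capacity        -- plant-available soil water capacity (mm).
    Returns cumulative green (precipitation-fed) and blue (irrigation)
    water use over the season.
    """
    soil, green, blue = capacity, 0.0, 0.0
    for p, et in zip(precip, crop_et):
        soil = min(soil + p, capacity)   # infiltration; excess runs off
        supplied = min(et, soil)         # ET met from soil moisture
        soil -= supplied
        green += supplied
        blue += et - supplied            # shortfall -> irrigation demand
    return green, blue

# Hypothetical 3-day season (mm): one wet day followed by a dry spell.
green, blue = soil_water_balance([5, 0, 0], [2, 4, 6], capacity=3)
print(green, blue)  # 3.0 9.0
```

In this toy season, 3 mm of the 12 mm total crop evapotranspiration can be met from green water, leaving a blue water requirement of 9 mm.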
A data set of monthly growing areas of 26 irrigated crops (MGAG-I) and related crop calendars (CC-I) was compiled for 402 spatial entities. The selection of crops comprised all major food crops, including regionally important ones (wheat, rice, maize, barley, rye, millet, sorghum, soybeans, sunflower, potatoes, cassava, sugar cane, sugar beets, oil palm, rapeseed/canola, groundnuts/peanuts, pulses, citrus, date palm, grapes/vine, cocoa, coffee), major water-consuming crops (cotton), and unspecified other crops (other perennial crops, other annual crops, managed grassland). The data set refers to the time period 1998-2002 and has a spatial resolution of 5 arc minutes by 5 arc minutes, which is about 8 km by 8 km at the equator. This is the first time that a data set of cell-specific monthly growing areas of irrigated crops at this spatial resolution has been created. The data set is consistent with the irrigated area and water use statistics of the AQUASTAT programme of the Food and Agriculture Organization of the United Nations (FAO) (http://www.fao.org/ag/agl/aglw/aquastat/main/index.stm) and the Global Map of Irrigation Areas (GMIA) (http://www.fao.org/ag/agl/aglw/aquastat/irrigationmap/index.stm). At the cell level, an attempt was made to maximise consistency with the cropland extent and cropland harvested area from the Department of Geography and Earth System Science Program of McGill University at Montreal, Quebec, Canada, and the Center for Sustainability and the Global Environment (SAGE) of the University of Wisconsin at Madison, USA (http://www.geog.mcgill.ca/~nramankutty/Datasets/Datasets.html and http://geomatics.geog.mcgill.ca/~navin/pub/Data/175crops2000/). The consistency between the grid product and the input data was quantified. MGAG-I and CC-I are fully consistent with each other at the entity level. For input data other than CC-I, the consistency of MGAG-I at the cell level was calculated.
The consistency of MGAG-I with respect to the area equipped for irrigation (AEI) of GMIA and to the cropland extent of SAGE was characterised by the sum, over all grid cells, of the cell-specific maximum difference between the MGAG-I monthly total irrigated area and the reference area wherever the latter was exceeded. The consistency of the harvested area contained in MGAG-I with respect to the SAGE harvested area was characterised by the crop-specific sum of the cell-specific difference between the MGAG-I harvested area and the SAGE harvested area wherever the latter was exceeded. In all three cases, the sums are the excess areas that should not have been distributed under the assumption that the input data were correct. Globally, this cell-level excess of MGAG-I as compared to AEI is 331,304 ha, or only about 0.12% of the global AEI of 278.9 Mha found in the original grid. The respective cell-level excess of MGAG-I as compared to the SAGE cropland extent is 32.2 Mha, corresponding to about 2.2% of the total cropland area. The respective cell-level excess of MGAG-I as compared to the SAGE harvested area is 27% of the irrigated harvested area, or 11.5% of the AEI. In a further step, to be published later, rainfed areas were also compiled in order to form the global data set of monthly irrigated and rainfed crop areas around the year 2000 (MIRCA2000). The data set can be used for global- and continental-scale studies on food security and water use. In the future, it will be improved, e.g. with a better spatial resolution of crop calendars and an improved crop distribution algorithm. The MIRCA2000 data set and its full documentation, together with future updates, will be freely available through the following long-term internet site: http://www.geo.uni-frankfurt.de/ipg/ag/dl/forschung/MIRCA/index.html.
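The cell-level excess used as a consistency measure can be sketched as follows (hypothetical cell identifiers and areas in hectares): for each grid cell, the maximum monthly irrigated area is compared with the reference area (e.g. AEI or cropland extent), and the exceedances are summed.

```python
def cell_level_excess(monthly_irrigated, reference):
    """Sum over grid cells of the amount by which the maximum monthly
    irrigated area exceeds the reference area. Cells where the reference
    is never exceeded contribute zero."""
    excess = 0.0
    for cell, months in monthly_irrigated.items():
        ref = reference.get(cell, 0.0)
        peak = max(months)           # maximum monthly total irrigated area
        if peak > ref:
            excess += peak - ref
    return excess

# Hypothetical two-cell example (ha): only cell_a exceeds its reference.
mgag = {"cell_a": [80, 120, 100], "cell_b": [40, 50, 45]}
aei = {"cell_a": 100, "cell_b": 60}
print(cell_level_excess(mgag, aei))  # 20.0
```

Under the assumption that the reference data are correct, this sum is exactly the irrigated area that should not have been distributed.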
The research presented here was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the framework of the research project entitled "Consistent assessment of global green, blue and virtual water fluxes in the context of food production: regional stresses and worldwide teleconnections". The authors thank Navin Ramankutty and Chad Monfreda for making available the current SAGE datasets on cropland extent (Ramankutty et al., 2008) and harvested area (Monfreda et al., 2008) prior to their publication.
The introduction of a common currency as well as the harmonization of rules and regulations in Europe has significantly reduced distance in all its guises. By reducing the costs of overcoming space, this strengthens centripetal forces and should foster the consolidation of financial activity. In a national context, this has, as a rule, led to the emergence of a single financial center. Hence, the Europeanization of financial and monetary affairs could foretell the relegation of some European financial hubs, such as Frankfurt and Paris, to third-rank status. Frankfurt's financial history is interesting insofar as it has lost (in the 1870s) and regained (mainly in the 1980s) its preeminent place in the German context. Because Europe is still characterized by local pockets of information-sensitive assets as well as a demand for variety, the national analogy probably does not hold. There is room in Europe for a number of financial hubs of an international dimension, including Frankfurt.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model where the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. We estimate our model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear. The linear alternative is rejected. The speed of price adjustment is increasing almost monotonically with the magnitude of the price difference.
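A minimal simulation sketch (hypothetical parameters, not the paper's estimator or data) conveys the idea of a state-dependent adjustment speed: the basis mean-reverts faster the larger its magnitude, and a binned regression of the change on the lagged level recovers this pattern.

```python
import random

def simulate_basis(n, seed=0):
    """Simulated spot-futures basis whose adjustment speed grows with |z|."""
    random.seed(seed)
    z, path = 0.0, []
    for _ in range(n):
        speed = 0.1 + 0.8 * min(abs(z), 1.0)   # nonlinear adjustment speed
        z = z - speed * z + random.gauss(0, 0.05)
        path.append(z)
    return path

def binned_adjustment(path, edges):
    """Adjustment speed per bin of |z_{t-1}|: minus the OLS slope of
    the change (z_t - z_{t-1}) on the lagged level z_{t-1}."""
    sums = [[0.0, 0.0] for _ in range(len(edges) - 1)]  # [sum z*dz, sum z^2]
    for z_prev, z_now in zip(path, path[1:]):
        for b in range(len(edges) - 1):
            if edges[b] <= abs(z_prev) < edges[b + 1]:
                sums[b][0] += z_prev * (z_now - z_prev)
                sums[b][1] += z_prev * z_prev
                break
    return [-num / den if den else float("nan") for num, den in sums]

speeds = binned_adjustment(simulate_basis(20000), [0.0, 0.05, 0.15, 1.0])
```

With these settings the estimated adjustment speed increases across the bins, mirroring the paper's finding that the speed of price adjustment rises almost monotonically with the magnitude of the price difference.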
This study develops a novel 2-step hedonic approach, which is used to construct a price index for German paintings. This approach enables the researcher to use every single auction record, instead of only those auction records that belong to a sub-sample of selected artists. This results in a substantially larger sample available for research and it lowers the selection bias that is inherent in the traditional hedonic and repeat sales methodologies. Using a unique sample of 61,135 auction records for German artworks created by 5,115 different artists over the period 1985 to 2007, we find that the geometric annual return on German art is just 3.8 percent, with a standard deviation of 17.87 percent. Although our results indicate that art underperforms the market portfolio and is not proportionally rewarded for downside risk, under some circumstances art should be included in an optimal portfolio for diversification purposes.
While companies have emerged as very proactive donors in the wake of recent major disasters like Hurricane Katrina, it remains unclear whether that corporate generosity generates benefits to the firms themselves. The literature on strategic philanthropy suggests that such philanthropic behavior may be valuable because it can generate direct and indirect benefits to the firm, yet it is not known whether investors interpret donations in this way. We develop hypotheses linking the strategic character of donations to positive abnormal returns. Using event study methodology, we investigate stock market reactions to corporate donation announcements by 108 US firms made in response to Hurricane Katrina. We then use regression analysis to examine whether our hypothesized predictors are associated with positive abnormal returns. Our results show that overall, corporate donations were linked to neither positive nor negative abnormal returns. We do, however, see that a number of factors moderate the relationship between donation announcements and abnormal stock returns. Implications for theory and practice are discussed.
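The event-study methodology referred to here can be sketched with the standard market model (hypothetical return series; the paper's exact specification may differ): estimate alpha and beta over an estimation window, then measure abnormal returns around the announcement.

```python
def market_model(stock, market):
    """OLS fit of stock returns on market returns; returns (alpha, beta)."""
    n = len(stock)
    mx = sum(market) / n
    my = sum(stock) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(market, stock))
    var = sum((x - mx) ** 2 for x in market)
    beta = cov / var
    return my - beta * mx, beta

def abnormal_returns(stock, market, est_end, ev_start, ev_end):
    """Abnormal returns in [ev_start, ev_end) relative to the market model
    fitted over the estimation window [0, est_end)."""
    alpha, beta = market_model(stock[:est_end], market[:est_end])
    return [stock[t] - (alpha + beta * market[t]) for t in range(ev_start, ev_end)]

# Hypothetical daily returns: the stock tracks the market exactly during the
# estimation window; a donation announcement at t=100 adds a 3% jump.
market = [0.01 * ((t % 5) - 2) for t in range(105)]
stock = [0.001 + 1.2 * m for m in market]
stock[100] += 0.03
car = sum(abnormal_returns(stock, market, 100, 100, 105))  # CAR ≈ 0.03
```

Averaging such cumulative abnormal returns (CARs) over the sample of announcing firms, and testing whether the average differs from zero, is the basis of the reported finding of no overall market reaction.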
We estimate the degree of 'stickiness' in aggregate consumption growth (sometimes interpreted as reflecting consumption habits) for thirteen advanced economies. We find that, after controlling for measurement error, consumption growth has a high degree of autocorrelation, with a stickiness parameter of about 0.7 on average across countries. The sticky-consumption-growth model outperforms the random walk model of Hall (1978), and typically fits the data better than the popular Campbell and Mankiw (1989) model. In several countries, the sticky-consumption-growth and Campbell-Mankiw models work about equally well.
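A small simulation (hypothetical parameters) illustrates both the stickiness parameter and why controlling for measurement error matters: classical measurement error biases the estimated autocorrelation of observed growth toward zero.

```python
import random

def ar1_path(n, chi=0.7, sigma=0.01, seed=1):
    """Simulate sticky consumption growth: g_t = chi * g_{t-1} + eps_t."""
    random.seed(seed)
    g, path = 0.0, []
    for _ in range(n):
        g = chi * g + random.gauss(0.0, sigma)
        path.append(g)
    return path

def autocorr(xs):
    """First-order autocorrelation (OLS slope of x_t on x_{t-1})."""
    x, y = xs[:-1], xs[1:]
    mx = sum(x) / len(x)
    num = sum((a - mx) * (b - mx) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

true_growth = ar1_path(50000)
observed = [g + random.gauss(0.0, 0.01) for g in true_growth]  # measurement error
print(round(autocorr(true_growth), 2))  # close to the true chi of 0.7
print(round(autocorr(observed), 2))     # attenuated toward zero
```

The naive autocorrelation of the noisy series understates the true stickiness, which is why the paper's estimate of about 0.7 is obtained only after controlling for measurement error.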
From sea to land and beyond : new insights into the evolution of euthyneuran Gastropoda (Mollusca)
(2008)
Background The Euthyneura are considered to be the most successful and diverse group of Gastropoda. Phylogenetically, they are riven with controversy. Previous morphology-based phylogenetic studies have been greatly hampered by rampant parallelism in morphological characters or by incomplete taxon sampling. Based on sequences of nuclear 18S rRNA and 28S rRNA as well as mitochondrial 16S rRNA and COI DNA from 56 taxa, we reconstructed the phylogeny of Euthyneura utilising Maximum Likelihood and Bayesian inference methods. The evolution of the colonization of freshwater and terrestrial habitats by pulmonate Euthyneura, considered crucial in the evolution of this group of Gastropoda, is reconstructed with Bayesian approaches. Results We found several well supported clades within Euthyneura; however, we could not confirm the traditional classification, since Pulmonata are paraphyletic and Opisthobranchia are either polyphyletic or paraphyletic, with several clades clearly distinguishable. Sacoglossa appear separately from the rest of the Opisthobranchia as sister taxon to basal Pulmonata. Within Pulmonata, Basommatophora are paraphyletic and Hygrophila and Eupulmonata form monophyletic clades. Pyramidelloidea are placed within Euthyneura, rendering the Euthyneura paraphyletic. Conclusion Based on the current phylogeny, it can be proposed for the first time that the invasion of freshwater by Pulmonata is a unique evolutionary event and has taken place directly from the marine environment via an aquatic pathway. The origin of the colonisation of terrestrial habitats lies in marginal zones and has probably occurred via estuaries or semi-terrestrial habitats such as mangroves.
Poster presentation. In pharmaceutical research and drug development, machine learning methods play an important role in virtual screening and ADME/Tox prediction. For the application of such methods, a formal measure of similarity between molecules is essential. Such a measure, in turn, depends on the underlying molecular representation. Input samples have traditionally been modeled as vectors. Consequently, molecules are represented to machine learning algorithms in vectorized form using molecular descriptors. While this approach is straightforward, it has its shortcomings. Among other issues, the interpretation of the learned model can be difficult, e.g. when using fingerprints or hashing. Structured representations of the input constitute an alternative to vector-based representations, a trend in machine learning over recent years. For molecules, there is a rich choice of such representations. Popular examples include the molecular graph, molecular shape and the electrostatic field. We have developed a molecular similarity measure defined directly on the (annotated) molecular graph, a long-established topological model for molecules. It is based on the concepts of optimal atom assignments and iterative graph similarity. In the latter, two atoms are considered similar if their neighbors are similar. This recursive definition leads to a nonlinear system of equations. We show how to solve these equations iteratively and give bounds on the computational complexity of the procedure. Advantages of our similarity measure include interpretability (atoms of two molecules are assigned to each other, each pair with a score expressing local similarity; this can be visualized to show similar regions of two molecules and the degree of their similarity) and the possibility to introduce knowledge about the target where available.
We retrospectively tested our similarity measure using support vector machines for virtual screening on several pharmaceutical and toxicological datasets, with encouraging results. Prospective studies are under way.
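The recursive similarity idea can be sketched as a fixed-point iteration (a simplified variant that averages over all neighbour pairs; the published method uses optimal atom assignments and its own update and convergence criteria):

```python
def iterative_graph_similarity(g1, g2, labels1, labels2, alpha=0.5, iters=50):
    """Fixed-point iteration for pairwise atom similarities between two
    molecular graphs (adjacency lists + atom labels): an atom pair scores
    high when the labels match and the neighbours are similar."""
    n1, n2 = len(g1), len(g2)
    s = [[1.0 if labels1[i] == labels2[j] else 0.0 for j in range(n2)]
         for i in range(n1)]
    for _ in range(iters):
        nxt = [[0.0] * n2 for _ in range(n1)]
        for i in range(n1):
            for j in range(n2):
                base = 1.0 if labels1[i] == labels2[j] else 0.0
                if g1[i] and g2[j]:
                    nb = sum(s[u][v] for u in g1[i] for v in g2[j])
                    nb /= len(g1[i]) * len(g2[j])
                else:
                    nb = 0.0
                # mix direct label match with averaged neighbour similarity
                nxt[i][j] = (1 - alpha) * base + alpha * nb
        s = nxt
    return s

# Hypothetical 3-atom chains: C-O-C compared with C-C-C.
adj = [[1], [0, 2], [1]]
sim = iterative_graph_similarity(adj, adj, ["C", "O", "C"], ["C", "C", "C"])
```

For the two chains above, the end-carbon pair converges to a similarity of 2/3: its own labels match, but one neighbour is an oxygen in one molecule and a carbon in the other. The resulting atom-pair scores are exactly the kind of local similarities that can be visualized to highlight similar regions of two molecules.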
Background The inhibitor telaprevir (VX-950) of the hepatitis C virus (HCV) protease NS3-4A has been tested in a recent phase 1b clinical trial in patients infected with HCV genotype 1. This trial revealed residue mutations that confer varying degrees of drug resistance. In particular, two protease positions with the mutations V36A/G/L/M and T54A/S were associated with low to medium levels of drug resistance during viral breakthrough, together with only an intermediate reduction of viral replication fitness. These mutations are located in the protein interior and far away from the ligand binding pocket. Results Based on the available experimental structures of NS3-4A, we analyze the binding mode of different ligands. We also investigate the binding mode of VX-950 by protein-ligand docking. A network of non-covalent interactions between amino acids of the protease structure and the interacting ligands is analyzed to discover possible mechanisms of drug resistance. We describe the potential impact of V36 and T54 mutants on the side chain and backbone conformations and on the non-covalent residue interactions. We propose possible explanations for their effects on the antiviral efficacy of drugs and viral fitness. Molecular dynamics simulations of T54A/S mutants and rotamer analysis of V36A/G/L/M side chains support our interpretations. Experimental data using an HCV V36G replicon assay corroborate our findings. Conclusion T54 mutants are expected to interfere with the catalytic triad and with the ligand binding site of the protease. Thus, the T54 mutants are assumed to affect the viral replication efficacy to a larger degree than V36 mutants. Mutations at V36 and/or T54 result in impaired interaction of the protease residues with the VX-950 cyclopropyl group, which explains the development of viral breakthrough variants.
Generally, information provision and certification have been identified as the major economic functions of rating agencies. This paper analyzes whether the “watchlist” (rating review) instrument has extended the agencies' role towards a monitoring position, as proposed by Boot, Milbourn, and Schmeits (2006). Using a data set of Moody's rating history between 1982 and 2004, we find that the overall information content of rating action has indeed increased since the introduction of the watchlist procedure. Our findings suggest that rating reviews help to establish implicit monitoring contracts between agencies and borrowers and as such enable a finer partition of rating information, thereby contributing to a higher information quality.
The basic problem of primary audio and video research materials is clearly shown by the survey: a great and important part of the entire heritage is still outside archival custody in the narrower sense, scattered over many institutions in fairly small collections, and even in private hands. Preservation following generally accepted standards can only be carried out effectively if collections represent a critical mass. Specialised audiovisual archives will solve their problems, as they will sooner or later succeed in getting appropriate funding to achieve their aims. A very encouraging example is the case of the Netherlands. The larger audiovisual research archives will also manage, more or less autonomously, the transfer of contents in time. For a considerable part of the research collections, however, the concept of cooperative models and competence centres is the only viable model for successfully safeguarding their holdings. Their organisation and funding pose a considerable challenge for the scientific community. TAPE has significantly raised awareness of the fact that, unless action is swiftly taken, the loss of audiovisual materials is inevitable. TAPE’s international and regional workshops were generally overbooked. While TAPE was already underway, several other projects for the promotion of archives received grants from organisations other than the European Commission, inter alia support for the St. Petersburg Phonogram Archive and the Folklore Archive in Tirana, evidently as a result of a better understanding of the need for audiovisual preservation. When the TAPE project started, its partners assumed that cooperative projects would fail because of the notorious distrust of researchers, specifically in the post-communist countries. One of the most encouraging surprises was to learn, at least from the most recent survey, that this social obstacle is fading. TAPE may have contributed to this important development.
Twentieth-century scholars have thought little about the attractions of Descartes’ thinking. Especially in feminist theory, he has a bad press as the ‘instigator’ of the mind-body split, seen as one of the theoretical bases for the subordination of women in Western culture. Seen from within seventeenth-century discourse, however, the dictum that can be inferred from his writings, that ‘the mind has no sex’, can be read as an appeal to think about rational capacities in the utopian perspective of a gender-neutral discourse. My work analyses this “face” of Cartesianism as it was adapted in favour of English seventeenth-century women. How were the specific tenets of Descartes’ philosophy employed on behalf of English women in the second half of the seventeenth century? My focus is on Descartes as a thinker who, whatever his real or imagined intention might have been, provided women in seventeenth-century England with tools with which to change their status, in other words: with instruments of empowerment. So why were Descartes’ arguments so attractive for women? Descartes had argued for equal rational abilities among individuals in a gender-neutral way. He had further critiqued generally accepted truths with his universal doubt. I believe this specific combination of ideas, affirming their rational capabilities, was seen by a number of women as an invitation to become involved in spheres of activity from which they had previously been excluded. Moreover, a specific set of Descartes’ arguments provided a number of English women with a strategy to extend female agency. Not only did Descartes’ views legitimate female rationality, they also allowed an acknowledgement that this female intellect was as much connected to “truth” as that of their male contemporaries. As a consequence, women developed an increased self-esteem and found inspiration to pursue their own independent study (and in some cases publishing).
These ideas eventually helped to bring forward a demand for female education, as girls and women were still excluded from formal education in seventeenth-century England. My general thesis is that Cartesianism, as one of the earliest universalist theories on the nature of human reason, introduced new possibilities into the English debate over the nature and, hence, social position of women. It brought a radical twist to the already existing discussion on women by offering new critical tools which were taken up to argue on behalf of English women. In my work I examine the specific historical conditions of the reception of Descartes’ thought in England, the philosophical appeal of his ideas for women and analyse the writings of two English ‘disciples’ of Descartes: Margaret Cavendish, Duchess of Newcastle and Mary Astell.
On 27 and 28 September 2007, a commission formed on the initiative of the authors held its first meeting in Aarhus, Denmark to deliberate on its goal of drafting a "European Model Company Law Act" (EMCLA). This project, outlined in the following pages, aims neither to force a mandatory harmonization of national company law nor to create a further, European corporate form. The goal is rather to draft model rules for a corporation that national legislatures would be free to adopt in whole or in part. Thus, the project is conceived as an alternative and supplement to the existing EU instruments for the convergence of company law. The present EU instruments, their prerequisites and limits will be discussed in more detail in Part II, below. Part III will examine the US experience with such "model acts" in the area of company law. Part IV will then conclude by discussing several topics concerning the content of an EMCLA, introducing the members of the EMCLA Working Group, and explaining the Group's preliminary working plan.
This paper identifies some common errors that occur in comparative law, offers some guidelines to help avoid such errors, and provides a framework for entering into studies of the company laws of three major jurisdictions. The first section illustrates why a conscious approach to comparative company law is useful. Part I discusses some of the problems that can arise in comparative law and offers a few points of caution that can be useful for practical, theoretical and legislative comparative law. Part II discusses some relatively famous examples of comparative analysis gone astray in order to demonstrate the utility of heeding the outlined points of caution. The second section offers a framework for approaching comparative company law. Part III provides an example of using functional definition to demarcate the topic "company law", offering an "effects" test to determine whether a given provision of law should be considered as functionally part of the rules that govern the core characteristics of companies. It does this by presenting the relevant company law statutes and related topical laws of Germany, the United Kingdom and the United States, using Delaware as a proxy for the 50 states. On the basis of this definition, Part IV analyzes the system of legal functions that comprises "company law" in the United States and the European Union. It selects as the predominant factor for consideration the jurisdictions, sub-jurisdictions and rule-making entities that have legislative or rule-making competence in the relevant territorial unit, analyzes the extent of their power, presents the type of law (rules) they enact (issue), and discusses the concrete manner in which the laws and rules of the jurisdictions and sub-jurisdictions can legally interact. 
Part V looks at the way these jurisdictions do interact on the temporal axis of history, that is, their actual influence on each other, which in the relevant jurisdictions currently takes the form of regulatory competition and legislative harmonization. The method of the approach outlined in this paper borrows much from system theory. The analysis attempts to be detailed without losing track of the overall jurisdictional framework in the countries studied.
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
We develop a multivariate generalization of the Markov-switching GARCH model introduced by Haas, Mittnik, and Paolella (2004b) and derive its fourth-moment structure. An application to international stock markets illustrates the relevance of accounting for volatility regimes from both a statistical and economic perspective, including out-of-sample portfolio selection and computation of Value-at-Risk.
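To make the regime-switching idea concrete, the following is a minimal simulation sketch of a univariate two-regime Markov-switching GARCH(1,1) process. All parameter values are illustrative assumptions, not the values estimated by Haas, Mittnik, and Paolella (2004b), and the multivariate extension described in the abstract is not attempted here:

```python
import random

def simulate_ms_garch(T, seed=0):
    """Simulate T returns from a two-regime Markov-switching GARCH(1,1).
    Parameters below are hypothetical: regime 0 is 'calm', regime 1 'turbulent'."""
    random.seed(seed)
    omega = [0.05, 0.20]   # regime-specific variance intercepts
    alpha = [0.05, 0.15]   # ARCH coefficients
    beta  = [0.90, 0.70]   # GARCH coefficients
    P = [[0.98, 0.02],     # transition matrix: P[i][j] = Pr(next = j | now = i)
         [0.05, 0.95]]
    s = 0                  # current regime
    h = [1.0, 1.0]         # per-regime conditional variances
    returns, regimes = [], []
    for _ in range(T):
        r = random.gauss(0.0, h[s] ** 0.5)       # return drawn in current regime
        # each regime's variance recursion is updated with the realized return
        h = [omega[k] + alpha[k] * r * r + beta[k] * h[k] for k in range(2)]
        returns.append(r)
        regimes.append(s)
        s = 0 if random.random() < P[s][0] else 1  # Markov regime transition
    return returns, regimes
```

The persistent transition matrix produces clusters of calm and turbulent periods, which is the feature that single-regime GARCH models struggle to capture in out-of-sample risk measures.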
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out-of-sample Value-at-Risk measures.
This paper documents and studies sources of international differences in participation and holdings in stocks, private businesses, and homes among households aged 50+ in the US, England, and eleven continental European countries, using new internationally comparable, household-level data. With greater integration of asset and labor markets and policies, households of given characteristics should be holding more similar portfolios for old age. We decompose observed differences across the Atlantic, within the US, and within Europe into those arising from differences: a) in the distribution of characteristics and b) in the influence of given characteristics. We find that US households are generally more likely to own these assets than their European counterparts. However, European asset owners tend to hold smaller real, PPP-adjusted amounts in stocks and larger in private businesses and primary residence than US owners at comparable points in the distribution of holdings, even controlling for differences in configuration of characteristics. Differences in characteristics often play minimal or no role. Differences in market conditions are much more pronounced among European countries than among US regions, suggesting significant potential for further integration.
Marginal income taxes may have an insurance effect by decreasing the effective fluctuations of after-tax individual income. By compressing the idiosyncratic component of personal income fluctuations, higher marginal taxes should be negatively correlated with the dispersion of consumption across households, a necessary implication of an insurance effect of taxation. Our study empirically examines this negative correlation, exploiting the ample variation of state taxes across US states. We show that taxes are negatively correlated with the dispersion of the within-state distribution of non-durable consumption and that this correlation is robust.
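The correlation the study tests can be sketched in a few lines. The state-level figures below are entirely hypothetical and serve only to show the shape of the computation (a Pearson correlation between state marginal tax rates and within-state consumption dispersion), not the paper's data or results:

```python
import math
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical data for four states: marginal tax rate, and the standard
# deviation of log non-durable consumption within that state
tax_rates   = [0.03, 0.05, 0.07, 0.09]
dispersions = [0.42, 0.40, 0.37, 0.33]

rho = pearson(tax_rates, dispersions)  # negative under the insurance hypothesis
```

A negative `rho` in the actual data is what the insurance interpretation predicts: states with higher marginal taxes show less cross-household consumption dispersion.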
Based on a unique dataset of legislative changes in industrial countries, we identify events that strengthen the competition control of mergers and acquisitions, analyze their impact on banks and non-financial firms and explain the different reactions observed with specific regulatory characteristics of the banking sector. Covering nineteen countries for the period 1987 to 2004, we find that more competition-oriented merger control increases the stock prices of banks and decreases the stock prices of non-financial firms. Bank targets become more profitable and larger, while those of non-financial firms remain mostly unaffected. A major determinant of the positive bank returns is the degree of opaqueness that characterizes the institutional setup for supervisory bank merger reviews. The legal design of the supervisory control of bank mergers may therefore have important implications for real activity.
This work is devoted to the description of mechanisms that might be responsible for avian magnetoreception. Two possible theoretical concepts underlying this phenomenon are formulated and their functionality is demonstrated in realistic geomagnetic fields. It has been suggested that the "magnetic sense" in birds may be mediated by the blue-light receptor protein cryptochrome, which is known to be localized in the retinas of migratory birds. Cryptochromes are a class of photoreceptor signaling proteins that are found in a wide variety of organisms and which primarily perform regulatory functions, such as the entrainment of circadian rhythm in mammals and the inhibition of hypocotyl growth in plants. Recent experiments have shown that the activity of cryptochrome-1 in Arabidopsis thaliana is enhanced by the presence of a weak external magnetic field, confirming the ability of cryptochrome to mediate magnetic field responses. Cryptochrome's signaling is tied to the photoreduction of an internally bound chromophore, flavin adenine dinucleotide (FAD). The spin chemistry of this photoreduction process, which involves electron transfer from a chain of three tryptophans, is modulated by the presence of a magnetic field in an effect known as the radical pair mechanism. Cryptochrome was suggested as a possible magnetoreceptor for the first time in 2000; however, no realistic calculations of the magnetic field effect in cryptochrome had been performed. One of the goals of the present thesis is to study computationally the electron spin dynamics in cryptochrome and to show the feasibility of a cryptochrome-based compass in birds. In particular, the activation yield of cryptochrome was studied as a function of an external magnetic field, and it was shown that the activation of the protein can be influenced by the geomagnetic field. It has also been shown in this work that cryptochrome provides an inclination compass, which is necessary for bird orientation.
The evolution of spin densities as a function of time is also discussed. An alternative mechanism of avian magnetoreception discussed in the thesis is based on the interaction of two iron minerals (magnetite and maghemite) which were only recently found in subcellular compartments within the sensory dendrites of the upper beak of several bird species. The iron minerals in the beak form platelets of crystalline maghemite and assemblies of magnetite nanoparticles (magnetite clusters). The interaction between these particles can be manipulated by an external magnetic field inducing a primary receptor potential via strain-sensitive membrane channels that lead to a certain bird orientation effect. Various properties of the magnetite/maghemite magnetoreceptor system have been considered: the potential energy surface of the magnetite cluster has been calculated and analyzed as a function of the orientation of an external magnetic field; the forces acting on the magnetite cluster were calculated and analyzed; the force differences caused by the change of the direction of external magnetic field were established; the probability of opening the mechanosensitive ion channel was calculated. Finally it has been demonstrated that the iron-mineral based magnetoreceptor provides a polarity magnetic compass. Various conditions at which the magnetoreception process is violated are outlined.
Strong chromofields developed at early stages of relativistic heavy-ion collisions give rise to the collective deceleration of net baryons from the colliding nuclei. We have solved classical equations of motion for baryonic slabs under the action of a time-dependent chromofield. We have studied the sensitivity of the slab trajectories and their final rapidities to the initial strength and decay pattern of the chromofield, as well as to the back reaction of the produced plasma. This mechanism can naturally explain the significant baryon stopping observed at RHIC, an average rapidity loss ⟨δy⟩ ≈ 2. Using a Bjorken hydrodynamical model with a particle-producing source, we also study the evolution of the partonic plasma produced as the result of chromofield decay. Due to the delayed formation and expansion of the plasma, its maximum energy density is much lower than the initial energy density of the chromofield. It is shown that the net-baryon and produced-parton distributions are strongly correlated in rapidity space. The shape of the net-baryon spectra in the midrapidity region found in the BRAHMS experiment cannot be reproduced by a single value of the chromofield energy density parameter ε₀, even if one takes into account such novel mechanisms as fluctuations of the color charges generated on the slab surface and a weak interaction of the baryon-rich matter with the produced plasma. A further step to improve our results is to take into account the rapidity dependence of the saturation momentum, as explained in the thesis. Different values of the parameter ε₀ have been tried for different variants of chromofield decay to fit the BRAHMS data for the net-baryon rapidity distribution. According to our analysis, the data for the fragmentation region correspond to lower chromofield energy densities than the mid-rapidity region. A χ² analysis favors a power-law decay of the chromofield with a corresponding initial chromofield energy density of order ε_f = 30 GeV/fm³.
This article offers an analytical reading of a travel sketch by the Danish playwright Kaj Munk (1898–1944). The analysis of this text supports at least three conclusions: 1. The explicit motif of seasickness, which figures here as an antithetic modification of the implicitly present motif of free-standing posture, symbolizes the idea of a disintegrating personality. 2. This symbolism is deeply rooted in the symbolism of Danish identity. 3. From the point of view of the history of literary styles and movements, the sketch appears typical of the line within expressionism that continues the tradition of symbolism as an artistic and literary movement of the late nineteenth century.
Many older US households have done little or no planning for retirement, and there is a substantial population that seems to undersave for retirement. Of particular concern is the relative position of older women, who are more vulnerable to old-age poverty due to their greater longevity. This paper uses data from a special module we devised on planning and financial literacy in the 2004 Health and Retirement Study. It shows that women display much lower levels of financial literacy than the older population as a whole. In addition, women who are less financially literate are also less likely to plan for retirement and be successful planners. These findings have important implications for policy and for programs aimed at fostering financial security at older ages.
The reaction of consumer spending and debt to tax rebates – evidence from consumer credit data
(2008)
We use a new panel dataset of credit card accounts to analyze how consumers responded to the 2001 Federal income tax rebates. We estimate the monthly response of credit card payments, spending, and debt, exploiting the unique, randomized timing of the rebate disbursement. We find that, on average, consumers initially saved some of the rebate, by increasing their credit card payments and thereby paying down debt. But soon afterwards their spending increased, counter to the canonical Permanent-Income model. Spending rose most for consumers who were initially most likely to be liquidity constrained, whereas debt declined most (so saving rose most) for unconstrained consumers. More generally, the results suggest that there can be important dynamics in consumers’ response to “lumpy” increases in income like tax rebates, working in part through balance sheet (liquidity) mechanisms.