We present a biologically inspired system for real-time, feed-forward object recognition in cluttered scenes. Our system uses a vocabulary of very sparse features that are shared between and within different object models. To detect objects in a novel scene, these features are located in the image, and each detected feature votes for all objects that are consistent with its presence. Due to the sharing of features between object models, our approach scales better to large object databases than traditional methods. To demonstrate the utility of this approach, we train our system to recognize any of 50 objects in everyday cluttered scenes with substantial occlusion. Without further optimization we also demonstrate near-perfect recognition on a standard 3-D recognition problem. Our system has an interpretation as a sparsely connected feed-forward neural network, making it a viable model for fast, feed-forward object recognition in the primate visual system.
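The voting scheme described above can be sketched in a few lines. The feature vocabulary, object names and vote threshold below are hypothetical, purely to illustrate how shared features let a single detection support several object models at once:

```python
from collections import defaultdict

# Hypothetical shared-feature vocabulary: each feature id maps to the set
# of object models consistent with its presence (features are shared
# between models, which is what makes the scheme scalable).
FEATURE_TO_OBJECTS = {
    "f1": {"cup", "bowl"},
    "f2": {"cup"},
    "f3": {"bowl", "plate"},
    "f4": {"plate"},
}

def recognize(detected_features, threshold=2):
    """Each detected feature votes for every object consistent with it;
    objects whose vote count reaches the threshold are reported."""
    votes = defaultdict(int)
    for f in detected_features:
        for obj in FEATURE_TO_OBJECTS.get(f, ()):
            votes[obj] += 1
    return {obj for obj, v in votes.items() if v >= threshold}

print(sorted(recognize(["f1", "f2", "f3"])))  # → ['bowl', 'cup']
```

Because a feature votes for all models it belongs to, adding a new object to the database mostly reuses existing vocabulary entries instead of adding new detectors.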
The analysis of doxorubicin-loaded poly(butyl cyanoacrylate) nanoparticles in in vitro glioma models
(2005)
The use of doxorubicin for the treatment of glioma would be an important chemotherapeutic approach, since doxorubicin is a very effective antineoplastic agent. However, one problem with its use against brain tumours is that doxorubicin is a substrate of the efflux pump P-glycoprotein (P-gp), which is located on the luminal side of the brain capillary endothelium and in many tumour cells and which pumps its substrates out of the cell, blocking their transport into the cell. One strategy to enhance doxorubicin delivery into the brain is the use of nanoparticles. This work showed that doxorubicin bound to poly(butyl cyanoacrylate) nanoparticles decreased the viability of three glioma cell lines (GS-9L, RG-2 and F-98) significantly in comparison with doxorubicin in solution, indicating improved transport of nanoparticle-bound doxorubicin into the cells. Modifying the nanoparticle surface with different surfactants may further enhance drug delivery into the cells. To improve doxorubicin internalization, the nanoparticle surface was therefore modified with the surfactants polysorbate 80, poloxamer 188 and poloxamine 908. Nanoparticles modified with poloxamer 188 or poloxamine 908 did not show a significant enhancement of doxorubicin internalization. In contrast, treatment with polysorbate 80-modified nanoparticles led in some cases to a significant decrease in cancer cell viability. The three glioma cell lines responded differently to doxorubicin treatment. These different responses were due to the different amounts of doxorubicin entering the glioma cells, which express P-glycoprotein in their cell membrane: a higher level of P-gp expression correlated with a weaker response to doxorubicin treatment.
The GS-9L cell line showed a significantly higher level of P-gp expression than the F-98 and RG-2 cell lines and consequently presented the highest resistance to doxorubicin, with the highest viability values after doxorubicin treatment. Since doxorubicin transport in the studied glioma cells is governed by P-gp activity, the use of poloxamer 185 as a P-gp inhibitor enhanced both the uptake and the accumulation of doxorubicin in the cells. The effect of poloxamer 185 on doxorubicin uptake was most marked in doxorubicin-resistant cells such as the GS-9L line. In some cases, the nanoparticle formulation also contributed to this uptake improvement. The use of a P-gp inhibitor in combination with chemotherapeutic agents thus leads to encouraging results. Because of the wide spectrum of substances acting as P-gp inhibitors, the exact inhibitory mechanisms remain unclear. For instance, in our experiments a described P-gp inhibitor, polysorbate 80, did not markedly improve doxorubicin uptake in the P-gp-expressing GS-9L cell line. On the other hand, the polysorbate 80-Dox-PBCA nanoparticle formulation decreased the viability of the glioma cells to a greater extent than the poloxamer 185-Dox-PBCA nanoparticles. Although P-gp inhibition was undoubtedly stronger in the presence of poloxamer 185, polysorbate 80 acted mainly by disrupting the cellular membrane, resulting in a substantial decrease in cell viability. Poloxamer 185 appears to act directly on the functionality of the P-gp protein, which would be of great importance for the sensitization of resistant cancer cells. The concentration range of poloxamer 185 is critical for achieving an inhibitory effect on the P-gp-mediated transport mechanism.
The accumulation of rhodamine-123 (Rho-123), a known P-gp substrate, increased over the concentration range from 0.001% to 0.01% poloxamer 185, whereas at 0.1% the accumulation decreased significantly; maximal Rho-123 accumulation was reached at 0.01% poloxamer 185.
In this paper we derive a formula for the energy loss due to elastic N-to-N particle scattering in models with extra dimensions that are compactified on a radius R. In contrast to a previous derivation, we also calculate additional terms that are suppressed by factors of frequency over compactification radius. In the limit of a large compactification radius R those terms vanish and the standard result for the non-compactified case is recovered.
Background: Depression is a disorder with high prevalence in primary health care and a significant burden of illness. The delivery of health care for depression, as for other chronic illnesses, has been criticized for several reasons, and new strategies to address the needs of these illnesses have been advocated. Case management is a patient-centred approach which has shown efficacy in the treatment of depression in highly organized Health Maintenance Organization (HMO) settings and which might also be effective in other, less structured settings. Methods/Design: PRoMPT (PRimary care Monitoring for depressive Patients Trial) is a cluster-randomised controlled trial with the general practice (GP) as the unit of randomisation. The aim of the study is to evaluate a GP-applied case management for patients with major depressive disorder. 70 GPs were randomised either to the intervention group or to the control group, with the control group delivering usual care. Each GP will include 10 patients suffering from major depressive disorder according to the DSM-IV criteria. The intervention group will receive treatment based on standardized guidelines and monthly telephone monitoring by a trained practice nurse. The nurse investigates the patient's status concerning the MDD criteria, their adherence to the GP's prescriptions, possible side effects of medication, and treatment goal attainment. The control group receives usual care, including recommended guidelines. The main outcome measure is the cumulative score of the depressive disorders section (PHQ-9) of the German version of the Prime MD Patient Health Questionnaire (PHQ-D). Secondary outcome measures are the Beck Depression Inventory, self-reported adherence (adapted from Morisky) and the SF-36. In addition, data are collected on patients' satisfaction (EUROPEP tool), medication, health care utilization, comorbidity, suicide attempts and days out of work.
The study comprises three assessment times: baseline (T0), follow-up after 6 months (T1) and follow-up after 12 months (T2). Discussion: Depression is now recognized as a disorder with a high prevalence in primary care but with insufficient treatment response. Case management seems to be a promising intervention with the potential to bridge the gap left by the usually time-limited and fragmented provision of care. Case management has been proven effective in several studies, but its applicability in the private general medical practice setting remains unclear.
Background: Diabetes model projects in different regions of Germany, including interventions such as quality circles, patient education and documentation of medical findings, have shown improvements in HbA1c levels, blood pressure and the occurrence of hypoglycaemia in before-after studies (without control group). In 2002 the German Ministry of Health defined legal regulations for the introduction of nationwide disease management programmes (DMP) to improve the quality of care in chronically ill patients. In April 2003 the first DMP for patients with type 2 diabetes was accredited. The evaluation of the DMP is essential and has been made obligatory in Germany by the Fifth Book of the Social Code. The aim of the study is to assess the effectiveness of DMP using the example of type 2 diabetes in the primary care setting of two German federal states (Rheinland-Pfalz and Sachsen-Anhalt). Methods/Design: The study is three-armed: a prospective cluster-randomized comparison of two interventions (DMP 1 and DMP 2) against routine care without DMP as the control group. In DMP group 1 the patients are treated according to the current practice of the German Diabetes-DMP. DMP group 2 represents diabetes care within an ideally implemented DMP providing additional interventions (e.g. quality circles, outreach visits). According to a sample size calculation, 200 GPs (each including 20 patients) will be required for the comparison of DMP 1 and DMP 2, allowing for possible drop-outs. For the comparison with routine care, 4000 patients identified by diabetic tracer medication and age (> 50 years) will be analyzed. Discussion: This study will evaluate the effectiveness of the German Diabetes-DMP compared with a Diabetes-DMP providing additional interventions and with routine care in the primary care setting of two different German federal states.
Aims: This paper reviews the literature on problem-related drinking of alcohol among medical doctors, covering its epidemiology and findings. Methods: A search of the computerized literature databases PubMed and ETOH was performed to locate articles reporting problem-related drinking among doctors, based on population-based samples of doctors from the last two decades. Results: Reflecting the different definitions of problem-related drinking, a wide range of prevalences was found within the population-based samples of doctors, from heavy drinking and hazardous drinking (12%-16%) to misuse and dependence (6%-8%). An increased risk was associated with male doctors, with doctors aged 40-45 years and older, and with some work-, lifestyle- and health-related factors. Conclusion: For the future, it seems necessary to strengthen research on problem-related drinking among doctors in Germany, e.g. by initiating a representative survey and analysing alcohol consumption in the context of health-, lifestyle- and work-related factors.
Herman P. Schwan [1915–2005] was a distinguished scientist and engineer, and a founding father of the field of biomedical engineering. A man of integrity, Schwan influenced the lives of many, including his wife and children, and his many students and colleagues. Active in science until nearly the end of his life, he will be very much missed by his family and many colleagues.
Background: Murine leukemia virus (MLV) vector particles can be pseudotyped with a truncated variant of the human immunodeficiency virus type 1 (HIV-1) envelope protein (Env) and selectively target gene transfer to human cells expressing both CD4 and an appropriate co-receptor. Vector transduction mimics the HIV-1 entry process and is therefore a safe tool to study HIV-1 entry. Results: Using FLY cells, which express the MLV gag and pol genes, we generated stable producer cell lines that express the HIV-1 envelope gene and a retroviral vector genome encoding the green fluorescent protein (GFP). The BH10 or 89.6P HIV-1 Env was expressed from a bicistronic vector, which allowed the rapid selection of stable cell lines. A codon-usage-optimized synthetic env gene permitted high, Rev-independent Env expression. Vectors generated by these producer cells displayed different sensitivities to entry inhibitors. Conclusion: These data illustrate that MLV/HIV-1 vectors are a valuable screening system for entry inhibitors or neutralizing antisera generated by vaccines.
In this paper, we propose a model of credit rating agencies using the global games framework to incorporate information and coordination problems. We introduce a refined utility function of a credit rating agency that, in addition to reputation maximization, also embeds aspects of competition and of the feedback effects of the rating on the rated firms. Apart from hinting at explanations for several hypotheses regarding agencies' optimal rating assessments, our model suggests that the existence of rating agencies may decrease the incidence of multiple equilibria. If investors have discretionary power over the precision of their private information, we can prove that public rating announcements and private information collection are complements rather than substitutes in securing uniqueness of equilibrium. In this respect, rating agencies may spark off a virtuous circle that increases the efficiency of the market outcome.
The 5'-terminal cloverleaf (CL)-like RNA structures are essential for the initiation of positive- and negative-strand RNA synthesis of entero- and rhinoviruses. SLD is the cognate RNA ligand of the viral proteinase 3C (3Cpro), which is an indispensable component of the viral replication initiation complex. The structure of an 18mer RNA representing the apical stem and the cGUUAg D-loop of SLD from the first 5'-CL of BEV1 was determined in solution to a root-mean-square deviation (r.m.s.d.; all heavy atoms) of 0.59 Å (PDB 1Z30). The first (anti G) and last (syn A) nucleotides of the D-loop form a novel 'pseudo base pair' without direct hydrogen bonds. The backbone conformation and the base-stacking pattern of the cGUUAg loop, however, are highly similar to those of the coxsackieviral uCACGg D-loop (PDB 1RFR) and of the stable cUUCGg tetraloop (PDB 1F7Y), but surprisingly dissimilar to the structure of a cGUAAg stable tetraloop (PDB 1MSY), even though the cGUUAg BEV D-loop and the cGUAAg tetraloop differ by only 1 nt. Together with the presented binding data, these findings provide independent experimental evidence for our model [O. Ohlenschläger, J. Wöhnert, E. Bucci, S. Seitz, S. Häfner, R. Ramachandran, R. Zell and M. Görlach (2004) Structure, 12, 237–248] that the proteinase 3Cpro recognizes structure rather than sequence.
We have isolated the human protein SNEV as downregulated in replicatively senescent cells. Sequence homology to the yeast splicing factor Prp19 suggested that SNEV might be the orthologue of Prp19 and therefore might also be involved in pre-mRNA splicing. We have used various approaches including gene complementation studies in yeast using a temperature sensitive mutant with a pleiotropic phenotype and SNEV immunodepletion from human HeLa nuclear extracts to determine its function. A human–yeast chimera was indeed capable of restoring the wild-type phenotype of the yeast mutant strain. In addition, immunodepletion of SNEV from human nuclear extracts resulted in a decrease of in vitro pre-mRNA splicing efficiency. Furthermore, as part of our analysis of protein–protein interactions within the CDC5L complex, we found that SNEV interacts with itself. The self-interaction domain was mapped to amino acids 56–74 in the protein's sequence and synthetic peptides derived from this region inhibit in vitro splicing by surprisingly interfering with spliceosome formation and stability. These results indicate that SNEV is the human orthologue of yeast PRP19, functions in splicing and that homo-oligomerization of SNEV in HeLa nuclear extract is essential for spliceosome assembly and that it might also be important for spliceosome stability.
In order to further understand how DNA polymerases discriminate against incorrect dNTPs, we synthesized two sets of dNTP analogues and tested them as substrates for DNA polymerase α (pol α) and the Klenow fragment (exo-) of DNA polymerase I (Escherichia coli). One set of analogues was designed to test the importance of the electronic nature of the base. The bases consisted of a benzimidazole ring with one or two exocyclic substituents that are either electron-donating (methyl and methoxy) or electron-withdrawing (trifluoromethyl and dinitro). Both pol α and the Klenow fragment exhibit a remarkable inability to discriminate against these analogues compared with their ability to discriminate against incorrect natural dNTPs. Neither polymerase shows any distinct electronic or steric preferences for analogue incorporation. The other set of analogues, designed to examine the importance of hydrophobicity in dNTP incorporation, consists of four regioisomers of trifluoromethyl benzimidazole. Whereas pol α and the Klenow fragment exhibited minimal discrimination against the 5- and 6-regioisomers, they discriminated much more effectively against the 4- and 7-regioisomers. Since all four of these analogues have similar hydrophobicity and stacking ability, these data indicate that hydrophobicity and stacking ability alone cannot account for the inability of pol α and the Klenow fragment to discriminate against unnatural bases. After incorporation, however, neither set of analogues was efficiently elongated. These results suggest that factors other than hydrophobicity, sterics and electronics govern the incorporation of dNTPs into DNA by pol α and the Klenow fragment.
Background: Costly structures need to represent an adaptive advantage in order to be maintained over evolutionary times. Contrary to many other conspicuous shell ornamentations of gastropods, the haired shells of several Stylommatophoran land snails still lack a convincing adaptive explanation. In the present study, we analysed the correlation between the presence/absence of hairs and habitat conditions in the genus Trochulus in a Bayesian framework of character evolution. Results: Haired shells appeared to be the ancestral character state, a feature most probably lost three times independently. These losses were correlated with a shift from humid to dry habitats, indicating an adaptive function of hairs in moist environments. It had been previously hypothesised that these costly protein structures of the outer shell layer facilitate the locomotion in moist habitats. Our experiments, on the contrary, showed an increased adherence of haired shells to wet surfaces. Conclusion: We propose the hypothesis that the possession of hairs facilitates the adherence of the snails to their herbaceous food plants during foraging when humidity levels are high. The absence of hairs in some Trochulus species could thus be explained as a loss of the potential adaptive function linked to habitat shifts.
Using unobservable conditional variance as a measure, latent-variable approaches such as GARCH and stochastic-volatility models have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to account explicitly for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement of the model's fit as well as of its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting. JEL Classification: C22, C51, C52, C53
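The construction of an "observable" realized-volatility series mentioned above is simple to state: square and sum the intraday returns within each day. A minimal sketch with simulated 5-minute returns (invented data, not the paper's S&P 500 sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 5-minute log returns for 3 trading days
# (78 five-minute intervals per 6.5-hour session); not real market data.
n_days, n_intra = 3, 78
returns = rng.normal(0.0, 0.001, size=(n_days, n_intra))

# Daily realized variance is the sum of squared intraday returns;
# realized volatility is its square root.  The resulting daily series
# is what time-series models such as ARFIMA are then fitted to.
realized_vol = np.sqrt((returns ** 2).sum(axis=1))
print(realized_vol.shape)  # one observation per trading day
```

The daily series produced this way is treated as directly observed, which is what allows standard time-series machinery to replace latent-variable estimation.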
Stem cells capable of self-renewal and differentiation into multiple tissues are important in medicine for reconstituting the hematopoietic system after myelo-ablative chemo- or radiotherapy. At present, adult stem cells such as mesenchymal stem cells (MSC) and hematopoietic stem cells (HSC) are used for therapeutic purposes. For tissue regeneration and tissue constitution, engraftment of transplanted stem cells is a necessary feature. However, in many instances the transplanted stem cells reach the tissues with low efficiency. In the three-step model of leukocyte extravasation by Springer et al., rolling, adhesion and transmigration form the three major steps by which transplanted stem cells enter the desired tissues. One of the molecular switches reported to be involved in these mechanisms is the family of Rho GTPases. The present study investigates the role of Rho GTPases in the adhesion and migration of stem and progenitor cells. Chemotactic and chemokinetic migration assays, transendothelial migration assays, migration of cells under shear stress, microinjection, retroviral and lentiviral gene transfer, oligonucleotide microarray analysis and pull-down assays were employed to elucidate the involvement of Rho GTPases in the migration and adhesion of stem and progenitor cells. The transmigration assay used to determine the migration of the adherent cell type, MSC, was optimized for efficient and effective assessment of the migrating cells. The involvement of Rho was found to be critical for stem and progenitor cell migration: inactivation of Rho by C2I-C3 transferase toxin and/or overexpression of C3 transferase cDNA increased the migration rate of hematopoietic progenitor cells (HPC) and MSC. Moreover, modulation of Rho caused predictable cytoskeletal and morphological changes in MSC.
Assessment of Rho GTPase involvement in the interacting partner, the endothelial cells, during stem cell migration revealed that expression of active Rho induced E-selectin expression. The increased levels of E-selectin were functionally confirmed by the increased adhesion of progenitor cells (HPC) to the human umbilical vein endothelial cell (HUVEC) layer. Moreover, inhibition of Rac in migrating endothelial progenitor cells (eEPC) increased their adhesion to HUVEC, correlating with an increased surface expression of the receptor CD44 in Rac-inactivated eEPC. In conclusion, this study shows that Rho GTPases control the adhesion and migration of stem and progenitor cells, HPC and MSC. Rho inhibition drives the cells to migrate within the blood vessels. The substantial increase in the level of active Rho in the endothelial layer, manifested by E-selectin surface expression, assists the adhesion of stem and progenitor cells to the endothelial layer. Serum factors and growth factors in the physiological system influence Rho GTPase expression both in the migrating stem cells and in the barrier endothelial cells. Thus, specific modulation of Rho GTPases in transplanted stem and progenitor cells could be an interesting tool to improve the migration and homing of stem cells for cellular therapy in the future.
This work is dedicated to the investigation of nuclear matter at non-zero temperatures within an effective hadronic model based on the Walecka model. It includes fermions as well as a vector omega meson and a scalar sigma meson, where for the latter a quartic self-interaction has been considered. The coupling constants have been fitted to the saturation properties of infinite nuclear matter. A set of self-consistent Schwinger-Dyson equations has been set up for all included particles within the Cornwall-Jackiw-Tomboulis formalism, which has been extended to non-zero temperatures via the imaginary-time formalism. Besides the tree level, two stages of approximation have been considered: the Hartree approximation, which takes into account the double-bubble diagram for the scalar meson, and an improved approximation in which, in addition, two-particle-irreducible sunset diagrams for all fields were included. In the Hartree approximation the Schwinger-Dyson equations can be solved by quasi-particle ansätze, while in the improved approximation spectral functions with non-zero widths have to be introduced. The Schwinger-Dyson equations are solved with the fully dressed propagators. Comparing the two levels of approximation shows the influence of finite widths on the temperature dependence of the particle properties. The consideration of finite widths indeed has a significant influence on the transition from a phase of heavy nucleons to a phase of light nucleons, as observed in the Walecka model: the temperature dependence is weakened when finite widths are taken into account.
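For orientation, the interaction structure described above (nucleons coupled to a scalar σ and a vector ω meson, with a quartic σ self-interaction) has, in generic Walecka-type notation, the schematic Lagrangian below; the precise coefficients, sign conventions and the form of the self-interaction term are the thesis's own and may differ from this sketch:

```latex
\mathcal{L} =
  \bar{\psi}\left[\gamma_\mu\left(i\partial^\mu - g_\omega\,\omega^\mu\right)
    - \left(m_N - g_\sigma\,\sigma\right)\right]\psi
  + \tfrac{1}{2}\left(\partial_\mu \sigma\,\partial^\mu \sigma
    - m_\sigma^2\,\sigma^2\right)
  - \tfrac{\lambda}{4!}\,\sigma^4
  - \tfrac{1}{4}\,\omega_{\mu\nu}\,\omega^{\mu\nu}
  + \tfrac{1}{2}\,m_\omega^2\,\omega_\mu\,\omega^\mu,
\qquad
\omega_{\mu\nu} = \partial_\mu \omega_\nu - \partial_\nu \omega_\mu .
```

The scalar coupling reduces the effective nucleon mass, which is what drives the heavy-to-light nucleon transition discussed above.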
The present work addresses the systematic analysis of samples from a range of Roman non-ferrous metal artefacts from different archaeological contexts and sites in the Roman province of Germania Superior. One focal point of this study is the provenancing of lead objects from five important Roman settlements dating between 15 BC and the beginning of the fourth century AD. For this purpose, measurements were made on lead and copper ore samples from the Siegerland, Eifel, Hunsrück and Lahn-Dill areas in Germany and supplemented with data from the literature to create a database of lead isotope ratios of European deposits. Compositional analysis of the lead objects by electron microprobe showed that the Romans were able to purify lead from ore to up to 99%. Multi-collector inductively coupled plasma mass spectrometry was used to determine the source of the lead, which played an important role in nearly all aspects of Roman life. Lead isotope ratios were measured for ore samples from German deposits on the eastern side of the Rhine (Siegerland, Lahn-Dill, Ems) and the western side of the Rhine (Eifel, Hunsrück), which contained enough ore reserves to meet the increasing local demand and are believed to have been mined during the Roman period. These data, together with literature data on Mediterranean ore deposits, were used to establish the database. The Mediterranean ore deposits range from Cambrian (high 207Pb/206Pb) to Tertiary (lower 207Pb/206Pb) values. In particular, the Cypriot deposits are younger, while the Spanish deposits fall either with the younger Sardic ores or close to the older Cypriot ores. The lead isotope ratios of most German ore deposits fall between the 208Pb/206Pb vs. 207Pb/206Pb ratios of Sardinia and Cyprus, where the lead isotope signatures of ore deposits from France and Britain are also found.
Over 240 lead objects were measured from Wallendorf (second century BC to first century AD), Dangstetten (15-8 BC), Waldgirmes (AD 1-10), Mainz (AD 1-300), Martberg (first to fourth centuries AD) and Trier (third to fourth centuries AD). Comparing the lead isotope ratios of the lead objects with those of the German ores shows that over 85 percent of the objects derive from Eifel ore deposits, but the Romans also imported lead from the southern Massif Central and from Great Britain. A further topic of this work was the systematic study of the variation of copper isotope ratios in different copper minerals and of the mechanisms which control copper isotope fractionation in ore deposits. For this purpose, copper isotope analyses of a series of hydrothermal copper sulphides and their alteration products were made by multi-collector inductively coupled plasma mass spectrometry. Copper and lead isotope ratios were measured in coexisting phases of chalcopyrite and malachite and of malachite and azurite. No significant fractionation was observed between the malachite-azurite phases, but in the coexisting chalcopyrite-malachite phases malachite always shows a positive fractionation towards heavier isotope values. Zhu et al. and Larson et al. showed that isotopic variations in copper principally reflect mass fractionation in response to low-temperature processes rather than source heterogeneity. Low-temperature ore formation is mostly represented by the weathering of primary sulphide ores to produce secondary carbonate phases and is therefore usually observed at the surface of ore deposits, which was probably removed during the early Bronze Age. Using this concept, copper isotope ratios were measured in some Early Bronze Age copper alloys and in Roman copper alloys; however, no large copper isotope fractionation was observed. Lead and copper isotope ratios were also measured on samples from the Kupferschiefer.
Two profiles were investigated: (1) Sangerhausen, which was not directly influenced by the oxidizing brines of the Rote Fäule, and (2) Oberkatz, where both Rote-Fäule-controlled and structure-controlled mineralization were observed. Results from maturation studies of organic matter suggest that the maximum temperature affecting the Kupferschiefer at Sangerhausen did not exceed 130°C. δ65Cu there ranges between −0.78 and +0.58‰ and shows a positive correlation with copper concentration. The maximum temperature in the Oberkatz Kupferschiefer profile is estimated at around 150°C; δ65Cu in this profile ranges between −0.71 and +0.68‰. The pattern of copper isotope fractionation and copper concentration is the same as in the Sangerhausen profile. The original lead isotope ratios are strongly overprinted by high concentrations of uranium at the bottom of both profiles, causing more radiogenic lead.
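The δ65Cu values quoted above use the standard delta notation for isotope ratios: the per-mil deviation of the sample's 65Cu/63Cu ratio from that of a reference standard (conventionally NIST SRM 976 for copper; the thesis's choice of standard is not stated here):

```latex
\delta^{65}\mathrm{Cu} \;=\;
\left(
  \frac{\left(^{65}\mathrm{Cu}/^{63}\mathrm{Cu}\right)_{\text{sample}}}
       {\left(^{65}\mathrm{Cu}/^{63}\mathrm{Cu}\right)_{\text{standard}}}
  \;-\; 1
\right) \times 1000 \quad (\text{in } \permil)
```

A positive δ65Cu thus means the sample is enriched in the heavy isotope relative to the standard, which is the sense in which malachite "fractionates towards heavier isotope values" above.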
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf and Bot, denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest-fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of the subset relationship between co-inductively defined sets, using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms for lazy functional programming languages.
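The flavour of such a co-inductive inclusion test can be illustrated with a deliberately simplified sketch (not the paper's algorithm): set constants are unions of constructor applications, and a pair of sets is accepted greatest-fixpoint-style by assuming it on the current path. The restriction that each constructor appears in at most one alternative per set sidesteps the tree-automata determinization that makes the general problem EXPTIME-complete; the definitions of Top and Inf below mirror the examples in the abstract:

```python
def subset(s, t, defs, assumed=frozenset()):
    """Co-inductive subset test for recursively defined set constants.
    defs maps a set name to its union of alternatives, each alternative a
    (constructor, [argument set names]) pair.  Simplification: every
    constructor occurs in at most one alternative of each set."""
    if (s, t) in assumed:          # co-inductive step: the pair may be
        return True                # assumed, matching a greatest fixpoint
    assumed = assumed | {(s, t)}
    t_alts = dict(defs[t])
    return all(
        cons in t_alts
        and all(subset(a, b, defs, assumed)
                for a, b in zip(args, t_alts[cons]))
        for cons, args in defs[s]
    )

# Top = Nil | Cons(Top, Top): all (finite or infinite) lists;
# Inf = Cons(Top, Inf): all lists without a terminating Nil.
DEFS = {
    "Top": [("Nil", []), ("Cons", ["Top", "Top"])],
    "Inf": [("Cons", ["Top", "Inf"])],
}
print(subset("Inf", "Top", DEFS), subset("Top", "Inf", DEFS))  # → True False
```

Note the contrast with an inductive reading, under which the self-referential check for Inf ⊆ Top would never terminate; assuming the pending pair is exactly what the greatest-fixpoint semantics licenses.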
In order to investigate the role of neuronal synchronization in perceptual grouping, a new method was developed to record selectively from multiple cortical sites of known functional specificity, as determined by optical imaging of intrinsic signals. To this end, a matrix of closely spaced guide tubes was developed in cooperation with a company providing the essential manufacturing technique RMPD® (Rapid Micro Product Development). The matrix was embedded into a framework of hardware and software that allowed each guide tube to be mapped onto the cortical site an electrode would reach if inserted into that guide tube. With these developments it was possible to determine the functional layout of the cortex by optical imaging and subsequently to perform targeted recordings with multiple electrodes in parallel. The method was tested for accuracy and found to target the electrodes to the desired cortical locations with a precision of 100 µm. Using the developed technique, neuronal activity was recorded from area 18 of anesthetized cats. For stimulation, Gabor patches in different geometrical configurations were placed over the recorded receptive fields, merging into visual objects appropriate for testing the hypothesis of feature binding by synchrony. Synchronization strength was measured by the height of the centre peaks of the cross-correlations. All pairwise synchronizations were summarized in a correlation index, defined as the mean difference in correlation strength between conditions in which recording sites should or should not fire in synchrony according to the binding hypothesis. The correlation index deviated significantly from zero for several of these configurations, further supporting the hypothesis that synchronization plays an important role in the process of perceptual grouping.
Furthermore, direct evidence was found for the independence of synchronization strength from the neuronal firing rate and for neurons that dynamically change the ensemble in which they participate. In parallel to the experimental approach, mechanisms of oscillatory long-range synchronization were studied by network simulations. To this end, a biologically plausible model was implemented using pyramidal and basket cells with Hodgkin-Huxley-like conductances. Several columns were built from these cells, and intra- and inter-columnar connections were modelled on physiological data. When activated by independent Poisson spike trains, the columns showed oscillatory activity in the gamma frequency range. Correlation analysis revealed the tendency to locally synchronize the oscillations among the columns, but a rapid phase transition occurred with increasing cortical distance. This finding suggests that the present view of the inter-columnar connectivity does not fully explain oscillatory long-range synchronization and predicts that other processes such as top-down influences are necessary for long-range synchronization phenomena.
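The cross-correlation measure used above can be illustrated with toy spike trains. The following sketch (binary 1-ms bins; all numbers made up, not from the experiments) computes a cross-correlogram and shows a large centre peak for co-firing trains and none for independent ones:

```python
def cross_correlogram(a, b, max_lag):
    """Count coincidences of spikes in b at each lag relative to spikes
    in a; a and b are equally long binary (0/1) bin sequences."""
    n = len(a)
    return {lag: sum(a[t] * b[t + lag]
                     for t in range(max(0, -lag), min(n, n - lag)))
            for lag in range(-max_lag, max_lag + 1)}

# two trains firing together every 10 bins, and one independent train
sync1 = [1 if t % 10 == 0 else 0 for t in range(1000)]
sync2 = [1 if t % 10 == 0 else 0 for t in range(1000)]
indep = [1 if t % 10 == 3 else 0 for t in range(1000)]

cc_sync = cross_correlogram(sync1, sync2, 5)
cc_indep = cross_correlogram(sync1, indep, 5)
print(cc_sync[0], cc_indep[0])   # centre-peak height: large vs. zero
```

A correlation index in the spirit of the abstract would then compare such centre-peak heights between conditions predicted to synchronize and conditions predicted not to.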
Systemically administered chemotherapeutics are often ineffective in the treatment of diseases of the central nervous system (CNS). One of the reasons for this is insufficient drug transport into the brain due to the blood-brain barrier. One strategy for non-invasive drug delivery to the brain is the use of nanoparticles. Poly(butyl cyanoacrylate) nanoparticles coated with polysorbate 80 (Tween® 80) can cross the blood-brain barrier and thus deliver drugs into the brain. If the blood-brain barrier is partially damaged by a brain tumour, thereby increasing its permeability at the tumour site, nanoparticles can additionally reach the tumour via the so-called EPR effect. In the first part of the present work, the drug loading of the nanoparticles was optimized by varying the formulation parameters, with the aim of developing a formulation with higher efficacy for the therapy of glioblastoma-bearing rats. In addition, the potential of doxorubicin bound to poly(butyl cyanoacrylate) nanoparticles coated with "stealth agents" was investigated for the chemotherapy of brain tumours. In the second part of this study, the brain and body distribution in healthy and in glioblastoma-101/8-bearing rats was investigated after i.v. administration of poly(butyl-2-cyano[3-14C]acrylate) nanoparticles coated with polysorbate 80, and of such particles additionally loaded with doxorubicin (DOX-14C-PBCA + PS). The standard formulation of doxorubicin poly(butyl cyanoacrylate) nanoparticles (DOX-NP) was prepared by anionic polymerization of butyl cyanoacrylate in the presence of DOX. In addition, different DOX-NP formulations were produced by modifying the preparation procedure. The therapeutic potential of the formulations was investigated in rats with glioblastoma 101/8 transplanted into the brain.
Besides polysorbate 80, poloxamer 188 and poloxamine 908 were used as coating materials. The results showed that the standard formulation coated with polysorbate 80 was the most effective. The higher efficacy of DOX-NP+PS 80 could be explained by the ability of these carriers to transport the drug across the intact blood-brain barrier during an early stage of tumour development, by a receptor-mediated mechanism activated by the PS 80 coating. Our results also show that poloxamer 188 and poloxamine 908 considerably improve the anti-tumour effect of DOX-PBCA. The anti-tumour effect of these formulations could possibly be attributed to the EPR effect. It is known that tumoural drug uptake via the EPR effect is more pronounced for long-circulating drug carriers, so that more drug reaches the tumour through the tumour-damaged blood-brain barrier. Uncoated nanoparticles, polysorbate 80-coated nanoparticles, and doxorubicin-loaded, polysorbate 80-coated nanoparticles were injected into healthy and tumour-bearing rats. These nanoparticle preparations showed different body distributions in the rats. Uncoated nanoparticles accumulated in the organs of the reticuloendothelial system (RES). Coating with PS 80 reduced nanoparticle uptake in the liver and spleen, while the nanoparticle concentration in the lung increased. These observations indicate that the modification of the nanoparticle surface properties by the surfactant leads to interactions with different opsonins, which facilitates uptake of the nanoparticles by different phagocytosing cells. In contrast, the uptake of DOX-loaded, PS 80-coated nanoparticles was similar to that of the uncoated particles.
Compared with healthy rats, the concentration of nanoparticles in the brain of tumour-bearing rats was significantly higher 10 days after tumour implantation. In the presence of the glioblastoma, the transport of nanoparticles into the brain is the result of several factors: in addition to the ability of PS 80-coated nanoparticles to cross the blood-brain barrier, these carriers extravasate through the tumour-leaky endothelium due to the EPR effect. The concentration of PS 80-coated [14C]-PBCA NP in the glioblastoma was significantly higher than that of DOX [14C]-PBCA NP. This phenomenon can be explained by the different microenvironments of cerebral intra-tumoural and intact brain tissue. In particular, the positive charge of the tumoural regions and the positive charge of the DOX [14C]-PBCA NP may adversely influence each other. Nevertheless, the doxorubicin concentrations in the glioblastoma were sufficient to enable a therapeutic effect.
Group III presynaptic metabotropic glutamate receptors (mGluRs) play a central role in regulating presynaptic activity through G-protein effects on ion channels and signal-transducing enzymes. Like all class C G-protein coupled receptors, mGluR8 has an extended intracellular C-terminal domain (CTD) presumed to allow for modulation of downstream signaling. To elucidate the function and modulation of mGluR8, yeast two-hybrid screens of an adult rat brain cDNA library were performed with the CTDs of mGluR8a and 8b (mGluR8-C) as baits. Different components of the sumoylation cascade (ube2a, sumo-1, Pias1, Piasγ and Piasxβ) and some other proteins were identified as mGluR8-interacting proteins. Binding assays using recombinant GST-fusion proteins confirmed that Pias1 interacts not only with mGluR8-C but with all group III mGluR CTDs. Pias1 binding to mGluR8-C required a region N-terminal to a consensus sumoylation motif and was not affected by arginine substitution of the conserved lysine K882 within this motif. Co-transfection of fluorescently tagged mGluR8a-C, sumo-1 and enzymes of the sumoylation cascade into HEK 293 cells showed that mGluR8a-C can be sumoylated in cells. Arginine substitution of lysine K882 within the consensus sumoylation motif, but not of other conserved lysines within the CTD, abolished in vivo sumoylation. The results are consistent with post-translational sumoylation providing a novel mechanism of group III mGluR regulation.
Chemokines play a key role in the cellular infiltration of inflamed tissue. They are released by a wide variety of cell types during the initial phase of the host response to injury, allergens, antigens, or invading microorganisms, and selectively attract leukocytes to inflammatory foci, inducing both migration and activation. Monocyte chemoattractant protein-1 (MCP-1), a member of the CC chemokine superfamily, functions in attracting monocytes, T lymphocytes, and basophils to sites of inflammation. MCP-1 is produced by monocytes, fibroblasts, vascular endothelial cells and smooth muscle cells in response to various stimuli such as tumour necrosis factor-α (TNF-α), interferon-γ (IFN-γ), and interleukin-1β (IL-1β). It also plays an important role in the pathogenesis of chronic inflammation, and overexpression of MCP-1 has been implicated in diseases including glomerulonephritis and rheumatoid arthritis. Oligonucleotide-directed triple helix formation offers a means to target specific sequences in DNA and interfere with gene expression at the transcriptional level. Triple helix-forming oligonucleotides (TFOs) bind to homopurine/homopyrimidine sequences, forming a stable, sequence-specific complex with the duplex DNA. Purine-rich sequences are frequent in gene regulatory regions, and TFOs directed to promoter sequences have been shown to prevent binding of transcription factors and inhibit transcription initiation and elongation. Exogenous TFOs that bind homopurine/homopyrimidine DNA sequences and form triple helices can be rationally designed, while the intracellular delivery of single-stranded RNA TFOs has not been studied in detail before. In this study, expression vectors were constructed which directed transcription of either a 19 nt triplex-forming pyrimidine CU-TFO sequence targeting the human MCP-1 gene or two different 19 nt GU- or CA-control sequences, respectively, together with the vector-encoded hygromycin resistance mRNA as one fusion transcript.
HEK 293 cells were stably transfected with these vectors and several TFO and control cell lines were generated. Functionally relevant triplex formation of a TFO with a corresponding 19 bp GC-rich AP-1/SP-1 site of the human MCP-1 promoter was shown. Binding of the synthetic 19 nt CU-TFO to the MCP-1 promoter duplex was verified by triplex blotting at pH 6.7. Underlining binding specificity, control sequences, including the GU- and CA-sequences, a TFO containing one single mismatch and a MCP-1 promoter duplex containing two mismatches, did not participate in triplex formation. Using a magnetic capture technique with streptavidin microbeads, it was verified that at pH 7.0 the 19 nt TFO embedded in a 1.1 kb fusion transcript binds to a plasmid-encoded MCP-1 promoter target duplex three times more strongly than the controls. Finally, cell culture experiments revealed 76 ± 10.2% inhibition of MCP-1 protein secretion in TNF-α-stimulated CU-TFO-harboring cell lines and up to 88% after TNF-α and IFN-γ costimulation in comparison to controls. Expression of interleukin-8 (IL-8), a TNF-α-inducible control gene, was not affected by the CU-TFO, demonstrating both highly specific and effective chemokine gene repression. Furthermore, another chemokine target, regulated upon activation, normal T cell expressed and secreted (RANTES), which plays an essential role in inflammation by recruiting T lymphocytes, macrophages and eosinophils to inflammatory sites, was analysed using the triplex approach. A 28 nt TFO was designed targeting the murine RANTES gene promoter, and gel mobility shift assays demonstrated that the phosphodiester TFO formed a sequence-specific triplex with the double-stranded target DNA with a Kd of 2.5 × 10-7 M. It was analysed whether RANTES expression could be inhibited at the transcriptional level by testing the TFO in two different cell lines: T helper-1 lymphocytes and brain microvascular endothelial cells (bEnd3 cells).
Although sequence-specific binding of the TFO was detectable in the gel shift assays, no inhibitory effect of the exogenously added, phosphorothioate-stabilised TFO on endogenous RANTES gene expression was visible. Additionally, the small interfering RNA (siRNA) approach was tested as another strategy to inhibit expression of the pro-inflammatory chemokines MCP-1 and RANTES. Two different methods were pursued: transient transfection with vector-derived and with synthetic siRNA. The vector pSUPER containing the siRNA coding sequence was used to suppress endogenous MCP-1 in HEK 293 cells. An empty vector without the RNA sequence served as a control. Inhibition due to the siRNA was measured in stimulated and unstimulated cells. In TNF-α-stimulated cells MCP-1 protein synthesis was decreased by 35 ± 11% after siRNA transfection. Using a synthetic double-stranded siRNA, the TNF-α-induced MCP-1 protein secretion could be successfully inhibited by about 62.3 ± 10.3% in HEK 293 cells, indicating that the siRNA is functional in suppressing chemokine expression in these cells. The siRNA approach targeting murine RANTES in Th1 cells and bEnd3 cells revealed no inhibition of endogenous gene expression. Gene therapy approaches rely on efficient transfer of genes to the desired target cells. A wide variety of viral and nonviral vectors have been developed and evaluated for their efficiency of transduction, sustained expression of the transgene, and safety. Among them, lentiviruses have been widely used for gene therapy applications. In order to improve the delivery of TFOs or siRNAs into the target cells, cloning of the lentiviral transfer vector SEW and the production of lentiviral particles by transient transfection were performed, with the aim of generating lentiviral vector-derived TFOs in further experiments. Here, Th1 cells were transduced with infectious lentiviral particles and the transduction efficiency was measured.
Transduction efficiencies higher than 82% could be achieved using the lentiviral vector SEW, opening up optimal possibilities for the TFO and siRNA approaches.
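Since TFO design starts from homopurine/homopyrimidine target stretches, locating candidate sites in a promoter sequence can be sketched in a few lines; the promoter fragment below is hypothetical, not the actual MCP-1 sequence:

```python
import re

def find_homopurine_sites(seq, min_len=19):
    """Return (start, end, subsequence) for runs consisting solely of
    purines (A/G) of at least min_len bases -- candidate TFO targets,
    since TFOs bind homopurine/homopyrimidine duplex stretches."""
    return [(m.start(), m.end(), m.group())
            for m in re.finditer(r"[AG]{%d,}" % min_len, seq.upper())]

# hypothetical promoter fragment, not the real MCP-1 promoter
promoter = "TTCGAAGGAGGAGAAGGAGGGAGAGTTACCGT"
print(find_homopurine_sites(promoter))
```

A 19 nt threshold mirrors the TFO length used in the study; real target selection would additionally weigh GC content and binding conditions such as pH.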
Lesion of the rat entorhinal cortex denervates the outer molecular layer of the fascia dentata, followed by layer-specific axonal sprouting of uninjured fibers in the denervated zone. One of the candidate molecules regulating the laminar-specific sprouting response in the outer molecular layer is the transmembrane chondroitin sulfate proteoglycan NG2. NG2 is found in glial scars and has been suggested to impede axonal regeneration following injury of the spinal cord. The present study addressed the question of whether NG2 could also regulate axonal growth in denervated areas of the brain. Therefore, (1) changes in NG2 mRNA and NG2 protein levels, (2) the cellular and the extracellular localisation of the molecule, (3) the identity of NG2-expressing cells, and (4) the generation of NG2-positive cells were studied in the rat fascia dentata before and following entorhinal deafferentation. Laser microdissection was employed to selectively harvest the denervated molecular layer and combined with quantitative reverse transcription-PCR to measure changes in NG2 mRNA amounts (6 h, 12 h, 2 d, 4 d, 7 d post lesion). The study revealed increases of NG2 mRNA at day 2 (2.5-fold) and day 4 (2-fold) post lesion. Immunocytochemistry was used to detect changes in NG2 protein distribution (1 d, 4 d, 7 d, 10 d, 14 d, 30 d, 6 months post lesion). NG2 staining was increased in the denervated outer molecular layer at 1 day post lesion, reached a maximum at 10 days post lesion, and returned to control levels within 6 months. Interestingly, the accumulation of NG2 protein was strongly restricted to the denervated outer molecular layer, forming a border to the unaffected inner molecular layer. Using electron microscopy, NG2 immunoprecipitate was localized not only on glial surfaces and in the extracellular matrix but also in the vicinity of neuronal profiles, indicating that NG2 is secreted following denervation.
Double-labelling of NG2-immunopositive cells with markers for astrocytes, microglia/macrophages, and oligodendrocytes suggested that NG2 cells are a distinct glial subpopulation before and after entorhinal deafferentation. Bromodeoxyuridine labeling revealed that some of the NG2-positive cells are generated after the lesion. Taken together, the data revealed a layer-specific upregulation of NG2 in the denervated outer molecular layer of the fascia dentata that coincides with the sprouting response of uninjured fibers. This suggests that NG2 could regulate lesion-induced axonal growth in denervated areas of the brain.
Results from various theoretical approaches and ideas presented at this exciting meeting (summary talk at the 5th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP - 2005)) are reviewed. I also point towards future directions, in particular hydrodynamic behaviour induced by jets traveling through the quark-gluon plasma, which might be worth looking at in more detail.
In this dissertation a non-deterministic lambda-calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstractions, sequential evaluation and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or side-effecting input/output. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. Hence, in section 3.2 a further calculus, lambda-Approx, has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx agrees. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. Using this (extended) method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus.
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without big hurdles. However, recursive bindings cannot be represented simply by a fixed-point combinator like Y, so further investigations are necessary.
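Call-by-need sharing, the central mechanism of the lambda-ND calculus, can be illustrated outside the calculus by memoized thunks. The Python sketch below (all names illustrative, not part of the thesis) shows that a let-bound subexpression is evaluated at most once even when its value is demanded twice:

```python
class Thunk:
    """Delays a computation and caches its result: evaluated at most once."""
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:        # only the first force evaluates
            self.value = self.compute()
            self.evaluated = True
            self.compute = None       # drop the closure, keep the value
        return self.value

calls = []
t = Thunk(lambda: calls.append("eval") or 42)   # 'let x = e in x + x'
result = t.force() + t.force()                  # the body demands x twice
print(result, len(calls))                       # prints: 84 1
```

Under non-determinism this caching becomes observable: if the bound expression could evaluate to either of two values, sharing forces both occurrences of the variable to agree, which is exactly the distinction the dissertation's equality notion must respect.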
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
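A minimal simulation can make the strategy class concrete. The sketch below (hypothetical, not from the paper) runs store-and-forward routing on a directed path, where the queueing strategy may rank packets only by graph/path/progress information such as the remaining distance, never by timestamps:

```python
from collections import defaultdict

def simulate(path_len, injections, prefer, steps):
    """Store-and-forward routing on the directed path 0 -> 1 -> ... ->
    path_len; each edge forwards one packet per step.  A packet is the
    pair (dest, progress), so `prefer` can use exactly the information
    the strategy class allows: graph, path and progress -- no timestamps."""
    queues = defaultdict(list)
    max_q = 0
    for t in range(steps):
        for src, dst in injections.get(t, []):
            queues[src].append((dst, 0))
        moved = defaultdict(list)
        for node, q in list(queues.items()):
            if not q:
                continue
            q.sort(key=prefer)
            dest, prog = q.pop(0)        # edge forwards its preferred packet
            if node + 1 < dest:          # packets reaching dest are absorbed
                moved[node + 1].append((dest, prog + 1))
        for node, pkts in moved.items():
            queues[node].extend(pkts)
        max_q = max([max_q] + [len(q) for q in queues.values()])
    return max_q

# nearest-to-go: prefer the packet with the smallest destination index
inj = {0: [(0, 3), (0, 3)], 1: [(0, 3)]}
print(simulate(3, inj, prefer=lambda pkt: pkt[0], steps=8))
```

An adversary in the model would choose `injections` against the strategy; the exponential lower bounds of the paper concern worst-case sequences of exactly this kind.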
Jet physics in ALICE
(2005)
This work assesses the performance of the ALICE detector for the measurement of high-energy jets at mid-pseudo-rapidity in ultra-relativistic nucleus-nucleus collisions at the LHC and their potential for the characterization of the partonic matter created in these collisions. In our approach, jets at high energy with E_{T} > 50 GeV are reconstructed with a cone jet finder, as is typically done for jet measurements in hadronic collisions. Within the ALICE framework we study its capabilities for measuring high-energy jets and quantify obtainable rates and the quality of reconstruction, both in proton-proton and in lead-lead collisions at LHC conditions. In particular, we address whether modification of the jet fragmentation in the charged-particle sector can be detected within the high particle-multiplicity environment of central lead-lead collisions. We treat these topics comparatively in view of an EMCAL proposed to complement the central ALICE tracking detectors. The main activities of the thesis are the following: a) determination of the potential for exclusive jet measurements in ALICE; b) determination of jet rates that can be acquired with the ALICE setup; c) development of a parton-energy-loss model; d) simulation and study of the energy-loss effect on jet properties.
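As a rough illustration of the reconstruction step, a seeded cone jet finder can be sketched as follows. This is a toy version: the thresholds are illustrative, and the absence of split/merge steps and cone-axis iteration are simplifications, not the ALICE implementation:

```python
import math

def cone_jets(particles, R=0.7, seed_et=5.0, min_jet_et=50.0):
    """Minimal seeded-cone jet finder over (eta, phi, Et) tuples: take
    seeds in decreasing Et, sum the Et inside a cone of radius R in
    eta-phi space, and accept cones above the jet threshold."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    used, jets = set(), []
    order = sorted(range(len(particles)), key=lambda i: -particles[i][2])
    for s in order:                       # highest-Et seeds first
        eta_s, phi_s, et_s = particles[s]
        if s in used or et_s < seed_et:
            continue
        cone = [i for i in range(len(particles)) if i not in used and
                math.hypot(particles[i][0] - eta_s,
                           dphi(particles[i][1], phi_s)) < R]
        et = sum(particles[i][2] for i in cone)
        if et >= min_jet_et:              # e.g. E_T > 50 GeV as in the analysis
            jets.append((eta_s, phi_s, et))
            used.update(cone)
    return jets

# a hard two-particle cluster plus one soft background particle
parts = [(0.1, 1.0, 40.0), (0.2, 1.1, 25.0), (1.5, 3.0, 2.0)]
print(cone_jets(parts))
```

In the heavy-ion environment the crucial complication, addressed in the thesis, is the large underlying-event Et inside the cone, which has to be estimated and subtracted before such a threshold is meaningful.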
The results presented here strongly indicate that ubiquitination of the recombinant human alpha1 GlyR at the plasma membrane of Xenopus oocytes is involved in receptor internalisation and degradation. Ubiquitination of the human alpha1 GlyR has been demonstrated by radio-iodination of plasma membrane-bound alpha1 GlyRs, whose subunits differed in molecular weight by an additional 7, 14 or 21 kDa, corresponding to the molecular weights of one, two and three conjugated ubiquitin molecules, respectively, and by co-isolation of the non-tagged human alpha1 GlyR through hexahistidyl-tagged ubiquitin. Ubiquitin-conjugated GlyRs were prominent at the plasma membrane but could hardly be detected in total cell homogenates, indicating that ubiquitination takes place exclusively at the plasma membrane. Ubiquitination of the alpha1 GlyR at the plasma membrane was no longer detectable when the ten lysine residues of the cytoplasmic loop between transmembrane segments M3 and M4 were replaced by arginines. Despite this, proteolytic cleavage continued to take place to the same extent as with the wild-type alpha1 GlyR, suggesting that removal of GlyRs from the plasma membrane and routing to lysosomes for degradation were not dependent on ubiquitination. Replacing the tyrosine at position 339, speculated to be part of an additional endocytosis motif, also did not lead to a significant reduction of cleavage of the GlyR alpha1 subunits. However, a mutant lacking both the ubiquitination sites and 339Y was significantly less processed. These results may suggest that the GlyR alpha1 subunit harbors at least two endocytosis motifs, which may act independently to regulate the density of alpha1 GlyRs. Apparently, each of the two signals may be capable of compensating entirely for the loss of the other.
Part two of this Dissertation demonstrates that the correct topology of the glycine receptor alpha1 subunit depends critically on six positively charged residues within a basic cluster, RFRRKRR, located in the large cytoplasmic loop following the C-terminal end of M3. Neutralization of one or more charges of this cluster, but not of other charged residues in the M3-M4 loop, led to aberrant translocation of the M3-M4 loop into the endoplasmic reticulum lumen. However, when two of the three basic charges located in the ectodomain linking M2 and M3 were neutralized in addition to two charges of the basic cluster, endoplasmic reticulum disposition of the M3-M4 loop was prevented. We conclude that a high density of basic residues C-terminal to M3 is required to compensate for the presence of positively charged residues in the M2-M3 ectodomain, which otherwise impair correct membrane integration of the M3 segment. Part three of this Dissertation describes my contribution (blue native PAGE analysis of metabolically labeled alpha7 and 5HT3A receptors and the examination of the glycosylation state of metabolically labeled alpha7 subunits) to a study of the limited assembly capacity of Xenopus oocytes for nicotinic alpha7 subunits. While 5HT3A subunits combined efficiently into pentamers, alpha7 subunits existed in various assembly states including trimers, tetramers, pentamers, and aggregates. Only alpha7 subunits that completed the assembly process to homopentamers acquired complex-type carbohydrates and appeared at the cell surface. We conclude that Xenopus oocytes have a limited capacity to guide the assembly of alpha7 subunits, but not 5HT3A subunits, into homopentamers. Accordingly, ER retention of imperfectly assembled alpha7 subunits, rather than inefficient routing of fully assembled alpha7 receptors to the cell surface, limits the surface expression levels of alpha7 nicotinic acetylcholine receptors.
Part four of this Dissertation describes my contribution (the biochemical analysis of the human P2X2 and P2X6 subtypes) to studies on the quaternary structure of P2X receptors. Armaz Aschrafi, the main author of the paper, showed that, subsequent to isolation under non-denaturing conditions from Xenopus oocytes, the His-rP2X2 protein migrated on blue native PAGE predominantly in an aggregated form. The only discrete protein band detectable could be assigned to homotrimers of the His-rP2X2 subunit. Because of the exceptional assembly behaviour of the rP2X2 protein compared to the rP2X1, rP2X3, rP2X4 and rP2X5 proteins, its human orthologue was investigated in the same manner. In contrast to rP2X2 subunits, hP2X2 subunits migrated under virtually identical conditions in a single defined assembly state, which could be clearly assigned to a trimer. P2X6 subunits represent the sole P2X subtype that is unable to form functional homomeric receptors in Xenopus oocytes. Blue native PAGE analysis of metabolically labeled hP2X6 receptors and examination of their glycosylation state revealed that hP2X6 subunits form tetramers and aggregates that are not exported to the plasma membrane of Xenopus oocytes.
In the present work, the Heidelberg electron beam ion trap (EBIT) at the Max-Planck-Institut für Kernphysik (MPIK) has been used to produce and trap highly charged argon ions and to study their magnetic dipole (M1) forbidden transitions. These transitions are of relativistic origin and hence provide unique possibilities for precise studies of relativistic effects in many-electron systems. In this way, the energies of the 2P3/2 - 2P1/2 transition in the 1s2 2s2 2p configuration of Ar13+ and of the 3P1 - 3P2 transition in the 1s2 2s 2p configuration of Ar14+ were compared for the 36Ar and 40Ar isotopes. The observed isotopic effect confirmed the relativistic nuclear recoil corrections due to the finite nuclear mass in a recent calculation by Tupitsyn [TSC03], in which major inconsistencies of earlier theoretical methods were corrected for the first time. The finite-mass, or recoil, effect, composed of the normal mass shift (NMS) and the specific mass shift (SMS), was corrected for relativistic contributions, the RNMS and RSMS. The present experimental results show that the recoil effects at the Breit level are indeed very important, as are the effects of the correlated relativistic dynamics in a many-electron ion.
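For orientation, the size of the purely non-relativistic normal mass shift between the two isotopes can be estimated in a few lines. This is a back-of-envelope sketch only: the transition energy of roughly 2.81 eV assumed for the Ar13+ line is illustrative, and the thesis compares the measurement with the full relativistic recoil calculation rather than with this textbook scaling:

```python
# Non-relativistic NMS estimate of the 36Ar/40Ar isotope shift:
# level energies scale with the reduced mass, i.e. by M / (M + m_e).
M_E = 5.48579909e-4                     # electron mass in atomic mass units
M36, M40 = 35.96754511, 39.96238312    # isotope masses in u

def nms_isotope_shift(e_transition_ev, m_light, m_heavy):
    """Return E(heavy isotope) - E(light isotope) from the NMS scaling."""
    scale = lambda m: m / (m + M_E)
    return e_transition_ev * (scale(m_heavy) - scale(m_light))

# assuming ~2.81 eV for the transition energy (illustrative value)
print(nms_isotope_shift(2.81, M36, M40))   # a few microelectronvolts
```

The resulting shift of order microelectronvolts shows why resolving the isotopic effect is demanding and why the relativistic corrections (RNMS, RSMS) matter at the achieved precision.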
We calculate thermal photon and neutral pion spectra in ultrarelativistic heavy-ion collisions in the framework of three-fluid hydrodynamics. Both spectra are quite sensitive to the equation of state used. In particular, within our model, recent data for S + Au at 200 AGeV can only be understood if a scenario with a phase transition (possibly to a quark-gluon plasma) is assumed. Results for Au+Au at 11 AGeV and Pb + Pb at 160 AGeV are also presented.
Different numerical approaches and algorithms arising in the context of modelling of cellular tissue evolution are discussed in this thesis. Being suited in particular to off-lattice agent-based models, the numerical tool of three-dimensional weighted kinetic and dynamic Delaunay triangulations is introduced and discussed for its applicability to adjacency detection. As there exists no implementation of a code that incorporates all necessary features for tissue modelling, algorithms for incremental insertion or deletion of points in Delaunay triangulations and the restoration of the Delaunay property for triangulations of moving point sets are introduced. In addition, the numerical solution of reaction-diffusion equations and their connection to agent-based cell tissue simulations is discussed. In order to demonstrate the applicability of the numerical algorithms, biological problems are studied for different model systems: For multicellular tumour spheroids, the weighted Delaunay triangulation provides a great advantage for adjacency detection, but due to the large cell numbers the model used for the cell-cell interaction has to be simplified to allow for a numerical solution. The agent-based model reproduces macroscopic experimental signatures, but some parameters cannot be fixed with the data available. A much simpler, but in key properties analogous, continuum model based on reaction-diffusion equations is likewise capable of reproducing the experimental data. Both modelling approaches make differing predictions on non-quantified experimental signatures. In the case of the epidermis, a smaller system is considered which enables a more complete treatment of the equations of motion. In particular, a control mechanism of cell proliferation is analysed. Simple assumptions suffice to explain the flow equilibrium observed in the epidermis. In addition, the effect of adhesion on the survival chances of cancerous cells is studied. 
For some regions in parameter space, stochastic effects may completely alter the outcome. The findings stress the need to establish a defined experimental model in order to fix the unknown model parameters and to rule out alternative models.
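The Delaunay adjacency criterion used for neighbour detection (two cells are adjacent if they share a triangle whose circumcircle contains no other cell centre) can be sketched in 2D by brute force. This illustrative O(n^4) version is far from the incremental, weighted 3D algorithms developed in the thesis; the points are made up:

```python
from itertools import combinations
import math

def delaunay_edges(points):
    """Brute-force 2D Delaunay adjacency via the empty-circumcircle test."""
    def circumcircle(a, b, c):
        ax, ay = a; bx, by = b; cx, cy = c
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:
            return None                  # degenerate (collinear) triple
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        return (ux, uy), math.hypot(ax - ux, ay - uy)

    edges = set()
    for i, j, k in combinations(range(len(points)), 3):
        cc = circumcircle(points[i], points[j], points[k])
        if cc is None:
            continue
        (ux, uy), r = cc
        # keep the triangle only if no other point lies inside its circumcircle
        if all(math.hypot(px - ux, py - uy) >= r - 1e-9
               for idx, (px, py) in enumerate(points) if idx not in (i, j, k)):
            edges.update({(i, j), (i, k), (j, k)})
    return edges

pts = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 3.0)]
print(sorted(delaunay_edges(pts)))   # hull edges plus exactly one diagonal
```

For cell tissues the same criterion, generalized to weighted points in 3D, yields the neighbour lists that determine which cells exert contact forces on each other.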
Mobile telephony and mobile internet are driving a new application paradigm: location-based services (LBS). Based on a person's location and context, personalized applications can be deployed. Thus, internet-based systems will continuously collect and process the location, in relation to a personal context, of an identified customer. One of the challenges in designing LBS infrastructures is to design for economic viability while preserving the privacy of the subjects whose location is tracked. This presentation will explain typical LBS scenarios and the resulting new privacy challenges and user requirements, and will raise economic questions about privacy design. The topics will be connected to "mobile identity" to derive which particular identity management issues can be found in LBS.
In this paper, I examine the potential of mobile alerting services to empower investors to react quickly to critical market events. To this end, an analysis of short-term (intraday) price effects is performed. I find abnormal returns to company announcements which are completed within a timeframe of minutes. To make use of these findings, these price effects are predicted using pre-defined external metrics and different estimation methodologies. Compared to previous research, the results provide support that artificial neural networks and multiple linear regression are good estimation models for forecasting price effects on an intraday basis as well. As most of the price-effect magnitude and effect delay can be estimated correctly, it is demonstrated how a suitable mobile alerting service combining a low level of user intrusiveness and timely information supply can be designed.
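Of the two estimation models, multiple linear regression is easy to sketch from first principles. The snippet below fits coefficients via the normal equations on made-up announcement metrics; the features and numbers are purely illustrative, not the paper's data:

```python
# Ordinary least squares via the normal equations (X^T X) b = X^T y,
# solved with Gaussian elimination -- a toy stand-in for the
# multiple-linear-regression estimator used in the paper.

def fit_ols(X, y):
    """Fit y ~ b0 + b1*x1 + ... and return the coefficient vector."""
    rows = [[1.0] + list(x) for x in X]           # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                            # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):                  # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# hypothetical pre-announcement metrics -> abnormal return in percent
X = [(1.0, 0.2), (2.0, 0.1), (3.0, 0.4), (4.0, 0.3)]
y = [2.3, 4.4, 6.1, 8.2]                          # exactly 0.5 + 2*x1 - x2
print(fit_ols(X, y))                              # recovers [0.5, 2.0, -1.0]
```

A neural-network estimator would replace this closed-form fit with an iteratively trained nonlinear model; the paper compares both on the same feature sets.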
My graduate thesis is on the "Structural studies of membrane transport proteins". Transporters are membrane proteins that have multiple membrane-spanning α-helices. They are dynamic and diverse proteins, undergoing large conformational changes and transporting a wide range of substrates. Based on their energy source they can be classified into primary and secondary transport systems. Primary transport systems are driven by the use of chemical (ATP) or light energy, while secondary transporters utilize ion gradients to transport substrates. I began my PhD dissertation on secondary transporters with two-dimensional crystallization and electron crystallographic analysis, and recently my focus has also shifted towards 3D crystallization. The following projects constitute my PhD thesis: 1) 2D crystallization of MjNhaP1 and pH-induced structural change: MjNhaP1, a Na+/H+ antiporter that is regulated by pH, has been implicated in the homeostasis of H+ and Na+ in Methanococcus jannaschii, a hyperthermophilic archaeon that grows optimally at 85°C. MjNhaP1 was cloned and expressed in E. coli. Two-dimensional crystals were obtained from purified protein at pH 4. Electron cryo-microscopy yielded an 8 Å projection map. The map of MjNhaP1 shows elongated densities in the centre of the dimer and a cluster of density peaks on either side of the dimer core, indicative of a bundle of 4-6 membrane-spanning helices. The effect of pH on the structure of MjNhaP1 was studied in situ in 2D crystals, revealing a major change in density within the helix bundle relative to the dimer interface. This change occurred at pH 6 and above. The two conformations at low and high pH most likely represent the closed and open states of the antiporter, respectively. This is the first instance in which a conformational change associated with the regulation of a secondary transporter has been mapped structurally.
Reconstruction of a 3D map and a high-resolution structure by X-ray crystallography will be necessary to understand the mechanism of ion transport and regulation by pH. 2) 2D crystallization of the proline transporter: The proline transporter (PutP) from E. coli belongs to the sodium-solute symporter family, which includes the disease-related sodium-dependent glucose and iodide transporters in humans. Sodium and proline are co-transported with a stoichiometry of 1:1. Purified PutP was reconstituted to yield 2D crystals that were hexagonal in nature. The 2D crystals had a tendency to stack, indicating a propensity to form 3D crystals. A projection map of PutP from negatively stained crystals showed a trimeric arrangement of the protein. Other members of the SSF family have been shown to be monomers. My analysis of the oligomeric state of PutP in detergent by blue native gel indicates a monomer in detergent solution. It is likely that PutP can function as a monomer, but at higher concentration and in the lipid bilayer it tends to form a trimer. 3) Oligomeric state and crystallization of the carnitine transporter from E. coli: The E. coli carnitine transporter (CaiT) belongs to the BCCT (betaine, carnitine and choline transporter) superfamily, which transports molecules with quaternary amine groups. CaiT is predicted to span the membrane 12 times and acts as an L-carnitine/γ-butyrobetaine exchanger. Unlike other members of this transporter family, it does not require an ion gradient and does not respond to osmotic stress. Over-expression of the protein yielded ~2 mg of protein/L of culture. The structure and oligomeric state of the protein were analyzed in detergent and lipid bilayers. Blue native gel electrophoresis indicated that CaiT is a trimer in detergent solution. Gel filtration and cross-linking studies further support this. Reconstitution of CaiT into lipid bilayers resulted in 2D crystals. Analysis of negatively stained 2D crystals confirmed that CaiT is a trimer in the membrane.
Initial 3D crystallization trials have been successful; currently the crystals diffract to 6 Å and are being improved. 4) Monomeric porin OmpG: OmpG is a bacterial outer-membrane β-barrel protein. It is monomeric, and its size (33 kDa) makes it a prime candidate for structure determination using the recently developed method of solid-state NMR (work in collaboration with Prof. Hartmut Oskinat, FMP, Berlin). A long-term aim is to study porins as templates for designing nanopores for DNA sequencing and identification. I have expressed OmpG in inclusion bodies and refolded it into a functional form using detergent, with an efficiency of >90%. OmpG was then crystallized in 2D, yielding an 8 Å projection map whose structure was similar to that of the native protein. In addition, these crystals were used for structure determination by solid-state NMR. An initial spectrum of isotopically labeled OmpG has allowed the identification of specific amino acid residues, including threonine and proline. Additionally, I obtained 3D crystals in detergent that diffract to 5.5 Å and are being improved.
Protein-protein interactions within the plane of cellular membranes play a key role for many biological processes and in particular for transmembrane signaling. A prominent example is the ligand-induced crosslinking of cytokine receptors, where 3-dimensional cytokine binding followed by 2-dimensional interaction between the receptor subunits has been recognized to be important for regulating signaling specificity. The fundamental importance of such coupled interactions for cell-surface receptor activation has stimulated numerous theoretical studies, which have hardly been confirmed experimentally. An experimental approach was developed to measure the interactions and real-time kinetics of type I interferon (IFN)-induced assembly between the interferon receptor subunits ifnar2 and ifnar1 on membranes, and determinants of the 2-dimensional interactions, such as dimensionality, size, valency, orientation, membrane fluidity and receptor density, were quantitatively addressed. The C-terminally decahistidine-tagged extracellular domains (EC) of ifnar1 and ifnar2 were site-specifically tethered onto a solid-supported fluid lipid membrane carrying covalently attached chelator bis-nitrilotriacetic acid (bis-NTA) groups. Interactions on the lipid bilayer were detected with a novel solid-phase detection technique, which allows simultaneous detection of ligand binding to membrane-anchored receptors and of lateral interactions between them in real time. This was achieved by combining two optical techniques: label-free reflectance interferometry (RIf) and total internal reflection fluorescence spectroscopy (TIRFS). Fluorescence signals on the order of 10 fluorophores/µm² were detected without substantial photobleaching. The sensitivity of the label-free interferometric detection was in the range of 10 pg/mm². The crosstalk between the two signals was eliminated by means of spectral separation.
Fluorescence was detected in the visible region and RIf was performed at 800 nm in the near infrared. Flow-through conditions made it possible to automate experiments and to measure binding events as fast as ~5 s-1. Using this technique we dissected the interactions involved in IFN-induced ifnar crosslinking. 2-dimensional association and dissociation rate constants were determined independently by tethering a high stoichiometric excess of one of the receptor subunits and comparing the dissociation of the labelled ligand from the membrane in the absence and presence of the non-labelled high-affinity competitor. Dissociation traces were fitted with a two-step dissociation model: the first step being the 2-dimensional separation of the ternary complex, followed by 3-dimensional ligand dissociation into solution. Label-free RIf detection allowed absolute parameterization of the 2-dimensional concentrations of the ifnar subunits on the membrane. The TIRFS signal provided high sensitivity for the ligand dissociation and was correlated against the RIf signal before fitting. These features of the detection system allowed us to parameterize the model, so that the 2-dimensional association and dissociation rate constants were the only variables during fitting. A further FRET-based binding assay was developed to determine the 2-dimensional dissociation rate constant using a pulse-chase approach. The donor fluorescence from ifnar2-EC was quenched upon ternary complex formation with the acceptor-labelled IFN and the non-labelled ifnar1-EC. The equilibrium was perturbed by rapid tethering of a substantial excess of non-labelled ifnar2-EC onto the membrane. The exchange of the labelled ifnar2-EC with the non-labelled one was monitored as a decrease in the FRET signal, with the 2-dimensional dissociation of ifnar2-EC from the ternary complex being the rate-limiting step.
Based on several mutants and variants of the interacting proteins, the effect of different rate constants and of receptor orientation on the 2-dimensional crosslinking dynamics was studied. We identified several critical features of 2-dimensional interactions on membranes which cannot readily be concluded from solution binding assays. The restricted rotation and the increased lifetime of the encounter complex due to high membrane viscosity are the main determinants of the 2-dimensional association. Tethering ifnar1-EC to the membrane via an N-terminal decahistidine tag decreased the 2-dimensional association rate constant 4-5 fold. Electrostatic attraction and steering, important mechanisms for enhancing association rate constants between soluble proteins, are not pronounced for interactions on the membrane. Protein orientation due to membrane anchoring dominates over electrostatic effects and, together with the increased lifetime of the encounter complex, has the consequence that 2-dimensional association rate constants are quite similar and do not correlate with association rate constants in solution. The 2-dimensional dissociation rate constants were generally 2-5-fold lower than the corresponding 3-dimensional dissociation rate constants in solution. Possible explanations are that the long lifetime of the encounter complex stabilizes the ternary complex or that membrane tethering affects the interaction diagram. In conclusion, combined TIRFS-RIf detection turned out to be a powerful and versatile technique for characterizing protein-protein interactions on membranes.
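The two-step dissociation model used to fit the traces (2-dimensional separation of the ternary complex, then 3-dimensional ligand dissociation into solution) is a sequential first-order scheme with an analytic solution. The sketch below fits that solution to simulated data with scipy; the rate constants, time base and noise level are illustrative assumptions, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sequential first-order scheme (assumed rates, illustrative only):
#   ternary --k2d--> binary --k3d--> ligand free in solution
# The membrane-bound fluorescence tracks [ternary] + [binary].
def bound_fraction(t, k2d, k3d):
    # analytic solution with A(0) = 1, B(0) = 0
    A = np.exp(-k2d * t)
    B = k2d / (k3d - k2d) * (np.exp(-k2d * t) - np.exp(-k3d * t))
    return A + B

t = np.linspace(0, 300, 100)          # seconds (simulated time base)
k2d_true, k3d_true = 0.05, 0.01       # s^-1, assumed values
rng = np.random.default_rng(1)
signal = bound_fraction(t, k2d_true, k3d_true) + 0.01 * rng.normal(size=t.size)

(k2d_fit, k3d_fit), _ = curve_fit(bound_fraction, t, signal, p0=[0.1, 0.02])
print(f"k2d = {k2d_fit:.3f} s^-1, k3d = {k3d_fit:.3f} s^-1")
```

Note that the bound fraction is symmetric under swapping the two rates, so assigning the fitted values to the 2D and 3D steps requires the independent constraints the abstract describes (RIf surface concentrations, competitor experiments).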
This paper makes a case for the future development of European corporate law through regulatory competition rather than EC legislation. It is for the first time becoming legally possible for firms within the EU to select the national company law that they wish to govern their activities. A significant number of firms can be expected to exercise this freedom, and national legislatures can be expected to respond by seeking to make their company laws more attractive to firms. Whilst the UK is likely to be the single most successful jurisdiction in attracting firms, the presence of different models of corporate governance within Europe makes it quite possible that competition will result in specialisation rather than convergence, and that no Member State will come to dominate as Delaware has done in the US. Procedural safeguards in the legal framework will direct the selection of laws which increase social welfare, as opposed simply to the welfare of those making the choice. Given that European legislators cannot be sure of the ‘optimal’ model for company law, the future of European company law-making would better be left with Member States than take the form of harmonized legislation.
Virtual screening of potential bioactive substances using the support vector machine approach
(2005)
This dissertation is a cumulative work comprising eight scientific publications (five published, two submitted, and one in preparation). In this research project, machine learning was applied to the virtual screening of molecule databases. The primary goal was to introduce and validate the support vector machine (SVM) approach for virtual screening for potential drug candidates. The introduction describes the role of virtual screening in drug design. Virtual screening methods can be applied in almost every area of pharmaceutical research: machine learning can be used from the selection of the first molecules and the optimization of lead structures through to the prediction of ADMET (absorption, distribution, metabolism, toxicity) properties. Section 4.2 presents methods that can be used to describe chemical structures in order to bring them into a format (descriptors) that can serve as input for machine learning methods such as neural networks or SVMs. The focus is on the methods used in this work. Most methods compute descriptors based only on the two-dimensional (2D) structure. Standard examples are physicochemical properties, atom and bond counts, etc. (Section 4.2.1). CATS descriptors, a topological pharmacophore concept, are likewise 2D-based (Section 4.2.2). Another type of descriptor captures properties derived from a three-dimensional (3D) molecular model. The success of such a description depends strongly on how representative the 3D conformation used to compute the descriptor is.
A further representation used in our work were fingerprints. In our case the fingerprints employed were unsuitable for training neural networks, because the fingerprint vector had too many dimensions (~10^5). In contrast, training SVMs with fingerprints worked. Compared to other methods, SVMs have the advantage that they classify well in very high-dimensional spaces. This combination of SVMs and fingerprints was a novelty and was first introduced into chemoinformatics by us. Section 4.3 focuses on the SVM method, which was used for almost all classification tasks in this work and formed a central topic of the dissertation. Owing to space restrictions, a detailed description of SVMs was omitted from the attached publications; Section 4.3 therefore gives a complete introduction to SVMs, including a full discussion of SVM theory: the optimal hyperplane, the soft-margin hyperplane, and quadratic programming as the technique for finding this optimal hyperplane. Section 4.3 also discusses kernel functions, which determine the exact form of the optimal hyperplane. Section 4.4 introduces the various methods we used for descriptor selection and works out the difference between "filter"-based and "wrapper"-based descriptor selection. In Publication 3 (Section 7.3) we compared the advantages and disadvantages of filter- and wrapper-based methods in virtual screening. Section 7 consists of the publications containing our research results. Our first publication (Publication 1) was a review article (Section 7.1).
In this article we gave a comprehensive overview of the applications of SVMs in bioinformatics and chemoinformatics. We discuss applications of SVMs to gene-chip analysis, DNA sequence analysis, and the prediction of protein structures and protein-protein interactions. We also described examples in which SVMs were used to predict the subcellular localization of proteins. It becomes clear that SVMs were not yet widespread in the field of virtual screening. To justify the use of SVMs as the main method of our research, in our next publication (Publication 2, Section 7.2) we carried out a detailed comparison between SVMs and various neural networks, which had established themselves as a standard method in virtual screening. The comparison concerned the separation of drug-like and non-drug-like molecules ("drug-likeness" prediction). The SVM classified 82% of all molecules correctly, and the classification was more robust than that of three-layer feed-forward ANNs with different numbers of hidden neurons. In this project we computed various descriptors to characterize the molecules: Ghose-Crippen fragment descriptors [86], physicochemical properties [9], and topological pharmacophores (CATS) [10]. The development of further methods building on the SVM concept is described in the publications in Sections 7.3 and 7.8. Publication 3 presents a new SVM-based method for selecting the descriptors relevant to a given activity. The same descriptors as in the project described above were used. As characteristic groups of molecules we selected various subsets of the COBRA database: 195 thrombin inhibitors, 226 kinase inhibitors, and 227 factor Xa inhibitors.
We succeeded in reducing the number of descriptors from originally 407 to about 50 without a significant loss of classification accuracy. We compared our method with a standard method for this application, the Kolmogorov-Smirnov statistic. The SVM-based method proved superior to the comparison methods in every case considered, in terms of prediction accuracy at the same number of descriptors. A detailed description is given in Section 4.4, where various "wrappers" for descriptor selection are also described. Publication 8 describes the application of active learning with SVMs. The idea of active learning is to select training molecules from the boundary region between the molecule classes to be distinguished; in this way the local classification can be improved. The following groups of molecules were used: ACE (angiotensin-converting enzyme), COX-2 (cyclooxygenase-2), CRF (corticotropin-releasing factor) antagonists, DPP (dipeptidyl peptidase) IV, HIV (human immunodeficiency virus) protease, nuclear receptors, NK (neurokinin) receptors, PPAR (peroxisome proliferator-activated receptor), thrombin, GPCRs, and matrix metalloproteinases. Active learning improved the performance of virtual screening, as this retrospective study showed. It remains to be seen whether the approach will become established, because despite the gain in prediction accuracy it is computationally demanding owing to the repeated SVM training. The publications in Sections 7.5, 7.6 and 7.7 (Publications 5-7) show practical applications of our SVM methods in drug design, in combination with other techniques such as similarity searching and neural networks for property prediction. In two cases the approach yielded novel ligands for COX-2 (cyclooxygenase-2) and dopamine D3/D2 receptors.
We were thus able to show clearly that SVM methods can be employed usefully for the virtual screening of compound collections. Within this work a fast method for generating large combinatorial molecule libraries, based on the SMILES notation, was also developed. In the early stages of drug design it is important to test as "diverse" a set of molecules as possible. There are various established methods for selecting such a subset. We developed a new method intended to be more accurate than the well-known MaxMin method. As a first step, the probability density estimate (PDE) of the available molecules was computed [78]: each molecule was described by descriptors, and the PDE was computed in the N-dimensional descriptor space. Molecules were then selected with the Metropolis algorithm [87]. The idea is to select few molecules from regions of high density and more molecules from regions of low density. The results obtained, however, revealed two drawbacks: first, molecules with unrealistic descriptor values were selected, and second, our algorithm was too slow. This aspect of the work was therefore not pursued further. In Publication 6 (Section 7.6), in collaboration with the molecular modeling group of Aventis Pharma Deutschland (Frankfurt), we developed an SVM-based ADME filter for the early detection of CYP 2C9 ligands. This nonlinear SVM filter achieved a significantly higher prediction accuracy (q2 = 0.48) than a PLS model developed on the same data (q2 = 0.34). Three-point pharmacophore descriptors based on a three-dimensional molecular model were used. One of the important problems in computer-based drug design is the selection of a suitable conformation for a molecule, and we attempted to apply SVMs to this problem as well.
To this end, the training data set was enriched with several conformations per molecule and an SVM model was trained. The conformations with the worst-predicted IC50 values were then discarded. The remaining conformations favored by the SVM model were, however, unrealistic. This result reveals limits of the SVM approach. We nevertheless believe that further research in this area can lead to better results.
After a brief introduction to QCD and effective models in the first chapter, I analyze the dependence of the QCD transition temperature on the quark (or pion) mass in the second chapter. I find that a linear sigma model, which links the transition to chiral symmetry restoration, predicts a much stronger dependence of T_c on m_pi than seen in present lattice data for m_pi >~ 0.4 GeV. On the other hand, an effective Lagrangian for the Polyakov loop requires only small explicit symmetry breaking to describe T_c(m_pi) in the above mass range. In the third and fourth chapters, I study the linear sigma model with O(N) symmetry at nonzero temperature in the framework of the Cornwall-Jackiw-Tomboulis formalism. Extending the set of two-particle irreducible diagrams by adding sunset diagrams to the usual Hartree-Fock (or Hartree) contributions, I derive a new approximation scheme which extends the standard Hartree-Fock (or Hartree) approximation by the inclusion of nonzero decay widths.
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid water logging and to manage high groundwater tables. Among other impacts it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5' by 5' grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 out of the 116 countries with information on artificial drainage areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
We find that on average consumers chose the contract that ex post minimized their net costs. A substantial fraction of consumers (about 40%) still chose the ex post sub-optimal contract, with some incurring hundreds of dollars of avoidable interest costs. Nonetheless, the probability of choosing the sub-optimal contract declines with the dollar magnitude of the potential error, and consumers with larger errors were more likely to subsequently switch to the optimal contract. Thus most of the errors appear not to have been very costly, with the exception that a small minority of consumers persists in holding substantially sub-optimal contracts without switching. Classification: G11, G21, E21, E51
Using a set of regional inflation rates we examine the dynamics of inflation dispersion within the U.S.A., Japan and across U.S. and Canadian regions. We find that inflation rate dispersion is significant throughout the sample period in all three samples. Based on methods applied in the empirical growth literature, we provide evidence in favor of significant mean reversion (β-convergence) in inflation rates in all considered samples. The evidence on σ-convergence is mixed, however. Observed declines in dispersion are usually associated with decreasing overall inflation levels, which indicates a positive relationship between mean inflation and overall inflation rate dispersion. Our findings for the within-distribution dynamics of regional inflation rates show that dynamics are largest for Japanese prefectures, followed by U.S. metropolitan areas. For the combined U.S.-Canadian sample, we find a pattern of within-distribution dynamics that is comparable to that found for regions within the European Monetary Union (EMU). In line with findings in the so-called 'border literature', these results suggest that frictions across European markets are at least as large as they are, e.g., across North American markets. Classification: E31, E52, E58
Using a unique data set of regional inflation rates we examine the extent and dynamics of inflation dispersion in major EMU countries before and after the introduction of the euro. For both periods, we find strong evidence in favor of mean reversion (β-convergence) in inflation rates. However, half-lives to convergence are considerable and seem to have increased after 1999. The results indicate that the convergence process is nonlinear in the sense that its speed becomes smaller the further convergence has proceeded. An examination of the dynamics of overall inflation dispersion (σ-convergence) shows that there has been a decline in dispersion in the first half of the 1990s. For the second half of the 1990s, no further decline can be observed. At the end of the sample period, dispersion has even increased. The existence of large persistence in European inflation rates is confirmed when distribution dynamics methodology is applied. At the end of the paper we present evidence for the sustainability of the ECB's inflation target of an EMU-wide average inflation rate of less than but close to 2%. Classification: E31, E52, E58
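In this literature, β-convergence is typically tested by regressing the change in regional inflation on its initial level: a negative slope indicates mean reversion, and the slope implies a half-life to convergence. A minimal sketch on simulated regional data (the region count, rates and noise are illustrative, not the EMU series analyzed here):

```python
import numpy as np

# Cross-sectional beta-convergence regression on simulated data:
#   dpi_i = alpha + beta * pi0_i + error;  beta < 0 means mean reversion.
rng = np.random.default_rng(42)
n_regions = 40
pi0 = rng.normal(3.0, 1.0, n_regions)             # initial inflation rates (%)
beta_true = -0.4                                  # assumed reversion strength
dpi = beta_true * (pi0 - pi0.mean()) + 0.1 * rng.normal(size=n_regions)

X = np.column_stack([np.ones(n_regions), pi0])
(alpha, beta), *_ = np.linalg.lstsq(X, dpi, rcond=None)

# half-life: periods until half the initial deviation has decayed
half_life = np.log(2) / -np.log(1 + beta)
print(f"beta = {beta:.2f}, implied half-life = {half_life:.1f} periods")
```

With a slope near -0.4 the implied half-life is roughly 1.4 periods; the long half-lives the abstract reports correspond to slopes much closer to zero.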
The paper documents a lack of awareness of financial assets in the 1995 and 1998 Bank of Italy Surveys of Household Income and Wealth. It then explores the determinants of awareness, and finds that the probability that survey respondents are aware of stocks, mutual funds and investment accounts is positively correlated with education, household resources, long-term bank relations and proxies for social interaction. Lack of financial awareness has important implications for understanding the stockholding puzzle and for estimating stock market participation costs. Classification: E2, D8, G1
The theory of intertemporal consumption choice makes sharp predictions about the evolution of the entire distribution of household consumption, not just about its conditional mean. In the paper, we study the empirical transition matrix of consumption using a panel drawn from the Bank of Italy Survey of Household Income and Wealth. We estimate the parameters that minimize the distance between the empirical and the theoretical transition matrix of the consumption distribution. The transition matrix generated by our estimates matches the empirical matrix remarkably well, both in the aggregate and in samples stratified by education. Our estimates strongly reject the consumption insurance model and suggest that households smooth income shocks to a lesser extent than implied by the permanent income hypothesis. Classification: D52, D91, I30
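An empirical transition matrix of the kind matched in this minimum-distance exercise can be built by binning the panel variable into quantiles in two periods and row-normalizing the counts. The sketch below does this for simulated AR(1) "consumption" with quintile bins; the persistence parameter, sample size and bin count are illustrative choices, not the paper's data.

```python
import numpy as np

# Simulated consumption panel: two periods, AR(1) persistence (assumed).
rng = np.random.default_rng(7)
n_hh, rho = 5000, 0.9
c_t = rng.normal(size=n_hh)
c_t1 = rho * c_t + np.sqrt(1 - rho**2) * rng.normal(size=n_hh)

k = 5  # quintile bins, one set of edges per period
edges_t = np.quantile(c_t, np.linspace(0, 1, k + 1))
edges_t1 = np.quantile(c_t1, np.linspace(0, 1, k + 1))
b_t = np.digitize(c_t, edges_t[1:-1])     # bin index 0..k-1 in period t
b_t1 = np.digitize(c_t1, edges_t1[1:-1])  # bin index 0..k-1 in period t+1

# count transitions and row-normalize: T[i, j] = P(quintile j | quintile i)
T = np.zeros((k, k))
for i, j in zip(b_t, b_t1):
    T[i, j] += 1
T /= T.sum(axis=1, keepdims=True)
print(np.round(T, 2))
```

High persistence shows up as mass concentrated on the diagonal; the estimation step would then choose model parameters so the theoretical matrix reproduces this pattern.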
Trusting the stock market
(2005)
We provide a new explanation for the limited stock market participation puzzle. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function not only of the objective characteristics of the stock, but also of the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less. The calibration of the model shows that this problem is sufficiently severe to account for the lack of participation of some of the richest investors in the United States as well as for differences in the rate of participation across countries. We also find evidence consistent with these propositions in Dutch and Italian micro data, as well as in cross-country data. Classification: D1, D8
Credit card debt puzzles
(2005)
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully specified accountant-shopper model, to show that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers. Classification: E210, G110
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when they face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare. It can lead to contagion between the two sectors and increase the risk of crises. Classification: G21, G22
How do markets spread risk when events are unknown or unknowable and were not anticipated in an insurance contract? While the policyholder can "hold up" the insurer for extra-contractual payments, the continuing gains from trade on a single contract are often too small to yield useful coverage. By acting as a repository of the reputations of the parties, we show that brokers provide a coordinating mechanism to leverage the collective hold-up power of policyholders. This extends the degree of both implicit and explicit coverage. The role is reflected in the terms of broker engagement, specifically in the ownership by the broker of the renewal rights. Finally, we argue that brokers can be motivated to play this role when they receive commissions that are contingent on insurer profits. This last feature questions a recent, well-publicized attack on broker compensation by New York attorney general Eliot Spitzer. Classification: G22, G24, L14
Biophysical investigation of the ligand-induced assembling of the human type I interferon receptor
(2005)
Type I interferons (IFNs) elicit antiviral, antiproliferative and immunomodulatory responses through binding to a shared receptor consisting of the transmembrane proteins ifnar1 and ifnar2. Differential signaling by different interferons – in particular IFNalphas and IFNbeta – suggests different modes of receptor engagement. In this work, either single ligand-receptor interactions or the formation of the extracellular part of a signaling complex were investigated with respect to thermodynamics, kinetics, stoichiometry and structural organization. Initially, an expression and purification strategy for the extracellular domain of ifnar1 (ifnar1-EC) using Sf9 insect cells was established, yielding mg amounts of glycosylated protein. Using reflectometric interference spectroscopy (RIfS), the interactions between IFNalpha2/IFNbeta and ifnar1-EC and ifnar2-EC were studied in order to understand the individual energetic contributions within the ternary complex. For IFNalpha2, a Kd of 5 µM for the interaction with ifnar1-EC was determined. Substantially tighter binding of IFNbeta to both ifnar2-EC and ifnar1-EC compared to IFNalpha2 was observed. For neither IFNalpha2 nor IFNbeta was stabilization of the complex with ifnar1-EC in the presence of soluble ifnar2-EC detectable. In addition, no direct interaction between ifnar2 and ifnar1 could be shown. Thus, stem-stem interactions between the extracellular domains of ifnar1 and ifnar2 do not seem to play a role in ternary complex formation. Furthermore, ligand-induced cross-talk between ifnar1-EC and ifnar2-EC tethered onto solid-supported, fluid lipid bilayers was investigated by RIfS and total internal reflection fluorescence spectroscopy. Very stable binding of IFNalpha2 at high receptor surface concentrations was observed, with an apparent kd approximately 200-times lower than for ifnar2-EC alone.
This apparent kd was strongly dependent on the surface concentration of the receptor components, suggesting kinetic rather than static stabilization, which was corroborated by competition experiments. These results indicate that signaling is activated by transient cross-talk between ifnar1 and ifnar2, which is engaged several orders of magnitude more efficiently by IFNbeta than by IFNalpha2. With respect to the differential recognition of different IFNs, ifnar1-EC was dissected into sub-fragments containing different subsets of the four Ig-like domains. The proper folding and glycosylation of these proteins, also purified in mg amounts, were confirmed by SDS-PAGE, size exclusion chromatography and CD spectroscopy. Surprisingly, only the construct containing all three N-terminal Ig-like domains was active in terms of ligand binding, indicating that these domains are required. Competitive binding of IFNalpha2 and IFNbeta to both this fragment and ifnar1-EC was demonstrated. Cellular binding assays with different fragments, however, highlight the key role of the membrane-proximal Ig-like domain for the formation of an in situ IFN-receptor complex and the ensuing signal activation. Even substitution with Ig-like domains from homologous cytokine receptors did not restore high-affinity ligand binding. Receptor assembly analysis on supported lipid bilayers revealed that an appropriate orientation of the receptor is required, which is controlled by the membrane-proximal Ig-domain. All results indicate that differential signaling is encoded by the efficiency of signaling complex formation, which is controlled by the binding affinity of the IFNs to the extracellular domains of ifnar1 and ifnar2.
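The µM-range affinities quoted above translate directly into receptor occupancy via the law of mass action: for a simple 1:1 interaction, Kd = koff/kon, and the equilibrium fraction of receptor occupied is [L]/(Kd + [L]). A minimal sketch (illustrative concentrations, not measured data from this work):

```python
# Equilibrium occupancy of a 1:1 ligand-receptor interaction:
#   theta = [L] / (Kd + [L]),  with  Kd = koff / kon.

def fraction_bound(ligand_conc_uM: float, kd_uM: float) -> float:
    """Equilibrium fraction of receptor occupied by free ligand."""
    return ligand_conc_uM / (kd_uM + ligand_conc_uM)

kd = 5.0  # µM, the IFNalpha2/ifnar1-EC affinity reported above
print(fraction_bound(5.0, kd))   # 0.5 -- at [L] = Kd, half the receptor is bound
print(fraction_bound(50.0, kd))  # ~0.91
```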
Here I analyse 23 populations of D. galeata, a large-lake cladoceran, distributed mainly across the Palaearctic. I detected high levels of clonal diversity and population differentiation using variation at six microsatellite loci across Europe. Most populations were characterised by deviations from Hardy-Weinberg equilibrium and significant heterozygote deficiencies. The observed heterozygote deficiencies might be a consequence of the simultaneous hatching of individuals produced during different times of the year, or of the coexistence of ecologically and genetically differentiated subpopulations. Significant isolation by distance was found only over large geographic distances (> 700 km). This pattern is mainly due to the high genetic differentiation among neighbouring populations. My results suggest that historic populations of Daphnia were once interconnected by gene flow, but current populations are now largely isolated. Thus, local ecological conditions, which determine the level of biparental sexual reproduction and local adaptation, are the main factors mediating the population structure of D. galeata. The population genetic structure and diversity of D. galeata were investigated at a European scale using six microsatellite loci and 12S rDNA sequence data to infer and compare historical and contemporary patterns of gene flow. D. galeata has the potential for long-distance dispersal via ephippial resting eggs by wind and other dispersal vectors (waterfowl), but in general shows strong population differentiation even among neighbouring populations. A total of 427 individuals were analysed for microsatellite data and 85 individuals for mitochondrial (mtDNA) sequence data from 12 populations across Europe. I detected genetic differentiation among populations across Europe and among locations within sampling regions for both genetic marker systems (average values: mtDNA FST = 0.574; microsatellite FST = 0.389), resulting in a lack of isolation by distance.
Furthermore, several microsatellite alleles and one haplotype were shared across populations. The partitioning of molecular variance was inconsistent between the two marker systems: microsatellite variation was higher within than among populations, whereas the mtDNA data yielded the inverse pattern. Relatively high levels of nuclear DNA diversity were found across Europe. The amount of mitochondrial diversity was low in Spain, Hungary and Denmark. Gene flow analysis at a European scale did not reveal the typical pattern of population recolonization expected in the light of postglacial colonization hypotheses. Populations that had recently experienced an expansion or a population bottleneck were observed in both middle and northern Europe. Since these populations revealed high genetic diversity in both marker systems, I suggest that these areas represent postglacial zones of secondary contact among divergent lineages of D. galeata. In order to reveal the relationship between the population genetic structure of D. galeata and the relative contribution of environmental factors, I used a statistical framework based on canonical correspondence analysis. Although I detected no single ecological gradient mediating the genetic differentiation in either lake region, it is noteworthy that the same ecological factors were significantly correlated with intra- and interspecific genetic variation of D. galeata. For example, I found a relationship between the genetic variation and differentiation of D. galeata and higher and lower trophic levels (phytoplankton, submerged macrophytes and fish), and a relationship between clonal variation and species diversity within the Cladocera. Variance partitioning revealed only a minor contribution of each environmental category (abiotic, biomass/density and diversity) to the genetic diversity of D. galeata, while the largest proportion of variation was explained by shared components.
My work illustrates the important role of ecological differentiation and adaptation in structuring genetic variation, and it highlights the need for approaches incorporating a landscape context for population divergence.
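The FST values reported above (mtDNA FST = 0.574; microsatellite FST = 0.389) quantify how much of the total genetic variance lies among rather than within populations. For a single biallelic locus, the classical definition FST = (HT - HS)/HT can be sketched as follows (toy allele frequencies, not the thesis data):

```python
def fst_biallelic(subpop_freqs):
    """Wright's F_ST at one biallelic locus, from per-subpopulation
    frequencies of one allele: F_ST = (H_T - H_S) / H_T, where H_S is the
    mean expected heterozygosity within subpopulations and H_T that of the
    pooled population."""
    h_s = sum(2 * p * (1 - p) for p in subpop_freqs) / len(subpop_freqs)
    p_bar = sum(subpop_freqs) / len(subpop_freqs)
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t

print(fst_biallelic([0.5, 0.5]))  # 0.0 -- identical subpopulations
print(fst_biallelic([0.9, 0.1]))  # ~0.64 -- strongly diverged subpopulations
```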
This thesis is concerned with the characterization of the ALTRO chip (ALICE TPC Readout), an integral and important component of the readout chain of the TPC (Time Projection Chamber) detector of ALICE (A Large Ion Collider Experiment). ALICE is an experiment at the LHC (Large Hadron Collider) at CERN, still under construction, with the central aim of studying heavy-ion collisions. These are of particular interest because they provide experimental access to the QGP (Quark Gluon Plasma), the only phase transition predicted by the Standard Model that is attainable under laboratory conditions. In 2004, measurements were carried out at a test beam at the CERN PS (Proton Synchrotron). The prototype was fully equipped with FECs, corresponding to 5400 channels, and filled with a different gas mixture (Ne/N2/CO2 90%/5%/5%). For optimal performance of the ALICE TPC, the digital processor in the ALTRO, consisting of four processing units, must be configured with suitable parameters. The data flow begins with the BCS1 (Baseline Correction and Subtraction 1) module, which removes systematic perturbations and the baseline. Since the ALTRO continuously samples the incoming signal, it automatically removes slow baseline drifts that can arise, for example, from temperature changes. This is followed by the TCF (Tail Cancellation Filter), which removes the tail of the slowly falling signal generated by the PASA. To remove non-systematic baseline perturbations, the BCS2 (Baseline Correction and Subtraction 2) follows, based on a moving-average calculation that excludes detector signals above a double threshold. The final signal-processing unit is the ZSU (Zero Suppression Unit), which removes samples below a defined threshold. Here, the procedure for extracting the TCF and BCS1 parameters from existing detector data is described.
During the analysis of cosmic-ray data, an additional structure in the tail was noticed for signals with high amplitude (>700 ADC). The monitor was therefore extended with a moving-average filter, whereupon this structure also appeared in smaller signals (>200 ADC). This signal is produced by ions drifting towards the cathode or the pads; however, neither the spread of the electron avalanche at the anode nor the variation in the size of the generated electron avalanches had previously been understood or measured. A successful measurement and characterization is described in this work. In the summer of 2005, the installation of the TPC gas chambers into ALICE begins, with the electronics following at the end of the year. In parallel, the TPC prototype was recommissioned, and in spring a complete sector will be equipped with the detector electronics. With these two setups, the ALTRO characterization will be continued, refined and completed.
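The digital chain described above (moving-average baseline correction with signal exclusion, followed by zero suppression) can be sketched in a few lines. Window length and thresholds here are illustrative placeholders, not the ALTRO's actual configuration values or fixed-point arithmetic:

```python
def baseline_correct(samples, window=8, exclude_above=15):
    """Moving-average baseline estimate (BCS2-like): samples deviating from
    the current baseline by more than `exclude_above` are treated as signal
    and excluded from the average."""
    baseline, history, corrected = 0.0, [], []
    for s in samples:
        if abs(s - baseline) < exclude_above:
            history = (history + [s])[-window:]
            baseline = sum(history) / len(history)
        corrected.append(s - baseline)
    return corrected

def zero_suppress(samples, threshold=5):
    """ZSU-like step: keep only (index, value) pairs above threshold."""
    return [(i, s) for i, s in enumerate(samples) if s > threshold]

raw = [10, 10, 11, 10, 60, 80, 30, 10, 9, 10]   # pedestal ~10 plus one pulse
print(zero_suppress(baseline_correct(raw)))      # only the pulse samples survive
```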
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions are studied within the HSD and UrQMD transport models. The scaled variances of negative, positive, and all charged hadrons in Pb+Pb at 158 AGeV are analyzed in comparison to the data from the NA49 Collaboration. We find a dominant role of the fluctuations in the nucleon participant number for the final hadron multiplicity fluctuations. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the final multiplicity fluctuations as a function of collision centrality. The analysis reveals surprising effects in the recent NA49 data which indicate a rather strong mixing of the projectile and target hadron production sources even in peripheral collisions. PACS numbers: 25.75.-q, 25.75.Gz, 24.60.-k
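The scaled variance analysed in such studies is omega = Var(N)/&lt;N&gt;, which equals 1 for a Poisson distribution; event-by-event fluctuations in the number of participant sources push it above 1. A toy illustration (source numbers and mean multiplicities are arbitrary, not HSD/UrQMD output):

```python
import math
import random

def scaled_variance(multiplicities):
    """omega = Var(N) / <N>; equals 1 for a Poisson distribution."""
    n = len(multiplicities)
    mean = sum(multiplicities) / n
    var = sum((x - mean) ** 2 for x in multiplicities) / n
    return var / mean

def poisson(lam):
    """Knuth's sampler; adequate for the moderate means used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

random.seed(0)
# Fixed number of particle sources -> Poissonian multiplicities, omega ~ 1.
events_fixed = [poisson(20) for _ in range(20_000)]
# A fluctuating participant number (3..7 sources, 4 hadrons each on average)
# inflates omega well above the Poisson value.
events_fluct = [poisson(4 * random.randint(3, 7)) for _ in range(20_000)]
print(scaled_variance(events_fixed))   # ~1
print(scaled_variance(events_fluct))   # ~2.6 (analytically 1 + 16*Var(n)/20)
```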
Mitochondrial NADH:ubiquinone oxidoreductase (complex I), the largest multiprotein enzyme of the respiratory chain, catalyses the transfer of two electrons from NADH to ubiquinone, coupled to the translocation of four protons across the membrane. In addition to the 14 strictly conserved central subunits, it contains a variable number of accessory subunits. At present, the best-characterized enzyme is complex I from bovine heart, with a molecular mass of about 980 kDa and 32 accessory proteins. In this study, the subunit composition of mitochondrial complex I from the aerobic yeast Y. lipolytica was analysed by a combination of proteomic and genomic approaches. The sequences of 37 complex I subunits were identified. The sum of their individual molecular masses (about 930 kDa) was consistent with the native molecular mass of approximately 900 kDa for Y. lipolytica complex I obtained by BN-PAGE. A genomic analysis of Y. lipolytica and other eukaryotic databases to search for homologues of complex I subunits revealed 31 conserved proteins among the examined species. A novel protein named "X" was found in purified Y. lipolytica complex I by MALDI-MS. This protein exhibits homology to the thiosulfate sulfurtransferase enzyme referred to as rhodanese. The finding of a rhodanese-like protein in isolated complex I of Y. lipolytica suggests a special regulatory mechanism of complex I activity through control of the status of its iron-sulfur clusters. The second part of this study was aimed at investigating the possible role of one of these extra subunits, the 39 kDa (NUEM) subunit, which is related to the SDR enzyme family. The members of this family function in different redox and isomerization reactions and contain a conserved NAD(P)H-binding site. It was proposed that the 39 kDa subunit may be involved in a biosynthetic pathway, but its role in complex I is unknown. In contrast to the situation in N. crassa, deletion of the gene encoding the 39 kDa subunit in Y. lipolytica led to the absence of fully assembled complex I. This result might indicate a different pathway of complex I assembly in the two organisms. Several site-directed mutations were generated in the nucleotide binding motif. These had either no effect on enzyme activity and NADPH binding, or prevented complex I assembly. Mutations of arginine-65, which is located at the end of the second β-strand and is responsible for the selective interaction with the 2'-phosphate group of NADPH, retained complex I activity in mitochondrial membranes, but the affinity for the cofactor was markedly decreased. Purification of complex I from these mutants resulted in a decrease or loss of ubiquinone reductase activity. It is very likely that replacement of R65 not only led to a decrease in affinity for NADPH but also caused instability of the enzyme due to steric changes in the 39 kDa subunit. These data indicate that NADPH bound to the 39 kDa subunit (NUEM) is not essential for complex I activity, but is probably involved in complex I assembly in Y. lipolytica.
The thesis entitled "Investigations on the significance of nucleo-cytoplasmic transport for the biological function of cellular proteins" aimed to unravel molecular mechanisms in order to improve our understanding of the impact of nucleo-cytoplasmic transport on cellular functions. Within the scope of this work, it could be shown that regulated nucleo-cytoplasmic transport of a subfamily of homeobox transcription factors controlled their intra- and intercellular transport, thereby also influencing their transcriptional activity. This study describes a novel regulatory mechanism, which could in general play an important role in the ordered differentiation of complex organisms. Besides cis-active transport signals, post-translational modifications can also influence the localization and biological activity of proteins in trans. In addition to the known impact of phosphorylation on the transport and activity of STAT1, experimental evidence was provided demonstrating that acetylation affects the interaction of STAT1 with NF-kB p65 and subsequently modulates the expression of apoptosis-inducing NF-kB target genes. The impact of nucleo-cytoplasmic transport on the regulation of apoptosis was underlined by showing that the evolutionary conservation of an NES within the anti-apoptotic protein survivin plays an essential role in its dual function in the inhibition of apoptosis and ordered cell division. Since survivin is considered a bona fide cancer therapy target, these results strongly encourage future work to identify molecular decoys that specifically inhibit the nuclear export of survivin as novel therapeutics. In order to further dissect the regulation of nuclear transport and to efficiently identify transport inhibitors, cell-based assays are urgently required.
Therefore, the cellular assay systems developed in this work may not only serve to identify synthetic nuclear export and import inhibitors but may also be applied in systematic RNAi-screening approaches to identify novel components of the transport machinery. In addition, the translocation-based protease and protein-interaction biosensors can be applied in various biological systems, in particular to identify protein-protein interaction inhibitors of cancer-relevant proteins. In summary, this work not only underlines the general significance of nucleo-cytoplasmic transport for cell biology, but also demonstrates its potential for the development of novel therapies against diseases like cancer and viral infections.
Plural semantics for natural language understanding : a computational proof-theoretic approach
(2005)
The semantics of natural language plurals poses a number of intricate problems – both from a formal and a computational perspective. In this thesis I investigate problems of representing, disambiguating and reasoning with plurals from a computational perspective. The work defines a computationally suitable representation for important plural constructions, proposes a tractable resolution algorithm for semantic plural ambiguities, and integrates an automatic reasoning component for plurals. My solution combines insights from formal semantics, computational linguistics and automated theorem proving and is based on the following main ideas. Whereas many existing approaches to plural semantics work on a model-theoretic basis using higher-order representation languages, I propose a proof-theoretic approach to plural semantics based on a flat first-order semantic representation language, thus showing that a trade-off between expressive power and logical tractability can be found. The problem of automatic disambiguation of plurals is tackled by a deliberate decision to drastically reduce recourse to contextual knowledge for disambiguation and to rely instead on structurally available and thus computationally manageable information. A further central aspect of the solution lies in carefully drawing the borderline between real ambiguity and mere indeterminacy in the interpretation of plural noun phrases. As a practical result of my computational proof-theoretic approach to plural semantics, I can use my methods to perform automated reasoning with plurals by applying advanced first-order theorem provers and model generators available off-the-shelf. The results are prototypically implemented within the two logic-oriented natural language understanding applications DRoPs and Attempto. DRoPs provides an automatic plural disambiguation component for uncontrolled natural language, whereas Attempto works with a constructive disambiguation strategy for controlled natural language.
Both systems provide tools for the automated analysis of technical texts allowing users for example to automatically detect inconsistencies, to perform question answering, to check whether a conjecture follows from a text or to find equivalences and redundancies.
Molecular dynamics (MD) simulation serves as an important and widely used computational tool to study molecular systems at atomic resolution. No experimental technique is capable of generating a complete description of the dynamical structure of biomolecules in their native solution environment. MD simulations allow us to study the dynamics and structure of the system and, moreover, help in the interpretation of experimental observations. MD simulation was first introduced and applied by Alder and Wainwright in 1957 \cite{Alder57}. However, the first MD simulation of a macromolecule of biological interest was published 28 years ago \cite{McCammon77}. The simulation was concerned with the bovine pancreatic trypsin inhibitor (BPTI) protein, which has served as the "hydrogen molecule" of protein dynamics because of its small size, high stability, and the relatively accurate X-ray structure available in 1977 \cite{Deisenhofer75}. This method is now widely used to tackle larger and more complex biological systems \cite{Groot01,Roux02} and has been facilitated by the development of fast and efficient methods for treating the long-range electrostatic interactions \cite{Essmann95}, the availability of faster parallel computers, and the continuous development of empirical molecular mechanical force fields \cite{Langley98,Cheatham99,Foloppe00}. It took several years until the first MD simulations of nucleic acid systems were performed \cite{Levitt83,Tidor83,Prabhakaran83,Nilsson86}. These investigations, which were also performed in vacuo, clearly demonstrated the importance of proper handling of electrostatics in a highly charged nucleic acid system, and different approaches, such as reduction of the phosphate charges and addition of hydrated counterions, have been applied to remedy this shortcoming and to maintain stable DNA structures.
A few years later, the first MD simulation of a DNA molecule including explicit water molecules and counterions was published \cite{Seibel85}. Various MD simulations on fully solvated RNA molecules with explicit inclusion of mobile ions indicated the importance of a proper treatment of the environment of highly charged nucleic acids \cite{Lee95,Zichi95,Auffinger97,Auffinger99}. Given the central roles of RNA in the life of cells, it is important to understand the mechanism by which RNA forms three-dimensional structures endowed with properties such as catalysis, ligand binding, and recognition of proteins. Furthermore, the increasing awareness of the essential role of RNA in controlling viral replication and in bacterial protein synthesis emphasizes the potential of ribonucleic acids as targets for developing new antibacterial and antiviral drugs. Driven by fruitful collaborations in the Sonderforschungsbereich "RNA-ligand interactions", the model RNA systems in this study include various RNA tetraloops and HIV-1 TAR RNA. For the latter system, the binding sites of heteroaromatic compounds have been studied employing automated docking calculations \cite{Goodsell90}. The results show that it is possible to use this tool to dock small rigid ligands to an RNA molecule, while large and flexible molecules are clearly problematic. The main part of this work is focused on MD simulations of RNA tetraloops.
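The propagation step at the heart of any MD code, regardless of force field, is a symplectic integrator such as velocity Verlet. A minimal sketch for a single harmonic degree of freedom (an illustration of the integrator only, not the solvated RNA setup of this work):

```python
import math

# Velocity-Verlet integration, the standard MD propagator:
#   x(t+dt) = x(t) + v(t) dt + 0.5 a(t) dt^2
#   v(t+dt) = v(t) + 0.5 (a(t) + a(t+dt)) dt
# Here the "force field" is a single harmonic bond (force = -k x); real MD
# codes sum bonded and non-bonded terms over all atoms.

def velocity_verlet(x, v, accel, dt, steps):
    a = accel(x)
    traj = [x]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append(x)
    return traj

k, m = 1.0, 1.0
traj = velocity_verlet(x=1.0, v=0.0, accel=lambda x: -k * x / m, dt=0.01, steps=1000)
# After t = 10 the analytic solution is cos(10); the integrator tracks it closely.
print(traj[-1], math.cos(10.0))
```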
This analysis investigates the employment effects of placement vouchers (Vermittlungsgutscheine) and personnel service agencies (Personal-Service-Agenturen) by means of a macroeconometric evaluation. In addition to a microeconometric evaluation, which examines effects at the individual level, a macroeconometric analysis can make statements about the economy-wide effects of the measures. Structural multiplier effects in the macroeconomic circular flow are, however, not taken into account. The econometric model for analysing the two measures is based on a matching function that describes the search process of firms and workers for an employment relationship. The empirical analyses are carried out separately for East and West Germany as well as for the strategy types of the Federal Employment Agency. They show that the issuing of placement vouchers has a significantly positive effect on the search process only in "metropolitan districts, predominantly in West Germany, with high unemployment" (strategy type II). For the personnel service agencies, significantly positive effects are found for both East and West Germany. However, owing to the relatively small number of participants, a comparison with microeconometric analyses is still needed for a conclusive assessment of the results for the personnel service agencies.
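A common concrete choice for the matching function mentioned above is the Cobb-Douglas form m = A·U^alpha·V^beta, estimated in logs, with a programme effect entering through the efficiency parameter A. A sketch with illustrative parameter values (not those estimated in the analysis):

```python
import math

def matches(unemployed, vacancies, efficiency=0.5, alpha=0.6, beta=0.4):
    """Cobb-Douglas matching function: m = A * U**alpha * V**beta.
    `efficiency` (A) is where a programme effect would enter, e.g. as
    A * exp(gamma * treatment) in a log-linear regression."""
    return efficiency * unemployed ** alpha * vacancies ** beta

# Log-linearized for estimation: log m = log A + alpha log U + beta log V
m = matches(10_000, 2_000)
print(m, math.log(m))
```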
Serial correlation in dynamic panel data models with weakly exogenous regressors and fixed effects
(2005)
This paper presents and compares two estimation methodologies for dynamic panel data models in the presence of serially correlated errors and weakly exogenous regressors. The first is the first-difference GMM estimator as proposed by Arellano and Bond (1991); the second is the transformed Maximum Likelihood Estimator as proposed by Hsiao, Pesaran, and Tahmiscioglu (2002). Thereby, we consider the fixed effects case and weakly exogenous regressors. The finite sample properties of both estimation methodologies are analysed within a simulation experiment. Furthermore, we present an empirical example to assess the performance of both estimators with real data. JEL Classification: C23, J64
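The identification idea behind the first-difference estimator can be sketched in a few lines: differencing y_it = gamma·y_{i,t-1} + eta_i + eps_it removes the fixed effect eta_i, and y_{i,t-2} is a valid instrument for the differenced lag because it is uncorrelated with the differenced error. The simulation below uses the simplest single-instrument (Anderson-Hsiao-type) case, not the full Arellano-Bond moment set:

```python
import random

random.seed(1)
gamma_true, N, T = 0.5, 2000, 6

# Simulate a dynamic panel with fixed effects: y_it = gamma*y_{i,t-1} + eta_i + eps_it
panels = []
for _ in range(N):
    eta = random.gauss(0.0, 1.0)
    y = [eta / (1 - gamma_true)]          # start near the individual steady state
    for _ in range(T):
        y.append(gamma_true * y[-1] + eta + random.gauss(0.0, 1.0))
    panels.append(y)

# First-differencing removes eta_i; instrument dy_{t-1} with y_{t-2}:
#   dy_t = gamma * dy_{t-1} + deps_t,  with  E[y_{t-2} * deps_t] = 0
num = den = 0.0
for y in panels:
    for t in range(3, T + 1):
        dy_t   = y[t] - y[t - 1]
        dy_lag = y[t - 1] - y[t - 2]
        z      = y[t - 2]                  # instrument
        num += z * dy_t
        den += z * dy_lag
gamma_hat = num / den
print(gamma_hat)   # consistent for gamma = 0.5
```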
In this paper we evaluate the employment effects of job creation schemes on the participating individuals in Germany. Job creation schemes are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcomes of participants and non-participants for nearly three years after programme start and can therefore analyse mid- and long-term effects. Third, we test the sensitivity of the results with respect to various decisions which have to be made during implementation of the matching estimator, e.g. choosing the matching algorithm or estimating the propensity score. Finally, we check whether a possible occurrence of 'unobserved heterogeneity' distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One notable exception is long-term unemployed individuals, who benefit from participation. Hence, one policy implication is to target programmes more tightly at this problem group. JEL Classification: J68, H43, C13
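The matching estimator referred to above pairs each participant with non-participants of similar propensity score P(D = 1 | X) and averages the outcome gaps to estimate the effect on the treated (ATT). A nearest-neighbour sketch on toy data (a real implementation would estimate the score with a logit and add common-support and balancing checks):

```python
def nearest_neighbour_att(participants, controls):
    """ATT via 1-nearest-neighbour matching on the propensity score.
    Each unit is a (propensity_score, outcome) pair; matching is done
    with replacement."""
    gaps = []
    for p_score, p_outcome in participants:
        # match to the control with the closest propensity score
        _, c_outcome = min(controls, key=lambda c: abs(c[0] - p_score))
        gaps.append(p_outcome - c_outcome)
    return sum(gaps) / len(gaps)

# Toy data: (propensity score, employment outcome 0/1)
treated  = [(0.8, 1), (0.6, 0), (0.7, 1)]
controls = [(0.79, 1), (0.61, 1), (0.30, 1), (0.72, 0)]
print(nearest_neighbour_att(treated, controls))   # 0.0 on this toy data
```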
Job creation schemes (JCS) have been one important programme of active labour market policy in Germany, aiming at the re-integration of hard-to-place unemployed individuals into regular employment. In contrast to earlier evaluation studies of these programmes based on survey data, we use administrative data containing more than 11,000 participants for our analysis and can hence take effect heterogeneity explicitly into account. We focus on effect heterogeneity caused by differences in the implementation of programmes (economic sector, types of support and implementing institutions). The results are rather discouraging and show that, in general, JCS are unable to improve the re-integration chances of participants into regular employment.
The effects of vocational training programmes on the duration of unemployment in Eastern Germany
(2005)
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory situation of the labour market has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Agency for the period between October 1999 and December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs the unemployment duration in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying the individual characteristics that are responsible for the effect heterogeneity and using this information for a better allocation of individuals therefore offers some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most of the groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL Classification: C13, J68, H43
Innovations are a key factor in ensuring the competitiveness of establishments as well as in enhancing the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. As a response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect, and that there are considerable differences with regard to size class and between West and East German establishments.
In recent methodological work, the well-known ACD approach, originally introduced by Engle and Russell (1998), has been supplemented by the involvement of an unobservable stochastic process which accompanies the underlying duration process via a discrete mixture of distributions. The Mixture ACD model, emanating from the specialized proposal of De Luca and Gallo (2004), has proved to be a suitable tool for the description of financial duration data. The use of one and the same family of ordinary distributions has been common practice until now. Our contribution advocates the use of a richly parameterized, comprehensive family of distributions which allows different distributional idiosyncrasies to interact. JEL classification: C41, C22, C25, C51, G14.
We propose a new framework for modelling the time dependence of duration processes operating on financial markets. The pioneering ACD model introduced by Engle and Russell (1998) is extended in such a way that the duration process is accompanied by an unobservable stochastic process. The Discrete Mixture ACD framework provides us with a general methodology which puts this idea into practice. It is established by introducing a discrete-valued latent regime variable which can be justified in the light of recent market microstructure theories. The empirical application demonstrates its ability to capture specific characteristics of intraday transaction durations where alternative approaches fail. JEL classification: C41, C22, C25, C51, G14.
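The recursion shared by these ACD variants is psi_i = omega + alpha·x_{i-1} + beta·psi_{i-1}, with duration x_i = psi_i·eps_i for i.i.d. unit-mean positive innovations; the mixture extensions let the parameters switch with a latent regime variable. A simulation sketch of the plain ACD(1,1) with exponential innovations:

```python
import random

def simulate_acd(omega, alpha, beta, n, seed=42):
    """Simulate an ACD(1,1): psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1},
    x_i = psi_i * eps_i with unit-mean exponential innovations."""
    random.seed(seed)
    psi = omega / (1 - alpha - beta)      # unconditional mean duration
    durations = []
    for _ in range(n):
        x = psi * random.expovariate(1.0)
        durations.append(x)
        psi = omega + alpha * x + beta * psi
    return durations

d = simulate_acd(omega=0.1, alpha=0.1, beta=0.8, n=50_000)
print(sum(d) / len(d))   # close to omega / (1 - alpha - beta) = 1.0
```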
We argue that hadron-induced atmospheric air showers from ultra-high energy cosmic rays are sensitive to QCD interactions at very small momentum fractions x, where nonlinear effects should become important. The leading partons from the projectile acquire large random transverse momenta as they pass through the strong field of the target nucleus, which breaks up their coherence. This leads to a steeper x_F-distribution of leading hadrons as compared to low-energy collisions, which in turn reduces the position of the shower maximum Xmax. We argue that high-energy hadronic interaction models should account for this effect, caused by the approach to the black-body limit, which may shift fits of the composition of the cosmic ray spectrum near the GZK cutoff towards lighter elements. We further show that present data on Xmax(E) exclude that the rapid ~ 1/x^0.3 growth of the saturation boundary (which is compatible with RHIC and HERA data) persists up to GZK cutoff energies. Measurements of pA collisions at the LHC could further test the small-x regime and significantly advance our understanding of high-density QCD.
Sharing of substructures like subterms and subcontexts in terms is a common method for the space-efficient representation of terms; it allows, for example, exponentially large terms to be represented in polynomial space, or terms with iterated substructures to be represented in a compact form. We present singleton tree grammars as a general formalism for the treatment of sharing in terms. Singleton tree grammars (STGs) are recursion-free context-free tree grammars without alternatives for non-terminals and with at most unary second-order nonterminals. STGs generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals in an STG generate the same term can be done in polynomial time, which implies that the equality test for terms with shared terms and contexts, where composition of contexts is permitted, can be done in polynomial time in the size of the representation. This allows polynomial-time algorithms for terms exploiting sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
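The effect of subterm sharing can be illustrated with hash-consing, a simpler device than STGs (which additionally share contexts): every distinct subterm is stored exactly once, so a term whose tree has exponentially many nodes needs only linearly many stored nodes, and equality of shared subterms reduces to an id comparison:

```python
class TermStore:
    """Hash-consing store: each distinct subterm is created exactly once."""
    def __init__(self):
        self.nodes = {}
    def make(self, symbol, *children):
        key = (symbol, children)             # children are node ids
        return self.nodes.setdefault(key, len(self.nodes))

store = TermStore()
prev = t = store.make("a")
for _ in range(30):
    prev, t = t, store.make("f", t, t)       # tree size doubles each step

# The unfolded tree has 2**31 - 1 nodes; the shared form stores only 31.
print(len(store.nodes))                      # 31
# Rebuilding an existing subterm returns the same node id (pointer equality):
print(store.make("f", prev, prev) == t)      # True
```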
Plenary lecture, World Congress on Philosophy of Law and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch "private" transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 547-562, and in "Anales de la Cátedra Francisco Suárez 2005". See also Teubner, Gunther: Globalized Justice - Fragmented Justice. Human Rights Violations by "Private" Transnational Actors
Charmonium production and suppression in heavy-ion collisions at relativistic energies is investigated within different models, i.e. the comover absorption model, the threshold suppression model, the statistical coalescence model, and the HSD transport approach. In HSD the charmonium dissociation cross sections with mesons are described by a simple phase-space parametrization including an effective coupling strength |M_i|^2 for the charmonium states i = chi_c, J/psi, psi'. Via detailed balance this makes it possible to include, without introducing any new parameters, the backward channels for charmonium regeneration from D-Dbar channels, which are missing in the comover absorption and threshold suppression models. It is found that all approaches yield a reasonable description of J/psi suppression in S+U and Pb+Pb collisions at SPS energies. However, they differ significantly in the psi'/J/psi ratio versus centrality at SPS and especially at RHIC energies. These pronounced differences can be exploited in future measurements at RHIC to distinguish the hadronic rescattering scenarios from quark coalescence close to the QGP phase boundary.
The quinol:fumarate reductase (QFR) is the terminal reductase of anaerobic fumarate respiration, the most commonly occurring type of anaerobic respiration. This membrane protein complex couples the oxidation of menaquinol to menaquinone to the reduction of fumarate to succinate. The three-dimensional crystal structure of the QFR from Wolinella succinogenes has previously been solved at 2.2 Å resolution. Although the diheme-containing QFR from W. succinogenes is known to catalyze an electroneutral process, structural and functional characterization of parental and variant enzymes has revealed active site locations which indicate electrogenic catalysis across the membrane. A solution to this apparent controversy was proposed with the so-called “E-pathway hypothesis”. According to this hypothesis, transmembrane electron transfer via the heme groups is strictly coupled to a parallel, compensatory transfer of protons via a transiently established pathway, which is inactive in the oxidized state of the enzyme. Proposed constituents of the E-pathway are the side chain of Glu C180 and the ring C propionate of the distal heme. Previous experimental evidence strongly supports such a role for the former constituent. One aim of this thesis is to investigate, by a combination of specific 13C heme propionate labeling and FTIR difference spectroscopy, whether the ring C propionate of the distal heme is involved in redox-coupled proton transfer in the QFR from W. succinogenes. In addition to W. succinogenes, the primary structures of the QFR enzymes of two other epsilon-proteobacteria are known. These are Campylobacter jejuni and Helicobacter pylori, which unlike W. succinogenes are human pathogens. The QFR from H. pylori has previously been established as a potential drug target, and the same is likely for the QFR from C. jejuni. The two pathogenic species colonize mucosal surfaces, causing several diseases.
The possibility of studying the QFRs from these bacteria and of developing more efficient drugs specifically targeting this enzyme depends substantially on the availability of large amounts of high-quality protein. Furthermore, biochemical and structural studies on QFR enzymes from epsilon-proteobacterial species other than W. succinogenes can be valuable to illuminate new aspects of, or to corroborate, the current understanding of this class of membrane proteins.
We study the collective flow of open charm mesons and charmonia in Au+Au collisions at sqrt(s_NN) = 200 GeV within the hadron-string-dynamics (HSD) transport approach. The detailed studies show that the coupling of D, anti-D mesons to the light hadrons leads to directed and elliptic flow comparable to that of the light mesons. This also holds approximately for J/psi mesons, since more than 50% of the final charmonia for central and midcentral collisions stem from D + anti-D induced reactions in the transport calculations. The transverse momentum spectra of D, anti-D mesons and J/psi's are only very moderately changed by the (pre-)hadronic interactions in HSD, which can be traced back to the collective flow generated by elastic interactions with the light hadrons. PACS numbers: 25.75.-q, 13.60.Le, 14.40.Lb, 14.65.Dw
The study of hidden charm production is an important part of the heavy-ion program. The standard approach to this problem [1] assumes that ccbar bound states are created only at the initial stage of the reaction and then partially destroyed at later stages due to interactions with the medium [2, 3, 4].
Nuclear collisions at intermediate, relativistic, and ultra-relativistic energies offer unique opportunities to study in detail manifold fragmentation and clustering phenomena in dense nuclear matter. At intermediate energies, the well known processes of nuclear multifragmentation -- the disintegration of bulk nuclear matter into clusters of a wide range of sizes and masses -- allow the study of the critical point of the equation of state of nuclear matter. At very high energies, ultra-relativistic heavy-ion collisions offer a glimpse at the substructure of hadronic matter by crossing the phase boundary to the quark-gluon plasma. The hadronization of the quark-gluon plasma created in the fireball of an ultra-relativistic heavy-ion collision can be considered, again, as a clustering process. We present two models which allow the simulation of nuclear multifragmentation and of hadronization via the formation of clusters in an interacting gas of quarks, and discuss the importance of clustering to our understanding of hadronization in ultra-relativistic heavy-ion collisions.
We study Mach shocks generated by fast partonic jets propagating through deconfined, strongly interacting matter. Our main goal is to take into account the different types of collective motion during the formation and evolution of this matter. We predict a significant deformation of Mach shocks in central Au+Au collisions at RHIC and LHC energies as compared to the case of jet propagation in a static medium. The observed broadening of the near-side two-particle correlations in pseudorapidity space is explained by the Bjorken-like longitudinal expansion. Three-particle correlation measurements are proposed for a more detailed study of the Mach shock waves.
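For the static-medium baseline that the abstract contrasts against, the Mach cone geometry follows from the ratio of the sound velocity to the jet velocity. A minimal sketch, using the commonly quoted emission-angle relation cos(theta_M) = c_s/v_jet and standard illustrative values (ideal-QGP sound speed c_s = 1/sqrt(3), ultrarelativistic jet v ~ 1) that are assumptions here, not taken from the abstract:

```python
import math

def mach_emission_angle(c_s, v_jet):
    """Emission angle of the Mach shock relative to the jet axis in a
    static medium, cos(theta_M) = c_s / v_jet. Defined only for a
    supersonic jet, v_jet > c_s."""
    if v_jet <= c_s:
        raise ValueError("no Mach cone for a subsonic jet")
    return math.acos(c_s / v_jet)

# Illustrative values: ideal massless-parton gas, c_s = 1/sqrt(3); v_jet ~ c.
theta = mach_emission_angle(1 / math.sqrt(3), 1.0)
print(math.degrees(theta))  # ~54.7 degrees
```

The paper's point is precisely that collective (Bjorken-like) expansion deforms this simple static-medium cone, so the relation above is only the reference geometry.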
We study the effects of the isovector-scalar delta meson on the equation of state (EOS) of neutron-star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, which is solved within the mean-field approximation. The numerical results show that the delta field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the delta field decreases with increasing magnetic field and becomes negligible at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. After including the delta field, the EOS turns out to be stiffer at B < 10^15 G but becomes softer at stronger magnetic fields. The anomalous magnetic moment (AMM) terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar to those without magnetic fields.
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c (2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass at finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications of D-mesons in nucleus-nucleus collisions at FAIR (GSI) will dominantly affect the width and not, as previously expected, the mass.
Potential energy surfaces are calculated by using the most advanced asymmetric two-center shell model, which allows us to obtain shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy-ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained by using the macroscopic-microscopic method.
In this increasingly complex world of learned information delivery and discovery - is it possible that the "free lunch" the Publishing world worries about could come true? Although Open Access and Institutional Repositories have not (yet) created the "scorched earth" effect many were predicting, they are slowly and inevitably gaining momentum. Broader access to top-level information via Google (and others) does indeed appear to be "good enough" for many in their search for content. But you rarely get food for free in a good quality restaurant. You pay for the selection, preparation, speed and expertise of the delivery. At the soup kitchen the food can often be filling - but the queue will be long, the wait even longer and there is no chance of silver service or à la carte. If you are unfortunate enough to have little choice then this may be a great solution. Others will be willing to pay for a more satisfactory meal. As in all aspects of life, diversification and specialisation are fundamental forces. The publishing community in the years to come will continue to develop its offerings for a variety of needs that require more than just broth. To stretch the analogy, the ongoing presence of tap water in our lives has done little to halt the extraordinary rise of bottled water as part of our staple diet. Business reality will continue to settle these types of debate; my bet is that the commercial publishers see a role as providing information that commands an intrinsic value proposition to enough customers to remain economically viable for some time to come. Inspired by the comments and ideas expounded by Dr. James O'Donnell of Georgetown University on the liblicense listserv on 20th July this year, this paper will look to expand on the analogy and identify the good, the bad - but importantly the difference in information quality and access that will result in the radically changed (but still co-existent) information landscape of tomorrow.
The economical and organizational debates about open access have mostly been concerned with journals. This is not surprising since the open access movement can be seen largely as a response to the serials crisis. Recently the open access debate has been extended to include access to government produced data in different forms. In this presentation I'll critically look at some economic and organizational issues pertaining to the open access provision of bibliographical data.
In keeping with the views of its guru, Stevan Harnad, the open access movement is only prepared to discuss the two models of the "green road" and the "golden road" as the sole alternatives for the future of scientific publishing. The "golden road" is put forward as the royal road for solving the journals crisis. However, no one has drawn attention to the fact that the golden road represents a purely socialist solution to a free-market problem and thus continues the "samizdat" tradition of underground literature in the former Eastern bloc. The present paper reveals the alarmingly low level at which the open access movement intends to publish top-class results from science and research, and the low degree of professionalism with which they are satisfied.
The lecture was given at the 5th Frankfurt Scientific Symposium (22-23 October 2005). Viewing the video is (unfortunately) only possible with Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac (see document 1.html). All conference contributions are available at http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1992/.
Within the scenario of large extra dimensions, the Planck scale is lowered to values that will soon be experimentally accessible. Among the predicted effects, the production of TeV-mass black holes at the LHC is one of the most exciting possibilities. Though the final phases of the black hole's evaporation are still unknown, the formation of a black hole remnant is a theoretically well motivated expectation. We analyze the observables emerging from a black hole evaporation with a remnant instead of a final decay. We show that the formation of a black hole remnant yields a signature which differs substantially from that of a final decay. We find the total transverse momentum of the black hole event to be significantly dominated by the presence of a remnant mass, providing a strong experimental signature for black hole remnant formation.
Probing the density dependence of the symmetry potential in intermediate energy heavy ion collisions
(2005)
Based on the ultrarelativistic quantum molecular dynamics (UrQMD) model, the effects of the density-dependent symmetry potential for baryons and of the Coulomb potential for the produced mesons are investigated for neutron-rich heavy-ion collisions at intermediate energies. The calculated Delta-/Delta++ and pi-/pi+ production ratios show a clear beam-energy dependence of the sensitivity to the density-dependent symmetry potential, which is stronger for the pi-/pi+ ratio close to the pion production threshold. The Coulomb potential of the mesons changes the transverse momentum distribution of the pi-/pi+ ratio significantly, though it alters the pi- and pi+ total yields only slightly. The pi- yields (especially at midrapidity or at low transverse momenta) and the pi-/pi+ ratios at low transverse momenta are shown to be sensitive probes of the density-dependent symmetry potential in dense nuclear matter. The effect of the density-dependent symmetry potential on the production of both K0 and K+ mesons is also investigated.
In this study, we analyze the recently proposed charge transfer fluctuations within a finite pseudo-rapidity window. As the charge transfer fluctuation is a measure of the local charge correlation length, it is capable of detecting inhomogeneity in the hot and dense matter created by heavy-ion collisions. We predict that, going from peripheral to central collisions, the charge transfer fluctuations at midrapidity should decrease substantially while the charge transfer fluctuations at the edges of the observation window should decrease by only a small amount. These are consequences of a strongly inhomogeneous matter in which the QGP component is concentrated around midrapidity. We also show how to constrain the values of the charge correlation lengths in both the hadronic phase and the QGP phase using the charge transfer fluctuations.
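The observable itself has a standard definition in the charge-fluctuation literature: the charge transfer u(y) is half the difference of the net charge forward and backward of rapidity y, and its event-by-event variance D_u(y) = <u^2> - <u>^2 is the fluctuation discussed above. A minimal sketch under that assumed definition (the toy event data are hypothetical):

```python
def charge_transfer(event, y):
    """u(y) = (net charge forward of y - net charge backward of y) / 2,
    for an event given as a list of (rapidity, charge) pairs."""
    forward = sum(q for eta, q in event if eta > y)
    backward = sum(q for eta, q in event if eta <= y)
    return 0.5 * (forward - backward)

def fluctuation(events, y):
    """Charge transfer fluctuation D_u(y) = <u^2> - <u>^2 over an ensemble."""
    us = [charge_transfer(ev, y) for ev in events]
    mean = sum(us) / len(us)
    return sum(u * u for u in us) / len(us) - mean * mean

# Toy ensemble of two events, each a list of (rapidity, charge) pairs.
events = [
    [(-1.0, +1), (0.5, -1), (1.2, +1)],
    [(-0.8, -1), (0.3, +1), (1.5, +1)],
]
print(fluctuation(events, 0.0))  # -> 1.0
```

Scanning y across the acceptance window then yields the midrapidity-versus-edge comparison that the abstract proposes as a probe of inhomogeneity.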
The regeneration of hadronic resonances is discussed for heavy-ion collisions at SPS and SIS-300 energies. The time evolution of the Delta, rho and phi resonances is investigated. Special emphasis is put on resonance regeneration after chemical freeze-out. The emission time spectra of experimentally detectable resonances are explored.
The influence of the isospin-independent, isospin- and momentum-dependent equation of state (EoS), as well as of the Coulomb interaction, on pion production in intermediate-energy heavy-ion collisions (HICs) is studied for both isospin-symmetric and neutron-rich systems. The Coulomb interaction plays an important role in the reaction dynamics and strongly influences the rapidity and transverse momentum distributions of charged pions. It even leads to a pi-/pi+ ratio deviating slightly from unity for isospin-symmetric systems. The Coulomb interaction between mesons and baryons is also crucial for reproducing the proper pion flow, since it visibly changes the behavior of the directed and elliptic flow components of the pions. The EoS can be investigated better in neutron-rich systems if multiple probes are measured simultaneously, for example the rapidity and transverse momentum distributions of the charged pions, the pi-/pi+ ratio, the various pion flow components, as well as the difference of the pi+ and pi- flows. A new sensitive observable is proposed to probe the symmetry potential energy at high densities, namely the transverse momentum distribution of the elliptic flow difference Delta v_2^(pi+ - pi-)(p_t^c.m.).
It is investigated whether the canonical suppression associated with the exact conservation of a U(1) charge can be reproduced correctly by current transport models. To this end, a pion gas with a volume-limited cross section for kaon production and annihilation is simulated within two different transport prescriptions for realizing the inelastic collisions. It is found that both models can indeed dynamically account for the canonical suppression in the yields of rare strange particles.