Stem cells capable of self-renewal and differentiation into multiple tissues are important in medicine to reconstitute the hematopoietic system after myelo-ablative chemo- or radiotherapy. At present, adult stem cells such as mesenchymal stem cells (MSC) and hematopoietic stem cells (HSC) are used for therapeutic purposes. For tissue regeneration and tissue constitution, engraftment of the transplanted stem cells is a necessary feature. However, in many instances the transplanted stem cells reach the tissues with low efficiency. According to the three-step model of leukocyte extravasation by Springer et al., rolling, adhesion and transmigration form the three major steps by which transplanted stem cells enter the desired tissues. One of the molecular switches reported to be involved in these mechanisms is the family of Rho GTPases. The present study investigates the role of Rho GTPases in the adhesion and migration of stem and progenitor cells. Chemotactic and chemokinetic migration assays, transendothelial migration assays, migration of cells under shear stress, microinjection, retroviral and lentiviral gene transfer methods, oligonucleotide microarray analysis and pull-down assays were employed to elucidate the involvement of Rho GTPases in the migration and adhesion of stem and progenitor cells. The transmigration assay used to determine the migration of the adherent cell type, MSC, was optimized for efficient and effective assessment of the migrating cells. The involvement of Rho was found to be critical for stem and progenitor cell migration: inactivation of Rho by the C2I-C3 transferase toxin and/or overexpression of C3 transferase cDNA increased the migration rate of hematopoietic progenitor cells (HPC) and MSC. Moreover, modulation of Rho caused predictable cytoskeletal and morphological changes in MSC.
Assessment of Rho GTPase involvement in the interacting partner, the endothelial cells, during stem cell migration revealed that expression of active Rho induced E-selectin expression. The increased levels of E-selectin were functionally confirmed by the increased adhesion of progenitor cells (HPC) to a human umbilical vein endothelial cell (HUVEC) layer. Moreover, inhibition of Rac in the migrating endothelial progenitor cells (eEPC) increased their adhesion to HUVEC, correlating with an increased surface expression of the receptor CD44 in Rac-inactivated eEPC. In conclusion, this study shows that Rho GTPases control the adhesion and migration of the stem and progenitor cells HPC and MSC. Rho inhibition drives the cells to migrate in the blood vessels. A substantial increase in the level of active Rho in the endothelial layer, manifested by E-selectin surface expression, assists the adhesion of stem and progenitor cells to the endothelial layer. Serum factors and growth factors in the physiological system influence Rho GTPase expression both in the migrating stem cells and in the barrier endothelial cells. Thus, specific modulation of Rho GTPases in transplanted stem and progenitor cells could be an interesting tool to improve the migration and homing of stem cells for cellular therapy in the future.
This work is dedicated to the investigation of nuclear matter at non-zero temperatures within an effective hadronic model based on the Walecka model. It includes fermions as well as a vector omega meson and a scalar sigma meson, where for the latter a quartic self-interaction has been considered. The coupling constants have been adapted to the saturation properties of infinite nuclear matter. A set of self-consistent Schwinger-Dyson equations has been set up for all included particles within the Cornwall-Jackiw-Tomboulis formalism. This has been extended to non-zero temperatures via the imaginary-time formalism. Besides the tree level, two different stages of approximation have been considered: the Hartree approximation, which takes into account the double-bubble diagram for the scalar meson, and an improved approximation in which, in addition, two-particle-irreducible sunset diagrams for all fields were included. In the Hartree approximation the Schwinger-Dyson equations can be solved by quasi-particle ansätze, while in the improved approximation spectral functions with non-zero widths have to be introduced. The Schwinger-Dyson equations are solved with the fully dressed propagators. Comparing the two levels of approximation shows the influence of finite widths on the temperature dependence of the particle properties. The consideration of finite widths in fact has a significant influence on the transition from a phase of heavy nucleons to a phase of light nucleons observed in the Walecka model. The temperature dependence is weakened when finite widths are taken into account.
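In a mean-field (Hartree-type) treatment like the one above, the dressed nucleon mass follows from a self-consistency condition that is solved iteratively. The sketch below is a generic zero-temperature gap-equation iteration with invented coupling values; it is not the dissertation's actual CJT equation set, only an illustration of the self-consistent structure:

```python
import math

def scalar_density(m_eff, k_f, steps=2000):
    """Zero-T scalar density n_s = (2/pi^2) * int_0^kF k^2 m*/sqrt(k^2+m*^2) dk
    (spin-isospin degeneracy 4), integrated with a simple midpoint rule."""
    h = k_f / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += k * k * m_eff / math.sqrt(k * k + m_eff * m_eff)
    return (2.0 / math.pi ** 2) * total * h

def solve_effective_mass(m=939.0, k_f=270.0, c=3.0e-4, mix=0.5, tol=1e-8):
    """Iterate the gap equation m* = m - c * n_s(m*) with under-relaxation.
    Units are MeV; the coupling c is invented for illustration."""
    m_eff = m
    for _ in range(10000):
        new = m - c * scalar_density(m_eff, k_f)
        if abs(new - m_eff) < tol:
            return new
        m_eff = mix * new + (1.0 - mix) * m_eff
    raise RuntimeError("gap equation did not converge")

m_star = solve_effective_mass()  # dressed mass below the vacuum mass
```

The under-relaxation (`mix`) damps the fixed-point iteration, the same device typically needed when dressed propagators feed back into the self-energies.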
The present work was devised to address the systematic analysis of samples from a range of Roman non-ferrous metal artefacts from different archaeological contexts and sites in the Roman provinces of Germania Superior. One of the focal points of this study is the provenancing of different lead objects from five important Roman settlements between 15 BC and the beginning of the fourth century AD. For this purpose, measurements were made on lead and copper ore samples from the Siegerland, Eifel, Hunsrück and Lahn-Dill areas in Germany and supplemented with data from the literature to create a database of lead isotope ratios of European deposits. Compositional analysis of the lead objects by electron microprobe analysis showed that the Romans were able to purify lead from ore to up to 99%. Multi-Collector Inductively Coupled Plasma Mass-Spectrometry was used to determine the source of the lead, which played an important role in nearly all aspects of Roman life. Lead isotope ratios were measured for ore samples from German deposits on the eastern side of the Rhine (Siegerland, Lahn-Dill, Ems) and the western side of the Rhine (Eifel, Hunsrück), which contained enough ore reserves to meet the increasing local demand and are believed to have been mined during the Roman period. These data, together with those for Mediterranean ore deposits from the literature, were used to establish a database. The Mediterranean ore deposits range from Cambrian (high 207Pb/206Pb) to Tertiary (lower 207Pb/206Pb) values. In particular, the Cypriot deposits are younger, while the Spanish deposits fall either with the younger Sardic ores or close to the older Cypriot ores. The lead isotope ratios of most German ore deposits fall in between the 208Pb/206Pb vs. 207Pb/206Pb ratios of Sardinia and Cyprus, where the lead isotope signatures of ore deposits from France and Britain are also found.
Over 240 lead objects were measured, from Wallendorf (second century BC to first century AD), Dangstetten (15-8 BC), Waldgirmes (AD 1-10), Mainz (AD 1-300), Martberg (first to fourth centuries AD) and Trier (third to fourth centuries AD). Comparing the lead isotope ratios of the lead objects with those of the German ores shows that the source of over 85 per cent of the objects is the Eifel ore deposits, but the Romans also imported lead from the southern Massif Central and from Great Britain. A further topic of this work was the systematic study of the variation of copper isotope ratios in different copper minerals and of the mechanisms that control copper isotope fractionation in ore deposits. For this purpose, copper isotope analyses were made by Multi-Collector Inductively Coupled Plasma Mass-Spectrometry on a series of hydrothermal copper sulphides and their alteration products. Copper and lead isotope ratios were measured in coexisting phases of chalcopyrite and malachite and also in coexisting malachite and azurite. No significant fractionation was observed in malachite-azurite phases, but in coexisting chalcopyrite-malachite phases, malachite always shows a positive fractionation towards heavier isotope values. Zhu et al. and Larson et al. showed that isotopic variations in copper principally reflect mass fractionation in response to low-temperature processes rather than source heterogeneity. The low-temperature ore formation processes are mostly represented by weathering of primary sulphide ores to produce secondary carbonate phases and are therefore usually observed on the surface of ore deposits, which were probably removed during the early Bronze Age. Using this concept, copper isotope ratios were measured in some Early Bronze Age copper alloys and Roman copper alloys. However, no large copper isotope fractionation was observed. Lead and copper isotope ratios were also measured on samples from the Kupferschiefer.
Two profiles were investigated: (1) Sangerhausen, which was not directly influenced by the oxidizing brines of the Rote Fäule, and (2) Oberkatz, where both Rote Fäule-controlled and structure-controlled mineralization were observed. Results from maturation studies of organic matter suggest that the maximum temperature affecting the Kupferschiefer at Sangerhausen did not exceed 130°C. δ65Cu ranges between −0.78 and +0.58‰ and shows a positive correlation with copper concentration. The maximum temperature in the Kupferschiefer profile from Oberkatz is supposed to be around 150°C. δ65Cu in this profile ranges between −0.71 and +0.68‰. The pattern of copper isotope fractionation and copper concentration is the same as for the Sangerhausen profile. Original lead isotope ratios are strongly overprinted by high concentrations of uranium at the bottom of both profiles, causing more radiogenic lead.
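Provenancing by lead isotopes, as described above, amounts to comparing an object's measured ratios with reference fields of ore deposits. A minimal sketch, with invented reference ratios rather than the values measured in this work:

```python
import math

# Hypothetical reference ratios (208Pb/206Pb, 207Pb/206Pb) for ore fields;
# real provenancing compares against measured fields with uncertainty ellipses.
ORE_FIELDS = {
    "Eifel": (2.088, 0.856),
    "Massif Central": (2.104, 0.864),
    "Britain": (2.080, 0.846),
}

def nearest_field(ratio_208_206, ratio_207_206):
    """Return the ore field whose reference ratios lie closest to the
    object's measured point in isotope-ratio space."""
    def dist(field):
        r208, r207 = ORE_FIELDS[field]
        return math.hypot(r208 - ratio_208_206, r207 - ratio_207_206)
    return min(ORE_FIELDS, key=dist)

# A lead object measured close to the hypothetical Eifel signature:
source = nearest_field(2.089, 0.857)
```

A nearest-neighbour match in the 208Pb/206Pb vs. 207Pb/206Pb plane is only a first pass; overlapping ore fields, as noted above for Sardinia and Cyprus, are why additional evidence is needed.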
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb, which is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal order reduction. As equational theory we use contextual equivalence, i.e., terms are equal if, plugged into an arbitrary program context, their termination behaviour is the same. We use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We develop different proof tools for proving correctness of program transformations. We provide a context lemma for may- as well as must-convergence which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. In contrast to other approaches, neither our syntax nor our semantics makes use of a heap for sharing expressions. Instead, we represent these expressions explicitly via letrec-bindings.
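The difference between may- and must-convergence, and the effect of amb being locally bottom-avoiding, can be illustrated on a toy fragment of choice trees (an assumption for illustration only; the calculus of the paper also has abstractions, letrec, case and seq):

```python
# Toy programs: "V" (a value), "Bot" (divergence), or ("choice", p, q).

def may_converge(p):
    """Some evaluation path reaches a value."""
    if p in ("V", "Bot"):
        return p == "V"
    _, left, right = p
    return may_converge(left) or may_converge(right)

def must_converge_erratic(p):
    """Erratic (non-bottom-avoiding) choice: every path must reach a value."""
    if p in ("V", "Bot"):
        return p == "V"
    _, left, right = p
    return must_converge_erratic(left) and must_converge_erratic(right)

def must_converge_amb(p):
    """Locally bottom-avoiding amb diverges only if both arguments diverge,
    so it must-converges as soon as one argument must-converges."""
    if p in ("V", "Bot"):
        return p == "V"
    _, left, right = p
    return must_converge_amb(left) or must_converge_amb(right)

prog = ("choice", "Bot", "V")
```

On `prog`, erratic choice may-converges but does not must-converge, while the bottom-avoiding reading must-converges, which is why the combined may/must theory is sensitive to the flavour of non-determinism.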
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot, denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest-fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of the subset relationship of co-inductively defined sets by using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
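As an illustration of the greatest-fixpoint reading of such set constants, here is a naive coinductive inclusion check over hypothetical definitions of Top, Inf and Bot; the decision procedure of the paper works via tree automata and is far more general than this sketch:

```python
# Recursively defined set constants over list constructors, read coinductively.
# These toy definitions are assumptions for illustration.
DEFS = {
    "Top": [("Nil", []), ("Cons", ["Top", "Top"])],   # all (possibly infinite) lists
    "Inf": [("Cons", ["Top", "Inf"])],                # lists without a final Nil
    "Bot": [],                                        # no constructor alternative
}

def subset(a, b, assumed=None):
    """Coinductive check of DEFS[a] <= DEFS[b]: a pair already under
    consideration counts as success (greatest-fixpoint semantics).
    Sound only when each alternative of a is covered by a single
    alternative of b, which suffices for these toy definitions."""
    assumed = assumed or set()
    if (a, b) in assumed:
        return True
    assumed = assumed | {(a, b)}
    for con, args in DEFS[a]:
        candidates = [bargs for bcon, bargs in DEFS[b] if bcon == con]
        if not any(all(subset(x, y, assumed) for x, y in zip(args, bargs))
                   for bargs in candidates):
            return False
    return True
```

The cycle `Inf ⊆ Top` closes via the assumption set, which is exactly where the coinductive reading differs from an inductive one; handling unions split across several alternatives is what drives the general problem to EXPTIME.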
Extending the method of Howe, we establish a large class of untyped higher-order calculi, in particular ones with call-by-need evaluation, for which similarity, also called applicative simulation, can be used as a proof tool for showing contextual preorder. The paper also demonstrates that Mann's approach using an intermediate "approximation" calculus scales up well from a basic call-by-need non-deterministic lambda calculus to more expressive lambda calculi. That is, it is demonstrated that, after transferring the contextual preorder of a non-deterministic call-by-need lambda calculus to its corresponding approximation calculus, it is possible to apply Howe's method to show that similarity is a precongruence. The transfer itself is not treated in this paper. The paper also proposes an optimization of the similarity test by cutting off redundant computations. Our results also apply to deterministic or non-deterministic call-by-value lambda calculi, and improve upon previous work insofar as it is proved that only closed values are required as arguments for similarity testing instead of all closed expressions.
The paper examines challenges in effectively implementing the lender-of-last-resort function in the EU single financial market. It briefly highlights features of the EU financial landscape that could increase EU systemic financial risk, and briefly describes the complexities of the EU's financial-stability architecture for preventing and resolving financial problems, including lender-of-last-resort operations. The paper examines how the lender-of-last-resort function might materialize during a systemic financial disturbance affecting more than one EU Member State, and identifies challenges and possible ways of enhancing the effectiveness of the existing architecture.
In order to investigate the role of neuronal synchronization in perceptual grouping, a new method was developed to record selectively from multiple cortical sites of known functional specificity, as determined by optical imaging of intrinsic signals. To this end, a matrix of closely spaced guide tubes was developed in cooperation with a company providing the essential manufacturing technique, RMPD® (Rapid Micro Product Development). The matrix was embedded into a framework of hardware and software that allowed each guide tube to be mapped onto the cortical site an electrode would reach if inserted into that guide tube. With these developments, it was possible to determine the functional layout of the cortex by optical imaging and subsequently perform targeted recordings with multiple electrodes in parallel. The method was tested for its accuracy and found to target the electrodes to the desired cortical locations with a precision of 100 µm. Using the developed technique, neuronal activity was recorded from area 18 of anesthetized cats. For stimulation, Gabor patches in different geometrical configurations were placed over the recorded receptive fields, merging into visual objects appropriate for testing the hypothesis of feature binding by synchrony. Synchronization strength was measured by the height of the cross-correlation centre peaks. All pairwise synchronizations were summarized in a correlation index, which determined the mean difference of the correlation strengths between conditions in which recording sites should or should not fire in synchrony according to the binding hypothesis. The correlation index deviated significantly from zero for several of these configurations, further supporting the hypothesis that synchronization plays an important role in the process of perceptual grouping.
Furthermore, direct evidence was found for the independence of synchronization strength from the neuronal firing rate and for neurons that dynamically change the ensemble they participate in. In parallel to the experimental approach, mechanisms of oscillatory long-range synchronization were studied by network simulations. To this end, a biologically plausible model was implemented using pyramidal and basket cells with Hodgkin-Huxley-like conductances. Several columns were built from these cells, and intra- and inter-columnar connections were modelled on physiological data. When activated by independent Poisson spike trains, the columns showed oscillatory activity in the gamma frequency range. Correlation analysis revealed a tendency for the oscillations among the columns to synchronize locally, but a rapid phase transition occurred with increasing cortical distance. This finding suggests that the present view of inter-columnar connectivity does not fully explain oscillatory long-range synchronization and predicts that other processes, such as top-down influences, are necessary for long-range synchronization phenomena.
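The synchronization measure used above, the height of the cross-correlogram's centre peak, can be sketched on binary spike trains; the trains and bin conventions here are invented for illustration, not the recorded data:

```python
def cross_correlogram(train_a, train_b, max_lag):
    """Count spike coincidences of two binary spike trains at each lag in
    [-max_lag, max_lag]; the value at lag 0 is the centre peak."""
    n = len(train_a)
    counts = {}
    for lag in range(-max_lag, max_lag + 1):
        c = 0
        for t in range(n):
            u = t + lag
            if 0 <= u < n and train_a[t] and train_b[u]:
                c += 1
        counts[lag] = c
    return counts

# Two trains firing in near-perfect synchrony: the correlogram peaks at lag 0.
a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
cc = cross_correlogram(a, b, 2)
```

A correlation index as described above would then compare such centre-peak heights between "should synchronize" and "should not synchronize" stimulus conditions.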
Systemically administered chemotherapeutic agents are often ineffective in the treatment of diseases of the central nervous system (CNS). One of the reasons for this is insufficient drug transport into the brain due to the blood-brain barrier. One strategy for non-invasive drug delivery to the brain is the use of nanoparticles. Polybutylcyanoacrylate nanoparticles coated with polysorbate 80 (Tween® 80) can cross the blood-brain barrier and thus transport drugs into the brain. If the blood-brain barrier is partially damaged by a brain tumour and its permeability at the tumour site thereby increased, nanoparticles can additionally reach the tumour via the so-called EPR effect. In the first part of the present work, the drug loading of the nanoparticles was optimized by varying the formulation parameters, with the aim of developing a formulation with higher efficacy for the therapy of glioblastoma-bearing rats. In addition, the potential of doxorubicin bound to polybutylcyanoacrylate nanoparticles coated with "stealth agents" was investigated for the chemotherapy of brain tumours. In the second part of this study, the brain and body distribution in healthy and in glioblastoma-101/8-bearing rats was investigated after i.v. administration of poly(butyl-2-cyano[3-14C]acrylate) nanoparticles coated with polysorbate 80, and of such particles additionally loaded with doxorubicin (DOX-14C-PBCA + PS). The standard formulation of doxorubicin-polybutylcyanoacrylate nanoparticles (DOX-NP) was prepared by anionic polymerization of butylcyanoacrylate in the presence of DOX. In addition, different DOX-NP formulations were produced by varying the preparation procedure. The therapeutic potential of the formulations was investigated in rats with glioblastoma 101/8 transplanted into the brain.
Besides polysorbate 80, poloxamer 188 and poloxamine 908 were used as coating materials. The results showed that the standard formulation coated with polysorbate 80 was the most effective. The higher efficacy of DOX-NP+PS 80 could be explained by the ability of these carriers to transport the drug across the intact blood-brain barrier during an early stage of tumour development, via a receptor-mediated mechanism activated by the PS 80 coating. Our results also show that poloxamer 188 and poloxamine 908 considerably improve the anti-tumour effect of DOX-PBCA. The anti-tumour effect of these formulations could possibly be attributed to the EPR effect. It is known that tumoural drug uptake via the EPR effect is more pronounced for long-circulating drug carriers, so that more drug passes through the tumour-damaged blood-brain barrier. Uncoated nanoparticles, polysorbate 80-coated nanoparticles, or doxorubicin-loaded and polysorbate 80-coated nanoparticles were injected into healthy and tumour-bearing rats. These nanoparticle preparations showed different body distributions in the rats. Uncoated nanoparticles accumulated in the RES organs. Coating with PS 80 reduced the uptake of the NP in liver and spleen, while the concentration of the NP in the lung increased. These observations suggest that the alteration of the surface properties of the NP by the surfactant leads to interaction with different opsonins, which facilitates the uptake of the NP by different phagocytosing cells. In contrast, the uptake of the DOX-loaded, PS 80-coated nanoparticles was similar to that of the uncoated particles.
Compared with healthy rats, the concentration of the NP in the brain of tumour-bearing rats was significantly higher 10 days after tumour implantation. In the presence of the glioblastoma, the transport of NP into the brain is the result of several factors: in addition to the ability of PS 80 nanoparticles to cross the blood-brain barrier, these carriers extravasate through the tumour-leaky endothelium owing to the EPR effect. The concentration of PS 80 [14C]-PBCA NP in the glioblastoma was significantly higher than that of DOX [14C]-PBCA NP. This phenomenon can be explained by the different microenvironments of cerebral intra-tumoural and intact brain tissue. In particular, the positive charge of the tumoural regions and the positive charge of the DOX [14C]-PBCA NP may interact unfavourably. Nevertheless, the doxorubicin concentrations in the glioblastoma were sufficient to enable a therapeutic effect.
Group III presynaptic metabotropic glutamate receptors (mGluRs) play a central role in regulating presynaptic activity through G-protein effects on ion channels and signal-transducing enzymes. Like all Class C G-protein coupled receptors, mGluR8 has an extended intracellular C-terminal domain (CTD) presumed to allow for modulation of downstream signaling. To elucidate the function and modulation of mGluR8, yeast two-hybrid screens of an adult rat brain cDNA library were performed with the CTDs of mGluR8a and 8b (mGluR8-C) as baits. Different components of the sumoylation cascade (ube2a, sumo-1, Pias1, Pias gamma and Pias xbeta) and some other proteins were identified as mGluR8-interacting proteins. Binding assays using recombinant GST-fusion proteins confirmed that Pias1 interacts not only with mGluR8-C, but with all group III mGluR CTDs. Pias1 binding to mGluR8-C required a region N-terminal to a consensus sumoylation motif and was not affected by arginine substitution of the conserved lysine K882 within this motif. Co-transfection of fluorescently tagged mGluR8a-C, sumo-1 and enzymes of the sumoylation cascade into HEK 293 cells showed that mGluR8a-C can be sumoylated in cells. Arginine substitution of lysine K882 within the consensus sumoylation motif, but not of other conserved lysines within the CTD, abolished in vivo sumoylation. The results are consistent with post-translational sumoylation providing a novel mechanism of group III mGluR regulation.
We introduce a smooth mapping of some discrete space-time symmetries into quasi-continuous ones. Such transformations are related to q-deformations of the dilations of the Euclidean space and to the non-commutative space. We work out two examples of Hamiltonian invariance under such symmetries. The Schrödinger equation for a free particle is investigated in such a non-commutative plane and a connection with anyonic statistics is found. PACS: 03.65.Fd, 11.30.Er
Chemokines play a key role in the cellular infiltration of inflamed tissue. They are released by a wide variety of cell types during the initial phase of the host response to injury, allergens, antigens, or invading microorganisms, and selectively attract leukocytes to inflammatory foci, inducing both migration and activation. Monocyte chemoattractant protein-1 (MCP-1), a member of the CC chemokine superfamily, functions in attracting monocytes, T lymphocytes, and basophils to sites of inflammation. MCP-1 is produced by monocytes, fibroblasts, vascular endothelial cells and smooth muscle cells in response to various stimuli such as tumour necrosis factor-α (TNF-α), interferon-γ (IFN-γ), and interleukin-1β (IL-1β). It also plays an important role in the pathogenesis of chronic inflammation, and overexpression of MCP-1 has been implicated in diseases including glomerulonephritis and rheumatoid arthritis. Oligonucleotide-directed triple helix formation offers a means to target specific sequences in DNA and interfere with gene expression at the transcriptional level. Triple helix-forming oligonucleotides (TFOs) bind to homopurine/homopyrimidine sequences, forming a stable, sequence-specific complex with the duplex DNA. Purine-rich sequences are frequent in gene regulatory regions, and TFOs directed to promoter sequences have been shown to prevent binding of transcription factors and to inhibit transcription initiation and elongation. Exogenous TFOs that bind homopurine/homopyrimidine DNA sequences and form triple helices can be rationally designed, while the intracellular delivery of single-stranded RNA TFOs has not been studied in detail before. In this study, expression vectors were constructed which directed transcription of either a 19 nt triplex-forming pyrimidine CU-TFO sequence targeting the human MCP-1 promoter or two different 19 nt GU- or CA-control sequences, respectively, together with the vector-encoded hygromycin resistance mRNA as one fusion transcript.
HEK 293 cells were stably transfected with these vectors, and several TFO and control cell lines were generated. Functionally relevant triplex formation of a TFO with a corresponding 19 bp GC-rich AP-1/SP-1 site of the human MCP-1 promoter was shown. Binding of the synthetic 19 nt CU-TFO to the MCP-1 promoter duplex was verified by triplex blotting at pH 6.7. Underlining binding specificity, control sequences, including the GU- and CA-sequences, a TFO containing one single mismatch and a MCP-1 promoter duplex containing two mismatches, did not participate in triplex formation. Establishing a magnetic capture technique with streptavidin microbeads, it was verified that at pH 7.0 the 19 nt TFO embedded in a 1.1 kb fusion transcript binds to a plasmid-encoded MCP-1 promoter target duplex three times more strongly than the controls. Finally, cell culture experiments revealed 76 ± 10.2% inhibition of MCP-1 protein secretion in TNF-α-stimulated CU-TFO-harboring cell lines, and up to 88% after TNF-α and IFN-γ co-stimulation, in comparison to controls. Expression of interleukin-8 (IL-8), a TNF-α-inducible control gene, was not affected by the CU-TFO, demonstrating both highly specific and effective chemokine gene repression. Furthermore, another chemokine target, regulated upon activation, normal T cell expressed and secreted (RANTES), which plays an essential role in inflammation by recruiting T lymphocytes, macrophages and eosinophils to inflammatory sites, was analysed using the triplex approach. A 28 nt TFO was designed targeting the murine RANTES gene promoter, and gel mobility shift assays demonstrated that the phosphodiester TFO formed a sequence-specific triplex with the double-stranded target DNA with a Kd of 2.5 × 10⁻⁷ M. It was analysed whether RANTES expression could be inhibited at the transcriptional level by testing the TFO in two different cell lines, T helper-1 lymphocytes and brain microvascular endothelial cells (bEnd.3 cells).
Although sequence-specific binding of the TFO was detectable in the gel shift assays, no inhibitory effect of the exogenously added, phosphorothioate-stabilised TFO on endogenous RANTES gene expression was visible. Additionally, the small interfering RNA (siRNA) approach was tested as another strategy to inhibit expression of the pro-inflammatory chemokines MCP-1 and RANTES. Two different methods were pursued: transient transfection with vector-derived and with synthetic siRNA. The vector pSUPER containing the siRNA coding sequence was used to suppress endogenous MCP-1 in HEK 293 cells. An empty vector without an RNA sequence served as a control. Inhibition due to the siRNA was measured in stimulated and unstimulated cells. In TNF-α-stimulated cells, MCP-1 protein synthesis was decreased by 35 ± 11% after siRNA transfection. Using a synthetic double-stranded siRNA, the TNF-α-induced MCP-1 protein secretion could be successfully inhibited by 62.3 ± 10.3% in HEK 293 cells, indicating that the siRNA is functional in these cells to suppress chemokine expression. The siRNA approach targeting murine RANTES in Th1 cells and bEnd.3 cells revealed no inhibition of endogenous gene expression. Gene therapy approaches rely on efficient transfer of genes to the desired target cells. A wide variety of viral and non-viral vectors have been developed and evaluated for their efficiency of transduction, sustained expression of the transgene, and safety. Among them, lentiviruses have been widely used for gene therapy applications. In order to improve the delivery of TFOs or siRNAs into the target cells, cloning of the lentiviral transfer vector SEW and the production of lentiviral particles by transient transfection were performed, with the aim of generating lentiviral vector-derived TFOs in further experiments. Here, Th1 cells were transduced with infectious lentiviral particles and the transduction efficacy was measured.
Transduction efficacy higher than 82% could be achieved using the lentiviral vector SEW, opening optimal possibilities for the TFO or siRNA approach.
Lesion of the rat entorhinal cortex denervates the outer molecular layer of the fascia dentata, followed by layer-specific axonal sprouting of uninjured fibers in the denervated zone. One of the candidate molecules regulating the laminar-specific sprouting response in the outer molecular layer is the transmembrane chondroitin sulfate proteoglycan NG2. NG2 is found in glial scars and has been suggested to impede axonal regeneration following injury of the spinal cord. The present study addressed the question whether NG2 could also regulate axonal growth in denervated areas of the brain. Therefore, (1) changes in NG2 mRNA and NG2 protein levels, (2) the cellular and extracellular localisation of the molecule, (3) the identity of NG2-expressing cells, and (4) the generation of NG2-positive cells were studied in the rat fascia dentata before and following entorhinal deafferentation. Laser microdissection was employed to selectively harvest the denervated molecular layer and was combined with quantitative reverse transcription-PCR to measure changes in NG2 mRNA amount (6 h, 12 h, 2 d, 4 d, 7 d post lesion). The study revealed increases of NG2 mRNA at day 2 (2.5-fold) and day 4 (2-fold) post lesion. Immunocytochemistry was used to detect changes in NG2 protein distribution (1 d, 4 d, 7 d, 10 d, 14 d, 30 d, 6 months post lesion). NG2 staining was increased in the denervated outer molecular layer at 1 day post lesion, reached a maximum at 10 days post lesion, and returned to control levels within 6 months. Interestingly, the accumulation of NG2 protein was strongly restricted to the denervated outer molecular layer, forming a border to the unaffected inner molecular layer. Using electron microscopy, NG2 immunoprecipitate was localized not only on glial surfaces and in the extracellular matrix but also in the vicinity of neuronal profiles, indicating that NG2 is secreted following denervation.
Double-labelings of NG2-immunopositive cells with markers for astrocytes, microglia/macrophages, and oligodendrocytes suggested that NG2 cells are a distinct glial subpopulation before and after entorhinal deafferentation. Bromodeoxyuridine labeling revealed that some of the NG2-positive cells are generated postlesionally. Taken together, the data revealed a layer-specific upregulation of NG2 in the denervated outer molecular layer of the fascia dentata that coincides with the sprouting response of uninjured fibers. This suggests that NG2 could regulate lesion-induced axonal growth in denervated areas of the brain.
Results from various theoretical approaches and ideas presented at this exciting meeting (summary talk at the 5th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP - 2005)) are reviewed. I also point towards future directions, in particular hydrodynamic behaviour induced by jets traveling through the quark-gluon plasma, which might be worth looking at in more detail.
In this dissertation a non-deterministic lambda calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstractions, sequential evaluation and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or input/output with side effects. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. Hence, in section 3.2 a further calculus, lambda-Approx, has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx agrees. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. Using this extended method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus. 
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without major hurdles. However, recursive bindings cannot be represented simply by a fixed-point combinator like Y, so further investigations are necessary.
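As a minimal illustration of the interplay described above (not the lambda-ND calculus itself), the following Python sketch models call-by-need with memoized thunks and a `pick` primitive for non-deterministic choice; the names `Thunk` and `pick` are illustrative assumptions, not constructs of the dissertation.

```python
import random

class Thunk:
    """Call-by-need: the suspended expression is evaluated at most once,
    and its result is cached (shared) for all later uses."""
    def __init__(self, expr):
        self.expr = expr          # zero-argument function
        self.value = None
        self.forced = False

    def force(self):
        if not self.forced:
            self.value = self.expr()
            self.forced = True
        return self.value

def pick(a, b):
    """Non-deterministic choice between two values."""
    return random.choice([a, b])

# let x = pick 0 1 in x + x  --  with sharing, the result is always even:
x = Thunk(lambda: pick(0, 1))
result = x.force() + x.force()
assert result in (0, 2)       # never 1: both uses share one evaluation

# Call-by-name (no sharing) may evaluate pick twice and yield 1, which is
# exactly how non-determinism makes broken sharing visible.
unshared = pick(0, 1) + pick(0, 1)
assert unshared in (0, 1, 2)
```

With a deterministic expression the cached value also shows that the body runs only once, which is the "evaluated at most once" half of the call-by-need definition.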
A new approach to optimize multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced in the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing the area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools, but also better testability compared to available testability enhancement tools such as tstfx. Specifically, for ISCAS-85 benchmark circuits it was observed that EX-OR gate-based transformations successfully contributed toward generating smaller circuits compared to other state-of-the-art logic optimization tools.
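The Reed–Muller expansions mentioned above can be illustrated with a small sketch: the transform below computes positive-polarity Reed–Muller (PPRM) coefficients from a truth table via the standard XOR butterfly. This is the textbook construction, not the paper's actual synthesis algorithm.

```python
def reed_muller_coeffs(truth_table):
    """Positive-polarity Reed-Muller coefficients of a Boolean function
    given as a truth table of length 2**n (index bits = variable values),
    computed with the standard XOR 'butterfly' transform."""
    c = list(truth_table)
    n = len(c).bit_length() - 1
    step = 1
    for _ in range(n):
        for i in range(0, len(c), 2 * step):
            for j in range(i, i + step):
                c[j + step] ^= c[j]   # positive Davio: f1 := f0 XOR f1
        step *= 2
    return c

def eval_pprm(coeffs, a):
    """f(a) = XOR of all coefficients whose monomial's variables are 1 in a."""
    acc = 0
    for m, cm in enumerate(coeffs):
        if cm and (m & a) == m:
            acc ^= 1
    return acc

# 2-input XOR: its PPRM form needs no AND of both inputs.
tt = [0, 1, 1, 0]
coeffs = reed_muller_coeffs(tt)
assert coeffs == [0, 1, 1, 0]
assert all(eval_pprm(coeffs, a) == tt[a] for a in range(4))
```

Functions whose PPRM form has few product terms are exactly the candidates where EX-OR-based restructuring tends to pay off in area.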
Retiming is a widely investigated technique for performance optimization. It performs powerful modifications on a circuit netlist. However, it is often not clear whether the predicted performance improvement will still be valid after placement has been performed. This paper presents a new retiming algorithm using a highly accurate timing model that takes into account the effect of retiming on the capacitive loads of single wires as well as fanout systems. We propose the integration of retiming into a timing-driven standard cell placement environment based on simulated annealing. Retiming is used as an optimization technique throughout the whole placement process. The experimental results show the benefit of the proposed approach. In comparison with a conventional design flow based on standard FEAS, our approach achieved an improvement in cycle time of up to 34% and of 17% on average.
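The "standard FEAS" baseline mentioned above refers to the classical Leiserson–Saxe feasibility test: given a target clock period c, repeatedly move registers across nodes whose combinational arrival time exceeds c. A compact explicit-graph sketch (node delays `d`, edge register counts in `edges`, retiming labels `r`; all names are illustrative) might look as follows:

```python
from collections import defaultdict, deque

def clock_period_and_delta(nodes, edges, d, r):
    """Delta(v): longest zero-register path delay ending at v in the
    retimed graph; returns (period, delta), or (None, None) if a
    zero-weight cycle (combinational loop) exists."""
    wr = {(u, v): w + r[v] - r[u] for (u, v, w) in edges}
    zero = [(u, v) for (u, v, w) in edges if wr[(u, v)] == 0]
    succ, indeg = defaultdict(list), {v: 0 for v in nodes}
    for u, v in zero:
        succ[u].append(v)
        indeg[v] += 1
    order = []
    q = deque(v for v in nodes if indeg[v] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if len(order) < len(nodes):
        return None, None                     # combinational cycle
    delta = {v: d[v] for v in nodes}
    for u in order:                           # topological relaxation
        for v in succ[u]:
            delta[v] = max(delta[v], delta[u] + d[v])
    return max(delta.values()), delta

def feas(nodes, edges, d, c):
    """Leiserson/Saxe FEAS: find a retiming achieving period <= c, else None."""
    r = {v: 0 for v in nodes}
    for _ in range(len(nodes) - 1):
        period, delta = clock_period_and_delta(nodes, edges, d, r)
        if delta is None:
            return None
        for v in nodes:
            if delta[v] > c:                  # too slow: pull a register in
                r[v] += 1
    period, _ = clock_period_and_delta(nodes, edges, d, r)
    return r if period is not None and period <= c else None

# Three unit-delay nodes on a cycle with all three registers on one edge:
nodes = ["A", "B", "C"]
edges = [("A", "B", 0), ("B", "C", 0), ("C", "A", 3)]
delays = {"A": 1, "B": 1, "C": 1}
assert clock_period_and_delta(nodes, edges, delays, {v: 0 for v in nodes})[0] == 3
assert feas(nodes, edges, delays, 1) is not None   # redistribution reaches c=1
```

Note the contrast with the paper: FEAS optimizes against constant edge delays, whereas the proposed algorithm models how moving a register changes wire and fanout loads.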
Retiming is a widely investigated technique for performance optimization. In general, it performs extensive modifications on a circuit netlist, leaving it unclear whether the achieved performance improvement will still be valid after placement has been performed. This paper presents an approach for integrating retiming into a timing-driven placement environment. The experimental results show the benefit of the proposed approach on circuit performance in comparison with design flows using retiming only as a pre- or postplacement optimization method.
Channel routing is an NP-complete problem. Therefore, it is likely that there is no efficient algorithm solving this problem exactly. In this paper, we show that channel routing is a fixed-parameter tractable problem and that we can find a solution in linear time for a fixed channel width. We implemented our approach for the restricted layer model. The algorithm finds an optimal route for channels with up to 13 tracks within minutes or up to 11 tracks within seconds. Such narrow channels occur, for example, as a leaf problem of hierarchical routers or within standard cell generators.
We present a theoretical analysis of structural FSM traversal, which is the basis for the sequential equivalence checking algorithm Record & Play presented earlier. We compare the convergence behaviour of exact and approximative structural FSM traversal with that of standard BDD-based FSM traversal. We show that for most circuits encountered in practice exact structural FSM traversal reaches the fixed point as fast as symbolic FSM traversal, while approximation can significantly reduce the number of iterations needed. Our experiments confirm these results.
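As a toy counterpart of the fixed-point computation analyzed above, the following explicit-state sketch iterates the image of a state set until no new states appear. Real FSM traversal works on symbolic (BDD) or structural (netlist) representations of the state set, so this is only an illustration of the convergence notion.

```python
def fsm_traversal(initial_states, transition):
    """Fixed-point iteration: repeatedly add the image of the frontier
    until convergence.  Explicit-state stand-in for the symbolic or
    structural image computations discussed in the text.  Inputs are
    assumed to be a single binary signal."""
    reached = set(initial_states)
    frontier = set(initial_states)
    iterations = 0
    while frontier:
        iterations += 1
        image = {transition(s, x) for s in frontier for x in (0, 1)}
        frontier = image - reached
        reached |= frontier
    return reached, iterations

# Toy machine: a 3-bit counter that increments when the input is 1.
def step(state, inp):
    return (state + inp) % 8

states, iters = fsm_traversal({0}, step)
assert states == set(range(8))
assert iters == 8    # one new state per iteration: worst-case convergence
```

The counter is a worst case (sequential depth equals the number of states); the paper's point is that on practical circuits the exact structural fixed point is reached in as few iterations as the symbolic one, and approximation can need even fewer.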
We present the FPGA implementation of an algorithm [4] that computes implications between signal values in a Boolean network. The research was performed as a master's thesis [5] at the University of Frankfurt. The recursive algorithm is rather complex for a hardware realization, and therefore the FPGA implementation is an interesting example of the potential of reconfigurable computing beyond systolic algorithms. A circuit generator was written that transforms a Boolean network into a network of small processing elements and a global control logic which together implement the algorithm. The resulting circuit performs the computation two orders of magnitude faster than a software implementation run on a conventional workstation.
This paper presents a new timing driven approach for cell replication tailored to the practical needs of standard cell layout design. Cell replication methods have been studied extensively in the context of generic partitioning problems. However, until now it has remained unclear what practical benefit can be obtained from this concept in a realistic environment for timing driven layout synthesis. Therefore, this paper presents a timing driven cell replication procedure, demonstrates its incorporation into a standard cell placement and routing tool and examines its benefit on the final circuit performance in comparison with conventional gate or transistor sizing techniques. Furthermore, we demonstrate that cell replication can deteriorate the stuck-at fault testability of circuits and show that stuck-at redundancy elimination must be integrated into the placement procedure. Experimental results demonstrate the usefulness of the proposed methodology and suggest that cell replication should be an integral part of the physical design flow complementing traditional gate sizing techniques.
One of the most severe shortcomings of currently available equivalence checkers is their inability to verify integer multipliers. In this paper, we present a bit-level reverse-engineering technique that can be integrated into standard equivalence checking flows. We propose a Boolean mapping algorithm that extracts a network of half adders from the gate netlist of an addition circuit. Once the arithmetic bit-level representation of the circuit is obtained, equivalence checking can be performed using simple arithmetic operations. Experimental results show the promise of our approach.
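A tiny illustration of the arithmetic bit-level idea (not the paper's extraction algorithm): once a circuit is expressed as partial products plus half adders, checking it against the arithmetic specification reduces to comparing integers. The 2x2-bit multiplier below is a hand-built example that happens to need no full adders.

```python
def half_adder(a, b):
    """Half adder at the bit level: sum = a XOR b, carry = a AND b."""
    return a ^ b, a & b

def mult2_gate_level(a, b):
    """2x2-bit multiplier as a network of AND gates (partial products)
    and half adders, i.e. an arithmetic bit-level representation."""
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    p00, p01 = a0 & b0, a0 & b1     # partial products a_i AND b_j
    p10, p11 = a1 & b0, a1 & b1
    r0 = p00
    r1, c1 = half_adder(p01, p10)   # column 1: two partial products
    r2, r3 = half_adder(p11, c1)    # column 2: product plus carry
    return r0 | (r1 << 1) | (r2 << 2) | (r3 << 3)

# Equivalence check against the arithmetic specification is now a
# simple comparison of integer results over all input combinations.
assert all(mult2_gate_level(a, b) == a * b
           for a in range(4) for b in range(4))
```

For realistic bit widths the hard part, addressed by the paper, is recovering such a half-adder network from an optimized gate netlist in the first place.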
We present new concepts to integrate logic synthesis and physical design. Our methodology uses general Boolean transformations as known from technology-independent synthesis, and a recursive bi-partitioning placement algorithm. In each partitioning step, the precision of the layout data increases. This allows effective guidance of the logic synthesis operations for cycle time optimization. An additional advantage of our approach is that no complicated layout corrections are needed when the netlist is changed.
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
The thesis in general deals with CORBA, the Common Object Request Broker Architecture. More specifically, it takes a look at the server side, where object adapters exist to aid the developer in implementing objects and in dealing with request processing. The new Portable Object Adapter was recently added to the CORBA 2.2 standard. My task was the implementation of the POA in MICO and the examination of (a) whether the POA specification is sensible and (b) in which areas it improves over the old Basic Object Adapter. After introducing distributed platforms in general and CORBA in particular, the thesis's two main chapters give a detailed abstract examination of the POA design ("Design") and of its realization ("Implementation"), highlighting the potential trouble spots, persistence and collocation.
The synchronization of neuronal firing activity is considered an important mechanism in cortical information processing. The tendency of multiple neurons to synchronize their joint firing activity can be investigated with the 'unitary event' analysis (Grün, 1996). This method is based on the null hypothesis of independent Bernoulli processes and can therefore not tell whether coincidences observed between more than two processes can be considered "genuine" higher-order coincidences or whether they might be caused by coincidences of lower order that coincide by chance ("chance coincidences"). In order to distinguish between genuine and chance coincidences, a parametric model of independent interaction processes (MIIP) is presented. In the framework of this model, maximum-likelihood estimates are derived for the firing rates of n single processes and for the rates with which genuine higher-order correlations occur. The asymptotic normality of these estimates is used to derive their asymptotic variance and to investigate whether higher-order coincidences can be considered genuine or whether they can be explained by chance coincidences. The empirical test power of this procedure for n=2 and n=3 processes and for finite analysis windows is derived with simulations and compared to the asymptotic values. Finally, the model is extended in order to allow for the analysis of correlations that are caused by jittered coincidences.
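A small simulation sketch (illustrative only: it implements the independent-Bernoulli null hypothesis, not the MIIP estimator) shows how triplet coincidences arise purely by chance among independent processes, which is exactly what makes "genuine" higher-order correlation hard to identify from counts alone.

```python
import random

def bernoulli_train(p, n_bins, rng):
    """Independent Bernoulli spike train: one 0/1 value per time bin."""
    return [1 if rng.random() < p else 0 for _ in range(n_bins)]

def count_coincidences(trains):
    """Number of bins in which ALL processes fire simultaneously."""
    return sum(all(bins) for bins in zip(*trains))

rng = random.Random(0)                 # fixed seed for reproducibility
n_bins, p = 100_000, 0.1
trains = [bernoulli_train(p, n_bins, rng) for _ in range(3)]

observed = count_coincidences(trains)
expected = n_bins * p ** 3             # chance level under independence
# No third-order correlation was injected, yet triplet coincidences occur
# at roughly the chance level (within 5 standard deviations here).
assert abs(observed - expected) < 5 * expected ** 0.5
```

The MIIP goes beyond such counting by estimating, via maximum likelihood, how much coincidence rate remains after the lower-order chance contributions are accounted for.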
Jet physics in ALICE
(2005)
This work assesses the performance of the ALICE detector for the measurement of high-energy jets at mid-pseudo-rapidity in ultra-relativistic nucleus-nucleus collisions at the LHC and their potential for the characterization of the partonic matter created in these collisions. In our approach, jets at high energy with E_{T}>50 GeV are reconstructed with a cone jet finder, as typically done for jet measurements in hadronic collisions. Within the ALICE framework we study its capabilities of measuring high-energy jets and quantify obtainable rates and the quality of reconstruction, both in proton-proton and in lead-lead collisions at LHC conditions. In particular, we address whether modification of the jet fragmentation in the charged-particle sector can be detected within the high particle-multiplicity environment of central lead-lead collisions. We treat these topics comparatively in view of an EMCAL proposed to complete the central ALICE tracking detectors. The main activities concerning the thesis are the following: a) Determination of the potential for exclusive jet measurements in ALICE. b) Determination of jet rates that can be acquired with the ALICE setup. c) Development of a parton-energy-loss model. d) Simulation and study of the energy-loss effect on jet properties.
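The basic operation of a cone jet finder can be sketched in a few lines: take E_T-ordered seeds and sum all particles within a radius R in (eta, phi) space. The seed threshold, cone radius and jet E_T cut below are illustrative placeholders, not the values of the analysis, and real cone algorithms additionally iterate the cone axis to stability and handle overlapping cones.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in (eta, phi) space, with phi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def cone_jets(particles, r_cone=0.7, seed_et=5.0, min_jet_et=50.0):
    """Naive seeded cone jet finder.  particles: list of (et, eta, phi).
    Repeatedly: take the hardest remaining particle as seed, collect all
    particles within r_cone, sum their E_T, remove them, and keep the
    cluster as a jet if it passes the E_T cut."""
    remaining = sorted(particles, key=lambda p: -p[0])
    jets = []
    while remaining and remaining[0][0] >= seed_et:
        seed = remaining[0]
        cone = [p for p in remaining
                if delta_r(p[1], p[2], seed[1], seed[2]) < r_cone]
        et = sum(p[0] for p in cone)
        if et >= min_jet_et:
            jets.append((et, seed[1], seed[2]))
        remaining = [p for p in remaining if p not in cone]
    return jets

# One hard cluster near (eta, phi) = (0, 0) plus a soft background particle:
event = [(30.0, 0.0, 0.0), (25.0, 0.1, 0.1), (10.0, -0.1, 0.2), (2.0, 2.0, 1.0)]
jets = cone_jets(event)
assert len(jets) == 1 and abs(jets[0][0] - 65.0) < 1e-9
```

In the heavy-ion case discussed in the abstract, the main complication is the large underlying-event E_T inside the cone, which must be subtracted before the jet energy is meaningful.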
This thesis has explored how structural techniques can be applied to the problem of formal verification for sequential circuits. Algorithms for formal verification which operate on non-canonical gate netlist representations of digital circuits have certain advantages over the traditional techniques based on canonical representations such as BDDs. They make it possible to exploit problem-specific knowledge because they can take into account structural properties of the designs being analyzed. This allows us to break the problem down into sub-problems which are (hopefully) easier to solve. However, in the past, the main application of such structural techniques was in the field of combinational equivalence checking. One reason for this is that the behaviour of a sequential system does not only depend on its inputs but also on its internal states, and no concepts had been developed to date allowing structural methods to deal with large sets of states. An important goal of this research was therefore to develop structural, non-canonical forms of representing the reachable states of a finite state machine and to develop methods for reachability analysis based on such representations. In order to reach this goal, two steps were taken. First, a framework for manipulating Boolean functions represented as gate netlists was established. Second, using this framework, a structural method for FSM traversal was developed, serving as the basis for an equivalence checking algorithm for sequential circuits. The framework for manipulating Boolean functions represented as multi-level combinational networks is based on a new concept of an implicant in a multi-level network and on an AND/OR-type enumeration technique which allows us to derive such implicants. This concept extends the classical notion of an implicant in two-level circuits to the multi-level case. Using this notion, arbitrary transformations in multi-level combinational networks can be performed. 
The multi-level network implicants can be determined from AND/OR reasoning graphs, which are associated with an AND/OR reasoning technique operating directly on the gate netlist description of a multi-level circuit. This reasoning technique has the important property that it is complete, i.e. the associated AND/OR trees contain all prime implicants of a Boolean function at an arbitrary node in a combinational circuit. In other words, AND/OR graphs constructed for a network function serve as a representation of this function. A great advantage over BDDs is that AND/OR graphs, besides representing the logic function, also represent some structural properties of the analyzed circuitry. This permits the development of heuristics that are specially tailored for certain applications such as logic optimization or verification. Another advantage, which is especially useful for logic optimization, is the fact that the proposed AND/OR enumeration scheme is not restricted to the use of a specific logic alphabet such as B3 = {0, 1, X}. By using Roth's D-calculus based on B5 = {0, 1, X, D, D-complement}, permissible implicants can be determined. Transformations based on permissible implicants exploit observability don't-care conditions in logic synthesis by creating permissible functions at internal network nodes. In order to evaluate the new structural framework for manipulating Boolean functions represented as gate netlists, several experiments with implicant-based optimization of multi-level circuits were performed. The results show that implicant-based circuit transformations lead to significantly better optimization results than traditional synthesis techniques. Next, based on the proposed structural methods for Boolean function manipulation, techniques for representing and manipulating the set of states of a sequential circuit have been developed. 
The concept of a “stub circuit” was introduced, which implicitly represents a set of state vectors as the range of a multi-output function given as a gate netlist. The stub circuit is the result of an existential quantification operation which is obtained by functional decomposition using implicant-based netlist transformations and a network cutting procedure. Using this existential quantification operation, a new structural FSM traversal algorithm was formulated which performs a fixed-point iteration on the set of reachable states represented by the stub circuit. The proposed approach performs a reachability analysis of the states of a sequential circuit. It operates on gate netlists and naturally allows structural properties of a design under consideration to be incorporated into the reasoning. Therefore, structural FSM traversal is an interesting alternative to traditional symbolic FSM traversal, especially in those applications of formal verification where structural properties can be exploited. Structural FSM traversal was applied to the problem of sequential equivalence checking. Here, structural similarities between the designs to be compared can effectively reduce the complexity of the verification task. The FSM to be traversed is a special product machine called a sequential miter. The special structural properties of this product machine have made it possible to formulate an approximate algorithm for structural FSM traversal, called record and play(). This algorithm uses an approximation of the reachable state set represented by the stub circuit which is very beneficial for performance. Instead of calculating the stub circuit using the exact algorithm, implicant-based transformations directly using structural design similarities are performed. These transformations, together with existential quantification implemented by the cutting procedure, lead to an over-approximation of the reachable state set. 
Through this over-approximation, only such unreachable product states are added to the set of states represented by the stub circuit which are unreachable at the current point in time but which are nevertheless equivalent. Therefore, more product states are added to the set of reachable states, sometimes leading to a drastic acceleration of the traversal, i.e. the fixed point is reached in far fewer steps. The algorithm record and play() was applied to the problem of checking the equivalence of a circuit with its optimized and retimed version. Retiming is a form of sequential circuit optimization which can radically alter the state encoding of a circuit. Traditional FSM traversal techniques often fail because the BDDs needed to represent the reachable state set and the transition relation of the product machine become too large. Experiments were conducted to evaluate the performance of record and play() on a standard set of sequential benchmark circuits. The algorithm was capable of proving the equivalence of optimized and retimed circuits with their original versions, some of which (to our knowledge) have never before been verified using traditional techniques like symbolic FSM traversal. The experimental results are very promising. Future research will therefore explore how structural FSM traversal can be applied to model checking.
This paper argues that short (clause-internal) scrambling to a pre-subject position has A properties in Japanese but A'-properties in German, while long scrambling (scrambling across sentence boundaries) from finite clauses, which is possible in Japanese but not in German, has A'-properties throughout. It is shown that these differences between German and Japanese can be traced back to parametric variation of phrase structure and the parameterized properties of functional heads. Due to the properties of Agreement, sentences in Japanese may contain multiple (Agro- and Agrs-) specifiers whereas German does not allow for this. In Japanese, a scrambled element may be located in a Spec AgrP, i.e. an A- or L-related position, whereas scrambled NPs in German can only appear in an AgrP-adjoined (broadly-L-related) position, which only has A'-properties. Given our assumption that successive cyclic adjunction is generally impossible, elements in German may not be long scrambled because a scrambled element that is moved to an adjunction site inside an embedded clause may not move further. In Japanese, long distance scrambling out of finite CPs is possible since scrambling may proceed in a successive cyclic manner via embedded Spec- (AgrP) positions. Our analysis of the differences between German and Japanese scrambling provides us with an account of further contrasts between the two languages such as the existence of surprising asymmetries between German and Japanese remnant-movement phenomena, and the fact that unlike German, Japanese freely allows wh-scrambling. Investigation of the properties of Japanese wh-movement also leads us to the formulation of the "Wh-cluster Hypothesis", which implies that Japanese is an LF multiple wh-fronting language.
Left dislocation in Zulu
(2004)
This paper examines left dislocation constructions in Zulu, a Southern Bantu language belonging to the Nguni group (Zone S 40). In Zulu left dislocation configurations, a topic phrase in the beginning of the sentence is linked to a resumptive element within the associated clause. Typically, the resumptive element is an incorporated pronoun (cf. Bresnan & Mchombo 1987), as illustrated by the examples in (1) and (2). In these examples, the object pronoun (in italics) is part of the verbal morphology and agrees with the noun class (gender) of the dislocate. This situation is schematically illustrated in (3), where co-indexation represents agreement: ...
In this paper I discuss the properties of particle verbs in light of a proposal about syntactic projection. In section 2 I suggest that projection involves functional structure in two important ways: (i) only functional phrases can be complements, and (ii) lexical heads that take complements and project must be inflected. In section 3, I show that the structure of particle verbs is not uniform with respect to (i) and (ii). On the one hand, a particle always combines with an inflected verb; in this respect, particle verbs look like verb-complement constructions. On the other hand, the particle is not a functional phrase and therefore is not a proper complement, which makes the combination of the particle and the verb look more like a morphologically complex verb. I argue that syntactic rules can in fact interpret the node dominating the particle and the verb as a projection and as a complex head. In section 4, I show that many of the characteristic properties of particle verbs in the Germanic languages follow from the fact that they are structural hybrids.
In this article, I discuss some important properties of wh-questions and wh-scrambling in Japanese. The questions I will address are (i) which instances of (wh-) scrambling involve reconstruction and (ii) how the undoing effects of scrambling can be derived. First I will discuss the claim that (wh-) scrambling is semantically vacuous and is therefore undone at LF (Saito 1989, 1992). Then I consider the data that led Takahashi (1993) to the conclusion that at least some instances of wh-scrambling have to be analyzed as instances of "full wh-movement", i.e., overt movement of the wh-phrase into its scopal position. It will be argued that these examples are not instances of full wh-movement in Japanese, but that they also represent semantically vacuous scrambling. Those instances of scrambling that apparently cannot be undone are best explained with recourse to parsing effects. I conclude that wh-scrambling in Japanese is always triggered by a ([-wh]-) scrambling feature. In addition, long distance scrambling (scrambling out of finite CPs) is analyzed as adjunction movement, whereas short distance scrambling is movement to a specifier position of IP. Turning to the mechanisms of undoing, I will argue that only long distance scrambling is undone. This is shown to follow from Chomsky's (1995) bare phrase structure analysis, according to which multi-segmental categories derived by adjunction movement are not licensed at LF. The article is organized as follows. In section 2, the wh-scrambling phenomenon is described. In section 3, I discuss the reconstruction properties of scrambling. In addition, this section provides some basic assumptions about my analysis of Japanese scrambling in general. In section 4, I turn to the analysis of wh-scrambling as an instance of full wh-movement in Japanese. Section 5 provides discussion of multiple wh-questions in Japanese, and section 6 gives the conclusion.
The languages of the world differ with respect to argument extraction possibilities. In languages such as English, wh-movement is possible from Spec IP and from the complement position, whereas in languages such as Malagasy only extraction from Spec IP is possible. This difference correlates with the fact that these language types obey different island constraints and behave differently with respect to wh-in situ and superiority effects. The goal of this paper is to outline an analysis for these differences. The basic idea is that in contrast to languages such as English, in Malagasy-type languages every argument can be merged in the complement position of the selecting head.
Expletives as features
(2000)
Expletives have always been a central topic of theoretical debate and subject to different analyses within the different stages of the Principles and Parameters theory (see Chomsky 1981, 1986, 1995; Lasnik 1992, 1995; Frampton and Gutman 1997; among others). However, most analyses center on the question of how to explain the behavior of expletives in A-chains (such as there in English or það in Icelandic). No account relates wh-expletives (as one finds them in so-called partial wh-movement constructions in languages such as Hungarian, Romani, and German) to expletives in A-chains. In this paper, I argue that the framework of the Minimalist Program opens up the possibility of accounting for expletive-associate relations in A-/A'-chains in a unified manner. The main idea of the unitary analysis is that an expletive is an overtly realized feature bundle that is (sub)extracted from its associate DP. There in an expletive-associate chain is a moved D-feature which originates inside the associate DP. Similarly, in A'-chains, the wh-expletive originates as a focus-/wh-feature in the wh-phrase with which it is associated. This analysis provides evidence for the feature-checking theory in Chomsky (1995). The paper is organized as follows. Section 2 contains the discussion of expletive there. In section 3 I suggest an analysis for wh-expletives, and I also explore whether this analysis can be extended to relations between X°-categories such as auxiliary and participle complexes.
In this paper I show that Clitic Climbing (CC) in Spanish and Long Scrambling (LS) in German (and Polish) are (im-)possible out of the same environments. For an explanation of this fact I propose a feature-oriented analysis of incorporation phenomena. The idea is that restructuring is a phenomenon of syntactic incorporation. In German and Polish, Agro incorporates covertly into the matrix clause and licenses LS out of the infinitival into the matrix clause. Similarly, the clitic in Spanish, which is analysed as an Agro-head, incorporates into the matrix clause. I argue that this movement is necessary for reasons of feature-checking, i.e., for checking of a [+R]- or Restructuring-feature. In section 2 I discuss several differences between CC and LS. For example, the proposed analysis correctly predicts that clitics, in contrast to scrambled phrases, are subject to several serialization restrictions. Throughout the paper I use the term restructuring only in a descriptive sense, in order to describe the phenomenon in question.
The assumption that mankind is able to influence global or regional climate, respectively, due to the emission of greenhouse gases is often discussed. This assumption is both very important and very obscure. In consequence, it is necessary to clarify definitively which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, and to which extent in which regions of the world. In addition, to be able to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example due to volcanic or solar activity) and the magnitude of stochastic climate noise. The usual tool of climatologists, general circulation models (GCMs), suffers from the problem that such models are at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast to that, statistical methods as used in this study have the advantage of testing hypotheses directly on observational data. Thus, we focus on the very reality of climate variability as it has occurred in the past. We apply two strategies of time series analysis with regard to the observed climate variables under consideration. First, each time series is split into its variation components. This procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
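The 'structure-oriented time series separation' can be illustrated with a minimal sketch that splits a series into a linear trend, a mean seasonal cycle and a residual "noise" component. The study's actual method is more elaborate; this is only a toy decomposition, and the synthetic series below is an illustrative stand-in for observational data.

```python
def separate(series, period):
    """Toy structure-oriented separation: ordinary-least-squares linear
    trend, then the mean cycle per phase of the given period, then the
    residual.  Returns (trend, seasonal, noise), each a list."""
    n = len(series)
    t = list(range(n))
    t_mean = sum(t) / n
    y_mean = sum(series) / n
    slope = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, series))
             / sum((ti - t_mean) ** 2 for ti in t))
    intercept = y_mean - slope * t_mean
    trend = [intercept + slope * ti for ti in t]
    detrended = [yi - tri for yi, tri in zip(series, trend)]
    cycle = []
    for phase in range(period):
        vals = detrended[phase::period]
        cycle.append(sum(vals) / len(vals))   # mean seasonal value per phase
    seasonal = [cycle[i % period] for i in range(n)]
    noise = [di - si for di, si in zip(detrended, seasonal)]
    return trend, seasonal, noise

# Synthetic series: linear trend plus a zero-mean seasonal pattern that is
# orthogonal to the trend, with no noise -- so the residual should vanish.
pattern = [1.0, -1.0, -1.0, 1.0]
y = [0.05 * i + pattern[i % 4] for i in range(80)]
trend, seasonal, noise = separate(y, 4)
assert max(abs(e) for e in noise) < 1e-9
```

On real climate records the residual does not vanish; its size relative to the trend and cycle components is precisely what the significance assessment in such studies is about.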
The results presented here strongly indicate that ubiquitination of the recombinant human alpha1 GlyR at the plasma membrane of Xenopus oocytes is involved in receptor internalisation and degradation. Ubiquitination of the human alpha1 GlyR has been demonstrated by radio-iodination of plasma membrane-bound alpha1 GlyRs, whose subunits differed in molecular weight by an additional 7, 14 or 21 kDa, corresponding to the molecular weights of one, two and three conjugated ubiquitin molecules, respectively, and by co-isolation of the non-tagged human alpha1 GlyR through hexahistidyl-tagged ubiquitin. Ubiquitin-conjugated GlyRs were prominent at the plasma membrane but could hardly be detected in total cell homogenates, indicating that ubiquitination takes place exclusively at the plasma membrane. Ubiquitination of the alpha1 GlyR at the plasma membrane was no longer detectable when the ten lysine residues of the cytoplasmic loop between transmembrane segments M3 and M4 were replaced by arginines. Despite this, proteolytic cleavage continued to take place to the same extent as with the wild-type alpha1 GlyR, suggesting that removal of GlyRs from the plasma membrane and routing to lysosomes for degradation were not dependent on ubiquitination. Replacing the tyrosine at position 339, which was speculated to be part of an additional endocytosis motif, also did not lead to a significant reduction of cleavage of the GlyR alpha1 subunits. However, a mutant lacking both the ubiquitination sites and 339Y was significantly less processed. These results may suggest that the GlyR alpha1 subunit harbors at least two endocytosis motifs, which may act independently to regulate the density of the alpha1 GlyR. Apparently, each of the two signals may be capable of entirely compensating the loss of the other. 
Part two of this Dissertation demonstrates that the correct topology of the glycine receptor alpha1 subunit depends critically on six positively charged residues within a basic cluster, RFRRKRR, located in the large cytoplasmic loop following the C-terminal end of M3. Neutralization of one or more charges of this cluster, but not of other charged residues in the M3-M4 loop, led to aberrant translocation of the M3-M4 loop into the endoplasmic reticulum lumen. However, when two of the three basic charges located in the ectodomain linking M2 and M3 were neutralized in addition to two charges of the basic cluster, endoplasmic reticulum disposition of the M3-M4 loop was prevented. We conclude that a high density of basic residues C-terminal to M3 is required to compensate for the presence of positively charged residues in the M2-M3 ectodomain, which otherwise impair correct membrane integration of the M3 segment. Part three of this Dissertation describes my contribution (blue native PAGE analysis of metabolically labeled alpha7 and 5HT3A receptors and the examination of the glycosylation state of metabolically labeled alpha7 subunits) to a work on the limited assembly capacity of Xenopus oocytes for nicotinic alpha7 subunits. While 5HT3A subunits combined efficiently into pentamers, alpha7 subunits existed in various assembly states, including trimers, tetramers, pentamers, and aggregates. Only alpha7 subunits that completed the assembly process to homopentamers acquired complex-type carbohydrates and appeared at the cell surface. We conclude that Xenopus oocytes have a limited capacity to guide the assembly of alpha7 subunits, but not of 5HT3A subunits, to homopentamers. Accordingly, ER retention of imperfectly assembled alpha7 subunits, rather than inefficient routing of fully assembled alpha7 receptors to the cell surface, limits surface expression levels of alpha7 nicotinic acetylcholine receptors. 
Part four of this Dissertation describes my contribution (the biochemical analysis of the human P2X2 and P2X6 subtypes) to studies on the quaternary structure of P2X receptors. Armaz Aschrafi, the main author of the paper, showed that, subsequent to isolation from Xenopus oocytes under non-denaturing conditions, the His-rP2X2 protein migrated on blue native PAGE predominantly in an aggregated form. The only discrete protein band detectable could be assigned to homotrimers of the His-rP2X2 subunit. Because of the exceptional assembly behaviour of the rP2X2 protein compared to the rP2X1, rP2X3, rP2X4 and rP2X5 proteins, its human orthologue was investigated in the same manner. In contrast to rP2X2 subunits, hP2X2 subunits migrated under virtually identical conditions in a single defined assembly state, which could be clearly assigned to a trimer. P2X6 subunits represent the sole P2X subtype that is unable to form functional homomeric receptors in Xenopus oocytes. The blue native PAGE analysis of metabolically labeled hP2X6 receptors and the examination of their glycosylation state revealed that hP2X6 subunits form tetramers and aggregates that are not exported to the plasma membrane of Xenopus oocytes.
In the present work, the Heidelberg electron beam ion trap (EBIT) at the Max-Planck-Institut für Kernphysik (MPIK) has been used to produce and trap highly charged argon ions and to study their magnetic dipole (M1) forbidden transitions. These transitions are of relativistic origin and hence provide unique possibilities to perform precise studies of relativistic effects in many-electron systems. In this way, the energies of the 2P3/2 - 2P1/2 transition within the 1s22s22p configuration of Ar13+ and of the 3P1 - 3P2 transition within the 1s22s2p configuration of Ar14+ were compared for the 36Ar and 40Ar isotopes. The observed isotopic effect has confirmed the relativistic nuclear recoil corrections due to the finite nuclear mass in a recent calculation by Tupitsyn [TSC03], in which major inconsistencies of earlier theoretical methods were corrected for the first time. The finite-mass, or recoil, effect, composed of the normal mass shift (NMS) and the specific mass shift (SMS), was corrected for relativistic contributions, the RNMS and RSMS. The present experimental results have shown that the recoil effects at the Breit level are indeed very important, as are the effects of the correlated relativistic dynamics in a many-electron ion.
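For orientation, the NMS/SMS decomposition mentioned above corresponds, in the non-relativistic limit, to the standard mass-shift operator of isotope-shift theory (a textbook form, not taken from the cited calculation, which additionally includes the relativistic RNMS/RSMS corrections):

```latex
H_{\mathrm{MS}} \;=\; \frac{1}{2M}\Bigl(\sum_i \mathbf{p}_i\Bigr)^{2}
\;=\; \underbrace{\frac{1}{2M}\sum_i \mathbf{p}_i^{2}}_{\text{NMS}}
\;+\; \underbrace{\frac{1}{M}\sum_{i<j} \mathbf{p}_i\cdot\mathbf{p}_j}_{\text{SMS}},
```

where M is the nuclear mass and the p_i are the electron momenta; the 1/M scaling is what makes the comparison of the 36Ar and 40Ar isotopes sensitive to these recoil terms.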
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed some light on the nuclear equation of state (EOS); hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used for describing heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of the flow effects with a broad comparison with the available data is presented. The many-body aspects of such reactions are investigated via the multifragmentation break-up of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
In the framework of the relativistic quantum dynamics approach we investigate antiproton observables in Au-Au collisions at 10.7A GeV. The rapidity dependence of the in-plane directed transverse momentum px(y) of antiprotons shows the opposite sign to the nucleon flow, which has indeed recently been discovered at 10.7A GeV by the E877 group. The "antiflow" of antiprotons is also predicted at 2A GeV and at 160A GeV, and appears at all energies also for pi's and K's. These predicted antiproton anticorrelations are a direct proof of strong antiproton annihilation in massive heavy-ion reactions.
The quantum statistical model (QSM) is used to calculate nuclear fragment distributions in chemical equilibrium. Several observable isotopic effects are predicted for intermediate-energy heavy-ion collisions. It is demonstrated that particle ratios for different systems do not depend on the breakup density, the only free parameter in our model. The importance of entropy measurements is discussed. Specific particle ratios for the system Au-Au are predicted, which can be used to determine the chemical potentials of the hot midrapidity fragment source in nearly central heavy-ion collisions. PACS number: 25.70.Pq
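The density independence of the ratios can be made plausible in the Boltzmann limit of such a chemical-equilibrium description (a generic textbook form in natural units, not the paper's full quantum-statistical expressions): the density of a species i with mass m_i, degeneracy g_i and chemical potential mu_i is

```latex
n_i \;=\; g_i \left(\frac{m_i T}{2\pi}\right)^{3/2} e^{(\mu_i - m_i)/T},
\qquad
\frac{n_i}{n_j} \;=\; \frac{g_i}{g_j}\left(\frac{m_i}{m_j}\right)^{3/2}
e^{\left[\mu_i-\mu_j-(m_i-m_j)\right]/T},
```

so the overall density (and hence the breakup volume) cancels in the ratio, which then depends only on the temperature and the chemical potentials; this is why measured ratios can be inverted for the chemical potentials of the fragment source.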