In this dissertation, a non-deterministic lambda calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. Beyond that, the calculus has function application, lambda abstraction, sequential evaluation, and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or input/output with side effects. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, necessary insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. Hence, in section 3.2 a further calculus, lambda-Approx, has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx agrees. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. Using this extended method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained in the contextual preorder of the lambda-ND calculus.
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus, such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without major hurdles. However, recursive bindings cannot be represented simply by a fixed-point combinator like Y, so further investigations are necessary.
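The interplay of sharing and non-determinism described above can be sketched in Python (a loose illustration of the idea, not the lambda-ND calculus itself): a memoized thunk models a let-bound subexpression, evaluated at most once, and a non-deterministic pick makes broken sharing observable.

```python
import random

class Thunk:
    """A shared, memoized subexpression: evaluated at most once (call-by-need)."""
    def __init__(self, expr):
        self.expr = expr          # zero-argument function producing the value
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:    # evaluate only on first demand
            self.value = self.expr()
            self.evaluated = True
        return self.value

def pick(a, b):
    """Non-deterministic choice between two values."""
    return random.choice([a, b])

# let x = pick 0 1 in x + x
# With sharing intact, both occurrences of x refer to the same thunk, so the
# result is always even (0 or 2); an implementation with broken sharing could
# evaluate pick twice and yield 1. Non-determinism makes the difference visible.
x = Thunk(lambda: pick(0, 1))
result = x.force() + x.force()
```

This is exactly the observation the abstract makes: under a deterministic choice, copying the bound expression would be invisible, while with pick the loss of sharing changes the observable results.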
A new approach to optimizing multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced into the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools but also better testability than available testability enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits, EX-OR gate-based transformations were observed to contribute successfully toward generating smaller circuits compared to other state-of-the-art logic optimization tools.
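The Reed–Muller expansions such transformations rely on can be computed from a truth table with the standard XOR butterfly transform. The following minimal sketch (function name and example are ours, not from the paper) derives the positive-polarity Reed–Muller coefficients of a Boolean function:

```python
def reed_muller_coeffs(truth_table):
    """Positive-polarity Reed-Muller (ANF) coefficients from a truth table
    of length 2**n, via the in-place XOR butterfly transform over GF(2)."""
    coeffs = list(truth_table)
    n = len(coeffs)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                coeffs[j + step] ^= coeffs[j]   # XOR butterfly stage
        step *= 2
    return coeffs

# f(a, b) = a XOR b has truth table [0, 1, 1, 0] (inputs 00, 01, 10, 11);
# its ANF is a XOR b, i.e. coefficients [0, 1, 1, 0] for (1, b, a, ab).
coeffs = reed_muller_coeffs([0, 1, 1, 0])
```

Functions whose Reed–Muller form has few terms are natural candidates for EX-OR gate realizations, which is the connection to the synthesis method above.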
Retiming is a widely investigated technique for performance optimization. It performs powerful modifications on a circuit netlist. However, it is often not clear whether the predicted performance improvement will still be valid after placement has been performed. This paper presents a new retiming algorithm using a highly accurate timing model that takes into account the effect of retiming on the capacitive loads of single wires as well as of fanout systems. We propose the integration of retiming into a timing-driven standard cell placement environment based on simulated annealing. Retiming is used as an optimization technique throughout the whole placement process. The experimental results show the benefit of the proposed approach. In comparison with the conventional design flow based on standard FEAS, our approach achieved an improvement in cycle time of up to 34% and of 17% on average.
Retiming is a widely investigated technique for performance optimization. In general, it performs extensive modifications on a circuit netlist, leaving it unclear whether the achieved performance improvement will still be valid after placement has been performed. This paper presents an approach for integrating retiming into a timing-driven placement environment. The experimental results show the benefit of the proposed approach on circuit performance in comparison with design flows that use retiming only as a pre- or post-placement optimization method.
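The netlist modification underlying retiming is the classical register relocation rule: with a retiming label r(v) per gate, each wire u -> v with w registers gets w + r(v) - r(u) registers, and the retiming is legal only if no count becomes negative. The sketch below (a generic illustration of that rule, not either paper's algorithm) applies and checks such labels:

```python
def retime(edges, r):
    """Apply retiming labels r to a netlist graph.
    edges: list of (u, v, w) where w is the register count on wire u -> v.
    Returns the retimed edge list, or None if any weight would become
    negative (an illegal retiming)."""
    new_edges = []
    for u, v, w in edges:
        w_r = w + r[v] - r[u]    # register relocation rule
        if w_r < 0:
            return None          # register counts cannot be negative
        new_edges.append((u, v, w_r))
    return new_edges

# A 3-gate cycle with 2 registers; r moves one register across gate 'b'.
# The total register count on the cycle is preserved, only positions change.
edges = [('a', 'b', 1), ('b', 'c', 0), ('c', 'a', 1)]
retimed = retime(edges, {'a': 0, 'b': -1, 'c': 0})
```

Because register positions change but the cycle totals do not, the circuit's behaviour is preserved while its critical paths, and hence its cycle time, can change, which is why placement-aware timing models matter here.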
Channel routing is an NP-complete problem. Therefore, it is likely that there is no efficient algorithm solving this problem exactly. In this paper, we show that channel routing is a fixed-parameter tractable problem and that we can find a solution in linear time for a fixed channel width. We implemented our approach for the restricted layer model. The algorithm finds an optimal route for channels with up to 13 tracks within minutes, or up to 11 tracks within seconds. Such narrow channels occur, for example, as a leaf problem of hierarchical routers or within standard cell generators.
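The channel width parameter has a well-known lower bound, the channel density: the maximum number of nets whose horizontal spans overlap at any column, each of which needs its own track there. This short sketch (our own illustration, not part of the paper) computes it:

```python
def channel_density(nets):
    """Lower bound on the channel width: the maximum number of nets whose
    horizontal spans overlap at any single column.
    nets: list of (left, right) column spans, inclusive."""
    if not nets:
        return 0
    columns = range(min(l for l, r in nets), max(r for l, r in nets) + 1)
    return max(sum(1 for l, r in nets if l <= c <= r) for c in columns)

# Three nets whose spans all overlap at column 3: at least 3 tracks needed.
density = channel_density([(1, 4), (2, 6), (3, 5)])
```

Channels with density around 11 to 13 are exactly the "narrow" instances the abstract mentions, where a fixed-parameter algorithm in the channel width is practical.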
We present a theoretical analysis of structural FSM traversal, which is the basis for the sequential equivalence checking algorithm Record & Play presented earlier. We compare the convergence behaviour of exact and approximate structural FSM traversal with that of standard BDD-based FSM traversal. We show that for most circuits encountered in practice, exact structural FSM traversal reaches the fixed point as fast as symbolic FSM traversal, while approximation can significantly reduce the number of iterations needed. Our experiments confirm these results.
We present the FPGA implementation of an algorithm [4] that computes implications between signal values in a Boolean network. The research was performed as a master's thesis [5] at the University of Frankfurt. The recursive algorithm is rather complex for a hardware realization, and the FPGA implementation is therefore an interesting example of the potential of reconfigurable computing beyond systolic algorithms. A circuit generator was written that transforms a Boolean network into a network of small processing elements and a global control logic which together implement the algorithm. The resulting circuit performs the computation two orders of magnitude faster than a software implementation run on a conventional workstation.
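The kind of implication computation meant here can be illustrated on a network of AND gates: asserting a signal value forces values on other signals, and these direct implications are propagated to a fixed point. The following simplified model is our own construction, not the algorithm of [4]:

```python
def direct_implications(gates, assignment):
    """Propagate direct implications of signal values through a network of
    2-input AND gates until a fixed point is reached.
    gates: {output: (input1, input2)}; assignment: {signal: 0 or 1}.
    Returns the extended assignment, or None on a conflict."""
    values = dict(assignment)
    changed = True
    while changed:
        changed = False
        for out, (a, b) in gates.items():
            implied = {}
            if values.get(out) == 1:
                implied = {a: 1, b: 1}            # AND = 1 forces both inputs to 1
            elif values.get(a) == 1 and values.get(b) == 1:
                implied = {out: 1}
            elif values.get(a) == 0 or values.get(b) == 0:
                implied = {out: 0}                # a controlling 0 forces the output
            for sig, val in implied.items():
                if values.get(sig, val) != val:
                    return None                   # contradictory implication
                if sig not in values:
                    values[sig] = val
                    changed = True
    return values

# y = a AND b, z = y AND c: asserting z = 1 implies a = b = c = y = 1.
implied = direct_implications({'y': ('a', 'b'), 'z': ('y', 'c')}, {'z': 1})
```

The recursive algorithm in [4] learns far more than such direct implications; the sketch only shows the basic value-propagation step that the processing-element network parallelizes.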
This paper presents a new timing driven approach for cell replication tailored to the practical needs of standard cell layout design. Cell replication methods have been studied extensively in the context of generic partitioning problems. However, until now it has remained unclear what practical benefit can be obtained from this concept in a realistic environment for timing driven layout synthesis. Therefore, this paper presents a timing driven cell replication procedure, demonstrates its incorporation into a standard cell placement and routing tool and examines its benefit on the final circuit performance in comparison with conventional gate or transistor sizing techniques. Furthermore, we demonstrate that cell replication can deteriorate the stuck-at fault testability of circuits and show that stuck-at redundancy elimination must be integrated into the placement procedure. Experimental results demonstrate the usefulness of the proposed methodology and suggest that cell replication should be an integral part of the physical design flow complementing traditional gate sizing techniques.
One of the most severe shortcomings of currently available equivalence checkers is their inability to verify integer multipliers. In this paper, we present a bit-level reverse-engineering technique that can be integrated into standard equivalence checking flows. We propose a Boolean mapping algorithm that extracts a network of half adders from the gate netlist of an addition circuit. Once the arithmetic bit-level representation of the circuit is obtained, equivalence checking can be performed using simple arithmetic operations. Experimental results show the promise of our approach.
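The arithmetic bit-level view can be illustrated by building addition entirely from half adders, mirroring the kind of network the mapping algorithm extracts from a gate netlist. This sketch is illustrative only (the paper works on gate netlists, not Python functions):

```python
def half_adder(a, b):
    """Half adder at the arithmetic bit level: a + b = sum + 2 * carry."""
    return a ^ b, a & b   # (sum, carry)

def add_via_half_adders(x, y, width):
    """Ripple addition built purely from half adders: each bit position
    chains two half adders, and their carries feed the next position."""
    result, carry = 0, 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry)
        result |= s2 << i
        carry = c1 | c2       # at most one of c1, c2 can be 1
    return result
```

Once a netlist is recognized as such a half-adder network, proving it equivalent to "+" reduces to checking these simple arithmetic relations per bit position instead of reasoning about the full gate-level structure.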
We present new concepts to integrate logic synthesis and physical design. Our methodology uses general Boolean transformations as known from technology-independent synthesis, and a recursive bi-partitioning placement algorithm. In each partitioning step, the precision of the layout data increases. This allows effective guidance of the logic synthesis operations for cycle time optimization. An additional advantage of our approach is that no complicated layout corrections are needed when the netlist is changed.
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
The thesis in general deals with CORBA, the Common Object Request Broker Architecture. More specifically, it takes a look at the server side, where object adapters exist to aid the developer in implementing objects and in dealing with request processing. The new Portable Object Adapter (POA) was recently added to the CORBA 2.2 standard. My task was the implementation of the POA in MICO and the examination of (a) whether the POA specification is sensible and (b) in which areas it improves on the old Basic Object Adapter. After introducing distributed platforms in general and CORBA in particular, the thesis's two main chapters give a detailed abstract examination of the POA design ("Design") and of its realization ("Implementation"), highlighting the potential trouble spots, persistence and collocation.
The synchronization of neuronal firing activity is considered an important mechanism in cortical information processing. The tendency of multiple neurons to synchronize their joint firing activity can be investigated with the 'unitary event' analysis (Grün, 1996). This method is based on the null hypothesis of independent Bernoulli processes and therefore cannot tell whether coincidences observed between more than two processes can be considered "genuine" higher-order coincidences or whether they might be caused by coincidences of lower order that coincide by chance ("chance coincidences"). In order to distinguish between genuine and chance coincidences, a parametric model of independent interaction processes (MIIP) is presented. In the framework of this model, maximum-likelihood estimates are derived for the firing rates of n single processes and for the rates with which genuine higher-order correlations occur. The asymptotic normality of these estimates is used to derive their asymptotic variance and to investigate whether higher-order coincidences can be considered genuine or whether they can be explained by chance coincidences. The empirical test power of this procedure for n=2 and n=3 processes and for finite analysis windows is derived with simulations and compared to the asymptotic values. Finally, the model is extended to allow for the analysis of correlations that are caused by jittered coincidences.
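The null hypothesis of independent Bernoulli processes fixes how many coincidences are expected by chance: the joint firing probability per time bin is simply the product of the single-process probabilities. A minimal sketch (with made-up rates, purely for illustration):

```python
def expected_coincidences(rates, n_bins):
    """Expected number of joint coincidences of independent Bernoulli
    processes: under the independence null hypothesis, the probability of
    a joint spike in one bin is the product of the per-process firing
    probabilities, and the expectation scales with the number of bins."""
    p_joint = 1.0
    for p in rates:
        p_joint *= p
    return n_bins * p_joint

# Two processes firing with probability 0.1 per bin over 10000 bins:
# about 100 coincidences are expected even without any correlation.
expected = expected_coincidences([0.1, 0.1], 10000)
```

Unitary event analysis compares the observed coincidence count against this expectation; the MIIP goes further by modelling the rates of genuine higher-order correlations explicitly.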
Jet physics in ALICE
(2005)
This work assesses the performance of the ALICE detector for the measurement of high-energy jets at mid-pseudo-rapidity in ultra-relativistic nucleus-nucleus collisions at the LHC, and the potential of such jets for the characterization of the partonic matter created in these collisions. In our approach, high-energy jets with E_{T} > 50 GeV are reconstructed with a cone jet finder, as is typically done for jet measurements in hadronic collisions. Within the ALICE framework, we study the detector's capabilities for measuring high-energy jets and quantify the obtainable rates and the quality of reconstruction, both in proton-proton and in lead-lead collisions at LHC conditions. In particular, we address whether modification of the jet fragmentation in the charged-particle sector can be detected within the high particle-multiplicity environment of central lead-lead collisions. We treat these topics comparatively in view of an EMCAL proposed to complement the central ALICE tracking detectors. The main activities of the thesis are the following: a) determination of the potential for exclusive jet measurements in ALICE; b) determination of jet rates that can be acquired with the ALICE setup; c) development of a parton-energy-loss model; d) simulation and study of the energy-loss effect on jet properties.
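A seeded cone jet finder of the kind referred to above can be sketched as follows. This is a strongly simplified illustration (no iterated cone axis, no split/merge step), not the ALICE implementation:

```python
import math

def cone_jet_finder(particles, cone_radius=0.7, seed_et=5.0):
    """Simplified seeded cone jet finder. particles: list of (et, eta, phi).
    Take the highest-ET unused particle above the seed threshold as the cone
    axis and sum the ET of all particles within the cone in eta-phi space."""
    remaining = sorted(particles, reverse=True)   # highest ET first
    jets = []
    while remaining and remaining[0][0] >= seed_et:
        et0, eta0, phi0 = remaining[0]
        in_cone, out = [], []
        for p in remaining:
            deta = p[1] - eta0
            # wrap the azimuthal difference into (-pi, pi]
            dphi = math.atan2(math.sin(p[2] - phi0), math.cos(p[2] - phi0))
            (in_cone if math.hypot(deta, dphi) < cone_radius else out).append(p)
        jets.append(sum(p[0] for p in in_cone))   # jet ET = sum over the cone
        remaining = out
    return jets

# A 60 GeV seed plus a nearby 10 GeV particle form one 70 GeV jet;
# the distant soft particle is left unclustered.
jets = cone_jet_finder([(60.0, 0.1, 0.0), (10.0, 0.2, 0.1), (1.0, 2.0, 3.0)])
```

In the heavy-ion environment the difficulty is precisely the last step: the soft underlying event contributes ET inside every cone, which is why the quality of reconstruction in lead-lead collisions has to be quantified separately.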
This thesis has explored how structural techniques can be applied to the problem of formal verification of sequential circuits. Algorithms for formal verification which operate on non-canonical gate netlist representations of digital circuits have certain advantages over the traditional techniques based on canonical representations such as BDDs. They make it possible to exploit problem-specific knowledge because they can take into account structural properties of the designs being analyzed. This allows us to break the problem down into sub-problems which are (hopefully) easier to solve. However, in the past, the main application of such structural techniques was in the field of combinational equivalence checking. One reason for this is that the behaviour of a sequential system depends not only on its inputs but also on its internal states, and no concepts had been developed to date allowing structural methods to deal with large sets of states. An important goal of this research was therefore to develop structural, non-canonical forms of representing the reachable states of a finite state machine and to develop methods for reachability analysis based on such representations. In order to reach this goal, two steps were taken. Firstly, a framework for manipulating Boolean functions represented as gate netlists was established. Secondly, using this framework, a structural method for FSM traversal was developed, serving as the basis for an equivalence checking algorithm for sequential circuits. The framework for manipulating Boolean functions represented as multi-level combinational networks is based on a new concept of an implicant in a multi-level network and on an AND/OR-type enumeration technique which allows us to derive such implicants. This concept extends the classical notion of an implicant in two-level circuits to the multi-level case. Using this notion, arbitrary transformations in multi-level combinational networks can be performed.
The multi-level network implicants can be determined from AND/OR reasoning graphs, which are associated with an AND/OR reasoning technique operating directly on the gate netlist description of a multi-level circuit. This reasoning technique has the important property that it is complete, i.e., the associated AND/OR trees contain all prime implicants of a Boolean function at an arbitrary node in a combinational circuit. In other words, AND/OR graphs constructed for a network function serve as a representation of this function. A great advantage over BDDs is that AND/OR graphs, besides representing the logic function, also represent some structural properties of the analyzed circuitry. This permits the development of heuristics that are specially tailored for certain applications such as logic optimization or verification. Another advantage, which is especially useful for logic optimization, is that the proposed AND/OR enumeration scheme is not restricted to the use of a specific logic alphabet such as B3 = {0, 1, X}. By using Roth's D-calculus based on B5 = {0, 1, D, D̄}, permissible implicants can be determined. Transformations based on permissible implicants exploit observability don't-care conditions in logic synthesis by creating permissible functions at internal network nodes. In order to evaluate the new structural framework for manipulating Boolean functions represented as gate netlists, several experiments with implicant-based optimization of multi-level circuits were performed. The results show that implicant-based circuit transformations lead to significantly better optimization results than traditional synthesis techniques. Next, based on the proposed structural methods for Boolean function manipulation, techniques for representing and manipulating the set of states of a sequential circuit were developed.
The concept of a "stub circuit" was introduced, which implicitly represents a set of state vectors as the range of a multi-output function given as a gate netlist. The stub circuit is the result of an existential quantification operation which is obtained by functional decomposition using implicant-based netlist transformations and a network cutting procedure. Using this existential quantification operation, a new structural FSM traversal algorithm was formulated which performs a fixed-point iteration on the set of reachable states represented by the stub circuit. The proposed approach performs a reachability analysis of the states of a sequential circuit. It operates on gate netlists and naturally allows incorporating structural properties of a design under consideration into the reasoning. Therefore, structural FSM traversal is an interesting alternative to traditional symbolic FSM traversal, especially in those applications of formal verification where structural properties can be exploited. Structural FSM traversal was applied to the problem of sequential equivalence checking. Here, structural similarities between the designs to be compared can effectively reduce the complexity of the verification task. The FSM to be traversed is a special product machine called a sequential miter. The special structural properties of this product machine have made it possible to formulate an approximate algorithm for structural FSM traversal, called record and play(). This algorithm uses an approximation of the reachable state set represented by the stub circuit which is very beneficial for performance. Instead of calculating the stub circuit using the exact algorithm, implicant-based transformations directly exploiting structural design similarities are performed. These transformations, together with the existential quantification implemented by the cutting procedure, lead to an over-approximation of the reachable state set.
Through this over-approximation, only those unreachable product states are added to the set of states represented by the stub circuit which are unreachable at the current point in time but are nevertheless equivalent. Therefore, more product states are added to the set of reachable states, sometimes leading to a drastic acceleration of the traversal, i.e., the fixed point is reached in far fewer steps. The algorithm record and play() was applied to the problem of checking the equivalence of a circuit with its optimized and retimed version. Retiming is a form of sequential circuit optimization which can radically alter the state encoding of a circuit. Traditional FSM traversal techniques often fail because the BDDs needed to represent the reachable state set and the transition relation of the product machine become too large. Experiments were conducted to evaluate the performance of record and play() on a standard set of sequential benchmark circuits. The algorithm was capable of proving the equivalence of optimized and retimed circuits with their original versions, some of which (to our knowledge) had never before been verified using traditional techniques like symbolic FSM traversal. The experimental results are very promising. Future research will therefore explore how structural FSM traversal can be applied to model checking.
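The fixed-point iteration at the heart of any FSM traversal, structural or symbolic, can be sketched on an explicit state set: repeatedly add the image of the current frontier until no new states appear. The structural method above represents the state set implicitly as a stub circuit instead, so this is only a schematic illustration of the iteration itself:

```python
def fsm_traversal(transition, initial_states):
    """Explicit-state FSM traversal: iterate the image computation
    R_{k+1} = R_k ∪ Img(R_k) until the fixed point is reached.
    transition(state) yields the successor states of one state."""
    reached = set(initial_states)
    frontier = set(initial_states)
    iterations = 0
    while frontier:
        image = {t for s in frontier for t in transition(s)}
        frontier = image - reached       # only the newly reached states
        reached |= frontier
        iterations += 1
    return reached, iterations

# A 3-bit counter: starting from state 0, every state is eventually reached,
# one new state per iteration.
reached, its = fsm_traversal(lambda s: [(s + 1) % 8], [0])
```

The number of iterations is exactly what the over-approximation in record and play() reduces: adding equivalent-but-not-yet-reachable product states early lets the iteration hit its fixed point in fewer steps.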
This paper argues that short (clause-internal) scrambling to a pre-subject position has A properties in Japanese but A'-properties in German, while long scrambling (scrambling across sentence boundaries) from finite clauses, which is possible in Japanese but not in German, has A'-properties throughout. It is shown that these differences between German and Japanese can be traced back to parametric variation of phrase structure and the parameterized properties of functional heads. Due to the properties of Agreement, sentences in Japanese may contain multiple (Agro- and Agrs-) specifiers whereas German does not allow for this. In Japanese, a scrambled element may be located in a Spec AgrP, i.e. an A- or L-related position, whereas scrambled NPs in German can only appear in an AgrP-adjoined (broadly-L-related) position, which only has A'-properties. Given our assumption that successive cyclic adjunction is generally impossible, elements in German may not be long scrambled because a scrambled element that is moved to an adjunction site inside an embedded clause may not move further. In Japanese, long distance scrambling out of finite CPs is possible since scrambling may proceed in a successive cyclic manner via embedded Spec- (AgrP) positions. Our analysis of the differences between German and Japanese scrambling provides us with an account of further contrasts between the two languages such as the existence of surprising asymmetries between German and Japanese remnant-movement phenomena, and the fact that unlike German, Japanese freely allows wh-scrambling. Investigation of the properties of Japanese wh-movement also leads us to the formulation of the "Wh-cluster Hypothesis", which implies that Japanese is an LF multiple wh-fronting language.
Left dislocation in Zulu
(2004)
This paper examines left dislocation constructions in Zulu, a Southern Bantu language belonging to the Nguni group (Zone S 40). In Zulu left dislocation configurations, a topic phrase in the beginning of the sentence is linked to a resumptive element within the associated clause. Typically, the resumptive element is an incorporated pronoun (cf. Bresnan & Mchombo 1987), as illustrated by the examples in (1) and (2). In these examples, the object pronoun (in italics) is part of the verbal morphology and agrees with the noun class (gender) of the dislocate. This situation is schematically illustrated in (3), where co-indexation represents agreement: ...
In this paper I discuss the properties of particle verbs in light of a proposal about syntactic projection. In section 2 I suggest that projection involves functional structure in two important ways: (i) only functional phrases can be complements, and (ii) lexical heads that take complements and project must be inflected. In section 3, I show that the structure of particle verbs is not uniform with respect to (i) and (ii). On the one hand, a particle always combines with an inflected verb; in this respect, particle verbs look like verb-complement constructions. On the other hand, the particle is not a functional phrase and therefore is not a proper complement, which makes the combination of the particle and the verb look more like a morphologically complex verb. I argue that syntactic rules can in fact interpret the node dominating the particle and the verb as a projection and as a complex head. In section 4, I show that many of the characteristic properties of particle verbs in the Germanic languages follow from the fact that they are structural hybrids.
In this article, I discuss some important properties of wh-questions and wh-scrambling in Japanese. The questions I will address are (i) which instances of (wh-) scrambling involve reconstruction and (ii) how the undoing effects of scrambling can be derived. First I will discuss the claim that (wh-) scrambling is semantically vacuous and is therefore undone at LF (Saito 1989, 1992). Then I consider the data that led Takahashi (1993) to the conclusion that at least some instances of wh-scrambling have to be analyzed as instances of "full wh-movement", i.e., overt movement of the wh-phrase to its scopal position. It will be argued that these examples are not instances of full wh-movement in Japanese, but that they also represent semantically vacuous scrambling. Those instances of scrambling that apparently cannot be undone are best explained with recourse to parsing effects. I conclude that wh-scrambling in Japanese is always triggered by a ([-wh]-) scrambling feature. In addition, long distance scrambling (scrambling out of finite CPs) is analyzed as adjunction movement, whereas short distance scrambling is movement to a specifier position of IP. Turning to the mechanisms of undoing, I will argue that only long distance scrambling is undone. This is shown to follow from Chomsky's (1995) bare phrase structure analysis, according to which multi-segmental categories derived by adjunction movement are not licensed at LF. The article is organized as follows. In section 2, the wh-scrambling phenomenon is described. In section 3, I discuss the reconstruction properties of scrambling. In addition, this section provides some basic assumptions about my analysis of Japanese scrambling in general. In section 4, I turn to the analysis of wh-scrambling as an instance of full wh-movement in Japanese. Section 5 provides discussion of multiple wh-questions in Japanese, and section 6 gives the conclusion.
The languages of the world differ with respect to argument extraction possibilities. In languages such as English, wh-movement is possible from Spec IP and from the complement position, whereas in languages such as Malagasy only extraction from Spec IP is possible. This difference correlates with the fact that these language types obey different island constraints and behave differently with respect to wh-in situ and superiority effects. The goal of this paper is to outline an analysis for these differences. The basic idea is that in contrast to languages such as English, in Malagasy-type languages every argument can be merged in the complement position of the selecting head.
Expletives as features
(2000)
Expletives have always been a central topic of theoretical debate and subject to different analyses within the different stages of the Principles and Parameters theory (see Chomsky 1981, 1986, 1995; Lasnik 1992, 1995; Frampton and Gutman 1997; among others). However, most analyses center on the question of how to explain the behavior of expletives in A-chains (such as there in English or það in Icelandic). No account relates wh-expletives (as one finds them in so-called partial wh-movement constructions in languages such as Hungarian, Romani, and German) to expletives in A-chains. In this paper, I argue that the framework of the Minimalist Program opens up the possibility of accounting for expletive-associate relations in A-/A'-chains in a unified manner. The main idea of the unitary analysis is that an expletive is an overtly realized feature bundle that is (sub)extracted from its associate DP. There in an expletive-associate chain is a moved D-feature which originates inside the associate DP. Similarly, in A'-chains, the wh-expletive originates as a focus-/wh-feature in the wh-phrase with which it is associated. This analysis provides evidence for the feature-checking theory in Chomsky (1995). The paper is organized as follows. Section 2 contains the discussion of expletive there. In section 3 I suggest an analysis for wh-expletives, and I also explore whether this analysis can be extended to relations between X°-categories such as auxiliary and participle complexes.
In this paper I show that Clitic Climbing (CC) in Spanish and Long Scrambling (LS) in German (and Polish) are (im-)possible out of the same environments. For an explanation of this fact I propose a feature-oriented analysis of incorporation phenomena. The idea is that restructuring is a phenomenon of syntactic incorporation. In German and Polish, Agro incorporates covertly into the matrix clause and licenses LS out of the infinitival into the matrix clause. Similarly, the clitic in Spanish, which is analysed as an Agro-head, incorporates into the matrix clause. I argue that this movement is necessary for reasons of feature-checking, i.e., for checking of a [+R]- or Restructuring-feature. In section 2 I discuss several differences between CC and LS. For example, the proposed analysis correctly predicts that clitics, in contrast to scrambled phrases, are subject to several serialization restrictions. Throughout the paper I use the term restructuring only in a descriptive sense, in order to describe the phenomenon in question.
The assumption that mankind is able to influence global or regional climate through the emission of greenhouse gases is often discussed. This assumption is both very important and very obscure. Consequently, it is necessary to clarify definitively which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, and to what extent in which regions of the world. In addition, to be able to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example due to volcanic or solar activity) and the magnitude of stochastic climate noise. The usual tool of climatologists, general circulation models (GCMs), suffers from the problem of being at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast, statistical methods as used in this study have the advantage of testing hypotheses directly on observational data. So we focus on climate variability as it has actually occurred in the past. We apply two strategies of time series analysis to the observed climate variables under consideration. First, each time series is split into its variation components. This procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
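Structure-oriented separation can be illustrated by the simplest possible decomposition: a smooth trend component estimated by a centered moving average, plus the residual. The study's actual separation method is more elaborate, so treat this as a schematic sketch with made-up data:

```python
def separate(series, window):
    """Structure-oriented separation sketch: split a time series into a
    smooth (trend) component, estimated by a centered moving average,
    and the residual (noise) component."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        # shrink the window at the series boundaries
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

# A short linear series: the interior trend reproduces the series exactly,
# and the residual is confined to the boundary effects.
series = [1.0, 2.0, 3.0, 4.0, 5.0]
trend, residual = separate(series, 3)
```

Cause-oriented separation would go one step further and regress such components on candidate forcing series (solar, volcanic, greenhouse gas) to attribute the explained variance.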
The results presented here strongly indicate that ubiquitination of the recombinant human alpha1 GlyR at the plasma membrane of Xenopus oocytes is involved in receptor internalisation and degradation. Ubiquitination of the human alpha1 GlyR has been demonstrated by radio-iodination of plasma membrane-bound alpha1 GlyRs, whose subunits differed in molecular weight by an additional 7, 14 or 21 kDa, corresponding to the molecular weights of one, two and three conjugated ubiquitin molecules, respectively, and by co-isolation of the non-tagged human alpha1 GlyR through hexahistidyl-tagged ubiquitin. Ubiquitin-conjugated GlyRs were prominent at the plasma membrane, but could hardly be detected in total cell homogenates, indicating that ubiquitination takes place exclusively at the plasma membrane. Ubiquitination of the alpha1 GlyR at the plasma membrane was no longer detectable when the ten lysine residues of the cytoplasmic loop between transmembrane segments M3 and M4 were replaced by arginines. Despite this, proteolytic cleavage continued to take place to the same extent as with the wild-type alpha1 GlyR, suggesting that removal of GlyRs from the plasma membrane and routing to lysosomes for degradation were not dependent on ubiquitination. Likewise, replacing a tyrosine in position 339, which was speculated to be part of an additional endocytosis motif, did not lead to a significant reduction of cleavage of the GlyR alpha1 subunits. However, a mutant lacking both the ubiquitination sites and 339Y was significantly less processed. These results suggest that the GlyR alpha1 subunit harbors at least two endocytosis motifs, which may act independently to regulate the density of alpha1 GlyRs. Apparently, each of the two signals may be capable of entirely compensating for the loss of the other.
Part two of this Dissertation demonstrates that the correct topology of the glycine receptor alpha1 subunit depends critically on six positively charged residues within a basic cluster, RFRRKRR, located in the large cytoplasmic loop following the C-terminal end of M3. Neutralization of one or more charges of this cluster, but not of other charged residues in the M3-M4 loop, led to an aberrant translocation of the M3-M4 loop into the endoplasmic reticulum lumen. However, when two of the three basic charges located in the ectodomain linking M2 and M3 were neutralized in addition to two charges of the basic cluster, endoplasmic reticulum disposition of the M3-M4 loop was prevented. We conclude that a high density of basic residues C-terminal to M3 is required to compensate for the presence of positively charged residues in the M2-M3 ectodomain, which otherwise impair correct membrane integration of the M3 segment. Part three of this Dissertation describes my contribution (blue native PAGE analysis of metabolically labeled alpha7 and 5HT3A receptors and the examination of the glycosylation state of metabolically labeled alpha7 subunits) to a work on the limited assembly capacity of Xenopus oocytes for nicotinic alpha7 subunits. While 5HT3A subunits combined efficiently to pentamers, alpha7 subunits existed in various assembly states including trimers, tetramers, pentamers, and aggregates. Only alpha7 subunits that completed the assembly process to homopentamers acquired complex-type carbohydrates and appeared at the cell surface. We conclude that Xenopus oocytes have a limited capacity to guide the assembly of alpha7 subunits, but not 5HT3A subunits, to homopentamers. Accordingly, ER retention of imperfectly assembled alpha7 subunits, rather than inefficient routing of fully assembled alpha7 receptors to the cell surface, limits surface expression levels of alpha7 nicotinic acetylcholine receptors.
Part four of this Dissertation describes my contribution (the biochemical analysis of the human P2X2 and P2X6 subtypes) to studies on the quaternary structure of P2X receptors. Armaz Aschrafi, the main author of the paper, showed that, subsequent to isolation under non-denaturing conditions from Xenopus oocytes, the His-rP2X2 protein migrated on blue native PAGE predominantly in an aggregated form. The only discrete protein band detectable could be assigned to homotrimers of the His-rP2X2 subunit. Because of the exceptional assembly behaviour of the rP2X2 protein compared to the rP2X1, rP2X3, rP2X4 and rP2X5 proteins, its human orthologue was investigated in the same manner. In contrast to rP2X2 subunits, hP2X2 subunits migrated under virtually identical conditions in a single defined assembly state, which could be clearly assigned to a trimer. P2X6 subunits represent the sole P2X subtype that is unable to form functional homomeric receptors in Xenopus oocytes. The blue native PAGE analysis of metabolically labeled hP2X6 receptors and the examination of the glycosylation state revealed that hP2X6 subunits form tetramers and aggregates that are not exported to the plasma membrane of Xenopus oocytes.
Homing in with GPS
(2000)
In the present work, the Heidelberg electron beam ion trap (EBIT) at the Max-Planck-Institut für Kernphysik (MPIK) has been used to produce and trap highly charged argon ions and to study their magnetic dipole (M1) forbidden transitions. These transitions are of relativistic origin and hence provide unique possibilities for precise studies of relativistic effects in many-electron systems. In this way, the transition energies of the 1s2 2s2 2p configuration for the 2P3/2 - 2P1/2 transition in Ar13+ and of the 1s2 2s2p configuration for the 3P1 - 3P2 transition in Ar14+ were compared for the 36Ar and 40Ar isotopes. The observed isotopic effect has confirmed the relativistic nuclear recoil corrections due to the finite nuclear mass in a recent calculation by Tupitsyn [TSC03], in which major inconsistencies of earlier theoretical methods were corrected for the first time. The finite-mass, or recoil, effect, composed of the normal mass shift (NMS) and the specific mass shift (SMS), was corrected for relativistic contributions, RNMS and RSMS. The present experimental results have shown that the recoil effects at the Breit level are indeed very important, as are the effects of the correlated relativistic dynamics in a many-electron ion.
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed some light on the nuclear equation of state (EOS), hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used for describing heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of the flow effects with a broad comparison with the available data is presented. The many-body aspects of such reactions are investigated via the multifragmentation break-up of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
In the framework of the relativistic quantum dynamics approach we investigate antiproton observables in Au+Au collisions at 10.7A GeV. The rapidity dependence of the in-plane directed transverse momentum p(y) of p̄'s shows the opposite sign of the nucleon flow, which has indeed recently been discovered at 10.7A GeV by the E877 group. The "antiflow" of p̄'s is also predicted at 2A GeV and at 160A GeV and appears at all energies also for pi's and K's. These predicted p̄ anticorrelations are a direct proof of strong p̄ annihilation in massive heavy-ion reactions.
The quantum statistical model (QSM) is used to calculate nuclear fragment distributions in chemical equilibrium. Several observable isotopic effects are predicted for intermediate-energy heavy-ion collisions. It is demonstrated that particle ratios for different systems do not depend on the breakup density, the only free parameter in our model. The importance of entropy measurements is discussed. Specific particle ratios for the system Au-Au are predicted, which can be used to determine the chemical potentials of the hot midrapidity fragment source in nearly central heavy-ion collisions. PACS-Nr. 25.70.Pq
The Monte Carlo parton string model for multiparticle production in hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions at high energies is described. An adequate choice of the parameters in the model makes it possible to recover the main results of the dual parton model, with the advantage of treating both hadron and nuclear interactions on the same footing, reducing them to interactions between partons. The possibility of considering both soft and hard parton interactions is also introduced.
The properties of pions from the hot and dense reaction stage of relativistic heavy ion collisions are investigated with the quantum molecular dynamics model. Pions originating from this reaction stage stem from resonance decay with enhanced mass. They carry high transverse momenta. The calculation shows a direct correlation between high pt pions, early freeze-out times and high freeze-out densities.
Dilepton spectra for p+p and p+d reactions at 4.9 GeV are calculated. We consider electromagnetic bremsstrahlung also in inelastic reactions. N* and Delta* decays present the major contributions to the rho and omega meson yields. Pion annihilation yields only 1.5% of all rho's in p+d. The rho mass spectrum is strongly distorted due to phase-space effects, populating dominantly dilepton masses below 770 MeV.
We calculate thermal photon and neutral pion spectra in ultrarelativistic heavy-ion collisions in the framework of three-fluid hydrodynamics. Both spectra are quite sensitive to the equation of state used. In particular, within our model, recent data for S + Au at 200 AGeV can only be understood if a scenario with a phase transition (possibly to a quark-gluon plasma) is assumed. Results for Au+Au at 11 AGeV and Pb + Pb at 160 AGeV are also presented.
We predict the formation of highly dense baryon-rich resonance matter in Au+Au collisions at AGS energies. The final pion yields show observable signs of resonance matter. The Delta(1232) resonance is predicted to be the dominant source of pions of small transverse momenta. Rescattering effects, i.e. consecutive excitation and deexcitation of Delta's, lead to a long apparent lifetime (> 10 fm/c) and rather large volumes (several 100 fm^3) of the Delta-matter state. Heavier baryon resonances prove to be crucial for the reaction dynamics and particle production at AGS.
Strong mean meson fields, which are known to exist in normal nuclei, experience a violent deformation in the course of a heavy-ion collision at relativistic energies. This may give rise to a new collective mechanism of particle production, not reducible to a superposition of elementary nucleon-nucleon collisions.
We investigate the sensitivity of pionic bounce-off and squeeze-out to the density and momentum dependence of the real part of the nucleon optical potential. For the in-plane pion bounce-off we find a strong sensitivity to both the density and momentum dependence, whereas the out-of-plane pion squeeze-out shows a strong sensitivity only to the momentum dependence but little sensitivity to the density dependence.
We demonstrate the importance of Bose-statistical effects for pion production in relativistic heavy-ion collisions. The evolution of the pion phase-space density in central collisions of ultrarelativistic nuclei is studied in a simple kinetic model taking into account the effect of Bose-stimulated pion production by NN collisions in a dense cloud of mesons.
Triple differential cross sections of pions in heavy-ion collisions at 1 GeV/nucl. are studied with the IQMD model. After discussing general properties of resonance and pion production we focus on azimuthal correlations: at projectile and target rapidities we observe an anticorrelation in the in-plane transverse momentum between pions and protons. At c.m. rapidity, however, we find that high-pt pions are preferentially emitted perpendicular to the event plane. We investigate the causes of these correlations and their sensitivity to the density and momentum dependence of the real and imaginary parts of the nucleon and pion optical potentials.
The rapidity distribution of thermal photons produced in Pb+Pb collisions at CERN-SPS energies is calculated within scaling and three-fluid hydrodynamics. It is shown that these scenarios lead to very different rapidity spectra. A measurement of the rapidity dependence of photon radiation can give cleaner insight into the reaction dynamics than pion spectra, especially into the rapidity dependence of the temperature.
This thesis examines the spread and promotion of English on a global level from a historical perspective, in particular in ‘Third World’ contexts. The globalization of English as an exclusive language of power is considered to be a trap when accompanied by an ideology aiming to universalize monolingual and monocultural norms and standards. World-wide English diffusion is related - not to any mystical effects of some psycho-social mechanisms or transmuting alchemy - but to a global rise of military, political, economic, communicational and cultural Euro-American hegemony. The fact that the English language has become perhaps the primary medium of social control and power has not been given a prominent place in the analyses of established social scientists or political planners. On the contrary, the positively idealized dominance of English as a universal medium has become part of a collection of myths seeking to deny the global reality of multilingualism. Not allowing for the existence of any power besides itself, the perpetuation of this hegemony of English within a multilingual scenario has become a contradiction in terms. Centuries of colonialism, followed by neo-colonialism, are seen to have resulted in a world-wide consensus favouring centralization and homogenization of state and world economies, administrations, language, education and mass media systems, as prerequisites to local and global unity. The particular case of India as encountered by a colonizing Britain is used to illustrate the historical clash between differing language and educational traditions and cultures. It was on the strength of their own predominantly positive attitudes towards diversity - encoded in their promotion of complex social and religious philosophies, as well as varied economic and educational practices of pluralism and hierarchy-without-imposition, unity in diversity, etc. - that the people and their leaders finally achieved Indian independence from British colonialism.
Contemporary Indian society, however, is still grappling with the legacy of a Eurocentric civilizational model - encoded in the neo-colonial system of English education - and in conflict with its own positively idealized and actively promoted traditions of pluralism. On national and international levels, the destabilization and destruction of diversity continues to threaten more than the linguistic and cultural uniqueness of numerous communities and individuals. For those majorities and minorities who refuse to give up their ‘differences’, political, economic and physical survival is at stake. A paradoxical reality, seldom acknowledged, is that for the politically and economically already powerful language groups, the enormous resources spent on formal (language) education have become a means to maintain their material and political capital, whereas for the majority of modern societies' marginalized members, powerful linguistic barriers to full economic or political participation remain firmly in place. The justifications for perpetuating exclusionary policies and sustaining structural inequality have come from monocultural ideological assumptions in education and language policies, one of the key mechanisms of state control of labour. This thesis concludes that the trap of an ideologically exclusive status for English can be avoided by theoretically positivizing and institutionally promoting existing multilingual and multicultural peoples’ realities as an integral part of their human rights, in order to resist global Englishization.
Spectra of various particle species have been calculated with the Quantum Molecular Dynamics (QMD) model for very central collisions of Au+Au. They are compatible with the idea of a fully stopped thermal source which exhibits a transverse expansion besides the thermal distribution of an ideal gas. However, the microscopic analyses of the local flow velocities and temperatures indicate much lower temperatures at densities associated with the freeze-out. The results express the overall impossibility of a model-independent determination of nuclear temperatures from heavy-ion spectral data, also at other energies (e.g. CERN) or for other species (i.e. pions, kaons, hyperons).
In the framework of RQMD we investigate antiproton observables in massive heavy-ion collisions at AGS energies and compare to preliminary results of the E878 collaboration. We focus here on the considerable influence of the real part of an antinucleon-nucleus optical potential on the p̄ momentum spectra. PACS numbers: 14.20.Dh, 25.70.-z
In the framework of the relativistic quantum molecular dynamics approach (RQMD) we investigate antideuteron (d̄) observables in Au+Au collisions at 10.7 AGeV. The impact parameter dependence of the formation ratios d̄/p̄^2 and d/p^2 is calculated. In central collisions, the antideuteron formation ratio is predicted to be two orders of magnitude lower than the deuteron formation ratio. The d̄ yield in central Au+Au collisions is one order of magnitude lower than in Si+Al collisions. In semicentral collisions, different configuration space distributions of p̄'s and d̄'s lead to a large squeeze-out effect for antideuterons, which is not predicted for the p̄'s.
Different numerical approaches and algorithms arising in the context of modelling of cellular tissue evolution are discussed in this thesis. Being suited in particular to off-lattice agent-based models, the numerical tool of three-dimensional weighted kinetic and dynamic Delaunay triangulations is introduced and discussed for its applicability to adjacency detection. As there exists no implementation of a code that incorporates all necessary features for tissue modelling, algorithms for incremental insertion or deletion of points in Delaunay triangulations and the restoration of the Delaunay property for triangulations of moving point sets are introduced. In addition, the numerical solution of reaction-diffusion equations and their connection to agent-based cell tissue simulations is discussed. In order to demonstrate the applicability of the numerical algorithms, biological problems are studied for different model systems: For multicellular tumour spheroids, the weighted Delaunay triangulation provides a great advantage for adjacency detection, but due to the large cell numbers the model used for the cell-cell interaction has to be simplified to allow for a numerical solution. The agent-based model reproduces macroscopic experimental signatures, but some parameters cannot be fixed with the data available. A much simpler, but in key properties analogous, continuum model based on reaction-diffusion equations is likewise capable of reproducing the experimental data. Both modelling approaches make differing predictions on non-quantified experimental signatures. In the case of the epidermis, a smaller system is considered which enables a more complete treatment of the equations of motion. In particular, a control mechanism of cell proliferation is analysed. Simple assumptions suffice to explain the flow equilibrium observed in the epidermis. In addition, the effect of adhesion on the survival chances of cancerous cells is studied. 
For some regions in parameter space, stochastic effects may completely alter the outcome. The findings stress the need to establish a defined experimental model in order to fix the unknown model parameters and to rule out further models.
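The use of a Delaunay triangulation for adjacency detection can be sketched in a few lines: two cells are treated as neighbours if their centres share a Delaunay edge. This is only an unweighted, static illustration using SciPy; the thesis itself concerns weighted, kinetic and dynamic triangulations with incremental insertion, deletion and restoration, which this sketch does not implement.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbours(points):
    """Return, for each point index, the set of indices of points that
    share a Delaunay edge with it -- a common proxy for cell adjacency
    in off-lattice agent-based tissue models."""
    tri = Delaunay(points)
    neighbours = {i: set() for i in range(len(points))}
    # Each simplex (a tetrahedron in 3-D) connects every pair of its
    # vertices by an edge of the triangulation.
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    neighbours[i].add(int(j))
    return neighbours
```

The resulting neighbour relation is symmetric by construction, so it can serve directly as the contact graph over which pairwise cell-cell interaction forces are summed.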