This thesis has explored how structural techniques can be applied to the problem of formal verification for sequential circuits. Algorithms for formal verification which operate on non-canonical gate-netlist representations of digital circuits have certain advantages over traditional techniques based on canonical representations such as BDDs. They make it possible to exploit problem-specific knowledge because they can take into account structural properties of the designs being analyzed. This allows us to break the problem down into sub-problems which are (hopefully) easier to solve. In the past, however, the main application of such structural techniques was in the field of combinational equivalence checking. One reason for this is that the behaviour of a sequential system depends not only on its inputs but also on its internal states, and no concepts had been developed to date that allow structural methods to deal with large sets of states. An important goal of this research was therefore to develop structural, non-canonical forms of representing the reachable states of a finite state machine and to develop methods for reachability analysis based on such representations. Two steps were taken to reach this goal. Firstly, a framework for manipulating Boolean functions represented as gate netlists was established. Secondly, using this framework, a structural method for FSM traversal was developed, serving as the basis for an equivalence checking algorithm for sequential circuits.
The framework for manipulating Boolean functions represented as multi-level combinational networks is based on a new concept of an implicant in a multi-level network and on an AND/OR-type enumeration technique which allows us to derive such implicants. This concept extends the classical notion of an implicant in two-level circuits to the multi-level case. Using this notion, arbitrary transformations in multi-level combinational networks can be performed. The multi-level network implicants can be determined from AND/OR reasoning graphs, which are associated with an AND/OR reasoning technique operating directly on the gate-netlist description of a multi-level circuit. This reasoning technique has the important property that it is complete, i.e. the associated AND/OR trees contain all prime implicants of a Boolean function at an arbitrary node in a combinational circuit. In other words, AND/OR graphs constructed for a network function serve as a representation of this function. A great advantage over BDDs is that AND/OR graphs, besides representing the logic function, also capture some structural properties of the analyzed circuitry. This makes it possible to develop heuristics that are specially tailored to certain applications such as logic optimization or verification. Another advantage, which is especially useful for logic optimization, is that the proposed AND/OR enumeration scheme is not restricted to a specific logic alphabet such as B3 = {0, 1, X}. By using Roth's D-calculus based on B5 = {0, 1, X, D, D̄}, permissible implicants can be determined. Transformations based on permissible implicants exploit observability don't-care conditions in logic synthesis by creating permissible functions at internal network nodes. To evaluate the new structural framework for manipulating Boolean functions represented as gate netlists, several experiments with implicant-based optimization of multi-level circuits were performed.
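As a point of reference for the classical two-level notion of an implicant that the thesis generalizes to multi-level networks, the following minimal Python sketch (an illustration, not taken from the thesis) decides by exhaustive enumeration whether a cube is an implicant of a small Boolean function and whether it is prime.

```python
from itertools import product

def is_implicant(cube, f, n):
    """A cube (dict: variable index -> 0/1) is an implicant of f if every
    assignment consistent with the cube makes f evaluate to 1."""
    free = [i for i in range(n) if i not in cube]
    for bits in product([0, 1], repeat=len(free)):
        assignment = dict(cube)
        assignment.update(zip(free, bits))
        if not f(assignment):
            return False
    return True

def is_prime_implicant(cube, f, n):
    """An implicant is prime if no literal can be dropped from it while it
    still remains an implicant."""
    if not is_implicant(cube, f, n):
        return False
    return all(
        not is_implicant({v: b for v, b in cube.items() if v != drop}, f, n)
        for drop in cube
    )

# Example: f(x0, x1, x2) = x0*x1 + x2
f = lambda a: (a[0] and a[1]) or a[2]
print(is_implicant({0: 1, 1: 1}, f, 3))              # True:  x0*x1 implies f
print(is_prime_implicant({0: 1, 1: 1, 2: 0}, f, 3))  # False: x2' can be dropped
```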
The results of these experiments show that implicant-based circuit transformations lead to significantly better optimization results than traditional synthesis techniques.
Next, based on the proposed structural methods for Boolean function manipulation, techniques for representing and manipulating the set of states of a sequential circuit were developed. The concept of a "stub circuit" was introduced, which implicitly represents a set of state vectors as the range of a multi-output function given as a gate netlist. The stub circuit is the result of an existential quantification operation, which is obtained by functional decomposition using implicant-based netlist transformations and a network cutting procedure. Using this existential quantification operation, a new structural FSM traversal algorithm was formulated which performs a fixed-point iteration on the set of reachable states represented by the stub circuit. The proposed approach performs a reachability analysis of the states of a sequential circuit. It operates on gate netlists and naturally allows structural properties of the design under consideration to be incorporated into the reasoning. Structural FSM traversal is therefore an interesting alternative to traditional symbolic FSM traversal, especially in those applications of formal verification where structural properties can be exploited.
Structural FSM traversal was applied to the problem of sequential equivalence checking. Here, structural similarities between the designs to be compared can effectively reduce the complexity of the verification task. The FSM to be traversed is a special product machine called a sequential miter. The special structural properties of this product machine made it possible to formulate an approximate algorithm for structural FSM traversal, called record and play(). This algorithm uses an approximation of the reachable state set represented by the stub circuit which is very beneficial for performance. Instead of calculating the stub circuit using the exact algorithm, implicant-based transformations directly exploiting structural design similarities are performed. These transformations, together with the existential quantification implemented by the cutting procedure, lead to an over-approximation of the reachable state set. This over-approximation adds to the set of states represented by the stub circuit only those product states which are unreachable at the current point in time but which are nevertheless equivalent. More product states are therefore added to the set of reachable states, sometimes leading to a drastic acceleration of the traversal, i.e. the fixed point is reached in far fewer steps.
The algorithm record and play() was applied to the problem of checking the equivalence of a circuit with its optimized and retimed version. Retiming is a form of sequential circuit optimization which can radically alter the state encoding of a circuit. Traditional FSM traversal techniques often fail because the BDDs needed to represent the reachable state set and the transition relation of the product machine become too large. Experiments were conducted to evaluate the performance of record and play() on a standard set of sequential benchmark circuits. The algorithm was capable of proving the equivalence of optimized and retimed circuits with their original versions, some of which (to our knowledge) had never before been verified using traditional techniques such as symbolic FSM traversal. The experimental results are very promising.
Future research will therefore explore how structural FSM traversal can be applied to model checking.
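To make the fixed-point character of FSM traversal concrete, the following minimal Python sketch performs a generic, explicit-state reachability iteration. It illustrates only the iteration scheme: the structural method summarized above represents the state set implicitly as the range of a stub circuit rather than as an explicit set, and record and play() additionally over-approximates that set.

```python
def fsm_traversal(initial_states, next_state, inputs):
    """Generic breadth-first FSM traversal: repeat the image computation
    until the set of reached states stops growing (the fixed point)."""
    reached = set(initial_states)
    frontier = set(initial_states)
    while frontier:                                   # empty frontier = fixed point reached
        image = {next_state(s, x) for s in frontier for x in inputs}
        frontier = image - reached                    # keep only genuinely new states
        reached |= frontier
    return reached

# Toy example: a 3-bit counter that wraps around; the input is irrelevant here.
states = fsm_traversal(
    initial_states={0},
    next_state=lambda s, x: (s + 1) % 8,
    inputs=[None],
)
print(sorted(states))  # [0, 1, 2, 3, 4, 5, 6, 7]
```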
Towards a German grammar programme for post-leaving certificate students at Dublin City University
(1999)
With the introduction of the communicative method of language learning, overall standards of grammatical competence and performance among Irish second-level students would appear to have been significantly reduced. As a consequence, learners who continue to study a given language at third level apparently no longer possess the knowledge which, under the grammar-translation methodology, further education institutions were able to build upon. This thesis examines the basis for the above perceptions, investigates the role of formal grammar instruction in the second language acquisition process and reports on a programme which was developed at Dublin City University (DCU) in order to ease, for Irish university students of German, the transition from a primarily memory-based approach to language acquisition to the analytical approach which is still considered crucial to a university student's linguistic education. While the research was undertaken in response to locally existing difficulties, it may also be considered a case study of more general interest and, as such, serve as an exemplar to German departments in other universities as well as to other foreign language departments both within DCU and outside. The aim of the programme under investigation was to ease the transition on a socio-affective, cognitive and metacognitive level without lowering overall proficiency expectations and standards. Primary research was conducted among secondary school teachers, among post-Leaving Certificate students on entry into DCU and among third-level lecturers. The purpose of this research was to identify and define the programme's content and progression. To this end, the German junior and senior cycle syllabi at second level were also taken into consideration. The resulting German grammar programme was implemented at DCU in the academic year 1996/7. While the programme would appear to have been judged favourably regarding some affective and cognitive-motivational aspects, results show mixed success rates for the other two factors under investigation, cognitive-analytical and metacognitive skills. Thus, some degree courses and some language combinations clearly benefited more from the programme than others. One of the conclusions drawn from this research suggests that unless certain changes are introduced prior to students' entry into third level, university graduates are likely to remain well below the standards of accuracy and overall proficiency which were previously achieved.
The aim of studying ultra-relativistic heavy-ion collisions is the search for the quark-gluon plasma (QGP), a state of highly dense, strongly interacting matter in which the confinement of quarks and gluons in hadrons is lifted. The experimental evidence obtained so far indicates that, in heavy-ion collisions at the highest currently available energies of 158 GeV/nucleon in Pb+Pb reactions at the CERN SPS, the conditions for a phase transition from hadronic matter to a partonic phase are fulfilled. The exact phase structure of strongly interacting matter, however, is not yet fully understood. Since inclusive hadronic observables and "penetrating probes" are not directly sensitive to the existence and nature of the phase transition, the analysis of event-by-event fluctuations has been proposed. The fluctuation behaviour of event-by-event observables should be directly sensitive to the nature of the phase transition under observation. In this thesis, fluctuations in the "chemical" composition of the particle source were investigated, and first results are presented.
There is currently strong interest in the physics of high-intensity and high-energy beams: intense proton or deuteron beams are required in various fields of science and industry, including sources of neutrons for research experiments and material processing, nuclear physics experiments, tritium production and nuclear waste transmutation. High-current heavy-ion beams are envisaged for power production facilities (inertial fusion). Several projects presently under study are based on rf linacs as drivers, sometimes followed by accumulation and/or compressor rings [Acc98]. The critical issue for all of them is to operate in a low-loss regime, because of activation problems in the structure. For this reason, careful investigations have to be performed in order to understand and control the beam behaviour, with the aims of conserving beam quality, reducing emittance growth and filamentation, and avoiding the formation of halo. The beam current that can be accelerated is in practice limited by the amount of beam losses, which depends on the beam halo: in order to reduce induced radioactivity and to allow for hands-on maintenance, losses of less than 1 W/m are normally considered acceptable [Sto96].
One of the major facilities under study is the European Spallation Source (ESS), a project based on an H- linac accelerating a 107 mA peak current beam (360 ns pulse in the DTL) and on two compressor rings, producing 5 MW average beam power [ESS]. The USA is also developing a proposal for a Spallation Neutron Source (SNS), providing a short-pulse H- beam with an average power of 1-2 MW; a 30 mA linac is required [SNS]. The Accelerator for Production of Tritium (APT), studied at Los Alamos, requires a 100 mA proton beam current (cw) to produce a power of 130-170 MW [APT]. A similar but smaller accelerator (40 mA, 40 MW beam power) would serve as driver for the Accelerator Driven Transmutation of Waste (ADTW) system [ATW]. The accelerator system for the International Fusion Material Irradiation Facility (IFMIF) will test the behaviour of materials to be used for magnetic fusion (e.g. ITER); it consists of two 125 mA deuteron beams in parallel, generating a fusion-like neutron spectrum with 10 MW cw [IFM].
In the field of heavy ions, scientists have for about 20 years been working on inertial confinement fusion, as an alternative to magnetic confinement, to find a practical and cleaner method for producing energy. Nuclear fusion occurs when the nuclei of lighter elements (in a state of matter called "plasma") merge to form heavier elements; the extremely high temperatures and densities needed to make the nuclei collide in the proper way and release large amounts of energy are obtained in a small "pellet" of fusion fuel, which receives energy from laser or ion beams and implodes; its inertia compresses it hard enough to hold the plasma together until it reaches ignition. Both laser and accelerator facilities have been investigated as drivers, since a demonstration of ignition at low gains is more easily accessible with lasers, whereas the intrinsic properties of accelerators - efficiency and repetition rate - will be essential for a medium-gain power plant. One study for a fusion power system driven by heavy-ion beams (HIBALL) was completed in Europe as early as 1982 [Bad81]. Since the USA declassified essential information on pellet design, "indirect drive" targets have been considered openly, in which the pellet is hit by X-rays generated from laser or ion beams rather than directly by the beams.
Over the past few years, major progress has been achieved in the understanding of pellet dynamics after ignition, i.e. in plasma physics [Sym1][Sym2][Sym3][Bas97][Lut97], imposing new requirements on the layout of the driver accelerator facilities as well. In 1994-95, Frankfurt University and several other European laboratories (led by GSI) started a new collaboration called HIDIF (Heavy Ion Driven Ignition Facility) in order to simplify the accelerator plant design, taking advantage of the new technique of indirectly driven targets and of some technological improvements. The first studies were oriented towards the conceptual goal of a facility providing just enough beam energy for the ignition of fusion reactions at very low gain (a "proof of principle") [Hof98]. In a recent phase of the study, it was realized that the proposed concept would make this scheme a more appropriate choice for energy production than for ignition; the acronym HIDIF was therefore reinterpreted as Heavy Ion Driven Inertial Fusion, and the parameters are going to be modified accordingly [Hof96][Hof97][Hof98]. The scenario presently discussed by this group proposes the formation and acceleration of an intense beam (400 mA) of singly charged heavy ions of three different atomic species, with mass differences of about 10% (the reference species is 209Bi+), in a main rf linac; the ions are then injected into storage rings at an energy of 50 MeV/u, bunched in induction linacs and finally transported to a target with different velocities in such a way that the three species merge on the pellet ("telescoping") at 500 TW peak power.
In this thesis, the main linac of the HIDIF proposal is investigated extensively as an example of a high-intensity heavy-ion linac. Results are presented from numerical simulations of multi-particle beam dynamics carried out for the first time in this context. After a short presentation of the HIDIF reference scenario (Ignition Facility), including a discussion of the motivations for a high-current heavy-ion linac, some elements of the theory of beam transport and acceleration are recalled [Con91][Hof82][Kap85][Lap87][Law88][Mit78][Rei94][Str83]. The simulation programs used are then described, and a particle-dynamics layout of a conventional 200 MHz Alvarez DTL is discussed with respect to low emittance growth at high transmission, including large space-charge effects and taking into account the influence of different kinds of statistical errors and of input mismatch on the beam dynamics. The modifications needed for "telescoping" are investigated with simulations for the nominal mass difference (10%) and for a smaller one (5%); finally, the transfer line between the DTL and the rings is discussed and studied both analytically and by numerical calculations. The large mass number (A = 209) helps to reduce the space-charge effects relative to protons, so the behaviour of the beam is not space-charge dominated. Nevertheless, the tune depression values (similar, for example, to those of the ESS linac) indicate that these effects cannot be neglected. For a linac with a low duty cycle, as in the case of an ignition facility, the results from particle dynamics calculations can be considered a reliable guideline for the DTL layout, since they indicate that such a high-intensity linac can fulfill the requirements of smooth beam behaviour and low losses.
This dissertation reports on a series of behavioural and functional imaging studies of experimental-psychology paradigms that involve spatial analysis and coordinate transformation of visually perceived or imagined material. After an introduction to the principles and techniques of functional magnetic resonance imaging, the method used here to measure brain activity, the attempts to replicate Stratton's famous inverting-goggles experiment are presented (Chapter 1). Our four subjects showed rapid adaptation of their visuomotor functions to the new visual environment but, unlike Stratton, did not report that after a few days with the inverting goggles they saw the world as normal again. This persistence of the inverted image was confirmed by a psychophysical test battery. Furthermore, the functional MRI measurements showed that the cortical retinotopic organization did not change over the course of the experiment. Since Stratton's main effect, upright vision through the inverting goggles after one week of adaptation, could not be replicated, alternative interpretations of the various inversion experiments of the last hundred years are proposed. This finding of functional adaptation without major changes in visual perception (and without changes in the representation of the retinal areas in the visual cortex) led to the hypothesis that the required transformations take place at a higher level of the cortical hierarchy of visual processing. To test this hypothesis, a functional MRI study of inverted reading was carried out (Chapter 2). Here the subjects read words and sentences in mirror writing or upside down. The neural mechanism of the spatial transformations required to master this task could be localized in specific regions of the parietal lobe that were highly consistent across subjects. In addition, a co-activation of occipito-temporal object-recognition areas was found. The specificity of the parietal activation foci was confirmed by a control experiment in which the cortical system for spatial transformations could be distinguished from the networks for general visual attention and eye-movement control. In a further experiment, the spatial functions of the parietal lobe were examined in the context of visual imagery. The paradigm used was the "mental clock" test, in which the subjects compare the angles of the hands of two clocks whose times are given only acoustically (Chapter 3). This task requires the generation of a corresponding mental image and its spatial analysis; it therefore constitutes both a controlled imagery paradigm and a test of spatial functions that does not rely on visually presented material. The parietal activation pattern that could be attributed to the analysis of the angles of these imagined clocks corresponded largely to the one associated with the spatial transformation of letters. It is therefore probably a cortical system for spatial analysis and coordinate transformations that does not depend on visual stimulation but can also become active during mere visual imagery.
The presented results are discussed in the context of recent neuropsychological findings on deficits of spatial analysis and imagery after lesions of the parietal lobe (Chapter 4). The methodological problems of cognitive subtraction, which was partly used in our studies, are also addressed, and it is explained why they are only of minor importance for the evaluation of the studies presented here. Nevertheless, we propose modifications of the experimental paradigms, in the direction of parametric designs and event-related functional magnetic resonance imaging, which should allow future studies to dispense with cognitive subtraction entirely. Finally, the significance of the presented results for research on the adaptability of the human brain and on the relationship between imagery and visual perception is outlined.
This thesis examines the spread and promotion of English on a global level, from a historical perspective and in particular in ‘Third World’ contexts. The globalization of English as an exclusive language of power is considered to be a trap when it is accompanied by an ideology aiming to universalize monolingual and monocultural norms and standards. World-wide English diffusion is related - not to any mystical effects of some psycho-social mechanisms or transmuting alchemy - but to a global rise of military, political, economic, communicational and cultural Euro-American hegemony. The fact that the English language has become perhaps the primary medium of social control and power has not been given a prominent place in the analyses of established social scientists or political planners. On the contrary, the positively idealized dominance of English as a universal medium has become part of a collection of myths seeking to deny the global reality of multilingualism. Not allowing for the existence of any power besides itself, the perpetuation of this hegemony of English within a multilingual scenario has become a contradiction in terms. Centuries of colonialism, followed by neo-colonialism, are seen to have resulted in a world-wide consensus favouring centralization and homogenization of state and world economies, administrations, language, education and mass media systems, as prerequisites to local and global unity. The particular case of India as encountered by a colonizing Britain is used to illustrate the historical clash between differing language and educational traditions and cultures. It was on the strength of their own predominantly positive attitudes towards diversity - encoded in their promotion of complex social and religious philosophies, as well as varied economic and educational practices of pluralism and hierarchy-without-imposition, unity in diversity, etc. - that the people and their leaders finally achieved Indian independence from British colonialism. Contemporary Indian society, however, is still grappling with the legacy of a Eurocentric civilizational model - encoded in the neo-colonial system of English education - which is in conflict with its own positively idealized and actively promoted traditions of pluralism. On national and international levels, the destabilization and destruction of diversity continues to threaten more than the linguistic and cultural uniqueness of numerous communities and individuals. For those majorities and minorities who refuse to give up their ‘differences’, political, economic and physical survival is at stake. A paradoxical reality, seldom acknowledged, is that for the politically and economically already powerful language groups the enormous resources spent on formal (language) education have become a means of maintaining their material and political capital, whereas for the majority of modern societies' marginalized members powerful linguistic barriers to full economic or political participation remain firmly in place. The justifications for perpetuating exclusionary policies and sustaining structural inequality have come from monocultural ideological assumptions in education and language policies, one of the key mechanisms for state control of labour.
This thesis concludes that the trap of an ideologically exclusive status for English can be avoided by theoretically positivizing and institutionally promoting existing multilingual and multicultural peoples’ realities as an integral part of their human rights, in order to resist global Englishization.