1. There are two classes of theories of Universal Grammar: (1) Formalist theories, such as the widespread varieties of generative grammar. These theories start from the assumption that certain strings of linguistic forms are grammatical while other strings are ungrammatical. A grammar of this type produces grammatical strings and does not produce ungrammatical ones. All theories of this class fail in the same respect: they do not account for the meaning of the strings. (2) Semiotactic theories, which describe the meaning of a string in terms of the meanings of its constituent forms and their interrelations. The only elaborate formalized theory of this class presently available is the one advanced by C.L. Ebeling (Syntax and Semantics, Leiden: Brill, 1978). I shall discuss some of its mathematical properties here.
To reach high luminosities in future linear colliders, short-range wakes have to be controlled in the range of X-band frequencies or higher. Rectangular irises can be used to introduce strong focusing quadrupole-like rf fields. Even circular irises in iris-loaded accelerator structures have the capability of focusing if the particle velocity differs from the phase velocity. Theoretical investigations concerning the focusing strength to be expected are presented, and their applicability to linear colliders is discussed.
A new method of measuring quality factors in cavities is presented. This method is well suited to measure quality factors in undamped cavities as well as in heavily damped cavities, and in addition this method provides a possibility of separating modes and measuring quality factors especially in cases of overlapping modes. Measurements have been carried out on HOM-damped cavities for the DESY/THD linear collider project. Results are presented.
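The abstract does not spell out the measurement relations, but the standard 3 dB-bandwidth definition of the loaded quality factor, Q = f0/Δf, which any such method ultimately refines, can be sketched as follows (a minimal illustration, not code from the paper):

```python
# Minimal sketch (not the paper's method): loaded quality factor from the
# 3 dB bandwidth of a resonance, Q = f0 / delta_f.

def q_from_bandwidth(f0_hz, f_lower_hz, f_upper_hz):
    """Loaded Q from the resonance frequency and the two -3 dB points."""
    bandwidth = f_upper_hz - f_lower_hz
    return f0_hz / bandwidth

# Example: a 3 GHz cavity mode with a 100 kHz 3 dB bandwidth.
q = q_from_bandwidth(3.0e9, 3.0e9 - 50e3, 3.0e9 + 50e3)
print(q)  # 30000.0
```

For heavily damped or overlapping modes this simple bandwidth reading breaks down, which is precisely the regime the paper's method addresses.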
Damping cells for the higher order modes are necessary for the S-band linear collider to minimize BBU (beam break-up). The construction of the damper cells has to take into account the different field geometries of the higher order modes, so two different types of dampers have been designed: a wall-slotted and an iris-slotted cell. In order to optimize the two types of damping cells with respect to damping strength, impedance matching between the coupling system and the waveguide dampers as well as between the damping cell and the undamped cells, and the tuning system, damping cells of both types have been built and examined.
The lemmings theory of case
(1995)
In this paper I show that Clitic Climbing (CC) in Spanish and Long Scrambling (LS) in German (and Polish) are (im-)possible out of the same environments. For an explanation of this fact I propose a feature-oriented analysis of incorporation phenomena. The idea is that restructuring is a phenomenon of syntactic incorporation. In German and Polish, Agro incorporates covertly into the matrix clause and licenses LS out of the infinitival into the matrix clause. Similarly, the clitic in Spanish, which is analysed as an Agro-head, incorporates into the matrix clause. I argue that this movement is necessary for reasons of feature-checking, i.e. for checking of a [+R]- or restructuring-feature. In section 2 I discuss several differences between CC and LS. For example, the proposed analysis correctly predicts that clitics, in contrast to scrambled phrases, are subject to several serialization restrictions. Throughout the paper I use the term restructuring only in a descriptive sense, in order to describe the phenomenon in question.
Guess how?
(1996)
Given x ∈ ℝⁿ, an integer relation for x is a non-trivial vector m ∈ ℤⁿ with inner product ⟨m, x⟩ = 0. In this paper we prove the following: unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^poly(log n)), the ℓ∞-shortest integer relation for a given vector x ∈ ℚⁿ cannot be approximated in polynomial time within a factor of 2^(log^(0.5−γ) n), where γ is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L³ algorithm approximates, for a vector x ∈ ℚⁿ, the ℓ₂-shortest integer relation within a factor of 2^(n/2) in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result holds for finding the ℓ∞-shortest integer solution of a homogeneous linear system of equations over ℚ.
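For tiny instances, the object of the theorem can be made concrete by exhaustive search (an illustrative brute-force sketch with a hypothetical helper name; it is exponential in the bound and unrelated to the lattice-reduction methods mentioned above):

```python
# Brute-force search for an l-infinity-shortest integer relation m with
# <m, x> = 0 for a rational vector x, over coordinates in [-bound, bound].
from fractions import Fraction
from itertools import product

def shortest_relation_linf(x, bound):
    best = None
    for m in product(range(-bound, bound + 1), repeat=len(x)):
        if all(c == 0 for c in m):
            continue  # relations must be non-trivial
        if sum(Fraction(c) * xi for c, xi in zip(m, x)) == 0:
            norm = max(abs(c) for c in m)
            if best is None or norm < max(abs(c) for c in best):
                best = m
    return best

# x = (1, 2, 3): e.g. m = (1, 1, -1) is a relation, since 1 + 2 - 3 = 0.
x = (Fraction(1), Fraction(2), Fraction(3))
m = shortest_relation_linf(x, 2)
print(m, max(abs(c) for c in m))
```

The hardness result above says that no polynomial-time algorithm can even approximate the norm this search computes exactly, within the stated factor.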
Preferences and defaults for definiteness and number in Japanese to German machine translation
(1996)
A significant problem when translating Japanese dialogues into German is the missing information on number and definiteness in the Japanese analysis output. The integration of the search for such information into the transfer process provides an efficient solution. General transfer includes conditions to make it possible to consider external knowledge. Thereby, grammatical and lexical knowledge of the source language, knowledge of lexical restrictions on the target language, domain knowledge and discourse knowledge are accessible.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (URQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivities to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Dilepton spectra are calculated with and without shifting the rho pole. Except for S+Au collisions our calculations reproduce the CERES data.
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
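As a toy illustration of how such binding restrictions act as candidate filters (a simplified sketch that treats the local binding domain as a clause id; this is not the algorithm from the paper):

```python
# Toy filter over antecedent candidates using simplified Binding Theory
# constraints. "Clause id" stands in for the local binding domain.

def filter_candidates(anaphor_clause, anaphor_type, candidates):
    """candidates: list of (name, clause_id) pairs."""
    if anaphor_type == "reflexive":
        # Principle A: a reflexive needs a local (same-domain) antecedent.
        return [(n, c) for n, c in candidates if c == anaphor_clause]
    if anaphor_type == "pronoun":
        # Principle B: a pronoun must be free in its local domain.
        return [(n, c) for n, c in candidates if c != anaphor_clause]
    return list(candidates)

cands = [("Peter", 1), ("Mary", 2)]
print(filter_candidates(1, "reflexive", cands))  # [('Peter', 1)]
print(filter_candidates(1, "pronoun", cands))    # [('Mary', 2)]
```

The pragmatic inferences the paper discusses come into play exactly where such purely syntactic filters leave more than one candidate.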
In this short note on my talk I want to point out the mathematical difficulties that arise in the study of the relation of Wightman and Euclidean quantum field theory, i.e., the relation between the hierarchies of Wightman and Schwinger functions. The two extreme cases where the reconstructed Wightman functions are either tempered distributions - the well-known Osterwalder-Schrader reconstruction - or modified Fourier hyperfunctions are discussed in some detail. Finally, some perspectives towards a classification of Euclidean reconstruction theorems are outlined and preliminary steps in that direction are presented.
Due to the additional need for very short bunches for FEL operation with the TESLA machine, strong wakefield effects are expected. One third of the total wakefield energy per bunch is radiated into the frequency region above the energy gap of Cooper pairs in superconducting niobium; this energy gap at 2 K corresponds to a frequency of 700 GHz. An analytical and experimental estimation of the overall energy loss of the FEL bunch above the energy gap is presented. The analytical method is based on a study by R. B. Palmer [1]. The results of the wakefield estimations are used to calculate the possible quality factor reduction of the TESLA cavities during FEL operation. Results are presented.
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency. In this paper, two approaches are presented which generalize the verification of coindexing constraints to deficient descriptions. First, a partly heuristic method is described, which has been implemented. Second, a provably complete method is specified. It provides the means to exploit the results of anaphor resolution for further structural disambiguation. By rendering possible a parallel processing model, this method exhibits, in a general sense, a higher degree of robustness. As a practically optimal solution, a combination of the two approaches is suggested.
This paper proposes a new approach for the encoding of images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has attracted strong interest in the neural network community. Applied to images, we aim for the most important source patterns with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
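The classical baseline the paper compares against can be sketched in a few lines of NumPy: project centered data onto the top-k principal components and reconstruct (standard PCA via SVD on random stand-in data; the paper's PIC criterion itself is not reproduced here):

```python
# Standard PCA encoding/decoding sketch: keep the top-k right singular
# vectors of the centered data matrix as the component basis.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # 200 "patches" of 16 pixels each
Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 4
codes = Xc @ Vt[:k].T                   # k-dimensional code per patch
Xhat = codes @ Vt[:k]                   # reconstruction from k components

err_k = np.linalg.norm(Xc - Xhat)       # lossy: nonzero residual
err_full = np.linalg.norm(Xc - (Xc @ Vt.T) @ Vt)  # full basis: exact
print(err_full < 1e-8, err_k > err_full)
```

PIC replaces the pure variance criterion used here with a combination of variance and statistical independence of the components.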
Thermodynamical variables and their time evolution are studied for central relativistic heavy ion collisions from 10.7 to 160 AGeV in the microscopic Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). The UrQMD model exhibits drastic deviations from equilibrium during the early high density phase of the collision. Local thermal and chemical equilibration of the hadronic matter seems to be established only at later stages of the quasi-isentropic expansion in the central reaction cell with a volume of 125 fm³. Baryon energy spectra in this cell are reproduced by Boltzmann distributions at all collision energies for t > 10 fm/c with a unique, rapidly dropping temperature. At these times the equation of state has a simple form: P = (0.12 - 0.15) ε. At SPS energies a strong deviation from chemical equilibrium is found for mesons, especially for pions, even at the late stage of the reaction. The final enhancement of pions is supported by experimental data.
We analyze the reaction dynamics of central Pb+Pb collisions at 160 GeV/nucleon. First we estimate the energy density pile-up at mid-rapidity and calculate its excitation function: the energy density is decomposed into hadronic and partonic contributions. A detailed analysis of the collision dynamics in the framework of a microscopic transport model shows the importance of partonic degrees of freedom and rescattering of leading (di)quarks in the early phase of the reaction for E >= 30 GeV/nucleon. The energy density reaches up to 4 GeV/fm³, 95% of which is contained in partonic degrees of freedom. It is shown that cells of hadronic matter, after the early reaction phase, can be viewed as nearly chemically equilibrated. This matter never exceeds energy densities of 0.4 GeV/fm³, i.e. a density above which the notion of separated hadrons loses its meaning. The final reaction stage is analyzed in terms of hadron ratios, freeze-out distributions and a source analysis for final state pions.
High-perveance negative ion beams with low emittance are essential for several next generation particle accelerators (e.g. spallation sources like ESS [1] and SNS [2]). The extraction and transport of these beams involve intrinsic difficulties different from those of positive ion beams; limitation of the beam current and emittance growth have to be avoided. To fulfill the requirements of those projects, a detailed knowledge of the physics of beam formation, of the interaction of the H⁻ ions with the residual gas, and of beam transport is essential. A compact cesium-free H⁻ volume source delivering a low-energy, high-perveance beam (6.5 keV, 2.3 mA, perveance K = 0.0034) has been built to study the fundamental physics of beam transport and will be integrated into the existing LEBT section in the near future. First measurements of the interaction between the ion beam and the residual gas will be presented, together with the experimental setup and preliminary results.
The knowledge of the build-up time of space charge compensation (SCC) and the investigation of the compensation process are of main interest for low energy beam transport of pulsed high perveance ion beams under space charge compensated conditions. To investigate experimentally the rise of compensation, an LEBT system consisting of a pulsed ion source, two solenoids and a drift tube as diagnostic section has been set up. The beam potential has been measured time-resolved by a residual gas ion energy analyser (RGA). A numerical simulation for the calculation of self-consistent equilibrium states of the beam plasma has been developed to determine plasma parameters which are difficult to measure directly. The results of the simulation have been compared with the measured data to investigate the behavior of the compensation electrons as a function of time. The acquired data show that the theoretical rise time of space charge compensation is shorter by a factor of two than the build-up time determined experimentally. With a view to describing the process of SCC, an interpretation of the results is given.
This paper presents the design and implementation of a group-oriented, decentralised, virtual learning setting in which students meet in groups of 3-5 people at different locations all over the world and communicate via the internet. After presenting the objective of such a didactical design, the paper gives an insight into the technical implementation. It presents the advantages and disadvantages of several internet services in such a virtual setting and a way of combining these internet applications according to their special characteristics. The role of teachers changes to that of coordinators, while the communication process within and between the groups becomes more important, as discussed in the following chapter. The paper concludes with the presentation of two practical applications as offered by the Institute for Didactics and Economics at the Johann Wolfgang Goethe-University Frankfurt/Main (Germany) and some evaluating remarks.
Coherence in hypertext
(1999)
At first sight, hypertext does not look like a good subject for research on coherence. Hypertext is non-linear text, and coherence is typically defined for linear text, so coherence does not seem to be involved in hypertext at all. But on closer inspection it emerges that some of the basic structural problems with hypertexts are classical problems of coherence.
Particles fulfill several distinct central roles in the Japanese language. They can mark arguments as well as adjuncts, and can be functional or have semantic functions. There is, however, no straightforward matching from particles to functions, as, e.g., 'ga' can mark the subject, the object or the adjunct of a sentence. Particles can cooccur. Verbal arguments that could be identified by particles can be eliminated in the Japanese sentence. And finally, in spoken language particles are often omitted. A proper treatment of particles is thus necessary to make an analysis of Japanese sentences possible. Our treatment is based on an empirical investigation of 800 dialogues. We set up a type hierarchy of particles motivated by their subcategorizational and modificational behaviour. This type hierarchy is part of the Japanese syntax in VERBMOBIL.
This paper presents a new timing driven approach for cell replication tailored to the practical needs of standard cell layout design. Cell replication methods have been studied extensively in the context of generic partitioning problems. However, until now it has remained unclear what practical benefit can be obtained from this concept in a realistic environment for timing driven layout synthesis. Therefore, this paper presents a timing driven cell replication procedure, demonstrates its incorporation into a standard cell placement and routing tool and examines its benefit on the final circuit performance in comparison with conventional gate or transistor sizing techniques. Furthermore, we demonstrate that cell replication can deteriorate the stuck-at fault testability of circuits and show that stuck-at redundancy elimination must be integrated into the placement procedure. Experimental results demonstrate the usefulness of the proposed methodology and suggest that cell replication should be an integral part of the physical design flow complementing traditional gate sizing techniques.
Low energy beam transport (LEBT) for a future heavy ion driven inertial fusion (HIDIF [1]) facility is a crucial point when using a Bi⁺ beam of 40 mA at 156 keV. High space charge forces (generalised perveance K = 3.6×10⁻³) restrict the use of electrostatic focussing systems. On the other hand, magnetic lenses using space charge compensation suffer from the low particle velocity. Additionally, the emittance requirements are very high in order to avoid particle losses in the linac and at ring injection [2]. Furthermore, source noise and the rise time of space charge compensation [3] might enhance particle losses and emittance. Gabor lenses [4], which use a continuous space charge cloud for focussing, could be a serious alternative to conventional LEBT systems. They combine strong cylinder-symmetric focussing with partial space charge compensation and low emittance growth due to lower non-linear fields. A high tolerance against source noise and current fluctuations and reduced investment costs are other possible advantages. The proof of principle has already been shown [5, 6]. To broaden the experience, an experimental program was started. The first experimental results from this program, using a double Gabor lens (DGPL, see fig. 1) LEBT system for transporting a high perveance Xe⁺ beam, will be presented, and the results of numerical simulations will be shown.
The determination of the beam emittance using conventional destructive methods suffers from two main disadvantages. The interaction between the ion beam and the measurement device produces a high amount of secondary particles; those particles interact with the beam and can change the transport properties of the accelerator. Particularly in the low energy section of high current accelerators, as proposed for IFMIF, heavy ion inertial fusion devices (HIDIF) and spallation sources (ESS, SNS), the power deposited on the emittance measurement device can lead to extensive heating of the detector itself and can destroy or at least misalign the device (a slit or grid, for example). CCD camera measurements of the light emitted by the interaction of beam ions with the residual gas are commonly used for the determination of the beam emittance; fast data acquisition and high time resolution are additional features of such a method. Therefore a matrix formalism is used to derive the emittance from the measured profile of the beam [1,2], which does not take space charge effects and emittance growth into account. A new method to derive the phase space distribution of the beam from a single CCD camera image using statistical numerical methods will be presented together with measurements. The results will be compared with measurements obtained from a conventional Allison-type (slit-slit) emittance measurement device.
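Whatever the acquisition method, the quantity finally reported is usually the rms emittance, computed from second moments of the phase-space distribution. A minimal sketch of that standard definition on simulated data (not the paper's statistical reconstruction method):

```python
# rms emittance from a sampled phase-space distribution:
# eps_rms = sqrt(<x^2><x'^2> - <x x'>^2), with centered coordinates.
import numpy as np

def rms_emittance(x, xp):
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)

rng = np.random.default_rng(1)
x = rng.normal(scale=1.0, size=100_000)   # position, e.g. in mm
xp = rng.normal(scale=2.0, size=100_000)  # divergence, e.g. in mrad
print(rms_emittance(x, xp))               # close to 1.0 * 2.0 = 2.0
```

For uncorrelated Gaussian coordinates the rms emittance is simply the product of the two standard deviations, which the sample estimate above approaches.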
In the current globalization debate the law appears to be entangled in economic and political developments which move into a new dimension of depoliticization, de-centralization and de-individualization. For all the correct observations in detail, though, this debate is bringing about a drastic (polit)economic reduction of the role of law in the globalization process that I wish to challenge in this paper. Here one has to take on Wallerstein’s misconception of “worldwide economies” according to which the formation of the global society is seen as a basically economic process. Autonomous globalization processes in other social spheres running parallel to economic globalization need to be taken seriously. In protest against such (polit)economic reductionism several strands of the debate, among them the neo-institutionalist theory of “global culture”, post-modern concepts of global legal pluralism, systems theory studies of differentiated global society and various versions of “global civil society” have shaped a concept of a polycentric globalization. From these angles the remarkable multiplicity of the world society, in which tendencies to re-politicization, re-regionalization and re-individualization are becoming visible at the same time, becomes evident. I shall contrast two current theses on the globalization of law with two less current counter-theses: First thesis: globalization is relevant for law because the emergence of global markets undermines the control potential of national policy, and therefore also the chances of legal regulation. First counter-thesis: globalization produces a set of problems intrinsic to law itself, consisting in a change to the dominant lawmaking processes. Second thesis: globalization means that the law institutionalizes the worldwide shift in power from governmental actors to economic actors. 
Second counter-thesis: globalization means that the law has a chance of contributing to a dual constitution of autonomous sectors of world society.
We present a solution for the representation of Japanese honorific information in the HPSG framework. Basically, there are three dimensions of honorification. We show that a treatment is necessary that involves both the syntactic and the contextual level of information. The Japanese grammar is part of a machine translation system.
Expletives as features
(2000)
Expletives have always been a central topic of theoretical debate and subject to different analyses within the different stages of the Principles and Parameters theory (see Chomsky 1981, 1986, 1995; Lasnik 1992, 1995; Frampton and Gutman 1997; among others). However, most analyses center on the question of how to explain the behavior of expletives in A-chains (such as there in English or það in Icelandic). No account relates wh-expletives (as one finds them in so-called partial wh-movement constructions in languages such as Hungarian, Romani, and German) to expletives in A-chains. In this paper, I argue that the framework of the Minimalist Program opens up the possibility of accounting for expletive-associate relations in A-/A'-chains in a unified manner. The main idea of the unitary analysis is that an expletive is an overtly realized feature bundle that is (sub)extracted from its associate DP. There in an expletive-associate chain is a moved D-feature which originates inside the associate DP. Similarly, in A'-chains, the wh-expletive originates as a focus-/wh-feature in the wh-phrase with which it is associated. This analysis provides evidence for the feature-checking theory in Chomsky (1995). The paper is organized as follows. Section 2 contains the discussion of expletive there. In section 3 I suggest an analysis for wh-expletives, and I also explore whether this analysis can be extended to relations between X°-categories such as auxiliary and participle complexes.
This paper describes a first version of a GPS flight recorder for homing pigeons. The GPS recorder consists of a hybrid GPS board, a 19 × 19 mm patch antenna, a 3 V lithium battery as power supply, a DC-DC converter, a logging facility and an additional microprocessor. It has a weight of 33 g. Prototypes were tested and worked reliably with a sampling rate of 1/s and an operation time of about 3 h. In first tests on homing pigeons, 9 flight paths were recorded, showing details like loops flown immediately after the release, complete routes over 30 km including detours, rest periods and speed.
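Route lengths like the 30 km quoted above are typically obtained by summing great-circle distances between consecutive GPS fixes. A minimal haversine sketch (illustrative coordinates, not data from the paper):

```python
# Great-circle track length from a sequence of (lat, lon) GPS fixes
# via the haversine formula.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(a))

def track_length_km(fixes):
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Three fixes along one meridian, spanning 0.27 degrees of latitude
# (about 30 km), sampled in two segments.
fixes = [(50.11, 8.68), (50.25, 8.68), (50.38, 8.68)]
print(round(track_length_km(fixes), 1))
```

With 1/s sampling over a 3 h flight, the same sum over ~10,000 fixes gives the path length including detours, not just the beeline distance.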
What role does language play in the development of numerical cognition? In the present paper I argue that the evolution of symbolic thinking (as a basis for language) laid the grounds for the emergence of a systematic concept of number. This concept is grounded in the notion of an infinite sequence and encompasses number assignments that can focus on cardinal aspects ("three pencils"), ordinal aspects ("the third runner"), and even nominal aspects ("bus #3"). I show that these number assignments are based on a specific association of relational structures, and that it is the human language faculty that provides a cognitive paradigm for such an association, suggesting that language played a pivotal role in the evolution of systematic numerical cognition.
I discuss the status of WH-words for interrogative interpretations, and show that the derivation of constituent questions evolves from a specific interplay of syntactic and semantic representations with pragmatics. I argue that WH-pronouns are not ‘interrogative’. Rather, they are underspecified elements; due to this underspecification, WH-words can form a constitutive part not only of interrogative, but also of exclamative and declarative clauses. WH-words introduce a variable of a particular conceptual domain into the semantic representation. Accordingly, they have to be specified for interpretation. Different WH-contexts give rise to different interpretations. In a cross-linguistic overview, I discuss the characteristic elements contributing to the derivation of interrogatives. I argue that specific particles or their phonologically empty counterparts in the head of CP contribute the interrogative aspect. The speech act of ‘asking’ is then carried out via an intonational contour that identifies a question. By default, this intonational contour operates on interrogative sentences; however, other sentence formats – in particular, those of declarative sentences – are possible as well. The distinction of (a) grammatical (syntactic, semantic and phonological) sentence formats for interrogative and declarative sentences, and (b) intonational contours serving the discrimination of speech acts like questions and assertions, can be related to psychological and neurological evidence.
In its first part, this contribution briefly reviews the application of neural network methods to medical problems and characterizes their advantages and problems in the context of the medical background. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic systems. Then the paradigm of neural networks is briefly introduced, and the main problems of medical data bases and the basic approaches for training and testing a network with medical data are described. Additionally, the problem of interfacing the network and interpreting its results is addressed, and the neuro-fuzzy approach is presented. Finally, as a case study of neural rule-based diagnosis, septic shock diagnosis is described, on the one hand by a growing neural network and on the other hand by a rule-based system. Keywords: Statistical Classification, Adaptive Prediction, Neural Networks, Neurofuzzy, Medical Systems
In linguistics and the philosophy of language, the mass/count distinction has traditionally been regarded as a bi-partition on the nominal domain, where typical instances are nouns like "beef" (mass) vs."cow" (count). In the present paper, we argue that this partition reveals a system that is based on both syntactic features and conceptual features, and present experimental evidence suggesting that the discrimination of the two kinds of features has a psychological reality.
The experiment NA49 at the CERN SPS is a large acceptance detector for charged hadrons. The identification of the neutral strange hadrons Lambda and AntiLambda is based on the measurement of their charged decay particles and the reconstruction of the decay vertex. The charged particles were measured with four time projection chambers (TPCs), two of which are situated inside two large dipole magnets, while the other two are downstream of the magnets. Lambda and AntiLambda baryons have been measured in central Pb+Pb collisions at 40, 80 and 160 GeV/nucleon over a wide range in rapidity (1 - 5) and transverse momentum (0 - 3 GeV/c). Particle yields and spectra will be shown for the different energies. The results will be put into the existing systematics of Lambda production as a function of beam energy.
A model is proposed that interprets a variety of connected speech processes as resulting from prosodic modulations at different tiers of functional speech motor control along the hypo-hyper dimension [10]. The general background of the model is given by the trichotomy of A-, B- and C-prosodic phenomena [15] that together constitute the acoustic makeup of any speech utterance (with regard to their respective time domains at the utterance/phrase level, the syllabic level and the segmental level).
Evaluating the quality of credit portfolio risk models is an important issue for both banks and regulators. Lopez and Saidenberg (2000) suggest cross-sectional resampling techniques in order to make efficient use of available data. We show that their proposal disregards cross-sectional dependence in resampled portfolios, which renders standard statistical inference invalid. We proceed by suggesting the Berkowitz (1999) procedure, which relies on standard likelihood ratio tests performed on transformed default data. We simulate the power of this approach in various settings including one in which the test is extended to incorporate cross-sectional information. To compare the predictive ability of alternative models, we propose to use either Bonferroni bounds or the likelihood-ratio of the two models. Monte Carlo simulations show that a default history of ten years can be sufficient to resolve uncertainties currently present in credit risk modeling.
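The core of the Berkowitz procedure is a likelihood-ratio test on inverse-normal-transformed default data against the standard normal. A minimal sketch of that LR statistic on simulated inputs (illustrative only; the full procedure also tests for autocorrelation, which is omitted here):

```python
# Berkowitz-style LR test: if the risk model is correct, the transformed
# observations z should be i.i.d. N(0, 1). Compare the likelihood under a
# fitted N(mu, sigma^2) against N(0, 1).
import math
import random

def berkowitz_lr(z):
    """LR statistic; large values reject the N(0, 1) null."""
    n = len(z)
    mu = sum(z) / n
    var = sum((v - mu) ** 2 for v in z) / n  # MLE variance
    def loglik(m, s2):
        return sum(-0.5 * (math.log(2 * math.pi * s2) + (v - m) ** 2 / s2)
                   for v in z)
    return 2 * (loglik(mu, var) - loglik(0.0, 1.0))

random.seed(0)
z_good = [random.gauss(0, 1) for _ in range(500)]   # model is correct
z_bad = [random.gauss(0.8, 1) for _ in range(500)]  # model misses the mean
print(berkowitz_lr(z_good) < berkowitz_lr(z_bad))   # True
```

Under the null the statistic is asymptotically chi-squared with two degrees of freedom, which is what gives the test its power even with the short default histories discussed above.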