We discuss the phase diagram of moderately dense, locally neutral three-flavor quark matter using the framework of an effective model of quantum chromodynamics with a local interaction. The phase diagrams in the plane of temperature and quark chemical potential as well as in the plane of temperature and lepton-number chemical potential are discussed.
We study the effect of neutrino trapping on the phase diagram of dense, locally neutral three-flavor quark matter within the framework of a Nambu--Jona-Lasinio model. In the analysis, dynamically generated quark masses are taken into account self-consistently. The phase diagrams in the plane of temperature and quark chemical potential, as well as in the plane of temperature and lepton-number chemical potential are presented. We show that neutrino trapping favors two-flavor color superconductivity and disfavors the color-flavor-locked phase at intermediate densities of matter. At the same time, the location of the critical line separating the two-flavor color-superconducting phase and the normal phase of quark matter is little affected by the presence of neutrinos. The implications of these results for the evolution of protoneutron stars are briefly discussed. PACS numbers: 12.39.-x 12.38.Aw 26.60.+c
The properties of the outer crust of non-accreting cold neutron stars are studied using modern nuclear data and theoretical mass tables, updating in particular the classic work of Baym, Pethick and Sutherland. Experimental data from the 2003 atomic mass table of Audi, Wapstra, and Thibault are used, and a thorough comparison of many modern theoretical nuclear models, both relativistic and non-relativistic, is performed for the first time. In addition, the influence of pairing and deformation is investigated. State-of-the-art theoretical nuclear mass tables are compared in order to check their differences concerning the neutron dripline, magic neutron numbers, the equation of state, and the sequence of neutron-rich nuclei up to the dripline in the outer crust of non-accreting cold neutron stars.
Elliptic flow analysis at RHIC with the Lee-Yang Zeroes method in a relativistic transport approach
(2006)
The Lee-Yang zeroes method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt s = 200 A GeV with the UrQMD model. In this transport approach, the true event plane is known, and both nonflow effects and event-by-event v_2 fluctuations are present. Although the low resolution prohibits applying the method to the most central and peripheral collisions, the integral and differential elliptic flow from the Lee-Yang zeroes method agree very well with the exact v_2 values for semi-central collisions.
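The core step of the Lee-Yang zeroes method can be illustrated on synthetic events: the first zero r_0 of the generating function |⟨exp(i r Q)⟩| yields the integrated flow V_2 = j_01/r_0, where j_01 ≈ 2.405 is the first zero of the Bessel function J_0. The sketch below is not the UrQMD analysis of the abstract; the multiplicity, flow value, event count, and scan grid are all illustrative choices.

```python
import numpy as np

# Toy demonstration of the Lee-Yang zeroes method for integrated elliptic
# flow. All numbers (multiplicity, v2, event count) are illustrative.

rng = np.random.default_rng(0)
n_events, mult, v2_true = 5000, 500, 0.08
J01 = 2.40483  # first zero of the Bessel function J0

# Sample azimuthal angles with dN/dphi ~ 1 + 2 v2 cos(2 phi) via the
# inverse CDF, then rotate each event by a random reaction-plane angle.
grid = np.linspace(0.0, 2 * np.pi, 4097)
cdf = (grid + v2_true * np.sin(2 * grid)) / (2 * np.pi)
dphi = np.interp(rng.random((n_events, mult)), cdf, grid)
psi = rng.uniform(0.0, 2 * np.pi, (n_events, 1))
phi = psi + dphi

# Projected flow vector Q for each event (analysis angle theta = 0).
Q = np.cos(2 * phi).sum(axis=1)

# Generating function |G(ir)| on a grid of r; its first minimum locates r0.
r = np.linspace(0.02, 0.12, 301)
G = np.abs(np.exp(1j * r[:, None] * Q[None, :]).mean(axis=1))
r0 = r[np.argmin(G)]

v2_est = J01 / (r0 * mult)  # per-particle elliptic flow estimate
print(f"true v2 = {v2_true}, Lee-Yang estimate = {v2_est:.4f}")
```

Because the zero is located from the collective behaviour of whole events, the estimate is largely insensitive to few-particle nonflow correlations, which is the method's main selling point.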
The transverse momentum dependence of the anisotropic flow v_2 for pi, K, nucleon, Lambda, Xi and Omega is studied for Au+Au collisions at sqrt s_NN = 200 GeV within two independent string-hadron transport approaches (RQMD and UrQMD). Although both models reach only 60% of the absolute magnitude of the measured v_2, they both predict the particle type dependence of v_2, as observed by the RHIC experiments: v_2 exhibits a hadron-mass hierarchy (HMH) in the low p_T region and a number-of-constituent-quark (NCQ) dependence in the intermediate p_T region. The failure of the hadronic models to reproduce the absolute magnitude of the observed v_2 indicates that transport calculations of heavy ion collisions at RHIC must incorporate interactions among quarks and gluons in the early, hot and dense phase. The presence of an NCQ scaling in the string-hadron model results suggests that the particle-type dependencies observed in heavy-ion collisions at intermediate p_T are related to the hadronic cross sections in vacuum rather than to the hadronization process itself, as suggested by quark recombination models.
Based on the UrQMD transport model, the transverse momentum and the rapidity dependence of the Hanbury-Brown-Twiss (HBT) radii R_L, R_O, R_S as well as the cross term R_OL at SPS energies are investigated and compared with the experimental NA49 and CERES data. The rapidity dependence of the R_L, R_O, R_S is weak while the R_OL is significantly increased at large rapidities and small transverse momenta. The HBT "life-time" issue (the phenomenon that the calculated sqrt R_O^2-R_S^2 value is larger than the correspondingly extracted experimental data) is also present at SPS energies.
We propose to measure correlations of heavy-flavor hadrons to address the status of thermalization at the partonic stage of light quarks and gluons in high-energy nuclear collisions, using the example of azimuthal correlations of D-Dbar pairs. We show that hadronic interactions at the late stage cannot disturb these correlations significantly. Thus, a decrease or the complete absence of these initial correlations indicates frequent interactions of heavy-flavor quarks in the partonic stage. Therefore, early thermalization of light quarks is likely to be reached. PACS numbers: 25.75.-q
We propose to measure azimuthal correlations of heavy-flavor hadrons to address the status of thermalization at the partonic stage of light quarks and gluons in high-energy nuclear collisions. In particular, we show that hadronic interactions at the late stage cannot significantly disturb the initial back-to-back azimuthal correlations of DDbar pairs. Thus, a decrease or the complete absence of these initial correlations does indicate frequent interactions of heavy-flavor quarks and also light partons in the partonic stage, which are essential for the early thermalization of light partons.
We develop a 1+1 dimensional hydrodynamical model for central heavy-ion collisions at ultrarelativistic energies. Deviations from Bjorken scaling are taken into account by implementing finite-size profiles for the initial energy density. The calculated rapidity distributions of pions, kaons and antiprotons in central Au+Au collisions at the c.m. energy 200 AGeV are compared with experimental data of the BRAHMS Collaboration. The sensitivity of the results to the choice of the equation of state, the parameters of the initial state and the freeze-out conditions is investigated. The best fit of the experimental data is obtained for a soft equation of state and Gaussian-like initial profiles of the energy density.
The concept of Large Extra Dimensions (LED) provides a way of solving the Hierarchy Problem which concerns the weakness of gravity compared with the strong and electro-weak forces. A consequence of LED is that miniature Black Holes (mini-BHs) may be produced at the Large Hadron Collider in p+p collisions. The present work uses the CHARYBDIS mini-BH generator code to simulate the hadronic signal which might be expected in a mid-rapidity particle tracking detector from the decay of these exotic objects if indeed they are produced. An estimate is also given for Pb+Pb collisions.
The experimental signatures of TeV-mass black hole (BH) formation in heavy ion collisions at the LHC are examined. We find that black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking-decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Due to the rather moderate luminosity in the first year of LHC running, the best chance for the observation of BHs or BHRs at this early stage will be by ionizing tracks in the ALICE TPC. Finally we point out that stable BHRs would be interesting candidates for energy production by conversion of mass to Hawking radiation.
We examine experimental signatures of TeV-mass black hole formation in heavy ion collisions at the LHC. We find that the black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking-decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Finally we point out that a Heckler-Kapusta-Hawking plasma may form from the emitted mono-jets. In this context we present new simulation data of Mach shocks and of the evolution of initial conditions until the freeze-out.
The production of Large Extra Dimension (LXD) Black Holes (BHs), with a new fundamental mass scale of M_f = 1 TeV, has been predicted to occur at the Large Hadron Collider, LHC, at the formidable rate of 10^8 per year in p-p collisions at full energy, 14 TeV, and at full luminosity. We show that such LXD-BH formation will be experimentally observable at the LHC by the complete disappearance of all very high p_t (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f = 1 TeV. We suggest complementing this clear cut-off signal at M > 2*500 GeV in the di-jet correlation function by detecting the subsequent Hawking-decay products of the LXD-BHs: either multiple high-energy (> 100 GeV) SM mono-jets (i.e., with the away-side jet missing), sprayed off the evaporating BHs isentropically into all directions, or the thermalization of the multiple overlapping Hawking radiation in a Heckler-Kapusta-Hawking plasma. Microcanonical quantum statistical calculations of the Hawking evaporation process for these LXD-BHs show that cold black hole remnants (BHRs) of mass ~ M_f remain as the ashes of these spectacular di-jet-suppressed events. Strong di-jet suppression is also expected with heavy ion beams at the LHC, due to quark-gluon-plasma-induced jet attenuation at medium to low jet energies, p_t < 200 GeV. The (mono-)jets in these events can be used to trigger on Tsunami-like emission of secondary compressed QCD matter at well-defined Mach angles, both on the trigger side and on the away-side (missing) jet. The Mach-shock angles allow for a direct measurement of both the equation of state (EoS) and the speed of sound c_s via the supersonic bang in the "big bang" matter. We discuss the importance of the underlying strong collective flow - the gluon storm - of the QCD matter for the formation and evolution of these Mach-shock cones.
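The link between Mach-shock angle and speed of sound invoked here is the textbook Mach cone relation (not specific to this paper): for a jet moving with velocity v_jet > c_s through the medium,

```latex
\cos\theta_M = \frac{c_s}{v_\mathrm{jet}}, \qquad c_s^2 = \left.\frac{\partial p}{\partial e}\right|_{\mathrm{EoS}},
```

so a measurement of the cone angle theta_M of the shock front constrains c_s and thereby the equation of state of the traversed matter.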
We predict a significant deformation of Mach shocks by the gluon storm in central Au+Au collisions at RHIC and LHC energies, as compared to the case of weakly coupled jets propagating through a static medium. A possible complete stopping of p_t > 50 GeV jets at the LHC within 2-3 fm yields nonlinear high-density Mach shocks in the quark-gluon plasma, which can be studied in the complex emission and disintegration pattern of the possibly supercooled matter. We report on first full 3-dimensional fluid dynamical studies of the strong effects of a first-order phase transition on the evolution and the Tsunami-like Mach shock emission of the QCD matter.
We have calculated the D-meson spectral density at finite temperature within a self-consistent coupled-channel approach that generates dynamically the Lambda_c (2593) resonance. We find a small mass shift for the D-meson in this hot and dense medium while the spectral density develops a sizeable width. The reduced attraction felt by the D-meson in hot and dense matter together with the large width observed have important consequences for the D-meson production in the future CBM experiment at FAIR.
The rapidity dependence of the single and double neutron-to-proton ratios of nucleon emission from the isospin-asymmetric but mass-symmetric reactions Zr+Ru and Ru+Zr, in the energy range 100-800 A MeV and the impact parameter range 0-8 fm, is investigated. A reaction system with isospin asymmetry and mass symmetry has the advantage of simultaneously revealing the dependence on the symmetry energy and the degree of isospin equilibrium. We find that the beam energy and impact parameter dependence of the slope parameter of the double neutron-to-proton ratio (F_D) as a function of rapidity is quite sensitive to the density dependence of the symmetry energy, especially at energies E_b ~ 400 A MeV and reduced impact parameters around 0.5. Here the symmetry energy effect on F_D is enhanced compared to the single neutron-to-proton ratio. The degree of equilibrium with respect to isospin (isospin mixing) in terms of F_D is addressed, and its dependence on the symmetry energy is also discussed.
Several observables of unbound nucleons which are to some extent sensitive to the medium modifications of nucleon-nucleon elastic cross sections in neutron-rich intermediate-energy heavy ion collisions are investigated. The effect of the splitting of neutron and proton effective masses on the cross sections is discussed. It is found that the transverse flow as a function of rapidity, Q_zz as a function of momentum, and the ratio R_t/l of the halfwidth of the transverse to that of the longitudinal rapidity distribution are very sensitive to the medium modifications of the cross sections. The transverse momentum distribution of two-nucleon correlation functions does not yield information on the in-medium cross section.
No black holes at IceCube
(2006)
Gravitational radiation from ultra high energy cosmic rays in models with large extra dimensions
(2006)
The effects of classical gravitational radiation in models with large extra dimensions are investigated for ultra high energy cosmic rays (CRs). The cross sections are implemented into a simulation package (SENECA) for high energy hadron induced CR air showers. We predict that gravitational radiation from quasi-elastic scattering could be observed at incident CR energies above 10^9 GeV for a setting with more than two extra dimensions. It is further shown that this gravitational energy loss can alter the energy reconstruction for CR energies E_CR > 5 10^9 GeV.
The pion source as seen through HBT correlations at RHIC energies is investigated within the UrQMD approach. We find that the calculated transverse momentum, centrality, and system size dependence of the Pratt-HBT radii R_L and R_S are reasonably well in line with experimental data. The predicted R_O values in central heavy ion collisions are larger as compared to experimental data. The corresponding quantity sqrt R_O^2-R_S^2 of the pion emission source is somewhat larger than experimental estimates.
We demonstrate the occurrence of canonical suppression associated with the conservation of a U(1) charge in current transport models. For this study a pion gas is simulated within two different transport approaches by incorporating inelastic and volume-limited collisions pi pi <-> K Kbar for the production of kaon pairs. Both descriptions can dynamically account for the suppression of the yields of rare strange particles in a limited box, in full accordance with a canonical statistical description.
We propose to use hadron number fluctuations in limited momentum regions to study the evolution of the initial flows in high-energy nuclear collisions. In this method, by properly preparing the collision sample, the projectile and target initial flows are tagged by fluctuations in the number of colliding nucleons. We discuss three limiting cases of the evolution of the flows (transparency, mixing and reflection) and present quantitative predictions for them obtained within several models. Finally, we apply the method to the NA49 results on fluctuations of the negatively charged hadron multiplicity in Pb+Pb interactions at 158A GeV and conclude that the data favor a hydrodynamical model with a significant degree of mixing of the initial flows at the early stage of the collision.
Language universals are statements that are true of all languages, for example: “all languages have stop consonants”. But beneath this simple definition lurks deep ambiguity, and this triggers misunderstanding in both interdisciplinary discourse and within linguistics itself. A core dimension of the ambiguity is captured by the opposition “absolute vs. statistical universal”, although the literature uses these terms in varied ways. Many textbooks draw the boundary between absolute and statistical according to whether a sample of languages contains exceptions to a universal. But the notion of an exception-free sample is not very revealing even if the sample contained all known languages: there is always a chance that an as yet undescribed language, or an unknown language from the past or future, will provide an exception.
Recent approaches to Word Sense Disambiguation (WSD) generally fall into two classes: (1) information-intensive approaches and (2) information-poor approaches. Our hypothesis is that for memory-based learning (MBL), a reduced amount of data is more beneficial than the full range of features used in the past. Our experiments show that MBL combined with a restricted set of features and a feature selection method that minimizes the feature set leads to competitive results, outperforming all systems that participated in the SENSEVAL-3 competition on the Romanian data. Thus, with this specific method, a tightly controlled feature set improves the accuracy of the classifier, reaching 74.0% in the fine-grained and 78.7% in the coarse-grained evaluation.
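Memory-based learning is essentially k-nearest-neighbour classification over symbolic features (as in TiMBL). The hypothesis above, that a tightly controlled feature set helps MBL, can be sketched as 1-NN with the overlap metric plus greedy forward feature selection scored by leave-one-out accuracy. The data, feature layout, and sense labels below are invented for illustration; this is not the SENSEVAL-3 setup.

```python
import random

# Toy sketch: memory-based WSD = 1-nearest-neighbour with the overlap
# metric, plus greedy forward feature selection scored by leave-one-out
# (LOO) accuracy. Feature 0 is informative; features 1-3 are noise.

random.seed(0)
SENSES = {0: "bank/river", 1: "bank/river", 2: "bank/money",
          3: "bank/money", 4: "bank/river", 5: "bank/money"}
data = []
for i in range(60):
    f0 = i % 6                                       # determines the sense
    noise = [random.randrange(5) for _ in range(3)]  # three noise features
    data.append(([f0] + noise, SENSES[f0]))

def loo_accuracy(feats):
    """LOO accuracy of 1-NN using only the given feature indices."""
    hits = 0
    for i, (x, y) in enumerate(data):
        best = max((sum(x[f] == z[f] for f in feats), j)
                   for j, (z, _) in enumerate(data) if j != i)
        hits += data[best[1]][1] == y
    return hits / len(data)

# Greedy forward selection: add the feature that most improves LOO accuracy.
selected, best_acc = [], 0.0
while True:
    gains = [(loo_accuracy(selected + [f]), f)
             for f in range(4) if f not in selected]
    acc, f = max(gains)
    if acc <= best_acc:
        break
    selected, best_acc = selected + [f], acc

print("selected features:", selected, "LOO accuracy:", best_acc)
```

On this toy data the selection keeps only the informative feature and reaches perfect LOO accuracy, mirroring the abstract's point that a minimized feature set can outperform the full one.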
Prepositional phrase (PP) attachment is one of the major sources of errors in traditional statistical parsers. The reason for this lies in the type of information necessary for resolving structural ambiguities. For parsing, it is assumed that distributional information about parts-of-speech and phrases is sufficient for disambiguation. For PP attachment, in contrast, lexical information is needed. The problem of PP attachment has sparked much interest ever since Hindle and Rooth (1993) formulated the problem in a way that can be easily handled by machine learning approaches: In their approach, PP attachment is reduced to the decision between noun and verb attachment, and the relevant information is reduced to the two possible attachment sites (the noun and the verb) and the preposition of the PP. Brill and Resnik (1994) extended the feature set to the now standard 4-tuple, which also contains the noun inside the PP. Among the many publications on the problem of PP attachment, Volk (2001; 2002) describes the only system for German. He uses a combination of supervised and unsupervised methods. The supervised method is based on the back-off model by Collins and Brooks (1995); the unsupervised part consists of heuristics such as "If there is a support verb construction present, choose verb attachment". Volk trains his back-off model on the Negra treebank (Skut et al., 1998) and extracts frequencies for the heuristics from the "Computerzeitung", which also serves as the test data set. Consequently, it is difficult to compare Volk's results to other results for German, including the results presented here, since he not only uses a combination of supervised and unsupervised learning but also performs domain adaptation. Most researchers working on PP attachment seem to be satisfied with a standalone PP attachment system; we have found hardly any work on integrating the results of such approaches into actual parsers. The only exceptions are Mehl et al.
(1998) and Foth and Menzel (2006), both working with German data. Mehl et al. report a slight improvement of PP attachment from 475 correct PPs out of 681 PPs for the original parser to 481 PPs. Foth and Menzel report an improvement of overall accuracy from 90.7% to 92.2%. Both integrate statistical attachment preferences into a parser. First, we will investigate whether dependency parsing, which generally uses lexical information, shows the same performance on PP attachment as an independent PP attachment classifier does. Then we will investigate an approach that allows the integration of PP attachment information into the output of a parser without having to modify the parser: The results of an independent PP attachment classifier are integrated into the parse of a dependency parser for German in a postprocessing step.
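The postprocessing idea described here, correcting a parser's PP attachments with an independent classifier without touching the parser itself, can be sketched in a few lines. The sentence, the head-index representation, and the stand-in classifier below are all invented for illustration.

```python
# Toy sketch of PP-attachment postprocessing: take a dependency parse
# (a list of head indices) and overwrite the head of each preposition
# with the attachment site chosen by an independent classifier.
# Sentence, parse, and classifier are invented for illustration.

sentence = ["she", "sees", "the", "man", "with", "the", "telescope"]
# Parser output: index of each token's head (-1 = root). The parser
# attached "with" (index 4) to the noun "man" (index 3).
heads = [1, -1, 3, 1, 3, 6, 4]

def pp_classifier(verb, noun, prep, pp_noun):
    """Stand-in for a trained 4-tuple classifier: 'verb' or 'noun'."""
    return "verb" if (prep, pp_noun) == ("with", "telescope") else "noun"

def postprocess(sentence, heads):
    fixed = list(heads)
    for i, word in enumerate(sentence):
        if word == "with":                  # toy PP detection
            noun_site = heads[i]            # parser's chosen noun
            verb_site = heads[noun_site]    # the noun's governing verb
            pp_noun = sentence[-1]
            if pp_classifier(sentence[verb_site], sentence[noun_site],
                             word, pp_noun) == "verb":
                fixed[i] = verb_site        # reattach the PP to the verb
    return fixed

print(postprocess(sentence, heads))
```

Only the preposition's head index changes; the rest of the parse is untouched, which is what makes the approach parser-independent.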
The renowned Grimm Dictionary (1854-1961) makes the statement that the German copula sein (to be) is “the most general and colourless of all verbal concepts” (der allgemeinste und farbloseste aller verbalbegriffe). A more concise summary of the linguistic issues surrounding the copula is hardly possible. These two properties (and the latent tension between them!) make copulas a particularly interesting and vexing subject of linguistic research. Copulas appear to be almost colourless, i.e., devoid of any concrete meaning, thus leading to the question of why such expressions exist at all, not only in German but in the majority of the world’s languages. And at the same time copulas presumably provide the best window into the core of verbal concepts thereby telling us what it actually means to be a verb – at least in a language like German or English. While there is a rather rich body of research on copulas in philosophical and formal semantics including several in-depth studies on the copular systems of individual languages, copulas have received comparably little attention from a typological perspective. The monograph of Regina Pustet sets out to fill this gap. She presents an extensive cross-linguistic study of copula usage based on a sample of 154 languages drawn from the language families of the world. The analysis is embedded in the theoretical framework of functional typology. The study aims at uncovering universal principles that govern the distribution of copulas in nominal, adjectival, and verbal predications. Its major objective is the development of a “semantically-based model of copula distribution” (p.62) by means of which the presence vs. absence of copulas can be motivated through the inherent meaning of the lexical items they potentially combine with. 
Drawing mainly on the work by Givón (1979, 1984) and Croft (1991, 2001), who provide a functional foundation of the traditional parts of speech, Pustet identifies four semantic parameters which, if taken together, are claimed to support substantial generalisations on copula distribution – within a given language as well as crosslinguistically. These parameters are DYNAMICITY, TRANSIENCE, TRANSITIVITY, and DEPENDENCY. Pustet goes on to argue – and this is in fact the driving force behind the overall monograph – that the distributional behaviour of copulas, in turn, yields a useful methodology for developing a general approach to lexical categorization. Thus, in the long run Pustet aims at contributing to a better understanding of the traditional parts of speech, noun, adjective, and verb by defining them in terms of “semantic feature bundles, which can be arranged in [a] coherent semantic similarity space” (p.193).
This paper presents an LTAG analysis of reflexives like himself and reciprocals like each other. These items need to find a c-commanding antecedent from which they retrieve (part of) their own denotation and with which they syntactically agree. The relation between the anaphoric item and its antecedent must satisfy important locality conditions (Chomsky (1981)).
The goal of this paper is to re-examine the status of the condition in (1) proposed in Alexiadou and Anagnostopoulou (2001; henceforth A&A 2001), in view of recent developments in syntactic theory. (1) The subject-in-situ generalization (SSG) By Spell-Out, vP can contain only one argument with a structural Case feature. We argue that (1) is a more general condition than previously recognized, and that the domain of its application is parametrized. More specifically, based on a comparison between Indo-European (IE) and Khoisan languages, we argue that (1) supports an interpretation of the EPP as a general principle, and not as a property of T. Viewed this way, the SSG is a condition that forces dislocation of arguments as a consequence of a constraint on Case checking.
Presupposition
(2007)
Effective knowledge communication presupposes common ground (Clark & Brennan, 1991) that needs to be established and maintained. This is particularly difficult in remote communication as well as in non-interactive settings, because the speaker cannot use gestures or facial expressions and has to tailor his utterances to the addressee without receiving feedback. In these situations, the speaker may achieve mutual understanding, for example, by adopting the addressee's perspective. We present a study conducted to test the impact of instructions that support or hinder individual problem solving and knowledge communication. We used a picture-sorting task requiring individual cognitive processes of feature search (Treisman & Gelade, 1980) in addition to referential communication. As our study focused on the design of utterances, all participants assumed the role of speaker. Participants were told that their descriptions would be recorded and listened to later by a participant in the role of addressee. Eight sets of pictures were used, which varied on two dimensions: the individual cognitive demands of detecting the relevant features (varied as a between-subject factor) and the communicative demands (varied as a within-subject factor). A further between-subject factor was the type of instructions: the participants received either a collaboration script as supporting instructions, or time pressure was applied to induce stress, or else they were given no additional instructions (control group). We used the speakers' verbal utterances to examine the quality of their descriptions. For both dimensions of difficulty, we found the expected effects. In the conditions with a collaboration script, fewer irrelevant features were mentioned and fewer features were described with delay.
In the conditions with time pressure, there were fewer irrelevant features described, but the number of correctly described pictures was impaired through the fact that relevant features were also neglected. Under time pressure, speakers tended to provide ambiguous descriptions regarding the frame of reference.
In this paper, we will argue for a novel analysis of the auxiliary alternation in Early English, its development and subsequent loss which has broader consequences for the way that auxiliary selection is looked at cross-linguistically. We will present evidence that the choice of auxiliaries accompanying past participles in Early English differed in several significant respects from that in the familiar modern European languages. Specifically, while the construction with have became a full-fledged perfect by some time in the ME period, that with be was actually a stative resultative, which it remained until it was lost. We will show that this accounts for some otherwise surprising restrictions on the distribution of BE in Early English and allows a better understanding of the spread of HAVE through late ME and EModE. Perhaps more importantly, the Early English facts also provide insight into the genesis of the kind of auxiliary selection found in German, Dutch and Italian. Our analysis of them furthermore suggests a promising strategy for explaining cross-linguistic variation in auxiliary selection in terms of variation in the syntactico-semantic structure of the perfect. In this introductory section, we will first provide some background on the historical situation we will be discussing, then we will lay out the main claims for which we will be arguing in the paper.
In this paper, we introduce an extension of the XMG system (eXtensible MetaGrammar) in order to allow for the description of Multi-Component Tree Adjoining Grammars. In particular, we introduce the XMG formalism and its implementation, and show how the latter makes it possible to extend the system relatively easily to different target formalisms, thus opening the way towards multi-formalism.
The question of what literature is seems not only to be the most fundamental one facing literary studies; it is at the same time its most unfathomable. It is fundamental because it asks about the essence of literature and thus invokes what is really a self-evident matter that accompanies any engagement with literature. It is unfathomable because even the seemingly most self-evident definitions of literature have so far not led to a unified conception of the essence of literature. Thus, with the very first question it faces, literary studies confronts a seemingly irresolvable dilemma. Asked about the object that belongs to it, and that could accordingly attest to its legitimacy as a scholarly discipline, it remains in the dark.
In this paper we will explore the similarities and differences between two feature logic-based approaches to the composition of semantic representations. The first approach is formulated for Lexicalized Tree Adjoining Grammar (LTAG, Joshi and Schabes 1997); the second is Lexical Resource Semantics (LRS, Richter and Sailer 2004) and was first defined in Head-driven Phrase Structure Grammar. The two frameworks have several common characteristics that make them easy to compare: 1. They use languages of two-sorted type theory for semantic representations. 2. They allow underspecification: LTAG uses scope constraints, while LRS provides component-of constraints. 3. They use feature logics for computing semantic representations. 4. They are designed for computational applications. By comparing the two frameworks we will also point out some characteristics and advantages of feature logic-based semantic computation in general.
The preventive war against Iraq, the so-called Operation Iraqi Freedom of 20 March to 1 May 2003, seems to confirm it once again: democratic publics can be manipulated. How else could the considerable misperceptions of the American population be explained, which seem to accord so well with the "marketing" of the military operation by the administration of George W. Bush (cf. Freedman 2004, Kaufmann 2004 and Pfiffner 2004)? ...
We adopt Markert and Nissim's (2005) approach of using the World Wide Web to resolve cases of coreferent bridging for German and discuss the strengths and weaknesses of this approach. As the general approach of using surface patterns to obtain information on ontological relations between lexical items has only been tried on English, it is also interesting to see whether the approach works for German as well as it does for English, and what differences between these languages need to be accounted for. We also present a novel approach for combining several patterns that yields an ensemble which outperforms the best-performing single patterns in terms of both precision and recall.
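The benefit of combining patterns can be sketched with the simplest possible ensemble, a majority vote over the patterns' binary decisions. The gold labels and per-pattern decisions below are invented; the point is only that voting can beat every single pattern on both precision and recall when the patterns' errors are largely uncorrelated (the paper's actual combination scheme is not specified here).

```python
# Toy sketch: combining several surface patterns by majority vote and
# scoring the result with precision/recall. All data is invented.

gold = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 1 = pair is actually related
patterns = [                             # each pattern: P = R = 0.8
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1, 0, 1, 0, 0, 0],
]

def precision_recall(pred):
    tp = sum(p and g for p, g in zip(pred, gold))
    fp = sum(p and not g for p, g in zip(pred, gold))
    fn = sum(g and not p for p, g in zip(pred, gold))
    return tp / (tp + fp), tp / (tp + fn)

vote = [int(sum(col) >= 2) for col in zip(*patterns)]  # majority of 3

for i, p in enumerate(patterns):
    prec, rec = precision_recall(p)
    print(f"pattern {i}: P={prec:.2f} R={rec:.2f}")
print("ensemble:  P=%.2f R=%.2f" % precision_recall(vote))
```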
Multicomponent Tree Adjoining Grammar (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG, however, is problematic since it refers to the derivation process itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. This way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. In this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees (in the underlying TAG) that the MCTAG licenses. This definition gives a better understanding of the formalism, allows a more systematic comparison of different types of MCTAG, and, furthermore, can be exploited for parsing.
This paper investigates the relation between TT-MCTAG, a formalism used in computational linguistics, and RCG. RCGs are known to describe exactly the class PTIME; simple RCGs have even been shown to be equivalent to linear context-free rewriting systems, i.e., to be mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, it is NP-complete. In this paper, we will put an additional limitation on the derivations licensed in TT-MCTAG. We show that TT-MCTAG with this additional limitation can be transformed into equivalent simple RCGs. This result is interesting for theoretical reasons (since it shows that TT-MCTAG in this limited form is mildly context-sensitive) and, furthermore, also for practical reasons: We use the proposed transformation from TT-MCTAG to RCG in an actual parser that we have implemented.
The dynamics of many systems are described by ordinary differential equations (ODEs). Solving ODEs with standard methods (i.e., numerical integration) requires a large amount of computing time but only a small amount of storage memory. For some applications, e.g. short-term weather forecasting or real-time robot control, long computation times are prohibitive. Is there a method that uses less computing time (but has drawbacks in other aspects, e.g. memory), so that the computation of ODEs becomes faster? We discuss this question under the assumption that the alternative computation method is a neural network that was trained on the ODE dynamics, and compare both methods at the same approximation error. This comparison is done with two different errors. First, we use the standard error, which measures the difference between the approximation and the solution of the ODE but is hard to characterize. In many cases, however, as for physics engines used in computer games, the shape of the approximation curve is important rather than the exact values of the approximation. We therefore introduce a subjective error based on the Total Least Square Error (TLSE), which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it depending on the resolution of the interpolation points and the inter-point distance. Our conclusion provides a method to evaluate where neural nets are advantageous over numerical ODE integration and where they are not. Index Terms—ODE, neural nets, Euler method, approximation complexity, storage optimization.
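As a point of reference for the numerical side of this comparison, a minimal fixed-step Euler integrator (one of the standard methods the abstract refers to) can be sketched as follows. The test problem y' = -y with y(0) = 1 is a hypothetical illustration, not taken from the paper:

```python
import math

def euler(f, y0, t0, t1, n):
    """Fixed-step Euler integration of y' = f(t, y) from t0 to t1 in n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    ys = [y0]
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
        ys.append(y)
    return ys

# Toy problem (an assumption for illustration): y' = -y, y(0) = 1,
# with exact solution y(t) = exp(-t).
ys = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
exact = math.exp(-1.0)
err = abs(ys[-1] - exact)
```

Note the trade-off the abstract discusses: the integrator stores only the current state (plus the trajectory, if kept), while its runtime grows linearly with the number of steps needed to reach a given error.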
This paper deals with the variable position of adjectives in the Romanian DP. Like all other Romance languages, Romanian allows for adjectives to appear in both prenominal and post-nominal position. In addition, however, Romanian has a third pattern: the so-called cel construction, in which the adjective in the post-nominal position is preceded by a determiner-like element, cel. This pattern is superficially similar to Determiner Spreading in Greek. In this paper we contrast the cel construction to Greek DS and discuss the similarities and differences between the two. We then present an analysis of cel as involving an appositive specification clause, building on de Vries (2002). We argue that the same structure is also involved in the context of nominal ellipsis, the second environment in which cel is found.
The ACL 2008 Workshop on Parsing German features a shared task on parsing German. The goal of the shared task was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.
The problem of vocalization, or diacritization, is essential to many tasks in Arabic NLP. Arabic is generally written without the short vowels, which leads to one written form having several pronunciations with each pronunciation carrying its own meaning(s). In the experiments reported here, we define vocalization as a classification problem in which we decide for each character in the unvocalized word whether it is followed by a short vowel. We investigate the importance of different types of context. Our results show that the combination of using memory-based learning with only a word internal context leads to a word error rate of 6.64%. If a lexical context is added, the results deteriorate slowly.
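The per-character classification setup described above can be sketched roughly as follows. The windowed feature extraction and the toy transliteration "ktb" are illustrative assumptions, not the paper's actual memory-based-learning implementation:

```python
def extract_instances(word, labels, k=2):
    """For each character of an unvocalized word, build a window of k
    characters of left and right word-internal context, paired with a
    binary label: is this character followed by a short vowel?"""
    pad = "_" * k
    padded = pad + word + pad
    instances = []
    for i, ch in enumerate(word):
        left = padded[i:i + k]
        right = padded[i + k + 1:i + 2 * k + 1]
        instances.append((left + ch + right, labels[i]))
    return instances

# Hypothetical example: transliterated 'ktb', where the first two
# characters are followed by a short vowel and the last is not.
X = extract_instances("ktb", [1, 1, 0])
```

Each (window, label) pair then becomes one training instance for the classifier; the abstract's "word internal context" corresponds to exactly this kind of window.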
How to compare treebanks
(2008)
Recent years have seen an increasing interest in developing standards for linguistic annotation, with a focus on the interoperability of the resources. This effort, however, requires a profound knowledge of the advantages and disadvantages of linguistic annotation schemes in order to avoid importing the flaws and weaknesses of existing encoding schemes into the new standards. This paper addresses the question how to compare syntactically annotated corpora and gain insights into the usefulness of specific design decisions. We present an exhaustive evaluation of two German treebanks with crucially different encoding schemes. We evaluate three different parsers trained on the two treebanks and compare results using EVALB, the Leaf-Ancestor metric, and a dependency-based evaluation. Furthermore, we present TePaCoC, a new testsuite for the evaluation of parsers on complex German grammatical constructions. The testsuite provides a well thought-out error classification, which enables us to compare parser output for parsers trained on treebanks with different encoding schemes and provides interesting insights into the impact of treebank annotation schemes on specific constructions like PP attachment or non-constituent coordination.
Part-of-Speech tagging is generally performed by Markov models, based on bigram or trigram models. While Markov models have a strong concentration on the left context of a word, many languages require the inclusion of right context for correct disambiguation. We show for German that the best results are reached by a combination of left and right context. If only left context is available, then changing the direction of analysis and going from right to left improves the results. In a version of MBT (Daelemans et al., 1996) with default parameter settings, the inclusion of the right context improved POS tagging accuracy from 94.00% to 96.08%, thus corroborating our hypothesis. The version with optimized parameters reaches 96.73%.
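A rough sketch of how left and right context can be combined in such a tagger's feature set (the feature names are hypothetical; the paper itself uses MBT rather than this hand-rolled extraction). Since right-hand tags are not yet available when tagging left to right, the right context is drawn from word forms:

```python
def context_features(words, i, left_tags, k=1):
    """Features for position i: the word itself, the k previously
    assigned tags (left context), and the k following word forms
    (right context)."""
    feats = {"w": words[i]}
    for j in range(1, k + 1):
        feats[f"t-{j}"] = left_tags[i - j] if i - j >= 0 else "<s>"
        feats[f"w+{j}"] = words[i + j] if i + j < len(words) else "</s>"
    return feats

# Hypothetical German example: tagging "Mann" after "der" was tagged ART.
f = context_features(["der", "Mann", "schläft"], 1, ["ART"])
```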
Class features as probes
(2008)
In this article, we address (i) the form and (ii) the function of inflection class features in minimalist grammar. The empirical evidence comes from noun inflection systems involving fusional markers in German, Greek, and Russian. As for (i), we argue (based on instances of transparadigmatic syncretism) that class features are not privative; rather, class information must be decomposed into more abstract, binary features. Concerning (ii), we propose that class features qualify as the very device that brings about fusional inflection: They are uninterpretable in syntax and act as probes on stems, with matching inflection markers as goals, and thus trigger morphological Agree operations that merge stem and inflection marker before syntax is reached.
In this paper we investigate the distribution of PPs related to external arguments (agent, causer, instrument, causing event) in Greek. We argue that their distribution supports an analysis according to which agentive/instrument and causer PPs are licensed by distinct functional heads, respectively. We argue against a conceivable alternative analysis which links agentivity and causation to the prepositions themselves. We furthermore identify a particular type of Voice head in Greek anticausatives, realised by non-active Voice morphology.
On the role of syntactic locality in morphological processes : the case of (Greek) derived nominals
(2008)
The paper is structured as follows. In section 2, I briefly summarize the facts on English and Greek nominalizations. In section 3, I discuss English nominal derivation in some detail. In section 4, I turn to the question of licensing of AS in nominals. In section 5, I turn to the issue of the optionality of licensing of AS in the nominal system.
In this paper we compare the distribution of PPs introducing external arguments in nominalizations with PPs introducing external arguments in the verbal domain. We show that several mismatches exist between the behavior of PPs in nominalizations and PPs in the verbal domain. This leads us to suggest that while PPs in the verbal domain are licensed by functional structure alone, within the nominal domain, PPs can also be licensed via an interplay of the encyclopaedic meaning of the root involved and the properties of the preposition itself. This second mechanism kicks in in the absence of functional structure.
This article presents linguistic features of and educational approaches to a new variety of German that has emerged in multi-ethnic urban areas in Germany: Kiezdeutsch (‘Hood German’). From a linguistic point of view, Kiezdeutsch is very interesting, as it is a multi-ethnolect that combines features of a youth language with those of a contact language. We will present examples that illustrate the grammatical productivity and innovative potential of this variety. From an educational perspective, Kiezdeutsch also has high potential in many respects: school projects can help enrich intercultural communication and weaken derogatory attitudes. In grammar lessons, Kiezdeutsch can be a means to enhance linguistic competence by having the adolescents analyse their own language. Keywords: German, Kiezdeutsch, multi-ethnolect, migrants’ language, language change, educational proposals
This article studies the relation between multicomponent tree adjoining grammars with tree tuples (TT-MCTAG), a formalism used in computational linguistics, and range concatenation grammars (RCG). RCGs are known to describe exactly the class PTIME; moreover, "simple" RCGs have been shown to be equivalent to linear context-free rewriting systems (LCFRS), in other words, to be mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, these languages are NP-complete. In this article, we define an additional constraint on the derivations licensed by the TT-MCTAG formalism. We then show how this restricted form of TT-MCTAG can be converted into an equivalent simple RCG. The result is interesting for theoretical reasons (since it shows that the restricted form of TT-MCTAG is mildly context-sensitive), but also for practical reasons (the transformation proposed here has been used to implement a parser for TT-MCTAG).
Based on a dataset of 1,708 vegetation relevés from 154 Bavarian natural forest reserves, the realized ecological niche of 25 tree species was investigated with respect to light demand and shade tolerance. For each tree species, the constancy of occurrence in the tree layer and in the regeneration was calculated. For each relevé, the amount of light available to the understorey was estimated on a relative scale by calculating the mean unweighted light indicator value (mL) of all occurring species (excluding the tree layer). For each 0.5-unit step of mL, the preference of each tree species, separately for the tree layer (> 5 m) and the regeneration layer (< 5 m), was calculated as the difference between the relative frequency of the respective species and the relative frequency of all relevés in that mL step across the entire dataset. The preference profiles of tree layer and regeneration layer formed the basis of a numerical classification into 6 light-ecological niche types. These types are discussed with respect to their association with particular developmental phases and structures of natural forest dynamics, compared with common classifications of tree species, and evaluated with a view to predicting their behaviour under changing environmental conditions. While the noble hardwoods of the Tilio-Acerion behave very similarly to Fagus and Abies in the reserves, the tree species of mixed oak forests form a light-ecological group with a declining regeneration tendency. Among the remaining semi-shade tree species, one group stands out that establishes advance regeneration in closed stands and moves into the tree layer after disturbance. Pioneer tree species remain largely restricted to special sites in the natural forest reserves, where their regeneration finds abundant light.
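The preference measure described above (relative frequency of a species' occurrences in an mL class minus the relative frequency of all relevés in that class) can be sketched as follows; the toy mL values are hypothetical:

```python
from collections import Counter

def preferences(species_mL, all_mL):
    """Preference per mL class: the species' relative frequency in the
    class minus the relative frequency of all releves in that class.
    Positive values indicate over-representation of the species."""
    sp = Counter(species_mL)
    al = Counter(all_mL)
    return {c: sp[c] / len(species_mL) - al[c] / len(all_mL) for c in al}

# Hypothetical toy data: mL classes on a 0.5-unit scale.
p = preferences([5.0, 5.0, 5.5], [4.5, 5.0, 5.0, 5.5, 5.5, 6.0])
```

By construction the measure is zero when a species is distributed over the mL classes exactly like the dataset as a whole.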
In the late seventies, Bernard Comrie was one of the first linguists to explore the effects of the referential hierarchy (RH) on the distribution of grammatical relations (GRs). The referential hierarchy is also known in the literature as the animacy, empathy or indexability hierarchy and ranks speech act participants (i.e. first and second person) above third persons, animates above inanimates, or more topical referents above less topical referents. Depending on the language, the hierarchy is sometimes extended by analogy to rankings of possessors above possessees, singulars above plurals, or other notions. In his 1981 textbook, Comrie analyzed RH effects as explaining (a) differential case (or adposition) marking of transitive subject ("A") noun phrases in low RH positions (e.g. inanimate or third person) and of object ("P") noun phrases in high RH positions (e.g. animate or first or second person), and (b) hierarchical verb agreement coupled with a direct vs. inverse distinction, as in Algonquian (Comrie 1981: Chapter 6).
A title such as "Negative Dialectics and Negative Anthropology" would seem to announce a comparative study of Th. W. Adorno and Ulrich Sonnemann, following a suggestion borrowed from the "Introduction" to "Negative Dialectics" (1966). Instead, contrary to such an expectation, the "Negative Anthropologie" referred to in this essay is that of Günther Stern/Anders. The idea of a comparison between the two perspectives arises from the curiosity to understand the correspondence between "negative dialectics" and "negative anthropology", where the second phrase denotes Anders's conception of a humanity inadequate to the world. That this is not an oddity but a legitimate question is confirmed, indirectly, by Adorno himself, who, in a note contained in the section of "Negative Dialectics" devoted to the reading of Heidegger's thought, invokes precisely Anders's lesson.
This article is concerned with linguistic elements that already exist in a language, count as nonstandard or have not been standardized in a recognized, binding way, and are now put to a new, differentiating use. The new usage has one or more initial events which — in system-oriented terms — occur at one or more places in a language area and, in an evolutionary drift, become more frequent or disappear; or which — in action-oriented terms — are adopted by different speakers, furnished with new semantics, or remain unnoticed.
Die Sprachen der Städte
(2008)
The early language maps, for which Georg Wenker collected written translations into the local dialect in over 40,000 school locations of the German Empire at the end of the 19th century, document the special position of many cities within the linguistic landscape. For example, Berlin and its immediate surroundings show linguistic forms that otherwise hold only further south or in the written language.
The statement below was offered to the editor of the Zeitschrift für deutsches Altertum und deutsche Literatur in order to clear up a series of grave misunderstandings that a reviewer (Jürgen Schulz-Grobert) had revealed to the scholarly community in his review of the second volume of the Sämtliche Werke of Johann Fischart. The editor of the journal refused any discussion and declined to print our reply. This is all the more regrettable as the reviewer accused us of a "willingness to discuss [...] [that is] decidedly limited in other crucial questions as well", whatever he may mean by that.
Taking shame as its guiding thread, the present contribution attempts to gain access to Agamben's theory of subjectivity, in order to subject the theoretical and historical presuppositions of his ethics to an examination that can at the same time connect to Thomä's critique. The starting point of the following reflections is Agamben's study of the 'homo sacer'. A second step concerns the theory of shame put forward in "Was von Auschwitz bleibt" ("Remnants of Auschwitz"). The critical discussion of Agamben's ethics opens the engagement with the chief witness whom "Was von Auschwitz bleibt" presents: Primo Levi. This engagement is continued and sharpened by the way Levi's question "Is this a man?" is outdone in Imre Kertész's "Roman eines Schicksallosen" ("Fatelessness"). Against the background of the central importance of shame in Primo Levi and Imre Kertész, the final part returns to Agamben's ethics in order to revise its foundations by recourse to Aristotle.
The mechanism by which the enzyme pyruvate decarboxylase from yeast is activated allosterically has been elucidated. A total of seven three-dimensional structures of the enzyme, of enzyme variants or of enzyme complexes from two yeast species (three of them reported here for the first time) provide detailed atomic resolution snapshots along the activation coordinate. The prime event is the covalent binding of the substrate pyruvate to the side chain of cysteine 221, thus forming a thiohemiketal. This reaction causes the shift of a neighbouring amino acid, which eventually leads to the rigidification of two otherwise flexible loops, where one of the loops provides two histidine residues necessary to complete the enzymatically competent active site architecture. The structural data are complemented and supported by kinetic investigations and binding studies and provide a consistent picture of the structural changes, which occur upon enzyme activation.
In this paper, we present an open-source parsing environment (Tübingen Linguistic Parsing Architecture, TuLiPA) which uses Range Concatenation Grammar (RCG) as a pivot formalism, thus opening the way to the parsing of several mildly context-sensitive formalisms. This environment currently supports tree-based grammars (namely Tree-Adjoining Grammars (TAG) and Multi-Component Tree-Adjoining Grammars with Tree Tuples (TT-MCTAG)) and allows computation not only of syntactic structures, but also of the corresponding semantic representations. It is used for the development of a tree-based grammar for German.
TT-MCTAG lets one abstract away from the relative order of co-complements in the final derived tree, which is more appropriate than classic TAG when dealing with flexible word order in German. In this paper, we present the analyses for sentential complements, i.e., wh-extraction, that-complementation and bridging, and we work out the crucial differences between these and the respective accounts in XTAG (for English) and V-TAG (for German).
Developing linguistic resources, in particular grammars, is known to be a complex task in itself, because of (amongst others) redundancy and consistency issues. Furthermore, some languages can prove hard to describe because of specific characteristics, e.g. the free word order in German. In this context, we present (i) a framework for describing tree-based grammars, and (ii) an actual fragment of a core multicomponent tree-adjoining grammar with tree tuples (TT-MCTAG) for German developed using this framework. This framework combines a metagrammar compiler and a parser based on range concatenation grammar (RCG) to check the consistency and the correctness of the grammar, respectively. The German grammar being developed within this framework already deals with a wide range of scrambling and extraction phenomena.
In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.
We show that loanword adaptation can be understood entirely in terms of phonological and phonetic comprehension and production mechanisms in the first language. We provide explicit accounts of several loanword adaptation phenomena (in Korean) in terms of an Optimality-Theoretic grammar model with the same three levels of representation that are needed to describe L1 phonology: the underlying form, the phonological surface form, and the auditory-phonetic form. The model is bidirectional, i.e., the same constraints and rankings are used by the listener and by the speaker. These constraints and rankings are the same for L1 processing and loanword adaptation.
In its collections, the Deutsches Literaturarchiv Marbach (DLA) maps the network of literary life in all its facets. At the centre of its source-oriented collecting and cataloguing stands the author. Literature is documented from the genesis of a work through its various editions and its reception in literary criticism to its dramaturgical realization on radio, in film, on stage, and in music. Since 2008, the DLA has also included internet sources such as literary journals, net literature, and weblogs in its spectrum, responding to the growing importance of the internet as a publication forum. Collecting, cataloguing, and archiving form a necessary unity; precisely the transience of net-based resources makes long-term preservation of availability essential. This new collection of "Literatur im Netz" ("literature on the net") therefore rests on several pillars.
We present several parsing algorithms for range concatenation grammars (Range Concatenation Grammar, RCG), including a new Earley-style algorithm, within the deductive parsing paradigm. Our work is motivated by the recent interest in this type of grammar and fills a gap in the existing literature.
Multicomponent Tree Adjoining Grammars (MCTAGs) are a formalism that has been shown to be useful for many natural language applications. The definition of non-local MCTAG, however, is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. Looking only at the result of a derivation (i.e., the derived tree and the derivation tree), this simultaneity is no longer visible and therefore cannot be checked. In other words, this way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. In this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees (in the underlying TAG) that the MCTAG licenses. We provide similar characterizations for various types of MCTAG. These characterizations give a better understanding of the formalisms, they allow a more systematic comparison of different types of MCTAG, and, furthermore, they can be exploited for parsing.
This paper investigates the class of Tree-Tuple MCTAG with Shared Nodes, TT-MCTAG for short, an extension of Tree Adjoining Grammars that has been proposed for natural language processing, in particular for dealing with discontinuities and word order variation in languages such as German. It has been shown that the universal recognition problem for this formalism is NP-hard, but so far it was not known whether the class of languages generated by TT-MCTAG is included in PTIME. We provide a positive answer to this question, using a new characterization of TT-MCTAG.
We present a CYK and an Earley-style algorithm for parsing Range Concatenation Grammar (RCG), using the deductive parsing framework. The characteristic property of the Earley parser is that we use a technique of range boundary constraint propagation to compute the yields of non-terminals as late as possible. Experiments show that, compared to previous approaches, the constraint propagation helps to considerably decrease the number of items in the chart.
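The deductive parsing framework the paper builds on can be sketched generically as an agenda-driven fixpoint computation over chart items. The toy instance below (CYK-style items for the grammar S → S S | 'a' over "aaa") is an illustrative assumption and does not implement the paper's RCG-specific range boundary constraint propagation:

```python
def deductive_parse(axioms, rules, goal):
    """Agenda-driven deductive parsing: repeatedly combine a trigger
    item with the chart via the inference rules until no new items
    can be derived; report whether the goal item was derived."""
    chart = set(axioms)
    agenda = list(axioms)
    while agenda:
        trigger = agenda.pop()
        for rule in rules:
            for new in rule(trigger, chart):
                if new not in chart:
                    chart.add(new)
                    agenda.append(new)
    return goal in chart

# Toy instance: items (symbol, i, j) over the string "aaa"; the axioms
# correspond to S -> 'a' at each position.
axioms = [("S", i, i + 1) for i in range(3)]

def combine(trigger, chart):
    # Complete S -> S S whenever two S spans are adjacent.
    sym, i, j = trigger
    for sym2, k, l in list(chart):
        if l == i:
            yield ("S", k, j)
        if j == k:
            yield ("S", i, l)

ok = deductive_parse(axioms, [combine], ("S", 0, 3))
```

The item-count reduction the abstract reports corresponds, in this skeleton, to inference rules that postpone instantiating range boundaries so that fewer items ever enter the chart.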
The three domains to be compared here correspond roughly to the traditional triad of literature, music, and visual art, a division that has actually become obsolete in the media age of videos, CDs, installations, and happenings. However, the concern here is only with the specific character of the sign systems on which the various domains rest, not with the works they make possible, although of course the artworks in the emphatic sense, the significant and the banal, the great and the failed creations, are only possible and intelligible on the basis of the signs on which they rest.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail, and we then propose a generalisation of Poesio, Reyle and Stevenson’s Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (with 20 sentences per phenomenon) from both TiGer and TüBa-D/Z resources. This provides a 2 × 100-sentence comparable testsuite which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two different treebanks. In the remaining part of the paper we present the testsuite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our error classification of potential parser errors.
The aim of this paper is to address two main counterarguments raised in Landau (2007) against the movement analysis of Control, and especially against the phenomenon of Backward Control. The paper shows that unlike the situation described in Tsez (Polinsky & Potsdam 2002), Landau's objections do not hold for Greek and Romanian, where all obligatory control verbs exhibit Backward Control. Our results thus provide stronger empirical support for a theoretical approach to Control in terms of Movement, as defended in Hornstein (1999 and subsequent work).
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. The paper presents four experiments that aim at improving parsing performance on coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser by gold scopes for any conjunct, 3) reranking the parser output for all possible scopes for conjuncts that are permissible with regard to clause structure. Experiment 4 reranks a combination of parses from experiments 1 and 3. The experiments presented show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses results in an increase in F-score from 69.76 for the baseline to 74.69. While the F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment results in higher recall (75.48% vs. 73.69%) and the third one in higher precision (75.43% vs. 73.26%). Combining the two methods yields the best result, with an F-score of 76.69.
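A linear reranker of the general kind used in such n-best experiments can be sketched as follows; the candidate parses, feature name, and weights are hypothetical illustrations, not the paper's actual feature set:

```python
def rerank(nbest, feats, weights):
    """Choose the candidate maximizing a linear combination of the
    parser's log probability and additional reranking features."""
    def score(item):
        parse, logprob = item
        total = weights.get("logprob", 1.0) * logprob
        for name, value in feats(parse).items():
            total += weights.get(name, 0.0) * value
        return total
    return max(nbest, key=score)[0]

# Hypothetical 2-best list: the parser slightly prefers the first
# analysis, but a coordination feature overturns that preference.
nbest = [("flat-coord", -2.0), ("nested-coord", -2.5)]
feats = lambda p: {"balanced_conjuncts": 1.0 if p == "nested-coord" else 0.0}
best = rerank(nbest, feats, {"logprob": 1.0, "balanced_conjuncts": 1.0})
```

This illustrates why reranking can improve over the 1-best parse: the reranker sees global evidence (here, a coordination feature) that the base parser's model cannot express.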
In the recent literature the phenomenon of long distance agreement has become the focus of several studies as it seems to violate certain locality conditions which require that agreeing elements in general stand in clause-mate relationships. In particular, it involves a verb agreeing with a constituent which is located in the verb's clausal complement and hence poses a challenge for theories that assume a strictly local relationship for agreement. In this paper we present empirical evidence from Greek and Romanian for the reality of long distance agreement. Specifically, we focus on raising constructions in these two languages and we show that they do not involve movement but rather instantiate long distance agreement. We further argue that subjunctives allowing long distance agreement lack both a CP layer and semantic Tense. However, since the embedded verb also bears phi-features, these constructions pose a further problem for assumptions that view the presence of phi-features as evidence for the presence of a C layer. Finally, we raise the question of the common properties that these languages have that lead to the presence of long distance agreement.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgariff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
The STAR Collaboration at the Relativistic Heavy Ion Collider presents measurements of J/ψ → e⁺e⁻ at midrapidity and high transverse momentum (pT > 5 GeV/c) in p+p and central Cu+Cu collisions at √sNN = 200 GeV. The inclusive J/ψ production cross section for Cu+Cu collisions is found to be consistent at high pT with the binary collision-scaled cross section for p+p collisions. At a confidence level of 97%, this is in contrast to the suppression of J/ψ production observed at lower pT. Azimuthal correlations of J/ψ with charged hadrons in p+p collisions provide an estimate of the contribution of B-hadron decays to J/ψ production of 13% ± 5%.
Impurism is an age-old worldview and an old poetics. I presented both in detail in my 2007 book Illustrierte Poetik des Impurismus. Since I do not wish to repeat myself, I cannot restate the extensive findings on the subject here. On the other hand, the reader of this sequel should not enter the material entirely unprepared. I therefore want to assemble a few bare facts here as a reminder, but must nevertheless urgently refer the reader to the illustrative foundations in the book mentioned above; otherwise the monstrousness of the whole discovery, presented in utmost brevity, would scare off many a willing reader. ...
We philologists have it easy. We watch as others, who for the most part do not belong to our guild, render the vast abundance of written work from its respective original language into all manner of languages, and we relate to this as interested spectators. We have every reason to be glad of it: without this cross-border exchange of goods and ideas, the field we graze on would remain narrower and more parcelled out than the authors' intentions, and the subject matter itself, would warrant. We can (provided we have the necessary overview) praise what the translators have accomplished: the correspondences they have discovered or invented; the power, suppleness and range of modulation that they have first activated in their target languages, with thousands of illuminating finds or with the whole tone and cadence of their translations. If we feel up to it, we can meddle in their craft and translate individual passages or entire works ourselves. We can criticize them where the translations before us seem too flat, or where they fall short of the original, substantively or stylistically, more than necessary; we can propose improvements. When we quote translations and find it necessary to modify them, we move in a grey area between respect for the translator, delight in yet further recognized potentials of the text, and the urge to convey to our listeners or readers, in our own language, as much as possible of everything we have read out of the original.
Integer point sets minimizing average pairwise L1 distance: What is the optimal shape of a town?
(2010)
An n-town, n ∈ ℕ, is a group of n buildings, each occupying a distinct position on a two-dimensional integer grid. If we measure the distance between two buildings along the axis-parallel street grid, then an n-town has optimal shape if the sum of all pairwise Manhattan distances is minimized. This problem has been studied for cities, i.e., the limiting case of very large n. For cities, it is known that the optimal shape can be described by a differential equation, for which no closed-form solution is known. We show that optimal n-towns can be computed in O(n^7.5) time. This is also practically useful, as it allows us to compute optimal solutions up to n = 80.
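The objective function of the abstract above (the sum of all pairwise Manhattan distances) can be illustrated with a small brute-force sketch. This is illustrative only: all names are ours, and the exhaustive search below is exponential, quite unlike the paper's O(n^7.5) algorithm, which is not reproduced here.

```python
from itertools import combinations

def total_l1(points):
    """Sum of pairwise Manhattan (L1) distances over a set of grid points."""
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in combinations(points, 2))

def best_n_town(n, radius=2):
    """Brute-force an optimal n-town inside a small grid window.

    Tries every n-subset of a (2*radius+1)^2 grid and returns one that
    minimizes total_l1. Only feasible for tiny n; shown purely to make
    the optimization objective concrete.
    """
    cells = [(x, y) for x in range(-radius, radius + 1)
                    for y in range(-radius, radius + 1)]
    return min(combinations(cells, n), key=total_l1)

# For n = 4 a 2x2 square is optimal: four side pairs at distance 1
# plus two diagonal pairs at distance 2, for a total cost of 8.
```

For example, `total_l1(best_n_town(4, radius=1))` evaluates to 8, matching the 2x2 square; the near-circular limiting shapes mentioned in the abstract only emerge for much larger n.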
Römische Bildnisse : bibliography, unabridged, with the author's supplementary literature references
(2010)
Original version of the bibliography of the following work, shortened by numerous literature references in the publisher's edition: Götz Lahusen: Römische Bildnisse : Auftraggeber, Funktionen, Standorte. - Mainz : von Zabern, 2010. - Licensed by the WBG (Wissenschaftliche Buchgesellschaft), Darmstadt. - ISBN: 978-3-8053-3738-0. Hardcover: EUR 49.90
We propose a framework of individual problem-solving and communicative demands (IproCo) that bridges the gap between models from cognitive psychology and communication pragmatics. Furthermore, we present two experiments conducted to identify factors influencing these demands and to test possibilities for support. The experiments employed a remote collaborative picture-sorting task with concrete and abstract pictures and compared non-interactive with interactive communication conditions. In the first experiment, we analysed the influence of the postulated demands on the collaboration process and outcome and tested the impact of shared applications. In the second experiment, we evaluated instructional support measures consisting of a model collaboration and a collaboration script. The support benefited the collaboration process but not the outcome. Notably, the support measures fostered the collaboration process even under the particularly difficult non-interactive communication conditions. We discuss the impact of the IproCo framework and apply it to other tasks.
At the centre of the text, it seems, stands the mournful working-through of an event long past, and with it remembrance and farewell as fundamental motifs of Droste-Hülshoff's work, as they also find expression in other texts such as "Meine Toten" or the Byron poem "Lebt wohl". In "Die Taxuswand", Droste-Hülshoff traverses a long span of time, the eighteen years that lie between the encounter and its poetic treatment. The question that arises in this connection concerns the fundamental relationship between poetic acts of remembrance and biographical experience in the work of Annette von Droste-Hülshoff. That the two, much as in Baudelaire, do not simply coincide but diverge is the conjecture pursued in what follows.
Venturing into and then pressing on through the work of Thomas Bernhard is not exactly like taking a walk, yet the walk is a recurring motif in Bernhard's œuvre (alongside those of Handke, Sebald and Walser, to name only a few walkers in twentieth-century German-language literature). Bernhard's figures walk, march, run, but in a "direction opposite" to the one indicated by Stifter. Sometimes their routes wind through nature, as when they enter a wood never to return (Gelo, Al limite boschivo, La partita a carte); sometimes they march within the confines of their "house-prison", following the labyrinthine and endless paths of their minds (La Fornace, Cemento); at still other times they move in an urban, metropolitan setting, in Rome in Estinzione (where the walk with the pupil Gambetti retains an Aristotelian, peripatetic aura) or, more often, in Vienna.
The central thesis of the present essay is that there is no Adam Smith problem in the traditional sense, but that there is indeed a self-contradiction in Adam Smith's economic theory.
The essay first treats the close systematic connection between Smith's economic and ethical theory. This connection rests on the assumption of a supreme being and of a pre-established harmony derived from it. The religious trust in a natural order is matched by the belief in the justice of the market. Smith's further political analysis, however, produces a self-contradiction. Smith shows that entrepreneurs' self-interest conflicts with the general interest of society, and that entrepreneurs, moreover, act more adroitly and more successfully than other market actors in pushing through their own interests. Nevertheless, Smith holds fast to the assumption that the market unfolds a harmonizing effect that promotes prosperity for all. Among his epigones, this assumption mutates into an ontological certainty.
Background: In many studies, microvolt T-wave alternans (MTWA) testing has proven to be a highly accurate predictor of ventricular tachyarrhythmic events (VTEs) in patients who have risk factors for sudden cardiac death (SCD) but no prior history of sustained VTEs (primary prevention patients). In some recent studies involving primary prevention patients with prophylactically implanted cardioverter-defibrillators (ICDs), MTWA has not performed as well.
Objective: This study examined the hypothesis that MTWA is an accurate predictor of VTEs in primary prevention patients without implanted ICDs, but not of appropriate ICD therapy in such patients with implanted ICDs.
Methods: This study identified prospective clinical trials evaluating MTWA measured using the spectral analytic method in primary prevention populations and analyzed studies in which: (1) few patients had implanted ICDs and as a result none or a small fraction (≤15%) of the reported end point VTEs were appropriate ICD therapies (low ICD group), or (2) many of the patients had implanted ICDs and the majority of the reported end point VTEs were appropriate ICD therapies (high ICD group).
Results: In the low ICD group comprising 3,682 patients, the hazard ratio associated with a nonnegative versus negative MTWA test was 13.6 (95% confidence interval [CI] 8.5 to 30.4) and the annual event rate among the MTWA-negative patients was 0.3% (95% CI: 0.1% to 0.5%). In contrast, in the high ICD group comprising 2,234 patients, the hazard ratio was only 1.6 (95% CI: 1.2 to 2.1) and the annual event rate among the MTWA-negative patients was elevated to 5.4% (95% CI: 4.1% to 6.7%). In support of these findings, we analyzed published data from the Multicenter Automatic Defibrillator Trial II (MADIT II) and Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT) trials and determined that in those trials only 32% of patients who received appropriate ICD therapy averted an SCD.
Conclusion: This study found that MTWA testing using the spectral analytic method provides an accurate means of predicting VTEs in primary prevention patients without implanted ICDs; in particular, the event rate is very low among such patients with a negative MTWA test. In prospective trials of ICD therapy, the number of patients receiving appropriate ICD therapy greatly exceeds the number of patients who avert SCD as a result of ICD therapy. In trials involving patients with implanted ICDs, these excess appropriate ICD therapies seem to distribute randomly between MTWA-negative and MTWA-nonnegative patients, obscuring the predictive accuracy of MTWA for SCD. Appropriate ICD therapy is an unreliable surrogate end point for SCD.
The internationalization of German universities has increased sharply in recent years. Dealing with students from different cultures has long been part of everyday life for instructors. Communication between members of different cultures does not always proceed smoothly, however. To counteract potential difficulties, some universities use intercultural trainings to raise awareness of intercultural differences. Within a university-didactics continuing-education programme for instructors, the authors developed and deployed an intercultural training. The present article reports on the structure and goals of the training. It also presents a study design with which the influence of culture on online communication in teaching was investigated.
STAR's measurements of directed flow (v1) around midrapidity for π±, K±, K0S, p and p̄ in Au+Au collisions at √s_NN = 200 GeV are presented. A negative v1(y) slope is observed for most of the produced particles (π±, K±, K0S and p̄). The proton v1(y) slope is found to be much closer to zero than that of antiprotons. A sizable difference is seen between the v1 of protons and antiprotons in 5-30% central collisions. The v1 excitation function is presented. Comparisons are made to model calculations (RQMD, UrQMD, AMPT, QGSM with parton recombination, and a hydrodynamic model with a tilted source). Anti-flow alone cannot explain the centrality dependence of the difference between the v1(y) slopes of protons and antiprotons.
Wikis in Higher Education Teaching
(2012)
This contribution gives an overview of usage scenarios for wikis in learning and teaching processes and of their suitability for collaborative knowledge production, while also addressing limitations, preconditions and design recommendations. It furthermore documents experiences with various wiki applications at the University of Frankfurt, ranging from accompanying use in seminars to the provision of study materials initiated by the students themselves. The aspects worked out beforehand are taken up again by means of these examples, and their practical relevance is illustrated.
While junior researchers are trained strategically and on a sound scholarly footing, complete with a series of examinations (Bachelor's, Master's, doctorate, possibly habilitation), nothing even remotely comparable exists for teaching. The usual "qualification" of junior teaching staff mostly takes place "on the job" (cf. Conradi, 1983), i.e. through one's own trial and error after observing other instructors during one's own studies. Under favourable conditions, the instructor has attended continuing-education courses on good teaching beforehand or alongside. A strategic embedding of such staff-development measures, as is intended on the research side, does not exist. This contribution presents possible formats and elaborates one of them in detail as an example.
Within the federal-state programme "Qualitätspakt Lehre", Goethe University Frankfurt successfully secured funding for the programme "Starker Start ins Studium". As a result, the Institute of Psychology now has the staffing capacity to improve the academic and social integration of new psychology students in the six-semester Bachelor's programme in psychology. To this end, two obligatory teaching modules, each spanning two semesters, were developed. The present contribution describes the overarching teaching concept and illustrates its implementation in psychology as a practical example.
If projection and transference are similar terms that imply a fundamental form of ignorance, the aim of this investigation cannot be to draw a sharp distinction between projection and transference. Of course, the dialectic of inside and outside does not play the central role in transference that it does in projection. In a certain way, the notion of projection concerns all forms of perception and seems to be wider than the notion of transference. On the other hand, the notion of transference as a poetic act of creating metaphorical analogies seems to be wider than that of projection. My interest in the following pages lies not in the attempt to draw a clear-cut distinction between the two terms, but in examining their interplay in a novel that discusses all the forms of archaism, primitivism and regression commonly linked with projection, a novel that at the same time tries to explain the foundation of modern art. Thomas Mann's Doktor Faustus offers insight not only into the combination of projection and love, but also into ignorance as the common ground of projection and transference. I will therefore first try to determine the modernity of Thomas Mann's novel with regard to the abundant intertextual dimension that characterizes the text, and then closely examine the central scene of the novel, the confrontation between Adrian Leverkühn and the obscure figure of the devil.
Understanding instructors have less expert knowledge : effects of linguistic accommodation to laypersons
(2012)
In interaction with students, written online communication has become an important working medium for every instructor. In forming judgements about one another, the interaction partners have at their disposal only the written text with its lexical and grammatical features. The degree of lexical accommodation to a student's choice of words can therefore influence students' ratings of their instructors on various personality traits. In the present study, students each judged two instructors with regard to understanding, conscientiousness and intellect (IPIP; Goldberg, Johnson, Eber et al., 2006) on the basis of an e-mail exchange. The instructors' degree of lexical accommodation was varied. Students rated instructors with colloquial word choice as more understanding and more conscientious, but tended to rate them as less knowledgeable.
STAR's measurements of directed flow (v1) around midrapidity for π±, K±, K0S, p and p̄ in Au+Au collisions at √s_NN = 200 GeV are presented. A negative v1(y) slope is observed for most of the produced particles (π±, K±, K0S and p̄). In 5-30% central collisions, a sizable difference is present between the v1(y) slopes of protons and antiprotons, with the former being consistent with zero within errors. The v1 excitation function is presented. Comparisons are made to model calculations (RQMD, UrQMD, AMPT, QGSM with parton recombination, and a hydrodynamic model with a tilted source). Of the models that provide v1 calculations for both pions and protons, none can describe v1(y) for pions and protons simultaneously. The hydrodynamic model with a tilted source, as currently implemented, cannot explain the centrality dependence of the difference between the v1(y) slopes of protons and antiprotons.