The TUSNELDA Standard: A Corpus Annotation Standard Supporting Linguistic Research
(2001)
The use of standards for annotating larger collections of electronic texts (corpora) is a prerequisite for the potential reuse of these corpora. This article presents a corpus annotation standard that takes the requirements of investigating a wide range of linguistic phenomena into account. The standard was developed in SFB 441 at the University of Tübingen. It builds on existing standards, in particular CES and TEI, which have proven in part too verbose and insufficiently restrictive, and in part not expressive enough, to meet the needs of corpus-based linguistic research.
This paper describes the creation and preparation of TUSNELDA, a collection of corpus data built for linguistic research. This collection contains a number of linguistically annotated corpora which differ in various aspects such as language, text sorts / data types, encoded annotation levels, and the linguistic theories underlying the annotation. The paper focuses on this variation on the one hand and on the way these heterogeneous data are integrated into one resource on the other.
The present work covers the reconstruction of the body mass of Pleistocene Rhinocerotidae in Europe and Southeast Asia, here specifically the island of Java. Methodologically, this goal is pursued by linear regressions following Janis (1990). First, a model based on extant species is built that relates body mass to various tooth parameters. The regression equations for each tooth resulting from this extant model are then used to reconstruct fossil body masses. The fossil tooth material was measured and body masses were calculated for all tooth parameters. To allow a comparison with published values, body mass was also determined following Legendre (1986), who developed a formula for body-mass reconstruction that is in general use today. To buffer the often very large fluctuations in body mass caused by the nutritional and health condition of an animal, the absolute values are grouped into body-mass classes. The resulting body masses were then examined in various contexts and, where possible, reasons for changes or differences between measuring sections, time periods, habitats, or species are discussed.
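The regression approach described (relating body mass to tooth parameters via linear regression on log-transformed values, as in Janis 1990) can be sketched as follows; the function names and the example numbers are purely illustrative, not taken from the thesis:

```python
import math

def fit_loglog(x, y):
    """Ordinary least-squares fit of log10(y) = a * log10(x) + b,
    the general form of allometric body-mass regressions."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly))
         / sum((xi - mx) ** 2 for xi in lx))
    b = my - a * mx
    return a, b

def predict_mass(tooth_measure, a, b):
    """Apply the fitted regression to a fossil tooth measurement."""
    return 10 ** (a * math.log10(tooth_measure) + b)
```

In this scheme, one regression would be fitted per tooth position on the extant reference sample, and the resulting equation applied to the corresponding fossil measurements; the predicted masses can then be binned into body-mass classes.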
The neutron-unbound isotope 13Be has been studied in several experiments using different reactions, projectile energies, and experimental setups. There is, however, no real consensus in the interpretation of the data, in particular concerning the structure of the low-lying excited states. Gathering new experimental information that may reveal the structure of 13Be is a challenge, particularly in light of its bridging role between 12Be, where the N = 8 neutron shell breaks down, and the Borromean halo nucleus 14Be. The purpose of the present study is to investigate the role of bound excited states in the reaction product 12Be after proton knockout from 14B, by measuring coincidences between 12Be, neutrons, and γ rays originating from the de-excitation of states fed by neutron decay of 13Be. The 13Be isotopes were produced by proton knockout from a 400 MeV/nucleon 14B beam impinging on a CH2 target. The 12Be-n relative-energy spectrum dσ/dE_fn was obtained from coincidences between 12Be(g.s.) and a neutron, and also as threefold coincidences including γ rays from the de-excitation of excited states in 12Be. Neutron decay from the first 5/2+ state in 13Be to the 2+ state in 12Be at 2.11 MeV is confirmed. An energy independence of the proton-knockout mechanism is found from a comparison with data taken with a 35 MeV/nucleon 14B beam. A low-lying p-wave resonance in 13Be(1/2−) is confirmed by comparing proton- and neutron-knockout data from 14B and 14Be.
Observation of enhanced subthreshold K+ production in central collisions between heavy nuclei
(1994)
In the very heavy collision system 197Au+197Au, the K+ production process was studied as a function of impact parameter at 1 GeV/nucleon, a beam energy well below the free N-N threshold. The K+ multiplicity increases more than linearly with the number of participant nucleons, and the K+/π+ ratio rises significantly when going from peripheral to central collisions. The measured K+ double differential cross section is enhanced by a factor of 6 compared to microscopic transport calculations if secondary processes (ΔN → KΛN and ΔΔ → KΛN) are ignored.
For reasons of curiosity, we perused the two recent Oxford handbooks on legal history looking for discussions of digital methods in legal history. One of the fundamental decisions to be made when organizing such a handbook is defining which methodological approaches deserve an article of their own and which ones are to be understood rather as cross-cutting themes to be discussed in the context of many articles dedicated to other things. In the case of digital methods in legal history, this decision seems to have been a tough one – at one point, you can find a curious reference to a "chapter on 'Legal History and Digital Humanities'" (OHBLH 354), but in the final publication there is no such text.
However, discussing digital methods in the context of other subjects has, in our opinion, the disadvantage that more systematic, methodological arguments cannot really be developed. Put more concretely, the most "substantial" contributions regarding digital methods are, for whatever reason, those on "The Intellectual History of Law" by Assaf Likhovski, on "Taking the Long View" by Paul D. Halliday, on "Quantitative Legal History" by Daniel Klerman, and on "Indian Law" by Mitra Sharafi, all of which are in the Oxford Handbook on Legal History. (Equally surprisingly, there is no mention of digital methods at all in Angela Fernandez's "Legal History as The History of Legal Texts".) However, even these articles do not really "discuss" digital methods; rather, they merely refer to them (and to some projects) as contributions of sorts to their respective fields of interest.
Thus, if you are looking for digital methods in those handbooks, you can hardly find more than some namedropping passages where things like "digital mapping […], network analysis […], text analysis" (OHBLH 845f.) are mentioned, together with references to example projects where they have been employed but without any explanation as to:
– why these methods are mentioned and not others,
– what they are doing, to what end and under what circumstances,
– what, possibly transformative, impact these methods have on the (respective sub-)field of legal history, and
– what a scholar considering applying these methods should be aware of.
While the space for this is limited, the present Forum contribution tries to mitigate the scarcity of such discussions by presenting and discussing a few textual analyses that make use, for demonstration purposes, of digital methods. Other methods of analysis, such as network analysis and geo-mapping, cannot be covered here, but we provide a link to an online bibliography where you can find them applied to legal history or a related domain and discussed critically. A general discussion of digital perspectives beyond concrete methods of analysis concludes this contribution.
In this paper, we investigate the role of sub-optimality in training data for part-of-speech tagging. In particular, we examine to what extent the size of the training corpus and certain types of errors in it affect the performance of the tagger. We distinguish four types of errors: If a word is assigned a wrong tag, this tag can belong to the ambiguity class of the word (i.e. to the set of possible tags for that word) or not; furthermore, the major syntactic category (e.g. "N" or "V") can be correctly assigned (e.g. if a finite verb is classified as an infinitive) or not (e.g. if a verb is classified as a noun). We empirically explore the decrease of performance that each of these error types causes for different sizes of the training set. Our results show that those types of errors that are easier to eliminate have a particularly negative effect on the performance. Thus, it is worthwhile concentrating on the elimination of these types of errors, especially if the training corpus is large.
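The two binary distinctions above (wrong tag inside or outside the word's ambiguity class; major syntactic category preserved or not) can be sketched as a small classifier. The STTS-style tag names and the `major_cat` helper below are illustrative assumptions, not taken from the paper:

```python
def classify_error(assigned, gold, ambiguity_class, major_cat):
    """Classify a tagging error along two dimensions:
    (1) is the wrong tag inside the word's ambiguity class?
    (2) is the major syntactic category still correct?
    Returns None if the tag is correct, else a (bool, bool) pair."""
    if assigned == gold:
        return None  # not an error
    in_class = assigned in ambiguity_class
    same_major = major_cat(assigned) == major_cat(gold)
    return (in_class, same_major)

# Illustrative major-category function: assume the first letter of an
# STTS-style tag encodes the major category ("V" for verbs, "N" for nouns).
major = lambda tag: tag[0]
```

For example, mis-tagging a finite verb (`VVFIN`) as an infinitive (`VVINF`) preserves the major category and may stay within the word's ambiguity class, while mis-tagging it as a noun (`NN`) outside its ambiguity class fails on both dimensions, yielding the four error types discussed.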
This paper proposes a corpus encoding standard that meets the needs of linguistic research using a variety of linguistic data structures. The standard was developed in SFB 441, a research project at the University of Tuebingen. The principal concern of SFB 441 is the empirical data structures which feed into linguistic theory building. SFB 441 consists of several projects, most of which are building corpora to empirically investigate various linguistic phenomena in various languages (e.g. modal verbs in German, forms of address and politeness in Russian). These corpora will form the components of the "Tuebingen collection of reusable, empirical, linguistic data structures (TUSNELDA)". The TUSNELDA annotation standard aims at providing a uniform encoding scheme for all subcorpora and texts of TUSNELDA such that they can be processed with uniform standardized tools. To guarantee maximal reusability we use XML for encoding. Previous SGML standards for text encoding were provided by the Text Encoding Initiative (TEI) and the Expert Advisory Group on Language Engineering Standards (Corpus Encoding Standard, CES). The TUSNELDA standard is based on TEI and XCES (the XML version of CES) but takes into account the specific needs of the SFB projects, i.e. the peculiarities of the examined languages and linguistic phenomena.
Neutron total cross sections are an important source of experimental data in the evaluation of neutron-induced cross sections. The sum of all neutron-induced reaction cross sections can be determined with a precision of a few per cent in a relative measurement. The neutron spectrum of the photoneutron source nELBE extends in the fast region from about 100 keV to 10 MeV and has favourable conditions for transmission measurements due to the low instantaneous flux of neutrons and low gamma-flash background. Several materials of interest (in part included in the CIELO evaluation or on the HPRL of OECD/NEA) have been investigated: 197Au [1, 2], natFe [2], natW [2], 238U, natPt, 4He, natO, natNe, natXe. For gaseous targets high pressure gas cells with flat end-caps have been built that hold up to 200 bar pressure. The experimental setup will be presented including results from several transmission experiments and the data analysis leading to the total cross sections will be discussed.
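A transmission measurement of the kind described determines the total cross section from the attenuation law T = exp(−n·σ_tot), with n the areal number density of the sample; a minimal sketch of this extraction (function names and sample values are illustrative, not from the experiment):

```python
import math

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def areal_number_density(thickness_cm, density_g_cm3, molar_mass_g_mol):
    """Areal number density n (atoms per cm^2) of a monoisotopic sample."""
    return density_g_cm3 * thickness_cm * N_A / molar_mass_g_mol

def total_cross_section_barn(transmission, n_per_cm2):
    """Extract sigma_tot (in barn) from the measured transmission
    T = (counts with sample) / (counts without sample) = exp(-n * sigma)."""
    sigma_cm2 = -math.log(transmission) / n_per_cm2
    return sigma_cm2 * 1e24  # 1 barn = 1e-24 cm^2
```

Because the transmission is a ratio of count rates with and without the sample, detector efficiency and flux normalization largely cancel, which is why such relative measurements can reach a precision of a few per cent.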
The Coulomb Dissociation (CD) cross sections of the stable isotopes 92,94,100Mo and of the unstable isotope 93Mo were measured at the LAND/R3B setup at GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, Germany. Experimental data on these isotopes may help to explain the problem of the underproduction of 92,94Mo and 96,98Ru in the models of p-process nucleosynthesis. The CD cross sections obtained for the stable Mo isotopes are in good agreement with experiments performed with real photons, thus validating the method of Coulomb Dissociation. The result for the reaction 93Mo(γ,n) is especially important since the corresponding cross section has not been measured before. A preliminary integral Coulomb Dissociation cross section of the 94Mo(γ,n) reaction is presented. Further analysis will complete the experimental database for the (γ,n) production chain of the p-isotopes of molybdenum.