Universitätspublikationen
"Weimar" was the most frequently used cipher in the Germans' process of self-discovery after 1945. It was there that they wanted to "pick up the thread" again, but also to learn what should be avoided. Weimar meant the National Assembly and the drafting of the constitution, the "golden twenties", the inflation, the paralysis of politics, the "party state", the coalition of the "enemies of the system"; it was a shining example, but also chaos and the antechamber of the Nazi state. At his Bielefeld conference, Christoph Gusy drew from this the question of how the early Federal Republic dealt with the dream and trauma of "Weimar", how it went about appropriating history and translating it into politics. These questions are answered first by Wolfram Pyta, with a masterly overview of the decades-long, step-by-step process of the historicization of Weimar, and then by Elke Seefried, on the broodings of the politicians in exile over what had "gone wrong" and what was to be avoided in the future – surprisingly anti-parliamentary and authoritarian broodings, incidentally. ...
"Solidarity" is a weasel word of gelatinous consistency. It is omnipresent, presents itself as weighty and usually also as somewhat reproachful. Its context is invariably normative. It appears wherever an appeal is made to persons on the same level. Typically, groups call for "solidarity" when their internal bonds are crumbling or when they band together in situations of danger. As long as there were still social estates, their members liked to invoke solidarity when it came to preserving the cohesion of their own stratum. The old guilds, corporations, "Gaffeln", unions and cooperatives were solidarity associations. The labor movement took over from them not only the word "comrades" but also the appeal to class solidarity. And even those who want to wage a war today invoke old debts of gratitude and appeal to the solidarity of their allies. ...
The title of Cynthia Hahn's book, "Portrayed on the Heart", refers to the didactic function of pictorial representations formulated by Gregory the Great, whose internalization was meant to lead, as it were, to the perfection of the human being. The pictorial hagiography of the Middle Ages was committed to this task as well, "in inducing a movement beyond words and images - in creating an effect on the soul." (p. 331) The question of which pictorial strategies were developed to convince viewers of the sanctity of the person depicted, and how pictorial rhetoric was able to influence the perception of sanctity, she seeks to answer using the example of the illuminated "libelli" of the 10th to 13th centuries. Owing to their pictorial programs, these manuscripts formed a distinct group within the often unadorned "libelli", which contained various hagiographic texts for the veneration of a saint, such as the vita, the office, hymns and prayers. The "libelli" analyzed in "Portrayed on the Heart" give the author the opportunity to synthesize the findings on hagiography and pictorial narration that she has previously published in various essays, and to interrogate theoretically, from a medievalist perspective, the nature and status of sanctity in the early and high Middle Ages. In this sense, the book is a sophisticated introduction to the research field of medieval sanctity. ...
Recently, a number of books have appeared in which art critics and curators publish their collected articles. Isabelle Graw's "Die bessere Hälfte. Künstlerinnen des 20. und 21. Jahrhunderts" is likewise based on a series of essays that appeared in the 1990s in journals such as Artis and in Texte zur Kunst, which Graw edits. However, Graw has regrouped her continuing engagement with the positioning of women artists within the operating system of the art world into three chapters, so that an independent book has emerged. ...
Epigraphic documents attest that the two neighbouring inland sites, Idalion and Tamassos, were kingdoms during the Cypro-Archaic period, and that – within an interval of nearly a century – they were both incorporated by the kingdom of Kition during the Cypro-Classical period, thereby losing their independent status. The geographical position of Idalion and Tamassos must have been both a blessing and a curse: while the two polities could thrive on the exploitation of the nearby copper mines, they also had to withstand the economic interests of other Cypriote polities in these natural resources. In addition, we may assume that, because of their inland position, Idalion and Tamassos were forced to seek economic collaboration with polities that had direct access to the sea for the export and exchange of commodities beyond the island. We may further expect that the control of ore-mining and forestry activities must have been a potential source of territorial strife between the two inland kingdoms. Therefore, the geo-economic reality likely drew Idalion and Tamassos into a dualistic relationship of being both allies and competitors. ...
The development of image-guided neurosurgery represents a substantial improvement in the microsurgical treatment of tumors, vascular malformations and other intracranial lesions. Despite the wide applicability and many fascinating aspects of image-guided navigation systems, a major drawback of this technology is that it relies on images, mainly MRI scans, acquired preoperatively, on which the planning of the operative procedure as well as its intraoperative performance is based. As dynamic changes of the intracranial contents regularly occur during the surgical procedure, the surgeon is faced with a continuously changing intraoperative field. Only intraoperatively acquired images will provide the neurosurgeon with the information needed to perform real intraoperative image-guided surgery. A number of tools have been developed in recent years, such as intraoperative ultrasound and dedicated movable intraoperative CT units. Because of its excellent imaging qualities, combined with the avoidance of ionizing radiation, MRI currently is, and will remain, the superior imaging method for intraoperative image guidance. In this short overview, the development as well as some of the current and possible future applications of MRI-guided neurosurgery are outlined.
Intra-arterial (IA) chemotherapy for the curative treatment of head and neck cancer experienced a revival in the last decade. Mainly, it was used concurrently with radiation in organ-preserving settings. The modern method of transfemoral catheterisation, superselective perfusion of the tumour-feeding vessel, and high-dose (150 mg m−2) administration of cisplatin with parallel systemic neutralisation by sodium thiosulphate (9 g m−2) made preoperative use feasible. The present paper presents the results of a pilot study of 52 patients with resectable stage 1–4 carcinomas of the oral cavity and the oropharynx, who were treated with one cycle of preoperative IA chemotherapy executed as described above, followed by radical surgery. There were no interventional complications of IA chemotherapy, and acute side effects were low. One tracheotomy had to be carried out due to swelling. The overall clinical local response was 69%. There was no interference with surgery, which was carried out 3–4 weeks later. Pathological complete remission was assessed in 25%. The mean observation time was 3 years. Overall and disease-free survival were 82% and 69% at 3 years, and 77% and 59% at 5 years, respectively. Survival results were compared to a treatment-dependent prognostic index for the same population. In conclusion, IA high-dose chemotherapy with cisplatin and systemic neutralisation in a neoadjuvant setting should be considered a feasible, safe, and effective treatment modality for resectable oral and oropharyngeal cancer. The low toxicity of this local chemotherapy recommends its use especially in stage 1–2 patients. The potential survival benefit indicated by the comparison to the prognostic index should be confirmed in a randomised study.
In eukaryotes, double-stranded (ds) RNA induces sequence-specific inhibition of gene expression, referred to as RNA interference (RNAi). We exploited RNAi to define the role of HER2/neu in the neoplastic proliferation of human breast cancer cells. We transfected SK-BR-3, BT-474, MCF-7, and MDA-MB-468 breast cancer cells with short interfering RNA (siRNA) targeted against human HER2/neu and analyzed the specific inhibition of HER2/neu expression by Northern and Western blots. Transfection with HER2/neu-specific siRNA resulted in a sequence-specific decrease in HER2/neu mRNA and protein levels. Moreover, transfection with HER2/neu siRNA caused cell cycle arrest at G0/G1 in the breast cancer cell lines SK-BR-3 and BT-474, consistent with a powerful RNA silencing effect. siRNA treatment resulted in an antiproliferative and apoptotic response in cells overexpressing HER2/neu, but had no influence on cells with almost no expression of HER2/neu protein, such as MDA-MB-468 cells. These data indicate that HER2/neu function is essential for the proliferation of HER2/neu-overexpressing breast cancer cells. Our observations suggest that siRNAs targeted against human HER2/neu may be valuable antiproliferative agents that display activity against neoplastic cells at very low doses.
In his Yiddish autobiography "Fun Lublin biz Rige" (Riga, 1940), the actor Abraham Eines reported on his 30-year career as an actor in Yiddish theatre companies in Eastern Europe, and also on the period when he was an artist at the Yiddish theatre in Riga. The so-called "Naier idisher teater" had been planned since 1913 and opened in 1927 on the initiative of Jakob Landau, Paul Minz and Lew Ginsberg.
This thesis is based on Eines' autobiography and on research in Latvian, Lithuanian and Polish archives and libraries. The aim was to reconstruct the history of this Yiddish theatre, whose building fortunately survives to this day in the art nouveau quarter of Riga.
The thesis deals with the history of this theatre, the plans which resulted in the construction of the building, people and organisations that were involved, its opening, playing schedules, companies and actors as well as the intercultural, economic and social environments and activities.
In January 1927, the “Naier idisher teater” opened under the main direction of M. Karpinowitsch and the art direction of Abraham Morewski. It was financially supported by membership fees from the “Jewish Theatre Company”. New artists were often engaged by the “Warsaw Association of Artists”.
In the following years, the artistic direction changed several times because of disagreements between the theatre's management and the company. Actors demanded more sophisticated plays and greater artistic licence. The theatre had serious economic problems. The repertoire of the theatre differed distinctly from that of the guest companies coming to Riga: the "Vilner Trupe" staged Yiddish classics by Scholem Alejchem, Scholem Asch and Jacob Gordin, as well as works by Oscar Wilde, Shakespeare and Molière. Furthermore, Alexander Granovsky gave guest performances in Riga with his company of the Moscow theatre (GOSET). In addition, "Habima" started its European tour in this Yiddish theatre in Riga. Many artists were engaged, some for long periods, at Riga's "Naier idisher teater", and the theatre was well attended – on average 70,000 visitors per season. The theatre was equipped with 473 seats plus 160 balcony seats. It existed under different names until the occupation of Riga by the Germans. Today, the museum "Jews in Latvia" (Muzejs Ebreji Latvijā) is located in the former theatre building.
This is an unrevised edition of the thesis.
Electric stimulation of the auditory nerve via cochlear implants has made the treatment of sensory deafness possible. Advanced signal processing and stimulation paradigms have led to continuously improved results in speech understanding. Consequently, indication criteria have been extended to patients with profound and severe-to-profound hearing loss and limited speech understanding with conventional acoustic amplification.
Outside this group, a considerable number of patients present with rather well-preserved low-frequency hearing of 30–60 dB up to 1 kHz, but severe loss of more than 60–70 dB in the mid to high frequency range. Monosyllabic word scores in these patients do not generally exceed 35%, due to missing consonant information. However, even increasing the audibility of these high frequencies by acoustic amplification has very limited efficiency for discriminating speech, and therefore these patients obtain only minor benefit from conventional hearing aids. On the other hand, standard cochlear implantation would carry a high risk of causing complete hearing loss. This situation has led to considering a combination of both modes of stimulation for these patients, who are on the borderline between hearing aids and cochlear implants.
In our present model, the surviving low-frequency region of the cochlea can still be stimulated acoustically, combined with additional electrical stimulation of the impaired mid- and high-frequency region of the cochlea.
Several questions still have to be answered with regard to combined electric and acoustic stimulation (EAS). The possible interaction of electric and acoustic stimuli at the different levels of the auditory system is a major issue. Animal experiments clearly demonstrate that the tuning properties of auditory neurons in response to acute acoustic stimulation are essentially preserved in the presence of electric stimulation, even at high levels of electric stimulation, and that chronic electric stimulation of the intact inner ear does not have a significant effect on compound action potential (CAP) thresholds or inner ear function.
In a previous report, we were able to show that this combined EAS of the auditory system is possible in humans, and that it has a synergistic effect on speech understanding. Further major issues concern the surgical feasibility and reproducibility of cochlear implantation with preservation of residual hearing.
Encouraged by our findings, a clinical study was initiated on the application of EAS. So far, seven adults have been included in this study. In addition, one child has been implanted outside the study.
A small electrostatic storage ring is the central machine of the Frankfurt Ion Storage Experiments (FIRE), which will be built at the new Stern-Gerlach Center of Frankfurt University. As a true multiuser, multipurpose facility with ion energies up to 50 keV, it will allow new methods to analyze complex many-particle systems, from atoms to very large biomolecules. With envisaged storage times of some seconds and beam emittances on the order of a few mm mrad, measurements with resolution up to six orders of magnitude better than in single-pass experiments become possible. In comparison to earlier designs, the ring lattice was modified in many details: problems in earlier designs were related, e.g., to the detection of light particles and of highly charged ions in different charge states. Therefore, the deflectors were redesigned completely, allowing more flexible positioning of the diagnostics. Here, after an introduction to the concept of electrostatic machines, an overview of the planned FIRE facility is given, and the ring lattice and its elements are described in detail.
Review of: Sarah Kember: Cyberfeminism and Artificial Life. London/New York: Routledge 2003. 257 pages, ISBN 0-415-24026-3 (hardcover) / 0-415-24027-1 (paperback), €71.82 (hardcover) / €21.98 (paperback)
Creating "artificial life" was for centuries a phantasm pursued above all with the means of literature and art. It is a topos that defines culture as the control, mastery and improvement of nature – one in which human fantasies of power and misogynist obsessions mix in a striking way: precisely where the biological functions of "sex" were supposed to become superfluous, gender dichotomies and hierarchies emerge all the more clearly as constructions. Little has changed in this respect to the present day. With current developments in bio- and information technologies, however, the phantasms have increasingly gained in reality. Whether in the computer labs of the entertainment industry or in those of genome research: everywhere, the formula of life seems to be at stake. But what does that actually mean? What role will "artificial life" play in our future lives? And what role do the phantasms carried by this topos play in it? How do these "virtual realities" intervene in our concepts of body and identity, our notions of subjectivity and gender? Sarah Kember's book promises to cut illuminating paths through the thicket of these powerful discourses, concepts and constructions, and to point out new avenues for feminist interventions in the debates about "artificial life".
1. Hessische Schülerakademie : schulpraktische Veranstaltung für Lehramtsstudierende : Dokumentation
(2004)
This paper sets out to analyze the influence of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We investigate the hypothesis that the different governance structures, objectives, and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set covering all IPOs that occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO than all other IPOs, and their share prices fluctuate less than those of their counterparts in this period. By contrast, firms backed by public VCs show relative underperformance. The fact that this could occur implies that market participants did not correctly assess the role played by the different types of VCs.
Quantitative analysis of the cardiac fibroblast transcriptome: implications for NO/cGMP signaling
(2004)
Cardiac fibroblasts regulate tissue repair and remodeling in the heart. To quantify transcript levels in these cells we performed a comprehensive gene expression study using serial analysis of gene expression (SAGE). Among 110,169 sequenced tags we could identify 30,507 unique transcripts. A comparison of SAGE data from cardiac fibroblasts with data derived from total mouse heart revealed a number of fibroblast-specific genes. Cardiac fibroblasts expressed a specific collection of collagens, matrix proteins and metalloproteinases, growth factors, and components of signaling pathways. The NO/cGMP signaling pathway was represented by the mRNAs for the α1 and β1 subunits of guanylyl cyclase, cGMP-dependent protein kinase type I (cGK I), and, interestingly, the G-kinase-anchoring protein GKAP42. The expression of cGK I was verified by RT-PCR and Western blot. To establish a functional role for cGK I in cardiac fibroblasts we studied its effect on cell proliferation. Selective activation of cGK I with a cGMP analog inhibited the proliferation of serum-stimulated cardiac fibroblasts, which express cGK I, but not higher-passage fibroblasts, which contain no detectable cGK I. Currently, our data suggest that cGK I mediates the inhibitory effects of the NO/cGMP pathway on cardiac fibroblast growth. Furthermore, the SAGE library of transcripts expressed in cardiac fibroblasts provides a basis for future investigations into the pathological regulatory mechanisms underlying cardiac fibrosis.
Using a normalized CES function with factor-augmenting technical progress, we estimate a supply-side system of the US economy from 1953 to 1998. Avoiding potential estimation biases that have occurred in earlier studies, and placing a high emphasis on the consistency of the data set required by the estimated system, we obtain robust results not only for the aggregate elasticity of substitution but also for the parameters of labor- and capital-augmenting technical change. We find that the elasticity of substitution is significantly below unity and that the growth rates of technical progress show an asymmetrical pattern in which the growth of labor-augmenting technical progress is exponential, while that of capital is hyperbolic or logarithmic.
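The normalized CES technology referred to above can be sketched in its common textbook form; the paper's exact parameterization and normalization point are assumptions here, not taken from the abstract:

```latex
% Normalized CES production function with factor-augmenting
% technical progress (illustrative sketch).
% \sigma      : elasticity of substitution (found to be below unity)
% \pi_0       : capital income share at the normalization point t = 0
% A_t, B_t    : capital- and labor-augmenting efficiency levels
Y_t = Y_0 \left[ \pi_0 \left( \frac{A_t K_t}{A_0 K_0} \right)^{\frac{\sigma-1}{\sigma}}
      + (1-\pi_0) \left( \frac{B_t L_t}{B_0 L_0} \right)^{\frac{\sigma-1}{\sigma}} \right]^{\frac{\sigma}{\sigma-1}}
```

In the limit σ → 1 this form reduces to the Cobb-Douglas case, so the finding σ < 1 means capital and labor are gross complements, which is why the growth paths of the two augmentation terms A_t and B_t can be identified separately.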
The objective of this paper is the study of the equilibrium behavior of a population on the hierarchical group Ω_N consisting of families of individuals undergoing critical branching random walk; in addition, these families themselves develop according to a critical branching process. Strong transience of the random walk guarantees the existence of an equilibrium for this two-level branching system. In the limit N → ∞ (called the hierarchical mean-field limit), the equilibrium aggregated populations in a nested sequence of balls B_ℓ^(N) of hierarchical radius ℓ converge to a backward Markov chain on R+. This limiting Markov chain can be explicitly represented in terms of a cascade of subordinators, which in turn makes possible a description of the genealogy of the population.
With the use of new media in higher education teaching, new demands arise concerning the competencies and qualifications of university teachers. Which competencies university teachers need for planning, designing, producing and conducting web-based courses has only recently begun to be discussed in the German-language literature. At the same time, it is becoming increasingly clear that the successful introduction of new media in teaching cannot succeed without appropriate qualification of university teachers and their staff. In addition to qualifications in higher-education didactics, competencies in planning, designing and implementing multimedia teaching materials are necessary to ensure that these media are used in a manner adequate to the medium. This contribution examines which competencies university teachers must acquire in this context in order to make their use of media successful, and which tasks in the planning and production process of new media in teaching they ultimately take on themselves.
Balloon-borne measurements of CFC-11 (on flights of the DIRAC in situ gas chromatograph and the DESCARTES grab sampler), ClO and O3 were made during the 1999/2000 winter as part of the SOLVE-THESEO 2000 campaign. Here we present the CFC-11 data from nine flights and compare them first with data from other instruments which flew during the campaign and then with the vertical distributions calculated by the SLIMCAT 3-D CTM. We calculate ozone loss inside the Arctic vortex between late January and early March using the relation between CFC-11 and O3 measured on the flights; the peak ozone loss (1200 ppbv) occurs in the 440–470 K region in early March, in reasonable agreement with other published empirical estimates. There is also good agreement between the ozone losses derived from the three independent balloon tracer data sets used here. The magnitude and vertical distribution of the loss derived from the measurements are in good agreement with the loss calculated by SLIMCAT over Kiruna for the same days.
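The tracer-correlation approach used above follows a standard logic (a sketch; the reference function and symbols here are illustrative, not taken from the paper): an early-vortex relation between O3 and the long-lived tracer CFC-11 is fixed before significant chemical loss begins, and because both species descend together, any later deviation of observed ozone from that relation is attributed to chemical destruction.

```latex
% Tracer-correlation estimate of chemical ozone loss (sketch).
% f     : early-vortex reference relation O3 = f(chi), established
%         before significant chemical loss occurs
% chi   : CFC-11 mixing ratio measured later in the winter
\Delta\mathrm{O}_3
  = f\!\left(\chi_{\mathrm{CFC\text{-}11}}\right)
  - \mathrm{O}_3^{\mathrm{obs}}
```

Accumulating this difference over the vortex profile as a function of potential temperature yields loss estimates such as the 1200 ppbv peak in the 440–470 K region quoted above.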
Chronic obstructive pulmonary disease (COPD) is a major global health problem and is predicted to become the third most common cause of death by 2020. Apart from the important preventive step of smoking cessation, there are no specific treatments for COPD that are effective in reversing the condition; there is therefore a need to understand the pathophysiological mechanisms that could lead to new therapeutic strategies. The development of experimental models will help to dissect these mechanisms at the cellular and molecular level. COPD is a disease characterized by progressive airflow obstruction of the peripheral airways, associated with lung inflammation, emphysema and mucus hypersecretion. Different approaches to mimicking COPD have been developed but are limited in comparison to models of allergic asthma. COPD models usually do not reproduce all the major features of human COPD and are commonly based on the induction of COPD-like lesions in the lungs and airways using noxious inhalants such as tobacco smoke, nitrogen dioxide, or sulfur dioxide. Depending on the duration and intensity of exposure, these noxious stimuli induce signs of chronic inflammation and airway remodelling. Emphysema can be achieved by combining such exposure with instillation of tissue-degrading enzymes. Other approaches are based on genetically targeted mice which develop COPD-like lesions with emphysema; such mice provide deep insights into pathophysiological mechanisms. Future approaches should aim to mimic irreversible airflow obstruction, associated with cough and sputum production, with the possibility of inducing exacerbations.
Two tetrahydroisoquinoline alkaloids were extracted from the alkaloid fraction of a methanol extract of the seeds of Calycotome villosa subsp. intermedia. Their structures were established as (R)-1-hydroxymethyl-7,8-dimethoxy-1,2,3,4-tetrahydroisoquinoline (1) and (S)-7-hydroxymethyl-2,3-dimethoxy-7,8,9,10-tetrahydroisoquinoline chloride (2) by spectroscopic techniques and X-ray diffraction analysis.
We have used the SLIMCAT 3-D off-line chemical transport model (CTM) to quantify the Arctic chemical ozone loss in the year 2002/2003 and compare it with similar calculations for the winters 1999/2000 and 2003/2004. Recent changes to the CTM have improved the model's ability to reproduce polar chemical and dynamical processes. The updated CTM uses σ-θ as a vertical coordinate which allows it to extend down to the surface. The CTM has a detailed stratospheric chemistry scheme and now includes a simple NAT-based denitrification scheme in the stratosphere.
In the model runs presented here the model was forced by ECMWF ERA40 and operational analyses. The model used 24 levels extending from the surface to ~55 km and a horizontal resolution of either 7.5°×7.5° or 2.8°×2.8°. Two different radiation schemes, MIDRAD and the CCM scheme, were used to diagnose the vertical motion in the stratosphere. Based on tracer observations from balloons and aircraft, the more sophisticated CCM scheme gives a better representation of the vertical transport in this model which includes the troposphere. The higher resolution model generally produces larger chemical O3 depletion, which agrees better with observations.
The CTM results show that very early chemical ozone loss occurred in December 2002 due to extremely low temperatures and early chlorine activation in the lower stratosphere. Thus, chemical loss in this winter started earlier than in the other two winters studied here. In 2002/2003 the local polar ozone loss in the lower stratosphere was ~40% before the stratospheric final warming. Larger ozone loss occurred in the cold year 1999/2000 which had a persistently cold and stable vortex during most of the winter. For this winter the current model, at a resolution of 2.8°×2.8°, can reproduce the observed loss of over 70% locally. In the warm and more disturbed winter 2003/2004 the chemical O3 loss was generally much smaller, except above 620 K where large losses occurred due to a period of very low minimum temperatures at these altitudes.
Conference reader for the conference jointly organized by Athanasios Orphanides (Federal Reserve Board, Washington D.C.), John C. Williams (Federal Reserve Bank of San Francisco), Heinz Hermann (Deutsche Bundesbank), and Volker Wieland (Center for Financial Studies and Goethe University Frankfurt), held in Eltville on 30-31 August 2003. Contents: * Volker Wieland (Director, Center for Financial Studies): Foreword * Hans Georg Fabritius (Member of the Executive Board of the Deutsche Bundesbank): Opening Remarks * Charles Goodhart (Norman Sosnow Professor of Banking and Finance at the London School of Economics and External Member of the Bank of England's Monetary Policy Committee): After Dinner Speech * Paper Abstracts * List of Participants
Rechenschaftsbericht des Präsidiums 2002-2003 / Präsidium der Johann Wolfgang Goethe-Universität
(2004)
IFLS-Journal. Nr. 6, 2004
(2004)
IFLS-Journal. Nr. 5, 2004
(2004)
Using faculty-librarian partnerships to ensure that students become information fluent in the 21st century
In the 21st century, educators in partnership with librarians must prepare students effectively for the productive use of information, especially in higher education. Students will need to graduate from universities with appropriate information and technology skills to enable them to become productive citizens in the workplace and in society. Technology is having a major impact on society: in economics, e-business is moving to the forefront; in communication, e-mail, the Internet and cellular telephones have transformed how people communicate; in the work environment, computers and web applications are emphasized; and in education, virtual learning and teaching are becoming more important. These few examples indicate how the 21st-century information environment requires future members of the workforce to be information fluent, so that they have the ability to locate information efficiently, evaluate information for specific needs, organize information to address issues, apply information skillfully to solve problems, use information to communicate effectively, and use information responsibly to ensure a productive work environment. Individuals can achieve information fluency by acquiring cultural, visual, computer, technology, research and information management skills that enable them to think critically.
Teaching information literacy: substance and process This presentation explores the concept of information literacy within the broader context of higher education. It argues that, certain assertions in the library literature notwithstanding, the concepts associated with information literacy are not new, but rather very closely resemble the qualities traditionally considered to characterize a well-educated person. The presentation also considers the extent to which the higher education system does indeed foster the attributes commonly associated with information literacy. The term information literacy has achieved the immediacy it currently enjoys within the library community with the advent of the so-called "information age". The information age is commonly touted in the literature, both popular and professional, as constituting nothing short of a revolution. Academic librarians and other educators have of course felt called upon to make their teaching reflect both the growing proliferation of information formats and the major transformations affecting the process of information seeking. Faced with so much novelty and uncertainty, it is no surprise that many have felt that these changes call for a revolution in teaching. It is within this context that the concept of information literacy has flourished. It is argued in this presentation, however, that by treating information literacy as an essentially new specialty that owes much of its importance to the plethora of electronic information, we risk obscuring some of the most fundamental and enduring educational values we should be imparting to our students. Much of the literature on information literacy assumes - rather than argues - that recent changes in the way we approach education are indications of progress. Indeed, much of the self-narrative that institutions produce (in bulletins, mission statements, web sites, etc.) 
endorses an approach to education that will result in lifelong learners who are critical consumers of information. After critically examining the degree to which such statements of educational approach reflect reality, this presentation concludes by considering the effects of certain changes in the culture of higher education. It considers particularly the transformation - at least in North America - of the traditional model of higher education as a public good to a market-driven business model. It poses the question of whether a change of this significance might in fact detract from, rather than promote, the development of information literate students.
Mexiko und Venezuela
(2004)
Since the description of sepsis by Schottmüller in 1914, the amount of knowledge available on sepsis and its underlying pathophysiology has substantially increased. Epidemiologic examinations of abdominal septic shock patients show the high risk posed by this condition and the extensive therapy it requires in the intensive care unit (ICU) (5). Unfortunately, it has so far not been possible to significantly reduce the mortality rate of septic shock, which is as high as 50-60% worldwide, although the PROWESS results (1) are encouraging. This paper summarizes the main results of the MEDAN project and their medical impact. Several aspects have already been published; see the references. The heterogeneity of patient groups and the variations in therapy strategies are seen as one of the main problems for sepsis trials. In the MEDAN multi-center study of 71 intensive care units in Germany, a group of 382 patients consisting exclusively of abdominal septic shock patients who met the consensus criteria for septic shock (3) was analysed. Within scores or in stand-alone experiments, variables are often studied in isolation rather than as a multidimensional whole; e.g. a recent study looks at the role thrombocytes play (15). To avoid this limitation, our study compares several established scores (SOFA, APACHE II, SAPS II, MODS) by a multi-dimensional neural network analysis. For outcome prediction, the data of 382 patients were analysed using most of the commonly documented vital parameters and doses of medication (metric variables). Data were collected in German hospitals from 1998 to 2001. The 382 handwritten patient records were transferred to an electronic database, yielding about 2.5 million data entries. The metric data contained in the database consist of daily measurements and doses of medication. We used range and plausibility checks to keep faulty data out of the electronic database. 187 of the 382 patients died (49%).
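The range and plausibility checks mentioned above can be sketched as follows. This is a hypothetical illustration only: the variable names and the permitted ranges below are invented for the example and are not the MEDAN project's actual rules.

```python
# Hypothetical sketch of range/plausibility checks applied when transferring
# handwritten patient records into an electronic database.
# All variables and limits are illustrative, not the MEDAN study's real ones.
PLAUSIBLE_RANGES = {
    "heart_rate":   (20, 250),     # beats per minute
    "temperature":  (30.0, 43.0),  # degrees Celsius
    "thrombocytes": (1, 1500),     # 10^3 per microliter
}

def check_entry(variable, value):
    """Return True if the value lies within the plausible range for the variable."""
    lo, hi = PLAUSIBLE_RANGES[variable]
    return lo <= value <= hi

def filter_record(record):
    """Keep only entries that pass the plausibility check."""
    return {k: v for k, v in record.items()
            if k in PLAUSIBLE_RANGES and check_entry(k, v)}

record = {"heart_rate": 95, "temperature": 58.2, "thrombocytes": 240}
clean = filter_record(record)  # the implausible temperature entry is rejected
```

In practice such checks would flag entries for manual review against the paper record rather than silently dropping them.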
Data driven automatic model selection and parameter adaptation – a case study for septic shock
(2004)
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, the approach of systematically learning the parameters is necessary. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. For the small but important example of inflammation modeling, the performance of the approach is evaluated.
In bioinformatics, biochemical signal pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, the approach of systematically obtaining the most appropriate model and learning its parameters is extremely interesting. One of the most frequently used approaches to model selection is to choose the least complex model which "fits the needs". For noisy measurements, choosing the model with the smallest mean squared error on the observed data results in a model which fits the data too closely, i.e. it overfits. Such a model will perform well on the training data, but worse on unknown data. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. For the small but important example of inflammation modeling, the performance of the approach is evaluated. Keywords: biochemical pathways, differential equations, septic shock, parameter estimation, overfitting, minimum description length.
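The selection principle described in this abstract, preferring the model that gives the shortest description of the data, can be illustrated with a minimal two-part code length (a BIC-like form often used as an MDL approximation); this is a generic textbook sketch, not the paper's exact coding scheme, and the data are synthetic.

```python
import math, random

# Two-part MDL sketch: code length ≈ (k/2)*log2(n) for the k parameters
# plus (n/2)*log2(RSS/n) for the residuals. Data are synthetic (truly linear),
# so the criterion should prefer the linear over the constant model.
random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]

def rss_constant(xs, ys):
    """Residual sum of squares of the best constant model (one parameter)."""
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys), 1

def rss_linear(xs, ys):
    """Residual sum of squares of the least-squares line (two parameters)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)), 2

def description_length(rss, k, n):
    """Approximate two-part code length in bits."""
    return 0.5 * k * math.log2(n) + 0.5 * n * math.log2(rss / n)

n = len(xs)
dl = {name: description_length(*fit(xs, ys), n)
      for name, fit in [("constant", rss_constant), ("linear", rss_linear)]}
best = min(dl, key=dl.get)  # the linear model yields the shorter description
```

The extra parameter of the linear model costs a few bits, but the far smaller residuals more than pay for it; an overparameterized model would lose on the parameter term.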
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, the approach of systematically learning the parameters is necessary. In this paper, for the small but important example of inflammation modeling, a network is constructed and different learning algorithms are proposed. It turned out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to the sparse given data. Keywords: model parameter adaptation, septic shock, coupled differential equations, genetic algorithm.
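Fitting ODE parameters to sparse data with an evolutionary method, as described above, can be sketched on a toy problem. The single-equation model dy/dt = -k·y, the (1+λ) evolution strategy, and all settings below are illustrative stand-ins for the paper's inflammation network, not its actual model.

```python
import math, random

# Toy evolutionary fit of one ODE parameter to sparse data.
# Model (illustrative only): dy/dt = -k*y with y(0) = 1.
random.seed(1)
TRUE_K = 0.7
data = [(t, math.exp(-TRUE_K * t)) for t in (0.5, 1.0, 2.0, 4.0)]  # sparse samples

def simulate(k, t_end, dt=0.01):
    """Integrate dy/dt = -k*y from y(0)=1 with explicit Euler steps."""
    y, t = 1.0, 0.0
    while t < t_end - 1e-9:
        y += dt * (-k * y)
        t += dt
    return y

def fitness(k):
    """Mean squared error between simulation and the sparse data (lower is better)."""
    return sum((simulate(k, t) - y) ** 2 for t, y in data) / len(data)

# (1+lambda) evolution strategy: mutate the parent, keep the best candidate.
parent = random.uniform(0.0, 5.0)
for generation in range(60):
    offspring = [max(1e-6, parent + random.gauss(0, 0.2)) for _ in range(8)]
    parent = min(offspring + [parent], key=fitness)
# parent now approximates TRUE_K = 0.7
```

For a real pathway model the genome would be a vector of many rate constants and the simulation a stiff coupled system, but the mutate-evaluate-select loop is the same.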
This thesis examines the effects of blood components (neutrophil granulocytes and sera) from patients operated with a heart-lung machine (HLM) on the intercellular contacts in cerebral microvascular cell layers. Blood components were isolated both from patients with long (> 80 min) and with short (< 80 min) HLM times. The aim was to find out which of the blood components are responsible for the changes in the integrity and morphology of cell-cell contacts and can thereby cause pathological disturbances of the blood-brain barrier (BBB). The investigations were carried out with a BBB model of porcine cerebral microvascular endothelial cells [BCEC]. After treating the BBB model in vitro with the respective blood components, the integrity of the intercellular contacts was examined by TEER measurements, and the morphological changes as well as the expression of the junction-forming molecules (VE-cadherin, β-catenin and occludin) were observed. It turned out that patient sera isolated at different time points of the cardiac operations exert no influence on the intercellular contacts of the BCEC cultures. In contrast, co-cultivation of the BCEC with neutrophil granulocytes [PMN], isolated at the same operative time points as the sera, led to changes within the cell-cell contacts at both the functional (TEER decrease) and the morphological level. Independently of the duration of the HLM times and the PMN sampling time points, the addition of PMN from cardiac surgery patients reduced the TEER and thereby impaired the integrity of the BCEC cultures; PMN isolated during the surgical procedure caused somewhat stronger, but not significant, TEER decreases than those isolated preoperatively (before anesthesia).
At the morphological level, various changes of the intercellular contacts in BCEC cultures could likewise be observed after the addition of PMN from cardiac surgery patients. Independently of the operative time points at which they were isolated and of the length of the HLM times, the patient PMN led to a stretching of the BCEC and thus to a change in cell shape, as well as to a reduction of the membrane-associated β-catenin and occludin fraction and a transformation of the "zigzag" membrane pattern into a smooth, almost linear pattern. In addition, Western blot analysis revealed a decrease in β-catenin expression (an AJ protein) in BCEC cultures that were co-cultivated with PMN isolated during and after HLM use. These PMN came from cardiac surgery patients with long HLM times, but also from older patients (77 and 79 years) with short HLM times, which pointed to a dependence of the negative expression effect caused by the patient PMN on patient age and on the duration of HLM use. No influence of the patient PMN on the expression of VE-cadherin and occludin could be detected. By characterizing the concentration profiles of neurological markers in cardiac surgery patient sera, a correlation between the duration of HLM use and the NSE and S-100B concentrations in patient sera could be established. Pathological values of the two neurological markers, however, could only be measured in patients with HLM times of more than 80 minutes. In an animal model, the modification of blood-brain barrier permeability in the course of cardiac surgery with HLM was then investigated. By quantitative detection in brain tissue of intravenously applied Evans blue dye, an increased permeability of the blood-brain barrier could be demonstrated in the animals operated with HLM.
However, no cerebral edema formation could be observed by MRI examinations 6 hours after the end of the experiment in any of the cases studied. In addition to the experiments described above, the stabilizing effect of interferon-β [IFN-β] on the blood-brain barrier model was investigated. Treatment of confluent BCEC cultures with IFN-β at various concentrations led to marked TEER increases, indicating a stabilization of the intercellular contacts and thus of the integrity and barrier properties of the confluent BCEC cultures.
E-Learning Strategien als Spannungsfeld für Hochschulentwicklung, Kompetenzansätze und Anreizsysteme
(2004)
This contribution provides an introduction to the topic of e-learning strategies and, at the same time, an overview of the topics addressed by the contributions collected in this volume. A detailed presentation of the aspects to be considered in developing a strategy for the use of media makes clear in what order the examples from universities presented here contribute to a better understanding of the conceptual and infrastructural considerations within an overall e-learning strategy. Besides the establishment of multimedia competence centers and other service facilities, these include qualification programs, project funding, and accompanying evaluation and consulting approaches. The introductory contribution also makes clear which steps have to be taken to develop such a concept, which hurdles and aspects must be considered in order to arrive at a successful, sustainable and appropriate use of media in teaching within one's own university landscape, and how actors and centers can be involved in such a process at an early stage.
A large proportion of the light energy used in photosynthesis is supplied by light-harvesting systems. In plant photosynthesis, a distinction is made between light-harvesting complex I (LHC-I), associated with photosystem I (PS-I), and light-harvesting complex II (LHC-II), associated with photosystem II (PS-II). LHC-II is the most abundant protein-pigment complex of the chloroplasts and binds up to 50% of all chlorophylls in the thylakoid membrane. The protein-pigment complex LHC-II has four, partly interrelated, functions in photosynthesis: I) the collection and transfer of light energy, II) stabilization of the grana stacks, III) balancing the excitation energy between PS-I and PS-II, IV) protection of photosynthesis against over-excitation by non-photochemical quenching of excitation energy (NPQ). In the plant, LHC-II forms trimers in various combinations of three isoforms (Lhcb1, Lhcb2 and Lhcb3), with Lhcb1 accounting for 70-90% of LHC-II. Each monomer binds 8 different cofactors in varying amounts, which make up about 30% of its mass. The three isoforms of LHC-II are strongly conserved in all plants. The functional significance of the isoforms, however, is largely unclear, mainly because of the difficulty of isolating pure isoforms from plant material. In the first part of this thesis, all three isoforms were therefore produced recombinantly and folded into their native form together with separately isolated lipids and photosynthetic pigments. The subsequent biochemical and spectroscopic characterization showed a high degree of homology between the three isoforms, with Lhcb3 exhibiting the greatest differences (Standfuss and Kühlbrandt 2004). The most likely function of Lhcb1 and Lhcb2 is the adaptation of photosynthesis to varying light conditions.
LHC-II heterotrimers containing Lhcb3 may be involved in the transfer of light energy from the main Lhcb1/Lhcb2 antenna to the PS-II reaction center. For research on LHC-II, the atomic model of the complex obtained by cryo-electron microscopy on 2D crystals was of enormous importance. A deep understanding of the functions of LHC-II, however, requires a structure at higher resolution, which is difficult to achieve with 2D crystals. In the course of this work, more than 100,000 3D crystallization experiments were therefore carried out, which led to the successful crystallization of LHC-II isolated from pea leaves and of LHC-II folded in vitro. The 3D crystals from native material showed a degree of order sufficient for structure determination by X-ray crystallography and led to a structure of LHC-II at 2.5 Å resolution (Standfuss et al., submitted). The structure shows 223 of the 232 amino acids and the position and orientation of 4 carotenoids (2 luteins, 1 neoxanthin and 1 violaxanthin), 14 chlorophylls (8 Chl a and 6 Chl b) and two lipids (PG and DGDG) per monomer. This information is essential for understanding the energy transfer within LHC-II and to the photosynthetic reaction centers and, together with the large number of spectroscopic studies, should enable a future detailed modeling of these ultrafast and extremely efficient energy transfer processes. Based on the charge distribution of the stromal side of the complex, a model for the involvement of LHC-II in the stacking of grana in chloroplasts could be constructed. This model also provides a plausible explanation for the balancing of excitation energy between PS-I and PS-II controlled by phosphorylation of the N-terminus. Finally, the 2.5 Å structure of LHC-II reveals a simple but effective mechanism for the optimization and protection of the photosynthetic apparatus by NPQ.
This mechanism requires no structural changes of LHC-II or of the rest of the light-harvesting antenna and is based on the reversible binding of the xanthophylls violaxanthin and zeaxanthin to LHC-II. This work thus contributes to all functions of the LHC-II complex and helps to understand fundamental regulatory mechanisms and the supply of solar energy for plant photosynthesis.
Hydroxyethyl starch (HES) is a colloidal plasma volume expander used for volume therapy in trauma and shock and for improving rheology in circulatory disorders. Amylopectin, the basis of HES, is substituted in order to alter its physical properties so that a solution suitable for infusion can be produced. An important side effect of this substitution is that the modified sites thus introduced minimize the enzymatic degradation of the volume expander by serum glycosidases. The molecular properties of HES can be described by the molecular weight distribution, characterized by the weight-average molar mass Mw, the number-average molar mass Mn and the molar mass at the peak maximum Mp, as well as by the degree of substitution. Commercially available HES solutions are labeled by the weight-average molar mass (Mw) and the molar substitution (MS). Based on previous findings on the storage of HES in organs, the questions arose whether the hypothesis that HES is degraded by lysosomal enzymes could be substantiated, and whether the safety of HES for patient use could be improved by the targeted use of specific HES fractions. The aim of this work was therefore to determine, for the first time, the molecular weight distribution of the HES stored in spleen and liver after infusion, using size-exclusion chromatography coupled with multi-angle laser light scattering detection. Three commercially available HES preparations with different Mw and different substitution were investigated (the designation includes Mw (kDa) and MS): HES 130/0.4 and HES 200/0.5 as well as HES 450/0.7. Eight Wistar rats per experimental group each received an infusion of 18 ml HES. The organs were removed for molar mass determination up to fifty days after infusion.
The hemoglobin concentrations and hematocrit values of the blood samples taken during the first 48 hours were determined and provided information on the hemodilution. The most important result was that the molar mass distributions of the HES from spleen and liver differ. The liver predominantly stores low-molecular-weight fractions. Directly after infusion, the Mw of the HES in the liver was 89,606±8,570 (HES 450/0.7), 20,038±1,600 (HES 200/0.5) and 23,769±2,489 (HES 130/0.4). In the course of the investigations, the Mw in the liver increased until at most day 5 after infusion (HES 450/0.7), but then decreased again in the further determinations after more than 5 days. The peak maximum of the molar mass distribution of the HES in the liver remained largely constant (HES 450/0.7: ~60 kDa; HES 200/0.5: ~30 kDa; HES 130/0.4: ~30 kDa). The molar mass distribution in the spleen, by contrast, showed high-molecular-weight HES, with the molar masses increasing further over time. After infusion of HES 450/0.7, the mean Mw rose from 148,220 Da to 229,617 Da. Possibly the spleen predominantly stores HES that is difficult to cleave. In the liver, HES was found after infusion of all HES preparations, and already immediately after infusion. In the spleen, stored HES was detectable only after infusion of the high-molecular-weight, highly substituted HES 450/0.7 and the medium-molecular-weight, medium-substituted HES 200/0.5. After infusion of HES 200/0.5, HES was detected in the spleen only sporadically and only from day one onwards. In the liver, storage of HES 450/0.7 likewise persisted the longest, whereas storage of HES 130/0.4 in the liver lasted only up to 3 days after infusion. The time course of the molar mass distribution in the liver points to an intracellular degradation of the HES by lysosomal enzymes, whereas in the spleen uncleaved high-molecular-weight HES accumulates over a long period.
With regard to the predictable duration of storage, the low-molecular-weight, low-substituted HES is to be regarded as particularly favorable. However, in the liver, low-molecular-weight fractions of all HES preparations are taken up in competition with renal elimination. Repeated, high-dose administration of HES in decompensated renal insufficiency should therefore always be viewed critically because of the risk of mechanical impairment of the liver by the HES accumulated there.
We report on the rapidity and centrality dependence of proton and antiproton transverse mass distributions from 197Au + 197Au collisions at √sNN = 130 GeV as measured by the STAR experiment at the Relativistic Heavy Ion Collider (RHIC). Our results are from the rapidity and transverse momentum range of |y| < 0.5 and 0.35 < pt < 1.00 GeV/c. For both protons and antiprotons, transverse mass distributions become more convex from peripheral to central collisions, demonstrating characteristics of collective expansion. The measured rapidity distributions and the mean transverse momenta versus rapidity are flat within |y| < 0.5. Comparisons of our data with results from model calculations indicate that, in order to obtain a consistent picture of the proton (antiproton) yields and transverse mass distributions, the possibility of prehadronic collective expansion may have to be taken into account.
We report results on ρ(770)0 → π+π− production at midrapidity in p+p and peripheral Au+Au collisions at √sNN = 200 GeV. This is the first direct measurement of ρ(770)0 → π+π− in heavy-ion collisions. The measured ρ0 peak in the invariant mass distribution is shifted by ~40 MeV/c² in minimum bias p+p interactions and ~70 MeV/c² in peripheral Au+Au collisions. The ρ0 mass shift depends on transverse momentum and multiplicity. The modification of the ρ0 meson mass, width, and shape due to phase space and dynamical effects is discussed.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30 A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
We suggest that the fluctuations of strange hadron multiplicity could be sensitive to the equation of state and microscopic structure of strongly interacting matter created at the early stage of high energy nucleus-nucleus collisions. They may serve as an important tool in the study of the deconfinement phase transition. We predict, within the statistical model of the early stage, that the ratio of properly filtered fluctuations of strange to non-strange hadron multiplicities should have a non-monotonic energy dependence with a minimum in the mixed phase region.
The data on mT spectra of K0S, K+ and K− mesons produced in all inelastic p+p and p+pbar interactions in the energy range √sNN = 4.7-1800 GeV are compiled and analyzed. The spectra are parameterized by a single exponential function, dN/(mT dmT) = C exp(−mT/T), and the inverse slope parameter T is the main object of study. The T parameter is found to be similar for K0S, K+ and K− mesons. It increases monotonically with collision energy from T ≈ 30 MeV at √sNN = 4.7 GeV to T ≈ 220 MeV at √sNN = 1800 GeV. The T parameter measured in p+p and p+pbar interactions is significantly lower than the corresponding parameter obtained for central Pb+Pb collisions at all studied energies. Also, the shape of the energy dependence of T differs between central Pb+Pb collisions and p+p(pbar) interactions.
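The single-exponential parameterization dN/(mT dmT) = C exp(−mT/T) can be fitted by a linear regression of the logarithm of the yield against mT, since log[dN/(mT dmT)] = log C − mT/T. The sketch below uses a synthetic spectrum (T = 0.160 GeV), purely to illustrate extracting the inverse slope parameter; it is not NA49 or collider data.

```python
import math

# Synthetic transverse-mass spectrum following dN/(mT dmT) = C*exp(-mT/T).
T_TRUE, C = 0.160, 50.0  # GeV; illustrative values only
mt = [0.55 + 0.1 * i for i in range(10)]          # mT points in GeV
yields = [C * math.exp(-m / T_TRUE) for m in mt]  # dN/(mT dmT)

def fit_inverse_slope(mt, yields):
    """Least-squares line through (mT, log yield); the slope equals -1/T."""
    n = len(mt)
    logy = [math.log(y) for y in yields]
    mx, my = sum(mt) / n, sum(logy) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(mt, logy)) / \
            sum((x - mx) ** 2 for x in mt)
    return -1.0 / slope  # inverse slope parameter T

T_fit = fit_inverse_slope(mt, yields)  # recovers T = 0.160 GeV here
```

With real, statistically scattered spectra one would weight the points by their uncertainties (or fit the exponential directly), but the slope-to-T relation is the same.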
Fluctuations of charged particle number are studied in the canonical ensemble. In the infinite volume limit the fluctuations in the canonical ensemble are different from the fluctuations in the grand canonical one. Thus, the well-known equivalence of both ensembles for the average quantities does not extend to the fluctuations. In view of a possible relevance of the results for the analysis of fluctuations in nuclear collisions at high energies, the role of limited kinematical acceptance is studied.
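The ensemble difference can be made concrete with a toy Monte Carlo: if positive and negative particles are produced independently (grand-canonical-like), the net charge fluctuates, whereas exact pair production (canonical-like, net charge fixed) suppresses those fluctuations, and a limited acceptance that observes each particle only with probability p partly restores them. This is an illustrative simulation, not the paper's analytic calculation.

```python
import math, random

# Toy comparison of net-charge fluctuations, Q = N+ - N-, under a limited
# acceptance keeping each particle with probability P_ACC. Illustrative only.
random.seed(2)
MEAN_MULT, P_ACC, EVENTS = 10.0, 0.6, 20000

def poisson(lam):
    """Knuth's method for sampling a Poisson-distributed integer."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def accepted(n, p):
    """Number of accepted particles out of n, each kept with probability p."""
    return sum(random.random() < p for _ in range(n))

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((q - m) ** 2 for q in samples) / len(samples)

# (a) grand-canonical-like: N+ and N- are independent Poisson variables
q_indep = [accepted(poisson(MEAN_MULT), P_ACC) - accepted(poisson(MEAN_MULT), P_ACC)
           for _ in range(EVENTS)]

# (b) canonical-like: exactly n pairs, so the net charge is zero before acceptance
q_pairs = []
for _ in range(EVENTS):
    n = poisson(MEAN_MULT)
    q_pairs.append(accepted(n, P_ACC) - accepted(n, P_ACC))

var_indep, var_pairs = variance(q_indep), variance(q_pairs)
# Expected: var_indep ≈ 2*lam*p = 12, var_pairs ≈ 2*lam*p*(1-p) = 4.8
```

With full acceptance (p = 1) the pair-produced net charge would not fluctuate at all; the acceptance factor (1 − p) is what lets a finite detector window see part of the grand-canonical-like fluctuations.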
Report from NA49
(2004)
The most recent data of NA49 on hadron production in nuclear collisions at CERN SPS energies are presented. Anomalies in the energy dependence of pion and kaon production in central Pb+Pb collisions are observed. They suggest that the onset of deconfinement is located at about 30 AGeV. Large multiplicity and transverse momentum fluctuations are measured for collisions of intermediate mass systems at 158 AGeV. The need for a new experimental programme at the CERN SPS is underlined.
The transverse mass (mt) distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity, in the range 0 < mt − m < 1.0 (1.5) GeV/c², for minimum bias collisions at 158 A GeV and for central collisions at 40 and 80 A GeV beam energies. The rapidity density dn/dy, inverse slope parameter T and mean transverse mass <mt> derived from the mt distributions, as well as the coalescence parameter B2, are studied as a function of the incident energy and the collision centrality. The deuteron mt spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality, reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mt. Finally, B2 increases with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
Event-by-event fluctuations of particle ratios in central Pb + Pb collisions at 20 to 158 AGeV
(2004)
In the vicinity of the QCD phase transition, critical fluctuations have been predicted to lead to non-statistical fluctuations of particle ratios, depending on the nature of the phase transition. Recent results of the NA49 energy scan program show a sharp maximum of the ratio of K+ to π+ yields in central Pb+Pb collisions at beam energies of 20-30 AGeV. This observation has been interpreted as an indication of a phase transition at low SPS energies. We present first results on event-by-event fluctuations of the kaon-to-pion and proton-to-pion ratios at beam energies close to this maximum.
Results are presented on event-by-event electric charge fluctuations in central Pb+Pb collisions at 20, 30, 40, 80 and 158 AGeV. The observed fluctuations are close to those expected for a gas of pions correlated by global charge conservation only. These fluctuations are considerably larger than those calculated for an ideal gas of deconfined quarks and gluons. The present measurements do not necessarily exclude reduced fluctuations from a quark-gluon plasma because these might be masked by contributions from resonance decays.
System size and centrality dependence of the balance function in A + A collisions at √sNN = 17.2 GeV
(2004)
Electric charge correlations were studied for p+p, C+C, Si+Si and centrality-selected Pb+Pb collisions at √sNN = 17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudo-rapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
The system size dependence of multiplicity fluctuations of charged particles produced in nuclear collisions at 158 A GeV was studied in the CERN NA49 experiment. The results indicate a non-monotonic dependence of the scaled variance of the multiplicity distribution, with a maximum for semi-peripheral Pb+Pb interactions with a number of projectile participants of about 35. This effect is not observed in HIJING, a string-hadronic model of nuclear collisions.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained by assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to transform the hot and dense nuclear matter into a deconfined phase.