Drawing on the role of teachers for peer ecologies, we investigated whether students favored ethnically homogeneous over ethnically diverse relationships, depending on classroom diversity and perceived teacher care. We specifically studied students’ intra- and interethnic relationships in classrooms with different ethnic compositions, accounting for homogeneous subgroups forming on the basis of ethnicity and gender diversity (i.e., ethnic-demographic faultlines). Based on multilevel social network analyses of dyadic networks between 1299 early adolescents in 70 German fourth-grade classrooms, the results indicated strong ethnic homophily, particularly driven by German students who favored ethnically homogeneous dyads over mixed dyads. As anticipated, the results showed that there was more in-group bias when perceived teacher care was low rather than high. Moreover, stronger faultlines were associated with stronger in-group bias; however, this relation was moderated by teacher care: if students perceived high teacher care, they showed a higher preference for mixed-ethnic dyads, even in classrooms with strong faultlines. These findings highlight the central role of teachers as agents of positive diversity management and the need to consider contextual classroom factors other than ethnic diversity when investigating intergroup relations in schools.
Systemic lupus erythematosus (SLE) is a severe autoimmune disease of unknown etiology. The major histocompatibility complex (MHC) class I-related chain A (MICA) and B (MICB) are stress-inducible cell surface molecules. MICA and MICB label malfunctioning cells for their recognition by cytotoxic lymphocytes such as natural killer (NK) cells. Alterations in this recognition have been found in SLE. MICA/MICB can be shed from the cell surface, subsequently acting either as a soluble decoy receptor (sMICA/sMICB) or in CD4+ T-cell expansion. Conversely, NK cells are frequently defective in SLE and lower NK cell numbers have been reported in patients with active SLE. However, these cells are also thought to exert regulatory functions and to prevent autoimmunity. We therefore investigated whether, and how, plasma membrane and soluble MICA/B are modulated in SLE and whether they influence NK cell activity, in order to better understand how MICA/B may participate in disease development. We report significantly elevated concentrations of circulating sMICA/B in SLE patients compared with healthy individuals or a control patient group. In SLE patients, sMICA concentrations were significantly higher in patients positive for anti-SSB and anti-RNP autoantibodies. In order to study the mechanism and the potential source of sMICA, we analyzed circulating sMICA concentration in Behcet patients before and after interferon (IFN)-α therapy: no modulation was observed, suggesting that IFN-α is not intrinsically crucial for sMICA release in vivo. We also show that monocytes and neutrophils stimulated in vitro with cytokines or extracellular chromatin up-regulate plasma membrane MICA expression, without releasing sMICA. 
Importantly, in peripheral blood mononuclear cells from healthy individuals stimulated in vitro by cell-free chromatin, NK cells up-regulate CD69 and CD107 in a monocyte-dependent manner and at least partly via MICA-NKG2D interaction, whereas NK cells were exhausted in SLE patients. In conclusion, sMICA concentrations are elevated in SLE patients, whereas plasma membrane MICA is up-regulated in response to some lupus stimuli and triggers NK cell activation. Those results suggest the requirement for a tight control in vivo and highlight the complex role of the MICA/sMICA system in SLE.
The use of media in regional studies for students of German as a foreign language (Deutsch als Fremdsprache, DaF) is no longer a novelty. Media are applied in many different forms, such as videos, press releases, radio segments or online statements. This contribution focuses on the radio interview as a constitutive part of regional studies courses for DaF students of Germanistik in Romania. The starting point is the assumption that the use of media in class can be enhanced through dialogue sequences that further highlight the subject of the course. Through selected interviews that have been aired by state channels, students can become familiar with issues that affect the German minority in Romania. The authenticity that is sought results from the primary communicative situation between interviewer and interviewee, as well as from the use of Rumäniendeutsch, the standard language of the German minority living in Romania. In this way, cultural, social, historical and linguistic phenomena that are specific to the local German-speaking population and that (re)construct the cultural heritage of the Germans in Romania can be examined in greater detail.
Evidence of hydrothermal activity is reported for the Mesozoic pre- and syn-rift successions of the western Adriatic palaeomargin of the Alpine Tethys, preserved in the Western Southalpine Domain (NW Italy). The products of hydrothermal processes are represented by vein and breccia cements, as well as dolomitization and silicification of the host rocks. In the eastern part of the study area, interpreted as part of the necking zone of the continental margin, Middle Triassic dolostones and Lower Jurassic sediments are crossed by veins and hydrofracturing breccias cemented by saddle dolomite. The precipitation of dolomite cements occurred within the stratigraphic succession close to the sediment–water interface. Despite the shallow burial depth, fluid inclusion microthermometry and clumped isotopes show that hydrothermal fluids were relatively hot (80–150°C). In the western part of the study area, interpreted as part of the hyperextended distal zone, a polyphase history of host-rock fracturing is recorded, with at least two generations of veins cemented by calcite, dolomite and quartz. Vein opening and cementation occurred at shallow burial depth around the time of deposition of the syn-rift clastic succession. Fluid inclusion microthermometry on both quartz and dolomite cements indicates a fluid temperature of 90–130°C, again pointing to hydrothermal fluids. In both the Fenera-Sostegno and Montalto Dora areas, O, C and Sr isotope values, coupled with fluid inclusion and clumped isotope data, indicate that hydrothermal fluids derived from seawater interacted with crustal rocks during hydrothermal circulation. Stratigraphic and petrographic evidence, and U–Pb dating of dolomitized clasts within syn-rift sediments, document that hydrothermal fluids circulated through sediments from the latest Triassic to the Toarcian, corresponding to the entire syn-rift evolution of the western portion of the Adriatic palaeomargin.
The documented hydrothermal processes are temporally correlated with regional-scale thermal events that took place in the same time interval at deeper crustal levels.
The physical processes behind the production of light nuclei in heavy-ion collisions are unclear. The successful theoretical description of experimental yields by thermal models conflicts with the very small binding energies of the observed states, which makes them fragile in such a hot and dense environment. Other available ideas are delayed production via coalescence, a cooling of the system after the chemical freeze-out according to a Saha equation, or a ‘quench’ instead of a thermal freeze-out. A recently derived prescription of an (interacting) Hagedorn gas is applied to consolidate the above pictures. The tabulation of decay rates of Hagedorn states into light nuclei makes it possible to calculate yields that are usually inaccessible due to very poor Monte Carlo statistics. Decay yields of stable hadrons and light nuclei are calculated. While the scale-free decays of Hagedorn states alone are not compatible with the experimental data, a thermalized gas of hadrons and Hagedorn states is able to describe them. Cooling the system according to a Saha equation with conservation of nucleon and anti-nucleon numbers leads to (nearly) temperature-independent yields; thus, production of the light nuclei at temperatures much lower than the chemical freeze-out temperature is compatible with the experimental data and with the statistical hadronization model.
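The Saha-equation picture can be made concrete with the standard law-of-mass-action relation; the following is a generic textbook form for the deuteron (not necessarily the exact prescription used in this work), assuming non-relativistic chemical equilibrium in natural units:

```latex
% Saha equation (law of mass action) for deuterons in chemical
% equilibrium with protons and neutrons at temperature T:
\begin{equation}
  \frac{n_d}{n_p\, n_n}
  = \frac{g_d}{g_p\, g_n}
    \left(\frac{2\pi}{\mu T}\right)^{3/2} e^{B_d/T},
  \qquad
  \mu = \frac{m_p m_n}{m_p + m_n},
\end{equation}
```

with spin degeneracies $g_d = 3$, $g_p = g_n = 2$ and the deuteron binding energy $B_d \approx 2.2$ MeV. Cooling at fixed nucleon and anti-nucleon numbers then pins down $n_p$ and $n_n$ as $T$ drops, which is the sense in which the resulting light-nuclei yields become (nearly) temperature independent.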
Conditional yield skewness is an important summary statistic of the state of the economy. It exhibits pronounced variation over the business cycle and with the stance of monetary policy, and a tight relationship with the slope of the yield curve. Most importantly, variation in yield skewness has substantial forecasting power for future bond excess returns, high-frequency interest rate changes around FOMC announcements, and consensus survey forecast errors for the ten-year Treasury yield. The COVID pandemic did not disrupt these relations: historically high skewness correctly anticipated the run-up in long-term Treasury yields starting in late 2020. The connection between skewness, survey forecast errors, excess returns, and departures of yields from normality is consistent with a theoretical framework where one of the agents has biased beliefs.
The authors identify U.S. monetary and fiscal dominance regimes using machine learning techniques. The algorithms are trained and verified on simulated data from Markov-switching DSGE models before they classify regimes from 1968 to 2017 using actual U.S. data. All machine learning methods outperform a standard logistic regression on the simulated data; among them, the boosted ensemble trees classifier yields the best results. The authors find clear evidence of fiscal dominance before Volcker. Monetary dominance is detected between 1984 and 1988, before a fiscally led regime emerges around the stock market crash and lasts until 1994. Monetary dominance is then established until the beginning of the new century, while the more recent evidence following the financial crisis is mixed, with a tendency towards fiscal dominance.
This note argues that the European Central Bank should adjust its strategy in order to consider broader measures of inflation in its policy deliberations and communications. In particular, it points out that a broad measure of domestic goods and services price inflation such as the GDP deflator has increased along with the euro area recovery and the expansion of monetary policy since 2013, while HICP inflation has become more variable and, on average, has declined. Similarly, the cost of owner-occupied housing, which is excluded from the HICP, has risen during this period. Furthermore, it shows that optimal monetary policy at the effective lower bound on nominal interest rates aims to return inflation more slowly to the inflation target from below than in normal times because of uncertainty about the effects and potential side effects of quantitative easing.
The plaque reduction neutralization test (PRNT) is a preferred method for the detection of functional, SARS-CoV-2 specific neutralizing antibodies from serum samples. Alternatively, surrogate enzyme-linked immunosorbent assays (ELISAs) using ACE2 as the target structure for the detection of neutralization-competent antibodies have been developed. They are capable of high throughput, have a short turnaround time, and can be performed under standard laboratory safety conditions. However, there are very limited data on their clinical performance and how they compare to the PRNT. We evaluated three surrogate immunoassays (GenScript SARS-CoV-2 Surrogate Virus Neutralization Test Kit (GenScript Biotech, Piscataway Township, NJ, USA), the TECO® SARS-CoV-2 Neutralization Antibody Assay (TECOmedical AG, Sissach, Switzerland), and the Leinco COVID-19 ImmunoRank™ Neutralization MICRO-ELISA (Leinco Technologies, Fenton, MO, USA)) and one automated quantitative SARS-CoV-2 Spike protein-based IgG antibody assay (Abbott GmbH, Wiesbaden, Germany) by testing 78 clinical samples, including several follow-up samples of six BNT162b2 (BioNTech/Pfizer, Mainz, Germany/New York, NY, USA) vaccinated individuals. Using the PRNT as a reference method, the overall sensitivity of the examined assays ranged from 93.8 to 100% and specificity ranged from 73.9 to 91.3%. Weighted kappa demonstrated a substantial to almost perfect agreement. The findings of our study allow these assays to be considered when a PRNT is not available. However, the latter still should be the preferred choice. For optimal clinical performance, the cut-off value of the TECO assay should be individually adapted.
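As an illustration of how such assay-versus-reference comparisons are computed, the following sketch derives sensitivity, specificity and Cohen's kappa from binary positive/negative calls (for binary calls, linearly weighted and unweighted kappa coincide). All data here are made up for illustration and do not reproduce the study's results.

```python
# Hypothetical illustration: comparing a surrogate neutralization assay
# against PRNT as the reference method, using binary positive/negative calls.
def confusion(reference, assay):
    tp = sum(r and a for r, a in zip(reference, assay))          # both positive
    tn = sum(not r and not a for r, a in zip(reference, assay))  # both negative
    fp = sum(not r and a for r, a in zip(reference, assay))      # assay-only positive
    fn = sum(r and not a for r, a in zip(reference, assay))      # missed positive
    return tp, tn, fp, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def cohens_kappa(reference, assay):
    # Chance-corrected agreement; for two categories this equals weighted kappa.
    n = len(reference)
    tp, tn, fp, fn = confusion(reference, assay)
    po = (tp + tn) / n                               # observed agreement
    pe = ((tp + fn) / n) * ((tp + fp) / n) \
       + ((tn + fp) / n) * ((tn + fn) / n)           # expected chance agreement
    return (po - pe) / (1 - pe)

# Invented example calls (1 = neutralizing antibodies detected):
ref =   [1, 1, 1, 1, 0, 0, 0, 0]   # PRNT reference
assay = [1, 1, 1, 0, 0, 0, 0, 1]   # surrogate ELISA
tp, tn, fp, fn = confusion(ref, assay)
print(sensitivity(tp, fn))         # 0.75
print(specificity(tn, fp))         # 0.75
print(cohens_kappa(ref, assay))    # 0.5
```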
In ‘Justice and Natural Resources,’ Chris Armstrong offers a rich and sophisticated egalitarian theory of resource justice, according to which the benefits and burdens flowing from natural (and non-natural) resources are ideally distributed with a view to equalizing people’s access to wellbeing, unless there are compelling reasons that justify departures from that egalitarian default. Armstrong discusses two such reasons: special claims from ‘improvement’ and ‘attachment.’ In this paper, I critically assess the account he gives of these potential constraints on global equality. I argue that his recognition of them has implications that Armstrong does not anticipate, and which challenge some important theses in his book. First, special claims from improvement will justify larger departures from the egalitarian default than Armstrong believes. Second, a consistent application of Armstrong’s life-plan foundation for special claims from attachment implies that nation-states may come closer to justifying ‘permanent sovereignty’ over the resources within their territories than his analysis suggests.
Clean water is fundamental to human health and ecosystem integrity. However, water quality deteriorates due to novel anthropogenic pollutants present at microgram-per-liter concentrations in urban water cycles (termed micropollutants). Wastewater treatment plants (WWTPs) have been identified as major point sources of aquatic (micro-)pollutants. Chemical and ecotoxicological analyses have shown that conventional biological WWTPs do not fully remove micropollutants and the associated toxicities, often because of mobile, polar and/or recalcitrant compounds and transformation products (TPs). To minimize possible environmental risks, advanced wastewater treatment (AWWT) technologies could be a promising mitigation measure. Multiple processes are therefore being developed and evaluated, such as ozonation and ozonation followed by granular activated carbon (GAC) or biological filtration. Assessing the performance of these combined AWWTs was the focus of the TransRisk project. Within this project, this thesis accomplished four major goals.
Firstly, the preparation of (waste)water samples was optimised for in vitro bioassays. Acidification, filtration and solid-phase extraction (SPE) were tested for their impact on environmentally relevant in vitro endocrine activities, mutagenicity, genotoxicity and cytotoxicity. Significantly different outcomes of these assays were detected when comparing neutral and acidified samples. Sample filtration had a lesser impact, but in some cases retention of particle-bound compounds could have caused significant toxicity losses. Of three SPE sorbents, Telos C18/ENV at sample pH 2.5 extracted the highest toxicity, including some not detected in the aqueous samples. These results indicate that sample preparation needs to be optimised for specific sample matrices and bioassays to avoid false-positive or false-negative detections in effect-based analyses.
Secondly, the above-listed in vitro toxicities were monitored in a protected region for drinking water production in south-west Germany (2012-2015). Among the 30 sampling sites, surface water and groundwater were the least polluted. Nonetheless, a few groundwater samples induced high anti-estrogenic activity, which prompted further monitoring; the latter included a waterworks in which no toxicity was detected. Hospital wastewater also showed elevated in vitro toxicities, making hospitals relevant intervention points for source control. The biological WWTPs were effective in removing most of the detected toxicity, and the selected bioassays proved to be pertinent tools for water quality assessment and for prioritising pollution hotspots.
Thirdly, the in vivo bioassay ISO 10872, based on Caenorhabditis elegans (C. elegans), was adapted for this thesis. Using this model, a median effect concentration (EC50) for reproductive toxicity of the polycyclic aromatic hydrocarbon β-naphthoflavone (β-NF) of 114 µg/L was computed, which is slightly lower than reported in the scientific literature. β-NF induced cyp-35A3::GFP (a biomarker in transgenic animals) in a time- and concentration-dependent manner (≤ 21.3–24-fold above controls). β-NF-spiked wastewater samples supported earlier hypotheses on particle-bound pollutants. Reproductive toxicity (96 h) and cyp-35A3 induction (24 h) by biologically treated and/or ozonated wastewater extracts, and growth-promoting effects of GAC/biologically filtered ozonated wastewater extracts, were observed. This suggested the presence of residual bioactive/toxic chemicals not included in the targeted chemical analysis. It also highlighted the importance of integrating multiple (apical and molecular) endpoints in wastewater assessments.
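For illustration, an EC50 like the one reported above can be estimated from concentration-response data. The following minimal sketch interpolates the 50% effect level on a log10 concentration scale; real analyses typically fit a full log-logistic (Hill) model instead, and all values here are hypothetical.

```python
import math

def ec50(concentrations, responses):
    """Estimate the median effect concentration by log-linear interpolation
    between the two tested concentrations that bracket the 50% effect level.
    responses: fraction of the control response (1.0 = no effect)."""
    pairs = list(zip(concentrations, responses))
    for (c1, r1), (c2, r2) in zip(pairs, pairs[1:]):
        if r1 >= 0.5 >= r2:
            t = (r1 - 0.5) / (r1 - r2)   # fractional distance to the 50% level
            return 10 ** (math.log10(c1) + t * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% effect level not bracketed by the tested concentrations")

# Hypothetical concentration-response data, not the thesis measurements:
conc = [10, 30, 100, 300, 1000]          # test concentrations, µg/L
resp = [0.95, 0.80, 0.55, 0.30, 0.10]    # reproduction relative to controls
print(ec50(conc, resp))                  # roughly 125 µg/L for these data
```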
Fourthly, five in vitro assays and the adapted C. elegans bioassay were integrated into a wastewater quality evaluation (developed within TransRisk). Of the five AWWT options, ozonation (at 1 g O3,applied/g DOC, HRT ~18 min) combined with non-aerated GAC filtration was rated most effective for toxicity removal. All five AWWTs largely removed estrogenic and (anti-)androgenic activities, but not anti-estrogenic activity and mutagenicity, which even increased during ozonation. This has been observed in related studies and points towards toxic TPs. These results also emphasise the need to implement an effective post-treatment after ozonation. The results of a parallel in vivo study with Lumbriculus variegatus and Potamopyrgus antipodarum, conducted on site at the WWTP using flow-through systems, were in accordance with the C. elegans results. In this context, it is suggested to further implement C. elegans as a sensitive, feasible and ecologically relevant model.
In conclusion, this thesis shows how optimised sample preparation, long-term (in vitro) environmental monitoring, sensitive and ecologically relevant (in vivo) bioassays, as well as innovative evaluation concepts, are pivotal to improving the removal of micropollutants and their toxicities with AWWTs. Future research should further develop and evaluate measures at sewer systems, conventional biological, tertiary and other advanced treatment technologies, as well as sociopolitical strategies (e.g., source control or nature conservation) and restoration projects. The effect-based tools optimised in this thesis will support assessing their success.
In the past decades, the use and production of chemicals have been on the rise globally due to increasing industrialization and intensive agriculture, resulting in the occurrence of, and ecotoxicological risks from, chemicals of emerging concern (CECs) in aquatic compartments. Risks include changes in community structure, resulting in the dominance of a single species and ecosystem imbalance. When dominant disease-causing organisms are present in the environment, disease transmission increases. For example, host snails for schistosomiasis, a human trematode disease, are known to be more tolerant of pesticide exposure than their predators. This can result in an increased abundance of snails, which consequently increases disease transmission in the human population.
Kenya, a low-income country, faces many challenges: provision of clean water, diseases, inadequate sanitation facilities, and a growing population that drives intensive agriculture coupled with pesticide use. Although much research has been carried out on the environmental occurrence and risks of CECs (Chapter 1), most of these studies were done in developed countries, with limited information from Africa. Moreover, research in Africa has focused on urban areas, analysed a limited number of compounds, mostly in the water phase, and provided inadequate information on the effects of CECs on aquatic organisms. To reduce this knowledge gap, this dissertation focused on the identification and quantification of CECs present in water, sediment and snails from western Kenya, and on the contribution of pesticides to the transmission of schistosomiasis.
Chapter 2 gives a summary of the results and discussion of the dissertation. In Chapter 3, a comprehensive chemical analysis was carried out on 48 water samples to identify compounds, spatial patterns and the associated risks for fish, crustaceans and algae using the toxic unit (TU) approach. A total of 78 compounds were detected, with pesticides and biocides being the most frequently detected. Spatial pattern analysis revealed limited compound grouping based on land use. Acute risks for crustaceans and algae were driven by one to three individual compounds. The compounds responsible for toxicity were prioritized as candidates for monitoring and regulation in Kenya.
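The toxic unit (TU) approach divides a measured environmental concentration by an acute toxicity benchmark (e.g. an EC50) for a given organism group; site risk is then often summarized by the maximum or the sum of TUs. A minimal sketch follows; the compound names echo those discussed in this dissertation, but every number is purely illustrative.

```python
# Illustrative toxic unit (TU) calculation: TU = measured concentration
# divided by an acute toxicity benchmark for a given organism group.
def toxic_units(measured_ug_l, benchmark_ug_l):
    return {c: measured_ug_l[c] / benchmark_ug_l[c] for c in measured_ug_l}

# Hypothetical site concentrations and crustacean EC50 benchmarks (µg/L):
measured = {"diazinon": 0.5, "diuron": 1.2}
ec50_crustacean = {"diazinon": 1.0, "diuron": 5700.0}

tu = toxic_units(measured, ec50_crustacean)
max_tu = max(tu.values())   # risk driver: the single most toxic compound
sum_tu = sum(tu.values())   # additive mixture risk
print(max_tu, sum_tu)
```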
In Chapter 4, the analysis of Chapter 3 was extended to cover the CECs present in snails and sediment from the 48 sites. A total of 30 compounds were found in snails and 78 in sediments, including 68 compounds not previously detected in water. Higher contaminant concentrations were found at agricultural sites than in areas without anthropogenic activities. The highest acute toxicity (TU 0.99) was determined for crustaceans, based on compounds in sediment samples; the risk was driven by diazinon and pirimiphos-methyl. Acute and chronic risks to algae were driven by diuron, whereas fish were found to be at low to no acute risk.
In Chapter 5, the effect of pesticide contamination on schistosomiasis transmission was evaluated through complementary laboratory and field studies. In the field studies, the ecological mechanisms through which pesticides and physicochemical parameters affect host snails, predators and competitors were investigated; pesticide data were obtained from the results of Chapter 3. The overall distribution of grazers and predators was not affected by pesticide pollution. However, within the grazers, pesticide pollution increased the dominance of host snails. In contrast, the host-snail competitors were highly sensitive to pesticide exposure. For the laboratory studies, macroinvertebrates including Schistosoma host snails, competitors and predators were exposed to six concentration levels of imidacloprid and diazinon. Snails showed higher insecticide tolerance than competitors and predators. Finally, Chapter 6 summarizes the conclusions of this dissertation, placing it in a broader context. In this dissertation, a comprehensive chemical characterization and risk assessment of CECs was carried out in freshwater systems, together with an assessment of the effects of pesticides on schistosomiasis transmission in rural western Kenya. The results show that rural areas are contaminated, posing a risk to aquatic organisms and thereby contributing to schistosomiasis transmission. This demonstrates the need for regular monitoring and policy formulation to reduce pollutant emissions, which negatively affect both ecosystems and human health.
In this paper, we discuss Armstrong’s account of attachment-based claims to natural resources, the kind of rights that follow from attachment-based claims, and the limits we should impose on such claims. We hope to clarify how and why attachment matters in the discourse on resource rights by presenting three challenges to Armstrong’s theory. First, we question the normative basis for certain attachment claims, by trying to distinguish more clearly between different kinds of attachment and other kinds of claims. Second, we highlight the need to supplement Armstrong’s account with a theory of how to weigh different attachment claims so as to establish the normative standing that different kinds of attachment claims should have. Third, we propose that sustainability must be a necessary requirement for making attachment claims to natural resources legitimate. Based on these three challenges and the solutions we propose, we argue that attachment claims are on the one hand narrower than Armstrong suggests, while on the other hand they can justify more far-reaching rights to control than Armstrong initially considers, because of the particular weight that certain attachment claims have.
Johann Nepomuk Nestroy (1801-1862) was one of the most important playwrights in the history of Austrian theater. An acclaimed actor and author in his time, he is also a point of reference and an enormous influence for many canonical Austrian writers of the twentieth and early twenty-first centuries, who to this day define themselves as indebted to his dramaturgical poetics (for example, Karl Kraus or Elfriede Jelinek). Besides highlighting the playwright's role as an "ancestor" for part of Austrian literature, this article examines the only translation of a Nestroy play available in Brazil ("Cacique Vento-da-Tarde ou O festim do horror", published in 1990), with the aim of calling into question the recurrent view of Nestroy's work as "untranslatable".
This article gives an overview of German-Spanish equivalence of educational terminology in the context of certificate translation. First, the translation of this text type is considered from the perspective of various translation theories. Next, the addressees of the target texts and their needs are outlined from a functional perspective. Examples of technical terms in both languages follow, arranged in a three-part classification (full equivalence, partial equivalence, non-equivalence); here "false friends" stand out as a frequent source of errors. Translation aids and the most important strategies for translating such terms are also discussed. The reference systems are the German and the Mexican educational systems.
As concepts such as the foreign and foreignness gain importance by the day and xenological and phenomenological studies in the philological disciplines increase, a translation-studies approach has become necessary. From the perspective of translation studies, translators confront the foreign in every source text. The translator is obliged both to understand the source text and then to transfer the foreign phenomena encountered in translation into the target text. In this context it is useful to draw on both xenology and phenomenology, since the main aim of xenology is to expose the problem of the foreign, while phenomenology makes the foreign phenomenon visible. In his phenomenology of the foreign, Waldenfels grades the foreign, taking the experience and the understanding of the foreign as his basis. When Waldenfels's approach is adapted to the field of translation, classifying the foreign phenomena in the source text according to their degree makes translators' work easier. This study therefore relates Katharina Reiss's text-type model to Bernhard Waldenfels's work on degrees of foreignness, providing a new perspective on Reiss's approach. At the initial stage of the translation act, while determining the text type, translators can identify which kind of foreignness they may encounter and review their translation decisions. Since each such identification makes the foreign visible, this visibility can be expected to carry over into the translation process, for how visible the foreign will be in the target text varies with the text type and the translator's skopos.
Jacques Derrida's approach of deconstruction (1967) opposes the universality of linguistic constructs of meaning, assigning words a historical character and pointing to the relevance of subject, time and place, particularly in written communication. Terminology research, however, demands the most precise possible conceptual definition of terms so that specialized communication can proceed as unambiguously as possible. In translation studies, by contrast, particularly in theoretical research, there is both a demand for standardized definitions of terms and criticism of the variety of definitions a term may have. The present paper focuses on presenting a new perspective for approaching definitions of translation-studies terminology. With reference to paradigm shifts within translation studies, the historical development of the conceptual content of the terms equivalence, translation and translator is examined from the theoretical viewpoint of deconstruction. The paper aims to show that definitions of certain translation-studies terms should not be understood as concretely delimited and universal.
This paper discusses notions of appropriation through translation, based on an analysis of eight excerpts from the preparatory section of the first part of Friedrich Wilhelm Marpurg's "Die Kunst das Clavier zu spielen". This preparatory section consists of 23 paragraphs, eight of which reflect ideas presented in an earlier work, François Couperin's "L'Art de toucher le clavecin" (1716). Here we consider the reflection of those ideas through translation. The translated passages appear sometimes expanded, sometimes condensed, or are omitted altogether. Marpurg acknowledges this dynamic of translation in the prefaces of two editions prior to 1762: those of 1750 and 1751. Within the discussion of appropriation, we investigate what may have led Marpurg to abandon his reverence for Couperin in the prefaces of later editions, which allows this appropriation, which occurred also, but not only, through translation, to be related, in principle, to authorship. In this sense, drawing on scholarly works on translation and appropriation, we discuss the notions of translation, appropriation and authorship, establishing a dialogue with two areas of Translation Studies: comparative analysis and commented translation.
What are called 'natural languages' are artificial, often politically instituted and regulated, phenomena; a more accurate picture of speech practices around the globe is of a multidimensional continuum. This essay asks what the implications of this understanding of language are for translation, and focuses on the variety of Afrikaans known as Kaaps, which has traditionally been treated as a dialect rather than a language in its own right. An analysis of a poem in Kaaps by Nathan Trantraal reveals the challenges such a use of language constitutes for translation. A revised understanding of translation is proposed, relying less on the notion of transfer of meaning from one language to another and more on an active engagement with the experience of the reader.
Only further research will be able to show whether, and to what extent, the newly discovered fragments shed new light on the textual version and history of the manuscript from which they originate, or even on the general textual and transmission history of the German 'Lucidarius'. In any case, this new Heidelberg find fully confirms the conclusions drawn from the analysis of binding and provenance regarding the original host volume of the 'Lucidarius' fragments discovered by Mone. The question of where, or rather in which book, Franz Joseph Mone found the first discovered parts of the Göttingen 'Lucidarius' manuscript can now be considered settled.
German Expressionism emerged as a movement in the interwar period, and it was in that historical, political, and social context that the film "The Cabinet of Dr. Caligari" (1920) appeared. This article analyzes the film with particular attention to the interpersonal relationship established between the characters Caligari and Cesare. The ideas on aesthetics, film theory, and philosophy developed by Siegfried Kracauer are fundamental to this analysis. The aim of this paper is thus to show how Hegelian dialectics allows us to reflect on the way German cinema expressed social fears and tensions that would later materialize in the political and social reality of Germany at the time.
Introduction
(2021)
The aim of this work was to gain deeper insight into the role of PaCLPXP in the energy metabolism of P. anserina and to identify possible components that are important for the longevity of the PaClpP deletion mutant. The following new findings were obtained:
1. Substrate analysis by cycloheximide treatment followed by proteome analysis successfully revealed a number of potential, previously unknown substrates of PaCLPP. Interestingly, the identified proteins included many ribosomal subunits and components of various pathways of energy metabolism. Most striking among these substrates was the extreme enrichment of a reticulon-like protein, which points to a new aspect of the possible molecular role of PaCLPP in P. anserina.
2. By adding butyrate to the medium, autophagy was successfully reduced both in the P. anserina wild type and in the PaClpP deletion mutant. This reduction of autophagy shortens the lifespan of ΔPaClpP. The effect is specific to the PaClpP deletion mutant, whereas the impact of butyrate on the wild type is only marginal. This result corroborates earlier analyses of this deletion mutant, according to which the longevity of ΔPaClpP is autophagy-dependent (Knuppertz and Osiewacz, 2017).
3. Metabolome analysis of ΔPaClpP compared with the wild type shows that the absence of PaCLPP leads to changes in the levels of metabolites of glycolysis and the citric acid cycle. In addition, the levels of most amino acids and nucleotides are affected. This analysis demonstrates that the absence of this mitochondrial protease has far-reaching consequences for the entire cell. The significant decrease in ATP and the accumulation of AMP in young ΔPaClpP strains, together with the increased autophagy in this mutant, drew attention to AMPK. The altered AMP/ATP ratio is an indication of increased AMPK activity and could also explain the increased autophagy in ΔPaClpP.
4. The gene encoding the catalytic α-subunit of AMPK (PaSnf1) was successfully deleted in P. anserina. The absence of PaSNF1 leads to a reduced growth rate, impaired female fertility, and delayed spore maturation. It was shown that autophagy is not completely suppressed by the PaSnf1 deletion, but that PaSNF1 is required for stress-induced autophagy. Surprisingly, the absence of PaSNF1 leads to an extended lifespan compared with the wild type. Most of the effects of the PaSnf1 deletion could be complemented by introducing a FLAG::PaSNF1 construct.
5. Simultaneous deletion of PaSnf1 and PaClpP leads to an unexpected, extreme lifespan extension that even exceeds the lifespan extension of the PaClpP deletion mutant. Interestingly, this phenotype is not accompanied by increased autophagy. Furthermore, the absence of PaSNF1 leads to an altered mitochondrial morphology during ageing, both in ΔPaSnf1 and in ΔPaSnf1/ΔPaClpP: strains lacking PaSNF1 still exhibit predominantly filamentous mitochondria even at older age (20 d). In addition, the three analyzed deletion strains (ΔPaSnf1, ΔPaClpP, and ΔPaSnf1/ΔPaClpP) show massive impairments when they depend on mitochondrial function.
6. Strikingly, in ΔPaSnf1, ΔPaClpP, and ΔPaSnf1/ΔPaClpP, strains of mating type "mat−" are longer-lived than strains of mating type "mat+". This effect is most pronounced in the ΔPaSnf1/ΔPaClpP double mutant. Further investigation revealed that the mating types play a role whenever the strains are exposed to mitochondrial stress or depend on mitochondrial function. Responsible for these differences are two rmp1 alleles, which are linked to the different mating-type loci and are inherited with the respective mating-type locus (rmp1-1 with "mat−"; rmp1-2 with "mat+").
Operons were first described in 1961. It is now known that the prokaryotic domains Bacteria and Archaea can express genes in monocistronic as well as in bi- or polycistronic transcripts. Frequently, genes even overlap in their sequences. These overlapping gene pairs do not correlate with the compactness of the genome, which leads to the assumption that a form of regulation exists that does not require additional proteins or genes. This could be translational coupling, meaning that translation of the downstream gene depends on translation of an upstream gene. Such a dependency can arise, for example, through long-range secondary structures in which the ribosome binding site (RBS) of the downstream gene is blocked. De novo initiation at the downstream gene can then only take place when the first gene is translated and the secondary structure at the RBS is thereby melted. This mechanism is well studied for gene pairs in E. coli. Another example of translational coupling is termination-reinitiation, in which a ribosome translates the first gene up to the stop codon, terminates there, and reinitiates directly at the downstream start codon. The termination-reinitiation mechanism has so far only been described for eukaryotic viruses. In contrast to coupling via secondary structures, termination-reinitiation does not involve de novo initiation at the downstream gene; instead, the ribosome reinitiates. This work analyzes this type of translational coupling at genes of polycistronic mRNAs in one model organism each as representative of the Archaea (Haloferax volcanii) and the Bacteria (Escherichia coli). For this purpose, reporter gene vectors were constructed in which the overlapping gene pairs were fused to reporter genes.
For these reporter genes, the transcript amount can be quantified and enzyme assays can be performed on the expressed proteins. From both values, translation efficiencies can be calculated by determining the enzyme activity per transcript amount. A premature stop codon in these constructs makes it possible to distinguish whether it is essential for translation of the second gene that the ribosome reaches the overlap. In this way it was shown for nine gene pairs in H. volcanii and four gene pairs in E. coli that a form of coupling takes place, namely termination-reinitiation. Furthermore, the effect of intragenic Shine-Dalgarno (SD) sequences on translational coupling was analyzed. By mutating such motifs and comparing the translation efficiencies of constructs with and without an SD sequence, it is shown for all analyzed gene pairs of both model organisms that the SD sequence influences this type of coupling, although the strength of this influence varies greatly between gene pairs. In addition, the maximum distance between two bicistronic genes at which translational coupling via termination-reinitiation can still take place was investigated. For this purpose, premature stop codons were introduced into the upstream gene by site-directed mutagenesis, thereby increasing the intergenic distance between the genes in the respective constructs. Comparison of all constructs of a gene pair shows in both model organisms that termination-reinitiation depends on the intergenic distance and that the translation efficiency of the downstream reporter already decreases at a distance of 15 nucleotides.
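The translation efficiency used in this work is a simple ratio of two measured quantities. As a minimal illustrative sketch only (the function name and the numerical values are invented for illustration and do not come from the thesis), it could be computed as:

```python
def translation_efficiency(enzyme_activity, transcript_amount):
    """Translation efficiency as enzyme activity per transcript amount.

    Units are arbitrary; only ratios between constructs are compared.
    """
    if transcript_amount <= 0:
        raise ValueError("transcript amount must be positive")
    return enzyme_activity / transcript_amount

# Hypothetical readings for a construct with and without an intact SD sequence:
eff_with_sd = translation_efficiency(enzyme_activity=120.0, transcript_amount=40.0)
eff_without_sd = translation_efficiency(enzyme_activity=30.0, transcript_amount=40.0)
print(eff_with_sd, eff_without_sd)  # 3.0 0.75
```

Because the transcript amount divides out differences in expression level, the ratio isolates the translational step, which is what allows constructs with different SD mutations or intergenic distances to be compared.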
A further aim of this work was to analyze the exact mechanism of termination-reinitiation. After termination of translation, a ribosome has two options at the mRNA: it can either remain as a 70S ribosome and search for another start codon on the mRNA, or it can dissociate into its two subunits, with the 50S subunit leaving the mRNA while the 30S subunit can remain attached to the mRNA via interactions. To investigate this mechanism at the molecular level, an experimental workflow is presented that makes it possible to analyze the termination-reinitiation event in vitro and to distinguish whether 30S or 70S ribosomes reinitiate translation of the downstream gene. The idea is based on a ribosome display, in which translation complexes cannot disassemble at the end of translation because the mRNA used contains no stop codon. The detailed workflow, the required components, and proof-of-principle experiments are presented in this work, and possible optimizations are discussed.
In Germany, traffic planning still follows the tradition of modernist urban planning theory from the beginning of the 1930s and car-oriented city planning during the post-war period in West Germany. From a methodological perspective, the prevailing narrative is that traffic can be abstracted and modelled under laboratory conditions (in vitro) as a spatial movement process of individual neutral particles. The use of these laboratory experiments in traffic planning cannot be understood as a neutral application of experimental results, assumed to be true, in a variety of spatial contexts. Rather, it is an active practice of staging traffic according to a particular social interactionist paradigm.
According to this view, traffic is staged through interventions by planning authorities as well as through the practices of people on the streets. In order to describe these staging conduits, traffic is ontologically conceived, following Erving Goffman and Harold Garfinkel, as a social order that is continuously reproduced situationally through interactions. To investigate the staging conduits empirically, an ethnographically inspired field study was conducted at Willy-Brandt-Platz in Frankfurt am Main in May and June 2020. Through situational mapping and observation of social interactions (in situ), knowledge about the staging of social orders was generated.
These empirical findings are further embedded in debates that discuss traffic not only as a staging but also as an enactment of certain realities. Understanding planning practice as a political enactment, through which realities are not only described but also made, makes it possible for us to think and design alternative realities.
kurz und kn@pp news : Nr. 52
(2021)
kurz und kn@pp news : Nr. 51
(2021)
In the model of randomly perturbed graphs we consider the union of a deterministic graph G with minimum degree αn and the binomial random graph G(n, p). This model was introduced by Bohman, Frieze, and Martin, and for Hamilton cycles their result bridges the gap between Dirac's theorem and the results by Pósa and Korshunov on the threshold in G(n, p). In this note we extend this result in G ∪ G(n, p) to sparser graphs with α = o(1). More precisely, for any ε > 0 and α: N → (0, 1) we show that a.a.s. G ∪ G(n, β/n) is Hamiltonian, where β = −(6 + ε) log(α). If α > 0 is a fixed constant this gives the aforementioned result by Bohman, Frieze, and Martin, and if α = O(1/n) the random part G(n, p) alone is sufficient for a Hamilton cycle. We also discuss embeddings of bounded degree trees and other spanning structures in this model, which lead to interesting questions on almost spanning embeddings into G(n, p).
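The main statement of this abstract can be set out as a display (this restates only what is given above; G is the deterministic graph with minimum degree α(n)·n and G(n, p) the binomial random graph):

```latex
% Hamiltonicity of randomly perturbed graphs, as stated in the abstract:
\[
  \text{For every } \varepsilon > 0 \text{ and } \alpha\colon \mathbb{N} \to (0,1),
  \quad
  G \cup G\!\left(n, \tfrac{\beta}{n}\right) \text{ is a.a.s.\ Hamiltonian,}
  \qquad \beta = -(6+\varepsilon)\log\alpha .
\]
```

Note that β grows only as the deterministic graph becomes sparser: for constant α the edge probability β/n is of the classical order log-over-n up to a constant, while for α = O(1/n) the bound degenerates and the random graph alone supplies the Hamilton cycle, matching the two limiting cases named in the abstract.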
Introduction: Obesity is classified as a global epidemic and judged to be the greatest public health threat in Western countries. The tremendously increasing prevalence rates in children lead to morbidity and mortality in adults. In many countries, prevalence has doubled since the 1980s; other countries show a continuous increase or stagnate at a very high level. Given these regional differences, this study aims to draw a global world map of childhood obesity research, including regional epidemiological characteristics, to comprehensively assess research influences and needs. Methods: In addition to established bibliometric parameters, this study uses epidemiological data to interpret metadata on childhood obesity research from the Web of Science in combination with state-of-the-art visualization methods, such as density equalizing map projections. Results: It was not until the 1990s that belated recognition of the dangerous effects of childhood obesity led to an increase in the number of publications worldwide. In addition, our findings show that countries' study output does not correlate with epidemiologic rates of childhood obesity. Instead, the research efforts on childhood obesity appear to be largely driven by government funding structures. Discussion/Conclusion: The geographical differences in the epidemiological background of childhood obesity complicate the implementation of transnational research projects and cross-border prevention programs. Effective realization requires a sound scientific basis, which is facilitated by globally valid approaches. Hence, there is a need for information exchange between researchers, policy makers, and private initiatives worldwide.
This thesis deals with vocabulary acquisition in the field of English as a foreign language. The relevance and necessity of vocabulary learning for successful foreign language acquisition is undisputed. Languages are built on words, i.e. carriers of meaning, which make communication in a language possible in the first place. With a lacking or insufficient vocabulary, communication can only take place to a limited extent and misunderstandings can arise. The introduction therefore emphasizes how important it is to foster students' vocabulary and to select the right teaching methods according to level and age group. The thesis provides an explicit example of implementation, carried out in a university context in a preparatory class at a private university (Nişantaşı University) in Istanbul. The students of the preparatory class are at level B1 and are acquiring English as a foreign language. One part of the thesis shows how vocabulary on a specific topic, here the description of personality and appearance, can be taught, and discusses materials and classroom procedures. A pre-test and a post-test on the topic are conducted to measure the success rate before and after instruction. A second part of the thesis compares the compiled English vocabulary with the corresponding vocabulary in German in order to determine which commonalities and discrepancies exist between the two languages. The thesis ends with a conclusion that once again emphasizes the relevance of vocabulary acquisition in foreign language teaching.
This article presents some of Brecht's ideas on cinema and photography, developed in literary works, essays, and notes written over the course of his career. The aim is to give the Portuguese-language reader access to an important facet of Brecht's thought that is still little known in Brazil: his reflections on the technical image.
History films personalize, dramatize and emotionalize historical events and characters. They revive the past by exemplifying it in the present, engage ongoing discourses of history and as a result have proven to be the most influential medium in conveying history to large audiences. History films are regarded as an attractive, motivating and efficient (supplementary) teaching and learning medium in history as well as in foreign language classes. As part of the course "Historical Survey of Germany" (BA German-programme at University Putra Malaysia) history film projects on important periods and events in German history were conducted. The article introduces a film project on World War II and describes the pedagogical approach which aims to develop three core competencies of historical understanding – Content Knowledge, Historical Empathy/Perspective Recognition and Narrative Analysis. It discusses selected general findings provided as qualitative data in group and individual assignments. While the responses to questions related to Content Knowledge and Narrative Analysis show that students achieved higher competency levels, the participants showed shortcomings in the rational examination of historical characters, their perspectives and motivations for their actions. Time, practice and guidance can be identified as key factors in developing historical literacy competencies further.
We review the effective field theory associated with the superfluid phonons that we use for the study of transport properties in the core of superfluid neutrons stars in their low temperature regime. We then discuss the shear and bulk viscosities together with the thermal conductivity coming from the collisions of superfluid phonons in neutron stars. With regard to shear, bulk, and thermal transport coefficients, the phonon collisional processes are obtained in terms of the equation of state and the superfluid gap. We compare the shear coefficient due to the interaction among superfluid phonons with other dominant processes in neutron stars, such as electron collisions. We also analyze the possible consequences for the r-mode instability in neutron stars. As for the bulk viscosities, we determine that phonon collisions contribute decisively to the bulk viscosities inside neutron stars. For the thermal conductivity resulting from phonon collisions, we find that it is temperature independent well below the transition temperature. We also obtain that the thermal conductivity due to superfluid phonons dominates over the one resulting from electron-muon interactions once phonons are in the hydrodynamic regime. As the phonons couple to the Z electroweak gauge boson, we estimate the associated neutrino emissivity. We also briefly comment on how the superfluid phonon interactions are modified in the presence of a gravitational field or in a moving background.
Arachnides 103.2021
(2021)
The manuscript fragment Bergen, Universitetsbiblioteket, MS 1550.5 (hereafter: MS 1550.5) illustrates the fate of many medieval codices in early modern Norway. Most codices used for liturgical purposes were reused as palimpsests for various records, as reinforcement in book spines, as lining for wallets, and the like. The parchment piece presented here, measuring 305 × 175 mm, has already received considerable scholarly attention, although the focus has been on the transmission of the Latin text. The present article aims to contextualize the fragment in connection with its reuse as an Old Norwegian charter and to shed light on its reception history by means of a complete description of the fragment.
Objectives: Inadequate oral hygiene still leads to many serious diseases all over the world. Therefore, this study aimed to analyze scientific research in the field of oral health in order to comprehend its relevant subject areas, research connections, and developments. Methods: This study aimed to assess the global publication output on oral hygiene to create a world map that provides background information on key players, trends, and incentives of research. For this purpose, established bibliometric parameters were combined with state-of-the-art visualization techniques. Results: This study locates the current key players of research on oral hygiene in high-income economies, with only marginal participation from lower-income economies. This still corresponds to the current burden of disease, which, however, is increasingly shifting to the disadvantage of low-income countries. There is a clear North-South and West-East gradient, with the USA and the Western European nations publishing the most on oral hygiene. As an emerging country, Brazil also plays a role in the research. Conclusions: The scientific power players were concentrated in high-income countries. However, the changing epidemiological situation requires a different scientific approach to oral hygiene. This requires an expansion of the international network to meet the demands of future global oral health burdens, which are mainly related to oral hygiene.
This article provides a comparative overview of phonological and phonetic differences of Mukrī Kurdish varieties and their geographical distribution. Based on the examined data, four distinct varieties can be distinguished. In each variety area, different phonological patterns are analyzed according to age, gender, and social groups in order to establish cross-regional and cross-generational developments in relation to specific phonological distributions and shifts. The variety regions which are examined in the present article include West Mukrī (representing an archaic form of Mukrī), Central Mukrī (representing a linguistically peripheral dialect), East Mukrī (representing mixed archaic and peripheral dialect features), and South Mukrī (sharing features of both Mukrī and Ardałānī). The study concludes that variation in the Mukrīyān region depends on phonological developments, which in turn are due to geographical and sociological factors. Moreover, contact-induced change and internal language development are also established as triggering factors distinguishing regional variants.
During RUN3 (2021-2023) of the Large Hadron Collider, the Time Projection Chamber (TPC) of ALICE will be operated with quadruple stacks of Gas Electron Multipliers (GEMs). This technology makes it possible to overcome the rate limitation due to the gated operation of the Multi-Wire Proportional Chambers (MWPCs) used in RUN1 (2009-2013) and RUN2 (2015-2018).
As part of the Upgrade project, long-term irradiation tests, so-called "ageing tests", have been carried out. A test setup with a detector using a quadruple stack of 10×10 cm² GEMs was built and operated in Ar-CO2 and Ne-CO2-N2 gas mixtures. Detector performance parameters such as gas gain and energy resolution were monitored continuously. In addition, outgassing tests of materials used for the assembly process of the upgraded TPC were performed. To reach the expected dose of the GEM-based TPC, the detector was operated at much higher gains than the TPC. It was found that the GEMs retained their performance within the projected lifetime of the TPC. Most of the tested materials showed no negative impact on the detector; for the tested epoxy adhesive, no definitive conclusion could be drawn.
At much higher doses than expected for the upgraded TPC, a new phenomenon was observed, which changed the hole geometry of the GEMs and led to a degradation of the energy resolution. Even though its occurrence is not expected during the lifetime of the GEM-based TPC, simulations were carried out to study this effect more systematically. The simulations confirmed that a change of the hole geometry of the GEMs leads to an increase of the local gain variation, which results in a decrease of the energy resolution.
Furthermore, the effect of methane as a quench gas on GEMs was studied, even though this gas is not foreseen to be used in the TPC. From ageing tests with single-wire proportional counters it is well known that hydrocarbons are produced in the plasma of the avalanches, which cover the electrodes and lead to a degradation of the detector performance. Even though GEMs have a quite different geometry, the ageing tests showed that this technology is also prone to methane-induced ageing: a loss of gas gain as well as a degradation of the energy resolution due to deposits on the electrodes was observed. A qualitative and quantitative comparison between ageing in GEMs and in proportional counters was performed.
The main focus of research in the field of high-energy heavy-ion physics is the study of the quark-gluon plasma (QGP). The topic of the present work is the measurement of electron-positron pairs (dielectrons), which grant direct access to some of the key properties of this state of matter, since after their formation they leave the hot and dense medium without significant interaction. In particular, the measurement of the initial QGP temperature is considered a "holy grail" of heavy-ion physics. Therefore, in addition to the analysis of existing data, a feasibility study has been conducted to determine to what extent this goal would be achievable by upgrading the ALICE experiment at CERN.
Dielectrons are produced during all stages of a heavy-ion collision, with their invariant mass reflecting the amount of energy available at the time of their formation. Dielectrons of highest mass are thus produced in the initial scatterings of the colliding nuclei by quark-antiquark annihilation. Correlated electron-positron pairs can also emerge from the decay chains of early-produced pairs of heavy-flavour (HF) particles. During the QGP stage and at the beginning of the hadronic phase, the system emits thermal radiation in the form of photons and dielectrons, which carry information about the medium temperature to the observer. In the final stage of the collision, decays of light-flavour (LF) hadrons produce additional contributions to the dielectron spectrum.
The present work is based on early data from the ALICE experiment recorded from lead-lead collisions at a center-of-mass energy of 2.76 TeV. Due to the limited amount of data, a focus is placed on achieving high efficiencies throughout the analysis. To this end, a special electron identification strategy is developed and a custom track selection applied, together resulting in a tenfold increase in pair efficiency. The dielectron spectrum is evaluated on a statistical basis, using a pair prefilter, which is optimized based on two signal quality criteria, to reduce the fraction of electrons and positrons from unwanted sources at minimum signal loss. In addition, an artifact of the track reconstruction is exploited to suppress pairs from photon conversions and to correct the dielectron yield for a contribution from different-conversion pairs. The main signal uncertainty is extracted from the deviation between results of 20 analysis settings and amounts to 20% in most of the studied kinematic range.
For comparison with the analysis results, a hadronic cocktail consisting of the LF and HF contributions is simulated, which describes the measured dielectron production reasonably well, with a hint of an enhancement at low invariant mass. Two approaches to model the in-medium modification of the heavy-flavour contribution are followed, resulting in up to 50% suppression, which creates some additional room for a thermal contribution at intermediate mass.
For a complete comparison between experimental data and theoretical expectation, two model calculations are consulted. The Thermal Fireball Model provides predictions for thermal dielectron radiation from the QGP and hadron gas. The data tends to be better described with these additional thermal contributions. For a comparison with a prediction by the UrQMD model, the HF component of the cocktail is subtracted from the data. This results in better agreement if the HF suppression by in-medium effects is taken into account.
The feasibility study in this work has served as a physical motivation for the ALICE upgrade for LHC Run 3. The precision with which the early temperature of the QGP can be determined via dielectrons is chosen as key observable. A multitude of individual contributions are merged into a fully modeled dielectron analysis. The resulting signal-to-background ratio represents some of the expected systematic uncertainties, while from the significance combined with the planned number of lead-lead collisions a realistic "measurement" with statistical fluctuations around the expected dielectron signal is generated using a Poisson sampling technique. Since the HF yield exceeds the QGP thermal radiation by about an order of magnitude, an additional analysis step exploiting the enhanced track reconstruction is introduced to reduce its contribution by up to a factor of five. The resulting reduction in pair efficiency is overcompensated by an up to hundred times higher collision rate. The entire cocktail is then subtracted from the sampled data to isolate the thermal excess yield. The final analysis of this spectrum shows that the inverse slope of the model prediction, which depends directly on the QGP temperature, can be reproduced within statistical and systematic uncertainties of about 10%.
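The Poisson sampling step described above, generating a realistic "measurement" with statistical fluctuations around an expected signal, can be illustrated with a minimal, self-contained sketch. The per-bin expected counts below are invented for illustration and do not come from the study:

```python
import math
import random

def poisson_sample(mean, rng):
    """Draw one Poisson-distributed count via Knuth's algorithm.

    Adequate for the moderate per-bin counts of a binned spectrum;
    for very large means a normal approximation would be preferable.
    """
    limit = math.exp(-mean)
    k, product = 0, rng.random()
    while product > limit:
        k += 1
        product *= rng.random()
    return k

# Invented expected dielectron counts per invariant-mass bin:
expected_spectrum = [400.0, 250.0, 120.0, 60.0, 25.0]

rng = random.Random(42)  # fixed seed for reproducibility
measured = [poisson_sample(mu, rng) for mu in expected_spectrum]
print(measured)  # integer counts fluctuating around the expected values
```

Sampling each bin independently from a Poisson distribution reproduces exactly the counting statistics a real detector would deliver, so the downstream fit of the inverse slope can be tested on data with realistic fluctuations.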
The promising results of this study have contributed on the one hand to the realization of the ALICE upgrade and to a design decision for the new Inner Tracking System, and at the same time represent exciting predictions for upcoming measurements.
The field of high-energy heavy-ion research is devoted to the study of the quark-gluon plasma (QGP). A QGP is an extremely hot and dense state of matter that filled the universe for a few microseconds shortly after the Big Bang. Under these extreme conditions the fundamental building blocks of matter, quarks and gluons, are quasi-free, i.e. not confined in hadrons as they are under normal conditions. Hadrons are particles composed of quarks and gluons. The best-known hadrons are protons and neutrons, the constituents of atomic nuclei, which, together with electrons, make up all known matter.
To create a QGP in the laboratory, ultrarelativistic heavy ions, such as Pb-208 nuclei, are brought to collision. This is done at CERN, the largest nuclear research center in the world. The particle accelerator that accelerates and collides protons and Pb nuclei is the Large Hadron Collider (LHC), which, with a circumference of 27 km, is the largest in the world. A single Pb-Pb collision at the LHC produces several thousand particles and antiparticles. The dedicated experiment for studying heavy-ion collisions at the LHC is ALICE, which is equipped with several particle detectors that allow thousands of particles to be measured and identified simultaneously.
Among the produced particles are also light atomic nuclei, although these are created only very rarely. The number of particles produced per species depends on their mass: in Pb-Pb collisions at the LHC, the yield of produced (anti)nuclei drops exponentially, by a factor of 1/330 for each added nucleon. The measured yield per species provides information about the production mechanism at the transition from the QGP to the hadron gas. Light (anti)nuclei are of particular interest here, since they are comparatively large and their binding energies are up to two orders of magnitude smaller than the temperatures prevailing when the hadrons are created. To this day it is not understood how light (anti)nuclei can be produced and survive under these conditions.
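The exponential suppression with mass number quoted above can be illustrated with a short calculation. The choice of the deuteron (A = 2) as reference nucleus and the unit reference yield are illustrative assumptions, not measured values:

```python
# Illustrative calculation of the mass-number penalty factor:
# each additional nucleon suppresses the (anti)nucleus yield by ~1/330.
PENALTY = 1.0 / 330.0

def relative_yield(mass_number, reference_mass_number=2, reference_yield=1.0):
    """Yield of a nucleus with `mass_number` nucleons relative to a
    reference nucleus (here: a hypothetical deuteron yield of 1.0)."""
    return reference_yield * PENALTY ** (mass_number - reference_mass_number)

# Triton (A=3) and alpha (A=4) relative to the deuteron (A=2):
print(relative_yield(3))  # ~3.0e-3, one extra nucleon
print(relative_yield(4))  # ~9.2e-6, two extra nucleons
```

This steep suppression is why alpha and antialpha production requires very large data sets, as described below.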
For this work, approximately 270 million Pb-Pb collisions at a center-of-mass energy of 5.02 TeV, recorded by ALICE in November 2018, were analyzed. The production of (anti)triton and (anti)alpha was studied. Because of their large mass, both nuclei are produced very rarely, by far not in every collision; the antialpha is the heaviest antinucleus ever measured. Given this rarity, the size of the available data set is decisive. It was possible to extract the first antialpha transverse-momentum spectrum ever measured. Transverse-momentum spectra were also determined for (anti)triton and alpha.
The results were compared with theoretical models and other ALICE measurements.
Finally, an outlook discusses the recently completed upgrade of the ALICE Time Projection Chamber (TPC). In the next data-taking period, starting soon, the LHC will increase its collision rate considerably, making it possible to record more than 100 times as much data as before. The (anti)triton and (anti)alpha analyses described in this work will benefit substantially from this. To cope with the considerably higher collision rates, several detectors, among them the TPC, had to be upgraded extensively. In the first two data-taking periods the TPC was operated with multiwire proportional chambers, which, however, are far too slow for the planned collision rates. They were therefore replaced in 2019, during a long shutdown of the LHC, by readout chambers based on quadruple-GEM (Gas Electron Multiplier) foils, which allow a continuous readout of the TPC. Since this is the first large-scale GEM TPC ever built, an extensive research and development (R&D) program was necessary to characterize and test the GEM readout chambers. Within this R&D program, systematic measurements were performed at the beginning of this doctoral project on a small test TPC with quadruple-GEM readout, built specifically for this purpose. The backflow into the drift volume of the TPC of the ions created during gas amplification, as well as the energy resolution, were measured for different GEM foil types and arrangements. The goal was to achieve the smallest possible ion backflow at the best possible energy resolution. A compromise had to be found, since the two quantities behave in opposite ways. It was nevertheless possible to identify, for several GEM configurations, voltage settings at which both quantities met the desired requirements.
The thermal fit to preliminary HADES data of Au+Au collisions at √sNN = 2.4 GeV shows two degenerate solutions at T ≈ 50 MeV and T ≈ 70 MeV. The analysis of the same particle yields in a transport simulation with the UrQMD model shows the same feature, i.e. two distinct temperatures for the chemical freeze-out. While both solutions yield the same number of hadrons after resonance decays, the feeddown contribution differs strongly between the two cases. This highlights that two systems with different chemical composition can yield the same multiplicities after resonance decays. The nature of these two minima is further investigated by studying the time-dependent particle yields and the extracted thermodynamic properties of the UrQMD model. It is confirmed that the evolution of the high-temperature solution resembles the cooling and expansion of a hot and dense fireball. The low-temperature solution displays an unphysical evolution: heating and compression of matter with a decrease of entropy. These results imply that the thermal model analysis of systems produced in low-energy nuclear collisions is ambiguous, but can be interpreted by also taking the time evolution and resonance contributions into account.
The metaphor of the DIADEM informs the way in which Proverbs depicts the character of a woman of strength and her place in society. The metaphor enables Proverbs to conceptualise a prudent, virtuous and reasonable character in relation to the divine and the human, and thus to present her as the main support of a successful life.
The ambiguities of Futurist word art aim, at their core, at the aporias of the avant-garde program itself, which, by abolishing the separation of art and life, once more obscures its own points of reference. The following reflections center on the rhetorical preconditions that make such a de-differentiating communicative postulate of Futurism possible in the first place. The revolutionary language concept of the 'parole in libertà' ('liberated words') is preceded by the idea of a shortened and intensified language, which is to be examined for its techniques of brevity. The 'Roman form of force', for which Sallust in particular can be considered style-defining, opens up a subtle point of reference for a paradigm of virile performance whose aesthetics of effect feeds on the short, concise, but also obscure form. Starting from the ancient figurations of 'brevitas' and their intensified reception in the late nineteenth century, this contribution deals with the Futurist utopias of dynamization. It argues that the early manifestos of Futurism, against the background of the accelerated experience of time, are conceived as dynamic forms of force. A detailed reading is meant to uncover a specific production aesthetics of brevity. It manifests itself in the style of analogy, which aims at a fusion of disparate objects and which, by means of its vitalized compressions, is able, at least according to the frequently stated intention of the manifestos, to penetrate to the 'essence' of technical matter. The rapid circulation of linguistic expressions, in turn, is conceptually oriented toward a telegraphic coding of the shortened forms of force. Finally, the possibility of dark brevity ('obscura brevitas'), which leaves the semantic nuances of the text in a hermeneutic 'twilight', is to be considered with regard to the circle of addressees of Marinetti's art of living.
Through the decontextualization of the analogies, the vitalized language itself constantly runs the risk of being unintelligible. It will be shown that the Futurist poet remains referred back to his Symbolist origins, from which unintelligibility to the public is taken precisely as an indication of the success of a hermetic conception of literature. In the sense of this asymmetrical communicative situation, the performative impulse to action of the artist-'sovereign' must finally also be considered, which places the short forms at the center of a rhetoric of deeds whose self-referential encipherment is intelligible only to the Futurist genius.
Dante's world poem, which depicts the otherworldly journey of the human protagonist Dante through the three realms of Inferno, Purgatory and Paradise, evidently possesses an enormous visionary fascination and radiance that tends to stimulate works of visual art as well as appropriations in a wide range of visual media. It is no coincidence that numerous famous European painters, draftsmen and sculptors have taken part in the interminable task of illustrating the stations of that otherworldly journey of Dante's projected persona, or of interpreting it in the medium of the fine arts. Against this background it is hardly surprising that the artists and authors of comics, manga and graphic novels of the late twentieth and twenty-first centuries have also discovered Dante's works, and especially the "Commedia", as a suitable subject and have adapted them in sometimes complex text-image arrangements. With "Dante Shinkyoku" (1994–1995), the Japanese manga artist Gô Nagai inscribes himself in the context of a global and transmedial Dante reception, drawing on manifold references to the European history of Dante illustration.