We deal with the reconstruction of inclusions in elastic bodies based on monotonicity methods and derive conditions under which a resolution for a given partition can be achieved. These conditions take into account both the background error and the measurement noise. As a main result, we show that the resolution guarantees depend heavily on the Lamé parameter μ and only marginally on λ.
Effective spectral functions of the ρ meson are reconstructed by considering the lifetimes inside different media using the hadronic transport SMASH (Simulating Many Accelerated Strongly-interacting Hadrons). Due to inelastic scatterings, resonance lifetimes are dynamically shortened (collisional broadening), even though the employed approach assumes vacuum resonance properties. Analyzing the ρ meson lifetimes makes it possible to quantify an effective broadening of the decay width and spectral function, which is important for distinguishing dynamical effects from additional genuine medium modifications of the spectral functions, indicating e.g. an onset of chiral symmetry restoration. The broadening of the spectral function in a thermalized system is shown to be consistent with other theoretical calculations. The effective ρ meson spectral function is also presented for the dynamical evolution of heavy-ion collisions, finding a clear correlation of the broadening with system size, which is explained by an observed dependence of the width on the local hadron density. Furthermore, the difference in the results between the thermal system and the full collision dynamics is explored, which may point to non-equilibrium effects.
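As a toy illustration (not the SMASH implementation), the qualitative effect of collisional broadening on a ρ-like spectral function can be sketched with a relativistic Breit-Wigner whose vacuum width is replaced by a larger effective width; the mass and vacuum width below are the familiar ρ(770) values, and the broadened width is an arbitrary illustrative choice:

```python
import math

def bw_spectral(m, m0=0.775, gamma=0.149):
    """Relativistic Breit-Wigner spectral function (GeV units),
    normalized only up to a constant factor."""
    num = (2.0 / math.pi) * m**2 * gamma
    den = (m**2 - m0**2)**2 + m**2 * gamma**2
    return num / den

# Collisional broadening: an effective width larger than the vacuum
# width lowers and widens the peak of the spectral function.
vac = bw_spectral(0.775)                  # vacuum peak height
broadened = bw_spectral(0.775, gamma=0.250)  # illustrative in-medium width
```

At the pole mass the peak height reduces to (2/π)/Γ, so increasing the effective width directly suppresses and spreads the peak, which is the qualitative behaviour the abstract describes.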
The Calderón problem with finitely many unknowns is equivalent to convex semidefinite optimization
(2023)
We consider the inverse boundary value problem of determining a coefficient function in an elliptic partial differential equation from knowledge of the associated Neumann-Dirichlet operator. The unknown coefficient function is assumed to be piecewise constant with respect to a given pixel partition, and upper and lower bounds are assumed to be known a priori.
We show that this Calderón problem with finitely many unknowns can be equivalently formulated as a minimization problem for a linear cost functional with a convex non-linear semidefinite constraint. We also prove error estimates for noisy data, and extend the result to the practically relevant case of finitely many measurements, where the coefficient is to be reconstructed from a finite-dimensional Galerkin projection of the Neumann-Dirichlet operator.
Our result is based on previous works on Loewner monotonicity and convexity of the Neumann-Dirichlet operator, and on the technique of localized potentials. It connects the emerging fields of inverse coefficient problems and semidefinite optimization.
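Schematically (the precise constraint set in the paper differs in detail), such a reformulation replaces the nonlinear inverse problem by a convex program of the form

```latex
\min_{\sigma \in \mathbb{R}^n} \; c^\top \sigma
\quad \text{s.t.} \quad
a_j \le \sigma_j \le b_j \;\; (j = 1,\dots,n),
\qquad
\Lambda(\sigma) \preceq \Lambda^{\mathrm{meas}},
```

where σ collects the pixel values of the coefficient, Λ(σ) denotes the Neumann-Dirichlet operator, Λ^meas the measured operator, and ⪯ the Loewner (semidefinite) order. Loewner monotonicity and convexity of σ ↦ Λ(σ) are what make the feasible set convex.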
The exploration of hot and dense nuclear matter: Introduction to relativistic heavy-ion physics
(2022)
This article summarizes our present knowledge about nuclear matter at the highest energy densities and its formation in relativistic heavy ion collisions. We review what is known about the structure and properties of the quark-gluon plasma and survey the observables that are used to glean information about it from experimental data.
The idea of slow-neutron capture nucleosynthesis formulated in 1957 triggered a tremendous experimental effort in different laboratories worldwide to measure the relevant nuclear physics input quantities, namely (n,γ) cross sections over the stellar temperature range (from a few eV up to several hundred keV) for most of the isotopes involved from Fe up to Bi. A brief historical review focused on total energy detectors will be presented to illustrate how advances in instrumentation have led, over the years, to the assessment and discovery of many new aspects of s-process nucleosynthesis and to the progressive refinement of theoretical models of stellar evolution. A summary will be presented of current efforts to develop new detection concepts, such as the Total-Energy Detector with γ-ray imaging capability (i-TED). The latter is based on the combination of Compton imaging with neutron time-of-flight (TOF) techniques, in order to achieve a superior level of sensitivity and selectivity in the measurement of stellar neutron capture rates.
The production of prompt Λc+ baryons at midrapidity (|y| < 0.5) was measured in central (0-10%) and mid-central (30-50%) Pb-Pb collisions at the center-of-mass energy per nucleon-nucleon pair √sNN = 5.02 TeV with the ALICE detector. The Λc+ production yield, the Λc+/D0 production ratio, and the Λc+ nuclear modification factor RAA are reported. The results are more precise and more differential in transverse momentum (pT) and centrality with respect to previous measurements. The Λc+/D0 ratio, which is enhanced with respect to the pp measurement for 4 < pT < 8 GeV/c, is described by theoretical calculations that model the charm-quark transport in the quark-gluon plasma and include hadronization via both coalescence and fragmentation mechanisms.
The purpose of the paper is to initiate the development of the theory of Newton-Okounkov bodies of curve classes. Our definition is based on making a fundamental property of Newton-Okounkov bodies hold also in the curve case: the volume of the Newton-Okounkov body of a curve is a volume-type function of the original curve. This construction allows us to conjecture a new relation between Newton-Okounkov bodies, which we prove in certain cases.
The present article proposes a re-reading of what "inclusion" in the sphere of the historical actually means in modern European historical discourse. It argues that this re-reading makes it possible to challenge a powerful but problematic norm of ontological homogeneity as something to be achieved in and by historical discourse. At least some of the more conceptually profound challenges that accounts of "deep history" - of very distant pasts - pose to historical discourse have to do with pursuits of this norm. Historical theory has the potential to respond to some of these challenges and to turn them back on the practice of accounting for deep time in historical writing. The argument proceeds, in a first step, by analyzing the ties between modern European mortuary cultures and historical writing. In a second step, the history of humanitarian moralities is brought to bear on the analysis, in order to make visible, thirdly, the fractured presences of deep time in modern-era and contemporary historical writing. The fractures in question emerge, the article argues, from the ontological heterogeneity of historical knowledge. In the end, a position beyond ontological homogeneity is adumbrated.
Release of neuropeptides from dense core vesicles (DCVs) is essential for neuromodulation. Compared to the release of small neurotransmitters, much less is known about the mechanisms and proteins contributing to neuropeptide release. Using optogenetics, behavioral analysis, electrophysiology, electron microscopy, and live imaging, we show that synapsin SNN-1 is required for cAMP-dependent neuropeptide release in Caenorhabditis elegans hermaphrodite cholinergic motor neurons. In synapsin mutants, behaviors induced by the photoactivated adenylyl cyclase bPAC, which we previously showed to depend on acetylcholine and neuropeptides (Steuer Costa et al., 2017), are altered as in animals with reduced cAMP. Synapsin mutants have slight alterations in synaptic vesicle (SV) distribution; however, a defect in SV mobilization was apparent after channelrhodopsin-based photostimulation. DCVs were strongly affected in snn-1 mutants: they were reduced by ∼30% in synaptic terminals and were not released following bPAC stimulation. Imaging of axonal DCV trafficking, also in genome-engineered mutants of the serine-9 protein kinase A phosphorylation site, showed that synapsin captures DCVs at synapses, making them available for release. SNN-1 co-localized with immobile, captured DCVs. In synapsin deletion mutants, DCVs were more mobile and less likely to be caught at release sites, and in non-phosphorylatable SNN-1B(S9A) mutants, DCVs trafficked less and accumulated, likely through enhanced SNN-1-dependent tethering. Our work establishes synapsin as a key mediator of neuropeptide release.
Taking shame as its guiding thread, the present contribution attempts to gain access to Agamben's theory of subjectivity in order to subject the theoretical and historical presuppositions of his ethics to an examination that can at the same time connect to Thomä's critique. The starting point of the following considerations is Agamben's study of the 'homo sacer'. A second step turns to the theory of shame put forward in "Was von Auschwitz bleibt". The critical discussion of Agamben's ethics opens the engagement with the key witness whom "Was von Auschwitz bleibt" presents: Primo Levi. This engagement is continued and sharpened by the way Levi's question "Ist das ein Mensch?" is outdone in Imre Kertész's "Roman eines Schicksallosen". Against the background of the central importance of shame in Primo Levi and Imre Kertész, the final part returns to Agamben's ethics in order to subject its foundations to a revision by recourse to Aristotle.
If projection and transference are similar terms that imply a fundamental form of ignorance, the aim of this investigation cannot be to draw a sharp distinction between projection and transference. Of course, the dialectic of inside and outside does not play the same central role in transference as it does in projection. In a certain way, the notion of projection concerns all forms of perception and seems to be wider than the notion of transference. On the other hand, the notion of transference as a poetic act of creating metaphorical analogies seems to be wider than that of projection. My interest in the following lines lies not in the attempt to draw a clear-cut distinction between the two terms, but in looking at their interplay in a novel that discusses all the forms of archaism, primitivism and regression commonly linked with projection, a novel that at the same time tries to give an account of the foundation of modern art. Thomas Mann's Doktor Faustus offers an insight not only into the combination of projection and love, but also into ignorance as the common ground of projection and transference. I will therefore first try to determine the modernity of Thomas Mann's novel with regard to the abundant intertextual dimension that characterizes the text, and then closely examine the central scene of the novel, the confrontation between Adrian Leverkühn and the obscure figure of the devil.
As Rolf Parr makes clear in his essay 'Liminale und andere Übergänge. Theoretische Modellierung von Grenzzonen, Normalitätsaspekten, Schwellen, Übergängen und Zwischenräumen in Literatur- und Kulturwissenschaft', the theory of intertextuality and intermediality that he advocates, following the work of Michel Foucault and Jürgen Link, is essentially shaped by a moment of border crossing. Clearly contoured borders give way to thresholds as "spatial-topographical zones of undecidedness" that function at the same time as temporal thresholds of memory. Drawing on Foucault, Parr directs his attention not only to discursive limits of the sayable imposed by mechanisms of exclusion, prohibitions, etc. He also points to Foucault's early concept of heterotopia, in which Foucault relates the drawing of boundaries to particular spatial structures. Parr's own interest in this context lies in carrying Foucault's discourse-theoretical work over into a theory of interdiscourse, which would have to cross precisely the thresholds between individual discourses. I would like to set a different accent here and work out the significance of threshold experiences in Foucault himself. I concentrate first on the concept of the historical a priori from 'Die Ordnung der Dinge', and then turn to the concept of heterotopia, which in a certain way accompanied and complemented the genesis of 'Die Ordnung der Dinge' in the 1960s. Comparing Foucault's thinking of thresholds with that of Walter Benjamin will at the same time make it possible to identify the theme of the liminal, in Parr's sense, as a fundamental motif of Foucault's thought.
The question of what literature is seems to be not only the most fundamental question facing literary studies; it is at the same time its most abyssal. It is fundamental because it asks after the essence of literature and thus invokes what would appear to be a self-evident matter accompanying any engagement with literature. It is abyssal because even the seemingly most self-evident definitions of literature have so far not led to a unified conception of literature's essence. Thus, with the very first question that presents itself, literary studies faces a seemingly irresolvable dilemma. Asked about the object that belongs to it, and which accordingly could vouch for its legitimacy as a scholarly discipline, it remains in the dark.
Is literature, understood as a deviation from or a fulfilment of the expressive function of language, a form of discourse that has access to the domain of truth, or does it rather block every systematic access to truth? And what is gained at all by relating literature and truth to one another? It is the merit of Stanley Cavell's work to have given these questions a new urgency that reaches beyond the opposition between analytic philosophy and deconstruction. The following pages pose the question of truth in literature once more through an engagement with Cavell's writings, in order to determine both the reach and the limits of the philosophical discourse on literature. [...] What is fundamentally at stake for Cavell is, on the one hand, the knowledge that philosophy can have of the world and, on the other, the knowledge that philosophy and literature can have of one another in their shared and yet different engagements with skepticism. To the extent that he searches for ways of overcoming skepticism, Cavell first acknowledges specific forms of not-knowing, which he addresses in the context of philosophical and literary texts alike. A special place in this connection is occupied by his repeated recourse to Shakespeare, which culminates in "Der Anspruch der Vernunft" in a reading of "Othello" that, by analyzing the tragedy as an expression of and an answer to skepticism, makes it possible to grasp the problem of knowledge and not-knowing. It therefore suggests itself to subject Cavell's reflections on the connection between tragedy and skepticism to a critical reading that, within the framework of his own inquiry, asks once more about the fundamental relation between literature and the philosophical pursuit of truth.
At the centre of the text, it seems, stands the mournful working-through of an event long past, and with it remembrance and farewell as fundamental motifs of Droste-Hülshoff's work, as they also find expression in other texts such as "Meine Toten" or the Byron poem "Lebt wohl". In the "Taxuswand", Droste-Hülshoff traverses a long span of time, the eighteen years that lie between the encounter and its poetic treatment. The question that arises in this connection is that of the fundamental relation between poetic acts of remembrance and biographical experience in the work of Annette von Droste-Hülshoff. That the two, much as in Baudelaire, do not simply coincide but diverge is the conjecture pursued in what follows.
We tested 6–7-year-olds, 18–22-year-olds, and 67–74-year-olds on an associative memory task that consisted of knowledge-congruent and knowledge-incongruent object–scene pairs that were highly familiar to all age groups. We compared the three age groups on their memory congruency effect (i.e., better memory for knowledge-congruent associations) and on a schema bias score, which measures the participants’ tendency to commit knowledge-congruent memory errors. We found that prior knowledge similarly benefited memory for items encoded in a congruent context in all age groups. However, for associative memory, older adults and, to a lesser extent, children overrelied on their prior knowledge, as indicated by both an enhanced congruency effect and schema bias. Functional Magnetic Resonance Imaging (fMRI) performed during memory encoding revealed an age-independent memory × congruency interaction in the ventromedial prefrontal cortex (vmPFC). Furthermore, the magnitude of vmPFC recruitment correlated positively with the schema bias. These findings suggest that older adults are most prone to rely on their prior knowledge for episodic memory decisions, but that children can also rely heavily on prior knowledge that they are well acquainted with. Furthermore, the fMRI results suggest that the vmPFC plays a key role in the assimilation of new information into existing knowledge structures across the entire lifespan. vmPFC recruitment leads to better memory for knowledge-congruent information but also to a heightened susceptibility to commit knowledge-congruent memory errors, in particular in children and older adults.
During the first two days of August 2016 a seismic crisis occurred on Brava, Cape Verde, which – according to observations based on a local seismic network – was characterized by more than a thousand volcano–seismic signals. Brava is considered an active volcanic island, although it has not experienced any historic eruptions. Seismicity significantly exceeded the usual level during the crisis. We report on results based on data from a temporary seismic–array deployment on the neighbouring island of Fogo at a distance of about 35 km. The array was in operation from October 2015 to December 2016 and recorded a total of 1343 earthquakes, 355 of which were localized. On 1 and 2 August we observed 54 earthquakes, 25 of which could be located beneath Brava. We further evaluate the observations with regard to possible precursors to the crisis and its continuation. Our analysis shows a migration of seismicity around Brava, but no distinct precursory pattern. However, the observations suggest that similar earthquake swarms commonly occur close to Brava. The results further confirm the advantages of seismic arrays as tools for the remote monitoring of regions with limited station coverage or access.
In the last decades, energy modelling has supported energy planning by offering insights into the dynamics between energy access, resource use, and sustainable development. Especially in recent years, there has been an attempt to strengthen the science-policy interface and to increase the involvement of society in energy planning processes. This has, both in the EU and worldwide, led to the development of open-source and transparent energy modelling practices. This paper describes the role of an open-source energy modelling tool in the energy planning process and highlights its importance for society. Specifically, it describes the existence and characteristics of the relationship between developing an open-source, freely available tool and its application, dissemination and use for policy making. Using the example of the Open Source energy Modelling System (OSeMOSYS), this work focuses on practices that were established within the community and that made the framework's development and application both relevant and scientifically grounded. Keywords: Energy system modelling tool; Open-source software; Model-based public policy; Software development practice; Outreach practice
Introduction: In the development of bio-enabling formulations, innovative in vivo predictive tools to understand and predict the in vivo performance of such formulations are needed. Etravirine, a non-nucleoside reverse transcriptase inhibitor, is currently marketed as an amorphous solid dispersion (Intelence® tablets). The aims of this study were 1) to investigate and discuss the advantages of using biorelevant in vitro setups in simulating the in vivo performance of Intelence® 100 mg and 200 mg tablets, in the fed state, 2) to build a Physiologically Based Pharmacokinetic (PBPK) model by combining experimental data and literature information with the commercially available in silico software Simcyp® Simulator V17.1 (Certara UK Ltd.), and 3) to discuss the challenges when predicting the in vivo performance of an amorphous solid dispersion and identify the parameters which influence the pharmacokinetics of etravirine most.
Methods: Solubility, dissolution and transfer experiments were performed in various biorelevant media simulating the fasted and fed state environment in the gastrointestinal tract. An in silico PBPK model for healthy volunteers was developed in the Simcyp® Simulator, using in vitro results and data available from the literature as input. The impact of pre- and post-absorptive parameters on the pharmacokinetics of etravirine was investigated using simulations of various scenarios.
Results: In vitro experiments indicated a large effect of naturally occurring solubilizing agents on the solubility of etravirine. Interestingly, supersaturated concentrations of etravirine were observed over the entire duration of dissolution experiments on Intelence® tablets. Coupling the in vitro results with the PBPK model provided the opportunity to investigate two possible absorption scenarios, i.e. with or without implementation of precipitation. The results from the simulations suggested that a scenario in which etravirine does not precipitate is more representative of the in vivo data. On the post-absorptive side, it appears that the concentration dependency of the unbound fraction of etravirine in plasma has a significant effect on etravirine pharmacokinetics.
Conclusions: The present study underlines the importance of combining in vitro and in silico biopharmaceutical tools to advance our knowledge in the field of bio-enabling formulations. Future studies on other bio-enabling formulations can be used to further explore this approach to support rational formulation design as well as robust prediction of clinical outcomes.
Within the last decades, western democracies have experienced a rise of inequality, with the gap between lower- and upper-class citizens steadily increasing and a widespread sentiment that inequalities are also growing in the political sphere. Against this background, and in the context of the current “crisis of democracy”, democratic innovations such as direct democratic instruments are discussed as a popular means of bringing citizens back in. However, research on direct democracy has produced rather inconsistent results regarding the effects referenda and initiatives have on equality. Studies in this field are often limited to single countries and certain aspects of equality. Moreover, most existing studies look at the mere availability of direct democratic instruments instead of actual bills that are put to a vote. This paper aims to take a first step towards filling these gaps by giving an explorative overview of the outputs of direct democratic bills on multiple equality dimensions, analyzing all national referenda and initiatives in European democracies between 1990 and 2015. How many pro- and contra-equality bills have been put to a vote, how many of those succeeded at the ballot, and are there differences between country groups? Our findings show that a majority of direct democratic bills were not related to equality at all. Regarding the successful bills, we detect some regional differences along with the general tendency that there are more pro- than contra-equality bills. Our paper sheds new light on the question of whether direct democracy can serve as an appropriate means to complement representative democracy and to shape democratic institutions in the future. The potential of direct democracy in fostering or impeding equality should be an important criterion for the assessment of claims to extend decision-making by citizens.
Purpose: The design of biorelevant conditions for in vitro evaluation of orally administered drug products is contingent on obtaining accurate values for physiologically relevant parameters such as pH, buffer capacity and bile salt concentrations in upper gastrointestinal fluids.
Methods: The impact of sample handling on the measurement of pH and buffer capacity of aspirates from the upper gastrointestinal tract was evaluated, with a focus on centrifugation and freeze-thaw cycling as factors that can influence results. Since bicarbonate is a key buffer system in the fasted state and is used to represent conditions in the upper intestine in vitro, variations on sample handling were also investigated for bicarbonate-based buffers prepared in the laboratory.
Results: Centrifugation and freezing significantly increase pH and decrease buffer capacity in samples obtained by aspiration from the upper gastrointestinal tract in the fasted state and in bicarbonate buffers prepared in vitro. Comparison of data suggested that the buffer system in the small intestine does not derive exclusively from bicarbonates.
Conclusions: Measurement of both pH and buffer capacity immediately after aspiration are strongly recommended as “best practice” and should be adopted as the standard procedure for measuring pH and buffer capacity in aspirates from the gastrointestinal tract. Only data obtained in this way provide a valid basis for setting the physiological parameters in physiologically based pharmacokinetic models.
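As a reminder of the quantity at stake, buffer capacity is simply the amount of strong acid or base required to shift the pH of a sample by one unit. A minimal sketch (the function name and the numbers are illustrative, not taken from the study):

```python
def buffer_capacity(delta_moles_per_litre, delta_ph):
    """Buffer capacity beta: mol/L of strong acid or base needed to
    shift the pH of the sample by one pH unit."""
    return abs(delta_moles_per_litre / delta_ph)

# Illustrative example: adding 0.005 mol/L of HCl lowers the pH of an
# aspirate from 6.5 to 6.1, i.e. a pH change of -0.4 units.
beta = buffer_capacity(0.005, 6.1 - 6.5)  # -> 0.0125 mol/L per pH unit
```

Because CO2 escapes from bicarbonate-based samples during centrifugation or freeze-thaw cycling, both the measured pH and this ratio drift, which is why immediate measurement after aspiration is recommended.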
Introduction: When developing bio-enabling formulations, innovative tools are required to understand and predict in vivo performance and may facilitate approval by regulatory authorities. EMEND® is an example of such a formulation, in which the active pharmaceutical ingredient, aprepitant, is nano-sized. The aims of this study were 1) to characterize the 80 mg and 125 mg EMEND® capsules in vitro using biorelevant tools, 2) to develop and parameterize a physiologically based pharmacokinetic (PBPK) model to simulate and better understand the in vivo performance of EMEND® capsules and 3) to assess which parameters primarily influence the in vivo performance of this formulation across the therapeutic dose range.
Methods: Solubility, dissolution and transfer experiments were performed in various biorelevant media simulating the fasted and fed state environment in the gastrointestinal tract. An in silico PBPK model for healthy volunteers was developed in the Simcyp Simulator, informed by the in vitro results and data available from the literature.
Results: In vitro experiments indicated a large effect of native surfactants on the solubility of aprepitant. Coupling the in vitro results with the PBPK model led to an appropriate simulation of aprepitant plasma concentrations after administration of 80 mg and 125 mg EMEND® capsules in both the fasted and fed states. Parameter Sensitivity Analysis (PSA) was conducted to investigate the effect of several parameters on the in vivo performance of EMEND®. While nano-sizing aprepitant improves its in vivo performance, intestinal solubility remains a barrier to its bioavailability and thus aprepitant should be classified as DCS IIb.
Conclusions: The present study underlines the importance of combining in vitro and in silico biopharmaceutical tools to understand and predict the absorption of this poorly soluble compound from an enabling formulation. The approach can be applied to other poorly soluble compounds to support rational formulation design and to facilitate regulatory assessment of the bio-performance of enabling formulations.
Objectives: Supersaturating formulations hold great promise for delivery of poorly soluble active pharmaceutical ingredients (APIs). To profit from supersaturating formulations, precipitation is hindered with precipitation inhibitors (PIs), maintaining drug concentrations for as long as possible. This review provides a brief overview of supersaturation and precipitation, focusing on precipitation inhibition. Trial-and-error PI selection will be examined alongside established PI screening techniques. Primarily, however, this review will focus on recent advances that utilise advanced analytical techniques to increase mechanistic understanding of PI action and systematic PI selection.
Key findings: Advances in mechanistic understanding have been made possible by the use of analytical tools such as spectroscopy, microscopy and mathematical and molecular modelling, which have been reviewed herein. Using these techniques, PI selection can instead be guided by molecular rationale. However, more work is required to see widespread application of such an approach for PI selection.
Conclusions: PIs are becoming increasingly important in enabling formulations. Trial-and-error approaches have seen success thus far. However, it is essential to learn more about the mode of action of PIs if the most optimal formulations are to be realised. Robust analytical tools, and the knowledge of where and how they can be applied, will be essential in this endeavour.
Supersaturating formulations are widely used to improve the oral bioavailability of poorly soluble drugs. However, supersaturated solutions are thermodynamically unstable, and such formulations often must include a precipitation inhibitor (PI) to sustain the increased concentrations and ensure that sufficient absorption will take place from the gastrointestinal tract. Recent advances in understanding the importance of drug-polymer interaction for successful precipitation inhibition have been encouraging. However, there still exists a gap in how this newfound understanding can be applied to improve the efficiency of PI screening and selection, which is still largely carried out with trial-and-error-based approaches. The aim of this study was to demonstrate how drug-polymer mixing enthalpy, calculated with the Conductor-like Screening Model for Real Solvents (COSMO-RS), can be used as a parameter to select the most efficient precipitation inhibitors, and thus realise the most successful supersaturating formulations. This approach was tested for three different Biopharmaceutical Classification System (BCS) II compounds: dipyridamole, fenofibrate and glibenclamide, formulated with the supersaturating formulation, mesoporous silica. For all three compounds, precipitation was evident in mesoporous silica formulations without a precipitation inhibitor. Of the nine precipitation inhibitors studied, there was a strong positive correlation between the drug-polymer mixing enthalpy and the overall formulation performance, as measured by the area under the concentration-time curve in in vitro dissolution experiments. The data suggest that a rank-order-based approach using calculated drug-polymer mixing enthalpy can be reliably used to select precipitation inhibitors for a more focused screening. Such an approach improves the efficiency of precipitation inhibitor selection, whilst also improving the likelihood that the most optimal formulation will be realised.
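The rank-order selection described above amounts to checking a rank correlation between the calculated mixing enthalpy and the measured dissolution AUC. A minimal sketch with purely hypothetical numbers (neither the function nor the data come from the study):

```python
def spearman(x, y):
    """Spearman rank correlation without tie handling:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1  # rank 1 = smallest value
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Hypothetical screening data: a more favourable (more negative)
# mixing enthalpy should pair with a larger dissolution AUC.
mixing_enthalpy = [-1.2, -0.8, -2.1, -0.3, -1.7]  # kJ/mol (illustrative)
auc = [410, 300, 620, 180, 505]                    # arbitrary units (illustrative)
rho = spearman([-h for h in mixing_enthalpy], auc)  # -> 1.0 for this ordering
```

In a real screen one would compute the enthalpies with COSMO-RS and the AUCs from biorelevant dissolution runs; a strong positive rank correlation then justifies shortlisting only the top-ranked polymers.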
Objectives: The objective of this review is to provide an overview of PK/PD models, focusing on drug-specific PK/PD models and highlighting their added value in drug development and regulatory decision-making.
Key findings: Many PK/PD models, with varying degrees of complexity and physiological understanding, have been developed to evaluate the safety and efficacy of drug products. In special populations (e.g. pediatrics), in cases where there is genetic polymorphism and in other instances where therapeutic outcomes are not well described solely by PK metrics, the implementation of PK/PD models is crucial to assure the desired clinical outcome. Since dissociation between the pharmacokinetic and pharmacodynamic profiles is often observed, it is proposed that physiologically-based pharmacokinetic (PBPK) and PK/PD models be given more weight by regulatory authorities when assessing the therapeutic equivalence of drug products.
Summary: Modeling and simulation approaches already play an important role in drug development. While the field is slowly moving away from “one-size-fits-all” PK methodologies for assessing therapeutic outcomes, further work is required to increase confidence in the translatability of PK/PD models and their ability to predict various clinical scenarios, so as to encourage more widespread implementation in regulatory decision-making.
Background: Drugs used to treat gastrointestinal diseases (GI drugs) are widely used either as prescription or over-the-counter (OTC) medications and belong to both the ten most prescribed and ten most sold OTC medications worldwide. Current clinical practice shows that in many cases, these drugs are administered concomitantly with other drug products. Due to their metabolic properties and mechanisms of action, the drugs used to treat gastrointestinal diseases can change the pharmacokinetics of some co-administered drugs. In certain cases, these interactions can lead to failure of treatment or to the occurrence of serious adverse events. The mechanism of interaction depends highly on drug properties and differs among therapeutic categories. Understanding these interactions is essential to providing recommendations for optimal drug therapy.
Objective: To discuss the most frequent interactions between GI and other drugs, including identification of the mechanisms behind these interactions, where possible.
Conclusion: Interactions with GI drugs are numerous and can be highly significant clinically. Whilst alterations in bioavailability due to changes in solubility, dissolution rate and metabolic interactions can be (for the most part) easily identified, interactions that are mediated through other mechanisms, such as permeability or microbiota, are less well understood. Future work should focus on characterizing these aspects.
Motivation: The topic of this paper is the estimation of alignments and mutation rates based on stochastic sequence-evolution models that allow insertions and deletions of subsequences ("fragments") and not just single bases. The model we propose is a variant of a model introduced by Thorne, Kishino, and Felsenstein (1992). The computational tractability of the model depends on certain restrictions in the insertion/deletion process; we discuss the possible effects of these restrictions.
Results: The process of fragment insertion and deletion in the sequence-evolution model induces a hidden Markov structure at the level of alignments and thus makes possible efficient statistical alignment algorithms. As an example we apply a sampling procedure to assess the variability in alignment and mutation parameter estimates for HVR1 sequences of human and orangutan, improving results of previous work. Simulation studies give evidence that estimation methods based on the proposed model also give satisfactory results when applied to data for which the restrictions in the insertion/deletion process do not hold.
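To give a flavor of why a hidden Markov structure makes statistical alignment efficient, the sketch below implements a toy three-state pair HMM (Match / Insert / Delete) whose forward recursion sums the probabilities of all alignments of two sequences in quadratic time. The parameters, emission model, and sequences are invented for illustration; the fragment insertion/deletion model in the paper induces an analogous but richer HMM, and the paper additionally samples alignments rather than just computing likelihoods.

```python
# Toy pair HMM: forward algorithm summing over all alignments of two
# DNA sequences in O(len(a) * len(b)) time. Parameters are illustrative,
# not those of the TKF92-style fragment model described in the paper.

def forward_pair_hmm(a, b, p_match=0.8, p_gap=0.1, p_emit_eq=0.7):
    n, m = len(a), len(b)
    # f[i][j] = total probability of all alignments of a[:i] with b[:j]
    f = [[0.0] * (m + 1) for _ in range(n + 1)]
    f[0][0] = 1.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            total = 0.0
            if i and j:  # Match state emits an aligned pair
                e = p_emit_eq if a[i - 1] == b[j - 1] else (1 - p_emit_eq) / 3
                total += p_match * e * f[i - 1][j - 1]
            if i:        # Delete state emits a base of a against a gap
                total += p_gap * 0.25 * f[i - 1][j]
            if j:        # Insert state emits a base of b against a gap
                total += p_gap * 0.25 * f[i][j - 1]
            f[i][j] = total
    return f[n][m]

# Summing over all alignments at once, which naive enumeration cannot do
# for long sequences, is the efficiency the abstract refers to.
print(forward_pair_hmm("ACGT", "ACT"))
```

The same dynamic-programming table also supports stochastic traceback, which is the basis of sampling procedures like the one applied to the HVR1 sequences.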
Availability: The source code of the software for sampling alignments and mutation rates for a pair of DNA sequences according to the fragment insertion and deletion model is freely available from www.math.uni-frankfurt.de/~stoch/software/mcmcsalut under the terms of the GNU public license (GPL, 2000).
Within the last year, expressions of second-hand embarrassment on Twitter increased significantly. We show how this relates to the current situation in U.S. politics under Trump and provide two explanations for why people feel this way in response to his actions. First, compared to former politicians, Trump’s norm violations seem intentional. Second, intentional norm violations specifically threaten the social integrity of in-group members—in this case, U.S. citizens. We theorize that these strong, frequent and widespread feelings of second-hand embarrassment motivate political actions to prevent further harm to individuals’ self-concept and protect their social integrity.
The significance of John McDowell's philosophical program, which already effects a revolutionary reorientation in theoretical philosophy, can only be fully appreciated once its consequences for practical philosophy are also taken into view. Admittedly, Geist und Welt (Mind and World) proceeds primarily from dilemmas of epistemology. But McDowell's proposal to abandon the equation of outer nature with the meaning-free realm of natural laws in favor of a conception of reasons in the world opens up such a novel perspective on the nature of moral judgments that it almost seems as if McDowell's theoretical program had been designed with this gain for practical philosophy in mind.
According to his own understanding, Jürgen Habermas’ Theory of Communicative Action offers a new account of the normative foundations of critical theory. Habermas’ motivating insight is that neither a transcendental nor a metaphysical solution to the problem of normativity, nor a merely hermeneutic reconstruction of historically given norms, is sufficient to clarify the normative foundations of critical theory. In response to this insight, Habermas develops a novel account of normativity which locates the normative demands upon which critical theory draws within the socially instituted practice of communicative understanding. Although Habermas has claimed otherwise, this new foundation for critical theory constitutes a novel and innovative form of “immanent critique”. To argue for and to clarify this claim, I offer, in section 1, a formal account of immanent critique and distinguish between two different ways of carrying out such a critique. In section 2, I examine Habermas’ rejection of the first, hermeneutic option. Against this background, I then show, in section 3, that the Theory of Communicative Action attempts to formulate an immanent critique of contemporary societies according to a second, “practice-based” model. However, because Habermas, as I will argue in section 4, commits himself to an implausibly narrow view in regard to one central element of such a model – in regard to the social ontology of immanent normativity – his normative critique cannot develop its full potential (section 5).
The present study deals with various aspects of the image of women in both the German and the Arabic (or Oriental) literature of the Middle Ages.
The aspects examined include, for example, the position of women in society in both religious and social terms, the role and influence of women, the description of female beauty, the relationship between man and woman, courtly love (Minne) as a central motif, marriage as a socially conditioned way of life, and the noble lady and the peasant woman as contrasting figures.
The study draws on particular text types of medieval literature in order to present a concrete picture by this inductive method and to highlight the main elements of the investigation.
These works are the classic courtly Arthurian romances of medieval German literature, e.g. Erec, Iwein, Parzival and Tristan und Isolde, as well as tales from the story collection of One Thousand and One Nights, the Antar romance, and the story of Layla and Majnun from the Oriental literature of the Middle Ages.
The image of women is a theme that remains a focus of many literary works to this day, but one that held particular significance throughout the entire epoch of the Middle Ages. This may be attributable to the different ways in which the image of women was portrayed and expressed in the various cultures.
Wikis in der Hochschullehre (Wikis in university teaching)
(2012)
This contribution gives an overview of deployment scenarios for wikis in learning and teaching processes and their suitability for collaborative knowledge production, while also addressing limitations, preconditions, and design recommendations. In addition, it documents experience with various wiki applications at the University of Frankfurt, ranging from accompanying use in seminars to the student-initiated provision of course-related materials. The aspects elaborated earlier are taken up again in these examples, and their practical relevance is illustrated.
While junior researchers are trained strategically and on a sound scholarly basis, complete with various examinations (Bachelor's, Master's, doctorate, possibly also habilitation), nothing even remotely comparable exists in the area of teaching. The usual "qualification" of junior teaching staff mostly takes place "on the job" (cf. Conradi, 1983), i.e. by trying things out oneself after having observed other teachers during one's own studies. Under favorable conditions, the teacher has attended continuing-education courses on good teaching beforehand or alongside. A strategic embedding of these personnel-development measures, as is intended on the research side, does not exist. This contribution presents possible formats and elaborates one of them in detail as an example.
Continuing-education courses in higher-education didactics often meet with little acceptance among established university teachers. It is assumed that demonstrating the scientific evidence behind such didactic measures increases their acceptance within universities. To link empirical research with continuing education in higher-education didactics, we propose a spiral model. In practice, starting from theoretical and empirical foundations, relevant results are developed for treatment in continuing-education courses. The application of the spiral model is illustrated with a practical example on the topic of "intercultural communication at the university".
The internationalization of German universities has increased sharply in recent years. Dealing with students from different cultures has long been everyday reality for teachers. However, communication between members of different cultures does not always run smoothly. To counteract potential difficulties, some universities employ intercultural trainings to raise awareness of intercultural differences. As part of a continuing-education program in higher-education didactics for teachers, the authors developed and deployed an intercultural training. This article reports on the structure and goals of the training. It also presents a study design with which the influence of culture on online communication in teaching was investigated.
This book is a full reference grammar of Qiang, one of the minority languages of southwest China, spoken by about 70,000 Qiang and Tibetan people in Aba Tibetan and Qiang Autonomous Prefecture in northern Sichuan Province. It belongs to the Qiangic branch of Tibeto-Burman (one of the two major branches of Sino-Tibetan). The dialect presented in the book is the Northern Qiang variety spoken in Ronghong Village, Yadu Township, Chibusu District, Mao County. This book, the first book-length description of the Qiang language in English, is the result of many years of work on the language.
Within the framework of the federal-state program "Qualitätspakt Lehre", Goethe University Frankfurt successfully obtained funding for the program "Starker Start ins Studium". As a result, the Institute of Psychology now has the staffing capacity to improve the academic and social integration of new psychology students in the six-semester Bachelor's program in psychology. To this end, two obligatory teaching modules of two semesters each were developed. This contribution describes the overarching teaching concept and illustrates its implementation in psychology as a practical example.
Verständnisvolle Dozenten haben weniger Fachwissen (Understanding lecturers have less subject knowledge: effects of linguistic adaptation to laypersons)
(2012)
In interaction with students, written online communication has become an important working medium for every teacher. In forming judgments about each other, the interaction partners have only the written text with its lexical and grammatical features at their disposal. The degree of lexical adaptation to a student's word choice can therefore influence students' evaluation of their lecturers with respect to various personality traits. In the present study, students each rated two lecturers on understanding, conscientiousness, and intellect (IPIP; Goldberg, Johnson, Eber et al., 2006) on the basis of an e-mail exchange. The degree of the lecturers' lexical adaptation was varied. Students rated lecturers with colloquial wording as more understanding and more conscientious, but as tending to be less knowledgeable.
This contribution presents approaches at Goethe University Frankfurt for fostering teacher-education students' reflection on their aptitude, as well as the counseling competence of their supervising teachers: for the students, various measures were developed and implemented that promote reflection on personal suitability for the teaching profession and help compensate existing deficits at an early stage. For the supervising teachers (at university and school), a continuing-education course in higher-education didactics was developed and deployed, intended to strengthen their counseling competence.
We propose a framework of individual problem-solving and communicative demands (IproCo) that bridges the gap between models from cognitive psychology and communication pragmatics. Furthermore, we present two experiments conducted to identify factors influencing the demands and to test possibilities for support. The experiments employed a remote collaborative picture-sorting task with concrete and abstract pictures and applied non-interactive conditions compared to interactive conditions. In a first experiment, the influence of the postulated demands on collaboration process and outcome was analysed, and the impact of shared applications was tested. In a second experiment, we evaluated instructional support measures consisting of model collaboration and a collaboration script. The collaboration process showed benefits of the support but the outcome did not. However, the support measures fostered the collaboration process even in the particularly difficult conditions with non-interactive communication. We discuss the impact of the IproCo framework and apply it to other tasks.
Effective knowledge communication presupposes common ground (Clark & Brennan, 1991) that needs to be established and maintained. This is particularly difficult in remote communication as well as in non-interactive settings, because the speaker cannot use gestures or facial expressions and has to tailor their utterances to the addressee without receiving feedback. In these situations, the speaker may achieve mutual understanding, for example, by adopting the addressee’s perspective. We present a study conducted to test the impact of instructions that support and hinder individual problem solving and knowledge communication. We used a picture-sorting task requiring individual cognitive processes of feature search (Treisman & Gelade, 1980) in addition to referential communication. As our study focused on the design of utterances, all participants assumed the role of speaker. Participants were told that their descriptions would be recorded and later listened to by a participant in the role of addressee. Eight sets of pictures were used, which varied on two dimensions: the individual cognitive demands of detecting the relevant features (varied as a between-subject factor) and the communicative demands (varied as a within-subject factor). A further between-subject factor was the type of instructions: participants received either a collaboration script as supporting instructions, or time pressure was applied to induce stress, or else they were given no additional instructions (control group). We used the speakers’ verbal utterances to examine the quality of their descriptions. For both dimensions of difficulty, we found the expected effects. In the conditions with a collaboration script, fewer irrelevant features were mentioned and fewer features were described with delay.
In the conditions with time pressure, there were fewer irrelevant features described, but the number of correctly described pictures was impaired through the fact that relevant features were also neglected. Under time pressure, speakers tended to provide ambiguous descriptions regarding the frame of reference.
Venturing into and then pressing on through Thomas Bernhard's work is not exactly like taking a walk, but the walk is a recurring motif in Bernhard's œuvre (together with those of Handke, Sebald, and Walser, to name only a few of the walkers in twentieth-century German-language literature). Bernhard's figures walk, march, run, but in a "direction opposite" to the one indicated by Stifter. Sometimes their routes wind through nature, as when they enter a wood never to return (Gelo, Al limite boschivo, La partita a carte); sometimes they march within the confines of their "house-prison", following the labyrinthine, endless paths of their minds (La Fornace, Cemento); at still other times they move in an urban, metropolitan setting, in Rome in Estinzione (where the walk with the pupil Gambetti retains an Aristotelian, peripatetic aura) or, more often, in Vienna.
A title such as "Negative Dialectics and Negative Anthropology" might seem to announce a comparison between Th. W. Adorno and Ulrich Sonnemann, following a hint taken from the "Introduction" to Negative Dialectics (1966). Instead, disappointing such an expectation, the "Negative Anthropologie" referred to in this essay is that of Günther Stern/Anders. The idea of a comparison between the two perspectives arises from the curiosity to understand the correspondence between "negative dialectics" and "negative anthropology", where the latter phrase denotes Anders's conception of a humanity inadequate to the world. That this is not an oddity but a legitimate question is confirmed, indirectly, by Adorno himself, who, in a note in the section of Negative Dialectics devoted to the reading of Heidegger's thought, invokes precisely Anders's lesson.
Integer point sets minimizing average pairwise L1 distance: What is the optimal shape of a town?
(2010)
An n-town, n ∈ ℕ, is a group of n buildings, each occupying a distinct position on a 2-dimensional integer grid. If we measure the distance between two buildings along the axis-parallel street grid, then an n-town has optimal shape if the sum of all pairwise Manhattan distances is minimized. This problem has been studied for cities, i.e., the limiting case of very large n. For cities, it is known that the optimal shape can be described by a differential equation, for which no closed-form solution is known. We show that optimal n-towns can be computed in O(n^7.5) time. This is also practically useful, as it allows us to compute optimal solutions up to n = 80.
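For intuition, the objective can be checked by brute force for very small n. The sketch below is illustrative Python, not the paper's O(n^7.5) algorithm: it enumerates all placements of n buildings inside a small grid and returns one minimizing the sum of pairwise Manhattan distances. The grid size and n are arbitrary choices; the enumeration is exponential and only feasible for tiny instances.

```python
# Brute-force n-town: exhaustively minimize the sum of pairwise L1
# (Manhattan) distances over all placements of n buildings in a box.
# Exponential-time illustration only; the paper's algorithm handles n <= 80.
from itertools import combinations

def total_l1(points):
    """Sum of pairwise Manhattan distances for a set of grid points."""
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in combinations(points, 2))

def best_n_town(n, grid_size):
    """Exhaustively find an optimal n-town inside a grid_size x grid_size box."""
    cells = [(x, y) for x in range(grid_size) for y in range(grid_size)]
    return min(combinations(cells, n), key=total_l1)

# A 4-town in a 3x3 box: the optimum is a 2x2 square with total distance 8.
town = best_n_town(4, 3)
print(total_l1(town))  # -> 8
```

Even this toy version shows the qualitative result: optimal towns are compact, roughly convex clusters, approximating in the large-n limit the city shape described by the differential equation mentioned above.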
We philologists are in a comfortable position. We watch as others, who mostly do not belong to our guild, carry the vast abundance of written work from its respective original language into every possible language, and we relate to this as interested spectators. We have every reason to be pleased: without this cross-border exchange of goods and ideas, the field on which we graze would remain narrower and more parceled out than the authors' intentions, and the matter itself, would require. We can (provided we have the necessary overview) praise what the translators have accomplished: the correspondences they have discovered or invented; the force, suppleness, and wealth of modulation that they have first activated in their target languages with thousands of illuminating finds or with the whole tone and cadence of their translations. If we trust ourselves to do so, we can dabble in their craft and translate individual passages or entire works ourselves. We can criticize them where the translations before us seem too pallid, or where, in substance or style, they fall further 'behind the original' than necessary; we can propose improvements. When we quote translations and find it necessary to modify them, we move in a gray zone between respect for the translator, delight in still further recognized potencies of the text, and the urge to bring 'everything' we have read out of the original to our listeners or readers in our own language.
The central thesis of the present essay is that there is no Adam Smith problem in the traditional sense, but that there is indeed a self-contradiction in Adam Smith's economic theory.
The essay first treats the close systematic connection between Smith's economic and ethical theory. This connection rests on the assumption of a supreme being and of a pre-established harmony inferred from it. Religious trust in a natural order corresponds to faith in the justice of the market. Smith's further political analysis, however, produces a self-contradiction. Smith shows that entrepreneurs' self-interest conflicts with the general interest of society, and moreover that entrepreneurs act more adroitly and more successfully in pushing through their own interests than other market actors. Nevertheless, Smith holds on to the assumption that the market unfolds a harmonizing effect that furthers prosperity on all sides. In his epigones, this assumption mutates into an ontological certainty.
Background: Microvolt T-wave alternans (MTWA) testing in many studies has proven to be a highly accurate predictor of ventricular tachyarrhythmic events (VTEs) in patients with risk factors for sudden cardiac death (SCD) but without a prior history of sustained VTEs (primary prevention patients). In some recent studies involving primary prevention patients with prophylactically implanted cardioverter-defibrillators (ICDs), MTWA has not performed as well.
Objective: This study examined the hypothesis that MTWA is an accurate predictor of VTEs in primary prevention patients without implanted ICDs, but not of appropriate ICD therapy in such patients with implanted ICDs.
Methods: This study identified prospective clinical trials evaluating MTWA measured using the spectral analytic method in primary prevention populations and analyzed studies in which: (1) few patients had implanted ICDs and as a result none or a small fraction (≤15%) of the reported end point VTEs were appropriate ICD therapies (low ICD group), or (2) many of the patients had implanted ICDs and the majority of the reported end point VTEs were appropriate ICD therapies (high ICD group).
Results: In the low ICD group comprising 3,682 patients, the hazard ratio associated with a nonnegative versus negative MTWA test was 13.6 (95% confidence interval [CI] 8.5 to 30.4) and the annual event rate among the MTWA-negative patients was 0.3% (95% CI: 0.1% to 0.5%). In contrast, in the high ICD group comprising 2,234 patients, the hazard ratio was only 1.6 (95% CI: 1.2 to 2.1) and the annual event rate among the MTWA-negative patients was elevated to 5.4% (95% CI: 4.1% to 6.7%). In support of these findings, we analyzed published data from the Multicenter Automatic Defibrillator Trial II (MADIT II) and Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT) trials and determined that in those trials only 32% of patients who received appropriate ICD therapy averted an SCD.
Conclusion: This study found that MTWA testing using the spectral analytic method provides an accurate means of predicting VTEs in primary prevention patients without implanted ICDs; in particular, the event rate is very low among such patients with a negative MTWA test. In prospective trials of ICD therapy, the number of patients receiving appropriate ICD therapy greatly exceeds the number of patients who avert SCD as a result of ICD therapy. In trials involving patients with implanted ICDs, these excess appropriate ICD therapies seem to distribute randomly between MTWA-negative and MTWA-nonnegative patients, obscuring the predictive accuracy of MTWA for SCD. Appropriate ICD therapy is an unreliable surrogate end point for SCD.
For a good decade now, Germany has been waiting: waiting for literature, for the great Berlin novel, for the great post-reunification novel. And despite various novels that have taken reunification and Berlin as their subject, whether by Günter Grass or Thomas Brussig, the waiting continues; no author, it seems, can get it right, with entertaining storytelling being desired or a presentation at the height of modern narrative art being demanded. Yet the alternative is perhaps falsely posed: could not an artfully written novel, with precise and richly varied language and sophisticated narrative structures, also be entertaining? After all, Döblin's by no means simple novel "Berlin Alexanderplatz" is also a pleasure to read, comparable to Joyce's "Ulysses" or Pynchon's "Gravity's Rainbow". Now, such novels do not lend themselves to repetition; any imitation of their style would stand under suspicion of being plagiarism or copy. Something similar would thus always be something different: novel, artificial, and precisely therein a more accurate image of its time than the multitude of plain novels that tell of Berlin or reunification. Recently, voices have multiplied in the German feuilleton discerning a certain art of narration of this kind in Ulrich Peltzer, which is why the opportunity is taken here to walk through his last three publications ["Stefan Martinez", "Alle oder keiner", "Bryant Park"] in order to trace their development, with the question in the back of one's mind: might one of the awaited great Berlin novels already be at hand here?
The workshop "National Specifics and International Aspects in the Development of Scholarship, with Special Reference to Narratology" is intended, according to the organizers' invitation, "to offer an opportunity to discuss the conditions and possibilities of integrative approaches to the study of scholarly processes, and to identify and critically examine important factors in the development of scholarship." The production, distribution, and reception of knowledge systems, the organizers write, take place "in different national and international social spaces, which sometimes strongly co-structure both the form and the cognitive content of theories, and favor or hinder their adoption. This becomes especially clear when one follows processes of theory transfer." In my contribution I wish to bring terminological clarification to the concept of knowledge transfer invoked here. To that end I shall first offer some terminological reflections on the status of the component concepts from which the term is composed (I), then observe the use of the term in various disciplinary contexts (II), and finally propose a differentiated use of the term as an analytical category for the development of scholarship (III).
In its collections, the German Literature Archive Marbach (DLA) maps the network of literary life in all its facets. At the center of its source-oriented collecting and cataloguing stands the author. Literature is documented from the genesis of a work through its various editions and its reception in literary criticism to its dramaturgical realization in radio, film, on stage, and in music. Since 2008 the DLA has also included internet sources such as literary journals, net literature, and weblogs in its spectrum, responding to the growing importance of the internet as a publication forum. Collecting, cataloguing, and archiving form a necessary unity; it is precisely the ephemerality of net-based resources that makes long-term safeguarding of their availability indispensable. This new collection of "literature on the net" therefore rests on several pillars.