The SARS-CoV-2 Omicron variant is currently causing a large number of infections in many countries. A number of antiviral agents are approved or in clinical testing for the treatment of COVID-19. Despite the high number of mutations in the Omicron variant, we show here that Omicron isolates display similar sensitivity to eight of the most important anti-SARS-CoV-2 drugs and drug candidates (including remdesivir, molnupiravir, and PF-07321332, the active compound in paxlovid), which is of timely relevance for the treatment of the increasing number of Omicron patients. Most importantly, we also found that the Omicron variant displays a reduced capability of antagonising the host cell interferon response. This provides a potential mechanistic explanation for the clinically observed reduced pathogenicity of Omicron variant viruses compared to Delta variant viruses.
Recently, we showed that SARS-CoV-2 Omicron virus isolates are less effective at inhibiting the host cell interferon response than Delta viruses. Here, we present further evidence that reduced interferon-antagonising activity explains, at least in part, why Omicron variant infections are inherently less severe than infections with other SARS-CoV-2 variants. Most importantly, we also show here that Omicron variant viruses display enhanced sensitivity to interferon treatment, which makes interferons promising therapy candidates for Omicron patients, particularly in combination with other antiviral agents.
Developmental loss of ErbB4 in PV interneurons disrupts state-dependent cortical circuit dynamics
(2020)
GABAergic inhibition plays an important role in the establishment and maintenance of cortical circuits during development. Neuregulin 1 (Nrg1) and its interneuron-specific receptor ErbB4 are key elements of a signaling pathway critical for the maturation and proper synaptic connectivity of interneurons. Using conditional deletions of the ERBB4 gene in mice, we tested the role of this signaling pathway at two developmental timepoints in parvalbumin-expressing (PV) interneurons, the largest subpopulation of cortical GABAergic cells. Loss of ErbB4 in PV interneurons during embryonic, but not late postnatal, development leads to alterations in the activity of excitatory and inhibitory cortical neurons, along with severe disruption of cortical temporal organization. These impairments emerge by the end of the second postnatal week, prior to the complete maturation of the PV interneurons themselves. Early loss of ErbB4 in PV interneurons also results in profound dysregulation of excitatory pyramidal neuron dendritic architecture and a redistribution of spine density at the apical dendritic tuft. In association with these deficits, excitatory cortical neurons exhibit normal tuning for sensory inputs, but a loss of state-dependent modulation of the gain of sensory responses. Together these data support a key role for early developmental Nrg1/ErbB4 signaling in PV interneurons as a powerful mechanism underlying the maturation of both the inhibitory and excitatory components of cortical circuits.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e., whether they convey information beyond what can be conveyed by isolated signals. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three awake common marmosets and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
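The co-information quantity used above can be illustrated on discrete toy variables. The sketch below is our own minimal formulation, CoI(X;Y;Z) = I(X;Z) + I(Y;Z) − I(X,Y;Z), where positive values indicate redundancy and negative values synergy; it is not the authors' analysis code, and all function names are ours.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def co_information(xs, ys, zs):
    """CoI(X;Y;Z) = I(X;Z) + I(Y;Z) - I(X,Y;Z).
    Positive -> redundancy; negative -> synergy."""
    return (mutual_info(xs, zs) + mutual_info(ys, zs)
            - mutual_info(list(zip(xs, ys)), zs))

# XOR example: Z = X ^ Y. Each input alone carries no information
# about Z, but together they determine it completely, so the
# interaction is purely synergistic (CoI = -1 bit).
xs = [0, 0, 1, 1]
ys = [0, 1, 0, 1]
zs = [x ^ y for x, y in zip(xs, ys)]
print(co_information(xs, ys, zs))  # -1.0
```

In practice the same quantity would be estimated from discretized ERP/BB signal values rather than binary toy data, with appropriate bias correction for limited samples.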
Rhythmic flicker stimulation has gained interest as a treatment for neurodegenerative diseases and a method for frequency tagging neural activity in human EEG/MEG recordings. Yet, little is known about the way in which flicker-induced synchronization propagates across cortical levels and impacts different cell types. Here, we used Neuropixels to simultaneously record from LGN, V1, and CA1 while presenting visual flicker stimuli at different frequencies. LGN neurons showed strong phase locking up to 40 Hz, whereas phase locking was substantially weaker in V1 units and absent in CA1 units. Laminar analyses revealed an attenuation of phase locking at 40 Hz for each processing stage, with substantially weaker phase locking in the superficial layers of V1. Gamma-rhythmic flicker predominantly entrained fast-spiking interneurons. Optotagging experiments showed that these neurons correspond to either PV+ or narrow-waveform Sst+ neurons. A computational model could explain the observed differences in phase locking based on the neurons' capacitive low-pass filtering properties. In summary, the propagation of synchronized activity and its effect on distinct cell types strongly depend on its frequency.
SpikeShip: a method for fast, unsupervised discovery of high-dimensional neural spiking patterns
(2023)
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure between spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory, called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost required to make all the relative spike-timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike-timing differences. SpikeShip can be computed effectively for high-dimensional neuronal ensembles, has a low (linear) computational cost of the same order as the spike count, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike-time distribution, is not affected by firing rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding-window approach. We compare the advantages and differences between SpikeShip and other measures such as the SPIKE and Victor-Purpura distances. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried information complementary to conventional firing rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
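The decomposition described above (a global rigid translation plus neuron-specific flows) can be sketched for the special case of exactly one spike per neuron, where each flow is simply the per-neuron spike-time difference and the translation minimizing the total L1 transport cost is the median flow. This is our own toy illustration, not the published SpikeShip implementation; the function name and data layout are assumptions.

```python
import statistics

def spikeship_one_spike(pattern_a, pattern_b):
    """Toy SpikeShip-style dissimilarity for one spike per neuron.
    `pattern_a` / `pattern_b` map neuron id -> spike time (e.g. ms).
    Returns (global_shift, residual_flows, dissimilarity)."""
    neurons = sorted(set(pattern_a) & set(pattern_b))
    # Per-neuron transport flow moving a's spike onto b's spike.
    flows = [pattern_b[n] - pattern_a[n] for n in neurons]
    # The rigid translation minimizing total |flow - shift| is the median.
    shift = statistics.median(flows)
    # Residual flows capture inter-neuronal spike-timing differences.
    residual = [f - shift for f in flows]
    # Dissimilarity: average residual transport cost per neuron.
    return shift, residual, sum(abs(r) for r in residual) / len(residual)

a = {1: 0.0, 2: 5.0, 3: 10.0}
b = {1: 3.0, 2: 8.0, 3: 13.0}   # same sequence, globally delayed by 3 ms
print(spikeship_one_spike(a, b))  # (3.0, [0.0, 0.0, 0.0], 0.0)
```

Note how a pure latency shift yields zero dissimilarity: the cost depends only on the relative spike-timing structure, mirroring the invariance described in the abstract. The full method additionally handles multiple spikes per neuron via normalized transport between spike-time distributions.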
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely-moving primates with concurrent monitoring of three-dimensional head and gaze positions. We recorded neurons and local field potentials across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking their head and eyes. Theta band activity was intermittently present at movement onset and modulated by saccades. Many cells were phase-locked to theta, with few showing theta phase precession. Most hippocampal neurons encoded a mixture of spatial variables beyond place fields and a negligible number showed prominent grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional space using a multiplexed code, with head orientation and eye movement properties dominating over simple place and grid coding during free exploration.
Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.
Olivo-cerebellar loops, where anatomical patches of the cerebellar cortex and inferior olive project one onto the other, form an anatomical unit of cerebellar computation. Here, we investigated how successive computational steps map onto olivo-cerebellar loops. Lobules IX-X of the cerebellar vermis, i.e. the nodulus and uvula, implement an internal model of the inner ear’s graviceptor, the otolith organs. We have previously identified two populations of Purkinje cells that participate in this computation: Tilt-selective cells transform egocentric rotation signals into allocentric tilt velocity signals, to track head motion relative to gravity, and translation-selective cells encode otolith prediction error. Here we show that, despite very distinct simple spike response properties, both types of Purkinje cells emit complex spikes that are proportional to sensory prediction error. This indicates that both cell populations comprise a single olivo-cerebellar loop, in which only translation-selective cells project to the inferior olive. We propose a neural network model where sensory prediction errors computed by translation-selective cells are used as a teaching signal for both populations, and demonstrate that this network can learn to implement an internal model of the otoliths.
Neuroscience studies in non-human primates (NHP) often follow the rule of thumb that results observed in one animal must be replicated in at least one other. However, we lack a statistical justification for this rule of thumb, or an analysis of whether including three or more animals is better than including two. Yet a formal statistical framework for experiments with few subjects would be crucial for experimental design, ethical justification, and data analysis. Also, including three or four animals in a study creates the possibility that the results observed in one animal will differ from those observed in the others: we need a statistically justified rule to resolve such situations. Here, I present a statistical framework to address these issues. This framework assumes that conducting an experiment will produce a similar result in a large proportion of the population (termed 'representative' animals), but will produce spurious results in a substantial proportion of animals (termed 'outliers'); the fractions of 'representative' and 'outlier' animals are defined by a prior distribution. I propose a procedure in which experimenters collect results from M animals and accept results that are observed in at least N of them (the 'N-out-of-M' procedure). I show how to compute the risks α (of reaching an incorrect conclusion) and β (of failing to reach a conclusion) for any prior distribution, as a function of N and M. Strikingly, I find that the N-out-of-M model leads to a similar conclusion across a wide range of prior distributions: recording from two animals lowers the risk α and therefore ensures a reliable result, but leaves a large risk β; recording from three animals and accepting results observed in two of them strikes an efficient balance between acceptable risks α and β.
This framework gives a formal justification for the rule of thumb of using at least two animals in NHP studies, suggests that recording from three animals when possible markedly improves statistical power, provides a statistical solution for situations where results are not consistent between all animals, and may apply to other types of studies involving few animals.
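The flavor of the N-out-of-M calculation can be sketched with a deliberately simplified point prior: assume each animal is independently 'representative' with probability p, so the probability that at least N of M animals show the consistent result is binomial. The full framework instead integrates over a prior distribution on this fraction; the value p = 0.8 below is a hypothetical illustration, not a figure from the study.

```python
from math import comb

def p_at_least(n, m, p):
    """P(at least n of m independent animals give the consistent result),
    when each animal is 'representative' with probability p."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))

# Under this toy point prior, the chance that each N-out-of-M acceptance
# criterion is met (1 minus this is the risk of reaching no conclusion):
for n, m in [(2, 2), (2, 3), (3, 3)]:
    print(f"{n}-out-of-{m}: {p_at_least(n, m, 0.8):.3f}")
```

Even this crude version shows the qualitative pattern from the abstract: requiring agreement in 2 of 3 animals (0.896) leaves a much smaller risk of inconclusiveness than 2-of-2 (0.640) or unanimous 3-of-3 (0.512) criteria.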
The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can record brain activity from a significantly large portion of such an audience simultaneously. Although the technology for such a system has been readily available for a long time, the high cost as well as the large overhead in human resources and logistic planning have prohibited its development. In recent years, however, reductions in technology cost and size have led to the emergence of low-cost, consumer-oriented EEG systems, developed primarily for recreational use. Here, by combining such a low-cost EEG system with other off-the-shelf hardware and tailor-made software, we developed a scalable EEG hyper-scanning system in the lab and tested it in a cinema. The system has robust and stable performance and achieves accurate, unambiguous alignment of the data recorded by the different EEG headsets. These characteristics, combined with short preparation time and low cost, make it an ideal candidate for recording large portions of audiences.
Research on psychopathy has so far been largely limited to the investigation of high-level processes, such as emotion perception and regulation. In the present work, we investigate whether psychopathy has an effect on the estimation of fundamental physical parameters, which are computed in the brain during early stages of sensory processing. We employed a simple task in which participants had to estimate their interpersonal distance from a moving avatar and stop it at a given distance. The facial expressions of the avatars were positive, negative, or neutral. Participants carried out the task online on their home computers. We measured psychopathy level via a self-report questionnaire. Regardless of the degree of psychopathy, the facial expression of the avatars had no effect on distance estimation. Our results show that individuals with a high degree of psychopathy underestimated the distance of approaching avatars significantly less (let the avatar approach them significantly closer) than did participants with a lesser degree of psychopathy. Moreover, participants who scored high in Self-Centered Impulsivity underestimated the distance to approaching avatars significantly less (let the avatar approach closer) than participants with a low score. Distance estimation is considered an automatic process performed at early stages of visual processing. Therefore, our results imply that psychopathy affects basic early sensory processes, such as feature extraction, in the visual cortex.
Moving in synchrony with external rhythmic stimuli is an elementary function that humans regularly engage in. It is termed "sensorimotor synchronization" and is governed by two main parameters: the period and the phase of the movement with respect to the external rhythm. There has been an extensive body of research on the characteristics of these parameters, primarily once movement synchronization has reached a steady state. Particular interest has been shown in how these parameters are corrected when there are deviations from the steady-state level. However, little is known about the initial "tuning-in" interval, when one aligns the movement to the external rhythm from rest. The current work investigates this "tuning-in" period for each of the four limbs and makes several novel contributions to the understanding of sensorimotor synchronization. The results suggest that phase and period alignment are separate processes. Phase alignment involves limb-specific somatosensory memory on the order of minutes, whereas period alignment makes very limited use of memory. Phase alignment is the primary task at first, but the brain then switches to period alignment, where it spends most of its resources. Overall, this work suggests a central, cognitive role for period alignment and a peripheral, sensorimotor role for phase alignment.
Temporal anticipation is a fundamental process underlying complex neural functions such as associative learning, decision-making, and motor preparation. Here we study event anticipation in its simplest form in human participants using magnetoencephalography. We distributed events in time according to different probability density functions and presented the stimuli separately in two different sensory modalities. We found that the temporal dynamics in right parietal cortex correlate with reaction times to anticipated events. Specifically, after an event occurred, event probability was represented in right parietal activity, hinting at a functional role of the event-related potential component P300 in temporal expectancy. The results are consistent across both visual and auditory modalities. The right parietal cortex seems to play a central role in the processing of event probability density. Overall, this work contributes to the understanding of the neural processes involved in the anticipation of events in time.
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to image large numbers of dendritic spines simultaneously in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines; however, no existing automated method for spine detection comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized for the spine detection task. This transfer learning approach is data efficient while achieving high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate, and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
With the emergence of immunotherapies, the understanding of functional HLA class I antigen presentation to T cells is more relevant than ever. Current knowledge on antigen presentation is based on decades of research in a wide variety of cell types with varying antigen presentation machinery (APM) expression patterns, proteomes and HLA haplotypes. This diversity complicates the establishment of individual APM contributions to antigen generation, selection and presentation. Therefore, we generated a novel Panel of APM Knockout Cell lines (PAKC) from the same genetic origin. After CRISPR/Cas9 genome-editing of ten individual APM components in a human cell line, we derived clonal cell lines and confirmed their knockout status and phenotype. We then show how PAKC will accelerate research on the functional interplay between APM components and their role in antigen generation and presentation. This will lead to improved understanding of peptide-specific T cell responses in infection, cancer and autoimmunity.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
Motivation: DNA CpG methylation (CpGm) has proven to be a crucial epigenetic factor in the gene regulatory system. Assessment of DNA CpG methylation values via whole-genome bisulfite sequencing (WGBS) is, however, computationally extremely demanding.
Results: We present FAst MEthylation calling (FAME), the first approach to quantify CpGm values directly from bulk or single-cell WGBS reads without intermediate output files. FAME is very fast but as accurate as standard methods, which first produce BS alignment files before computing CpGm values. We present experiments on bulk and single-cell bisulfite datasets showing that data analysis can be significantly sped up, helping to address the current WGBS analysis bottleneck for large-scale datasets without compromising accuracy.
Availability: An implementation of FAME is open source and licensed under GPL-3.0 at https://github.com/FischerJo/FAME.
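For orientation, the per-CpG quantity that methylation callers such as FAME report is the standard methylation level: the fraction of bisulfite reads observed as methylated (unconverted C) among all reads covering the site. The sketch below shows only that final quantity; it says nothing about FAME's actual alignment-free implementation, and the counts are hypothetical.

```python
def cpg_methylation(methylated, unmethylated):
    """Per-CpG methylation level from bisulfite read counts.
    In BS-seq, unconverted C reads count as methylated and C->T
    converted reads as unmethylated; the CpGm value is the fraction
    of methylated observations (NaN for uncovered sites)."""
    total = methylated + unmethylated
    return methylated / total if total else float("nan")

# Hypothetical (methylated, unmethylated) counts at three CpG sites:
counts = [(18, 2), (5, 5), (0, 12)]
print([round(cpg_methylation(m, u), 2) for m, u in counts])  # [0.9, 0.5, 0.0]
```

The computational burden addressed by FAME lies upstream of this step, in aligning bisulfite-converted reads and tallying these counts genome-wide.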
Multiplex families with a high prevalence of a psychiatric disorder are often examined to identify rare genetic variants with large effect sizes. In the present study, we analysed whether the risk for bipolar disorder (BD) in BD multiplex families is influenced by common genetic variants. Furthermore, we investigated whether this risk is conferred mainly by BD-specific risk variants or by variants also associated with the susceptibility to schizophrenia or major depression. In total, 395 individuals from 33 Andalusian BD multiplex families as well as 438 subjects from an independent, sporadic BD case-control cohort were analysed. Polygenic risk scores (PRS) for BD, schizophrenia, and major depression were calculated and compared between the cohorts. Both the familial BD cases and unaffected family members had significantly higher PRS for all three psychiatric disorders than the independent controls, suggesting a high baseline risk for several psychiatric disorders in the families. Moreover, familial BD cases showed significantly higher BD PRS than unaffected family members and sporadic BD cases. A plausible hypothesis is that, in multiplex families with a general increase in risk for psychiatric disease, BD development is attributable to a high burden of common variants that confer a specific risk for BD. The present analyses, therefore, demonstrated that common genetic risk variants for psychiatric disorders are likely to contribute to the high incidence of affective psychiatric disorders in the multiplex families. The PRS explained only part of the observed phenotypic variance and rare variants might have also contributed to disease development.
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world that measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models which adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across AVLTs agreed with held-out data of dually-administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
Mapping cortical brain asymmetry in 17,141 healthy individuals worldwide via the ENIGMA Consortium
(2017)
Models of perceptual decision making have historically been designed to maximally explain behaviour and brain activity independently of their ability to actually perform tasks. More recently, performance-optimized models have been shown to correlate with brain responses to images and thus present a complementary approach to understanding perceptual processes. In the present study, we compare how these approaches account for the spatio-temporal organization of neural responses elicited by ambiguous visual stimuli. Forty-six healthy human subjects performed perceptual decisions on briefly flashed stimuli constructed from ambiguous characters. The stimuli were designed to have 7 orthogonal properties, ranging from low-level sensory features (e.g., the spatial location of the stimulus) to conceptual (whether the stimulus is a letter or a digit) and task levels (i.e., the required hand movement). Magnetoencephalography source and decoding analyses revealed that these 7 levels of representation are sequentially encoded by the cortical hierarchy and actively maintained until the subject responds. This hierarchy appeared poorly correlated with normative, drift-diffusion, and 5-layer convolutional neural network (CNN) models optimized to accurately categorize alphanumeric characters, but partially matched the sequence of activations of 3/6 state-of-the-art CNNs trained for natural image labeling (VGG-16, VGG-19, MobileNet). Additionally, we identify several systematic discrepancies between these CNNs and brain activity, revealing the importance of single-trial learning and recurrent processing. Overall, our results strengthen the notion that performance-optimized algorithms can converge towards the computational solution implemented by the human visual system, and open possible avenues to improve artificial perceptual decision making.
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "accidental" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string-side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and empirically establish accidental and canonical viewpoints. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in colour (1a) and grayscaled (1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on the performance in Experiments 1a and 1b, we determined canonical (0°-rotation) and non-canonical (120°-rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task; however, now the objects were paired with scene backgrounds which could be either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency in that object recognition was less affected by viewpoint when consistent scene information was provided, compared to inconsistent information. Our results show that viewpoint-dependence and scene context effects generalize to depth-rotated 3D objects. This supports the important role object-scene processing plays for object constancy.
Bipolar disorder (BD) is a genetically complex mental illness characterized by severe oscillations of mood and behavior. Genome-wide association studies (GWAS) have identified several risk loci that together account for a small portion of the heritability. To identify additional risk loci, we performed a two-stage meta-analysis of >9 million genetic variants in 9,784 bipolar disorder patients and 30,471 controls, the largest GWAS of BD to date. In this study, to increase power we used ~2,000 lithium-treated cases with a long-term diagnosis of BD from the Consortium on Lithium Genetics, excess controls, and analytic methods optimized for markers on the X chromosome. In addition to four known loci, results revealed genome-wide significant associations at two novel loci: an intergenic region on 9p21.3 (rs12553324, p = 5.87×10⁻⁹; odds ratio = 1.12) and markers within ERBB2 (rs2517959, p = 4.53×10⁻⁹; odds ratio = 1.13). No significant X-chromosome associations were detected and X-linked markers explained very little BD heritability. The results add to a growing list of common autosomal variants involved in BD and illustrate the power of comparing well-characterized cases to an excess of controls in GWAS.
i-TED is an innovative detection system that exploits Compton imaging techniques to achieve a superior signal-to-background ratio in (n,γ) cross-section measurements using the time-of-flight technique. This work presents the first experimental validation of the i-TED apparatus for high-resolution time-of-flight experiments and demonstrates for the first time the proposed concept for background rejection. To this aim, both the 197Au(n,γ) and 56Fe(n,γ) reactions were measured at CERN n_TOF using an i-TED demonstrator based on only three position-sensitive detectors. Two C6D6 detectors were also used to benchmark the performance of i-TED. The i-TED prototype built for this study shows a factor of ∼3 higher detection sensitivity than state-of-the-art C6D6 detectors in the ∼10 keV neutron energy range of astrophysical interest. This paper also explores the prospects for further enhancement in performance attainable with the final i-TED array, consisting of twenty position-sensitive detectors, and with new analysis methodologies based on machine-learning techniques.
In this work, inhomogeneous chiral phases are studied in a variety of Four-Fermion and Yukawa models in 2+1 dimensions at zero and non-zero temperature and chemical potentials. Employing the mean-field approximation, we do not find indications for an inhomogeneous phase in any of the studied models. We show that the homogeneous phases are stable against inhomogeneous perturbations. At zero temperature, full analytic results are presented.
We deal with the reconstruction of inclusions in elastic bodies based on monotonicity methods and derive conditions under which a resolution for a given partition can be achieved. These conditions take into account the background error as well as the measurement noise. As a main result, we show that the resolution guarantees depend heavily on the Lamé parameter μ and only marginally on λ.
Effective spectral functions of the ρ meson are reconstructed by considering the lifetimes inside different media using the hadronic transport SMASH (Simulating Many Accelerated Strongly-interacting Hadrons). Due to inelastic scatterings, resonance lifetimes are dynamically shortened (collisional broadening), even though the employed approach assumes vacuum resonance properties. Analyzing the ρ meson lifetimes allows us to quantify an effective broadening of the decay width and spectral function, which is important in order to distinguish dynamical effects from additional genuine medium modifications of the spectral functions, indicating e.g. an onset of chiral symmetry restoration. The broadening of the spectral function in a thermalized system is shown to be consistent with other theoretical calculations. The effective ρ meson spectral function is also presented for the dynamical evolution of heavy-ion collisions, revealing a clear correlation between the broadening and the system size, which is explained by an observed dependence of the width on the local hadron density. Furthermore, the difference in the results between the thermal system and the full collision dynamics is explored, which may point to non-equilibrium effects.
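The broadening described above can be illustrated with a generic relativistic Breit-Wigner parametrization of the ρ spectral function; this is a textbook form, not SMASH's actual implementation, and the in-medium width used below is an assumed illustrative value:

```python
import numpy as np

def breit_wigner(m, m0=0.776, gamma=0.149):
    """Relativistic Breit-Wigner spectral function (arbitrary normalization).

    m, m0, gamma in GeV; defaults are the approximate vacuum rho-meson
    pole mass and decay width."""
    return m**2 * gamma / ((m**2 - m0**2)**2 + m**2 * gamma**2)

m = np.linspace(0.3, 1.3, 1001)          # invariant-mass grid in GeV
vacuum = breit_wigner(m)                 # vacuum width ~149 MeV
broadened = breit_wigner(m, gamma=0.25)  # assumed illustrative in-medium width

# Collisional broadening lowers and widens the peak while the peak position
# stays near the vacuum pole mass, since only the width is modified here.
print(vacuum.max() > broadened.max())    # True
```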
The Calderón problem with finitely many unknowns is equivalent to convex semidefinite optimization
(2023)
We consider the inverse boundary value problem of determining a coefficient function in an elliptic partial differential equation from knowledge of the associated Neumann-Dirichlet-operator. The unknown coefficient function is assumed to be piecewise constant with respect to a given pixel partition, and upper and lower bounds are assumed to be known a priori.
We will show that this Calderón problem with finitely many unknowns can be equivalently formulated as a minimization problem for a linear cost functional with a convex non-linear semidefinite constraint. We also prove error estimates for noisy data, and extend the result to the practically relevant case of finitely many measurements, where the coefficient is to be reconstructed from a finite-dimensional Galerkin projection of the Neumann-Dirichlet-operator.
Our result is based on previous works on Loewner monotonicity and convexity of the Neumann-Dirichlet-operator, and the technique of localized potentials. It connects the emerging fields of inverse coefficient problems and semidefinite optimization.
The exploration of hot and dense nuclear matter: Introduction to relativistic heavy-ion physics
(2022)
This article summarizes our present knowledge about nuclear matter at the highest energy densities and its formation in relativistic heavy ion collisions. We review what is known about the structure and properties of the quark-gluon plasma and survey the observables that are used to glean information about it from experimental data.
The idea of slow-neutron capture nucleosynthesis formulated in 1957 triggered a tremendous experimental effort in different laboratories worldwide to measure the relevant nuclear physics input quantities, namely (n,γ) cross sections over the stellar temperature range (from a few eV up to several hundred keV) for most of the isotopes involved from Fe up to Bi. A brief historical review focused on total energy detectors will be presented to illustrate how advances in instrumentation have led, over the years, to the assessment and discovery of many new aspects of s-process nucleosynthesis and to the progressive refinement of theoretical models of stellar evolution. A summary will be presented on current efforts to develop new detection concepts, such as the Total-Energy Detector with γ-ray imaging capability (i-TED). The latter is based on the simultaneous combination of Compton imaging with neutron time-of-flight (TOF) techniques, in order to achieve a superior level of sensitivity and selectivity in the measurement of stellar neutron capture rates.
The production of prompt Λc+ baryons at midrapidity (|y| < 0.5) was measured in central (0-10%) and mid-central (30-50%) Pb-Pb collisions at the center-of-mass energy per nucleon-nucleon pair √sNN = 5.02 TeV with the ALICE detector. The Λc+ production yield, the Λc+/D0 production ratio, and the Λc+ nuclear modification factor RAA are reported. The results are more precise and more differential in transverse momentum (pT) and centrality with respect to previous measurements. The Λc+/D0 ratio, which is enhanced with respect to the pp measurement for 4 < pT < 8 GeV/c, is described by theoretical calculations that model the charm-quark transport in the quark-gluon plasma and include hadronization via both coalescence and fragmentation mechanisms.
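The nuclear modification factor RAA used here is, generically, the per-event yield in Pb-Pb divided by the pp yield scaled by the average number of binary nucleon-nucleon collisions ⟨N_coll⟩. A minimal sketch with invented numbers (not the measured ALICE values):

```python
# Sketch of R_AA = Y_PbPb / (<N_coll> * Y_pp). All numbers are invented.

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    return yield_aa / (n_coll * yield_pp)

yield_pbpb = [12.0, 6.5, 2.1]     # hypothetical pT-binned yields, central Pb-Pb
yield_pp = [0.010, 0.008, 0.004]  # hypothetical pp yields in the same bins
n_coll = 1600                     # assumed <N_coll> for 0-10% centrality

r_aa = [nuclear_modification_factor(a, p, n_coll)
        for a, p in zip(yield_pbpb, yield_pp)]
print([round(x, 3) for x in r_aa])  # values below 1 indicate suppression
```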
The purpose of the paper is to initiate the development of the theory of Newton-Okounkov bodies of curve classes. Our definition is based on making a fundamental property of Newton-Okounkov bodies hold also in the curve case: the volume of the Newton-Okounkov body of a curve is a volume-type function of the original curve. This construction allows us to conjecture a new relation between Newton-Okounkov bodies, which we prove in certain cases.
The present article proposes a re-reading of what "inclusion" into the sphere of the historical actually means in modern European historical discourse. It argues that this re-reading permits challenging a powerful, but problematic norm of ontological homogeneity as something to be achieved in and by historical discourse. At least some of the more conceptually profound challenges that accounts of "deep history" - of very distant pasts - pose to historical discourse have to do with pursuits of this norm. Historical theory has the potential of responding to some of these challenges and actually reverting them back at the practice of accounting for deep times in historical writing. The argument proceeds, in a first step, by analyzing the ties between modern European mortuary cultures and historical writing. In a second step, the history of humanitarian moralities is brought to bear on the analysis, in order to make visible, thirdly, the fractured presences of deep time in modern-era and contemporary historical writing. The fractures in question emerge, the article argues, from the ontological heterogeneity of historical knowledge. So in the end, a position beyond ontological homogeneity is adumbrated.
Release of neuropeptides from dense core vesicles (DCVs) is essential for neuromodulation. Compared to the release of small neurotransmitters, much less is known about the mechanisms and proteins contributing to neuropeptide release. By optogenetics, behavioral analysis, electrophysiology, electron microscopy, and live imaging, we show that synapsin SNN-1 is required for cAMP-dependent neuropeptide release in Caenorhabditis elegans hermaphrodite cholinergic motor neurons. In synapsin mutants, behaviors induced by the photoactivated adenylyl cyclase bPAC, which we previously showed to depend on acetylcholine and neuropeptides (Steuer Costa et al., 2017), are altered like in animals with reduced cAMP. Synapsin mutants have slight alterations in synaptic vesicle (SV) distribution; however, a defect in SV mobilization was apparent after channelrhodopsin-based photostimulation. DCVs were largely affected in snn-1 mutants: DCVs were ~30% reduced in synaptic terminals, and not released following bPAC stimulation. Imaging axonal DCV trafficking, also in genome-engineered mutants in the serine-9 protein kinase A phosphorylation site, showed that synapsin captures DCVs at synapses, making them available for release. SNN-1 co-localized with immobile, captured DCVs. In synapsin deletion mutants, DCVs were more mobile and less likely to be caught at release sites, and in non-phosphorylatable SNN-1B(S9A) mutants, DCVs traffic less and accumulate, likely by enhanced SNN-1 dependent tethering. Our work establishes synapsin as a key mediator of neuropeptide release.
Using shame as its guiding thread, the present contribution attempts to gain access to Agamben's theory of subjectivity in order to subject the theoretical and historical presuppositions of his ethics to an examination that can at the same time connect to Thomä's critique. The starting point of the following reflections is Agamben's study of the 'homo sacer'. A second step turns to the theory of shame presented in "Was von Auschwitz bleibt". The critical discussion of Agamben's ethics opens the engagement with the key witness whom "Was von Auschwitz bleibt" presents: Primo Levi. This engagement is continued and sharpened by the way in which Levi's question "Is this a man?" is outdone in Imre Kertész's "Roman eines Schicksallosen". Against the background of the central significance of shame in Primo Levi and Imre Kertész, the final part returns to Agamben's ethics in order to revise its foundations by recourse to Aristotle.
If projection and transference represent similar terms that imply a fundamental form of ignorance, the aim of this investigation cannot be to draw a sharp distinction between projection and transference. Of course, the dialectic of inside and outside does not play the central role in transference that it does in projection. In a certain way, the notion of projection concerns all forms of perception and seems to be wider than the notion of transference. On the other hand, the notion of transference as a poetic act of creating metaphorical analogies seems to be wider than that of projection. My interest in the following lines lies not in attempting to draw a viable distinction between the two terms, but in looking at their interplay in a novel that discusses all forms of archaism, primitivism and regression commonly linked with projection, a novel that at the same time tries to give an explanation of the foundation of modern art. Thomas Mann's Doktor Faustus offers an insight not only into the combination of projection and love, but also into ignorance as the common ground of projection and transference. I will therefore first try to determine the modernity of Thomas Mann's novel with regard to the abounding intertextual dimension that characterizes the text, and then closely examine the central scene of the novel, the confrontation between Adrian Leverkühn and the obscure figure of the devil.
As Rolf Parr makes clear in his essay 'Liminale und andere Übergänge. Theoretische Modellierung von Grenzzonen, Normalitätsaspekten, Schwellen, Übergängen und Zwischenräumen in Literatur- und Kulturwissenschaft', the theory of intertextuality and intermediality that he advocates, following the work of Michel Foucault and Jürgen Link, is essentially shaped by a moment of border crossing. Clearly contoured borders are replaced by thresholds as "spatial-topographical zones of indecision" that simultaneously function as temporal thresholds of memory. Drawing on Foucault, Parr directs his attention not only to discursive limits of the sayable created by mechanisms of exclusion, prohibitions, etc. He also points to Foucault's early concept of heterotopia, in which Foucault relates the drawing of boundaries to particular spatial structures. Parr's own interest in this connection lies in transferring Foucault's discourse-theoretical work into a theory of interdiscourse, which would precisely have to cross the thresholds of individual discourses. I would like to set a different accent here and work out the significance of threshold experiences in Foucault himself. I first concentrate on the concept of the historical a priori from 'Die Ordnung der Dinge', and then turn to the concept of heterotopia, which in a certain way accompanies and complements the genesis of 'Die Ordnung der Dinge' in the 1960s. Comparing Foucault's thinking of thresholds with that of Walter Benjamin will at the same time make it possible to identify the theme of the liminal, in Parr's sense, as a basic motif of Foucault's thought.
The question of what literature is seems not only to be the most fundamental one facing literary studies; it is at the same time its most abysmal. It is fundamental because it asks about the essence of literature and thus invokes something seemingly self-evident that accompanies any engagement with literature. It is abysmal because even the apparently most self-evident definitions of literature have so far not led to a unified conception of literature's essence. Thus, with the very first question posed to it, literary studies faces a seemingly insoluble dilemma. Asked about the object that belongs to it, and that would accordingly be able to account for its legitimacy as a scholarly discipline, it remains in the dark.
Is literature, understood as a deviation from or a fulfilment of the expressive function of language, a form of discourse with access to the realm of truth, or does it rather block any systematic access to truth? And what is gained at all by relating literature and truth to each other? It is the merit of Stanley Cavell's work to have given these questions a new urgency that reaches beyond the opposition between analytic philosophy and deconstruction. The following pages pose the question of truth in literature once more through an engagement with Cavell's writings, in order to determine the reach as well as the limits of philosophical discourse on literature. [...] What is fundamentally at issue for Cavell is, on the one hand, the knowledge that philosophy can have of the world and, on the other, the knowledge that philosophy and literature can gain from each other in their shared and yet different confrontation with scepticism. To the extent that he searches for ways of overcoming scepticism, Cavell first acknowledges specific forms of non-knowledge, which he addresses in the context of philosophical and literary texts alike. A special place in this connection is occupied by his repeated recourse to Shakespeare, which culminates in "Der Anspruch der Vernunft" in a reading of "Othello" that, by analysing the tragedy as an expression of and response to scepticism, allows the problem of knowledge and non-knowledge to be grasped. It therefore suggests itself to subject Cavell's reflections on the connection between tragedy and scepticism to a critical reading that, within the frame of his own inquiry, asks once more about the fundamental relation between literature and the philosophical pursuit of truth.
At the centre of the text, it seems, stands the mournful working-through of an event long past, and with it memory and farewell as basic motifs of Droste-Hülshoff's work, as they also find expression in other texts such as "Meine Toten" or the Byron poem "Lebt Wohl". In the "Taxuswand", Droste-Hülshoff traverses a long span of time, the eighteen years separating the encounter from its poetic treatment. The question this raises is that of the fundamental relation between poetic acts of remembrance and biographical experience in the work of Annette von Droste-Hülshoff. That the two, much as in Baudelaire, do not simply coincide but come apart is the conjecture pursued in what follows.
We tested 6–7-year-olds, 18–22-year-olds, and 67–74-year-olds on an associative memory task that consisted of knowledge-congruent and knowledge-incongruent object–scene pairs that were highly familiar to all age groups. We compared the three age groups on their memory congruency effect (i.e., better memory for knowledge-congruent associations) and on a schema bias score, which measures the participants’ tendency to commit knowledge-congruent memory errors. We found that prior knowledge similarly benefited memory for items encoded in a congruent context in all age groups. However, for associative memory, older adults and, to a lesser extent, children overrelied on their prior knowledge, as indicated by both an enhanced congruency effect and schema bias. Functional Magnetic Resonance Imaging (fMRI) performed during memory encoding revealed an age-independent memory x congruency interaction in the ventromedial prefrontal cortex (vmPFC). Furthermore, the magnitude of vmPFC recruitment correlated positively with the schema bias. These findings suggest that older adults are most prone to rely on their prior knowledge for episodic memory decisions, but that children can also rely heavily on prior knowledge that they are well acquainted with. Furthermore, the fMRI results suggest that the vmPFC plays a key role in the assimilation of new information into existing knowledge structures across the entire lifespan. vmPFC recruitment leads to better memory for knowledge-congruent information but also to a heightened susceptibility to commit knowledge-congruent memory errors, in particular in children and older adults.
During the first two days of August 2016 a seismic crisis occurred on Brava, Cape Verde, which – according to observations based on a local seismic network – was characterized by more than a thousand volcano-seismic signals. Brava is considered an active volcanic island, although it has not experienced any historic eruptions. Seismicity significantly exceeded the usual level during the crisis. We report on results based on data from a temporary seismic-array deployment on the neighbouring island of Fogo at a distance of about 35 km. The array was in operation from October 2015 to December 2016 and recorded a total of 1343 earthquakes, 355 of which were localized. On 1 and 2 August we observed 54 earthquakes, 25 of which could be located beneath Brava. We further evaluate the observations with regard to possible precursors to the crisis and its continuation. Our analysis shows a migration of seismicity around Brava, but no distinct precursory pattern. However, the observations suggest that similar earthquake swarms commonly occur close to Brava. The results further confirm the advantages of seismic arrays as tools for the remote monitoring of regions with limited station coverage or access.
In the last decades, energy modelling has supported energy planning by offering insights into the dynamics between energy access, resource use, and sustainable development. Especially in recent years, there has been an attempt to strengthen the science-policy interface and increase the involvement of society in energy planning processes. This has, both in the EU and worldwide, led to the development of open-source and transparent energy modelling practices. This paper describes the role of an open-source energy modelling tool in the energy planning process and highlights its importance for society. Specifically, it describes the existence and characteristics of the relationship between developing an open-source, freely available tool and its application, dissemination and use for policy making. Using the example of the Open Source energy Modelling System (OSeMOSYS), this work focuses on practices that were established within the community and that made the framework's development and application both relevant and scientifically grounded. Keywords: Energy system modelling tool, Open-source software, Model-based public policy, Software development practice, Outreach practice
Introduction: In the development of bio-enabling formulations, innovative in vivo predictive tools to understand and predict the in vivo performance of such formulations are needed. Etravirine, a non-nucleoside reverse transcriptase inhibitor, is currently marketed as an amorphous solid dispersion (Intelence® tablets). The aims of this study were 1) to investigate and discuss the advantages of using biorelevant in vitro setups in simulating the in vivo performance of Intelence® 100 mg and 200 mg tablets, in the fed state, 2) to build a Physiologically Based Pharmacokinetic (PBPK) model by combining experimental data and literature information with the commercially available in silico software Simcyp® Simulator V17.1 (Certara UK Ltd.), and 3) to discuss the challenges when predicting the in vivo performance of an amorphous solid dispersion and identify the parameters which influence the pharmacokinetics of etravirine most.
Methods: Solubility, dissolution and transfer experiments were performed in various biorelevant media simulating the fasted and fed state environment in the gastrointestinal tract. An in silico PBPK model for healthy volunteers was developed in the Simcyp® Simulator, using in vitro results and data available from the literature as input. The impact of pre- and post-absorptive parameters on the pharmacokinetics of etravirine was investigated using simulations of various scenarios.
Results: In vitro experiments indicated a large effect of naturally occurring solubilizing agents on the solubility of etravirine. Interestingly, supersaturated concentrations of etravirine were observed over the entire duration of dissolution experiments on Intelence® tablets. Coupling the in vitro results with the PBPK model provided the opportunity to investigate two possible absorption scenarios, i.e. with or without implementation of precipitation. The results from the simulations suggested that a scenario in which etravirine does not precipitate is more representative of the in vivo data. On the post-absorptive side, it appears that the concentration dependency of the unbound fraction of etravirine in plasma has a significant effect on etravirine pharmacokinetics.
Conclusions: The present study underlines the importance of combining in vitro and in silico biopharmaceutical tools to advance our knowledge in the field of bio-enabling formulations. Future studies on other bio-enabling formulations can be used to further explore this approach to support rational formulation design as well as robust prediction of clinical outcomes.
Within the last decades, western democracies have experienced a rise of inequality, with the gap between lower and upper class citizens steadily increasing and a widespread sentiment of growing inequalities also in the political sphere. Against this background, and in the context of the current "crisis of democracy", democratic innovations such as direct democratic instruments are discussed as a very popular means to bring citizens back in. However, research on direct democracy has produced rather inconsistent results with regard to the question of which effects referenda and initiatives have on equality. Studies in this field are often limited to single countries and certain aspects of equality. Moreover, most existing studies look at the mere availability of direct democratic instruments instead of actual bills that are put to a vote. This paper aims to take a first step towards filling these gaps by giving an explorative overview of the outputs of direct democratic bills on multiple equality dimensions, analyzing all national referenda and initiatives in European democracies between 1990 and 2015. How many pro- and contra-equality bills have been put to a vote, how many of those succeeded at the ballot, and are there differences between country groups? Our findings show that a majority of direct democratic bills were not related to equality at all. Regarding the successful bills, we detect some regional differences along with the general tendency that there are more pro- than contra-equality bills. Our paper sheds new light on the question of whether direct democracy can serve as an appropriate means to complement representative democracy and to shape democratic institutions in the future. The potential of direct democracy in fostering or impeding equality should be an important criterion for the assessment of claims to extend decision-making by citizens.
Purpose: The design of biorelevant conditions for in vitro evaluation of orally administered drug products is contingent on obtaining accurate values for physiologically relevant parameters such as pH, buffer capacity and bile salt concentrations in upper gastrointestinal fluids.
Methods: The impact of sample handling on the measurement of pH and buffer capacity of aspirates from the upper gastrointestinal tract was evaluated, with a focus on centrifugation and freeze-thaw cycling as factors that can influence results. Since bicarbonate is a key buffer system in the fasted state and is used to represent conditions in the upper intestine in vitro, variations on sample handling were also investigated for bicarbonate-based buffers prepared in the laboratory.
Results: Centrifugation and freezing significantly increase pH and decrease buffer capacity in samples obtained by aspiration from the upper gastrointestinal tract in the fasted state and in bicarbonate buffers prepared in vitro. Comparison of data suggested that the buffer system in the small intestine does not derive exclusively from bicarbonates.
Conclusions: Measurement of both pH and buffer capacity immediately after aspiration is strongly recommended as "best practice" and should be adopted as the standard procedure for measuring pH and buffer capacity in aspirates from the gastrointestinal tract. Only data obtained in this way provide a valid basis for setting the physiological parameters in physiologically based pharmacokinetic models.
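Buffer capacity as discussed above is, in general terms, the amount of strong acid or base needed to change the pH of a sample by one unit. A minimal finite-difference sketch, using hypothetical titration values rather than data from this study:

```python
# Finite-difference estimate of buffer capacity, beta = (n/V) / |dpH|:
# moles of strong base per litre needed to shift the pH by one unit.
# Titration numbers are hypothetical, not data from this study.

def buffer_capacity(moles_titrant, volume_l, ph_before, ph_after):
    return (moles_titrant / volume_l) / abs(ph_after - ph_before)

# Assumed example: 0.05 mmol NaOH raises the pH of a 25 mL aspirate 6.5 -> 6.9
beta = buffer_capacity(5e-5, 0.025, 6.5, 6.9)
print(round(beta, 4))  # mol / (L * pH unit)
```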
Introduction: When developing bio-enabling formulations, innovative tools are required to understand and predict in vivo performance and may facilitate approval by regulatory authorities. EMEND® is an example of such a formulation, in which the active pharmaceutical ingredient, aprepitant, is nano-sized. The aims of this study were 1) to characterize the 80 mg and 125 mg EMEND® capsules in vitro using biorelevant tools, 2) to develop and parameterize a physiologically based pharmacokinetic (PBPK) model to simulate and better understand the in vivo performance of EMEND® capsules and 3) to assess which parameters primarily influence the in vivo performance of this formulation across the therapeutic dose range.
Methods: Solubility, dissolution and transfer experiments were performed in various biorelevant media simulating the fasted and fed state environment in the gastrointestinal tract. An in silico PBPK model for healthy volunteers was developed in the Simcyp Simulator, informed by the in vitro results and data available from the literature.
Results: In vitro experiments indicated a large effect of native surfactants on the solubility of aprepitant. Coupling the in vitro results with the PBPK model led to an appropriate simulation of aprepitant plasma concentrations after administration of 80 mg and 125 mg EMEND® capsules in both the fasted and fed states. Parameter Sensitivity Analysis (PSA) was conducted to investigate the effect of several parameters on the in vivo performance of EMEND®. While nano-sizing aprepitant improves its in vivo performance, intestinal solubility remains a barrier to its bioavailability and thus aprepitant should be classified as DCS IIb.
Conclusions: The present study underlines the importance of combining in vitro and in silico biopharmaceutical tools to understand and predict the absorption of this poorly soluble compound from an enabling formulation. The approach can be applied to other poorly soluble compounds to support rational formulation design and to facilitate regulatory assessment of the bio-performance of enabling formulations.
Objectives: Supersaturating formulations hold great promise for delivery of poorly soluble active pharmaceutical ingredients (APIs). To profit from supersaturating formulations, precipitation is hindered with precipitation inhibitors (PIs), maintaining drug concentrations for as long as possible. This review provides a brief overview of supersaturation and precipitation, focusing on precipitation inhibition. Trial-and-error PI selection will be examined alongside established PI screening techniques. Primarily, however, this review will focus on recent advances that utilise advanced analytical techniques to increase mechanistic understanding of PI action and systematic PI selection.
Key findings: Advances in mechanistic understanding have been made possible by the use of analytical tools such as spectroscopy, microscopy and mathematical and molecular modelling, which are reviewed herein. Using these techniques, PI selection can instead be guided by molecular rationale. However, more work is required to see widespread application of such an approach for PI selection.
Conclusions: PIs are becoming increasingly important in enabling formulations. Trial-and-error approaches have seen success thus far. However, it is essential to learn more about the mode of action of PIs if the most optimal formulations are to be realised. Robust analytical tools, and the knowledge of where and how they can be applied, will be essential in this endeavour.
Supersaturating formulations are widely used to improve the oral bioavailability of poorly soluble drugs. However, supersaturated solutions are thermodynamically unstable and such formulations often must include a precipitation inhibitor (PI) to sustain the increased concentrations to ensure that sufficient absorption will take place from the gastrointestinal tract. Recent advances in understanding the importance of drug-polymer interaction for successful precipitation inhibition have been encouraging. However, there still exists a gap in how this newfound understanding can be applied to improve the efficiency of PI screening and selection, which is still largely carried out with trial and error-based approaches. The aim of this study was to demonstrate how drug-polymer mixing enthalpy, calculated with the Conductor-like Screening Model for Real Solvents (COSMO-RS), can be used as a parameter to select the most efficient precipitation inhibitors, and thus realise the most successful supersaturating formulations. This approach was tested for three different Biopharmaceutical Classification System (BCS) II compounds: dipyridamole, fenofibrate and glibenclamide, formulated with the supersaturating formulation, mesoporous silica. For all three compounds, precipitation was evident in mesoporous silica formulations without a precipitation inhibitor. Of the nine precipitation inhibitors studied, there was a strong positive correlation between the drug-polymer mixing enthalpy and the overall formulation performance, as measured by the area under the concentration-time curve in in vitro dissolution experiments. The data suggest that a rank-order based approach using calculated drug-polymer mixing enthalpy can be reliably used to select precipitation inhibitors for a more focused screening. Such an approach improves efficiency of precipitation inhibitor selection, whilst also improving the likelihood that the most optimal formulation will be realised.
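The rank-order approach described above can be sketched with a simple Spearman correlation between calculated mixing enthalpies and dissolution AUCs. All values below are invented for illustration (the sign of the correlation depends on the enthalpy convention: with more negative, i.e. more favourable, enthalpies, a strong monotonic trend appears as a rank correlation near -1):

```python
# Rank-order screening sketch: Spearman correlation (no tie handling) between
# hypothetical drug-polymer mixing enthalpies and dissolution AUCs.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

mixing_enthalpy = [-4.2, -3.1, -1.0, -2.5, -0.3]  # kJ/mol, assumed
dissolution_auc = [980, 760, 410, 650, 300]       # ug*min/mL, assumed

rho = spearman(mixing_enthalpy, dissolution_auc)
best = min(range(len(mixing_enthalpy)), key=lambda i: mixing_enthalpy[i])
print(rho, best)  # perfectly monotonic invented data give rho = -1.0
```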
Objectives: The objective of this review is to provide an overview of PK/PD models, focusing on drug-specific PK/PD models and highlighting their value-added in drug development and regulatory decision-making.
Key findings: Many PK/PD models, with varying degrees of complexity and physiological understanding, have been developed to evaluate the safety and efficacy of drug products. In special populations (e.g. pediatrics), in cases where there is genetic polymorphism and in other instances where therapeutic outcomes are not well described solely by PK metrics, the implementation of PK/PD models is crucial to assure the desired clinical outcome. Since dissociation between the pharmacokinetic and pharmacodynamic profiles is often observed, it is proposed that physiologically-based pharmacokinetic (PBPK) and PK/PD models be given more weight by regulatory authorities when assessing the therapeutic equivalence of drug products.
Summary: Modeling and simulation approaches already play an important role in drug development. While the field is slowly moving away from "one-size-fits-all" PK methodologies for assessing therapeutic outcomes, further work is required to increase confidence in the translatability of PK/PD models and their ability to predict various clinical scenarios, to encourage more widespread implementation in regulatory decision-making.
Background: Drugs used to treat gastrointestinal diseases (GI drugs) are widely used either as prescription or over-the-counter (OTC) medications and belong to both the ten most prescribed and ten most sold OTC medications worldwide. Current clinical practice shows that in many cases, these drugs are administered concomitantly with other drug products. Due to their metabolic properties and mechanisms of action, the drugs used to treat gastrointestinal diseases can change the pharmacokinetics of some co-administered drugs. In certain cases, these interactions can lead to failure of treatment or to the occurrence of serious adverse events. The mechanism of interaction depends highly on drug properties and differs among therapeutic categories. Understanding these interactions is essential to providing recommendations for optimal drug therapy.
Objective: To discuss the most frequent interactions between GI and other drugs, including identification of the mechanisms behind these interactions, where possible.
Conclusion: Interactions with GI drugs are numerous and can be highly significant clinically. Whilst alterations in bioavailability due to changes in solubility, dissolution rate and metabolic interactions can be (for the most part) easily identified, interactions that are mediated through other mechanisms, such as permeability or microbiota, are less well understood. Future work should focus on characterizing these aspects.
Motivation: The topic of this paper is the estimation of alignments and mutation rates based on stochastic sequence-evolution models that allow insertions and deletions of subsequences ("fragments") and not just single bases. The model we propose is a variant of a model introduced by Thorne, Kishino, and Felsenstein (1992). The computational tractability of the model depends on certain restrictions in the insertion/deletion process; we discuss the possible effects of these restrictions.
Results: The process of fragment insertion and deletion in the sequence-evolution model induces a hidden Markov structure at the level of alignments and thus makes possible efficient statistical alignment algorithms. As an example we apply a sampling procedure to assess the variability in alignment and mutation parameter estimates for HVR1 sequences of human and orangutan, improving results of previous work. Simulation studies give evidence that estimation methods based on the proposed model also give satisfactory results when applied to data for which the restrictions in the insertion/deletion process do not hold.
Availability: The source code of the software for sampling alignments and mutation rates for a pair of DNA sequences according to the fragment insertion and deletion model is freely available from www.math.uni-frankfurt.de/~stoch/software/mcmcsalut under the terms of the GNU public license (GPL, 2000).
Within the last year, expressions of second-hand embarrassment on Twitter increased significantly. We show how this relates to the current situation in U.S. politics under Trump and provide two explanations for why people feel this way in response to his actions. First, compared to former politicians, Trump's norm violations seem intentional. Second, intentional norm violations specifically threaten the social integrity of in-group members, in this case U.S. citizens. We theorize that these strong, frequent and widespread feelings of second-hand embarrassment motivate political actions to prevent further harm to individuals' self-concept and protect their social integrity.
The significance of John McDowell's philosophical programme, which already undertakes a revolutionary reorientation in theoretical philosophy, can only be fully appreciated once its consequences for practical philosophy are also taken into view. Admittedly, Mind and World starts primarily from dilemmas of epistemology. But McDowell's proposal to abandon the equation of external nature with the meaning-free realm of natural laws in favour of a conception of reasons in the world opens up such a novel perspective on the nature of moral judgements that it almost seems as if McDowell's theoretical programme had been designed with this gain for practical philosophy in mind.
According to his own understanding, Jürgen Habermas' Theory of Communicative Action offers a new account of the normative foundations of critical theory. Habermas' motivating insight is that neither a transcendental nor a metaphysical solution to the problem of normativity, nor a merely hermeneutic reconstruction of historically given norms, is sufficient to clarify the normative foundations of critical theory. In response to this insight, Habermas develops a novel account of normativity which locates the normative demands upon which critical theory draws within the socially instituted practice of communicative understanding. Although Habermas has claimed otherwise, this new foundation for critical theory constitutes a novel and innovative form of "immanent critique". To argue for and to clarify this claim, I offer, in section 1, a formal account of immanent critique and distinguish between two different ways of carrying out such a critique. In section 2, I examine Habermas' rejection of the first, hermeneutic option. Against this background, I then show, in section 3, that the Theory of Communicative Action attempts to formulate an immanent critique of contemporary societies according to a second, "practice-based" model. However, because Habermas, as I will argue in section 4, commits himself to an implausibly narrow view in regard to one central element of such a model, namely the social ontology of immanent normativity, his normative critique cannot develop its full potential (section 5).
The present study deals with various aspects of the image of women in both the German and the Arabic, or Oriental, literature of the Middle Ages.
The aspects examined include, for example, the position of women in society in both religious and social terms, the role and influence of women, descriptions of female beauty, the relationship between man and woman, courtly love (Minne) as a central motif, marriage as a socially conditioned way of life, and the noble lady versus the peasant woman as contrasting figures.
The study draws on specific text types of medieval literature in order to provide a concrete picture by means of this inductive method and to highlight the various main elements of the investigation.
These works are the classic courtly Arthurian romances, e.g. Erec, Iwein, Parzival, and Tristan and Isolde in medieval German literature, as well as tales from the collection of One Thousand and One Nights, the Antar romance, and the story of Layla and Majnun in medieval Oriental literature.
The image of women is a topic that remains a focal point of many literary works to this day, but one that held particular significance throughout the entire epoch of the Middle Ages. This may be attributed to the different ways in which the image of women is portrayed and expressed in the various cultures.
Wikis in Higher Education Teaching
(2012)
This contribution provides an overview of usage scenarios for wikis in learning and teaching processes and their suitability for collaborative knowledge production, while also addressing limitations, preconditions, and design recommendations. In addition, it documents experiences with various wiki applications at the University of Frankfurt, ranging from accompanying use in seminars to the student-initiated provision of study materials. The aspects elaborated beforehand are then revisited in light of these examples to demonstrate their practical relevance.
While early-career researchers are trained strategically and on a sound scholarly basis, complete with various examinations (Bachelor's, Master's, doctorate, possibly habilitation), nothing even remotely comparable exists in the area of teaching. The usual "qualification" of novice teachers mostly takes place "on the job" (cf. Conradi, 1983), i.e. through one's own trial and error after observing other teachers during one's own studies. Under good conditions, the teacher has attended continuing-education courses on good teaching beforehand or alongside. A strategic embedding of these staff-development measures, as is intended on the research side, does not exist. This contribution presents possible formats and elaborates on one of them by way of example.
Continuing-education courses in higher-education didactics often meet with little acceptance among established university teachers. It is assumed that demonstrating scientific evidence for such didactic measures increases their acceptance in universities. To link empirical research and continuing education in higher-education didactics, we propose a spiral model. In practice, relevant results are developed from theoretical and empirical foundations for treatment in continuing-education courses. The application of the spiral model is illustrated with a practical example on the topic of "Intercultural Communication at the University".
The internationalisation of German universities has increased sharply in recent years. Dealing with students from different cultures has long been everyday reality for teachers. However, communication between members of different cultures does not always run smoothly. To counteract possible difficulties, some universities employ intercultural training to raise awareness of intercultural differences. As part of a continuing-education programme in higher-education didactics for teachers, the authors developed and deployed an intercultural training course. This article reports on the structure and aims of the training. It also presents a study design with which the influence of culture on online communication in teaching was investigated.
This book is a full reference grammar of Qiang, one of the minority languages of southwest China, spoken by about 70,000 Qiang and Tibetan people in Aba Tibetan and Qiang Autonomous Prefecture in northern Sichuan Province. It belongs to the Qiangic branch of Tibeto-Burman (one of the two major branches of Sino-Tibetan). The dialect presented in the book is the Northern Qiang variety spoken in Ronghong Village, Yadu Township, Chibusu District, Mao County. This book, the first book-length description of the Qiang language in English, is the result of many years of work on the language.
Within the federal-state programme "Qualitätspakt Lehre", Goethe University Frankfurt successfully acquired the programme "Starker Start ins Studium". As a result, the Institute of Psychology now has the staffing capacity to improve the academic and social integration of new psychology students in the six-semester Bachelor's programme in psychology. To this end, two obligatory two-semester teaching modules were developed. This contribution describes the overarching teaching concept and illustrates its implementation in psychology as a practical example.
Understanding lecturers have less expertise: effects of linguistic accommodation to laypersons
(2012)
In interaction with students, written online communication has become an important working medium for every teacher. For forming judgements about each other, the interaction partners have only the written text, with its lexical and grammatical features, at their disposal. The degree of lexical accommodation to a student's choice of words can therefore influence students' evaluation of their lecturers with respect to various personality traits. In the present study, students each rated two lecturers on understanding, conscientiousness, and intellect (IPIP; Goldberg, Johnson, Eber et al., 2006) on the basis of an email exchange. The degree of the lecturers' lexical accommodation was varied. Students rated lecturers with colloquial word choice as more understanding and more conscientious, but tended to rate them as less knowledgeable.
This contribution presents approaches at Goethe University Frankfurt to fostering teacher-education students' reflection on their aptitude and the counselling competence of their supervising teachers: For the students, various measures were developed and implemented that promote reflection on personal suitability for the teaching profession and help compensate for existing deficits at an early stage. For the supervising teachers (at university and school), a continuing-education course in higher-education didactics was developed and deployed to strengthen their counselling competence.
We propose a framework of individual problem-solving and communicative demands (IproCo) that bridges the gap between models from cognitive psychology and communication pragmatics. Furthermore, we present two experiments conducted to identify factors influencing the demands and to test possibilities for support. The experiments employed a remote collaborative picture-sorting task with concrete and abstract pictures and applied non-interactive conditions compared to interactive conditions. In a first experiment, the influence of the postulated demands on collaboration process and outcome was analysed, and the impact of shared applications was tested. In a second experiment, we evaluated instructional support measures consisting of model collaboration and a collaboration script. The collaboration process showed benefits of the support but the outcome did not. However, the support measures fostered the collaboration process even in the particularly difficult conditions with non-interactive communication. We discuss the impact of the IproCo framework and apply it to other tasks.
Effective knowledge communication presupposes common ground (Clark & Brennan, 1991) that needs to be established and maintained. This is particularly difficult in remote communication as well as in non-interactive settings, because the speaker cannot use gestures or facial expressions and has to tailor utterances to the addressee without receiving feedback. In these situations, the speaker may achieve mutual understanding, for example, by adopting the addressee's perspective. We present a study conducted to test the impact of instructions that support and hinder individual problem solving and knowledge communication. We used a picture-sorting task requiring individual cognitive processes of feature search (Treisman & Gelade, 1980) in addition to referential communication. As our study focused on the design of utterances, all participants assumed the role of speaker. Participants were told that their descriptions would be recorded and then listened to later on by a participant in the role of addressee. Eight sets of pictures were used, which varied on two dimensions: the individual cognitive demands of detecting the relevant features (varied as a between-subject factor) and the communicative demands (varied as a within-subject factor). A further between-subject factor was the type of instructions: The participants received either a collaboration script as supporting instructions, or time pressure was applied to induce stress, or else they were given no additional instructions (control group). We used the speakers' verbal utterances to examine the quality of the speakers' descriptions. For both dimensions of difficulty, we found the expected effects. In the conditions with a collaboration script, fewer irrelevant features were mentioned and fewer features were described with delay.
In the conditions with time pressure, there were fewer irrelevant features described, but the number of correctly described pictures was impaired through the fact that relevant features were also neglected. Under time pressure, speakers tended to provide ambiguous descriptions regarding the frame of reference.
Venturing into and then proceeding through the work of Thomas Bernhard is not exactly like taking a walk, yet the walk is a recurring motif in Bernhard's work (alongside that of Handke, Sebald, and Walser, to name only a few walkers in twentieth-century German-language literature). Bernhard's figures walk, march, and run, but in a "direction opposite" to the one indicated by Stifter. Sometimes their paths wind through nature, as when they enter a forest never to return (Gelo, Al limite boschivo, La partita a carte); sometimes they march within the confines of their "house-prison", following the labyrinthine and endless paths of their own minds (La Fornace, Cemento); at other times they move in an urban, metropolitan setting, in Rome in Estinzione (where the walk with the pupil Gambetti retains an Aristotelian, peripatetic aura) or, more often, in Vienna.
A title such as "Negative Dialectics and Negative Anthropology" might seem to announce a comparison between Th. W. Adorno and Ulrich Sonnemann, following a cue borrowed from the "Introduction" to "Negative Dialectics" (1966). Instead, contrary to such an expectation, the "Negative Anthropologie" referred to in this essay is that of Günther Stern/Anders. The idea of a comparison between the two perspectives arises from the curiosity to understand the correspondence between "negative dialectics" and "negative anthropology", where the second phrase denotes Anders's conception of a humanity inadequate to the world. That this is not an oddity but a legitimate question is confirmed, indirectly, by Adorno himself, who, in a note in the section of "Negative Dialectics" devoted to the reading of Heidegger's thought, invokes precisely Anders's lesson.
Integer point sets minimizing average pairwise L1 distance: What is the optimal shape of a town?
(2010)
An n-town, n ∈ ℕ, is a group of n buildings, each occupying a distinct position on a 2-dimensional integer grid. If we measure the distance between two buildings along the axis-parallel street grid, then an n-town has optimal shape if the sum of all pairwise Manhattan distances is minimized. This problem has been studied for cities, i.e., the limiting case of very large n. For cities, it is known that the optimal shape can be described by a differential equation, for which no closed-form solution is known. We show that optimal n-towns can be computed in O(n^7.5) time. This is also practically useful, as it allows us to compute optimal solutions up to n=80.
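The objective function in question is easy to make concrete. The following sketch (our illustration, not the paper's O(n^7.5) algorithm) computes the sum of pairwise Manhattan distances for a candidate n-town and compares a compact square against a stretched row of the same size:

```python
from itertools import combinations

def town_cost(buildings):
    """Sum of pairwise Manhattan (L1) distances between buildings,
    given as (x, y) integer grid positions."""
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in combinations(buildings, 2))

# A 2x2 square of 4 buildings versus a 1x4 row of 4 buildings:
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
row = [(0, 0), (1, 0), (2, 0), (3, 0)]

assert town_cost(square) == 8
assert town_cost(row) == 10  # the compact shape has lower total cost
```

This brute-force evaluation of a single candidate is quadratic in n; the difficulty addressed by the paper is searching over all shapes, which is what requires the more sophisticated algorithm.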
We philologists have it easy when it comes to talking. We watch as others, who mostly do not belong to our guild, render the vast abundance of written work from its original language into all manner of languages, and we relate to this as interested spectators. We have every reason to be pleased about it: without this cross-border exchange of goods and ideas, the field on which we graze would remain narrower and more parcelled out than the authors' intentions, and the subject matter itself, would warrant. We can (provided we have the necessary overview) praise what the translators have achieved: the correspondences they have discovered or invented; the power, suppleness, and variety of modulation they have first activated in their target languages, whether with thousands of illuminating finds or with the whole tone and sweep of their translations. If we trust ourselves to do so, we can dabble in their craft and translate individual passages or entire works ourselves. We can criticise them where the translations before us seem too flat, or where they fall short of the original, in substance or style, more than necessary; we can propose improvements. When we quote translations and find it necessary to modify them, we move in a grey zone between respect for the translator, delight in still further recognised potentials of the text, and the urge to convey to our listeners or readers, in our own language, as much as possible of everything we have read out of the original.
The central thesis of the present essay is that there is no Adam Smith problem in the traditional sense, but that there is indeed a self-contradiction in Adam Smith's economic theory.
The essay first treats the close systematic connection between Smith's economic and ethical theory. This connection rests on the assumption of a supreme being and of a pre-established harmony inferred from it. The religious trust in a natural order is matched by the belief in the justice of the market. Smith's further political analysis, however, produces a self-contradiction. Smith shows that entrepreneurial self-interest conflicts with the general interest of society, and that entrepreneurs moreover pursue their own interests more skilfully and successfully than other market actors. Nevertheless, Smith holds on to the assumption that the market has a harmonising effect that furthers prosperity for all. In his epigones this assumption mutates into an ontological certainty.
Background: Microvolt T-wave alternans (MTWA) testing in many studies has proven to be a highly accurate predictor of ventricular tachyarrhythmic events (VTEs) in patients with risk factors for sudden cardiac death (SCD) but without a prior history of sustained VTEs (primary prevention patients). In some recent studies involving primary prevention patients with prophylactically implanted cardioverter-defibrillators (ICDs), MTWA has not performed as well.
Objective: This study examined the hypothesis that MTWA is an accurate predictor of VTEs in primary prevention patients without implanted ICDs, but not of appropriate ICD therapy in such patients with implanted ICDs.
Methods: This study identified prospective clinical trials evaluating MTWA measured using the spectral analytic method in primary prevention populations and analyzed studies in which: (1) few patients had implanted ICDs and as a result none or a small fraction (≤15%) of the reported end point VTEs were appropriate ICD therapies (low ICD group), or (2) many of the patients had implanted ICDs and the majority of the reported end point VTEs were appropriate ICD therapies (high ICD group).
Results: In the low ICD group comprising 3,682 patients, the hazard ratio associated with a nonnegative versus negative MTWA test was 13.6 (95% confidence interval [CI]: 8.5 to 30.4) and the annual event rate among the MTWA-negative patients was 0.3% (95% CI: 0.1% to 0.5%). In contrast, in the high ICD group comprising 2,234 patients, the hazard ratio was only 1.6 (95% CI: 1.2 to 2.1) and the annual event rate among the MTWA-negative patients was elevated to 5.4% (95% CI: 4.1% to 6.7%). In support of these findings, we analyzed published data from the Multicenter Automatic Defibrillator Trial II (MADIT II) and Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT) and determined that in those trials only 32% of patients who received appropriate ICD therapy averted an SCD.
Conclusion: This study found that MTWA testing using the spectral analytic method provides an accurate means of predicting VTEs in primary prevention patients without implanted ICDs; in particular, the event rate is very low among such patients with a negative MTWA test. In prospective trials of ICD therapy, the number of patients receiving appropriate ICD therapy greatly exceeds the number of patients who avert SCD as a result of ICD therapy. In trials involving patients with implanted ICDs, these excess appropriate ICD therapies seem to distribute randomly between MTWA-negative and MTWA-nonnegative patients, obscuring the predictive accuracy of MTWA for SCD. Appropriate ICD therapy is an unreliable surrogate end point for SCD.
For a good decade now, Germany has been waiting: waiting for literature, for the great Berlin novel, for the great post-reunification novel. And despite various novels that took reunification and Berlin as their theme, whether by Günter Grass or Thomas Brussig, the waiting continues; apparently no author can get it right, with entertaining storytelling being demanded on the one hand and a portrayal at the height of modern narrative art on the other. Yet the alternative is perhaps falsely posed: could not an artfully written novel with precise and varied language and sophisticated narrative structures also be entertaining? After all, Döblin's by no means simple novel "Berlin Alexanderplatz" is also a pleasure to read, comparable to Joyce's "Ulysses" or Pynchon's "Gravity's Rainbow". Such novels, however, are hard to repeat, since any imitation of their style would stand under suspicion of being plagiarism or copy. Something similar would thus always be something different: novel, artificial, and therein a more accurate image of its time than the multitude of plain novels that tell of Berlin or reunification. Recently, voices in the German feuilleton have increasingly discerned a certain art of narration of this kind in Ulrich Peltzer, which is why the opportunity is taken here to walk through his last three publications ["Stefan Martinez", "Alle oder keiner", "Bryant Park"] and trace their development, with a question in the back of one's mind: might one of the awaited great Berlin novels already be at hand here?
The workshop "National Specifics and International Aspects in the Development of Scholarship, with Special Reference to Narratology" is intended, according to the organisers' invitation, "to offer an opportunity to discuss the conditions and possibilities of integrative approaches to the study of scholarly processes and to identify and critically examine important factors in the development of scholarship." The production, distribution, and reception of knowledge systems, the organisers write, take place "in different national and international social spaces, which at times strongly co-structure both the form and the cognitive content of theories and favour or hinder their success. This becomes particularly clear when one follows processes of theory transfer." In my contribution, I wish to bring terminological clarification to the concept of knowledge transfer invoked here. To this end, I will first offer some terminological reflections on the status of the component concepts from which the term is composed (I.), then observe the use of the term in various disciplinary contexts (II.), and finally make a proposal for a differentiated use of the term as an analytical category of the development of scholarship (III.).
In its collections, the German Literature Archive in Marbach (Deutsches Literaturarchiv, DLA) maps the network of literary life in all its facets. At the centre of its source-oriented collecting and cataloguing stands the author. Literature is documented from the genesis of a work through its various editions and its reception in literary criticism to its dramaturgical realisation in radio, film, on stage, and in music. Since 2008 the DLA has also included internet sources such as literary journals, net literature, and weblogs in its spectrum, thereby responding to the growing importance of the internet as a publication forum. Collecting, cataloguing, and archiving form a necessary unity; precisely the transience of net-based resources makes long-term preservation of their availability essential. This new collection of "literature on the net" therefore rests on several pillars.
Römische Bildnisse: bibliography, unabridged, with the author's supplementary references
(2010)
Original version of the bibliography, which was shortened by numerous references in the publisher's edition of the work: Götz Lahusen: Römische Bildnisse : Auftraggeber, Funktionen, Standorte. - Mainz : von Zabern, 2010. - Licensed by WBG (Wiss. Buchges.) Darmstadt. - ISBN: 978-3-8053-3738-0. Pp. : EUR 49.90
The unmasker behind the mask: the language of the soul in the philosophy of Friedrich Nietzsche
(2001)
Nietzsche's significance as philosopher, poet, and diagnostician of his age is by now undisputed and increasingly recognised. What is incomprehensible, however, is that in more than a hundred years of Nietzsche reception, Nietzsche the psychologist has been largely neglected. Yet large parts of his philosophy, his critique of morality, his theories of art and of power, are pure psychology. The "unheard-of psychological depth and profundity" that Nietzsche claims for himself has been recognised by others as well. Sigmund Freud, who could not help admiring Nietzsche, remarks in his "An Autobiographical Study" that Nietzsche's "premonitions and insights often agree in the most astonishing way with the laborious findings of psychoanalysis...". Alfred Adler calls him "one of the towering pillars of our art" and never tires of stressing his importance. For C. G. Jung, reading Nietzsche's writings was the preparation by which he arrived at "modern psychology". Gottfried Benn even holds that "the whole of psychoanalysis ... is his deed". Karl Jaspers expresses himself more radically still, placing Nietzsche's thought above depth psychology. More than that: psychoanalysis, he claims, is "complicit in the lowering of intellectual standards"; it has "prevented the direct influence of what is truly great (Kierkegaard and Nietzsche) in psychopathology". In fact, however, nearly all psychoanalysts of the first hour, including Rank, Tausk, Wittels, Reik, and Hitschmann, were inspired by Nietzsche.
The examination of Goethe's and Heine's reflections on a site prominent in cultural memory, the amphitheatre in Verona, [is intended to show] how different on-site procedures lead to different meanings of the memory of places. These meanings are also virulent in the current theoretical discussion. It is necessary not only to distinguish the "memory of places" from the concept of "sites of memory", the lieux de mémoire, but also to work out the differing significance of places in different traditions of memory, such as the ars memoriae, a culture of commemoration, and the Freudian model of memory. This is connected with the need to interrogate the reading and text metaphor, which in its universal use in the contemporary cultural sciences threatens to lose its contours, as to its specificity and theoretical coherence.
This article presents linguistic features of and educational approaches to a new variety of German that has emerged in multi-ethnic urban areas in Germany: Kiezdeutsch (‘Hood German’). From a linguistic point of view, Kiezdeutsch is very interesting, as it is a multi-ethnolect that combines features of a youth language with those of a contact language. We present examples that illustrate the grammatical productivity and innovative potential of this variety. From an educational perspective, Kiezdeutsch also has high potential in many respects: school projects can help enrich intercultural communication and weaken derogatory attitudes, and in grammar lessons Kiezdeutsch can be a means to enhance linguistic competence by having adolescents analyse their own language. Keywords: German, Kiezdeutsch, multi-ethnolect, migrants’ language, language change, educational proposals
The dynamics of many systems are described by ordinary differential equations (ODEs). Solving ODEs with standard methods (i.e. numerical integration) requires a large amount of computing time but only a small amount of memory. For some applications, e.g. short-term weather forecasting or real-time robot control, long computation times are prohibitive. Is there a method that uses less computing time (at a cost in other respects, e.g. memory), so that ODEs can be computed faster? We discuss this question under the assumption that the alternative computation method is a neural network trained on the ODE dynamics, and we compare both methods at the same approximation error. This comparison is done with two different errors. First, we use the standard error, which measures the difference between the approximation and the solution of the ODE but is hard to characterize. In many cases, however, as for physics engines used in computer games, the shape of the approximation curve matters rather than its exact values. We therefore introduce a subjective error based on the Total Least Square Error (TLSE), which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it as a function of the resolution of the interpolation points and the inter-point distance. Our conclusion yields a method for determining where neural networks are advantageous over numerical ODE integration and where they are not. Index Terms—ODE, neural nets, Euler method, approximation complexity, storage optimization.
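The time/memory trade-off described in this abstract can be illustrated with a minimal sketch. The toy ODE dy/dt = -y and the table-based surrogate below are illustrations only: the precomputed interpolation table stands in for the trained network (it likewise trades stored values at interpolation points for stepping time), and none of it is taken from the paper itself.

```python
import numpy as np

# Toy ODE dy/dt = -y with exact solution y(t) = exp(-t).
def f(t, y):
    return -y

def euler(y0, t_end, h):
    """Explicit Euler integration: little memory, many small time steps."""
    y, t = y0, 0.0
    while t < t_end:
        y += h * f(t, y)
        t += h
    return y

# Surrogate standing in for the trained network: solutions precomputed on a
# grid of interpolation points, queried by linear interpolation.  It trades
# storage (the table) for speed (no stepping at query time).
ts = np.linspace(0.0, 5.0, 501)     # resolution of the interpolation points
table = np.exp(-ts)                 # precomputed solution values for y0 = 1

def surrogate(t):
    return float(np.interp(t, ts, table))

print(euler(1.0, 2.0, 1e-4), surrogate(2.0), np.exp(-2.0))
```

The surrogate's accuracy is governed by the inter-point distance of the table (here 0.01), which is exactly the kind of resolution parameter the abstract's resource comparison varies.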
At present, there are no quantitative, objective methods for diagnosing Parkinson's disease. Existing methods of quantitative analysis based on myograms suffer from inaccuracy and strain on the patient; analysis with electronic tablets is limited to the visible drawing and does not capture writing forces and hand movements. In this paper we show how handwriting analysis can be performed with a new electronic pen and new features of the recorded signals. This yields good diagnostic results. Keywords: Parkinson diagnosis, electronic pen, automatic handwriting analysis
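The abstract does not specify which signal features the pen records. As a purely hypothetical illustration of what such features could look like, kinematic and pressure statistics can be computed from sampled pen signals; the function name, channels, and feature set below are invented, not the paper's method.

```python
import numpy as np

def pen_features(x, y, pressure, dt):
    """Hypothetical kinematic features from sampled pen signals (x/y position
    and axial pressure, sampled every dt seconds).  Illustrative only."""
    vx = np.diff(x) / dt                  # per-sample velocity components
    vy = np.diff(y) / dt
    speed = np.hypot(vx, vy)
    return {
        "mean_speed": float(speed.mean()),             # slowed strokes (bradykinesia)
        "speed_cv": float(speed.std() / speed.mean()), # irregularity / tremor proxy
        "mean_pressure": float(pressure.mean()),
        "pressure_std": float(pressure.std()),         # variation in writing force
    }

# Toy signal: a unit circle drawn once per second, sampled at 100 Hz.
t = np.arange(0.0, 1.0, 0.01)
feats = pen_features(np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                     np.full(t.size, 0.5), dt=0.01)
print(feats)
```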
This paper describes the problems of, and an adaptive solution for, process control in the rubber industry. We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive. The industrial problem is modeled by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even when only a few training samples are used.
Impurism is an ancient worldview and an old poetics. I presented both in detail in my 2007 book Illustrierte Poetik des Impurismus. Since I do not wish to repeat myself, I cannot present the extensive findings on the subject again here. On the other hand, the reader of this sequel should not enter the material entirely unprepared. I therefore want to compile a few bare facts here as a reminder, but must nevertheless urgently refer the reader to the illustrative foundations in the book mentioned above; otherwise the enormity of the whole discovery, presented in utmost brevity, will scare off many a willing reader. ...
In the late seventies, Bernard Comrie was one of the first linguists to explore the effects of the referential hierarchy (RH) on the distribution of grammatical relations (GRs). The referential hierarchy is also known in the literature as the animacy, empathy or indexibability hierarchy and ranks speech act participants (i.e. first and second person) above third persons, animates above inanimates, or more topical referents above less topical referents. Depending on the language, the hierarchy is sometimes extended by analogy to rankings of possessors above possessees, singulars above plurals, or other notions. In his 1981 textbook, Comrie analyzed RH effects as explaining (a) differential case (or adposition) marking of transitive subject ("A") noun phrases in low RH positions (e.g. inanimate or third person) and of object ("P") noun phrases in high RH positions (e.g. animate or first or second person), and (b) hierarchical verb agreement coupled with a direct vs. inverse distinction, as in Algonquian (Comrie 1981: Chapter 6).
Sino-Tibetan is a prime example of how strongly a language family can typologically diversify under the pressure of areal spread features (Matisoff 1991, 1999). One of the manifestations of this is the average length of prosodic words. In Southeast Asia, prosodic words tend to average one to one-and-a-half syllables. In the Himalayas, by contrast, it is not uncommon to encounter prosodic words of five to ten syllables. The following pair of examples illustrates this.
Language universals are statements that are true of all languages, for example: “all languages have stop consonants”. But beneath this simple definition lurks deep ambiguity, and this triggers misunderstanding both in interdisciplinary discourse and within linguistics itself. A core dimension of the ambiguity is captured by the opposition “absolute vs. statistical universal”, although the literature uses these terms in varied ways. Many textbooks draw the boundary between absolute and statistical according to whether a sample of languages contains exceptions to a universal. But the notion of an exception-free sample is not very revealing even if the sample contained all known languages: there is always a chance that an as yet undescribed language, or an unknown language from the past or future, will provide an exception.
This paper is one argument for a theory of grammatical relations in Chinese in which there are no grammatical relations beyond semantic roles, and no lexical relation-changing rules. As the passive rule is one of the most common relation changing rules cross-linguistically, in this paper I will address the question of whether or not Mandarin Chinese has lexical passives, that is, passives defined as in Relational Grammar (see for example Perlmutter and Postal 1977) and the early Lexical Functional Grammar (LFG) literature (e.g. Bresnan 1982), where a 2-arc (object) is promoted to a 1-arc (subject).
Thirty-one years ago Tsu-lin Mei (1961) argued against the traditional doctrine that saw the subject-predicate distinction in grammar as parallel to the particular-universal distinction in logic, as he said it was a reflex of an Indo-European bias, and could not be valid, as ‘Chinese ... does not admit a distinction into subject and predicate’ (p. 153). This has not stopped linguists working on Chinese from attempting to define ‘subject’ (and ‘object’) in Chinese. Though a number of linguists have lamented the difficulties in trying to define these concepts for Chinese (see below), most work done on Chinese still assumes that Chinese must have the same grammatical features as Indo-European, such as having a subject and a direct object, though no attempt is made to justify that view. This paper challenges that view and argues that there has been no grammaticalization of syntactic functions in Chinese. The correct assignment of semantic roles to the constituents of a discourse is done by the listener on the basis of the discourse structure and pragmatics (information flow, inference, relevance, and real world knowledge) (cf. Li & Thompson 1978, 1979; LaPolla 1990).
Von der welt louff vnd gestallt (3b) [note 1], of the course of the world and its condition, is the subject of the work at the center of the following considerations: the rhymed chronicle of the Swabian (or Swiss) War by Hans Lenz from the year 1499. In the form of a fictitious conversation, a disputatz (62b) between the author and a hermit, contemporary history and the political and social situation of the time are surveyed, ordered, discussed, interpreted, and placed in larger contexts, above all those of salvation history. In this example (as in historiography generally), text and context stand in a particularly obvious relationship to each other: it is immediately evident that such a text cannot be adequately assessed without its historical background. In doing so, however, one must not ask only how the historiographer deals with the historical facts (insofar as these can be objectively reconstructed at all!); the milieu of the author himself and of his recipients must also be taken into account, as must the purpose and function of his poem, the literary and extra-literary influences and models, and the patterns of thought and argumentation. In short, the "lifeworld" [note 2] of the text should be reconstructed as far as possible in order to understand it in its social and cultural context.
The argument that I tried to elaborate on in this paper is that the conceptual problem behind the traditional competence/performance distinction does not go away, even if we abandon its original Chomskyan formulation. It returns as the question about the relation between the model of the grammar and the results of empirical investigations – the question of empirical verification. The theoretical concept of markedness is argued to be an ideal correlate of gradience. Optimality Theory, being based on markedness, is a promising framework for the task of bridging the gap between model and empirical world. However, this task requires not only a model of grammar, but also a theory of the methods that are chosen in empirical investigations and how their results are interpreted, and a theory of how to derive predictions for these particular empirical investigations from the model. Stochastic Optimality Theory is one possible formulation of a proposal that derives empirical predictions from an OT model. However, I hope to have shown that it is not enough to take frequency distributions and relative acceptabilities at face value, and simply construct some Stochastic OT model that fits the facts. These facts first of all need to be interpreted, and those factors that the grammar has to account for must be sorted out from those about which grammar should have nothing to say. This task, to my mind, is more complicated than the picture that a simplistic application of (not only) Stochastic OT might draw.
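The way a Stochastic OT grammar turns a constraint ranking into a frequency distribution can be sketched in a few lines. This is a generic Boersma-style evaluation, not the model discussed in the paper; the constraint names, ranking values, and candidates are invented for illustration.

```python
import random

# Each constraint has a ranking value on a continuous scale.  At every
# evaluation, Gaussian noise is added to each value, the constraints are
# reordered, and the candidate with the best violation profile under that
# noisy order wins.  Close ranking values therefore yield gradient variation.
ranking = {"Faith": 100.0, "Markedness": 98.0}
violations = {                       # candidate -> violation marks per constraint
    "cand_a": {"Faith": 0, "Markedness": 1},
    "cand_b": {"Faith": 1, "Markedness": 0},
}

def evaluate(noise_sd=2.0):
    noisy = sorted(ranking,
                   key=lambda c: ranking[c] + random.gauss(0, noise_sd),
                   reverse=True)
    # lexicographic comparison of violation vectors in the noisy ranking order
    return min(violations, key=lambda cand: [violations[cand][c] for c in noisy])

random.seed(0)
counts = {"cand_a": 0, "cand_b": 0}
for _ in range(10000):
    counts[evaluate()] += 1
print(counts)   # a gradient frequency distribution, not a categorical winner
```

Fitting the two ranking values to match observed corpus frequencies is exactly the step the abstract cautions against doing naively: the observed distribution must first be interpreted before the grammar is made to reproduce it.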
The aim of this paper is the exploration of an optimality theoretic architecture for syntax that is guided by the concept of "correspondence": syntax is understood as the mechanism of "translating" underlying representations into a surface form. In minimalism, this surface form is called "Phonological Form" (PF). Both semantic and abstract syntactic information are reflected by the surface form. The empirical domain in which this architecture is tested is that of minimal link effects, especially in the case of "wh"-movement. The OT constraints require the surface form to reflect the underlying semantic and syntactic representations as maximally as possible. The means by which underlying relations and properties are encoded are precedence, adjacency, surface morphology and prosodic structure. Information that is not encoded in one of these ways remains unexpressed, and gets lost unless it is recoverable via the context. Different kinds of information are often expressed by the same means. The resulting conflicts are resolved by the relative ranking of the relevant correspondence constraints.
This paper argues for a particular architecture of OT syntax. This architecture has three core features: i) it is bidirectional, i.e. the usual production-oriented optimisation (called ‘first optimisation’ here) is accompanied by a second step that checks the recoverability of an underlying form; ii) this underlying form already contains a full-fledged syntactic specification; iii) especially the procedure checking for recoverability makes crucial use of semantic and pragmatic factors. The first section motivates the basic architecture. The second section shows, with two examples, how contextual factors are integrated. The third section examines its implications for learning theory, and the fourth section concludes with a broader discussion of the advantages and disadvantages of the proposed model.
Weak function word shift
(2004)
The fact that object shift only affects weak pronouns in mainland Scandinavian is seen as an instance of a more general observation that can be made in all Germanic languages: weak function words tend to avoid the edges of larger prosodic domains. This generalisation has been formulated within Optimality Theory in terms of alignment constraints on prosodic structure by Selkirk (1996) in explaining the distribution of prosodically strong and weak forms of English function words, especially modal verbs, prepositions and pronouns. But a purely phonological account fails to integrate the syntactic licensing conditions for object shift in an appropriate way. The standard semantico-syntactic accounts of object shift, on the other hand, fail to explain why it is only weak pronouns that undergo object shift. This paper develops an Optimality theoretic model of the syntax-phonology interface which is based on the interaction of syntactic and prosodic factors. The account can successfully be applied to further related phenomena in English and German.
This paper is part of a research project on OT Syntax and the typology of the free relative (FR) construction. It concentrates on the details of an OT analysis and some of its consequences for OT syntax. I will not present a general discussion of the phenomenon and the many controversial issues it is famous for in generative syntax.
This article is concerned with linguistic elements that already exist in a language, are considered nonstandard or have not been standardized in a recognized, binding way, and are now put to a new, differentiating use. The new usage has one or more initial events which, formulated in system-oriented terms, occur at one or more points in a language area and, in an evolutionary drift, become more frequent or disappear, or which, formulated in action-oriented terms, are adopted by different speakers, furnished with new semantics, or remain unnoticed.
The drifts of words in public spaces are manifold. New word developments attest to different interests: "chillen" and "dissen" to different ones than "share-holder-value", which first appeared in the conservative Züricher Zeitung. In what follows, a meaning-related generalization is attempted that links the actions of the actors with the structural level. The changes in usage are to be related to structural social and linguistic framework conditions. How are innovations and changes in the conditions of use of words perceived against the background of knowledge of the traditional standard language and its social function? What functions do neologisms have in contrast to this standard?
In German, language choice and language perception are inescapably shaped by knowledge of a standard language. For most speakers, this knowledge is based on the experience that at school some linguistic forms are judged correct and others wrong, and also on the fact that the rules of the standard are codified in dictionaries and grammars. Knowledge and acceptance of this standard are independent of the fact that none of these codifications is uncontroversial, that many speakers do not know the rules precisely, and that persons recognized as models (newsreaders, journalists of certain periodicals, teachers, writers, and others) by no means follow uniform rules. The standard is firmly associated with the experience of a legitimate regularity, that is, with order. The use of nonstandard forms is perceived with reference to this order and as distinct from it. This relational view of things is both subjective and intersubjective.