Background: The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely used standard for choosing these residue ranges for experimentally determined protein structures, and the manual selection of residue ranges or the use of suboptimal criteria remains commonplace. Results: We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible, up to the point where including further residues would lead to a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions: The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions. 
Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures. Additional files Additional file 1: Dependence of Q on the order parameter rank. The quantity Qi is plotted against the order parameter rank i for 9 different protein structure bundles. Additional file 2: Dependence of P on the clustering stage. The quantity Pi is plotted against the clustering stage i for 9 different protein structure bundles. Additional file 3: Dependence of CYRANGE results on the minimal cluster size parameter μ. The sequence coverage (red) and RMSD (blue) of the residue ranges determined by CYRANGE are plotted as a function of μ for 9 different protein structure bundles. The dotted vertical line indicates the default value, μ = 8. Where CYRANGE found two domains, the RMSD values of the individual domains are shown in light and dark blue. Additional file 4: Dependence of CYRANGE results on the domain boundary extension parameter m. See Additional File 3 for details. Additional file 5: Dependence of CYRANGE results on the minimal gap width g. See Additional File 3 for details. Additional file 6: Dependence of CYRANGE results on the relative RMSD decrease parameter δ. See Additional File 3 for details. Additional file 7: Dependence of CYRANGE results on the absolute RMSD decrease parameter δabs. See Additional File 3 for details. Additional file 8: Dependence of CYRANGE results on the gap penalty parameter γ. See Additional File 3 for details. 
Additional file 9: Correlation between the sequence coverage from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents a protein shown in Figures 3 and 4. The coverage is the percentage of amino acid residues included in the residue ranges found by the different methods. The GDT_TS value is defined by GDT_TS = (P1 + P2 + P4 + P8)/4, where Pd is the fraction of residues that can be superimposed under a distance cutoff of d Å. Additional file 10: Correlation between the RMSD value for the residue ranges from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents one protein domain. See Additional File 9 for details.
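The GDT_TS definition given above can be computed directly from per-residue deviations. A minimal sketch; the function name and the example deviations are illustrative, not taken from the paper:

```python
def gdt_ts(deviations):
    """GDT_TS = (P1 + P2 + P4 + P8) / 4, where Pd is the fraction of
    residues whose deviation from the reference is at most d Angstroms."""
    n = len(deviations)
    fractions = [sum(1 for dev in deviations if dev <= d) / n
                 for d in (1.0, 2.0, 4.0, 8.0)]
    return sum(fractions) / 4.0

# Hypothetical per-residue deviations in Angstroms:
print(gdt_ts([0.5, 0.9, 1.5, 2.5, 3.0, 5.0, 7.5, 12.0]))  # 0.53125
```

The value is a fraction in [0, 1], matching the formula as stated; GDT_TS is often also reported as a percentage.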
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. One of the central questions in neuroscience is how neural activity is organized across different spatial and temporal scales. As larger populations oscillate and synchronize at lower frequencies and smaller ensembles are active at higher frequencies, cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. Although various experiments have revealed amplitude-to-amplitude and phase-to-phase coupling, the most common and most celebrated result is that the phase of the lower-frequency component modulates the amplitude of the higher-frequency component. Over the past five years, a tremendous number of experimental studies have reported such phase-amplitude coupling in LFP, ECoG, EEG and MEG (summarized in [1]). We suggest that although the mechanism of cross-frequency coupling (CFC) is theoretically very tempting, current analysis methods might overestimate any physiological CFC actually evident in LFP, ECoG, EEG and MEG signals. In particular, we point out three conceptual problems in assessing the components of a time series and their correlations. Although we focus on phase-amplitude coupling, most of our argument is relevant for any type of coupling. 1) The first conceptual problem is related to isolating physiological frequency components of the recorded signal. The key point is that there are many different mathematical representations of a time series, and the physical interpretation we draw from them depends on the choice of the components to be analyzed. In particular, when one isolates the components by Fourier-representation based filtering, it is the width of the filtering bands that defines what we consider as our components and how their power or group phase change in time. 
We will discuss clear-cut examples where the interpretation of the existence of CFC depends on the width of the filtering bands. 2) A second problem concerns the origin of spectral correlations as detected by current cross-frequency analysis. It is known that non-stationarities are associated with spectral correlations in the Fourier space. Therefore, there are two possibilities regarding the interpretation of any observed CFC. One scenario is that basic neuronal mechanisms indeed generate an interaction across different time scales (or frequencies), resulting in processes with non-stationary features. The other, problematic possibility is that unspecific non-stationarities can also be associated with spectral correlations, which in turn will be detected by cross-frequency measures even if physiologically there is no causal interaction between the frequencies. 3) We discuss the role of non-linearities as generators of cross-frequency interactions. As an example, we performed a phase-amplitude coupling analysis of two nonlinearly related signals, atmospheric noise and its square (Figure 1), observing an enhancement of phase-amplitude coupling in the second signal while no pattern is observed in the first. Finally, we discuss some minimal conditions that need to be tested to resolve some of the ambiguities noted here. In summary, we simply want to point out that finding a significant cross-frequency pattern does not necessarily imply that there is a physiological cross-frequency interaction in the brain.
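As a sketch of the kind of phase-amplitude analysis discussed above, the following computes a simple Canolty-style modulation index on synthetic, already-narrowband signals. In a real analysis the signals would first be bandpass filtered (the very step whose bandwidth choice point 1 problematizes); all names, frequencies and parameters here are illustrative:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as scipy's hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def modulation_index(low, high):
    """|mean(A_high * exp(i*phi_low))| with the amplitude demeaned:
    large when the high-frequency amplitude depends on the low-frequency phase."""
    phi = np.angle(analytic_signal(low))
    amp = np.abs(analytic_signal(high))
    return np.abs(np.mean((amp - amp.mean()) * np.exp(1j * phi)))

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.001)                        # 10 s sampled at 1 kHz
slow = np.sin(2 * np.pi * 6 * t)                   # 6 Hz phase signal
coupled = (1 + slow) * np.sin(2 * np.pi * 60 * t)  # 60 Hz amplitude tied to 6 Hz phase
uncoupled = rng.standard_normal(t.size)            # no relation to the 6 Hz phase
print(modulation_index(slow, coupled) > modulation_index(slow, uncoupled))  # True
```

The coupled signal yields an index near 0.5, the uncoupled one an index near zero; in practice significance would be assessed against surrogate data.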
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. Parallel multiunit recordings from V1 in anesthetized cat were collected during the presentation of random sequences of drifting sinusoidal gratings at 12 fixed orientations while gamma oscillations were present. In agreement with the seminal work [1], most units were orientation selective to varying degrees, and synchronization was evident in spike train crosscorrelograms computed between units with similar preferred orientations, particularly during the presentation of optimal stimuli. Interestingly, a subset of units, which we refer to as synchronization hubs, was additionally found to synchronize with units having differing preferred orientations, consistent with a previous study [2]. Moreover, oscillatory patterning in spike train autocorrelograms was strongest in the synchronization hubs, which also tended to have narrower tuning curves than other units. We used simplified computational models of small networks of V1 neurons to demonstrate that neurons subject to a sufficiently strong level of inhibitory input can function as synchronization hubs. Neurons were endowed either with integrate-and-fire or conductance-based dynamics, and each neuron received a combination of Poisson-distributed excitatory (AMPA) synaptic inputs and inhibitory (GABA) inputs that were coherent in the gamma-frequency range. If the strength of rhythmic inhibition was increased for a subset of neurons in the network, and excitation was increased simultaneously to maintain a fixed firing rate, then these neurons produced stronger oscillatory patterning in their discharge probabilities. The oscillations in turn synchronized these neurons with other neurons in the network. 
Importantly, the strength of synchronization increased even with neurons of differing orientation preferences, although no direct synaptic coupling existed between the hubs and the other neurons. Enhanced levels of inhibition account for the emergence of synchronization hubs in the following way: inhibitory inputs exhibiting a gamma rhythm determine a time window within which a cell is likely to discharge. Increased levels of inhibition narrow this window further, simultaneously leading to (i) even stronger oscillatory patterning of the neuron's activity and (ii) enhanced synchronization with other neurons. This enables synchronization even between cells with differing orientation preferences. Additionally, the same increased levels of inhibition may be responsible for the narrow tuning curves of hub neurons. In conclusion, synchronization hubs may be the cells that interact most strongly with the network of inhibitory interneurons during gamma oscillations in primary visual cortex.
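The windowing argument can be illustrated with a toy calculation: treat the net input as a constant drive minus sinusoidal gamma inhibition, and measure the fraction of the gamma cycle in which the input exceeds threshold. All numbers below are illustrative, not fitted to the models described in the abstract:

```python
import numpy as np

def firing_window(drive, inh_gain, thresh=1.0, n=100000):
    """Fraction of the gamma cycle during which net input
    (constant drive minus sinusoidal inhibition) exceeds threshold,
    i.e. the window within which the cell can discharge."""
    phase = np.linspace(0, 2 * np.pi, n, endpoint=False)
    net = drive - inh_gain * 0.5 * (1 + np.sin(phase))
    return np.mean(net > thresh)

# Weak inhibition: the cell can fire throughout the cycle.
print(firing_window(drive=1.2, inh_gain=0.1))   # 1.0
# Strong inhibition with compensating drive: firing is confined to
# roughly a third of the cycle, around the trough of inhibition.
print(firing_window(drive=1.5, inh_gain=2.0))   # ~0.333
```

A narrower window concentrates spikes at a particular gamma phase, which is exactly the combination of stronger oscillatory patterning and enhanced synchronization described above.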
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. Background: Oscillatory activity in the high-beta and gamma bands (20-80 Hz) is known to play an important role in cortical processing and is linked to cognitive processes and behavior. Beta/gamma oscillations are thought to emerge in local cortical circuits via two mechanisms: the interaction between excitatory principal cells and inhibitory interneurons – the pyramidal-interneuron gamma (PING) [1] – and in networks of coupled inhibitory interneurons under tonic excitation – the interneuronal gamma (ING) [2]. Experimental evidence underlines the important role of inhibitory interneurons, especially the fast spiking (FS) interneurons [3,4]. We show in simulation that an important property of FS neurons, namely membrane resonance (frequency preference), represents an additional mechanism – the resonance induced gamma (RING), i.e. modulation of oscillatory discharge by resonance. RING promotes frequency stability and enables oscillations in purely excitatory networks. Methods: Local circuits were modeled as networks of 80% excitatory and 20% inhibitory neurons interconnected in a small-world topology by realistic conductance-based synapses. Neurons were leaky integrate-and-fire (LIF) or Izhikevich resonator (RES) units. We also tested networks of purely inhibitory and purely excitatory RES neurons. Networks were stimulated with miniature postsynaptic potentials (MINIs) [5] and with low-frequency sinusoidal (0.5 Hz) input that mimics the effect of gratings passing through the visual field. The activity was calibrated to match recordings from cat visual cortex (firing rate, oscillatory activity). Results: Sinusoidal input modulates the network oscillation frequency. 
This effect is most prominent in IF excitatory and IF inhibitory (IF-IF) networks and about four times less prominent in IF-RES or RES-IF networks, where the frequency remains relatively stable. The most stable frequency was observed in networks of pure resonators (RES-RES, None-RES, RES-None). Interestingly, purely excitatory RES networks (RES-None) were also able to exhibit oscillations through RING. By contrast, purely excitatory or purely inhibitory IF networks (IF-None, None-IF) were not able to express oscillations under these conditions, which match experimental parameters. Conclusions: In both PING and ING, adding membrane resonance to principal cells or inhibitory interneurons stabilizes the network oscillation frequency via the RING mechanism. Notably, in purely excitatory networks, where ING and PING are not defined, oscillations can emerge via the RING mechanism if membrane resonance is expressed. Thus, RING appears to be a potentially important mechanism for promoting stable network oscillations.
TRENTOOL : an open source toolbox to estimate neural directed interactions with transfer entropy
(2011)
To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement of the prediction of the future of a time series X from its own past by the incorporation of information from the past of a second time series Y is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique. 
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the Fieldtrip data format. ...
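For intuition, Schreiber's transfer entropy with history length 1 can be estimated by plugging empirical frequencies into TE(Y -> X) = sum over (x_{t+1}, x_t, y_t) of p(x_{t+1}, x_t, y_t) log2[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]. The sketch below does this for binary sequences; it is a naive plug-in estimator for illustration only, not one of the data-efficient estimators bundled in TRENTOOL:

```python
import numpy as np
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in estimate of TE(source -> target) in bits, history length 1."""
    x, y = target, source
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))    # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))          # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))           # (x_{t+1}, x_t)
    singles = Counter(x[:-1].tolist())               # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_joint = c / pairs_xy[(x0, y0)]              # p(x_{t+1} | x_t, y_t)
        p_cond_own = pairs_xx[(x1, x0)] / singles[x0]      # p(x_{t+1} | x_t)
        te += (c / n) * log2(p_cond_joint / p_cond_own)
    return te

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10000)
x = np.empty_like(y)
x[0] = 0
x[1:] = y[:-1]                 # target copies source with a one-step lag
print(transfer_entropy(y, x))  # close to 1 bit: Y fully predicts X's next state
print(transfer_entropy(x, y))  # close to 0 bits: X adds nothing about Y's future
```

The asymmetry of the two estimates is what makes TE a directed measure; real pipelines additionally need embedding-parameter selection and surrogate-based statistics, which is exactly the gap TRENTOOL addresses.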
The dynamics of relativistic heavy-ion collisions is investigated on the basis of a simple (1+1)-dimensional hydrodynamical model in light-cone coordinates. The main emphasis is put on studying the sensitivity of the dynamics and observables to the equation of state and initial conditions. Low sensitivity of pion rapidity spectra to the presence of the phase transition is demonstrated, and some inconsistencies of the equilibrium scenario are pointed out. Possible non-equilibrium effects are discussed, in particular the possibility of an explosive disintegration of the deconfined phase into quark-gluon droplets. Simple estimates show that the characteristic droplet size should decrease with increasing collective expansion rate. These droplets will hadronize individually by emitting hadrons from the surface. This scenario should reveal itself through strong non-statistical fluctuations of observables. Critical Point and Onset of Deconfinement - 4th International Workshop, July 9-13, 2007, GSI Darmstadt, Germany.
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions from low SPS up to RHIC energies have been studied within the HSD transport approach. Fluctuations of baryon number and electric charge have also been explored for Pb+Pb collisions at SPS energies in comparison to the experimental data from NA49. We find a dominant role of the fluctuations in the number of nucleon participants for the final hadron multiplicity fluctuations, and a strong influence of the experimental acceptance on the final results. Critical Point and Onset of Deconfinement - 4th International Workshop, July 9-13, 2007, Darmstadt, Germany.
In the coming years, the Facility for Antiproton and Ion Research (FAIR) will be constructed at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, Germany. This new accelerator complex will allow for unprecedented and path-breaking research in hadronic, nuclear, and atomic physics as well as in the applied sciences. This manuscript discusses some of these research opportunities, with a focus on few-body physics.
We examine the scaling trends in particle multiplicity and flow observables between SPS, RHIC and LHC, and discuss their compatibility with popular theoretical models. We examine the way scaling trends between SPS and RHIC are broken at LHC energies, and suggest experimental measurements which can further clarify the situation.
NCQ scaling of elliptic flow is studied in a non-equilibrium hadronization and freeze-out model from an ideal, deconfined and chirally symmetric quark-gluon plasma (QGP) to final non-interacting hadrons. In this transition the quarks gain constituent quark mass while the background bag field breaks up. The constituent quarks then recombine into simplified hadron states, while chemical, thermal and flow equilibrium break down. As a result, the temperatures and flow velocities of baryons and mesons differ. In a simplified model, we reproduce the constituent quark number scaling.
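For reference, the constituent quark number (NCQ) scaling referred to here is conventionally written as follows, with n_q = 2 for mesons and n_q = 3 for baryons:

```latex
% Elliptic flow of hadron species h versus that of its constituent quarks:
\[
  v_2^{h}(p_T) \simeq n_q \, v_2^{q}\!\left(\frac{p_T}{n_q}\right),
\]
% i.e. v_2/n_q plotted against p_T/n_q collapses mesons and baryons
% onto a single curve.
```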
We derive the equations of second-order dissipative fluid dynamics from the relativistic Boltzmann equation following the method of W. Israel and J. M. Stewart [1]. We present a frame-independent calculation of all first- and second-order terms and their coefficients using a linearised collision integral. In doing so, we restore all terms that were neglected in the original papers of W. Israel and J. M. Stewart.
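Schematically, second-order (Israel-Stewart-type) theories promote the shear-stress tensor to a dynamical variable obeying a relaxation equation; in its most common schematic form, with the additional second-order terms (of the kind the abstract says were restored) collected in the ellipsis:

```latex
\[
  \tau_\pi \, \Delta^{\mu\nu}_{\alpha\beta} \, u^\lambda \partial_\lambda \pi^{\alpha\beta}
  + \pi^{\mu\nu} = 2 \eta \, \sigma^{\mu\nu} + \ldots ,
\]
% where \tau_\pi is the shear relaxation time, \eta the shear viscosity,
% \sigma^{\mu\nu} the shear tensor, u^\mu the fluid velocity, and
% \Delta^{\mu\nu}_{\alpha\beta} the transverse, traceless projector.
```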
We present results on Hanbury Brown-Twiss (HBT) radii extracted from the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach to relativistic heavy ion collisions. The present investigation compares results from pure hadronic transport calculations to a Boltzmann + hydrodynamics hybrid approach with an intermediate hydrodynamic phase. For the hydrodynamic phase, different equations of state (EoS) have been employed, i.e. bag model, hadron resonance gas and a chiral EoS. The influence of various freeze-out scenarios has been investigated and shown to be negligible if hadronic rescatterings after the hydrodynamic evolution are included. Furthermore, first results on the source tilt from azimuthally sensitive HBT and its direct extraction from the transport model are presented and exhibit very good agreement with E895 data at the AGS.
A mechanism for locally density-dependent dynamic parton rearrangement and fusion has been implemented in the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) approach. The same mechanism had previously been built into the Quark Gluon String Model (QGSM). This rearrangement and fusion approach, based on parton coalescence ideas, enables the description of multi-particle interactions, namely 3 -> 3 and 3 -> 2, between (pre)hadronic states in addition to standard binary interactions. The UrQMD model (v2.3) extended by these additional processes makes it possible to investigate the implications of multi-particle interactions for the reaction dynamics of ultrarelativistic heavy ion collisions. The mechanism, its implementation and first results of this investigation are presented and discussed.
We present the current status of hybrid approaches to describing heavy ion collisions, and their future challenges and perspectives. First, we present a hybrid model combining a Boltzmann transport model of hadronic degrees of freedom in the initial and final state with an optional hydrodynamic evolution during the dense and hot phase. Second, we present a recent extension of the hydrodynamical model that includes fluctuations near the phase transition by coupling a chiral field to the hydrodynamic evolution.
Fast thermalization and a strong build-up of elliptic flow of QCD matter were investigated within the pQCD-based 3+1 dimensional parton transport model BAMPS, including bremsstrahlung 2 <-> 3 processes. Within the same framework, the quenching of gluonic jets in Au+Au collisions at RHIC can be understood. The development of conical structures by gluonic jets is investigated in a static box for the regimes of small and large dissipation. Furthermore, we demonstrate two different approaches to extracting the shear viscosity coefficient η from a microscopic picture.
We study the kinetic and chemical equilibration in 'infinite' parton-hadron matter within the Parton-Hadron-String Dynamics transport approach, which is based on a dynamical quasiparticle model for partons matched to reproduce lattice-QCD results – including the partonic equation of state – in thermodynamic equilibrium. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions, initialized at different baryon densities (or chemical potentials) and energy densities. The transition from initially pure partonic matter to hadronic degrees of freedom (or vice versa) occurs dynamically through interactions. Different thermodynamical distributions of the strongly interacting quark-gluon plasma (sQGP) are addressed and discussed.
The investigation of distributed coding across multiple neurons in the cortex remains a challenge to this day. Our current understanding of the collective encoding of information and the relevant timescales is still limited. Most results are restricted to disparate timescales, focused either on very fast timescales, e.g., spike synchrony, or on slow timescales, e.g., firing rate. Here, we systematically investigated multineuronal activity patterns evolving on different timescales, spanning the whole range from spike synchrony to mean firing rate. Using multi-electrode recordings from cat visual cortex, we show that cortical responses can be described as trajectories in a high-dimensional pattern space. Patterns evolve on a continuum of coexisting timescales that strongly relate to the temporal properties of the stimuli. Timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (5–20 ms) play a particularly salient role in encoding a large amount of stimulus-related information. Thus, to faithfully encode the properties of visual stimuli the brain engages multiple neurons in activity patterns evolving on multiple timescales.
We address the question of whether and how boosting and bagging can be used for speech recognition. In order to do this, we compare two different boosting schemes, one at the phoneme level and one at the utterance level, with a phoneme-level bagging scheme. We control for many parameters and other choices, such as the state inference scheme used. In an unbiased experiment, we clearly show that the gain of boosting methods compared to a single hidden Markov model is in all cases only marginal, while bagging significantly outperforms all other methods. We thus conclude that bagging methods, which have so far been overlooked in favour of boosting, should be examined more closely as a potentially useful ensemble learning technique for speech recognition.
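As a reminder of what bagging (the winning method above) actually does, here is a minimal sketch with a toy nearest-centroid base learner on synthetic 2-D data. The real systems in the study are HMM-based speech recognizers; all names, data and parameters below are illustrative:

```python
import numpy as np

def train_centroid(X, y):
    """Toy base learner: per-class mean (nearest-centroid classifier)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroid(model, X):
    classes, centroids = model
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]

def bagging_predict(X_train, y_train, X_test, n_models=25, seed=0):
    """Bagging: train each base model on a bootstrap resample of the
    training set, then take a majority vote over the ensemble."""
    rng = np.random.default_rng(seed)
    votes = np.zeros((len(X_test), 2), dtype=int)   # two classes in this toy
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
        model = train_centroid(X_train[idx], y_train[idx])
        votes[np.arange(len(X_test)), predict_centroid(model, X_test)] += 1
    return np.argmax(votes, axis=1)

# Two well-separated Gaussian blobs as a stand-in dataset:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
accuracy = (bagging_predict(X, y, X) == y).mean()
print(accuracy > 0.9)  # True
```

Averaging over bootstrap resamples reduces the variance of unstable base learners, which is the usual explanation for bagging's robustness.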
This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception and sequence learning are modeled in a unifying computational framework while staying close to experimental data in the developmental psychology of infancy. In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence and occlusion perception in infancy. It is shown that these different phenomena can be unified in a single theoretical framework, thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) different neuronal populations processing different motion directions of visual stimuli in the visual cortex of the newborn infant, which are supported by neuroscientific evidence, and (2) available learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that no innate force notions, motion analysis modules, common motion detectors, specific perceptual rules or abilities to "reason" about entities – all of which have been widely postulated in the developmental literature – are necessary to explain the discussed phenomena. 
Since the prediction of future events turned out to be fruitful both as a theoretical explanation of various developmental phenomena and as a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organising, fully recurrent neural network model is proposed that forms internal representations of input sequences and maps them onto eye movements. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modelling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing dependent plasticity (STDP), intrinsic plasticity (IP) and synaptic scaling. It is also novel from a technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place - questions that are difficult to assess experimentally. Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events. 
This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.
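The Elman-style prediction learning underlying the first two projects can be sketched in a few lines: a hidden layer receives the current input plus its own previous state (the context layer) and is trained with plain backpropagation (no unrolling through time) to predict the next input. Network sizes, the learning rate and the toy sequence below are illustrative, not the thesis's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
seq = np.eye(4)[[0, 1, 2, 3] * 50]        # one-hot repeating sequence, 200 steps
n_in, n_hid = 4, 8
Wx = rng.normal(0, 0.5, (n_hid, n_in))    # input -> hidden
Wh = rng.normal(0, 0.5, (n_hid, n_hid))   # context -> hidden
Wo = rng.normal(0, 0.5, (n_in, n_hid))    # hidden -> prediction
lr = 0.05

def run_epoch():
    """One pass over the sequence: predict seq[t+1] from seq[t] and the
    context layer; update with plain backprop (Elman-style, no BPTT)."""
    global Wx, Wh, Wo
    h = np.zeros(n_hid)
    total = 0.0
    for t in range(len(seq) - 1):
        h_prev = h
        h = np.tanh(Wx @ seq[t] + Wh @ h_prev)
        err = Wo @ h - seq[t + 1]              # linear readout, squared error
        total += 0.5 * (err ** 2).sum()
        dh = (Wo.T @ err) * (1 - h ** 2)       # backprop through tanh
        Wo -= lr * np.outer(err, h)
        Wx -= lr * np.outer(dh, seq[t])
        Wh -= lr * np.outer(dh, h_prev)
    return total / (len(seq) - 1)

losses = [run_epoch() for _ in range(20)]
print(losses[-1] < losses[0])  # prediction error falls with training
```

The falling prediction error is the learning signal that, in the thesis's account, drives the developmental changes without any innate event-specific modules.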