In this Letter we study the radiation measured by an accelerated detector, coupled to a scalar field, in the presence of a fundamental minimal length. The latter is implemented by means of a modified momentum space Green's function. After calibrating the detector, we find that the net flux of field quanta is negligible, and that there is no Planckian spectrum. We discuss possible interpretations of this result, and we comment on experimental implications in heavy ion collisions and atomic systems.
In this Letter, we propose a new scenario emerging from the conjectured presence of a minimal length ℓ in the spacetime fabric, on the one hand, and the existence of a new scale-invariant, continuous mass spectrum of un-particles on the other. We introduce the concept of un-spectral dimension DU of a d-dimensional, Euclidean (quantum) spacetime, as the spectral dimension measured by an “un-particle” probe. We find a general expression for the un-spectral dimension DU labelling different spacetime phases: a semi-classical phase, where the ordinary spectral dimension gets a contribution from the scaling dimension dU of the un-particle probe; a critical “Planckian phase”, where four-dimensional spacetime can be effectively considered two-dimensional when dU=1; and a “Trans-Planckian phase”, accessible to un-particle probes only, where spacetime as we currently understand it loses its physical meaning.
Fuzziness at the horizon
(2010)
We study the stability of the noncommutative Schwarzschild black hole interior by analysing the propagation of a massless scalar field between the two horizons. We show that the spacetime fuzziness triggered by the field higher momenta can cure the classical exponential blue-shift divergence, suppressing the emergence of infinite energy density in a region nearby the Cauchy horizon.
In this Letter we derive the gravity field equations by varying the action for an ultraviolet-complete quantum gravity. We then consider the case of a static source term and determine an exact black hole solution. As a result we find a regular spacetime geometry: in place of the conventional curvature singularity, extreme energy fluctuations of the gravitational field at small length scales provide an effective cosmological constant in a region locally described in terms of a de Sitter space. We show that the new metric coincides with the noncommutative-geometry-inspired Schwarzschild black hole. Indeed, we show that the ultraviolet-complete quantum gravity generated by ordinary matter is the dual theory of ordinary Einstein gravity coupled to noncommutative smeared matter. In other words, we obtain further insight into the quantum gravity mechanism that improves Einstein gravity in the vicinity of curvature singularities. This corroborates the existing literature on the physics and phenomenology of noncommutative black holes.
Human Transformer2-beta (hTra2-beta) is an important member of the serine/arginine-rich protein family, and contains one RNA recognition motif (RRM). It controls the alternative splicing of several pre-mRNAs, including those of the calcitonin/calcitonin gene-related peptide (CGRP), the survival motor neuron 1 (SMN1) protein and the tau protein. Accordingly, the RRM of hTra2-beta specifically binds to two types of RNA sequences [the CAA and (GAA)2 sequences]. We determined the solution structure of the hTra2-beta RRM (spanning residues Asn110–Thr201), which not only has a canonical RRM fold, but also an unusual alignment of the aromatic amino acids on the beta-sheet surface. We then solved the complex structure of the hTra2-beta RRM with the (GAA)2 sequence, and found that the AGAA tetra-nucleotide was specifically recognized through hydrogen-bond formation with several amino acids on the N- and C-terminal extensions, as well as stacking interactions mediated by the unusually aligned aromatic rings on the beta-sheet surface. Further NMR experiments revealed that the hTra2-beta RRM recognizes the CAA sequence when it is integrated in the stem-loop structure. This study indicates that the hTra2-beta RRM recognizes two types of RNA sequences in different RNA binding modes.
The first measurement of two-pion Bose–Einstein correlations in central Pb–Pb collisions at √sNN=2.76 TeV at the Large Hadron Collider is presented. We observe a growing trend with energy now not only for the longitudinal and the outward but also for the sideward pion source radius. The pion homogeneity volume and the decoupling time are significantly larger than those measured at RHIC.
Inclusive transverse momentum spectra of primary charged particles in Pb–Pb collisions at √sNN=2.76 TeV have been measured by the ALICE Collaboration at the LHC. The data are presented for central and peripheral collisions, corresponding to 0–5% and 70–80% of the hadronic Pb–Pb cross section. The measured charged particle spectra in |η|<0.8 and 0.3<pT<20 GeV/c are compared to the expectation in pp collisions at the same √sNN, scaled by the number of underlying nucleon–nucleon collisions. The comparison is expressed in terms of the nuclear modification factor RAA. The result indicates only weak medium effects (RAA≈0.7) in peripheral collisions. In central collisions, RAA reaches a minimum of about 0.14 at pT=6–7 GeV/c and increases significantly at larger pT. The measured suppression of high-pT particles is stronger than that observed at lower collision energies, indicating that a very dense medium is formed in central Pb–Pb collisions at the LHC.
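The nuclear modification factor used above is, bin by bin in pT, the per-event Pb–Pb yield divided by the pp yield scaled by the number of binary nucleon–nucleon collisions. A minimal numerical sketch with purely illustrative (not measured) yields:

```python
import numpy as np

def nuclear_modification_factor(yield_aa, n_coll, yield_pp):
    """R_AA = (per-event Pb-Pb yield) / (N_coll * pp yield), bin by bin in pT."""
    return yield_aa / (n_coll * yield_pp)

# Purely illustrative per-event yields dN/dpT in three pT bins (low, 6-7 GeV/c, high);
# numbers are invented to mimic the qualitative shape quoted in the abstract.
yield_pp = np.array([1.0, 0.30, 0.08])     # pp reference spectrum
yield_aa = np.array([480.0, 67.2, 44.8])   # central Pb-Pb spectrum
n_coll = 1600.0                            # assumed number of binary NN collisions

r_aa = nuclear_modification_factor(yield_aa, n_coll, yield_pp)
# R_AA = 1 would mean no medium effect; a dip followed by a rise mimics the data.
```

With these invented numbers R_AA dips at intermediate pT and rises again, the pattern the measurement reports.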
The inclusive charged particle transverse momentum distribution is measured in proton–proton collisions at √s=900 GeV at the LHC using the ALICE detector. The measurement is performed in the central pseudorapidity region (|η|<0.8) over the transverse momentum range 0.15<pT<10 GeV/c. The correlation between transverse momentum and particle multiplicity is also studied. Results are presented for inelastic (INEL) and non-single-diffractive (NSD) events. The average transverse momentum for |η|<0.8 is 〈pT〉INEL=0.483±0.001 (stat.)±0.007 (syst.) GeV/c and 〈pT〉NSD=0.489±0.001 (stat.)±0.007 (syst.) GeV/c, respectively. The data exhibit a slightly larger 〈pT〉 than measurements in wider pseudorapidity intervals. The results are compared to simulations with the Monte Carlo event generators PYTHIA and PHOJET.
We consider the phase structure of hadronic and hadron-quark models at finite temperature and density. The basis for the hadronic part is an extension of a flavor-SU(3) model. We study the effect on the phase diagram of adding additional hadronic resonances to the model. With the resulting equation of state we investigate heavy-ion collisions using hydrodynamical simulations. In a combined approach we include quarks and the Polyakov loop field in the calculation and study chiral symmetry restoration and the deconfinement transition.
We calculate leading-order dilepton yields from a quark-gluon plasma which has a time-dependent anisotropy in momentum space. Such anisotropies can arise during the earliest stages of quark-gluon plasma evolution due to the rapid longitudinal expansion of the created matter. A phenomenological model for the proper time dependence of the parton hard momentum scale, p_hard, and the plasma anisotropy parameter, xi, is proposed. The model describes the transition of the plasma from a 0+1 dimensional collisionally-broadened expansion at early times to a 0+1 dimensional ideal hydrodynamic expansion at late times. We find that high-energy dilepton production is enhanced by pre-equilibrium emission up to 50% at LHC energies, if one assumes an isotropization/thermalization time of 2 fm/c. Given sufficiently precise experimental data this enhancement could be used to determine the plasma isotropization time experimentally.
Poster presentation: Our work deals with the self-organization [1] of a memory structure that includes multiple hierarchical levels with massive recurrent communication within and between them. Such a structure has to provide a representational basis for the relevant objects to be stored and recalled in a rapid and efficient way. Assuming that the object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the process of structure formation and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits receiving common afferents and bound by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, with all types of synapses – bottom-up, lateral and top-down – being plastic. During the iterative learning, the lower layer of the system is exposed to Gabor filter banks extracted from local points on the face images. Facing an unsupervised learning problem, the system is able to develop a synaptic structure capturing local features and their relations at the lower level, as well as the global identity of the person at the higher level of processing, gradually improving its recognition performance with learning time. ...
Poster presentation: Introduction We study the problem of object recognition invariant to transformations such as translation, rotation and scale. A system is underdetermined if its degrees of freedom (number of possible transformations and potential objects) exceed the available information (image size). Regularization theory solves this problem by adding constraints [1]. It is unclear what constraints biological systems use. We suggest that rather than seeking constraints, an underdetermined system can make decisions based on the available information by grouping its variables. We propose a dynamical system as a minimal system for invariant recognition to demonstrate this strategy. ...
Poster presentation: Introduction Rhythmic synchronization of neural activity in the gamma-frequency range (30–100 Hz) has been observed in many brain regions; see the review in [1]. The functional relevance of these oscillations remains to be clarified, a task that requires modeling of the relevant aspects of information processing. The temporal correlation hypothesis, reviewed in [2], proposes that the temporal correlation of neural units provides a means to group them into so-called neural assemblies that are supposed to represent mental objects. Here, we approach the modeling of the temporal grouping of neural units from the perspective of oscillatory neural network systems based on phase-model oscillators. Patterns are assumed to be stored in the network based on Hebbian memory, and assemblies are identified with phase-locked subsets of these patterns. Going beyond previous discussions, we demonstrate the combination of two recently discussed mechanisms, referred to as "acceleration" [3] and "pooling" [4]. The combination realizes in a complementary manner a competition for activity on a local scale, while providing a competition for coherence among different assemblies on a non-local scale. ...
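The phase-model setting described above can be sketched in a few lines: identical Kuramoto-type phase oscillators with Hebbian couplings storing one binary pattern, so that units sharing a pattern sign phase-lock into an assembly. This is an illustrative toy, not the "acceleration"/"pooling" model itself; all parameter values are invented.

```python
import numpy as np

def simulate_phase_network(K, omega, steps=4000, dt=0.01, seed=0):
    """Euler-integrate dtheta_i/dt = omega_i + sum_j K_ij sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]        # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))
    return theta % (2 * np.pi)

# Hebbian couplings storing one binary pattern xi: equal-sign units attract,
# opposite-sign units repel, splitting the network into two assemblies.
N = 8
xi = np.array([1, 1, 1, 1, -1, -1, -1, -1])
K = 0.5 * np.outer(xi, xi) / N
np.fill_diagonal(K, 0.0)
omega = np.zeros(N)                                   # identical natural frequencies

theta = simulate_phase_network(K, omega)
coh_a = abs(np.mean(np.exp(1j * theta[:4])))          # coherence within assembly A
coh_b = abs(np.mean(np.exp(1j * theta[4:])))          # coherence within assembly B
```

Each assembly ends up internally phase-locked (coherence near 1), while the two assemblies separate in phase.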
Poster presentation: Introduction We here address the problem of integrating information about multiple objects and their positions in the visual scene. A primate visual system has little difficulty in rapidly achieving this integration, given only a few objects. Unfortunately, computer vision still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism for handling information about objects and their positions in parallel. Elaborating on this idea, we propose a neurally plausible mechanism for integrating local decisions about "what" and "where" information into global multi-object recognition. ...
Poster presentation: Introduction We here focus on constructing a hierarchical neural system for position-invariant recognition, one of the most fundamental forms of invariant recognition achieved in visual processing [1,2]. Invariant recognition has been hypothesized to be accomplished by matching the sensory image of a particular object projected onto the retina to the most suitable representation stored in the memory of a higher visual cortical area. Here a general problem arises: in such visual processing, the position of the object image on the retina is initially uncertain. Furthermore, the retinal activities carrying the sensory information differ greatly from those in the higher area, where part of the sensory object information is lost. Nevertheless, despite this ambiguity, the particular object can be recognized effortlessly and easily. Our aim in this work is to resolve this general recognition problem. ...
Background: The immune system is a complex adaptive system of cells and molecules that are interwoven in a highly organized communication network. Primary immune deficiencies are disorders in which essential parts of the immune system are absent or do not function according to plan. X-linked agammaglobulinemia is a B-lymphocyte maturation disorder in which the production of immunoglobulin is prohibited by a genetic defect. Patients have to be put on life-long immunoglobulin substitution therapy in order to prevent recurrent and persistent opportunistic infections. Methodology: We formulate an immune response model in terms of stochastic differential equations and perform a systematic analysis of empirical therapy protocols that differ in the treatment frequency. The model accounts for the immunoglobulin reduction by natural degradation and by antigenic consumption, as well as for the periodic immunoglobulin replenishment that gives rise to an inhomogeneous distribution of immunoglobulin specificities in the shape space. Results are obtained from computer simulations and from analytical calculations within the framework of the Fokker-Planck formalism, which enables us to derive closed expressions for undetermined model parameters such as the infection clearance rate. Conclusions: We find that the critical value of the clearance rate, below which a chronic infection develops, is strongly dependent on the strength of fluctuations in the administered immunoglobulin dose per treatment and is an increasing function of the treatment frequency. The comparative analysis of therapy protocols with regard to the treatment frequency yields quantitative predictions of therapeutic relevance, where the choice of the optimal treatment frequency reveals a conflict of competing interests: In order to diminish immunomodulatory effects and to make good economic sense, therapeutic immunoglobulin levels should be kept close to physiological levels, implying high treatment frequencies. 
However, clearing infections without additional medication is more reliably achieved by substitution therapies with low treatment frequencies. Our immune response model predicts that the compromise solution of immunoglobulin substitution therapy has a treatment frequency in the range from one infusion per week to one infusion per two weeks.
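The therapy comparison can be illustrated with a toy version of the model: immunoglobulin decaying by natural degradation, replenished by periodic boluses whose dose fluctuates, with weekly versus bi-weekly infusions delivering the same average dose rate. All units, rates, and doses below are hypothetical stand-ins, not the fitted parameters of the study (which also includes antigenic consumption).

```python
import numpy as np

def simulate_ig_level(dose_interval_days, mean_dose, dose_cv, decay_rate=0.03,
                      t_end=200.0, dt=0.05, seed=1):
    """Euler scheme for dIg/dt = -decay_rate * Ig, with a noisy immunoglobulin
    bolus added every dose_interval_days (hypothetical units)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    ig = np.zeros(n)
    level = 0.0
    next_dose = 0.0
    for i in range(n):
        t = i * dt
        if t >= next_dose:                       # periodic replenishment
            level += mean_dose * (1.0 + dose_cv * rng.standard_normal())
            next_dose += dose_interval_days
        level -= decay_rate * level * dt         # natural degradation
        ig[i] = level
    return ig

# Same average dose rate, two protocols: weekly vs. bi-weekly infusions.
weekly = simulate_ig_level(dose_interval_days=7.0, mean_dose=1.0, dose_cv=0.1)
biweekly = simulate_ig_level(dose_interval_days=14.0, mean_dose=2.0, dose_cv=0.1)
```

In this toy setting the weekly protocol keeps the level closer to a steady value (smaller sawtooth swing), illustrating the trade-off the abstract describes between treatment frequency and level stability.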
Experience-driven formation of parts-based representations in a model of layered visual memory
(2009)
Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during the recall and the synaptic patterns comprising the memory traces. Keywords: visual memory, self-organization, unsupervised learning, competitive learning, bidirectional plasticity, activity homeostasis, parts-based representation, cortical column
Poster presentation: Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to characterize the relationship between neural spiking activity and the features of putative stimuli. These methods include: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and more recently, the point process generalized linear model (GLM). We compared the performance of these three approaches in estimating receptive field properties and orientation tuning of 251 V1 neurons recorded from 2 monkeys during a fixation period in response to a moving bar. The GLM consisted of two formulations of the conditional intensity function for a point process characterization of the spiking activity: one with a stimulus-only component and one with the stimulus and spike history. We fit the GLMs by maximum likelihood using GLMfit in Matlab. Goodness-of-fit was assessed using cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem to evaluate the accuracy with which each model predicts the spiking activity of individual neurons and for each movement direction (4016 models in total, for 251 neurons and 16 different directions). The GLMs that considered spike history of up to 35 ms accurately predicted neuronal spiking activity (95% confidence intervals for KS test) with a performance of 97.0% (3895/4016) for the training data, and 96.5% (3876/4016) for the test data. If spike history was not considered, performance dropped to 73.1% in the training and 71.3% in the test data. In contrast, the WVK and the STA predicted spiking accurately for 24.2% and 44.5% of the test data examples, respectively. The receptive field size estimates obtained from the GLM (with and without history), WVK and STA were comparable. Relative to the GLM, orientation tuning was underestimated on average by a factor of 0.45 by the WVK and the STA.
The main reason for using the STA and WVK approaches is their apparent simplicity. However, our analyses suggest that more accurate spike prediction as well as more credible estimates of receptive field size and orientation tuning can be computed easily using GLMs implemented in Matlab with standard functions such as GLMfit.
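As a rough illustration of the GLM approach, the following plain-numpy Newton-Raphson fit of a Poisson GLM with a log link plays the role of the Matlab GLMfit call; the stimulus filter and the spike counts are synthetic, and a spike-history formulation would simply add lagged-spike columns to the design matrix.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Newton-Raphson maximum-likelihood fit of a Poisson GLM with log link;
    a plain-numpy stand-in for Matlab's GLMfit."""
    X = np.column_stack([np.ones(len(y)), X])        # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(np.clip(X @ w, -20.0, 20.0))     # conditional intensity
        grad = X.T @ (y - mu)                        # score vector
        hess = X.T @ (X * mu[:, None])               # Fisher information
        w = w + np.linalg.solve(hess, grad)
    return w

# Synthetic spike counts driven by a known stimulus weight (illustrative only).
rng = np.random.default_rng(0)
T = 5000
stim = rng.standard_normal(T)
rate = np.exp(-3.0 + 1.2 * stim)                     # true intercept -3.0, weight 1.2
spikes = rng.poisson(rate)

w_hat = fit_poisson_glm(stim[:, None], spikes)       # recovers [-3.0, 1.2] approximately
```

The Poisson log-likelihood is concave in the weights, so Newton-Raphson converges reliably here; the clip on the linear predictor only guards against overflow in early iterations.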
Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update of the status of this hypothesis by summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and near-zero phase lag synchronization. In the second part, we discuss the role of neural synchrony for expectancy during perceptual organization and its role in conscious experience. This will be followed by evidence that indicates that in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders, such as schizophrenia and autism spectrum disorders. We conclude this paper with suggestions for further research as well as with critical issues that need to be addressed in future studies.
The cerebral cortex presents itself as a distributed dynamical system with the characteristics of a small world network. The neuronal correlates of cognitive and executive processes often appear to consist of the coordinated activity of large assemblies of widely distributed neurons. These features require mechanisms for the selective routing of signals across densely interconnected networks, the flexible and context-dependent binding of neuronal groups into functionally coherent assemblies and the task- and attention-dependent integration of subsystems. In order to implement these mechanisms, it is proposed that neuronal responses should convey two orthogonal messages in parallel. They should indicate (1) the presence of the feature to which they are tuned and (2) with which other neurons (specific target cells or members of a coherent assembly) they are communicating. The first message is encoded in the discharge frequency of the neurons (rate code) and it is proposed that the second message is contained in the precise timing relationships between individual spikes of distributed neurons (temporal code). It is further proposed that these precise timing relations are established either by the timing of external events (stimulus locking) or by internal timing mechanisms. The latter are assumed to consist of an oscillatory modulation of neuronal responses in different frequency bands that cover a broad frequency range from <2 Hz (delta) to >40 Hz (gamma) and ripples. These oscillations limit the communication of cells to short temporal windows, whereby the duration of these windows decreases with oscillation frequency. Thus, by varying the phase relationship between oscillating groups, networks of functionally cooperating neurons can be flexibly configured within hard-wired networks.
Moreover, by synchronizing the spikes emitted by neuronal populations, the saliency of their responses can be enhanced due to the coincidence sensitivity of receiving neurons in very much the same way as can be achieved by increasing the discharge rate. Experimental evidence will be reviewed in support of the coexistence of rate and temporal codes. Evidence will also be provided that disturbances of temporal coding mechanisms are likely to be one of the pathophysiological mechanisms in schizophrenia.
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (<= ~20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs.
Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal patterns, has long been believed to be a major component of neuronal activity. However, the discussion of whether coordinated activity really exists has remained heated and controversial. A major uncertainty is that many analysis approaches either ignored the auto-structure of the spiking activity, assumed a very simplified model (Poissonian firing), or changed the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity occurs beyond chance can be made false if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike trains modeled as renewal processes. We considered Gamma processes with different shape parameters as well as renewal processes in which the ISI distribution is log-normal. For Gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width and also investigated how they depend on the firing rate and on the rate difference between the neurons. We used Monte-Carlo simulations to estimate the whole distribution for these parameters and also for other values of the Gamma shape parameter. Moreover, we considered the effect of dithering for both of these processes and saw that while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence count distribution depends very critically and in a non-trivial way on the detailed properties of the inter-spike interval distribution, 2) the dependencies of the Fano factor on the coefficient of variation of the ISI distribution are complex and mostly non-monotonic.
Moreover, the Fano factor depends on the very detailed properties of the individual point processes and cannot be predicted by the CV alone. Hence, given a recorded data set, the estimated CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence count distribution, and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases, and especially for complex synchronous and spatiotemporal patterns across many neurons, spike jittering increased the likelihood of false positive findings very strongly. Last, we discuss a procedure [1] that considers the complete auto-structure of each individual spike train when testing whether synchronous firing occurs by chance, and therefore overcomes the danger of an increased level of false positives.
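The Monte-Carlo setup described above can be sketched as follows: independent Gamma-renewal spike trains at a common rate, binned coincidence counts across many repetitions, and the Fano factor of the resulting count distribution for two shape parameters. All parameter values (rate, bin width, durations) are illustrative, not those of the study.

```python
import numpy as np

def gamma_spike_train(rate, shape, t_end, rng):
    """Renewal process with Gamma(shape, scale) ISIs; the scale is chosen so
    the mean ISI equals 1/rate."""
    isi = rng.gamma(shape, 1.0 / (shape * rate), size=int(3 * rate * t_end) + 100)
    times = np.cumsum(isi)
    return times[times < t_end]

def coincidence_count(train_a, train_b, bin_width, t_end):
    """Number of bins in which both trains spike at least once."""
    edges = np.arange(0.0, t_end + bin_width, bin_width)
    a, _ = np.histogram(train_a, edges)
    b, _ = np.histogram(train_b, edges)
    return int(np.sum((a > 0) & (b > 0)))

rng = np.random.default_rng(2)
counts = {shape: [] for shape in (1.0, 4.0)}   # shape 1 = Poisson, 4 = more regular
for shape in counts:
    for _ in range(200):                        # 200 independent trials
        a = gamma_spike_train(20.0, shape, 10.0, rng)
        b = gamma_spike_train(20.0, shape, 10.0, rng)
        counts[shape].append(coincidence_count(a, b, 0.005, 10.0))

fano = {s: np.var(c) / np.mean(c) for s, c in counts.items()}
```

In this toy run the mean coincidence count is similar for both shapes, but the more regular trains (shape 4) give a narrower count distribution, i.e. a smaller Fano factor, illustrating why the CV and auto-structure of the ISIs matter for significance testing.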
Poster presentation: How can two distant neural assemblies synchronize their firings at zero lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more evidence is accumulating in favor of its functional role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such a dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that when two distant neural assemblies do not interact directly but relay their dynamics via a third mediating neuron or population, they can eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms the distant neural populations can synchronize their firings at zero lag with millisecond precision after the exchange of a few spikes. The roles of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations are also studied, confirming the robustness of this synchronization mechanism. The proposed network module is densely embedded within the complex functional architecture of the brain and especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication.
In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous works, neither inhibition, gap junctions, nor complex network topologies need to be invoked to provide a stable mechanism of zero-phase correlated activity of neural populations in the presence of large conduction delays.
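The relay motif lends itself to a minimal sketch with phase oscillators instead of Hodgkin-Huxley populations: two outer units A and B couple only through a mediating unit M, every link carrying the same conduction delay. Because A and B receive identical (delayed) input from M, their phase difference contracts to zero regardless of the delay. Coupling strength, frequency, and delay below are illustrative.

```python
import numpy as np

def simulate_relay(K=0.8, omega=2 * np.pi * 1.0, delay=0.015, t_end=30.0,
                   dt=0.001, seed=3):
    """Three phase oscillators A-M-B with delayed sinusoidal coupling; A and B
    interact only via the relay M (a toy stand-in for the spiking populations)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    d = int(delay / dt)                       # delay in integration steps
    th = np.zeros((n, 3))                     # columns: A, M, B
    th[:d + 1] = rng.uniform(0, 2 * np.pi, size=3)   # constant random history
    for i in range(d, n - 1):
        a, m, b = th[i]
        am, mm, bm = th[i - d]                # phases seen through the delay
        th[i + 1, 0] = a + dt * (omega + K * np.sin(mm - a))
        th[i + 1, 1] = m + dt * (omega + K * np.sin(am - m) + K * np.sin(bm - m))
        th[i + 1, 2] = b + dt * (omega + K * np.sin(mm - b))
    return th

th = simulate_relay()                          # 15 ms delay, as in the abstract
lag_ab = np.angle(np.exp(1j * (th[-1, 0] - th[-1, 2])))   # A-B phase difference
```

The outer units end at essentially zero phase lag even though every connection is delayed; any residual lag sits between the outer units and the relay, not between A and B.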
Poster presentation: Background To test the importance of synchronous neuronal firing for information processing in the brain, one has to investigate whether the strength of synchronous firing is correlated with the experimental conditions. This requires a tool that can compare the strength of synchronous firing across different conditions, while at the same time correcting for other features of neuronal firing, such as spike rate modulation or the auto-structure of the spike trains, that might co-occur with synchronous firing. Here we present the bi- and multivariate extension of the previously developed method NeuroXidence [1,2], which allows for comparing the amount of synchronous firing between different conditions. ...
Poster presentation: Introduction Adequate anesthesia is crucial to the success of surgical interventions and subsequent recovery. Neuroscientists, surgeons, and engineers have sought to understand the impact of anesthetics on information processing in the brain and to properly assess the level of anesthesia in a non-invasive manner. Studies have indicated a more reliable depth of anesthesia (DOA) detection if multiple parameters are employed. Indeed, commercial DOA monitors (BIS, Narcotrend, M-Entropy and A-line ARX) use more than one feature extraction method. Here, we propose TESPAR (Time Encoded Signal Processing And Recognition), a time-domain signal processing technique novel to EEG DOA assessment that could enhance existing monitoring devices. ...
NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events
(2009)
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including the changes in the rate responses and the history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data we compare the performance of NeuroXidence with the UE-method [2,3] and the cross-correlation analysis. An application of NeuroXidence to 42 single units (SU) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexities, involving firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexities showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and, therefore, support the temporal coding hypothesis in visual cortex. ...
Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality and most synchronization measures do not distinguish causal and non-causal interactions among remote brain areas, i.e. determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive Magneto-/Electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction leading to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and not influenced by volume conduction. ...
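For two binary time series and history length one, transfer entropy reduces to a sum over joint states; a minimal plug-in histogram estimator might look as follows (an illustration of the TE definition only, not the estimator or embedding used in the study):

```python
import numpy as np

# Plug-in histogram estimator of transfer entropy TE(X -> Y), in bits,
# for binary sequences with history length 1 (illustrative only).
def transfer_entropy(x, y):
    counts = np.zeros((2, 2, 2))            # joint counts over (y_{t+1}, y_t, x_t)
    for t in range(len(x) - 1):
        counts[y[t + 1], y[t], x[t]] += 1
    p = counts / counts.sum()
    te = 0.0
    for y1 in (0, 1):
        for y0 in (0, 1):
            for x0 in (0, 1):
                pj = p[y1, y0, x0]
                if pj == 0:
                    continue
                p_cond_full = pj / p[:, y0, x0].sum()                  # p(y1 | y0, x0)
                p_cond_self = p[y1, y0, :].sum() / p[:, y0, :].sum()   # p(y1 | y0)
                te += pj * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y = np.roll(x, 1)                            # Y copies X with one-step delay
print(transfer_entropy(x, y))                # close to 1 bit: X drives Y
print(transfer_entropy(y, x))                # near 0: no information flow Y -> X
```

The asymmetry of the two estimates is what makes TE a directed, causal-style measure, unlike symmetric correlation or mutual information.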
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success. Keywords: synaptic plasticity, intrinsic plasticity, recurrent neural networks, reservoir computing, time series prediction
Poster presentation: Introduction Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regular firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (like pharmacological treatment or genetical manipulations) on these neuronal discharge patterns may be subtle. Applying a stochastic model is a quantitative approach to reveal these changes. ...
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
Effects of a phase transition on HBT correlations in an integrated Boltzmann+hydrodynamics approach
(2009)
A systematic study of HBT radii of pions, produced in heavy ion collisions in the intermediate energy regime (SPS), from an integrated (3+1)d Boltzmann+hydrodynamics approach is presented. The calculations in this hybrid approach, incorporating a hydrodynamic stage into the Ultra-relativistic Quantum Molecular Dynamics transport model, allow for a comparison of different equations of state retaining the same initial conditions and final freeze-out. The results are also compared to the pure cascade transport model calculations in the context of the available data. Furthermore, the effect of different treatments of the hydrodynamic freeze-out procedure on the HBT radii is investigated. It is found that the HBT radii are essentially insensitive to the details of the freeze-out prescription as long as the final hadronic interactions in the cascade are taken into account. The HBT radii RL and RO and the RO/RS ratio are sensitive to the EoS that is employed during the hydrodynamic evolution. We conclude that the increased lifetime in the case of a phase transition to a QGP (via a Bag Model equation of state) is not supported by the available data.
The upcoming high energy experiments at the LHC are one of the most outstanding efforts for a better understanding of nature, and they are associated with great hopes in the physics community. But there is also some public fear that the conjectured production of mini black holes might lead to a dangerous chain reaction. In this Letter we summarize the most straightforward arguments that are necessary to rule out such doomsday scenarios.
We developed a Monte Carlo event generator for the production of nucleon configurations in complex nuclei, consistently including effects of nucleon–nucleon (NN) correlations. Our approach is based on the Metropolis search for configurations satisfying essential constraints imposed by short- and long-range NN correlations, guided by the findings of realistic calculations of one- and two-body densities for medium-heavy nuclei. The resulting event generator can be used for Monte Carlo (MC) studies of pA and AA collisions. We perform several consistency tests of the code and compare with previous models for high energy proton–nucleus scattering on an event-by-event basis, using nucleus configurations produced by our code together with Glauber multiple scattering theory for both the uncorrelated and the correlated configurations; fluctuations of the average number of collisions are shown to be affected considerably by the introduction of NN correlations in the target nucleus. We also use the generator to estimate maximal possible gluon nuclear shadowing in a simple geometric model.
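The Metropolis idea behind such a generator can be sketched as follows: propose single-nucleon moves, reject configurations violating a short-range exclusion, and accept moves according to a one-body density. All parameters below (Woods-Saxon radius, diffuseness, hard-core distance) are illustrative assumptions, not the realistic constraints used in the paper:

```python
import numpy as np

# Toy Metropolis sampler for nucleon positions: a Woods-Saxon one-body
# density plus a hard-core short-range NN repulsion (assumed parameters).
R, a = 5.0, 0.5        # Woods-Saxon radius and diffuseness (fm)
CORE = 0.9             # excluded hard-core NN distance (fm)
A = 40                 # mass number

def log_density(pos):
    r = np.linalg.norm(pos, axis=1)
    return -np.log1p(np.exp((r - R) / a)).sum()   # sum of log Woods-Saxon weights

def no_overlap(pos):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min() > CORE

rng = np.random.default_rng(1)
pos = rng.uniform(-R, R, (A, 3))
while not no_overlap(pos):                        # redraw until a valid start
    pos = rng.uniform(-R, R, (A, 3))

accepted = 0
for step in range(20_000):
    i = rng.integers(A)
    trial = pos.copy()
    trial[i] += rng.normal(0, 0.3, 3)             # propose a single-nucleon move
    if no_overlap(trial) and np.log(rng.random()) < log_density(trial) - log_density(pos):
        pos, accepted = trial, accepted + 1
```

Each accepted configuration respects the hard-core constraint by construction, so pair distances never fall below the excluded core, while the Metropolis rule drives the one-body distribution toward the target density.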
Background: In this interdisciplinary project, the biological effects of heavy ions are compared to those of X-rays using tissue slice culture preparations from rodents and humans. Advantages of this biological model are the conservation of an organotypic environment and the independence from the genetic immortalization strategies used to generate cell lines. Its open access allows easy treatment and observation via live-imaging microscopy. Materials and methods: Rat brains and human brain tumor tissue are cut into 300 µm thick tissue slices. These slices are cultivated using a membrane-based culture system and kept in an incubator at 37°C until treatment. The slices are treated with X-rays at the radiation facility of the University Hospital in Frankfurt at doses of up to 40 Gy. The heavy ion irradiations were performed at the UNILAC facility at GSI with different ions of 11.4 A MeV and fluences ranging from 0.5–10 × 10⁶ particles/cm². Using 3D confocal microscopy, cell death and immune cell activation of the irradiated slices are analyzed. Planning of the irradiation experiments is done with simulation programs developed at GSI and FIAS. Results: After receiving a single application of either X-rays or heavy ions, slices were kept in culture for up to 9 days post irradiation. DNA damage was visualized using γH2AX staining. Here, a dose-dependent increase and time-dependent decrease could clearly be observed for the X-ray irradiation. Slices irradiated with heavy ions showed fewer γH2AX-positive cells, distributed evenly throughout the slice, even though particles were calculated to penetrate only 90–100 µm into the slice. Conclusions: Single irradiations of brain tissue, even at high doses of 40 Gy, result neither in macroscopically visible tissue damage nor in necrosis. This is in line with the view that the brain is highly radio-resistant. However, DNA damage can be detected very well in tissue slices using γH2AX immunostaining. Thus, slice cultures are an excellent tool to study radiation-induced damage and repair mechanisms in living tissues.
Poster presentation: A central problem in neuroscience is to bridge local synaptic plasticity and the global behavior of a system. It has been shown that Hebbian learning of connections in a feedforward network performs PCA on its inputs [1]. In a recurrent Hopfield network with binary units, the Hebbian-learnt patterns form the attractors of the network [2]. Starting from a random recurrent network, Hebbian learning reduces system complexity from chaotic to fixed point [3]. In this paper, we investigate the effect of Hebbian plasticity on the attractors of a continuous dynamical system. In a Hopfield network with binary units, it can be shown that Hebbian learning of an attractor stabilizes it, deepening the energy landscape and enlarging the basin of attraction. We are interested in how these properties carry over to continuous dynamical systems. Consider a system of the form dxᵢ/dt = −xᵢ + fᵢ(gᵢ Σⱼ Tᵢⱼ xⱼ) (1), where xᵢ is a real variable and fᵢ a nondecreasing nonlinear function with range [-1,1]. T is the synaptic matrix, which is assumed to have been learned from orthogonal binary ({1,-1}) patterns ξμ by the Hebbian rule Tᵢⱼ = (1/N) Σμ ξᵢμ ξⱼμ. Similar to the continuous Hopfield network [4], the ξμ are no longer attractors unless the gains gᵢ are big. Assume that the system settles down to an attractor X* and undergoes Hebbian plasticity: T′ = T + εX*X*ᵀ, where ε > 0 is the learning rate. We study how the attractor dynamics change following this plasticity. We show that, in system (1) under certain general conditions, Hebbian plasticity moves the attractor towards its corner of the hypercube. Linear stability analysis around the attractor shows that the maximum eigenvalue becomes more negative with learning, indicating a deeper landscape. This in a way improves the system's ability to retrieve the corresponding stored binary pattern, although the attractor itself is not stabilized the way it is in binary Hopfield networks.
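The setup above can be sketched numerically. In the following minimal example (gain, learning rate, network size, and pattern choice are illustrative assumptions), the attractor reached from a cue of a stored pattern moves closer to its hypercube corner after a single Hebbian update:

```python
import numpy as np

# Continuous Hopfield-style dynamics dx/dt = -x + tanh(g * T x), with
# T Hebbian-learnt from orthogonal binary patterns, then updated by
# T' = T + eps * outer(x*, x*) at an attractor x*. Parameters assumed.
N = 8
xi = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
               [1, -1, 1, -1, 1, -1, 1, -1]])   # two orthogonal patterns
T = (xi.T @ xi) / N                             # Hebbian rule T = (1/N) sum_mu xi xi^T

def settle(T, x, g=2.0, dt=0.1, steps=2000):
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(g * (T @ x)))
    return x

x0 = xi[0] * 0.3 + 0.01                         # weak noisy cue of pattern 1
x_star = settle(T, x0.astype(float))

eps = 0.05
T2 = T + eps * np.outer(x_star, x_star)         # Hebbian plasticity at the attractor
x_star2 = settle(T2, x_star)

# attractor components move toward +/-1, i.e. toward the hypercube corner
print(np.abs(x_star).mean(), np.abs(x_star2).mean())
```

The increase in the mean absolute component illustrates the claimed corner-ward drift; the eigenvalue analysis in the abstract would additionally show a steeper local landscape.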
The influence of visual tasks on short and long-term memory for visual features was investigated using a change-detection paradigm. Subjects completed 2 tasks: (a) describing objects in natural images, reporting a specific property of each object when a crosshair appeared above it, and (b) viewing a modified version of each scene, and detecting which of the previously described objects had changed. When tested over short delays (seconds), no task effects were found. Over longer delays (minutes) we found the describing task influenced what types of changes were detected in a variety of explicit and incidental memory experiments. Furthermore, we found surprisingly high performance in the incidental memory experiment, suggesting that simple tasks are sufficient to instill long-lasting visual memories. Keywords: visual working memory, natural scenes, natural tasks, change detection
We suggest a new method to compute the spectrum and wave functions of excited states. We construct a stochastic basis of Bargmann link states, drawn from a physical probability density distribution, and compute transition amplitudes between stochastic basis states. From this transition matrix we extract wave functions and the energy spectrum. We apply this method to U(1) lattice gauge theory in 2+1 dimensions. As a test we compute the energy spectrum, wave functions and thermodynamical functions of the electric Hamiltonian and compare them with analytical results. We find excellent agreement. We observe scaling of energies and wave functions in the time variable. We also present first results on a small lattice for the full Hamiltonian including the magnetic term.
What is the energy function guiding behavior and learning? Representation-based approaches like maximum entropy, generative models, sparse coding, or slowness principles can account for unsupervised learning of biologically observed structure in sensory systems from raw sensory data. However, they do not relate to behavior. Behavior-based approaches like reinforcement learning explain animal behavior in well-described situations. However, they rely on high-level representations which they cannot extract from raw sensory data. Combining multiple goal functions seems the methodology of choice for understanding the complexity of the brain. But what is the set of possible goals? ...
We argue that clustering in heavy ion collisions could be the missing element in resolving the so-called HBT puzzle, and briefly discuss the different physical situations where clustering could be present. We then propose a method by which clustering in heavy ion collisions could be detected in a model-independent way.
Recently, two-photon imaging has allowed intravital tracking of lymphocyte migration and cellular interactions during germinal center (GC) reactions. The implications of two-photon measurements obtained by several investigators are currently the subject of controversy. With the help of two mathematical approaches, we reanalyze these data. It is shown that the measured lymphocyte migration frequency between the dark and the light zone is quantitatively explained by persistent random walk of lymphocytes. The cell motility data imply a fast intermixture of cells within the whole GC in approximately 3 h, and this does not allow for maintenance of dark and light zones. The model predicts that chemotaxis is active in GCs to maintain GC zoning and demonstrates that chemotaxis is consistent with two-photon lymphocyte motility data. However, the model also predicts that the chemokine sensitivity is quickly down-regulated. On the basis of these findings, we formulate a novel GC lymphocyte migration model and propose its verification by new two-photon experiments that combine the measurement of B cell migration with that of specific chemokine receptor expression levels. In addition, we discuss some statistical limitations for the interpretation of two-photon cell motility measurements in general.
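A persistent random walk of the kind invoked here can be simulated in a few lines: directions decorrelate gradually, giving ballistic motion at short times and diffusive motion at long times. Speed and turning noise below are assumed values, not the fitted lymphocyte parameters from the paper:

```python
import numpy as np

# Persistent random walk in 2D: each step turns the heading by a small
# random angle, producing ballistic short-time and diffusive long-time
# mean squared displacement (MSD). Parameters are illustrative.
rng = np.random.default_rng(2)
dt, speed, turn_sd = 1.0, 1.0, 0.3     # time step, speed, turning noise (rad/step)
steps, cells = 2000, 200

angles = np.cumsum(rng.normal(0, turn_sd, (cells, steps)), axis=1)
vel = speed * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
pos = np.cumsum(vel * dt, axis=1)       # trajectories, shape (cells, steps, 2)

def msd(lag):
    d = pos[:, lag:] - pos[:, :-lag]
    return (d ** 2).sum(-1).mean()

# short-lag ratio near 4 (ballistic, MSD ~ t^2);
# long-lag ratio near 2 (diffusive, MSD ~ t)
print(msd(2) / msd(1), msd(1000) / msd(500))
```

The crossover time between the two regimes is set by the directional persistence, which is the quantity that controls the dark/light-zone transit frequencies discussed in the abstract.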
A small-world network has been suggested to be an efficient solution for achieving both modular and global processing, a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to assess the strength of coupling from the degree of synchrony. Visual responses were recorded in visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of "hubs" in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding.
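The small-world comparison rests on two graph statistics, the average clustering coefficient and the average shortest path length. The following sketch computes both for a toy ring network with a few added shortcuts standing in for an inferred interaction network (the construction is illustrative, not the data-derived networks of the study):

```python
import numpy as np

# Clustering coefficient and average shortest path length for a toy
# ring-plus-shortcuts graph (a stand-in for an inferred network).
def metrics(adj):
    n = len(adj)
    cc = []
    for i in range(n):                        # per-node clustering coefficient
        nb = np.flatnonzero(adj[i])
        if len(nb) < 2:
            cc.append(0.0)
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2  # edges among i's neighbours
        cc.append(links / (len(nb) * (len(nb) - 1) / 2))
    dists = []
    for s in range(n):                        # BFS shortest paths from each node
        d = np.full(n, -1)
        d[s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(adj[u]):
                    if d[v] < 0:
                        d[v] = d[u] + 1
                        nxt.append(v)
            frontier = nxt
        dists += [d[t] for t in range(n) if t != s]
    return float(np.mean(cc)), float(np.mean(dists))

rng = np.random.default_rng(3)
n, k = 24, 4
adj = np.zeros((n, n), dtype=int)
for i in range(n):                            # regular ring lattice, degree k
    for j in range(1, k // 2 + 1):
        adj[i, (i + j) % n] = adj[(i + j) % n, i] = 1
for _ in range(6):                            # a few random shortcuts
    i, j = rng.integers(n, size=2)
    if i != j:
        adj[i, j] = adj[j, i] = 1

c_ws, l_ws = metrics(adj)
print(c_ws, l_ws)    # high clustering combined with short average paths
```

A small-world network keeps the high clustering of the lattice while the shortcuts pull the average path length down toward that of a random graph, which is the signature reported in the abstract.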
Dilepton production in pp and Au+Au collisions at √sNN = 200 GeV as well as in In+In and Pb+Au collisions at 158 A GeV is studied within the microscopic HSD transport approach. A comparison to the data from the PHENIX Collaboration at RHIC shows that standard in-medium effects of the ρ, ω vector mesons, compatible with the NA60 data for In+In at 158 A GeV and the CERES data for Pb+Au at 158 A GeV, do not explain the large enhancement observed in the invariant mass regime from 0.2 to 0.5 GeV in Au+Au collisions at √sNN = 200 GeV relative to pp collisions.
Poster presentation: Introduction The brain is a highly interconnected network of constantly interacting units. Understanding the collective behavior of these units requires a multi-dimensional approach. The results of such analyses are hard to visualize and interpret. Hence tools capable of dealing with such tasks become imperative. ...
The timing of feedback to early visual cortex in the perception of long-range apparent motion
(2008)
When two visual stimuli are presented one after another in different locations, they are often perceived as a single moving object. Feedback from the human motion complex hMT/V5+ to V1 has been hypothesized to play an important role in this illusory perception of motion. We measured event-related responses to illusory motion stimuli of varying apparent motion (AM) content and retinal location using electroencephalography. Detectable cortical stimulus processing started around 60 ms post-stimulus in area V1. This component was insensitive to AM content and sequential stimulus presentation. Sensitivity to AM content was observed starting around 90 ms after the second stimulus of a sequence and most likely originated in area hMT/V5+. This AM sensitive response was insensitive to retinal stimulus position. The stimulus sequence related response started to be sensitive to retinal stimulus position at a longer latency of 110 ms. We interpret our findings as evidence for feedback from area hMT/V5+ or a related motion processing area to early visual cortices (V1, V2, V3).
We present a measurement of e+e− pair production in central PbAu collisions at 158A GeV/c. As reported earlier, a significant excess of the e+e− pair yield over the expectation from hadron decays is observed. The improved mass resolution of the present data set, recorded with the upgraded CERES experiment at the CERN-SPS, allows for a comparison of the data with different theoretical approaches. The data clearly favor a substantial in-medium broadening of the ρ spectral function over a density-dependent shift of the ρ pole mass. The in-medium broadening model implies that baryon induced interactions are the key mechanism to the observed modifications of the ρ meson at SPS energy.
Relying on the existing estimates for the production cross sections of mini black holes in models with large extra dimensions, we review strategies for identifying those objects at collider experiments. We further consider a possible stable final state of such black holes and discuss their characteristic signatures. Keywords: Black holes
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions from low SPS up to RHIC energies have been studied within the HSD transport approach. Fluctuations of baryonic number and electric charge have also been explored for Pb+Pb collisions at SPS energies in comparison to the experimental data from NA49. We find a dominant role of the fluctuations in the number of nucleon participants for the final hadron multiplicity fluctuations and a strong influence of the experimental acceptance on the final results. Critical Point and Onset of Deconfinement – 4th International Workshop, July 9–13, 2007, Darmstadt, Germany
We study various fluctuation and correlation signals of the deconfined state using a dynamical recombination approach (quark Molecular Dynamics, qMD). We analyse charge ratio fluctuations, charge transfer fluctuations and baryon-strangeness correlations as a function of the center of mass energy with a set of central Pb+Pb/Au+Au events from AGS energies (Elab = 4 A GeV) up to the highest available RHIC energy (√sNN = 200 GeV), and as a function of time with a set of central Au+Au qMD events at √sNN = 200 GeV with and without applying our hadronization procedure. For all studied quantities, the results start from values compatible with a weakly coupled QGP in the early stage and end with values compatible with the hadronic result in the final state. We show that the loss of the signal occurs at the same time as hadronization and trace it back to the dynamical recombination process implemented in our model.
We discuss the present collective flow signals for the phase transition to the quark-gluon plasma (QGP) and the collective flow as a barometer for the equation of state (EoS). We emphasize the importance of the flow excitation function from 1 to 50 A GeV: here the hydrodynamic model has predicted the collapse of the v1 flow at ~10 A GeV and of the v2 flow at ~40 A GeV. In the latter case, this has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as potential evidence for a first order phase transition at high baryon density ρB.
Dynamics of relativistic heavy-ion collisions is investigated on the basis of a simple (1+1)-dimensional hydrodynamical model in light-cone coordinates. The main emphasis is put on studying the sensitivity of the dynamics and observables to the equation of state and initial conditions. Low sensitivity of pion rapidity spectra to the presence of the phase transition is demonstrated, and some inconsistencies of the equilibrium scenario are pointed out. Possible non-equilibrium effects are discussed, in particular the possibility of an explosive disintegration of the deconfined phase into quark-gluon droplets. Simple estimates show that the characteristic droplet size should decrease with increasing collective expansion rate. These droplets will hadronize individually by emitting hadrons from the surface. This scenario should reveal itself through strong non-statistical fluctuations of observables. Critical Point and Onset of Deconfinement – 4th International Workshop, July 9–13, 2007, GSI Darmstadt, Germany