In the coming years, the Facility for Antiproton and Ion Research (FAIR) will be constructed at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, Germany. This new accelerator complex will allow for unprecedented and pathbreaking research in hadronic, nuclear, and atomic physics, as well as in the applied sciences. This manuscript discusses some of these research opportunities, with a focus on few-body physics.
This thesis is dedicated to the study of fluctuation and correlation observables of hadronic equilibrium systems. The statistical hadronization model of high-energy physics, in its ideal (i.e. non-interacting) gas approximation, will be investigated in different ensemble formulations. The hypothesis of thermal and chemical equilibrium in high-energy interactions will be tested against qualitative and quantitative predictions.
This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception, such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception and sequence learning, are modeled in a unifying computational framework while staying close to experimental data from the developmental psychology of infancy. In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (an Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence and occlusion perception in infancy. It is shown that these different phenomena can be unified into a single theoretical framework, thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) different neuronal populations in the visual cortex of the newborn infant processing different motion directions of visual stimuli, an assumption supported by neuroscientific evidence, and (2) available learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that no innate force notions, motion analysis modules, common motion detectors, specific perceptual rules or abilities to "reason" about entities, all of which have been widely postulated in the developmental literature, are necessary to explain the discussed phenomena.
Since the prediction of future events turned out to be fruitful both as a theoretical explanation of various developmental phenomena and as a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organising, fully recurrent neural network model is proposed that forms internal representations of input sequences and maps them onto eye movements. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modelling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing dependent plasticity (STDP), intrinsic plasticity (IP) and synaptic scaling. It is also novel from a technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place, questions that are difficult to assess experimentally. Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events.
This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.
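The Elman-style prediction learning described in the first project can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical toy, not the thesis's actual model: a one-hot stimulus moving across six positions stands in for the visual input, and the layer sizes, learning rate and epoch count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequence: a stimulus moving across 6 positions, one-hot coded
# (a stand-in for the visual input sequences modeled in the thesis).
seq = np.eye(6)

n_in, n_hid = 6, 12
W_xh = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))  # context -> hidden
W_hy = rng.normal(0, 0.5, (n_in, n_hid))   # hidden -> prediction

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    h = np.zeros(n_hid)                    # context units start at rest
    for t in range(len(seq) - 1):
        x, target = seq[t], seq[t + 1]
        h_new = sigmoid(W_xh @ x + W_hh @ h)   # hidden layer with context
        y = sigmoid(W_hy @ h_new)              # prediction of the next input
        # Plain backprop on the squared error (Elman-style: no gradient
        # flows through the copied context activations).
        dy = (y - target) * y * (1 - y)
        dh = (W_hy.T @ dy) * h_new * (1 - h_new)
        W_hy -= lr * np.outer(dy, h_new)
        W_xh -= lr * np.outer(dh, x)
        W_hh -= lr * np.outer(dh, h)
        h = h_new

# After training, the network anticipates the next position in the sequence.
```

The network simply learns to predict the next frame from the current one plus its recurrent context, which is the core computational principle the three projects share.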
Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered, when the activity of different neurons is decorrelated by adaptive lateral inhibition.
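The abstract's spiking mechanism is not reproduced here, but the underlying computation, a single unit learning one independent component, can be sketched with a standard rate-based stand-in: a one-unit fixed-point ICA iteration (FastICA with a tanh nonlinearity) on whitened mixtures. The sources, the mixing matrix and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, super-Gaussian (Laplacian) sources, linearly mixed
# by a hypothetical mixing matrix.
n = 20000
S = rng.laplace(size=(2, n))
A = np.array([[0.8, 0.3], [0.2, 0.9]])
X = A @ S
X -= X.mean(axis=1, keepdims=True)

# Whiten the mixtures (zero mean, identity covariance).
cov = np.cov(X)
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit fixed-point iteration (FastICA, tanh nonlinearity): an
# idealized stand-in for the paper's STDP/scaling/intrinsic-plasticity rules.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    y = w @ Z
    g, gp = np.tanh(y), 1 - np.tanh(y) ** 2
    w = (Z * g).mean(axis=1) - gp.mean() * w
    w /= np.linalg.norm(w)

# The learned projection should align with exactly one source.
y = w @ Z
corr = np.abs([np.corrcoef(y, S[0])[0, 1], np.corrcoef(y, S[1])[0, 1]])
```

In the paper's setting, the whitening and decorrelation steps are taken over by adaptive lateral inhibition, and the fixed-point update by local plasticity; the sketch only shows what "learning one independent component" means operationally.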
FIAS Scientific Report 2009 (2010)
In this Annual Report we present some of the ongoing activities of FIAS and of the associated graduate school, the “Frankfurt International Graduate School for Science” (FIGSS), in the year 2009. The main part of the Report consists of a collection of short reports describing the research projects of scientists working at or associated with FIAS.
Gamma synchronization has generally been associated with grouping processes in the visual system. Here, we examine in monkey V1 whether gamma oscillations play a functional role in segmenting surfaces of plaid stimuli. Local field potentials (LFPs) and spiking activity were recorded simultaneously from multiple sites in the opercular and calcarine regions while the monkeys were presented with sequences of single and superimposed components of plaid stimuli. In accord with previous studies, responses to the single components (gratings) exhibited strong and sustained gamma-band oscillations (30–65 Hz). The superposition of the second component, however, led to profound changes in the temporal structure of the responses, characterized by a drastic reduction of gamma oscillations in the spiking activity and systematic shifts to higher frequencies in the LFP (~10% increase). Comparisons between cerebral hemispheres and across monkeys revealed robust subject-specific spectral signatures. A possible interpretation of our results is that single gratings induce strong cooperative interactions among populations of cells that share similar response properties, whereas plaids lead to competition. Overall, our results suggest that the functional architecture of the cortex is a major determinant of the neuronal synchronization dynamics in V1. Key words: attention, gamma, gratings, oscillation, visual cortex
The goal of this project is to develop a framework for a cell that takes its internal structure into consideration, using an agent-based approach. In this framework, a cell is simulated as many sub-particles interacting with each other. These sub-particles can, in principle, represent any internal structure of the cell (organelles, etc.). In the model discussed here, two types of sub-particles were used: membrane sub-particles and cytosolic elements. A kinetic and dynamic Delaunay triangulation was used to define the neighborhood relations between the sub-particles. However, it was soon noted that the relations defined by the Delaunay triangulation were not suitable for defining the interactions between membrane sub-particles. The cell membrane is a lipid bilayer and does not exhibit any long-range interactions between its sub-particles. This means that the membrane particles should not be able to interact at long range. Instead, their interactions should be confined to the two-dimensional surface formed by the membrane. A method was therefore developed to select, from the original three-dimensional triangulation, the connections restricted to the two-dimensional surface formed by the cell membrane. The algorithm uses as its starting point the three-dimensional Delaunay triangulation involving both internal and membrane sub-particles. From this triangulation, only the subset of connections between membrane sub-particles is considered. Since the cell is full of internal particles, the collection of the membrane particles' connections will resemble the surface to be obtained, even though it will still contain many connections that do not belong to the restricted triangulation on the surface. This "thick surface" was called a quasi-surface. The following step was to refine the quasi-surface, cutting out some of the connections so that the ones left formed a proper surface triangulation of the membrane points. For that, the quasi-surface was separated into clusters.
Clusters are defined as areas on the quasi-surface that are not yet properly triangulated on a two-dimensional surface. Each cluster was then re-triangulated independently, using re-triangulation methods also developed during this work. The interactions between cytosolic elements, as well as those between cytosolic elements and membrane particles, were given by a Lennard-Jones potential. Between membrane particles only, the interactions were given by an elastic interaction. For each particle, the equation of motion was written down. The algorithm chosen to solve the equations of motion was the Verlet algorithm. Since the cytosol can be approximated as a gel, it is reasonable to suppose that the sub-cellular particles move in an overdamped environment; therefore, an overdamped approximation was used for all interactions. Additionally, an adaptive algorithm was used to set the size of the time step used in each iteration. After the method to re-triangulate the membrane points was implemented, the time needed to re-triangulate a single cluster was studied, followed by an analysis of how the time needed to re-triangulate each point in a cluster varied with the cluster size. The frequency of appearance of each cluster size was also compared, as this information is necessary to guarantee that the total time needed to re-triangulate a cell is convergent. At last, the total time spent re-triangulating a surface was plotted, as well as the scaling of the total re-triangulation time. Even though there is still a lot to be done, the work presented here is an important step toward the main goal of this project: to create an agent-based framework that not only allows the simulation of any sub-cellular structure of interest but also provides meaningful interaction relations to particles belonging to the cell membrane.
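The overdamped dynamics described above can be illustrated with a toy example (not the thesis's code; all parameters are hypothetical): two cytosolic elements interacting through a Lennard-Jones potential, integrated with a simple explicit Euler step in the overdamped limit, where gamma * dr/dt = F(r).

```python
import numpy as np

# Illustrative parameters (not from the thesis): Lennard-Jones well depth,
# particle diameter, friction coefficient, and a fixed time step.
eps, sigma, gamma, dt = 1.0, 1.0, 10.0, 1e-4

def lj_force(r):
    """Radial Lennard-Jones force between two particles at distance r
    (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    return 24 * eps * (2 * sr6 ** 2 - sr6) / r

# Two particles start slightly compressed. In the overdamped limit the
# equation of motion is first order, gamma * dr/dt = F(r), so each step
# simply moves the pair along the force direction.
r = 0.95 * sigma
for _ in range(200000):
    r += dt * lj_force(r) / gamma

# The pair relaxes toward the potential minimum at r = 2**(1/6) * sigma.
```

The thesis uses a Verlet scheme with an adaptive time step; the fixed-step Euler update here is only the simplest way to show that overdamped relaxation drives the particles to the potential minimum rather than oscillating about it.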
The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that do not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain’s abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However, this formulation contains a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
Poster presentation from the Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24-30 July 2010. Statistical models of neural activity are at the core of modern computational neuroscience. The activity of single neurons has been modeled to successfully explain dependencies of neural dynamics on its own spiking history, on external stimuli, or on other covariates [1]. Recently, there has been growing interest in modeling the spiking activity of a population of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing (existing models include generalized linear models [2,3] and maximum-entropy approaches [4]). For point-process-based models of single neurons, the time-rescaling theorem has proven to be a useful tool to assess goodness-of-fit. In its univariate form, the time-rescaling theorem states that if the conditional intensity function of a point process is known, then its inter-spike intervals can be transformed, or “rescaled”, so that they are independent and exponentially distributed [5]. However, the theorem in its original form lacks the sensitivity to detect even strong dependencies between neurons. Here, we present how the theorem can be extended to apply to neural population models, and we provide a step-by-step procedure to perform the statistical tests. We then apply both the univariate and multivariate tests not only to simplified toy models but also to more complicated many-neuron models and to neuronal populations recorded in V1 of awake monkeys during natural-scene stimulation. We demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. ...
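The univariate form of the theorem is easy to demonstrate numerically. The sketch below uses a hypothetical sinusoidal intensity (not data from the poster): it simulates an inhomogeneous Poisson process by thinning, rescales the inter-spike intervals with the integrated intensity, and measures the Kolmogorov-Smirnov distance of the rescaled intervals from the Exp(1) distribution. Under the true intensity this distance is small.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inhomogeneous Poisson intensity (Hz) over 100 s.
lam = lambda t: 20 + 15 * np.sin(2 * np.pi * t / 10.0)
T, lam_max = 100.0, 35.0

# Simulate spikes by thinning a homogeneous Poisson process of rate lam_max.
t, spikes = 0.0, []
while True:
    t += rng.exponential(1 / lam_max)
    if t > T:
        break
    if rng.random() < lam(t) / lam_max:
        spikes.append(t)
spikes = np.array(spikes)

# Time-rescaling: tau_k is the integral of lambda over each inter-spike
# interval. With the true intensity, the tau_k are i.i.d. Exp(1).
Lam = lambda t: 20 * t + (75 / np.pi) * (1 - np.cos(2 * np.pi * t / 10))
tau = np.diff(Lam(spikes))

# KS distance between 1 - exp(-tau) and the uniform distribution
# (equivalently, between tau and Exp(1)).
u = np.sort(1 - np.exp(-tau))
n = len(u)
ks = np.max(np.maximum(u - np.arange(n) / n, np.arange(1, n + 1) / n - u))
```

Rerunning the rescaling with a deliberately wrong intensity (e.g. a constant rate) inflates the KS distance, which is exactly the goodness-of-fit signal the poster's multivariate extension generalizes to populations.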