Frankfurt Institute for Advanced Studies (FIAS)
We study the kinetic and chemical equilibration in 'infinite' parton-hadron matter within the Parton-Hadron-String Dynamics transport approach, which is based on a dynamical quasiparticle model for partons matched to reproduce lattice-QCD results – including the partonic equation of state – in thermodynamic equilibrium. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions, initialized at different baryon densities (or chemical potentials) and energy densities. The transition from initially pure partonic matter to hadronic degrees of freedom (or vice versa) occurs dynamically through interactions. Different thermodynamical distributions of the strongly-interacting quark-gluon plasma (sQGP) are addressed and discussed.
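As a minimal illustration of the simulation setup described above – not of the PHSD dynamics themselves – the following sketch propagates free particles in a cubic box with periodic boundary conditions; the box size, particle number and velocities are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the 'infinite matter' setup: particles stream freely in a
# cubic box, and periodic boundary conditions map any particle leaving one
# face back in through the opposite face. All numbers are illustrative
# assumptions, not values from the PHSD calculation.
L = 5.0  # box edge length (fm), assumed
rng = np.random.default_rng(0)

pos = rng.uniform(0.0, L, size=(100, 3))   # positions (fm)
vel = rng.normal(0.0, 0.5, size=(100, 3))  # velocities (fm/c), illustrative

def step(pos, vel, dt=0.1):
    """Free streaming with periodic wrap-around."""
    return (pos + vel * dt) % L

for _ in range(1000):
    pos = step(pos, vel)

# After wrapping, every particle is still inside the box
assert np.all((pos >= 0.0) & (pos < L))
```

In a transport calculation the free-streaming step would alternate with collision and hadronization steps; the periodic wrap is what makes the finite box mimic 'infinite' matter.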
The investigation of distributed coding across multiple neurons in the cortex remains a challenge to this day. Our current understanding of collective encoding of information and the relevant timescales is still limited. Most results are restricted to disparate timescales, focused either on very fast timescales, e.g., spike synchrony, or on slow ones, e.g., firing rate. Here, we systematically investigated multineuronal activity patterns evolving on different timescales, spanning the whole range from spike synchrony to mean firing rate. Using multi-electrode recordings from cat visual cortex, we show that cortical responses can be described as trajectories in a high-dimensional pattern space. Patterns evolve on a continuum of coexisting timescales that strongly relate to the temporal properties of stimuli. Timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (5–20 ms) play a particularly salient role in encoding a large amount of stimulus-related information. Thus, to faithfully encode the properties of visual stimuli the brain engages multiple neurons in activity patterns evolving on multiple timescales.
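The pattern-space idea above can be sketched as follows: multi-neuron spike trains are binned at several timescales, and each time bin yields a binary activity pattern (which neurons fired). The spike data and bin widths below are synthetic assumptions, not the cat-cortex recordings from the study.

```python
import numpy as np

# Synthetic multi-neuron spike trains (spike times in ms); the real analysis
# would use recorded data instead.
rng = np.random.default_rng(1)
n_neurons, duration = 8, 1000.0  # 8 neurons, 1 s of activity (assumed)
spikes = [np.sort(rng.uniform(0, duration, rng.integers(20, 60)))
          for _ in range(n_neurons)]

def patterns(spikes, bin_ms):
    """Binary population pattern per bin: 1 if the neuron fired in that bin."""
    edges = np.arange(0.0, duration + bin_ms, bin_ms)
    counts = np.array([np.histogram(s, edges)[0] for s in spikes])
    return (counts > 0).astype(int).T  # shape: (n_bins, n_neurons)

# Rebinning the same activity at 5, 20 and 100 ms spans the range from
# synchrony-like to rate-like descriptions of the population response.
for bin_ms in (5, 20, 100):
    p = patterns(spikes, bin_ms)
    print(bin_ms, len({tuple(row) for row in p}))  # distinct patterns per timescale
```

Each row of `patterns(...)` is one point of the population's trajectory in pattern space at the chosen timescale.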
We address the question of whether and how boosting and bagging can be used for speech recognition. In order to do this, we compare two different boosting schemes, one at the phoneme level and one at the utterance level, with a phoneme-level bagging scheme. We control for many parameters and other choices, such as the state inference scheme used. In an unbiased experiment, we clearly show that the gain of boosting methods compared to a single hidden Markov model is in all cases only marginal, while bagging significantly outperforms all other methods. We thus conclude that bagging methods, which have so far been overlooked in favour of boosting, should be examined more closely as a potentially useful ensemble learning technique for speech recognition.
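A minimal sketch of the bagging scheme favored by this abstract – bootstrap resampling plus majority voting – is shown below; a nearest-centroid classifier stands in for the phoneme-level hidden Markov models, and the toy two-class data is an assumption for illustration.

```python
import numpy as np

# Toy two-class data (assumed): class 0 around (0, 0), class 1 around (3, 3).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fit_centroids(X, y):
    """Nearest-centroid 'base learner': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Bagging: train each base learner on a bootstrap resample of the data.
ensemble = []
for _ in range(11):
    idx = rng.integers(0, len(X), len(X))  # sample with replacement
    ensemble.append(fit_centroids(X[idx], y[idx]))

def bagged_predict(x):
    """Majority vote over the ensemble."""
    votes = [predict(m, x) for m in ensemble]
    return max(set(votes), key=votes.count)

print(bagged_predict(np.array([3.0, 3.0])), bagged_predict(np.array([0.0, 0.0])))
```

The variance-reduction effect of averaging over resamples is the same mechanism that the abstract reports as outperforming boosting in the speech-recognition setting.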
This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception and sequence learning are modeled in a unifying computational framework while staying close to experimental data in the developmental psychology of infancy. In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (an Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence and occlusion perception in infancy. It is shown that these different phenomena can be unified into a single theoretical framework, thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) different neuronal populations in the visual cortex of the newborn infant that process different motion directions of visual stimuli, an assumption supported by neuroscientific evidence, and (2) available learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that no innate force notions, motion analysis modules, common motion detectors, specific perceptual rules or abilities to "reason" about entities – all widely postulated in the developmental literature – are necessary to explain the discussed phenomena.
Since the prediction of future events turned out to be fruitful as a theoretical explanation of various developmental phenomena and as a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organizing, fully recurrent neural network model is proposed that forms internal representations of input sequences and maps them onto eye movements. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modeling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing dependent plasticity (STDP), intrinsic plasticity (IP) and synaptic scaling. It is also novel from a technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place – questions that are difficult to assess experimentally. Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events.
This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.
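The prediction principle behind the first two models can be sketched with a toy Elman-style network trained by truncated backpropagation to predict the next element of a repeating sequence; the layer sizes, learning rate and sequence below are illustrative assumptions, not the thesis's actual models.

```python
import numpy as np

# Toy Elman network: input -> hidden (with recurrent context copy) -> output,
# trained to predict the next one-hot symbol of a repeating sequence.
# All sizes and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
seq = np.eye(4)[[0, 1, 2, 3] * 50]  # deterministic repeating sequence
n_in, n_hid = 4, 8
Wxh = rng.normal(0, 0.5, (n_in, n_hid))
Whh = rng.normal(0, 0.5, (n_hid, n_hid))
Why = rng.normal(0, 0.5, (n_hid, n_in))
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

losses = []
h = np.zeros(n_hid)
for x, target in zip(seq[:-1], seq[1:]):
    h_new = sigmoid(x @ Wxh + h @ Whh)  # hidden state with Elman context
    y = h_new @ Why                     # linear prediction of the next input
    err = y - target
    losses.append(float(np.sum(err ** 2)))
    # One-step gradient descent (truncated backprop, no unrolling in time)
    dh = (err @ Why.T) * h_new * (1 - h_new)
    Why -= lr * np.outer(h_new, err)
    Wxh -= lr * np.outer(x, dh)
    Whh -= lr * np.outer(h, dh)
    h = h_new

print(np.mean(losses[:20]), np.mean(losses[-20:]))  # prediction error drops
```

The falling prediction error is the learning signal that, in the thesis's account, drives the developmental changes without any innate rules about objects or forces.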
Background: The immune system is a complex adaptive system of cells and molecules that are interwoven in a highly organized communication network. Primary immune deficiencies are disorders in which essential parts of the immune system are absent or do not function as intended. X-linked agammaglobulinemia is a B-lymphocyte maturation disorder in which the production of immunoglobulin is prevented by a genetic defect. Patients have to be put on life-long immunoglobulin substitution therapy in order to prevent recurrent and persistent opportunistic infections. Methodology: We formulate an immune response model in terms of stochastic differential equations and perform a systematic analysis of empirical therapy protocols that differ in the treatment frequency. The model accounts for the immunoglobulin reduction by natural degradation and by antigenic consumption, as well as for the periodic immunoglobulin replenishment that gives rise to an inhomogeneous distribution of immunoglobulin specificities in the shape space. Results are obtained from computer simulations and from analytical calculations within the framework of the Fokker-Planck formalism, which enables us to derive closed expressions for undetermined model parameters such as the infection clearance rate. Conclusions: We find that the critical value of the clearance rate, below which a chronic infection develops, depends strongly on the strength of fluctuations in the administered immunoglobulin dose per treatment and is an increasing function of the treatment frequency. The comparative analysis of therapy protocols with regard to the treatment frequency yields quantitative predictions of therapeutic relevance, where the choice of the optimal treatment frequency reveals a conflict of competing interests: in order to diminish immunomodulatory effects and to make good economic sense, therapeutic immunoglobulin levels should be kept close to physiological levels, implying high treatment frequencies.
However, clearing infections without additional medication is more reliably achieved by substitution therapies with low treatment frequencies. Our immune response model predicts that the compromise solution of immunoglobulin substitution therapy has a treatment frequency in the range from one infusion per week to one infusion per two weeks.
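The flavor of such a model can be sketched with a simple stochastic simulation in which immunoglobulin decays by degradation and is replenished by periodic infusions with a fluctuating dose; all rate constants, doses and the treatment interval below are illustrative assumptions, not fitted parameters from the paper.

```python
import numpy as np

# Toy substitution-therapy dynamics: exponential Ig decay plus periodic
# infusions whose dose fluctuates from treatment to treatment. All numbers
# are illustrative assumptions.
rng = np.random.default_rng(0)
k_deg = np.log(2) / 21.0      # 1/day, ~3-week IgG half-life (textbook value)
dose, sigma_dose = 6.0, 1.0   # mean dose and its fluctuation (arbitrary units)
interval = 7.0                # days between infusions (one protocol choice)
dt, T = 0.1, 140.0            # time step and total simulated time (days)

t_grid = np.arange(0.0, T, dt)
ig = np.empty_like(t_grid)
ig[0] = 8.0                   # initial Ig level (arbitrary units)
for i in range(1, len(t_grid)):
    ig[i] = ig[i - 1] - k_deg * ig[i - 1] * dt  # natural degradation
    if t_grid[i] % interval < dt:               # periodic replenishment
        ig[i] += max(0.0, rng.normal(dose, sigma_dose))  # fluctuating dose

print(ig.min(), ig.max())  # trough and peak Ig levels under this protocol
```

Comparing trough levels for different `interval` values is the toy analogue of the paper's comparison of treatment frequencies: longer intervals give deeper troughs, which is where infection clearance becomes unreliable.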
At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into the higher functions of the brain, such as learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired through experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains largely a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing consists of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons.
The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks. The images are presented without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces that reside in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli. In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, the recognition performance is tremendously boosted after this off-line memory reprocessing. The performance boost is more pronounced for those face views that deviate more from the original view shown during learning.
This indicates that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on- ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising point of departure for further studies aiming for deeper insight into the learning mechanisms employed by the brain and their subsequent implementation in artificial adaptive systems for solving complex tasks that have so far been intractable.
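The competitive, winner-take-all learning performed within a single module can be sketched as follows; the clustered toy data and learning rate are assumptions, and the full network adds bidirectional plasticity and homeostatic regulation on top of this basic mechanism.

```python
import numpy as np

# Toy competitive learning with a winner-take-all rule: each unit holds a
# weight vector, the unit nearest the input wins the cycle, and only the
# winner's weights move toward the input. Data and learning rate are assumed.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
data = np.vstack([c + rng.normal(0, 0.3, (100, 2)) for c in centers])
rng.shuffle(data)

def qerr(W):
    """Mean distance from each input to its nearest weight vector."""
    return np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in data])

W = rng.normal(2.0, 1.0, (3, 2))  # one weight vector per unit, random start
W0 = W.copy()
lr = 0.05
for x in data:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # sparse (WTA) activity
    W[winner] += lr * (x - W[winner])                  # only the winner learns

print(round(qerr(W0), 2), round(qerr(W), 2))  # quantization error shrinks
```

The shrinking quantization error shows the module forming "a suitable representation of the locally accessible input space", which is the building block the hierarchical memory network stacks and interconnects.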
Relying on existing estimates for the production cross sections of mini black holes in models with large extra dimensions, we review strategies for identifying these objects at collider experiments. We further consider a possible stable final state of such black holes and discuss their characteristic signatures.
We discuss the present collective flow signals for the phase transition to the quark-gluon plasma (QGP) and the collective flow as a barometer for the equation of state (EoS). We emphasize the importance of the flow excitation function from 1 to 50A GeV: here the hydrodynamic model has predicted the collapse of the v1 flow at ~10A GeV and of the v2 flow at ~40A GeV. In the latter case, this has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as potential evidence for a first-order phase transition at high baryon density ρ_B.
We study various fluctuation and correlation signals of the deconfined state using a dynamical recombination approach (quark Molecular Dynamics, qMD). We analyse charge ratio fluctuations, charge transfer fluctuations and baryon-strangeness correlations as a function of the center-of-mass energy with a set of central Pb+Pb/Au+Au events from AGS energies (E_lab = 4A GeV) up to the highest available RHIC energy (√s_NN = 200 GeV), and as a function of time with a set of central Au+Au qMD events at √s_NN = 200 GeV with and without applying our hadronization procedure. For all studied quantities, the results start from values compatible with a weakly coupled QGP in the early stage and end with values compatible with the hadronic result in the final state. We show that the loss of the signal occurs at the same time as hadronization and trace it back to the dynamical recombination process implemented in our model.
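The baryon-strangeness correlation, C_BS = -3⟨δB δS⟩/⟨δS²⟩, can be illustrated with a toy independent-quark picture: since strangeness is carried only by s quarks (each with B = 1/3, S = -1), uncorrelated quark multiplicities give C_BS = 1. The Poisson multiplicities below are an assumption for illustration, not qMD output.

```python
import numpy as np

# Toy event ensemble for C_BS in an independent-quark (QGP-like) picture.
# Multiplicities are Poisson assumptions, not qMD events.
rng = np.random.default_rng(0)
n_events = 200_000
n_s = rng.poisson(5.0, n_events)       # s quarks per event
n_sbar = rng.poisson(5.0, n_events)    # anti-s quarks per event
n_light = rng.poisson(20.0, n_events)  # light-quark contribution (no strangeness)

B = (n_light + n_s - n_sbar) / 3.0     # each quark carries baryon number 1/3
S = -(n_s - n_sbar).astype(float)      # each s quark carries strangeness -1

c_bs = -3.0 * np.cov(B, S)[0, 1] / np.var(S)
print(round(c_bs, 2))  # ≈ 1 in the independent-quark picture
```

In a hadron gas, strangeness and baryon number are bundled into hadrons (e.g., kaons carry S but no B), shifting C_BS away from 1 – which is why the observable distinguishes partonic from hadronic matter, and why its loss at hadronization is the signal traced in the abstract.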
To investigate the formation and propagation of relativistic shock waves in viscous gluon matter, we solve the relativistic Riemann problem using a microscopic parton cascade. We demonstrate the transition from ideal to viscous shock waves by varying the shear viscosity to entropy density ratio η/s. Furthermore, we compare our results with those obtained by solving the relativistic causal dissipative fluid equations of Israel and Stewart (IS), in order to show the validity of IS hydrodynamics. Employing the parton cascade, we also investigate the formation of Mach shocks induced by a high-energy gluon traversing viscous gluon matter. For η/s = 0.08 a Mach cone structure is observed, whereas the signal smears out for η/s ≥ 0.32.