Frankfurt Institute for Advanced Studies (FIAS)
Background: Cognitive dysfunctions represent a core feature of schizophrenia and a predictor of clinical outcomes. One possible mechanism underlying these impairments is a disruption of experience-dependent modification of cortical networks.
Methods: To address this issue, we employed magnetoencephalography (MEG) during a visual priming paradigm in a sample of chronic patients with schizophrenia (n = 14) and in a group of healthy controls (n = 14). MEG recordings were obtained while visual stimuli were presented three times, either consecutively or with intervening stimuli. MEG data were analyzed for event-related fields as well as spectral power in the 1–200 Hz range to examine repetition suppression and repetition enhancement. We defined regions of interest in occipital and thalamic regions and obtained virtual-channel data.
Results: Behavioral priming did not differ between groups. However, patients with schizophrenia showed a markedly reduced oscillatory response to novel stimuli in the gamma-frequency band, as well as significantly reduced repetition suppression of gamma-band activity and reduced repetition enhancement of beta-band power in occipital cortex, both for consecutive repetitions and for repetitions with intervening stimuli. Moreover, schizophrenia patients were characterized by a significant deficit in suppression of the C1m component in occipital cortex and thalamus, as well as of the late positive component (LPC) in occipital cortex.
Conclusions: These data provide novel evidence for impaired repetition suppression in cortical and subcortical circuits in schizophrenia. Although behavioral priming was preserved, patients with schizophrenia showed deficits in repetition suppression as well as repetition enhancement in thalamic and occipital regions, suggesting that experience-dependent modification of neural circuits is impaired in the disorder.
Poster presentation: Introduction We focus here on constructing a hierarchical neural system for position-invariant recognition, one of the most fundamental forms of invariant recognition achieved in visual processing [1,2]. Invariant recognition has been hypothesized to be accomplished by matching the sensory image of a particular object projected onto the retina to the most suitable representation stored in memory in higher visual cortical areas. This raises a general problem: in such visual processing, the position of the object image on the retina is initially uncertain. Furthermore, the retinal activity carrying the sensory information is far removed from the activity in the higher area, where part of the object information is lost. Nevertheless, despite this ambiguity, the particular object is recognized effortlessly. Our aim in this work is to resolve this general recognition problem. ...
Poster presentation: Introduction We address here the problem of integrating information about multiple objects and their positions in the visual scene. The primate visual system has little difficulty in rapidly achieving such integration, given only a few objects. Computer vision, however, still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism for handling information about objects and their positions in parallel. Elaborating on this idea, we propose a neurally plausible mechanism that combines local decisions about "what" and "where" information into global multi-object recognition. ...
We study Mach shocks generated by fast partonic jets propagating through deconfined, strongly interacting matter. Our main goal is to take into account different types of collective motion during the formation and evolution of this matter. We predict a significant deformation of Mach shocks in central Au+Au collisions at RHIC and LHC energies as compared to the case of jet propagation in a static medium. The observed broadening of the near-side two-particle correlations in pseudorapidity space is explained by the Bjorken-like longitudinal expansion. Three-particle correlation measurements are proposed for a more detailed study of the Mach shock waves.
We develop a 1+1 dimensional hydrodynamical model for central heavy-ion collisions at ultrarelativistic energies. Deviations from Bjorken scaling are taken into account by implementing finite-size profiles for the initial energy density. The calculated rapidity distributions of pions, kaons and antiprotons in central Au+Au collisions at a c.m. energy of 200A GeV are compared with experimental data of the BRAHMS Collaboration. The sensitivity of the results to the choice of the equation of state, the parameters of the initial state and the freeze-out conditions is investigated. The best fit to the experimental data is obtained for a soft equation of state and Gaussian-like initial profiles of the energy density.
Abstract
Co-infections by multiple pathogens have important implications in many aspects of health, epidemiology and evolution. However, how to disentangle the contributing factors of the immune response when two infections take place at the same time is largely unexplored. Using data sets of the immune response during influenza-pneumococcal co-infection in mice, we employ here topological data analysis to simplify and visualise high dimensional data sets.
We identified persistent shapes of the simplicial complexes of the data in the three infection scenarios: single viral infection, single bacterial infection, and co-infection. The immune response was found to be distinct for each of the infection scenarios, and we uncovered that the immune response during co-infection has three phases and two transition points. During the first phase, its dynamics is inherited from the response to the primary (viral) infection. The immune response then undergoes an early transition (a few hours post co-infection) and modulates its response to finally react against the secondary (bacterial) infection. Between 18 and 26 hours post co-infection the nature of the immune response changes again and no longer resembles either of the single-infection scenarios.
Author summary
The mapper algorithm is a topological data analysis technique used for the qualitative analysis, simplification and visualisation of high dimensional data sets. It generates a low-dimensional image that captures topological and geometric information of the data set in high dimensional space, which can highlight groups of data points of interest and can guide further analysis and quantification.
To understand how the immune system evolves during co-infection between viruses and bacteria, and the role of specific cytokines as contributing factors to these severe infections, we use Topological Data Analysis (TDA) along with an extensive semi-unsupervised grid search over parameter values and k-nearest neighbour analysis.
We find persistent shapes of the data in the three infection scenarios: single viral infection, single bacterial infection, and co-infection. The immune response is shown to be distinct for each of the infection scenarios, and we uncover that the immune response during co-infection has three phases and two transition points, a previously unknown property of the dynamics of the immune response during co-infection.
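A minimal sketch of a mapper pipeline of the kind described in this summary, assuming the Python kepler-mapper library and a generic cytokine data matrix; the projection, cover and clustering settings are illustrative and not those used in the study:

```python
# Minimal mapper-algorithm sketch (hypothetical parameters, not the study's actual pipeline).
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

# Toy stand-in for a cytokine time-course matrix: rows = samples, columns = measured cytokines.
data = np.random.rand(200, 15)

mapper = km.KeplerMapper(verbose=0)

# Lens: project the high-dimensional data to 2D (here PCA; the study's projection is an assumption).
lens = mapper.fit_transform(data, projection=PCA(n_components=2))

# Cover the lens with overlapping bins and cluster the pre-image of each bin.
graph = mapper.map(
    lens,
    data,
    cover=km.Cover(n_cubes=10, perc_overlap=0.5),
    clusterer=DBSCAN(eps=0.5, min_samples=3),
)

# The resulting graph is a simplicial complex whose nodes are clusters and whose
# edges connect clusters sharing samples; it can be exported for visual inspection.
mapper.visualize(graph, path_html="mapper_output.html", title="co-infection immune response")
```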
We derive the Polyakov-loop thermodynamic potential in the perturbative approach to pure SU(3) Yang-Mills theory. The potential expressed in terms of the Polyakov loop in the fundamental representation corresponds to that of the strong-coupling expansion, in which the relevant coefficients of the gluon energy distribution are specified by characters of the SU(3) group. At high temperature, the potential exhibits the correct asymptotic behavior, whereas at low temperature, it disfavors gluons as appropriate dynamical degrees of freedom. To quantify the Yang-Mills thermodynamics in the confined phase, we introduce a hybrid approach which matches the effective gluon potential to that of glueballs, constrained by the QCD trace anomaly in terms of dilaton fields.
We propose an effective theory of SU(3) gluonic matter where interactions between color-electric and color-magnetic gluons are constrained by the center and scale symmetries. Through matching to the dimensionally-reduced magnetic theories, the magnetic gluon condensate qualitatively changes its thermal behavior above the critical temperature. We argue its phenomenological consequences for the thermodynamics, in particular the dynamical breaking of scale invariance.
ϕ-meson production in In–In collisions at Elab=158A GeV: Evidence for relics of a thermal phase
(2010)
Yields and transverse mass distributions of ϕ mesons reconstructed in the ϕ→μ+μ− channel in In+In collisions at Elab=158A GeV are calculated within an integrated Boltzmann+hydrodynamics hybrid approach based on the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport model with an intermediate hydrodynamic stage. The analysis is performed for various centralities, and a comparison with the corresponding NA60 data in the muon channel is presented. We find that the hybrid model, which embeds an intermediate locally equilibrated phase subsequently mapped into the transport dynamics according to thermal phase-space distributions, gives a good description of the experimental data, both in yield and in slope. In contrast, the pure transport-model calculations tend to fail to capture the general properties of ϕ-meson production: both the yield and the slope of the mT spectra compare poorly with the experimental observations at top SPS energies.
Recent lattice QCD results, when compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
Hadron lists based on experimental studies summarized by the Particle Data Group (PDG) are a crucial input for the equation of state and thermal models used in the study of strongly-interacting matter produced in heavy-ion collisions. Modeling of these strongly-interacting systems is carried out via hydrodynamical simulations, which are followed by hadronic transport codes that also require a hadronic list as input. To remain consistent throughout the different stages of modeling of a heavy-ion collision, the same hadron list with its corresponding decays must be used at each step. It has been shown that even the most uncertain states listed in the PDG from 2016 are required to reproduce partial pressures and susceptibilities from Lattice Quantum Chromodynamics with the hadronic list known as the PDG2016+. Here, we update the hadronic list for use in heavy-ion collision modeling by including the latest experimental information for all states listed in the Particle Data Booklet in 2021. We then compare our new list, called PDG2021+, to Lattice Quantum Chromodynamics results and find that it achieves even better agreement with the first principles calculations than the PDG2016+ list. Furthermore, we develop a novel scheme based on intermediate decay channels that allows for only binary decays, such that PDG2021+ will be compatible with the hadronic transport framework SMASH. Finally, we use these results to make comparisons to experimental data and discuss the impact on particle yields and spectra.
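To illustrate the kind of decay rewriting described above, here is a hedged Python sketch that replaces an N-body decay with a chain of two-body decays through hypothetical intermediate states; the actual intermediate channels, quantum-number bookkeeping and branching-ratio handling of PDG21+ are not reproduced here.

```python
# Illustrative sketch: rewrite an N-body decay as a chain of two-body decays through
# hypothetical intermediate states. This mimics the idea of intermediate decay channels
# described above; it is not the actual PDG21+ scheme.
import itertools
from dataclasses import dataclass
from typing import List

_intermediate_id = itertools.count(1)

@dataclass
class Decay:
    parent: str
    products: List[str]
    branching_ratio: float

def to_binary_decays(decay: Decay) -> List[Decay]:
    """Split a decay with more than two products into successive two-body steps."""
    if len(decay.products) <= 2:
        return [decay]
    intermediate = f"{decay.parent}*X{next(_intermediate_id)}"  # hypothetical intermediate state
    first = Decay(decay.parent, [decay.products[0], intermediate], decay.branching_ratio)
    # The intermediate state then decays with branching ratio 1 into the remaining products.
    rest = Decay(intermediate, decay.products[1:], 1.0)
    return [first] + to_binary_decays(rest)

# Example: a three-body decay becomes two chained two-body decays.
for d in to_binary_decays(Decay("N(1440)", ["N", "pi", "pi"], 0.2)):
    print(f"{d.parent} -> {' '.join(d.products)}  (BR = {d.branching_ratio})")
```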
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models for the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
Dendritic spines are crucial for excitatory synaptic transmission, as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution, with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo, and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither induction of extrinsic synaptic plasticity nor blockage of presynaptic activity degrades the lognormal-like distribution, but both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate the lognormal-like distribution of spine sizes, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
We investigate the effect of large magnetic fields on the (2+1)-dimensional reduced-magnetohydrodynamical expansion of hot and dense nuclear matter produced in √sNN = 200 GeV Au+Au collisions. For the sake of simplicity, we consider the case where the magnetic field points in the direction perpendicular to the reaction plane. We also consider this field to be external, with energy density parametrized as a two-dimensional Gaussian. The width of the Gaussian along the directions orthogonal to the beam axis varies with the centrality of the collision. The dependence of the magnetic field on proper time (τ) for the case of zero electrical conductivity of the QGP is parametrized following Deng et al. [Phys. Rev. C 85, 044907 (2012)], and for finite electrical conductivity following Tuchin [Phys. Rev. C 88, 024911 (2013)]. We solve the equations of motion of ideal hydrodynamics for such an external magnetic field. For collisions with nonzero impact parameter we observe considerable changes in the evolution of the momentum eccentricities of the fireball when comparing the case when the magnetic field decays in a conducting QGP medium with the case when no magnetic field is present. The elliptic-flow coefficient v2 of π− is shown to increase in the presence of an external magnetic field, and the increment in v2 is found to depend on the evolution and the initial magnitude of the magnetic field.
The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain’s abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
We estimate the temperature dependence of the bulk viscosity in a relativistic hadron gas. Employing the Green–Kubo formalism in the SMASH (Simulating Many Accelerated Strongly-interacting Hadrons) transport approach, we study different hadronic systems in increasing order of complexity. We analyze the (in)validity of the single-exponential relaxation ansatz for the bulk-channel correlation function and the strong influence of the resonances and their lifetimes. We discuss the difference between the inclusive bulk viscosity of an equilibrated, long-lived system and the effective bulk viscosity of a short-lived mixture like the hadronic phase of relativistic heavy-ion collisions, where processes whose inverse relaxation rates are larger than the fireball duration are excluded from the analysis. This clarifies the differences between previous approaches which computed the bulk viscosity including/excluding the very slow processes in the hadron gas. We compare our final results with previous hadron gas calculations and confirm a decreasing trend of the inclusive bulk viscosity over entropy density as temperature increases, whereas the effective bulk viscosity to entropy density ratio, while being lower than the inclusive one, shows no strong dependence on temperature.
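For reference, the Green–Kubo relation for the bulk viscosity used in this type of analysis (with Δp the deviation of the pressure from its equilibrium value) and the single-exponential relaxation ansatz mentioned above read, in a commonly used textbook form rather than the paper's own notation,

```latex
% Standard Green-Kubo relation for the bulk viscosity and single-exponential
% relaxation ansatz for the correlation function (textbook forms).
\zeta \;=\; \frac{V}{T}\int_0^{\infty} \mathrm{d}t\,
\bigl\langle \Delta p(0)\,\Delta p(t) \bigr\rangle_{\mathrm{eq}},
\qquad
\bigl\langle \Delta p(0)\,\Delta p(t) \bigr\rangle_{\mathrm{eq}}
\;\approx\;
\bigl\langle (\Delta p)^2 \bigr\rangle_{\mathrm{eq}}\; e^{-t/\tau_{\mathrm{relax}}}.
```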
ALICE (A Large Ion Collider Experiment) is one of the four large-scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm which reconstructs events recorded by the ALICE detector in real time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors, which are sensitive to environmental conditions such as ambient pressure and temperature; the TPC is one of them. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction, including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
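As a generic illustration of the Kalman-filter step that underlies this kind of track fitting (this is not the ALICE HLT code, and the state and noise matrices are placeholders):

```python
# Generic Kalman-filter predict/update step (illustration only; not the ALICE HLT implementation).
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict + update cycle for state x with covariance P, given measurement z."""
    # Predict: propagate the track state and inflate the covariance by process noise.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weight the measurement residual by the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```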
The influence of visual tasks on short and long-term memory for visual features was investigated using a change-detection paradigm. Subjects completed 2 tasks: (a) describing objects in natural images, reporting a specific property of each object when a crosshair appeared above it, and (b) viewing a modified version of each scene, and detecting which of the previously described objects had changed. When tested over short delays (seconds), no task effects were found. Over longer delays (minutes) we found the describing task influenced what types of changes were detected in a variety of explicit and incidental memory experiments. Furthermore, we found surprisingly high performance in the incidental memory experiment, suggesting that simple tasks are sufficient to instill long-lasting visual memories. Keywords: visual working memory, natural scenes, natural tasks, change detection
In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples, in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near symmetry breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how these timescales can be compared to those of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned and when considering more than 2 feature dimensions. We show that, although maps typically undergo substantial rearrangement, no other solutions than pinwheel crystals and stripes dominate in the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps.
Experimental data from the NA49 collaboration show an unexpectedly steep rise of the rapidity width of the ϕ meson as a function of beam energy, which has been suggested as a possible signal for novel physics. In this work we show that the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model is able to reproduce the shapes of the rapidity distributions of most measured hadrons and predicts a common linear increase of the width for all hadrons. Only when following the exact same analysis technique and experimental acceptance as the NA49 and NA61/SHINE collaborations do we find that the extracted value of the rapidity width of the ϕ increases drastically for the highest beam energy. We conclude that the observed steep increase of the ϕ rapidity width is an artifact of limited detector acceptance and of the simplified Gaussian fit approximation.
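As a hedged illustration of the simplified Gaussian fit referred to above, the following sketch extracts a rapidity width from a binned dN/dy distribution; binning, acceptance cuts and the actual fit variants used by the experiments are not reproduced.

```python
# Illustrative Gaussian fit to a rapidity distribution dN/dy (not the NA49/NA61 analysis code).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amplitude, sigma_y):
    return amplitude * np.exp(-y**2 / (2.0 * sigma_y**2))

# Hypothetical binned dN/dy values around midrapidity.
y_centers = np.linspace(-2.0, 2.0, 17)
dn_dy = gaussian(y_centers, 10.0, 1.1) + np.random.normal(0.0, 0.2, y_centers.size)

popt, pcov = curve_fit(gaussian, y_centers, dn_dy, p0=[8.0, 1.0])
print("fitted rapidity width sigma_y =", popt[1])
```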
We investigate the development of the directed flow, v1, and the elliptic flow, v2, in mid-central Au+Au collisions at Elab=1.23A GeV. We demonstrate that the elliptic flow of the hot and dense matter is initially positive (v2>0) due to the early pressure gradient. This positive v2 transfers its momentum to the spectators, which leads to the creation of the directed flow v1. In turn, the spectator shadowing of the in-plane expansion leads to a preferred decoupling of hadrons in the out-of-plane direction and results in a negative v2 for the observable final-state hadrons. We propose a measurement of v1−v2 flow correlations and of the elliptic flow of dileptons as methods to pin down this evolution pattern. The elliptic flow of the dileptons then allows one to determine the early-stage EoS more precisely, because it avoids the strong modifications of the momentum distribution due to shadowing seen in the protons. This opens a unique opportunity for the HADES and CBM collaborations to measure the equation of state directly at 2-3 times nuclear saturation density.
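For orientation, the directed and elliptic flow coefficients discussed here are the first two Fourier coefficients of the azimuthal particle distribution with respect to the reaction plane (standard definition, not specific to this work):

```latex
% Standard definition of anisotropic-flow coefficients with respect to the reaction-plane angle \Psi_{RP}.
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\;
1 + 2\sum_{n\ge 1} v_n \cos\!\bigl[n(\varphi - \Psi_{\mathrm{RP}})\bigr],
\qquad
v_n = \bigl\langle \cos\!\bigl[n(\varphi - \Psi_{\mathrm{RP}})\bigr] \bigr\rangle .
```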
Future operation of the CBM detector requires ultra-fast analysis of the continuous stream of data from all subdetector systems. Determining the inter-system time shifts among the individual detector systems in the existing prototype experiment mCBM is an essential step for data processing and, in particular, for stable data taking. Based on the raw measurements from all detector systems, the corresponding time correlations can be obtained at the digital level by evaluating the differences in time stamps. If the relevant systems are stable during data taking and sufficient digital measurements are available, the distribution of time differences should display a clear peak. Up to now, the processed time differences have been stored in histograms and the maximum peak determined only after the evaluation of all timeslices of a run, leading to significant run times. The results presented here demonstrate the stability of the synchronicity of the mCBM systems. Furthermore, it is illustrated that relatively small numbers of raw measurements are sufficient to evaluate the corresponding time correlations among individual mCBM detectors, thus enabling their fast online monitoring in future online data processing.
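A minimal sketch of the time-correlation step described above, assuming two plain arrays of digi time stamps from a reference system and a second detector system; the actual mCBM timeslice handling and unpacking are not shown, and the window and bin widths are placeholders.

```python
# Sketch: estimate the inter-system time shift from digi time stamps of two detector systems
# (hypothetical inputs; real mCBM data would come from unpacked timeslices).
import numpy as np

def time_shift(ref_stamps, sys_stamps, window_ns=1000.0, bin_ns=10.0):
    """Histogram time differences within +-window_ns and return the peak position."""
    diffs = []
    for t in ref_stamps:
        # Collect differences to all time stamps of the other system inside the window.
        close = sys_stamps[np.abs(sys_stamps - t) < window_ns]
        diffs.extend(close - t)
    counts, edges = np.histogram(diffs, bins=np.arange(-window_ns, window_ns + bin_ns, bin_ns))
    peak_bin = np.argmax(counts)
    return 0.5 * (edges[peak_bin] + edges[peak_bin + 1])

ref = np.sort(np.random.uniform(0, 1e6, 5000))          # reference-system time stamps [ns]
other = np.sort(ref + 250.0 + np.random.normal(0, 5, ref.size))  # shifted, smeared second system
print("estimated shift [ns]:", time_shift(ref, other))
```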
In this work the second- and fourth-order baryon number and strangeness susceptibilities are presented. The results at zero baryon chemical potential are obtained using a well-tested chiral effective model that includes all known hadronic degrees of freedom and additionally implements quarks and gluons in a PNJL-like approach. Quark and baryon number susceptibilities are sensitive to the fundamental degrees of freedom in the model and signal the shift from massive hadrons to light quarks at the deconfinement transition by a sharp rise at the critical temperature. Furthermore, all susceptibilities are found to be strongly suppressed by repulsive vector field interactions of the particles. In the hadronic sector, vector repulsion of baryon resonances restrains fluctuations considerably, and in the quark sector above Tc even small vector field interactions of quarks quench all fluctuations unreasonably strongly. For this reason, vector field interactions for quarks have to vanish in the deconfinement limit.
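The susceptibilities referred to here are, in the standard convention, derivatives of the scaled pressure with respect to the scaled chemical potentials (shown for baryon number; strangeness susceptibilities are defined analogously with the strangeness chemical potential):

```latex
% Standard definition of conserved-charge susceptibilities (baryon number B shown).
\chi_n^{B} \;=\; \left.\frac{\partial^{\,n}\,(p/T^4)}{\partial\,(\mu_B/T)^{\,n}}\right|_{\mu_B=0}.
```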
Neurogenesis of hippocampal granule cells (GCs) persists throughout mammalian life and is important for learning and memory. How newborn GCs differentiate and mature into an existing circuit during this time period is not yet fully understood. We established a method to visualize postnatally generated GCs in organotypic entorhino-hippocampal slice cultures (OTCs) using retroviral (RV) GFP-labeling and performed time-lapse imaging to study their morphological development in vitro. Using anterograde tracing we could, furthermore, demonstrate that the postnatally generated GCs in OTCs, similar to adult born GCs, grow into an existing entorhino-dentate circuitry. RV-labeled GCs were identified and individual cells were followed for up to four weeks post injection. Postnatally born GCs exhibited highly dynamic structural changes, including dendritic growth spurts but also retraction of dendrites and phases of dendritic stabilization. In contrast, older, presumably prenatally born GCs labeled with an adeno-associated virus (AAV), were far less dynamic. We propose that the high degree of structural flexibility seen in our preparations is necessary for the integration of newborn granule cells into an already existing neuronal circuit of the dentate gyrus in which they have to compete for entorhinal input with cells generated and integrated earlier.
Highlights
• We present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series.
• The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios.
• The model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33.
Abstract
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis, as they provide continuous high-frequency measurements of ground motion. These data can be used to analyze diverse parameters related to the seismic source and to assess the potential of an earthquake to produce strong motions at certain distances and even generate tsunamis. In this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data, with 0.07 ≤ RMS ≤ 0.11. Comparable results were observed in tests using synthetic data from a different region than the training data, with RMS ≤ 0.15. Furthermore, the model was tested using real data from different regions and magnitudes, resulting in the best cases in 0.09 ≤ RMS ≤ 0.33, provided that the data from a particular group of stations had epicentral distance constraints similar to those used during model training. The robustness of the DL model can be improved so that it works independently of the window size of the time series and the number of stations, enabling faster estimation by the model using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
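A minimal, hedged sketch of a 1D convolutional regression model of the kind described here, written in PyTorch; the layer sizes, number of stations, window length and channel layout are placeholders and do not reproduce the architecture used in the study.

```python
# Illustrative 1D-CNN magnitude regressor for HR-GNSS displacement windows
# (hypothetical architecture; not the model described in the study).
import torch
import torch.nn as nn

class MagnitudeCNN(nn.Module):
    def __init__(self, n_components=3, n_stations=10):
        super().__init__()
        # Each sample: (stations * 3 displacement components) channels x time samples.
        self.features = nn.Sequential(
            nn.Conv1d(n_components * n_stations, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # single scalar output: magnitude estimate

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

# Example: batch of 8 windows, 10 stations x 3 components, 512 time samples each.
model = MagnitudeCNN()
dummy = torch.randn(8, 30, 512)
print(model(dummy).shape)  # torch.Size([8, 1])
loss = nn.MSELoss()(model(dummy), torch.randn(8, 1))  # regression target: magnitude
```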
For medicine to fulfill its promise of personalized treatments based on a better understanding of disease biology, computational and statistical tools must exist to analyze the increasing amount of patient data that becomes available. A particular challenge is that several types of data are being measured to cope with the complexity of the underlying systems, enhance predictive modeling and enrich molecular understanding.
Here we review a number of recent approaches that specialize in the analysis of multimodal data in the context of predictive biomedicine. We focus on methods that combine different OMIC measurements with image or genome variation data. Our overview shows the diversity of methods that address analysis challenges and reveals new avenues for novel developments.
As important as the intrinsic properties of an individual nerve cell is the network of neurons in which it is embedded, by virtue of which it acquires a great part of its responsiveness and functionality. In this study we have explored how the topological properties and conduction delays of several classes of neural networks affect the capacity of their constituent cells to establish well-defined temporal relations among the firing of their action potentials. This ability of a population of neurons to produce and maintain millisecond-precise coordinated firing (either evoked by external stimuli or internally generated) is central to neural codes that exploit precise spike timing for the representation and communication of information. Our results, based on extensive simulations of conductance-based neurons in an oscillatory regime, indicate that only certain network topologies allow for coordinated firing at a local and long-range scale simultaneously. Besides network architecture, axonal conduction delays are also observed to be another important factor in the generation of coherent spiking. We report that such communication latencies not only set the phase difference between the oscillatory activity of remote neural populations but also determine whether the interconnected cells can settle into any coherent firing at all. In this context, we have also investigated how the balance between the network's synchronizing effects and the dispersive drift caused by inhomogeneities in natural firing frequencies across neurons is resolved. Finally, we show that the observed roles of conduction delays and frequency dispersion are not particular to canonical networks: experimentally measured anatomical networks, such as the macaque cortical network, can display the same type of behavior.
In self-organized critical (SOC) systems avalanche size distributions follow power-laws. Power-laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
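A short sketch of the classical bin-based avalanche definition used in this kind of comparison: spikes are pooled across electrodes, time is divided into bins, and an avalanche is a run of consecutive non-empty bins bounded by empty ones; the dependence on temporal bin size mentioned above enters through `bin_size`. This is an illustration of the standard definition, not the exact analysis code used for the recordings.

```python
# Classical avalanche extraction from pooled spike times (standard definition, illustrative only).
import numpy as np

def avalanche_sizes(spike_times, bin_size):
    """Return the number of spikes in each run of consecutive non-empty time bins."""
    t_max = spike_times.max()
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max + bin_size, bin_size))
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c           # extend the ongoing avalanche
        elif current > 0:
            sizes.append(current)  # an empty bin terminates the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

# Example with Poissonian spike times pooled over all recorded units.
spikes = np.sort(np.random.uniform(0.0, 100.0, 2000))  # seconds
print(avalanche_sizes(spikes, bin_size=0.004)[:10])     # 4 ms bins
```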
When studying real-world complex networks, one rarely has full access to all their components. As an example, the central nervous system of the human consists of 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables which characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic of SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all other units.
Under subsampling the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated.
Since the observables under subsampling depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when changing the number of, and the distance between, electrodes in neural tissue and the sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even if brain activity is far from being fully sampled.
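The subsampling procedure described above amounts to keeping the activity of a small set of sampled sites and discarding the rest; a minimal sketch with a hypothetical raster layout (units x time bins) follows.

```python
# Minimal subsampling sketch: keep only n_sites units from a full spike raster,
# discarding all other units, as described above (illustrative data layout).
import numpy as np

def subsample(raster, n_sites, seed=0):
    rng = np.random.default_rng(seed)
    sampled = np.sort(rng.choice(raster.shape[0], size=n_sites, replace=False))
    return raster[sampled, :]

full = (np.random.rand(1000, 5000) < 0.01)   # full model activity: 1000 units x 5000 bins
sub = subsample(full, n_sites=100)           # what a small electrode array would "see"
print(full.sum(), sub.sum())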
Neuronal dynamics differs between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, termed self-organized critical (SOC), has been proposed to govern human brain dynamics because of its optimal information coding and processing capabilities. Here we address two open questions: First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; however, the interaction between brain areas may be organized differently. Here we asked whether the interaction between brain areas is SOC. ...
The charged-particle community is looking for techniques that exploit proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods, for which reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler; the device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
Interacting with the environment to process sensory information, generate perceptions, and shape behavior engages neural networks in brain areas with highly varied representations, ranging from unimodal sensory cortices to higher-order association areas. Recent work suggests a much greater degree of commonality across areas, with distributed and modular networks present in both sensory and non-sensory areas during early development. However, it is currently unknown whether this initially common modular structure undergoes an equally common developmental trajectory, or whether such a modular functional organization persists in some areas—such as primary visual cortex—but not others. Here we examine the development of network organization across diverse cortical regions in ferrets of both sexes using in vivo widefield calcium imaging of spontaneous activity. We find that all regions examined, including both primary sensory cortices (visual, auditory, and somatosensory—V1, A1, and S1, respectively) and higher order association areas (prefrontal and posterior parietal cortices) exhibit a largely similar pattern of changes over an approximately 3 week developmental period spanning eye opening and the transition to predominantly externally-driven sensory activity. We find that both a modular functional organization and millimeter-scale correlated networks remain present across all cortical areas examined. These networks weakened over development in most cortical areas, but strengthened in V1. Overall, the conserved maintenance of modular organization across different cortical areas suggests a common pathway of network refinement, and suggests that a modular organization—known to encode functional representations in visual areas—may be similarly engaged in highly diverse brain areas.
Significance Different areas of the mature brain encode vastly different representations of the world. This study shows that a modular functional organization where nearby neurons participate in similar functional networks is shared across different brain areas not only during early development, but also as the brain matures where it remains a shared feature that shapes neural activity. The largely conserved trajectory of developmental changes across brain areas suggests that similar circuit mechanisms may drive this maturation. This implies that the large literature on developing cortical circuits, which is largely focused on sensory areas, may also apply more broadly, and that perturbations during development that impinge on any such shared mechanisms may produce deficits that extend across multiple brain systems.
We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radiative transfer code, BHOSS. The resulting synthetic intensity maps of accretion onto black holes are found to be convergent with increasing resolution and are anticipated to play a crucial role in the interpretation of horizon-scale images resulting from upcoming radio observations of the source at the Galactic Center.
The wave function of a spheroidal harmonic oscillator without spin-orbit interaction is expressed in terms of associated Laguerre and Hermite polynomials. The pairing gap and Fermi energy are found by solving the BCS system of two equations. Analytical relationships for the matrix elements of inertia are obtained as functions of the main quantum numbers and the potential derivative. They may be used to test the complex computer codes one should develop in a realistic approach to fission dynamics. The results given for the 240Pu nucleus are compared with a hydrodynamical model. The importance of taking into account the correction term due to the variation of the occupation number is stressed.
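The BCS system of two equations referred to above is, in its standard textbook form (with pairing strength G, single-particle energies ε_k, Fermi energy λ, gap Δ and particle number N; the paper's own notation may differ):

```latex
% Standard BCS gap and particle-number equations (textbook form).
\frac{2}{G} \;=\; \sum_{k} \frac{1}{\sqrt{(\varepsilon_k - \lambda)^2 + \Delta^2}},
\qquad
N \;=\; \sum_{k} \left[ 1 - \frac{\varepsilon_k - \lambda}{\sqrt{(\varepsilon_k - \lambda)^2 + \Delta^2}} \right].
```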
Potential energy surfaces are calculated by using the most advanced asymmetric two-center shell model, allowing one to obtain shell and pairing corrections which are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy-ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained by using the macroscopic-microscopic method.
Complex fission phenomena
(2004)
Complex fission phenomena are studied in a unified way. Very general reflection-asymmetric equilibrium (saddle-point) nuclear shapes are obtained by solving an integro-differential equation without the need to specify a certain parametrization. The mass asymmetry in binary cold fission of Th and U isotopes is explained as the result of adding a phenomenological shell correction to the liquid-drop model deformation energy. Applications to binary, ternary, and quaternary fission are outlined.
Sharp wave-ripples (SPW-Rs) are a hippocampal network phenomenon critical for memory consolidation and planning. SPW-Rs have been extensively studied in the adult brain, yet their developmental trajectory is poorly understood. While SPWs have been recorded in rodents shortly after birth, the time point and mechanisms of ripple emergence are still unclear. Here, we combine in vivo electrophysiology with optogenetics and chemogenetics in 4- to 12-day-old mice to address this knowledge gap. We show that ripples are robustly detected and induced by light stimulation of ChR2-transfected CA1 pyramidal neurons only from postnatal day (P) 10 onwards. Leveraging a spiking neural network model, we mechanistically link the maturation of inhibition and ripple emergence. We corroborate these findings by reducing the ripple rate upon chemogenetic silencing of CA1 interneurons. Finally, we show that early SPW-Rs elicit a more robust prefrontal cortex response than SPWs lacking ripples. Thus, the development of inhibition promotes ripple emergence.
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells we show that denervation-induced dendritic atrophy also subserves homeostatic functions: By shortening their dendritic tree, granule cells compensate for the loss of inputs by a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
At nonzero temperature, QCD is expected to undergo a phase transition to a deconfined, chirally symmetric phase, the Quark-Gluon Plasma (QGP). I review what we expect theoretically about this possible transition, and what we have learned from heavy-ion experiments at RHIC. I argue that, while there are unambiguous signals for qualitatively new behavior at RHIC compared with experiments at lower energies, in detail no simple theoretical model can explain all salient features of the data.
NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events
(2009)
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including changes in the rate responses and history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data, we compare the performance of NeuroXidence with the UE method [2,3] and with cross-correlation analysis. An application of NeuroXidence to 42 single units (SU) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexity, involving firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexity showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and therefore support the temporal coding hypothesis in visual cortex. ...
Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal patterns, has long been believed to be a major component of neuronal activity. However, the discussion of whether coordinated activity really exists has remained heated and controversial. A major uncertainty is that many analysis approaches either ignored the auto-structure of the spiking activity, assumed a very simplified model (Poissonian firing), or changed the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity occurs beyond chance can be rendered false if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike trains modeled as renewal processes. We considered gamma processes with different shape parameters as well as renewal processes in which the ISI distribution is log-normal. For gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width and also investigated how they depend on the firing rate and on the rate difference between the neurons. We used Monte Carlo simulations to estimate the whole distribution for these parameters and also for other values of the shape parameter. Moreover, we considered the effect of dithering for both of these processes and found that, while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence count distribution depends very critically and in a non-trivial way on the detailed properties of the inter-spike interval distribution; 2) the dependencies of the Fano factor on the coefficient of variation of the ISI distribution are complex and mostly non-monotonic. Moreover, the Fano factor depends on the very detailed properties of the individual point processes and cannot be predicted by the CV alone. Hence, given a recorded data set, the estimated CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence count distribution; and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases, and especially for complex synchronous and spatiotemporal patterns across many neurons, spike jittering strongly increased the likelihood of false-positive findings. Last, we discuss a procedure [1] that considers the complete auto-structure of each individual spike train when testing whether synchronous firing occurs at chance level and therefore overcomes the danger of an increased level of false positives.
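A compact Monte Carlo sketch of the type of study described above: two independent gamma renewal spike trains are generated, binned, and the coincidence count and its Fano factor are estimated across repetitions. The rate, shape parameter, bin size and coincidence definition are illustrative choices, not the simulation settings of the study.

```python
# Monte Carlo sketch: coincidence counts between two independent gamma renewal processes
# (illustrative parameters and coincidence definition).
import numpy as np

rng = np.random.default_rng(1)

def gamma_spike_train(rate, shape, duration):
    """Renewal process with gamma-distributed inter-spike intervals of mean 1/rate."""
    isis = rng.gamma(shape, scale=1.0 / (rate * shape), size=int(3 * rate * duration))
    times = np.cumsum(isis)
    return times[times < duration]

def coincidences(train_a, train_b, duration, bin_size):
    bins = np.arange(0.0, duration + bin_size, bin_size)
    a, _ = np.histogram(train_a, bins=bins)
    b, _ = np.histogram(train_b, bins=bins)
    return np.sum(np.minimum(a, b))  # one possible per-bin coincidence count

counts = np.array([
    coincidences(gamma_spike_train(10.0, 4.0, 10.0),
                 gamma_spike_train(10.0, 4.0, 10.0), 10.0, 0.005)
    for _ in range(500)
])
print("mean coincidences:", counts.mean(), "Fano factor:", counts.var() / counts.mean())
```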
Poster presentation: How can two distant neural assemblies synchronize their firing at zero lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more evidence is accumulating in favor of its functional role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such a dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that two distant neural assemblies that do not interact directly but relay their dynamics via a third mediating single neuron or population can eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms the distant neural populations can synchronize their firing at zero lag with millisecond precision after the exchange of a few spikes. The roles of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations are also studied, confirming the robustness of this synchronization mechanism. The proposed network module is densely embedded within the complex functional architecture of the brain, especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication. In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous works, neither inhibition, gap junctions, nor complex networks need to be invoked to provide a stable mechanism of zero-phase correlated activity of neural populations in the presence of large conduction delays.
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
Poster presentation: Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to characterize the relationship between neural spiking activity and the features of putative stimuli: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and, more recently, the point-process generalized linear model (GLM). We compared the performance of these three approaches in estimating receptive-field properties and orientation tuning of 251 V1 neurons recorded from 2 monkeys during a fixation period in response to a moving bar. The GLM consisted of two formulations of the conditional intensity function for a point-process characterization of the spiking activity: one with a stimulus-only component and one with both stimulus and spike history. We fit the GLMs by maximum likelihood using glmfit in Matlab. Goodness-of-fit was assessed using cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem to evaluate the accuracy with which each model predicts the spiking activity of individual neurons for each movement direction (4016 models in total, for 251 neurons and 16 directions). The GLMs that considered spike history of up to 35 ms accurately predicted neuronal spiking activity (within the 95% confidence intervals of the KS test) for 97.0% (3895/4016) of the training data and 96.5% (3876/4016) of the test data. If spike history was not considered, performance dropped to 73.1% on the training data and 71.3% on the test data. In contrast, the WVK and the STA predicted spiking accurately for 24.2% and 44.5% of the test-data examples, respectively. The receptive-field size estimates obtained from the GLM (with and without history), the WVK and the STA were comparable. Relative to the GLM, orientation tuning was underestimated on average by a factor of 0.45 by the WVK and the STA. The main reason for using the STA and WVK approaches is their apparent simplicity. However, our analyses suggest that more accurate spike prediction, as well as more credible estimates of receptive-field size and orientation tuning, can be computed easily using GLMs implemented in Matlab with standard functions such as glmfit.
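The analysis above was done with Matlab's glmfit; as a rough Python analogue, the following sketch fits a Poisson point-process GLM with stimulus and spike-history covariates to a simulated spike train and checks goodness of fit via the time-rescaling theorem and a KS statistic. The simulated data and all parameter choices are assumptions for illustration only.

```python
# Illustrative sketch: point-process GLM with spike-history terms and a
# time-rescaling goodness-of-fit check, on synthetic data.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
dt, n_bins, hist_lags = 0.001, 30000, 35           # 1 ms bins, 35 ms history

# simulated stimulus drive and Bernoulli spiking (no true history dependence)
stim = np.sin(2 * np.pi * 2 * np.arange(n_bins) * dt)
true_rate = np.exp(-3.5 + 1.2 * stim)               # spikes per bin (log link)
spikes = (rng.random(n_bins) < true_rate).astype(float)

# design matrix: intercept, stimulus, and lagged spike-history columns
hist_cols = [np.concatenate([np.zeros(k), spikes[:-k]])
             for k in range(1, hist_lags + 1)]
X = sm.add_constant(np.column_stack([stim] + hist_cols))

glm = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
rate_hat = glm.mu                                    # fitted conditional intensity per bin

# time-rescaling: rescaled ISIs should be Exp(1) if the model fits the data
cum = np.cumsum(rate_hat)
spike_idx = np.nonzero(spikes)[0]
rescaled_isis = np.diff(cum[spike_idx])
u = 1.0 - np.exp(-rescaled_isis)                     # should be Uniform(0, 1)
ks_stat, p_value = stats.kstest(u, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```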
The cumulant ratios up to fourth order of the Z distributions of the largest fragment in spectator fragmentation following 107,124Sn+Sn and 124La+Sn collisions at 600 MeV/nucleon have been investigated. They are found to exhibit the signatures of a second-order phase transition established with cubic bond percolation and previously observed in the ALADIN experimental data for the fragmentation of 197Au projectiles at similar energies. The deduced pseudocritical points are found to be only weakly dependent on the A/Z ratio of the fragmenting spectator source. The same holds for the corresponding chemical freeze-out temperatures of close to 6 MeV. The experimental cumulant distributions are quantitatively reproduced with the Statistical Multifragmentation Model, using the parameters employed to describe the experimental fragment multiplicities, isotope distributions and their correlations with impact-parameter-related observables in these reactions. The characteristic coincidence of the zero crossing of the skewness with the minimum of the kurtosis excess appears to be a generic property of statistical models and is found to coincide with the maximum of the heat capacity in the canonical thermodynamic fragmentation model.
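For readers unfamiliar with the observables, the bookkeeping behind the cumulant ratios is simple: given event-wise samples of the largest-fragment charge Z_max, one computes the cumulants up to fourth order and the derived skewness and kurtosis excess whose zero crossing and minimum serve as the transition signatures. The sketch below (not the analysis code of the paper) shows this on purely synthetic samples.

```python
# Illustrative sketch: cumulant ratios (skewness and kurtosis excess) of
# largest-fragment charge distributions, computed from synthetic samples.
import numpy as np
from scipy import stats

def cumulant_ratios(z_max):
    """Mean, variance, skewness (k3/k2^1.5) and kurtosis excess (k4/k2^2)."""
    z = np.asarray(z_max, float)
    return z.mean(), z.var(), stats.skew(z), stats.kurtosis(z)

rng = np.random.default_rng(2)
samples = {
    "synthetic sample A": rng.normal(40, 4, 5000) + rng.gamma(2, 2, 5000),
    "synthetic sample B": rng.normal(25, 8, 5000),
    "synthetic sample C": 50 - rng.gamma(3, 3, 5000),
}
for label, sample in samples.items():
    k1, k2, skew, kurt = cumulant_ratios(sample)
    print(f"{label}: <Z_max>={k1:5.1f}  var={k2:6.1f}  "
          f"skew={skew:+.2f}  kurtosis excess={kurt:+.2f}")
```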
Self-organized complexity and Coherent Infomax from the viewpoint of Jaynes’s probability theory
(2012)
This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes's probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. The paper argues that neural systems do this by combining local reliability with flexible, holistic context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes's theory.
Lattice QCD with heavy quarks reduces to a three-dimensional effective theory of Polyakov loops, which is amenable to series expansion methods. We analyse the effective theory in the cold and dense regime for a general number of colours, N_c. In particular, we investigate the transition from a hadron gas to baryon condensation. For any finite lattice spacing, we find the transition to become stronger, i.e. ultimately first-order, as N_c is made large. Moreover, in the baryon-condensed regime, we find the pressure to scale as p ∼ N_c through three orders in the hopping expansion. Such a phase differs from a hadron gas, with p ∼ N_c^0, or a quark-gluon plasma, with p ∼ N_c^2, and was termed quarkyonic in the literature, since it shows both baryon-like and quark-like aspects. A lattice filling with baryon number shows a rapid and smooth transition from condensing baryons to a crystal of saturated quark matter, due to the Pauli principle, and is consistent with this picture. For continuum physics, the continuum limit needs to be taken before the large-N_c limit, which is not yet possible in practice. However, in the controlled range of lattice spacings and N_c values, our results are stable when the limits are approached in this order. We discuss possible implications for physical QCD.
LatticeQCD using OpenCL
(2011)
The global energy system is undergoing a major transition, and in energy planning and decision-making across governments, industry and academia, models play a crucial role. Because of their policy relevance and contested nature, the transparency and open availability of energy models and data are of particular importance. Here we provide a practical how-to guide based on the collective experience of members of the Open Energy Modelling Initiative (Openmod). We discuss key steps to consider when opening code and data, including determining intellectual property ownership, choosing a licence and appropriate modelling languages, distributing code and data, and providing support and building communities. After illustrating these decisions with examples and lessons learned from the community, we conclude that, even though individual researchers' choices are important, institutional changes are also necessary to achieve more openness and transparency in energy research.