Frankfurt Institute for Advanced Studies (FIAS)
Experimental data from the NA49 collaboration show an unexpectedly steep rise of the rapidity width of the ϕ meson as a function of beam energy, which has been suggested as a possible signal of novel physics. In this work we show that the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model reproduces the shapes of the rapidity distributions of most measured hadrons and predicts a common linear increase of the width for all hadrons. Only when we follow the exact analysis technique and experimental acceptance of the NA49 and NA61/SHINE collaborations do we find that the extracted rapidity width of the ϕ increases drastically at the highest beam energy. We conclude that the observed steep increase of the ϕ rapidity width is an artifact of the limited detector acceptance and the simplified Gaussian fit approximation.
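The acceptance effect can be illustrated with a toy calculation (all numbers are illustrative, not the NA49 values): a width estimated only from rapidities inside a finite acceptance window is biased, which is why experiments extrapolate with a fit — and an oversimplified single-Gaussian fit can then bias the extrapolated width as well.

```python
import numpy as np

rng = np.random.default_rng(0)

# True rapidity distribution of the phi meson, modeled here as a Gaussian
# (sigma_true and the acceptance window are illustrative values).
sigma_true = 1.2
y = rng.normal(0.0, sigma_true, 200_000)

# Full phase space: the sample standard deviation recovers sigma_true.
sigma_full = y.std()

# Limited acceptance: only rapidities inside |y| < y_max are measured.
y_max = 1.0
y_acc = y[np.abs(y) < y_max]

# A naive width estimate from the truncated sample is strongly biased low,
# so any extrapolation beyond the window leans entirely on the fit model.
sigma_acc = y_acc.std()

print(f"true width ~ {sigma_full:.2f}, width inside acceptance ~ {sigma_acc:.2f}")
```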
We investigate the development of the directed flow, v1, and the elliptic flow, v2, in mid-central Au+Au collisions at Elab=1.23A GeV. We demonstrate that the elliptic flow of the hot and dense matter is initially positive (v2>0) due to the early pressure gradient. This positive v2 transfers momentum to the spectators, which creates the directed flow v1. In turn, spectator shadowing of the in-plane expansion leads to a preferred decoupling of hadrons in the out-of-plane direction and results in a negative v2 for the observable final-state hadrons. We propose measurements of v1−v2 flow correlations and of the elliptic flow of dileptons as methods to pin down this evolution pattern. The elliptic flow of the dileptons then allows the early-stage EoS to be determined more precisely, because it avoids the strong shadowing-induced modifications of the momentum distribution seen in the protons. This opens the unique opportunity for the HADES and CBM collaborations to measure the equation of state directly at 2-3 times nuclear saturation density.
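The flow coefficients v1 and v2 are the first two Fourier coefficients of the azimuthal particle distribution relative to the reaction plane; a minimal sketch of their extraction (the input values of v1 and v2 are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy azimuthal distribution dN/dphi ∝ 1 + 2 v1 cos(phi) + 2 v2 cos(2 phi),
# measured relative to the reaction plane (v1, v2 chosen for illustration).
v1_in, v2_in = 0.10, -0.05
phi_grid = np.linspace(-np.pi, np.pi, 10_000)
w = 1 + 2 * v1_in * np.cos(phi_grid) + 2 * v2_in * np.cos(2 * phi_grid)
phi = rng.choice(phi_grid, size=500_000, p=w / w.sum())

# The coefficients are recovered as event averages of cos(n * phi):
v1 = np.cos(phi).mean()
v2 = np.cos(2 * phi).mean()
print(f"v1 ~ {v1:.3f}, v2 ~ {v2:.3f}")
```

In a real analysis the reaction plane is not known and must itself be estimated event by event, which introduces a resolution correction omitted here.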
Future operation of the CBM detector requires ultra-fast analysis of the continuous stream of data from all subdetector systems. Determining the inter-system time shifts among the individual detector systems in the existing prototype experiment mCBM is an essential step for data processing and, in particular, for stable data taking. Based on the raw measurements from all detector systems, the corresponding time correlations can be obtained at the digital level by evaluating the differences in time stamps. If the relevant systems are stable during data taking and sufficient digital measurements are available, the distribution of time differences displays a clear peak. Up to now, the processed time differences have been stored in histograms and the maximum peak is extracted only after all timeslices of a run have been evaluated, which leads to significant run times. The results presented here demonstrate the stability of the synchronicity of the mCBM systems. Furthermore, it is illustrated that relatively small amounts of raw measurements are sufficient to evaluate the corresponding time correlations among individual mCBM detectors, thus enabling fast online monitoring in future online data processing.
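The time-stamp-difference technique can be sketched as follows (a toy reconstruction, not the mCBM software; shift, rates, and resolutions are invented for illustration): histogram all pairwise A-B time differences within a window, and read the inter-system shift off the peak.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy digi time stamps (ns) from two detector systems: hits in system B
# correlated with system A at a fixed inter-system shift, plus
# uncorrelated background (all numbers are illustrative).
true_shift = 250.0
t_a = np.sort(rng.uniform(0, 1e6, 5_000))
t_b = np.sort(np.concatenate([
    t_a + true_shift + rng.normal(0, 5, t_a.size),  # correlated digis
    rng.uniform(0, 1e6, 5_000),                     # background digis
]))

# Histogram all A-B time-stamp differences within a +-1000 ns window;
# a stable inter-system shift shows up as a clear peak over the background.
lo = np.searchsorted(t_b, t_a - 1000)
hi = np.searchsorted(t_b, t_a + 1000)
diffs = np.concatenate([t_b[l:h] - t for t, l, h in zip(t_a, lo, hi)])

counts, edges = np.histogram(diffs, bins=np.arange(-1000, 1001, 10.0))
i = int(np.argmax(counts))
peak = 0.5 * (edges[i] + edges[i + 1])
print(f"estimated inter-system time shift ~ {peak:.0f} ns")
```

Because the peak emerges already from modest statistics, the same estimate can be formed online on a small slice of data instead of after processing a full run.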
In this work the second- and fourth-order baryon number and strangeness susceptibilities are presented. The results at zero baryon chemical potential are obtained using a well-tested chiral effective model that includes all known hadronic degrees of freedom and additionally implements quarks and gluons in a PNJL-like approach. Quark and baryon number susceptibilities are sensitive to the fundamental degrees of freedom in the model and signal the shift from massive hadrons to light quarks at the deconfinement transition by a sharp rise at the critical temperature. Furthermore, all susceptibilities are found to be largely suppressed by repulsive vector field interactions of the particles. In the hadronic sector, vector repulsion of baryon resonances strongly restrains fluctuations, and in the quark sector above Tc even small vector field interactions of quarks suppress all fluctuations unreasonably strongly. For this reason, vector field interactions of quarks have to vanish in the deconfinement limit.
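In the standard convention (a generic definition, not specific to this model), these susceptibilities are derivatives of the scaled pressure with respect to the scaled chemical potential, evaluated at vanishing chemical potential:

```latex
\chi_{n}^{B}(T) \;=\; \left.\frac{\partial^{\,n}\bigl(p/T^{4}\bigr)}
{\partial\bigl(\mu_{B}/T\bigr)^{\,n}}\right|_{\mu_{B}=0},
\qquad n = 2,\,4,
```

with the analogous derivatives with respect to μS/T defining the strangeness susceptibilities.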
Neurogenesis of hippocampal granule cells (GCs) persists throughout mammalian life and is important for learning and memory. How newborn GCs differentiate and mature into an existing circuit during this time period is not yet fully understood. We established a method to visualize postnatally generated GCs in organotypic entorhino-hippocampal slice cultures (OTCs) using retroviral (RV) GFP-labeling and performed time-lapse imaging to study their morphological development in vitro. Using anterograde tracing we could, furthermore, demonstrate that the postnatally generated GCs in OTCs, similar to adult-born GCs, grow into the existing entorhino-dentate circuitry. RV-labeled GCs were identified and individual cells were followed for up to four weeks post injection. Postnatally born GCs exhibited highly dynamic structural changes, including dendritic growth spurts but also retraction of dendrites and phases of dendritic stabilization. In contrast, older, presumably prenatally born GCs, labeled with an adeno-associated virus (AAV), were far less dynamic. We propose that the high degree of structural flexibility seen in our preparations is necessary for the integration of newborn granule cells into an already existing neuronal circuit of the dentate gyrus, in which they have to compete for entorhinal input with cells generated and integrated earlier.
Highlights
• We present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series.
• The influence of different dataset configurations, such as the number of stations, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios.
• The model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33.
Abstract
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis, as they provide continuous high-frequency measurements of ground motion. These data can be used to analyze diverse parameters related to the seismic source and to assess the potential of an earthquake to produce strong ground motions at certain distances and even generate tsunamis. In this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation using HR-GNSS displacement time series. The influence of different dataset configurations, such as the number of stations, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data, with 0.07 ≤ RMS ≤ 0.11. Comparable results were observed in tests using synthetic data from a different region than the training data, with RMS ≤ 0.15. Furthermore, the model was tested using real data from different regions and magnitudes, yielding in the best cases 0.09 ≤ RMS ≤ 0.33, provided that the data from a particular group of stations had epicentral distance constraints similar to those used during model training. The robustness of the DL model can be improved so that it works independently of the time-series window size and the number of stations, enabling faster estimation using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
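The general architecture class — convolution over multi-component displacement time series, pooled into a scalar magnitude estimate — can be sketched with a minimal, untrained forward pass (a sketch only: layer sizes, kernel widths, and weights are arbitrary stand-ins, not the authors' trained model):

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d(x, kernels):
    """Valid 1-D cross-correlation (conv-layer convention) of a
    (channels, time) signal with (out_ch, in_ch, width) kernels + ReLU."""
    out_ch, in_ch, width = kernels.shape
    t_out = x.shape[1] - width + 1
    out = np.zeros((out_ch, t_out))
    for o in range(out_ch):
        for i in range(in_ch):
            for t in range(t_out):
                out[o, t] += np.dot(kernels[o, i], x[i, t:t + width])
    return np.maximum(out, 0.0)

# Toy 3-component GNSS displacement window (E, N, U), 128 samples.
x = rng.normal(0, 1, (3, 128))

# Untrained illustrative weights: one conv layer, global average pooling
# over time, and a linear readout to a scalar magnitude estimate.
k1 = rng.normal(0, 0.1, (8, 3, 7))
w_out = rng.normal(0, 0.1, 8)
features = conv1d(x, k1).mean(axis=1)  # global average pool
magnitude = float(features @ w_out)
print(f"predicted magnitude (untrained): {magnitude:.3f}")
```

The global pooling step is one common way to make such a model less sensitive to the exact window length, which relates to the robustness goal stated above.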
For medicine to fulfill its promise of personalized treatments based on a better understanding of disease biology, computational and statistical tools must exist to analyze the increasing amount of patient data that becomes available. A particular challenge is that several types of data are being measured to cope with the complexity of the underlying systems, enhance predictive modeling and enrich molecular understanding.
Here we review a number of recent approaches that specialize in the analysis of multimodal data in the context of predictive biomedicine. We focus on methods that combine different OMIC measurements with image or genome variation data. Our overview shows the diversity of methods that address analysis challenges and reveals new avenues for novel developments.
As important as the intrinsic properties of an individual nerve cell is the network of neurons in which it is embedded and by virtue of which it acquires a great part of its responsiveness and functionality. In this study we have explored how the topological properties and conduction delays of several classes of neural networks affect the capacity of their constituent cells to establish well-defined temporal relations among the firing of their action potentials. This ability of a population of neurons to produce and maintain millisecond-precise coordinated firing (either evoked by external stimuli or internally generated) is central to neural codes that exploit precise spike timing for the representation and communication of information. Our results, based on extensive simulations of conductance-based model neurons in an oscillatory regime, indicate that only certain network topologies allow for coordinated firing at local and long-range scales simultaneously. Besides network architecture, axonal conduction delays are observed to be another important factor in the generation of coherent spiking. We report that such communication latencies not only set the phase difference between the oscillatory activity of remote neural populations but also determine whether the interconnected cells can settle into any coherent firing at all. In this context, we have also investigated how the balance between the network's synchronizing effects and the dispersive drift caused by inhomogeneities in natural firing frequencies across neurons is resolved. Finally, we show that the observed roles of conduction delays and frequency dispersion are not particular to canonical networks: experimentally measured anatomical networks, such as the macaque cortical network, can display the same type of behavior.
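The role of a conduction delay in setting a locked phase relation can be sketched with two delay-coupled phase oscillators (a minimal stand-in for the conductance-based neurons in the study; all parameters are illustrative, and this toy only shows that a delayed coupling admits a stable phase-locked state):

```python
import numpy as np

# Two identical phase oscillators coupled with a conduction delay tau.
omega = 2 * np.pi          # intrinsic frequency: one cycle per unit time
K, tau, dt = 1.0, 0.1, 0.001
steps, d = 40_000, int(tau / dt)

theta = np.zeros((steps, 2))
theta[:d + 1, 0] = 0.5     # start with a phase offset during the history

# Euler integration with delayed sinusoidal coupling:
for t in range(d, steps - 1):
    theta[t + 1, 0] = theta[t, 0] + dt * (omega + K * np.sin(theta[t - d, 1] - theta[t, 0]))
    theta[t + 1, 1] = theta[t, 1] + dt * (omega + K * np.sin(theta[t - d, 0] - theta[t, 1]))

# For these parameters the in-phase solution is stable: the delay shifts
# the common locked frequency, and the initial phase offset decays away.
final_diff = float(np.angle(np.exp(1j * (theta[-1, 0] - theta[-1, 1]))))
print(f"asymptotic phase difference ~ {final_diff:.4f} rad")
```

For other delay/coupling combinations the same system locks with a nonzero lag or fails to lock, which is the delay dependence the abstract refers to.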
In self-organized critical (SOC) systems avalanche size distributions follow power-laws. Power-laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
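The propagation principle stated above — one spike triggers on average slightly less than one spike in the next step — is precisely a branching process with parameter σ just below 1. A minimal sketch (σ chosen for illustration): in this sub-critical regime avalanches remain finite, with mean size approximately 1/(1−σ).

```python
import numpy as np

rng = np.random.default_rng(4)

def avalanche_size(sigma, rng, cap=10_000):
    """Total number of spikes in one avalanche of a branching process in
    which each spike triggers on average `sigma` spikes in the next step."""
    active, total = 1, 1
    while active and total < cap:
        active = rng.poisson(sigma * active)
        total += active
    return total

# Slightly sub-critical regime (sigma just below 1), as suggested for
# neural activity in vivo: avalanches stay finite but can still get large.
sizes = np.array([avalanche_size(0.95, rng) for _ in range(20_000)])
print(f"mean avalanche size at sigma=0.95: {sizes.mean():.1f}")
```

At σ = 1 (critical) the mean diverges and sizes follow a power law; pushing σ above 1 gives runaway activity, the super-critical regime linked to epilepsy in the abstract.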
When studying real-world complex networks, one rarely has full access to all their components. As an example, the human central nervous system consists of 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel, so observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables that characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches, which are characteristic for SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all other units.
Under subsampling, the avalanche measures A depended on three main factors. First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n: with small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites: with small d, A was overestimated, while with large d, A was underestimated.
Since under subsampling the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when changing the number of, and distance between, electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave in the same way as in the neural tissue, and incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even when brain activity is far from being fully sampled.
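The subsampling procedure described above — observing only a subset of units — can be sketched on a toy branching model (parameters are illustrative; this reproduces only the generic distortion of the size measure, not the topology- or distance-dependent effects discussed in the study):

```python
import numpy as np

rng = np.random.default_rng(5)

def avalanche_sizes(spike_counts):
    """Avalanche = consecutive run of non-empty time bins;
    its size is the total spike count within the run."""
    sizes, current = [], 0
    for c in spike_counts:
        if c:
            current += int(c)
        elif current:
            sizes.append(current)
            current = 0
    return np.array(sizes)

# Toy population activity: sub-critical branching dynamics with a weak
# external drive, recorded as total spikes per time bin.
n, steps, sigma, drive = 1000, 50_000, 0.9, 0.05
counts = np.zeros(steps, dtype=int)
a = 0
for t in range(steps):
    a = rng.poisson(sigma * a + drive)
    counts[t] = a

# Subsampling m of the n units: with spikes spread uniformly over units,
# each spike survives with probability m / n.
m = 100
sub_counts = rng.binomial(counts, m / n)

full = avalanche_sizes(counts)
sub = avalanche_sizes(sub_counts)
print(f"mean avalanche size: full {full.mean():.1f}, subsampled {sub.mean():.1f}")
```

Even in this crude toy the subsampled avalanche statistics differ strongly from the fully sampled ones (avalanches fragment and shrink), which is the kind of systematic, model-specific distortion the proposed model-selection procedure exploits.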