The properties of the open-strange meson K1± in nuclear matter are estimated in the QCD sum rule approach. We obtain a relation between the in-medium mass and width of the K1− (K1+) in nuclear matter, and show that the upper limit of the mass shift is as large as −249 (−35) MeV. The spectral modification of the K1 meson can be probed using kaon beams at J-PARC. Such a measurement, together with that of the K⁎, will shed light on how chiral symmetry is partially restored in nuclear matter.
The effect of a non-zero strangeness chemical potential on the strong interaction phase diagram has been studied within the framework of the SU(3) quark-hadron chiral parity-doublet model. Both the nuclear liquid-gas and the chiral/deconfinement phase transitions are modified. The first-order line of the chiral phase transition is observed to vanish completely, with the entire phase boundary becoming a crossover. These changes in the nature of the phase transitions are expected to modify various susceptibilities, the effects of which might be detectable in particle-number distributions resulting from moderate-temperature and high-density heavy-ion collision experiments.
In this letter we present some stringy corrections to black hole spacetimes emerging from string T-duality. As a first step, we derive the static Newtonian potential by exploiting the relation between the T-duality and the path integral duality. We show that the intrinsic non-perturbative nature of stringy corrections introduces an ultraviolet cutoff known as the zero-point length in the path integral duality literature. As a result, the static potential is found to be regular. We use this result to derive a consistent black hole metric for the spherically symmetric, electrically neutral case. It turns out that the new spacetime is regular and is formally equivalent to the Bardeen metric, apart from a different ultraviolet regulator. On the thermodynamics side, the Hawking temperature admits a maximum before a cooling-down phase towards a thermodynamically stable end of the black hole evaporation process. The findings support the idea of the universality of quantum black holes.
We consider a simple model of modified gravity interacting with a single scalar field ϕ with weakly coupled exponential potential within the framework of non-Riemannian spacetime volume-form formalism. The specific form of the action is fixed by the requirement of invariance under global Weyl-scale symmetry. Upon passing to the physical Einstein frame we show how the non-Riemannian volume elements create a second canonical scalar field u and dynamically generate a non-trivial two-scalar-field potential Ueff(u,ϕ) with two remarkable features: (i) it possesses a large flat region for large u describing slow-roll inflation; (ii) it has a stable low-lying minimum w.r.t. (u,ϕ) representing the dark energy density in the “late universe”. We study the corresponding two-field slow-roll inflation and show that the pertinent slow-roll inflationary curve ϕ = ϕ(u) in the two-field space (u,ϕ) has a very small curvature, i.e., ϕ changes very little during the inflationary evolution of u on the flat region of Ueff(u,ϕ). Explicit expressions are found for the slow-roll parameters, which differ from those in the single-field inflationary counterpart. Numerical solutions for the scalar spectral index and the tensor-to-scalar ratio are derived, in agreement with the observational data.
Rethinking superdeterminism
(2020)
Quantum mechanics has irked physicists ever since its conception more than 100 years ago. While some of the misgivings, such as it being unintuitive, are merely aesthetic, quantum mechanics has one serious shortcoming: it lacks a physical description of the measurement process. This “measurement problem” indicates that quantum mechanics is at least an incomplete theory—good as far as it goes, but missing a piece—or, more radically, is in need of complete overhaul. Here we describe an approach which may provide this sought-for completion or replacement: Superdeterminism. A superdeterministic theory is one which violates the assumption of Statistical Independence (that distributions of hidden variables are independent of measurement settings). Intuition suggests that Statistical Independence is an essential ingredient of any theory of science (never mind physics), and for this reason Superdeterminism is typically discarded swiftly in any discussion of quantum foundations. The purpose of this paper is to explain why the existing objections to Superdeterminism are based on experience with classical physics and linear systems, but that this experience misleads us. Superdeterminism is a promising approach not only to solve the measurement problem, but also to understand the apparent non-locality of quantum physics. Most importantly, we will discuss how it may be possible to test this hypothesis in an (almost) model independent way.
In this work, we discuss the dense matter equation of state (EOS) for the extreme range of conditions encountered in neutron stars and their mergers. The calculation of the properties of such an EOS involves modeling different degrees of freedom (such as nuclei, nucleons, hyperons, and quarks), taking into account different symmetries, and including finite density and temperature effects in a thermodynamically consistent manner. We begin by addressing subnuclear matter consisting of nucleons and a small admixture of light nuclei in the context of the excluded volume approach. We then turn our attention to supranuclear homogeneous matter as described by the Chiral Mean Field (CMF) formalism. Finally, we present results from realistic neutron-star-merger simulations performed using the CMF model that predict signatures for deconfinement to quark matter in gravitational wave signals.
In power systems, flow allocation (FA) methods make it possible to allocate the usage and costs of the transmission grid to each individual market participant. Based on predefined assumptions, the power flow is split into isolated generator-specific or producer-specific sub-flows. Two prominent FA methods, Marginal Participation (MP) and Equivalent Bilateral Exchanges (EBE), build upon the linearized power flow and thus on the Power Transfer Distribution Factors (PTDFs). Despite their intuitive and computationally efficient concepts, they are restricted to networks with passive transmission elements only. As soon as a significant number of controllable transmission elements, such as high-voltage direct current (HVDC) lines, operate in the system, they lose their applicability. This work reformulates the two methods in terms of Virtual Injection Patterns (VIPs), which allows one to efficiently introduce a shift parameter q to tune the contributions of net sources and net sinks in the network. Major properties and differences of the methods are pointed out, and it is shown how the MP and EBE algorithms can be applied to generic meshed AC-DC electricity grids: by introducing a pseudo-impedance ω̄, which reflects the operational state of controllable elements and allows one to extend the PTDF matrix under the assumption of knowing the current flow in the system. Basic properties from graph theory are used to solve for the pseudo-impedance in dependence of the position within the network. This directly enables, e.g., HVDC lines to be considered in the MP and EBE algorithms; a minimal sketch of the underlying PTDF-based allocation is given below. The extended methods are applied to a low-carbon European network model (PyPSA-EUR) with a spatial resolution of 181 nodes and an 18% transmission expansion compared to today’s total transmission capacity volume. The allocations of MP and EBE show that countries with high wind potentials profit most from the transmission grid expansion. Based on the average usage of transmission system expansion, a method of distributing operational and capital expenditures is proposed. In addition, it is shown how injections from renewable resources strongly drive country-to-country allocations and thus cross-border electricity flows.
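The linearized setting makes the core of Marginal Participation compact enough to sketch. The following is a minimal, illustrative numpy example on a three-bus toy ring with unit susceptances and bus 0 as slack; all numbers are made up, and the shift parameter q and the pseudo-impedance extension for HVDC lines from the paper are omitted:

```python
# Minimal sketch of PTDF-based Marginal Participation on a 3-bus DC toy ring.
import numpy as np

# Incidence matrix K (lines x buses) and line susceptances b.
K = np.array([[ 1, -1,  0],   # line 0: bus0 -> bus1
              [ 0,  1, -1],   # line 1: bus1 -> bus2
              [-1,  0,  1]])  # line 2: bus2 -> bus0
b = np.array([1.0, 1.0, 1.0])

# Reduced bus susceptance matrix (slack = bus 0) and PTDF w.r.t. that slack.
B = K.T @ np.diag(b) @ K
H = np.zeros((3, 3))
H[:, 1:] = np.diag(b) @ K[:, 1:] @ np.linalg.inv(B[1:, 1:])  # slack column stays 0

p = np.array([0.6, -0.1, -0.5])  # net injections (sum to zero)
flow = H @ p                      # linearized line flows

# Marginal Participation: allocate each line flow to nodes in proportion
# to PTDF * injection; the node shares sum to the total flow on each line.
allocation = H * p                # allocation[l, n]: share of line l used by node n
assert np.allclose(allocation.sum(axis=1), flow)
```

Each column of the allocation matrix is one node's virtual injection pattern mapped onto line flows; the q-shift of the paper re-weights these columns between sources and sinks.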
The Karl Schwarzschild Meeting 2017 (KSM2017) was the third instalment of the conference dedicated to the great Frankfurt scientist, who derived the first black hole solution of Einstein's equations about 100 years ago.
The event was a five-day meeting in the field of black holes, the AdS/CFT correspondence and gravitational physics. Like the two previous instalments, the conference continued to attract a stellar ensemble of participants from the world's most renowned institutions. The core of the meeting was a series of invited talks from eminent experts (keynote speakers) as well as plenary research talks by students and junior speakers.
The conference photo and poster, sponsor and funding acknowledgments, committees, and list of participants are available in this PDF.
We have built quasi-equilibrium models for uniformly rotating quark stars in general relativity. The conformal flatness approximation is employed and the Compact Object CALculator (cocal) code is extended to treat rotating stars with surface density discontinuity. In addition to the widely used MIT bag model, we have considered a strangeon star equation of state (EoS), suggested by Lai and Xu, that is based on quark clustering and results in a stiff EoS. We have investigated the maximum mass of uniformly rotating axisymmetric quark stars. We have also built triaxially deformed solutions for extremely fast rotating quark stars and studied the possible gravitational wave emission from such configurations.
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σ^pp_tot/inel: considering a realistic parton spatial distribution, one obtains too strong an increase of σ^pp_tot/inel in the limit of very high energies. We discuss various mechanisms which allow one to tame such a rise, paying special attention to the role of parton-parton correlations. In addition, we investigate a potential impact on model predictions for σ^pp_tot related to dynamical higher-twist corrections to the parton-production process.
The global energy system is undergoing a major transition, and in energy planning and decision-making across governments, industry and academia, models play a crucial role. Because of their policy relevance and contested nature, the transparency and open availability of energy models and data are of particular importance. Here we provide a practical how-to guide based on the collective experience of members of the Open Energy Modelling Initiative (Openmod). We discuss key steps to consider when opening code and data, including determining intellectual property ownership, choosing a licence and appropriate modelling languages, distributing code and data, and providing support and building communities. After illustrating these decisions with examples and lessons learned from the community, we conclude that even though individual researchers' choices are important, institutional changes are still also necessary for more openness and transparency in energy research.
In the last decades, energy modelling has supported energy planning by offering insights into the dynamics between energy access, resource use, and sustainable development. Especially in recent years, there has been an attempt to strengthen the science-policy interface and increase the involvement of society in energy planning processes. This has, both in the EU and worldwide, led to the development of open-source and transparent energy modelling practices. This paper describes the role of an open-source energy modelling tool in the energy planning process and highlights its importance for society. Specifically, it describes the existence and characteristics of the relationship between developing an open-source, freely available tool and its application, dissemination and use for policy making. Using the example of the Open Source energy Modelling System (OSeMOSYS), this work focuses on practices that were established within the community and that made the framework's development and application both relevant and scientifically grounded.
Keywords: Energy system modelling tool, Open-source software, Model-based public policy, Software development practice, Outreach practice
Python for Power System Analysis (PyPSA) is a free software toolbox for simulating and optimising modern electrical power systems over multiple periods. PyPSA includes models for conventional generators with unit commitment, variable renewable generation, storage units, coupling to other energy sectors, and mixed alternating and direct current networks. It is designed to be easily extensible and to scale well with large networks and long time series. In this paper the basic functionality of PyPSA is described, including the formulation of the full power flow equations and the multi-period optimisation of operation and investment with linear power flow equations. PyPSA is positioned in the existing free software landscape as a bridge between traditional power flow analysis tools for steady-state analysis and full multi-period energy system models. The functionality is demonstrated on two open datasets of the transmission system in Germany (based on SciGRID) and Europe (based on GridKit).
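As a flavour of the component-based interface described here, a minimal single-bus dispatch problem looks roughly as follows. This is a sketch against PyPSA's documented API at the time of the paper (the lopf entry point; newer releases use n.optimize()); the component names follow the documentation, while the numbers are arbitrary:

```python
# Minimal PyPSA example: one bus, one generator, one load, solved with
# linear optimal power flow. Requires an LP solver (e.g. GLPK) installed.
import pypsa

n = pypsa.Network()

n.add("Bus", "bus0")
n.add("Generator", "gen0", bus="bus0", p_nom=100, marginal_cost=50)
n.add("Load", "load0", bus="bus0", p_set=80)

n.lopf()                  # linear optimal power flow over the default snapshot
print(n.generators_t.p)   # optimised dispatch of gen0
```

Larger studies replace the single bus with a full network topology and add snapshots, storage and renewable time series in the same declarative style.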
In energy modelling, open data and open source code can help enhance the traceability and reproducibility of model exercises, which contributes to facilitating controversial debates and improving policy advice. While the availability of open power plant databases has increased in recent years, they often differ considerably from each other, and their data quality has not yet been systematically compared to proprietary sources. Here, we introduce the python-based ‘powerplantmatching’ (PPM), an open source toolset for cleaning, standardizing and combining multiple power plant databases. We apply it first with open databases only and then with an additional proprietary database in order to discuss and elaborate the issue of data quality, analysing capacities, countries, fuel types, geographic coordinates and commissioning years for conventional power plants. We find that a derived dataset based purely on open data is not yet on a par with one in which a proprietary database has been added to the matching, even though the statistical values for capacity agreed to a large degree between both datasets. When commissioning years are needed for modelling purposes in the final dataset, the proprietary database is crucial for increasing the quality of the derived dataset.
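To make the matching idea concrete, here is an illustrative sketch of the kind of heuristic such a toolset applies when linking records across databases. This is not PPM's actual implementation; the column names, the toy records, and the 10% capacity tolerance are assumptions:

```python
# Illustrative cross-database match: two plant lists are linked when
# standardized names overlap and capacities agree within a tolerance.
import pandas as pd

a = pd.DataFrame({"Name": ["Neurath A", "Emsland"],     "Capacity": [294.0, 1406.0]})
b = pd.DataFrame({"Name": ["neurath-a", "EMSLAND NPP"], "Capacity": [300.0, 1400.0]})

def norm(s):
    # Lowercase and strip punctuation so name variants collide on one key.
    return s.str.lower().str.replace(r"[^a-z0-9]+", " ", regex=True).str.strip()

a["key"], b["key"] = norm(a["Name"]), norm(b["Name"])
merged = a.merge(b, how="cross", suffixes=("_a", "_b"))
match = merged[
    merged.apply(lambda r: r.key_a in r.key_b or r.key_b in r.key_a, axis=1)
    & ((merged.Capacity_a - merged.Capacity_b).abs()
       <= 0.1 * merged[["Capacity_a", "Capacity_b"]].max(axis=1))
]
print(match[["Name_a", "Name_b", "Capacity_a", "Capacity_b"]])
```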
Use-dependent long-term changes of neuronal response properties must be gated to prevent irrelevant activity from inducing inappropriate modifications. Here we test the hypothesis that local network dynamics contribute to such gating. As synaptic modifications depend on temporal contiguity between presynaptic and postsynaptic activity, we examined the effect of synchronized gamma (γ) oscillations on stimulation-dependent modifications of orientation selectivity in adult cat visual cortex. Changes of orientation maps were induced by pairing visual stimulation with electrical activation of the mesencephalic reticular formation. Changes in orientation selectivity were assessed with optical recording of intrinsic signals and multiunit recordings. When conditioning stimuli were associated with strong γ-oscillations, orientation domains matching the orientation of the conditioning grating stimulus became more responsive and expanded, because neurons with preferences differing by less than 30° from the orientation of the conditioning grating shifted their orientation preference toward the conditioned orientation. When conditioning stimuli induced no or only weak γ-oscillations, responsiveness of neurons driven by the conditioning stimulus decreased. These differential effects depended on the power of oscillations in the low γ-band (20 Hz to 48 Hz) and not on differences in discharge rate of cortical neurons, because there was no correlation between the discharge rates during conditioning and the occurrence of changes in orientation preference. Thus, occurrence and polarity of use-dependent long-term changes of cortical response properties appear to depend on the occurrence of γ-oscillations during induction and hence on the degree of temporal coherence of the change-inducing network activity.
An incoming or outgoing hadron in a hard collision with large momentum transfer gets squeezed in the direction transverse to its momentum. In the case of nuclear targets, this leads to the reduced interaction of such hadrons with surrounding nucleons, which is known as color transparency (CT). The identification of CT in exclusive processes on nuclear targets is of significant interest not only in itself but also because CT is a necessary condition for the applicability of factorization to the description of the corresponding elementary process. In this paper we discuss the semiexclusive processes A(e,e′π+), A(π−,l−l+) and A(γ,π−p). Since CT is closely related to the hadron formation mechanism, the reduced interaction of ‘pre-hadrons’ with nucleons is a common feature of generic high-energy inclusive processes on nuclear targets, such as hadron attenuation in deep inelastic scattering (DIS). We discuss a novel way to study hadron formation via slow neutron production induced by a hard photon interaction with a nucleus. Finally, the opportunity to study hadron formation effects in heavy-ion collisions in the NICA regime is considered.
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
(2019)
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. By contrast, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
PURPOSE: The purpose of this work is to analyze whether the Monte Carlo codes PENH, FLUKA, and GEANT4/TOPAS are suitable to calculate absorbed doses and fQ/fQ0 ratios in therapeutic high-energy photon and proton beams.
METHODS: We used PENH, FLUKA, GEANT4/TOPAS, and EGSnrc to calculate the absorbed dose to water in a reference water cavity and the absorbed dose to air in two air cavities representative of a plane-parallel and a cylindrical ionization chamber in a 1.25 MeV photon beam and a 150 MeV proton beam (EGSnrc was only used for the photon beam calculations). The physics and transport settings in each code were adjusted to simulate the particle transport in as much detail as reasonably possible. From these absorbed doses, fQ0 factors, fQ factors, and fQ/fQ0 ratios (which are the basis of Monte Carlo calculated beam quality correction factors kQ,Q0) were calculated and compared between the codes. Additionally, we calculated the spectra of primary particles and secondary electrons in the reference water cavity, as well as the integrated depth-dose curve of 150 MeV protons in water.
RESULTS: The absorbed doses agreed within 1.4% or better between the individual codes for both the photon and proton simulations. The fQ0 and fQ factors agreed within 0.5% or better between the individual codes for both beam qualities. The resulting fQ/fQ0 ratios for 150 MeV protons agreed within 0.7% or better. For the 1.25 MeV photon beam, the spectra of photons and secondary electrons agreed almost perfectly. For the 150 MeV proton simulation, we observed differences in the spectra of secondary protons, whereas the spectra of primary protons and low-energy delta electrons agreed almost perfectly. The first 2 mm of the entrance channel of the 150 MeV proton Bragg curve agreed almost perfectly, while for greater depths the differences in the integrated dose were up to 1.5%.
CONCLUSION: PENH, FLUKA, and GEANT4/TOPAS are capable of calculating beam quality correction factors in proton beams. The differences in the fQ0 and fQ factors between the codes are 0.5% at maximum. The differences in the fQ/fQ0 ratios are 0.7% at maximum.
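For reference, the quantities compared here are related by the standard dosimetry definitions (a sketch in the notation of the abstract, with D_w and D_air the Monte Carlo absorbed doses to the water and air cavities at beam quality Q):

```latex
f_{Q} \;=\; \left(\frac{D_{\mathrm{w}}}{D_{\mathrm{air}}}\right)_{Q},
\qquad
k_{Q,Q_{0}} \;=\; \frac{f_{Q}}{f_{Q_{0}}}
```

so the quoted 0.5% and 0.7% agreements bound the code-to-code spread in the Monte Carlo based beam quality correction factors.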
An overt pro-inflammatory immune response is a key factor contributing to lethal pneumococcal infection in an influenza pre-infected host and represents a potential target for therapeutic intervention. However, there is a paucity of knowledge about the level of contribution of individual cytokines. Based on the predictions of our previous mathematical modeling approach, the potential benefit of IFN-γ- and/or IL-6-specific antibody-mediated cytokine neutralization was explored in C57BL/6 mice infected with the influenza A/PR/8/34 strain, which were subsequently infected with the Streptococcus pneumoniae strain TIGR4 on day 7 post influenza. While single IL-6 neutralization had no effect on respiratory bacterial clearance, single IFN-γ neutralization enhanced local bacterial clearance in the lungs. Concomitant neutralization of IFN-γ and IL-6 significantly reduced the degree of pneumonia as well as bacteremia compared to the control group, indicating a positive effect for the host during secondary bacterial infection. The results of our model-driven experimental study reveal that the predicted therapeutic value of IFN-γ and IL-6 neutralization in secondary pneumococcal infection following influenza infection is tightly dependent on the experimental protocol while at the same time paving the way toward the development of effective immune therapies.
Classical Hodgkin lymphoma (cHL) is one of the most common malignant lymphomas in Western Europe. The nodular sclerosing subtype of cHL (NS cHL) is characterized by a proliferation of fibroblasts in the tumor microenvironment, leading to fibrotic bands surrounding the lymphoma infiltrate. Several studies have described a crosstalk between the tumor cells of cHL, the Hodgkin and Reed-Sternberg (HRS) cells, and cancer-associated fibroblasts. However, to date a deep molecular characterization of these fibroblasts is lacking. The aim of the present study is therefore a comprehensive characterization of these fibroblasts. Gene expression profiling and methylation profiles of fibroblasts isolated from primary lymph node suspensions revealed persistent differences between fibroblasts obtained from NS cHL and lymphadenitis. NS cHL-derived fibroblasts exhibit a myofibroblastic phenotype characterized by myocardin (MYOCD) expression. Moreover, TIMP3, an inhibitor of matrix metalloproteinases, was strongly upregulated in NS cHL fibroblasts, likely contributing to the accumulation of collagen in the sclerotic bands of NS cHL. As previously shown for other types of cancer-associated fibroblasts, treatment with luteolin could reverse this fibroblast phenotype and decrease TIMP3 secretion. NS cHL fibroblasts showed enhanced proliferation when they were exposed to soluble factors released from HRS cells. For HRS cells, soluble factors from fibroblasts were not sufficient to protect them from Brentuximab-Vedotin-induced cell death. However, HRS cells adherent to fibroblasts were protected from Brentuximab-Vedotin-induced injury. In summary, we confirm the importance of fibroblasts for HRS cell survival and identify TIMP3, which probably contributes as a major factor to the typical fibrosis observed in NS cHL.
Gravitational waves, electromagnetic radiation, and the emission of high-energy particles probe the phase structure of the equation of state of dense matter produced at the crossroads of the closely related relativistic collisions of heavy ions and binary neutron star mergers. 3 + 1 dimensional special- and general-relativistic hydrodynamic simulation studies reveal a unique window of opportunity to observe phase transitions in compressed baryon matter by laboratory-based experiments and by astrophysical multimessenger observations. This article focuses on the astrophysical consequences of a hadron-quark phase transition in the interior of a compact star. Especially with a future detection of the post-merger gravitational wave emission emanating from a binary neutron star merger event, it would be possible to explore the phase structure of quantum chromodynamics. The astrophysical observables of a hadron-quark phase transition in a single compact star system and in a binary hybrid star merger scenario are summarized. The FAIR facility at GSI Helmholtzzentrum allows one to study the universe in the laboratory; several astrophysical signatures of the quark-gluon plasma have been found in relativistic collisions of heavy ions and will be explored in future experiments.
The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T of chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of its relative connection probability between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to areas in the frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.
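As a concrete illustration of the fingerprint construction described above (with toy streamline counts, not data from the study): each seed region's counts are normalized to relative connection probabilities, after which seeds or groups can be compared with a similarity measure:

```python
# Minimal sketch of a connectional fingerprint: row-normalize streamline
# counts so each fingerprint holds relative (not absolute) connection
# probabilities, then compare fingerprints with cosine similarity.
import numpy as np

counts = np.array([[120.,  30.,  50.],    # seed 0 -> target areas
                   [ 10.,  80.,  10.]])   # seed 1 -> target areas
fingerprints = counts / counts.sum(axis=1, keepdims=True)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(fingerprints)
print(cosine(fingerprints[0], fingerprints[1]))
```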
Synesthesia is a phenomenon in which additional perceptual experiences are elicited by sensory stimuli or cognitive concepts. Synesthetes possess a unique type of phenomenal experiences not directly triggered by sensory stimulation. Therefore, for better understanding of consciousness it is relevant to identify the mental and physiological processes that subserve synesthetic experience. In the present work we suggest several reasons why synesthesia has merit for research on consciousness. We first review the research on the dynamic and rapidly growing field of the studies of synesthesia. We particularly draw attention to the role of semantics in synesthesia, which is important for establishing synesthetic associations in the brain. We then propose that the interplay between semantics and sensory input in synesthesia can be helpful for the study of the neural correlates of consciousness, especially when making use of ambiguous stimuli for inducing synesthesia. Finally, synesthesia-related alterations of brain networks and functional connectivity can be of merit for the study of consciousness.
Following a brief review of current efforts to identify the neuronal correlates of conscious processing (NCCP), an attempt is made to bridge the gap between the material neuronal processes and the immaterial dimensions of subjective experience. It is argued that this "hard problem" of consciousness research cannot be solved by only considering the neuronal underpinnings of cognition. The proposal is that the hard problem can be treated within a naturalistic framework if one considers not only the biological but also the socio-cultural dimensions of evolution. The argument is based on the following premises: perceptions are the result of a constructivist process that depends on priors. This applies both to perceptions of the outer world and to the perception of oneself. Social interactions between agents endowed with the cognitive abilities of humans generated immaterial realities, addressed as social or cultural realities. This novel class of realities assumed the role of priors for the perception of oneself and the embedding world. A natural consequence of these extended perceptions is a dualist classification of observables into material and immaterial phenomena, nurturing the concept of ontological substance dualism. It is argued that perceptions shaped by socio-cultural priors lead to the construction of a self-model that has both a material and an immaterial dimension. As priors are implicit and not amenable to conscious recollection, the perceived immaterial dimension is experienced as veridical and not derivable from material processes, which is the hallmark of the hard problem. These considerations let the hard problem appear as the result of cognitive constructs that are amenable to naturalistic explanations in an evolutionary framework.
Simulating Many Accelerated Strongly-interacting Hadrons (SMASH) is a new hadronic transport approach designed to describe the non-equilibrium evolution of heavy-ion collisions. The production of strange particles in such systems is enhanced compared to elementary reactions (Blume and Markert 2011), providing an interesting signal to study. Two different strangeness production mechanisms are discussed: one based on resonances and another using forced canonical thermalization. Comparisons to experimental data from elementary collisions are shown.
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.
Top-down influences on ambiguous perception: the role of stable and transient states of the observer
(2014)
The world as it appears to the viewer is the result of a complex process of inference performed by the brain. The validity of this apparently counter-intuitive assertion becomes evident whenever we face noisy, feeble or ambiguous visual stimulation: in these conditions, the state of the observer may play a decisive role in determining what is currently perceived. On this background, ambiguous perception and its amenability to top-down influences can be employed as an empirical paradigm to explore the principles of perception. Here we offer an overview of both classical and recent contributions on how stable and transient states of the observer can impact ambiguous perception. As to the influence of the stable states of the observer, we show that what is currently perceived can be influenced (1) by cognitive and affective aspects, such as meaning, prior knowledge, motivation, and emotional content and (2) by individual differences, such as gender, handedness, genetic inheritance, clinical conditions, and personality traits and by (3) learning and conditioning. As to the impact of transient states of the observer, we outline the effects of (4) attention and (5) voluntary control, which have attracted much empirical work along the history of ambiguous perception. In the huge literature on the topic we trace a difference between the observer's ability to control dominance (i.e., the maintenance of a specific percept in visual awareness) and reversal rate (i.e., the switching between two alternative percepts). Other transient states of the observer that have more recently drawn researchers' attention regard (6) the effects of imagery and visual working memory. (7) Furthermore, we describe the transient effects of prior history of perceptual dominance. (8) Finally, we address the currently available computational models of ambiguous perception and how they can take into account the crucial share played by the state of the observer in perceiving ambiguous displays.
Aims: The examination of histological sections is still the gold standard in diagnostic pathology. Important histopathological diagnostic criteria are nuclear shapes and chromatin distribution as well as nucleus-cytoplasm relation and immunohistochemical properties of surface and intracellular proteins. The aim of this investigation was to evaluate the benefits and drawbacks of three-dimensional imaging of CD30+ cells in classical Hodgkin Lymphoma (cHL) in comparison to CD30+ lymphoid cells in reactive lymphoid tissues.
Materials and results: Using immunofluorescence confocal microscopy and computer-based analysis, we compared CD30+ neoplastic cells in Nodular Sclerosis cHL (NScHL) and Mixed Cellularity cHL (MCcHL) with reactive CD30+ cells in Adenoids (AD) and Lymphadenitis (LAD). We confirmed that the percentage of CD30+ cell volume can be calculated. The amount in lymphadenitis was approx. 1.5%, in adenoids around 2%, and in MCcHL up to 4.5%, whereas the values for NScHL rose to more than 8% of the total cell cytoplasm. In addition, CD30+ tumour cells (HRS cells) in cHL had larger volumes and more protrusions compared to CD30+ reactive cells. Furthermore, the formation of large cell networks turned out to be a typical characteristic of NScHL.
Conclusion: In contrast to 2D histology, 3D laser scanning offers a visualisation of complete cells, their network interactions and their spatial distribution in the tissue. The possibility to differentiate cells with regard to volume, surface, shape, and cluster formation enables a new view on further diagnostic and biological questions. 3D imaging captures an increased amount of information as a basis for bioinformatic calculations.
Volatility is a widely recognized measure of market risk. As volatility is not observed, it has to be estimated from market prices, i.e., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 days and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model, perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option price with respect to volatility, and thereby the Fisher information, nearly vanish.
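In this additive-Gaussian setting the Fisher information takes its standard form. The following is a sketch of the relation the abstract invokes, with C_i the Heston model price of the i-th option in the VIX basket, θ the model parameters, σ² the noise variance, and v the volatility parameter:

```latex
\mathcal{I}_{jk}(\theta)
  \;=\; \frac{1}{\sigma^{2}} \sum_{i}
        \frac{\partial C_{i}}{\partial \theta_{j}}\,
        \frac{\partial C_{i}}{\partial \theta_{k}},
\qquad
\mathcal{I}_{vv} \;=\; \frac{1}{\sigma^{2}} \sum_{i} \mathrm{Vega}_{i}^{2}
```

This makes explicit why the information about volatility collapses together with Vega.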
A hypothesis regarding the development of imitation learning is presented that is rooted in intrinsic motivations. It is derived from a recently proposed form of intrinsically motivated learning (IML) for efficient coding in active perception, wherein an agent learns to perform actions with its sense organs to facilitate efficient encoding of the sensory data. To this end, actions of the sense organs that improve the encoding of the sensory data trigger an internally generated reinforcement signal. Here it is argued that the same IML mechanism might also support the development of imitation when general actions beyond those of the sense organs are considered: The learner first observes a tutor performing a behavior and learns a model of the behavior's sensory consequences. The learner then acts itself and receives an internally generated reinforcement signal reflecting how well the sensory consequences of its own behavior are encoded by the sensory model. Actions that are more similar to those of the tutor will lead to sensory signals that are easier to encode and produce a higher reinforcement signal. Through this, the learner's behavior is progressively tuned to make the sensory consequences of its actions match the learned sensory model. I discuss this mechanism in the context of human language acquisition and bird song learning, where similar ideas have been proposed. The suggested mechanism also offers an account for the development of mirror neurons and makes a number of predictions. Overall, it establishes a connection between principles of efficient coding, intrinsic motivations and imitation.
Variable renewable energy sources (VRES), such as solar photovoltaic (PV) and wind turbines (WT), are starting to play a significant role in several energy systems around the globe. To overcome the problem of their non-dispatchable and stochastic nature, several approaches have been proposed so far. This paper describes a novel mathematical model for scheduling the operation of a wind-powered pumped-storage hydroelectricity (PSH) hybrid for 25 to 48 h ahead. The model is based on mathematical programming and wind speed forecasts for the next 1 to 24 h, along with the predicted upper reservoir occupancy for the 24th hour ahead. The results indicate that by coupling a 2-MW conventional wind turbine with a PSH of energy storage capacity equal to 54 MWh, it is possible to significantly reduce the intraday energy generation coefficient of variation, from 31% for a pure wind turbine to 1.15% for the wind-powered PSH. The scheduling errors calculated based on the mean absolute percentage error (MAPE) are significantly smaller for such a coupling than those seen for wind generation forecasts, at 2.39% and 27%, respectively. This is emphasized even more strongly by the fact that the errors for wind generation were calculated for forecasts made for the next 1 to 24 h, while those for scheduled generation were calculated for forecasts made for the next 25 to 48 h. The results clearly show that the proposed scheduling approach ensures the high reliability of the WT–PSH energy source.
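For clarity, the two headline metrics can be reproduced on toy numbers as follows (a minimal sketch; the arrays are made-up hourly values, not data from the study):

```python
# Coefficient of variation (CV) of generation and mean absolute percentage
# error (MAPE) of a schedule against actuals, on illustrative data.
import numpy as np

actual    = np.array([1.8, 1.2, 0.4, 1.6, 2.0, 0.9])  # MW, toy values
scheduled = np.array([1.7, 1.3, 0.5, 1.6, 1.9, 1.0])

cv   = actual.std() / actual.mean() * 100                    # in %
mape = np.mean(np.abs((actual - scheduled) / actual)) * 100  # in %
print(f"CV = {cv:.1f}%, MAPE = {mape:.1f}%")
```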
In self-organized critical (SOC) systems avalanche size distributions follow power-laws. Power-laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
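A minimal sketch of the sub-critical branching picture invoked above (parameters are illustrative, not fitted to the recordings): one spike triggers on average m < 1 spikes in the next time step, and an avalanche is the total activity of the cascade until it dies out:

```python
# Stochastic branching model: near m = 1 the avalanche size distribution
# approaches a power law with an exponential cutoff set by (1 - m).
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(m=0.98, max_size=10_000):
    active, size = 1, 1
    while active and size < max_size:
        active = rng.poisson(m * active)  # offspring of all currently active units
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])
print("mean size:", sizes.mean(), " 99th percentile:", np.percentile(sizes, 99))
```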
Anaplastic large cell lymphoma (ALCL) and classical Hodgkin lymphoma (cHL) are lymphomas that contain CD30-expressing tumor cells and have numerous pathological similarities. Whereas ALCL is usually diagnosed at an advanced stage, cHL more frequently presents with localized disease. The aim of the present study was to elucidate the mechanisms underlying the different clinical presentation of ALCL and cHL. Chemokine and chemokine receptor expression were similar in primary ALCL and cHL cases apart from the known overexpression of the chemokines CCL17 and CCL22 in the Hodgkin and Reed-Sternberg (HRS) cells of cHL. Consistent with the overexpression of these chemokines, primary cHL cases encountered a significantly denser T cell microenvironment than ALCL. In addition to differences in the interaction with their microenvironment, cHL cell lines presented a lower and less efficient intrinsic cell motility than ALCL cell lines, as assessed by time-lapse microscopy in a collagen gel and transwell migration assays. We thus propose that the combination of impaired basal cell motility and differences in the interaction with the microenvironment hamper the dissemination of HRS cells in cHL when compared with the tumor cells of ALCL.
We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model learns how to efficiently encode the incoming visual signals generated by an object moving in 3-D through sparse coding. Simultaneously, it learns how to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach on the humanoid robot iCub using simulations. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally having the same objective of maximizing the coding efficiency of the visual system and that they can be learned and calibrated jointly through AEC.
We investigate charmonium production in Pb + Pb collisions at the LHC beam energy Elab = 2.76A TeV in a fixed-target experiment (√sNN = 72 GeV). In the framework of a transport approach including cold and hot nuclear matter effects on charmonium evolution, we focus on the antishadowing effect on the nuclear modification factors RAA and rAA for the J/ψ yield and transverse momentum. The yield is more suppressed at less forward rapidity (ylab ≃ 2) than at very forward rapidity (ylab ≃ 4) due to the shadowing and antishadowing in different rapidity bins.
Physics at its core is an experimental pursuit. If a theory does not agree with experimental results, then the theory is wrong. However, it is becoming increasingly difficult to directly test some theories of fundamental physics at the high energy/small distance frontier, precisely because this frontier is becoming technologically harder to reach. The Large Hadron Collider is getting near the limit of what we can do with present accelerator technology in terms of directly reaching the energy frontier. The motivation for this special issue was to collect together ideas and potential approaches to experimentally probe some of our ideas about physics at the high energy/small distance frontier. Some of the papers in this special issue directly deal with the question of what happens to spacetime at small distance scales. In the paper by A. Aurilia and E. Spallucci, a picture of quantum spacetime is given based on the effects of ultrahigh velocity length contractions on the structure of the spacetime. The work of P. Nicolini et al. further pursues the idea that spacetime has a minimal length. The consequences of this minimal length are investigated in terms of the effects it would have on the gravitational collapse of a star to form a black hole. In the article by G. Amelino-Camelia et al., the quantum structure of spacetime is studied through the Fermi LAT data on the Gamma Ray Burst GRB130427A. The article by S. Hossenfelder addresses the question of whether spacetime is fundamentally continuous or discrete and postulates that, in the case when spacetime is discrete, it might have defects which would have important observational consequences. ...
This paper studies the geometry and the thermodynamics of a holographic screen in the framework of ultraviolet self-complete quantum gravity. To achieve this goal we construct a new static, neutral, nonrotating black hole metric, whose outer (event) horizon coincides with the surface of the screen. The spacetime admits an extremal configuration corresponding to the minimal holographic screen, with both mass and radius equal to the Planck units. We identify this object as the fundamental building block of spacetime, whose interior is physically inaccessible and cannot be probed even during the terminal phase of Hawking evaporation. In agreement with the holographic principle, relevant processes take place on the screen surface. The area quantization leads to a discrete mass spectrum. An analysis of the entropy shows that the minimal holographic screen can store only one byte of information, while in the thermodynamic limit the area law is corrected by a logarithmic term.
The 2D azimuth and rapidity structure of two-particle correlations in relativistic A+A collisions is altered significantly by the presence of sharp inhomogeneities in the superdense matter formed in such processes. Causality constraints force one to associate the long-range longitudinal correlations observed in a narrow angular interval, the so-called (soft) ridge, with peculiarities of the initial conditions of the collision process. This study's objective is to analyze whether multiform initial tubular structures, undergoing subsequent hydrodynamic evolution and gradual decoupling, can form the soft ridges. Motivated by flux-tube scenarios, the initial energy density distribution contains different numbers of high-density tube-like boost-invariant inclusions that form a bumpy structure in the transverse plane. The influence of various structures of such initial conditions in the most central A+A events on the collective evolution of matter, the resulting spectra, angular particle correlations and vn coefficients is studied in the framework of the hydrokinetic model (HKM).
A theoretical review of the latest femtoscopy results for the systems created in ultrarelativistic A+A, p+p, and p+Pb collisions is presented. The basic model, allowing one to describe the interferometry data at SPS, RHIC, and LHC, is the hydrokinetic model. The model makes it possible to avoid the principal problem of the particlization of the medium at non-space-like sites of transition hypersurfaces and to switch to a hadronic cascade at a space-like hypersurface with a nonequilibrated particle input. The results for pion and kaon interferometry scales in Pb+Pb and Au+Au collisions at LHC and RHIC are presented for different centralities. New theoretical results on the femtoscopy of small sources with sizes of 1-2 fm or less are discussed. The uncertainty principle destroys the standard approach of completely chaotic sources: the emitters in such sources cannot radiate independently and incoherently. As a result, the observed femtoscopy scales are reduced, and the Bose-Einstein correlation function is suppressed. The results are applied to the femtoscopy analysis of p+p collisions at the √s = 7 TeV LHC energy and of p+Pb collisions at √s = 5.02 TeV. The dependence of the corresponding interferometry volumes on multiplicity is compared with that for central A+A collisions. In addition, the nonfemtoscopic two-pion correlations in proton-proton collisions at the LHC energies are considered, and a simple model that takes into account correlations induced by the conservation laws and by minijets is analyzed.
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p–Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range −0.5 < y < 0. The transverse momentum spectra, measured as a function of the multiplicity, have a pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb–Pb collisions at √sNN = 2.76 TeV. In Pb–Pb and p–Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the multiplicity of the event. The short-lived K∗0 is measured to investigate re-scattering effects, believed to be related to the size of the system and to the lifetime of the hadronic phase.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
Spatial neuronal synchronization and the waveform of oscillations: implications for EEG and MEG
(2019)
Neuronal oscillations are ubiquitous in the human brain and are implicated in virtually all brain functions. Although they can be described by a prominent peak in the power spectrum, their waveform is not necessarily sinusoidal and shows rather complex morphology. Both frequency and temporal descriptions of such non-sinusoidal neuronal oscillations can be utilized. However, in non-invasive EEG/MEG recordings the waveform of oscillations often takes a sinusoidal shape which in turn leads to a rather oversimplified view on oscillatory processes. In this study, we show in simulations how spatial synchronization can mask non-sinusoidal features of the underlying rhythmic neuronal processes. Consequently, the degree of non-sinusoidality can serve as a measure of spatial synchronization. To confirm this empirically, we show that a mixture of EEG components is indeed associated with more sinusoidal oscillations compared to the waveform of oscillations in each constituent component. Using simulations, we also show that the spatial mixing of the non-sinusoidal neuronal signals strongly affects the amplitude ratio of the spectral harmonics constituting the waveform. Finally, our simulations show how spatial mixing can affect the strength and even the direction of the amplitude coupling between constituent neuronal harmonics at different frequencies. Validating these simulations, we also demonstrate these effects in real EEG recordings. Our findings have far reaching implications for the neurophysiological interpretation of spectral profiles, cross-frequency interactions, as well as for the unequivocal determination of oscillatory phase.
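The masking effect can be illustrated with a short simulation in the spirit of those described (a sketch with made-up parameters): each source is a non-sinusoidal rhythm with a phase-locked second harmonic, and a small phase jitter between sources attenuates the harmonic twice as fast as the fundamental, so the mixture looks more sinusoidal than any single source:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0 = 1000, 10            # sampling rate (Hz), fundamental frequency (Hz)
t = np.arange(0, 2, 1 / fs)  # 2 s of signal

def harmonic_ratio(x):
    """Amplitude of the 2nd harmonic relative to the fundamental (via FFT)."""
    amp = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return amp[np.argmin(np.abs(freqs - 2 * f0))] / amp[np.argmin(np.abs(freqs - f0))]

# 100 non-sinusoidal sources with moderate phase jitter (0.8 rad) between them.
phases = rng.normal(0.0, 0.8, size=100)
sources = [np.sin(2 * np.pi * f0 * t + p)
           + 0.5 * np.sin(2 * (2 * np.pi * f0 * t + p)) for p in phases]

print("single source  :", harmonic_ratio(sources[0]))                # ~0.5
print("spatial mixture:", harmonic_ratio(np.mean(sources, axis=0)))  # noticeably smaller
```

The harmonic shrinks faster because a phase error of Δφ at the fundamental becomes 2Δφ at the second harmonic.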
The Gribov mode in hot QCD
(2017)
In this thesis I investigate the possibility that at the smallest length scale (the Planck scale) the very notion of "dimension" needs to be revisited. Due to "quantum effects" spacetime might become very turbulent at these scales, and properties like those of "fractals" emerge, including a "scale-dependent dimension". It seems that this "spontaneous dimensional reduction" and the appearance of a minimal physical length are very general effects that most approaches to quantum gravity share. The main emphasis is given to the "spectral dimension" and its calculation for strings and p-branes.
We present a study of the elliptic flow and RAA of D and D¯ mesons in Au+Au collisions at FAIR energies. We propagate the charm quarks and the D mesons following a previously applied Langevin dynamics. The evolution of the background medium is modeled in two different ways: (I) with the UrQMD hydrodynamics + Boltzmann transport hybrid approach including a phase transition to QGP and (II) with the coarse-graining approach, also employing an equation of state with QGP. The latter approach has previously been used very successfully to describe di-lepton data at various energies. This comparison allows us to explore the effects of partial thermalization and viscous effects on the charm propagation. We explore the centrality dependence of the collisions, the variation of the decoupling temperature and various hadronization parameters. We find that the initial partonic phase is responsible for the creation of most of the elliptic flow of the D/D¯ mesons and that the subsequent hadronic interactions seem to play only a minor role. This indicates that the elliptic flow of D/D¯ mesons is a smoking gun for a partonic phase at FAIR energies. However, the results suggest that the magnitude and the details of the elliptic flow strongly depend on the dynamics of the medium and on the hadronization procedure, which is related to the medium properties as well. Therefore, even at FAIR energies the charm quark may constitute a very useful tool to probe the quark-gluon plasma and investigate its physics.
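To make the propagation step concrete, a minimal Euler-Maruyama sketch of heavy-quark Langevin dynamics in a static thermal medium is given below. All parameter values are illustrative assumptions, not those of the study, and the drag coefficient follows from the Einstein relation:

```python
# One-dimensional Langevin propagation of an ensemble of heavy quarks:
# dp = -eta_D * p * dt + sqrt(kappa * dt) * N(0, 1).
import numpy as np

rng = np.random.default_rng(42)

M     = 1.5   # charm-quark mass (GeV), illustrative
T     = 0.3   # medium temperature (GeV), illustrative
kappa = 0.5   # momentum diffusion coefficient (GeV^2/fm), illustrative
eta_D = kappa / (2 * M * T)   # drag from the Einstein relation
dt    = 0.01  # time step (fm/c)

p = np.full(1000, 2.0)        # ensemble of quarks starting at p = 2 GeV
for _ in range(500):          # evolve for 5 fm/c
    noise = rng.normal(0.0, np.sqrt(kappa * dt), size=p.size)
    p = p - eta_D * p * dt + noise

# Drag and noise relax the ensemble toward thermal equilibrium:
# <p> -> 0 and std(p) -> sqrt(M * T).
print("mean p:", p.mean(), " std p:", p.std())
```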
Heat stress transcription factors (HSFs) regulate the transcriptional response to a large number of environmental influences, such as temperature fluctuations and the application of chemical compounds. Plant HSFs represent a large and diverse gene family whose members vary substantially both in gene expression patterns and in molecular functions. HEATSTER is a web resource for mining, annotating, and analyzing members of the different classes of HSFs in plants. A web interface allows the identification and class assignment of HSFs, intuitive searches in the database, and the visualization of conserved motifs and domains to classify novel HSFs.
Feathers are arranged in a precise pattern in avian skin. They first arise during development in a row along the dorsal midline, with rows of new feather buds added sequentially in a spreading wave. We show that the patterning of feathers relies on coupled fibroblast growth factor (FGF) and bone morphogenetic protein (BMP) signalling together with mesenchymal cell movement, acting in a coordinated reaction-diffusion-taxis system. This periodic patterning system is partly mechanochemical, with mechanical-chemical integration occurring through a positive feedback loop centred on FGF20, which induces cell aggregation, mechanically compressing the epidermis to rapidly intensify FGF20 expression. The travelling wave of feather formation is imposed by expanding expression of Ectodysplasin A (EDA), which initiates the expression of FGF20. The EDA wave spreads across a mesenchymal cell density gradient, triggering pattern formation by lowering the threshold of mesenchymal cells required to begin to form a feather bud. These waves, and the precise arrangement of feather primordia, are lost in the flightless emu and ostrich, though via different developmental routes. The ostrich retains the tract arrangement characteristic of birds in general but lays down feather primordia without a wave, akin to the process of hair follicle formation in mammalian embryos. The embryonic emu skin lacks sufficient cells to enact feather formation, causing failure of tract formation, and instead the entire skin gains feather primordia through a later process. This work shows that a reaction-diffusion-taxis system, integrated with mechanical processes, generates the feather array. In flighted birds, the key role of the EDA/Ectodysplasin A receptor (EDAR) pathway in vertebrate skin patterning has been recast to activate this process in a quasi-1-dimensional manner, imposing highly ordered pattern formation.
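The reaction-diffusion component of such a patterning system can be illustrated with a generic two-species model. The sketch below is a minimal Gierer-Meinhardt-type activator-inhibitor system in one dimension, not the FGF/BMP/EDA network of the study; all rate constants are illustrative.

```python
# Minimal 1D activator-inhibitor (Turing) system, Gierer-Meinhardt type.
# A generic illustration, NOT the FGF/BMP/EDA network of the paper.
import numpy as np

n, L = 200, 20.0                 # grid points, domain length
dx, dt = L / n, 0.002
Da, Dh = 0.02, 1.0               # activator diffuses slowly, inhibitor fast
rho, mu, nu = 1.0, 1.0, 2.0      # production / decay rates (illustrative)

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(n)   # activator
h = 1.0 + 0.01 * rng.standard_normal(n)   # inhibitor

def lap(u):                      # periodic 1D Laplacian
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(100_000):         # integrate to t = 200
    da = rho * a * a / h - mu * a   # autocatalysis limited by the inhibitor
    dh = rho * a * a - nu * h       # activator drives the inhibitor
    a += dt * (Da * lap(a) + da)
    h += dt * (Dh * lap(h) + dh)

# 'a' now shows evenly spaced peaks: periodic primordia without a prepattern
peaks = np.flatnonzero((a > np.roll(a, 1)) & (a >= np.roll(a, -1)) & (a > a.mean()))
print(peaks)
```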
The capability of directing gaze to relevant parts in the environment is crucial for our survival. Computational models have proposed quantitative accounts of human gaze selection in a range of visual search tasks. Initially, models suggested that gaze is directed to the locations in a visual scene at which some criterion, such as the probability of target location, the reduction of uncertainty, or the maximization of reward, appears to be maximal. But subsequent studies established that in some tasks humans instead direct their gaze to locations such that, after the single next look, the criterion is expected to become maximal. However, in tasks going beyond a single action, the entire action sequence may determine future rewards, thereby necessitating planning beyond a single next gaze shift. While previous empirical studies have suggested that human gaze sequences are planned, quantitative evidence for whether the human visual system is capable of finding optimal eye movement sequences according to probabilistic planning is missing. Here we employ a series of computational models to investigate whether humans are capable of looking ahead more than the next single eye movement. We found clear evidence that subjects’ behavior was better explained by the model of a planning observer compared to a myopic, greedy observer, which selects only a single saccade at a time. In particular, the location of our subjects’ first fixation differed depending on the stimulus and the time available for the search, which was well predicted quantitatively by a probabilistic planning model. Overall, our results are the first evidence that the human visual system’s gaze selection agrees with optimal planning under uncertainty.
We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radiative transfer code, BHOSS. The resulting synthetic intensity maps of accretion onto black holes are found to be convergent with increasing resolution and are anticipated to play a crucial role in the interpretation of horizon-scale images resulting from upcoming radio observations of the source at the Galactic Center.
We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi:10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to a black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi:10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to ∼50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.
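The core of the hybridisation can be illustrated on a scalar conservation law. The sketch below blends a second-order (Lax-Wendroff) interface flux with a first-order local Lax-Friedrichs flux, weighted by a normalised entropy-production indicator; it is a toy version of the idea for 1D Burgers' equation, not the high-order general-relativistic scheme of the paper.

```python
# Entropy-limited flux blending for 1D Burgers' equation (toy version).
import numpy as np

n = 400
dx = 1.0 / n
dt = 0.001                                 # CFL ~ 0.6 for |u| <= 1.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # steepens into a shock
u_prev = u.copy()

def f(u):                                  # Burgers flux f(u) = u^2/2
    return 0.5 * u * u

for step in range(600):
    up = np.roll(u, -1)                    # u_{i+1}
    # entropy residual R ~ d_t(u^2/2) + d_x(u^3/3): ~0 where the flow is
    # smooth, O(1) where the shock produces entropy (cf. entropy viscosity)
    R = np.abs((0.5 * u**2 - 0.5 * u_prev**2) / dt
               + (np.roll(u, -1)**3 - np.roll(u, 1)**3) / (6.0 * dx))
    theta = np.clip(R / (R.max() + 1e-12), 0.0, 1.0)
    th = np.maximum(theta, np.roll(theta, -1))  # indicator at interface i+1/2
    # high-order part: Lax-Wendroff interface flux (oscillatory at shocks)
    a = 0.5 * (u + up)
    f_ho = 0.5 * (f(u) + f(up)) - 0.5 * dt / dx * a * (f(up) - f(u))
    # low-order part: local Lax-Friedrichs (diffusive but entropy stable)
    c = np.maximum(np.abs(u), np.abs(up))
    f_lo = 0.5 * (f(u) + f(up)) - 0.5 * c * (up - u)
    flux = (1.0 - th) * f_ho + th * f_lo   # ELH-style blend
    u_prev = u.copy()
    u = u - dt / dx * (flux - np.roll(flux, 1))
```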
Ongoing brain activity has been implicated in the modulation of cortical excitability. The combination of electroencephalography (EEG) and transcranial magnetic stimulation (TMS) in a real-time triggered setup is a novel method for testing hypotheses about the relationship between spontaneous neuronal oscillations, cortical excitability, and synaptic plasticity. For this method, a reliable real-time extraction of the neuronal signal of interest from scalp EEG with high signal-to-noise ratio (SNR) is of crucial importance. Here we compare individually tailored spatial filters as computed by spatial-spectral decomposition (SSD), which maximizes SNR in a frequency band of interest, against established local C3-centered Laplacian filters for the extraction of the sensorimotor μ-rhythm. Single-pulse TMS over the left primary motor cortex was synchronized with the surface positive or negative peak of the respective extracted signal, and motor evoked potentials (MEP) were recorded with electromyography (EMG) of a contralateral hand muscle. Both extraction methods led to a comparable degree of MEP amplitude modulation by phase of the sensorimotor μ-rhythm at the time of stimulation. This could be relevant for targeting other brain regions for which no working benchmark such as the local C3-centered Laplacian filter exists, as sufficient SNR is an important prerequisite for reliable real-time single-trial detection of EEG features.
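The SSD step amounts to a generalised eigenvalue problem between the covariance of the band-filtered signal and the covariance of the flanking-band activity. A minimal sketch on synthetic data, assuming the generic SSD recipe rather than the exact pipeline of the study:

```python
# SSD-style spatial filter: maximise power in a target band relative to
# flanking bands via a generalised eigendecomposition. Synthetic data
# stand in for EEG; parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

fs, n_ch, n_t = 500, 16, 50_000
rng = np.random.default_rng(1)
mu = np.sin(2 * np.pi * 10 * np.arange(n_t) / fs)   # 10 Hz "mu rhythm"
mix = rng.standard_normal(n_ch)                      # forward model
X = np.outer(mix, mu) + 2.0 * rng.standard_normal((n_ch, n_t))

def bandpass(X, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=1)

Xs = bandpass(X, 8, 12)               # signal band
Xn = bandpass(X, 5, 15) - Xs          # crude stand-in for the flanking bands
S = Xs @ Xs.T / n_t                   # signal-band covariance
N = Xn @ Xn.T / n_t                   # noise-band covariance
evals, W = eigh(S, N)                 # solves S w = lambda N w
w = W[:, -1]                          # filter with maximal band SNR
source = w @ X                        # extracted sensorimotor component
```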
Adjuvanted influenza vaccines constitute a key element towards inducing neutralizing antibody responses in populations with reduced responsiveness, such as infants and elderly subjects, as well as in devising antigen-sparing strategies. In particular, squalene-containing adjuvants have been observed to induce enhanced antibody responses, as well as having an influence on cross-reactive immunity. To explore the effects of adjuvanted vaccine formulations on antibody response and their relation to protein-specific immunity, we propose different mathematical models of antibody production dynamics in response to influenza vaccination. Data from ferrets immunized with commercial H1N1pdm09 vaccine antigen alone or formulated with different adjuvants were instrumental in adjusting model parameters. While the complexity of the affinity maturation process is abridged, the proposed model is able to recapitulate the essential features of the observed dynamics. Our numerical results suggest that there exists a qualitative shift in protein-specific antibody response, with enhanced production of antibodies targeting the NA protein in adjuvanted versus non-adjuvanted formulations, in conjunction with a protein-independent boost that is over one order of magnitude larger for squalene-containing adjuvants. Furthermore, simulations predict that vaccines formulated with squalene-containing adjuvants are able to induce sustained antibody titers in a robust way, with little impact of the time interval between immunizations.
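A deliberately minimal sketch of this modelling approach is given below: an adjuvant-dependent boost feeds a plasma-cell pool that secretes antibody. The model form, the Gaussian stimulation pulses and all parameter values are hypothetical illustrations, not the model fitted in the paper.

```python
# Generic two-compartment antibody-dynamics sketch (hypothetical model,
# NOT the fitted model of the paper): plasma cells B driven by vaccine
# pulses of strength b, antibody A secreted by B and cleared at rate c.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, boost_times, b, dB, p, c):
    B, A = y
    stim = sum(b * np.exp(-((t - t0) / 2.0) ** 2) for t0 in boost_times)
    return [stim - dB * B,            # plasma cells: vaccine-driven, decaying
            p * B - c * A]            # antibody: secreted by B, cleared at c

t_eval = np.linspace(0.0, 120.0, 600)  # days, prime at day 0, boost at day 28
for b, label in [(1.0, "non-adjuvanted"), (15.0, "squalene-adjuvanted")]:
    sol = solve_ivp(rhs, (0.0, 120.0), [0.0, 0.0], t_eval=t_eval,
                    args=([0.0, 28.0], b, 0.1, 5.0, 0.05))
    print(label, "peak titer ~", sol.y[1].max().round(1))
```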
The scope of this thesis is to understand the position dependence of human visual perception. First, under the ecological assumption, meaning under the assumption that animals adapt to the statistical regularities of their environment, we study the consequences of the imaging process for the local statistics of the input to the human visual system. Second, we model efficient representations of these statistics and their contribution to shaping the properties of eye sensory neurons. Third, we model efficient representations of the semantic context of images and the correctness of different underlying geometrical assumptions about the statistics of images.
The efficient coding hypothesis posits that sensory systems are adapted to the regularities of their signal input in order to reduce redundancy in the resulting representations. It is therefore important to characterize the regularities of natural signals to gain insight into the processing of natural stimuli. While measurements of statistical regularity in vision have focused on photographic images of natural environments, it has been much less investigated how the specific imaging process embodied by the organism’s eye induces statistical dependencies in the natural input to the visual system. This has allowed the convenient assumption that natural image data are homogeneous across the visual field to persist. Here we give up on this assumption and show how the imaging process in a human eye model influences the local statistics of the natural input to the visual system across the entire visual field. ...
Neurogenesis of hippocampal granule cells (GCs) persists throughout mammalian life and is important for learning and memory. How newborn GCs differentiate and mature into an existing circuit during this time period is not yet fully understood. We established a method to visualize postnatally generated GCs in organotypic entorhino-hippocampal slice cultures (OTCs) using retroviral (RV) GFP-labeling and performed time-lapse imaging to study their morphological development in vitro. Using anterograde tracing we could, furthermore, demonstrate that the postnatally generated GCs in OTCs, similar to adult born GCs, grow into an existing entorhino-dentate circuitry. RV-labeled GCs were identified and individual cells were followed for up to four weeks post injection. Postnatally born GCs exhibited highly dynamic structural changes, including dendritic growth spurts but also retraction of dendrites and phases of dendritic stabilization. In contrast, older, presumably prenatally born GCs labeled with an adeno-associated virus (AAV), were far less dynamic. We propose that the high degree of structural flexibility seen in our preparations is necessary for the integration of newborn granule cells into an already existing neuronal circuit of the dentate gyrus in which they have to compete for entorhinal input with cells generated and integrated earlier.
Neurons collect their inputs from other neurons by sending out arborized dendritic structures. However, the relationship between the shape of dendrites and the precise organization of synaptic inputs in the neural tissue remains unclear. Inputs could be distributed in tight clusters, entirely randomly or else in a regular grid-like manner. Here, we analyze dendritic branching structures using a regularity index R, based on average nearest neighbor distances between branch and termination points, characterizing their spatial distribution. We find that the distributions of these points depend strongly on cell types, indicating possible fundamental differences in synaptic input organization. Moreover, R is independent of cell size and we find that it is only weakly correlated with other branching statistics, suggesting that it might reflect features of dendritic morphology that are not captured by commonly studied branching statistics. We then use morphological models based on optimal wiring principles to study the relation between input distributions and dendritic branching structures. Using our models, we find that branch point distributions correlate more closely with the input distributions while termination points in dendrites are generally spread out more randomly with a close to uniform distribution. We validate these model predictions with connectome data. Finally, we find that in spatial input distributions with increasing regularity, characteristic scaling relationships between branching features are altered significantly. In summary, we conclude that local statistics of input distributions and dendrite morphology depend on each other leading to potentially cell type specific branching features.
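A nearest-neighbour regularity index of this kind can be computed as the mean nearest-neighbour distance of the observed points, normalised by its expectation under complete spatial randomness in the same volume (values above 1 indicate grid-like regularity, values below 1 clustering). A sketch, assuming a Clark-Evans-style normalisation that may differ in detail from the paper's definition:

```python
# Nearest-neighbour regularity index for 3D point sets (branch or
# termination points); the Monte-Carlo null is uniform in the bounding box.
import numpy as np
from scipy.spatial import cKDTree

def regularity_index(points, n_sim=200, seed=0):
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    nn_dist, _ = cKDTree(pts).query(pts, k=2)   # k=1 is the point itself
    observed = nn_dist[:, 1].mean()
    lo, hi = pts.min(0), pts.max(0)
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_sim):                       # uniform-random reference
        rand = rng.uniform(lo, hi, size=(n, d))
        rd, _ = cKDTree(rand).query(rand, k=2)
        sims.append(rd[:, 1].mean())
    return observed / np.mean(sims)              # >1 grid-like, <1 clustered

# Example: points on a jittered 5x5x5 grid give R well above 1
grid = np.stack(np.meshgrid(*[np.arange(5.0)] * 3), -1).reshape(-1, 3)
jitter = 0.05 * np.random.default_rng(1).standard_normal(grid.shape)
print(regularity_index(grid + jitter))
```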
Correction to: Nature Communications https://doi.org/10.1038/s41467-017-01045-x, published online 31 October 2017
It has come to our attention that we did not specify whether the stimulation magnitudes we report in this Article are peak amplitudes or peak-to-peak. All references to intensity given in mA in the manuscript refer to peak-to-peak amplitudes, except in Fig. 2, where the model is calibrated to 1 mA peak amplitude, as stated. In the original version of the paper we incorrectly calibrated the computational models to 1 mA peak-to-peak, rather than 1 mA peak amplitude. This means that we divided by a value twice as large as we should have. The correct estimated fields are therefore twice as large as shown in the original Fig. 2 and Supplementary Fig. 11. The corrected figures are now properly calibrated to 1 mA peak amplitude. Furthermore, the sentence in the first paragraph of the Results section ‘Intensity ranged from 0.5 to 2.5 mA (current density 0.125–0.625 mA/cm2), which is stronger than in previous reports’, should have read ‘Intensity ranged from 0.5 to 2.5 mA peak to peak (peak current density 0.0625–0.3125 mA/cm2), which is stronger than in previous reports.’ These errors do not affect any of the Article’s conclusions. Correct versions of Fig. 2 and Supplementary Fig. 11 are presented below as Figs. 1, 2.
Transcranial electrical stimulation has widespread clinical and research applications, yet its effect on ongoing neural activity in humans is not well established. Previous reports argue that transcranial alternating current stimulation (tACS) can entrain and enhance neural rhythms related to memory, but the evidence from non-invasive recordings has remained inconclusive. Here, we measure endogenous spindle and theta activity intracranially in humans during low-frequency tACS and find no stable entrainment of spindle power during non-REM sleep, nor of theta power during resting wakefulness. As positive controls, we find robust entrainment of spindle activity to endogenous slow-wave activity in 66% of electrodes as well as entrainment to rhythmic noise-burst acoustic stimulation in 14% of electrodes. We conclude that low-frequency tACS at common stimulation intensities neither acutely modulates spindle activity during sleep nor theta activity during waking rest, likely because of the attenuated electrical fields reaching the cortical surface.
The endoplasmic reticulum–mitochondria encounter structure (ERMES) connects the mitochondrial outer membrane with the ER. Multiple functions have been linked to ERMES, including maintenance of mitochondrial morphology, protein assembly and phospholipid homeostasis. Since the mitochondrial distribution and morphology protein Mdm10 is present in both ERMES and the mitochondrial sorting and assembly machinery (SAM), it has been unknown how the ERMES functions are connected on a molecular level. Here we report that conserved surface areas on opposite sides of the Mdm10 β-barrel interact with SAM and ERMES, respectively. We generated point mutants to separate protein assembly (SAM) from morphology and phospholipid homeostasis (ERMES). Our study reveals that the β-barrel channel of Mdm10 serves different functions. Mdm10 promotes the biogenesis of α-helical and β-barrel proteins at SAM and functions as an integral membrane anchor of ERMES, demonstrating that SAM-mediated protein assembly is distinct from ER-mitochondria contact sites.
We examined alterations in E/I-balance in schizophrenia (ScZ) through measurements of resting-state gamma-band activity in participants meeting clinical high-risk (CHR) criteria (n = 88), in first-episode psychosis (FEP) patients (n = 21), and in chronic ScZ patients (n = 34). Furthermore, MRS data were obtained in CHR participants and matched controls. Magnetoencephalographic (MEG) resting-state activity was examined at source level and MEG data were correlated with neuropsychological scores and clinical symptoms. CHR participants were characterized by increased 64–90 Hz power. In contrast, FEP and ScZ patients showed aberrant spectral power at both low- and high gamma-band frequencies. MRS data showed a shift in E/I-balance toward increased excitation in CHR participants, which correlated with increased occipital gamma-band power. Finally, neuropsychological deficits and clinical symptoms in FEP and ScZ patients were correlated with reduced gamma-band activity, while elevated psychotic symptoms in the CHR group showed the opposite relationship. The current study suggests that resting-state gamma-band power and altered Glx/GABA ratio indicate changes in E/I-balance parameters across illness stages in ScZ.
Compartmental models are the theoretical tool of choice for understanding single neuron computations. However, many models are incomplete, built ad hoc and require tuning for each novel condition rendering them of limited usability. Here, we present T2N, a powerful interface to control NEURON with Matlab and TREES toolbox, which supports generating models stable over a broad range of reconstructed and synthetic morphologies. We illustrate this for a novel, highly detailed active model of dentate granule cells (GCs) replicating a wide palette of experiments from various labs. By implementing known differences in ion channel composition and morphology, our model reproduces data from mouse or rat, mature or adult-born GCs as well as pharmacological interventions and epileptic conditions. This work sets a new benchmark for detailed compartmental modeling. T2N is suitable for creating robust models useful for large-scale networks that could lead to novel predictions. We discuss possible T2N application in degeneracy studies.
Hypofunction of the N-methyl-D-aspartate receptor (NMDAR) has been implicated as a possible mechanism underlying cognitive deficits and aberrant neuronal dynamics in schizophrenia. To test this hypothesis, we first administered a sub-anaesthetic dose of S-ketamine (0.006 mg/kg/min) or saline in a single-blind crossover design in 14 participants while magnetoencephalographic data were recorded during a visual task. In addition, magnetoencephalographic data were obtained in a sample of unmedicated first-episode psychosis patients (n = 10) and in patients with chronic schizophrenia (n = 16) to allow for comparisons of neuronal dynamics in clinical populations versus NMDAR hypofunctioning. Magnetoencephalographic data were analysed at source-level in the 1–90 Hz frequency range in occipital and thalamic regions of interest. In addition, directed functional connectivity analysis was performed using Granger causality and feedback and feedforward activity was investigated using a directed asymmetry index. Psychopathology was assessed with the Positive and Negative Syndrome Scale. Acute ketamine administration in healthy volunteers led to similar effects on cognition and psychopathology as observed in first-episode and chronic schizophrenia patients. However, the effects of ketamine on high-frequency oscillations and their connectivity profile were not consistent with these observations. Ketamine increased amplitude and frequency of gamma-power (63–80 Hz) in occipital regions and upregulated low frequency (5–28 Hz) activity. Moreover, ketamine disrupted feedforward and feedback signalling at high and low frequencies leading to hypo- and hyper-connectivity in thalamo-cortical networks. In contrast, first-episode and chronic schizophrenia patients showed a different pattern of magnetoencephalographic activity, characterized by decreased task-induced high-gamma band oscillations and predominantly increased feedforward/feedback-mediated Granger causality connectivity. Accordingly, the current data have implications for theories of cognitive dysfunctions and circuit impairments in the disorder, suggesting that acute NMDAR hypofunction does not recreate alterations in neural oscillations during visual processing observed in schizophrenia.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different cell types in motor cortex due to transcranial magnetic stimulation. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict detailed neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to predict activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also predicts differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
Background: Recent epidemics have entailed global discussions on revamping epidemic control and prevention approaches. A general consensus is that all sources of data should be embraced to improve epidemic preparedness. As disease transmission is inherently governed by individual-level responses, pathogen dynamics within infected hosts hold high potential to inform population-level phenomena. We propose a multiscale approach and show that individual-level dynamics are able to reproduce population-level observations.
Methods: Using experimental data, we formulated mathematical models of pathogen infection dynamics from which we mechanistically simulated its transmission parameters. The models were then embedded in our implementation of an age-specific contact network that allows us to express individual differences relevant to the transmission processes. This approach is illustrated with the example of Ebola virus (EBOV).
Results: The results showed that a within-host infection model can reproduce EBOV’s transmission parameters obtained from population data. At the same time, the population age structure, contact distribution and contact patterns can be expressed using a network-generating algorithm. This framework opens a vast opportunity to investigate the individual roles of factors involved in the epidemic processes. Estimating EBOV’s reproduction number revealed a heterogeneous pattern among age groups, prompting caution regarding estimates unadjusted for contact patterns. Assessments of mass vaccination strategies showed that vaccination conducted in a time window from five months before to one week after the start of an epidemic appeared to strongly reduce epidemic size. Noticeably, compared to a non-intervention scenario, a low critical vaccination coverage of 33% cannot ensure epidemic extinction, but it could reduce the number of cases by ten to a hundred times as well as lessen the case-fatality rate.
Conclusions: Experimental data on within-host infection can capture key transmission parameters of a pathogen upfront; applications of this approach will give us more time to prepare for potential epidemics. The population of interest in epidemic assessments can be modelled with an age-specific contact network without an exhaustive amount of data. Further assessments and adaptations for different pathogens and scenarios to explore multilevel aspects of infectious disease epidemics are underway.
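The multiscale idea can be sketched with a generic target-cell-limited within-host model whose viral-load curve is mapped to a time-varying infectiousness. Both the model form and the saturating load-to-infectiousness map below are illustrative assumptions, not the fitted EBOV model of the paper.

```python
# Generic within-host model: target cells T, infected cells I, virus V.
# dT/dt = -beta*T*V, dI/dt = beta*T*V - delta*I, dV/dt = p*I - c*V.
# All parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def within_host(t, y, beta, delta, p, c):
    T, I, V = y
    return [-beta * T * V,
            beta * T * V - delta * I,
            p * I - c * V]

sol = solve_ivp(within_host, (0.0, 21.0), [1e6, 0.0, 1e-2],
                args=(3e-7, 1.0, 50.0, 3.0), dense_output=True)
t = np.linspace(0.0, 21.0, 211)
V = sol.sol(t)[2]

# hypothetical per-contact transmission probability, saturating in load;
# its time integral is the ingredient fed into the contact network
infectiousness = V / (V + 1e4)
weight = trapezoid(infectiousness, t)
print(f"integrated infectiousness over the infection: {weight:.1f} days")
```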
Transmission of temporally correlated spike trains through synapses with short-term depression
(2018)
Short-term synaptic depression, caused by depletion of releasable neurotransmitter, modulates the strength of neuronal connections in a history-dependent manner. Quantifying the statistics of synaptic transmission requires stochastic models that link probabilistic neurotransmitter release with presynaptic spike-train statistics. Common approaches are to model the presynaptic spike train as either regular or a memoryless Poisson process; few analytical results are available that describe depressing synapses when the afferent spike train has more complex, temporally correlated statistics such as bursts. Here we present a series of analytical results—from vesicle release-site occupancy statistics, via neurotransmitter release, to the post-synaptic voltage mean and variance—for depressing synapses driven by correlated presynaptic spike trains. The class of presynaptic drive considered is that fully characterised by the inter-spike-interval distribution and encompasses a broad range of models used for neuronal circuit and network analyses, such as integrate-and-fire models with a complete post-spike reset and receiving sufficiently short-time correlated drive. We further demonstrate that the derived post-synaptic voltage mean and variance allow for a simple and accurate approximation of the firing rate of the post-synaptic neuron, using the exponential integrate-and-fire model as an example. These results extend the level of biological detail included in models of synaptic transmission and will allow for the incorporation of more complex and physiologically relevant firing patterns into future studies of neuronal networks.
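The setting can be illustrated with a single stochastic release site with exponential vesicle recovery, driven by a gamma-renewal spike train whose coefficient of variation sets the temporal correlations. This is a simulation sketch of the model class, not the analytical results of the paper:

```python
# Single release site with depletion and exponential recovery, driven by
# a gamma-renewal spike train (CV < 1: more regular than Poisson).
import numpy as np

rng = np.random.default_rng(2)
rate, cv = 20.0, 0.5                    # firing rate (Hz), ISI CV
shape = 1.0 / cv**2                     # gamma-renewal ISI parameters
isis = rng.gamma(shape, 1.0 / (rate * shape), size=50_000)
spike_times = np.cumsum(isis)

pr, tau_d = 0.5, 0.2                    # release probability, recovery (s)
occupied, releases, t_prev = True, 0, 0.0
for t in spike_times:
    if not occupied:                    # exponential refilling over the ISI
        occupied = rng.random() < 1.0 - np.exp(-(t - t_prev) / tau_d)
    if occupied and rng.random() < pr:  # stochastic release depletes the site
        occupied = False
        releases += 1
    t_prev = t

# mean release probability per spike; varying cv shows the effect of
# temporal correlations on transmission through the depressing synapse
print("release probability per spike:", releases / len(spike_times))
```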
Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction.
The transverse momentum distributions of the strange and double-strange hyperon resonances (Σ(1385)±,Ξ(1530)0) produced in p–Pb collisions at √sNN = 5.02 TeV were measured in the rapidity range −0.5<yCMS<0 for event classes corresponding to different charged-particle multiplicity densities, ⟨dNch/dηlab⟩. The mean transverse momentum values are presented as a function of ⟨dNch/dηlab⟩, as well as a function of the particle masses and compared with previous results on hyperon production. The integrated yield ratios of excited to ground-state hyperons are constant as a function of ⟨dNch/dηlab⟩. The equivalent ratios to pions exhibit an increase with ⟨dNch/dηlab⟩, depending on their strangeness content.
A key hallmark of visual perceptual awareness is robustness to instabilities arising from unnoticeable eye and eyelid movements. In previous human intracranial (iEEG) work (Golan et al., 2016) we found that excitatory broadband high-frequency activity transients, driven by eye blinks, are suppressed in higher-level but not early visual cortex. Here, we utilized the broad anatomical coverage of iEEG recordings in 12 eye-tracked neurosurgical patients to test whether a similar stabilizing mechanism operates following small saccades. We compared saccades (1.3°−3.7°) initiated during inspection of large individual visual objects with similarly-sized external stimulus displacements. Early visual cortex sites responded with positive transients to both conditions. In contrast, in both dorsal and ventral higher-level sites the response to saccades (but not to external displacements) was suppressed. These findings indicate that early visual cortex is highly unstable compared to higher-level visual regions which apparently constitute the main target of stabilizing extra-retinal oculomotor influences.
The charged-particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, achieving sufficient spatial resolution has proven to be highly non-trivial. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods, for which reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler; the device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
Working memory and conscious perception are thought to share similar brain mechanisms, yet recent reports of non-conscious working memory challenge this view. Combining visual masking with magnetoencephalography, we investigate the reality of non-conscious working memory and dissect its neural mechanisms. In a spatial delayed-response task, participants reported the location of a subjectively unseen target above chance-level after several seconds. Conscious perception and conscious working memory were characterized by similar signatures: a sustained desynchronization in the alpha/beta band over frontal cortex, and a decodable representation of target location in posterior sensors. During non-conscious working memory, such activity vanished. Our findings contradict models that identify working memory with sustained neural firing, but are compatible with recent proposals of ‘activity-silent’ working memory. We present a theoretical framework and simulations showing how slowly decaying synaptic changes allow cell assemblies to go dormant during the delay, yet be retrieved above chance-level after several seconds.
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such a primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such an EoS-meter is model-independent and insensitive to other simulation inputs, including the initial conditions for hydrodynamic simulations.
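An architecture of this kind can be sketched as follows: the input is the final-state spectrum ρ(pT, φ) binned on a small 2D grid, and the output a two-way classification of the EoS used in the hydrodynamic simulation. Grid size, layer widths and the random stand-in data below are placeholders, not the network of the paper.

```python
# Sketch of a CNN "EoS-meter": 2D particle spectra in -> EoS class out.
import torch
import torch.nn as nn

class EoSMeter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # crossover vs first-order
        )

    def forward(self, x):                     # x: (batch, 1, n_pT, n_phi)
        return self.classifier(self.features(x))

model = EoSMeter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

spectra = torch.randn(64, 1, 15, 48)          # stand-in for hydro events
labels = torch.randint(0, 2, (64,))           # stand-in EoS labels
loss = loss_fn(model(spectra), labels)        # one training step
opt.zero_grad(); loss.backward(); opt.step()
```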
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
Current theories of schizophrenia (ScZ) posit that the symptoms and cognitive dysfunctions arise from a dysconnection syndrome. However, studies that have examined this hypothesis with physiological data at realistic time scales are so far scarce. The current study employed a state-of-the-art approach using magnetoencephalography (MEG) to test alterations in large-scale phase synchronization in a sample of n = 16 chronic ScZ patients (10 males) and n = 19 healthy participants (10 males) during a perceptual closure task. We identified large-scale networks from source-reconstructed MEG data using data-driven analyses of neuronal synchronization. Oscillation amplitudes and interareal phase synchronization in the 3–120 Hz frequency range were estimated for 400 cortical parcels and correlated with clinical symptoms and neuropsychological scores. ScZ patients were characterized by a reduction in γ-band (30–120 Hz) oscillation amplitudes that was accompanied by a pronounced deficit in large-scale synchronization at γ-band frequencies. Synchronization was reduced within visual regions as well as between visual and frontal cortex, and the reduction of synchronization correlated with elevated clinical disorganization. Accordingly, these data highlight that ScZ is associated with a profound disruption of transient synchronization, providing critical support for the notion that a core aspect of the pathophysiology arises from an impairment in the coordination of distributed neural activity.
The goal of heavy ion reactions at low beam energies is to explore the QCD phase diagram at high net baryon chemical potential. To relate experimental observations with a first order phase transition or a critical endpoint, dynamical approaches for the theoretical description have to be developed. In this summary of the corresponding plenary talk, the status of the dynamical modeling including the most recent advances is presented. The remaining challenges are highlighted and promising experimental measurements are pointed out.
The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees, and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H1'/C1' chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs.
We present results on transverse momentum (pT) and rapidity (y) differential production cross sections, mean transverse momentum and mean squared transverse momentum of inclusive J/ψ and ψ(2S) at forward rapidity (2.5 < y < 4) as well as ψ(2S)-to-J/ψ cross section ratios. These quantities are measured in pp collisions at center-of-mass energies √s = 5.02 and 13 TeV with the ALICE detector. Both charmonium states are reconstructed in the dimuon decay channel, using the muon spectrometer. A comprehensive comparison to inclusive charmonium cross sections measured at √s = 2.76, 7 and 8 TeV is performed. A comparison to non-relativistic quantum chromodynamics and fixed-order next-to-leading logarithm calculations, which describe prompt and non-prompt charmonium production respectively, is also presented. A good description of the data is obtained over the full pT range, provided that both contributions are summed. In particular, it is found that for pT > 15 GeV/c the non-prompt contribution reaches up to 50% of the total charmonium yield.
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies including recent experiments investigating motor sequence learning in adult human subjects have produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network’s changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network’s sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
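The SORN update cycle can be sketched in a few lines: binary threshold units, additive STDP on existing excitatory synapses, normalisation of incoming weights, and threshold adaptation towards a target rate. Network sizes and learning rates below are illustrative, and inhibition is reduced to a static pool.

```python
# Minimal SORN-style network: binary units with STDP, synaptic
# normalisation (SN) and intrinsic plasticity (IP). Illustrative sketch.
import numpy as np

rng = np.random.default_rng(3)
Ne, Ni = 200, 40
W_ee = rng.random((Ne, Ne)) * (rng.random((Ne, Ne)) < 0.05)  # sparse E->E
np.fill_diagonal(W_ee, 0.0)
W_ei = rng.random((Ne, Ni)) * 0.5          # I -> E weights
W_ie = rng.random((Ni, Ne)) * 0.5          # E -> I weights
T_e = rng.uniform(0.0, 0.5, Ne)            # excitatory thresholds
T_i = rng.uniform(0.0, 0.5, Ni)
x = (rng.random(Ne) < 0.1).astype(float)   # excitatory state
y = np.zeros(Ni)
eta_stdp, eta_ip, target = 0.004, 0.01, 0.1

for t in range(10_000):
    x_new = ((W_ee @ x - W_ei @ y - T_e) > 0).astype(float)
    y = ((W_ie @ x_new - T_i) > 0).astype(float)
    # STDP: potentiate pre(t-1) -> post(t) pairings, depress the reverse
    W_ee += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)) * (W_ee > 0)
    W_ee = np.clip(W_ee, 0.0, None)
    # SN: incoming excitatory weights of each unit sum to one
    W_ee /= W_ee.sum(axis=1, keepdims=True) + 1e-12
    # IP: drive each unit's threshold towards the target firing rate
    T_e += eta_ip * (x_new - target)
    x = x_new
```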
Two theories address the origin of repeating patterns, such as hair follicles, limb digits, and intestinal villi, during development. The Turing reaction–diffusion system posits that interacting diffusible signals produced by static cells first define a prepattern that then induces cell rearrangements to produce an anatomical structure. The second theory, that of mesenchymal self-organisation, proposes that mobile cells can form periodic patterns of cell aggregates directly, without reference to any prepattern. Early hair follicle development is characterised by the rapid appearance of periodic arrangements of altered gene expression in the epidermis and prominent clustering of the adjacent dermal mesenchymal cells. We assess the contributions and interplay between reaction–diffusion and mesenchymal self-organisation processes in hair follicle patterning, identifying a network of fibroblast growth factor (FGF), wingless-related integration site (WNT), and bone morphogenetic protein (BMP) signalling interactions capable of spontaneously producing a periodic pattern. Using time-lapse imaging, we find that mesenchymal cell condensation at hair follicles is locally directed by an epidermal prepattern. However, imposing this prepattern’s condition of high FGF and low BMP activity across the entire skin reveals a latent dermal capacity to undergo spatially patterned self-organisation in the absence of epithelial direction. This mesenchymal self-organisation relies on restricted transforming growth factor (TGF) β signalling, which serves to drive chemotactic mesenchymal patterning when reaction–diffusion patterning is suppressed, but, in normal conditions, facilitates cell movement to locally prepatterned sources of FGF. This work illustrates a hierarchy of periodic patterning modes operating in organogenesis.
Dendrites form predominantly binary trees that are exquisitely embedded in the networks of the brain. While neuronal computation is known to depend on the morphology of dendrites, their underlying topological blueprint remains unknown. Here, we used a centripetal branch ordering scheme originally developed to describe river networks, the Horton-Strahler order (SO), to examine hierarchical relationships of branching statistics in reconstructed and model dendritic trees. We report on a number of universal topological relationships with SO that are true for all binary trees and distinguish those from SO-sorted metric measures that appear to be cell type-specific. The latter are therefore potential new candidates for categorising dendritic tree structures. Interestingly, we find a faithful correlation of branch diameters with centripetal branch orders, indicating a possible functional importance of SO for dendritic morphology and growth. Also, simulated local voltage responses to synaptic inputs are strongly correlated with SO. In summary, our study identifies important SO-dependent measures in dendritic morphology that are relevant for neural function while at the same time it describes other relationships that are universal for all dendrites.
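The Horton-Strahler order itself is computed centripetally from the tips: a termination point has order 1, and a branch point where both subtrees carry equal order k gets order k + 1, otherwise the larger of the two. A sketch on a toy tree representation (the children mapping is illustrative):

```python
# Horton-Strahler order of a binary tree given as node -> children lists.
def strahler(children, node=0):
    kids = children.get(node, [])
    if not kids:                        # termination point
        return 1
    orders = sorted(strahler(children, k) for k in kids)
    if len(orders) == 2 and orders[0] == orders[1]:
        return orders[0] + 1            # two equal subtrees merge to k+1
    return orders[-1]                   # otherwise inherit the larger order

# Example: a small asymmetric dendrite-like tree rooted at node 0
tree = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: [5, 6], 5: [], 6: []}
print(strahler(tree))                   # -> 2
```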
I summarize recent developments in the hard-thermal-loop approach to QCD. I first discuss a finite-temperature and -density calculation of QCD thermodynamics at NNLO from the hard-thermal-loop perturbation theory. I then discuss a generalization of the hard-thermal-loop framework to the magnetic scale g²T, from which a novel non-Abelian massless mode is uncovered.
The pA system is typically regarded in heavy ion collisions as a “cold” nuclear matter environment and thought to isolate and identify initial state effects due to the presence of multiple nucleons in the incoming nucleus. Moreover, pA collisions bridge the gap between peripheral AA collisions and the pp baseline to create a more complete understanding of underlying production mechanisms and how they evolve with multiplicity. Recent measurements at both RHIC and the LHC provide an indication, however, that the “cold” nuclear matter picture may be somewhat naïve.
Recent LHC results from the 2013 p–Pb run at √sNN = 5.02 TeV will be discussed.
Overrepresentation of bidirectional connections in local cortical networks has been repeatedly reported and is a focus of the ongoing discussion of nonrandom connectivity. Here we show in a brief mathematical analysis that in a network in which connection probabilities are symmetric in pairs, Pij = Pji, the occurrences of bidirectional connections and nonrandom structures are inherently linked; an overabundance of reciprocally connected pairs emerges necessarily when some pairs of neurons are more likely to be connected than others. Our numerical results imply that such overrepresentation can also be sustained when connection probabilities are only approximately symmetric.
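The argument can be checked numerically. If the two directions of a pair are sampled independently with the same probability p_ij, the pair is reciprocal with probability p_ij², so the expected reciprocal fraction is E[p²], which by Jensen's inequality exceeds the homogeneous prediction (E[p])² whenever the p_ij vary:

```python
# Reciprocity from symmetric but heterogeneous connection probabilities.
import numpy as np

rng = np.random.default_rng(4)
n = 400
p = rng.beta(0.5, 4.0, size=(n, n))       # heterogeneous, mean ~0.11
p = np.triu(p, 1); p = p + p.T            # enforce p_ij = p_ji

A = rng.random((n, n)) < p                # sample directed connections
np.fill_diagonal(A, False)

pairs = n * (n - 1) / 2
recip = np.sum(A & A.T) / 2               # bidirectionally connected pairs
pu = p[np.triu_indices(n, 1)]
print("observed reciprocal fraction:", recip / pairs)
print("E[p^2] prediction:           ", (pu**2).mean())
print("homogeneous (E[p])^2:        ", pu.mean()**2)
```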
Criticality meets learning : criticality signatures in a self-organizing recurrent neural network
(2017)
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
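A common way to extract such criticality signatures is to segment the population activity into avalanches, i.e. runs of time bins with non-zero activity separated by silent bins, and to examine the avalanche-size distribution for power-law behaviour. The sketch below uses a random surrogate trace; in the SORN the analogous trace would be the number of active units per step.

```python
# Avalanche extraction and a crude power-law slope estimate.
import numpy as np

rng = np.random.default_rng(5)
activity = rng.poisson(0.9, size=200_000)   # surrogate population activity

sizes, current = [], 0
for a in activity:
    if a > 0:
        current += a                  # accumulate spikes in the avalanche
    elif current > 0:
        sizes.append(current)         # a silent bin closes the avalanche
        current = 0

sizes = np.array(sizes)
# crude slope estimate of P(S) ~ S^-alpha via log-log regression
hist, edges = np.histogram(sizes, bins=np.arange(1, 60), density=True)
mask = hist > 0
alpha = -np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
print(f"estimated avalanche-size exponent: {alpha:.2f}")
```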
BACKGROUND: The analysis of microarray time series promises a deeper insight into the dynamics of the cellular response following stimulation. A common observation in this type of data is that some genes respond with quick, transient dynamics, while other genes change their expression slowly over time. The existing methods for detecting significant expression dynamics often fail when the expression dynamics show a large heterogeneity. Moreover, these methods often cannot cope with irregular and sparse measurements.
RESULTS: The method proposed here is specifically designed for the analysis of perturbation responses. It combines different scores to capture fast and transient dynamics as well as slow expression changes, and performs well in the presence of low replicate numbers and irregular sampling times. The results are given in the form of tables including links to figures showing the expression dynamics of the respective transcript. These allow the user to quickly recognise the relevance of a detection, to identify possible false positives, and to discriminate early and late changes in gene expression. An extension of the method allows the analysis of the expression dynamics of functional groups of genes, providing a quick overview of the cellular response. The performance of this package was tested on microarray data derived from lung cancer cells stimulated with epidermal growth factor (EGF).
CONCLUSION: Here we describe a new, efficient method for the analysis of sparse and heterogeneous time course data with high detection sensitivity and transparency. It is implemented as the R package TTCA (transcript time course analysis) and can be installed from the Comprehensive R Archive Network (CRAN). The source code is provided with the Additional file 1.
Study of hard-core repulsive interactions in a hadronic gas from a comparison with lattice QCD
(2016)
We study the influence of hard-core repulsive interactions within the Hadron-Resonance Gas model in comparison to first-principles calculations performed on the lattice. We check the effect of a bag-like parametrization of the particle eigenvolume on flavor correlators, looking for an extension of the agreement with lattice simulations up to higher temperatures, as previously pointed out in an analysis of hadron yields measured by the ALICE experiment. Hints of a flavor-dependent eigenvolume are found.
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on the measurement of very rare probes at interaction rates up to 10 MHz with a data flow of up to 1 TB/s. The beam will be delivered as a free stream of particles without bunch structure. This requires full online event reconstruction and selection not only in space but also in time, so-called 4D event building and selection.
The FLES (First-Level Event Selection) reconstruction and selection package consists of several modules: track finding, track fitting, short-lived particle finding, event building and event selection. A time-slice is reconstructed in parallel between cores within the same CPU, thus minimizing the communication between CPUs. After all tracks are found and fitted in 4D, they are collected into clusters of tracks originating from common primary vertices, which are then fitted, thus identifying the 4D interaction points registered within the time-slice. Secondary tracks are associated with primary vertices according to their estimated production time. After that, short-lived particles are found and the full event building process is finished. The last stage of the FLES package is the selection of events according to the requested trigger signatures.
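The grouping of tracks into interaction points can be conveyed with a toy example: tracks in a time-slice carry estimated production times, and gaps larger than the time resolution separate candidate events. The real FLES package clusters in space and time with full fit quality; the 1D time-gap clustering below only illustrates the idea.

```python
# Toy 4D event building: split a time-slice into events by time gaps.
import numpy as np

rng = np.random.default_rng(6)
# three interactions inside one time-slice (times in ns), ~10 tracks each
true_t0 = np.array([120.0, 450.0, 870.0])
track_t = np.concatenate([t0 + rng.normal(0.0, 5.0, 10) for t0 in true_t0])
track_t.sort()

gap = 30.0                                    # ns; larger than resolution
breaks = np.flatnonzero(np.diff(track_t) > gap) + 1
events = np.split(track_t, breaks)
for i, ev in enumerate(events):
    print(f"event {i}: {len(ev)} tracks, t0 ~ {ev.mean():.0f} ns")
```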
Neural oscillations at low- and high-frequency ranges are a fundamental feature of large-scale networks. Recent evidence has indicated that schizophrenia is associated with abnormal amplitude and synchrony of oscillatory activity, in particular, at high (beta/gamma) frequencies. These abnormalities are observed during task-related and spontaneous neuronal activity which may be important for understanding the pathophysiology of the syndrome. In this paper, we shall review the current evidence for impaired beta/gamma-band oscillations and their involvement in cognitive functions and certain symptoms of the disorder. In the first part, we will provide an update on neural oscillations during normal brain functions and discuss underlying mechanisms. This will be followed by a review of studies that have examined high-frequency oscillatory activity in schizophrenia and discuss evidence that relates abnormalities of oscillatory activity to disturbed excitatory/inhibitory (E/I) balance. Finally, we shall identify critical issues for future research in this area.
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells, we show that denervation-induced dendritic atrophy also subserves homeostatic functions: by shortening their dendritic tree, granule cells compensate for the loss of inputs through a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in the reorganized portions of the dendritic arbor, resulting in increased synaptic plasticity there. These two observations generalize to any dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
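The abstract does not give the model equations, but the homeostatic effect of dendritic shortening can be illustrated with textbook passive cable theory: the input resistance of a sealed-end cable is R_in = R_∞ coth(l/λ), so a shorter dendrite has a higher input resistance and therefore produces a larger depolarization per surviving synapse. A minimal sketch, with illustrative membrane parameters:

```python
# Sketch (not the paper's model): input resistance of a passive,
# sealed-end cable, showing that a shortened dendrite is more excitable.
import numpy as np

def input_resistance(l_um, d_um=2.0, Rm=20_000.0, Ra=150.0):
    """R_in = R_inf * coth(l / lambda) for a sealed-end cable.
    Rm in Ohm cm^2, Ra in Ohm cm, geometry in micrometres."""
    d_cm, l_cm = d_um * 1e-4, l_um * 1e-4
    lam = np.sqrt(Rm * d_cm / (4.0 * Ra))                 # space constant, cm
    r_inf = 2.0 * np.sqrt(Rm * Ra) / (np.pi * d_cm**1.5)  # Ohm
    return r_inf / np.tanh(l_cm / lam)

for l in (600.0, 400.0):  # intact vs. atrophied dendrite, micrometres
    print(f"l = {l:3.0f} um -> R_in = {input_resistance(l) / 1e6:.0f} MOhm")
# the shorter cable has the higher input resistance, so fewer surviving
# afferents suffice to drive the cell
```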
Abstract: Integration of synaptic currents across an extensive dendritic tree is a prerequisite for computation in the brain. Dendritic tapering away from the soma has been suggested both to equalise contributions from synapses at different locations and to maximise the current transfer to the soma. To find out precisely how this is achieved, an analytical solution for the current transfer in dendrites with arbitrary taper is required. Here we derive an asymptotic approximation that accurately matches results from numerical simulations. From this we then determine the diameter profile that maximises the current transfer to the soma. We find a simple quadratic form that matches diameters obtained experimentally, indicating a fundamental architectural principle of the brain that links dendritic diameters to signal transmission.
Author Summary: Neurons take a great variety of shapes that allow them to perform their different computational roles across the brain. The most distinctive visible feature of many neurons is the extensively branched network of cable-like projections that make up their dendritic tree. A neuron receives current-inducing synaptic contacts from other cells across its dendritic tree. As in the case of botanical trees, dendritic trees are strongly tapered towards their tips. This tapering has previously been shown to offer a number of advantages over a constant width, both in terms of reduced energy requirements and the robust integration of inputs at different locations. However, the analytical solutions for the flow of input currents that are used to predict the computations neurons perform tend to assume constant dendritic diameters. Here we introduce an asymptotic approximation that accurately models the current transfer in dendritic trees with arbitrary, continuously changing diameters. When we then determine the diameter profiles that maximise current transfer towards the cell body, we find diameters similar to those observed in real neurons. We conclude that the tapering in dendritic trees to optimise signal transmission is a fundamental architectural principle of the brain.
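For reference, the steady-state cable equation for a dendrite with position-dependent diameter d(x), which underlies analyses of this kind, is the following (standard cable theory in our notation, not an equation quoted from the paper):

```latex
% Steady-state cable equation with position-dependent diameter d(x):
% V = membrane potential, R_a = axial resistivity, R_m = specific
% membrane resistance.
\[
  \frac{\mathrm{d}}{\mathrm{d}x}\!\left[\frac{\pi\,d(x)^{2}}{4R_{a}}
  \frac{\mathrm{d}V}{\mathrm{d}x}\right]
  \;=\; \frac{\pi\,d(x)}{R_{m}}\,V .
\]
% For constant d this reduces to \lambda^{2} V'' = V with
% \lambda = \sqrt{R_{m} d / (4 R_{a})}; the paper's result is that the
% transfer-maximising profile d(x) is quadratic in the distance to soma.
```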
Abstract: We consider the phase structure of hadronic and hadron-quark models at finite temperature and density. The basis for the hadronic part is an extension of a flavor-SU(3) σ–ω model. We study the effect on the phase diagram of adding additional hadronic resonances to the model. With the resulting equation of state we investigate heavy-ion collisions using hydrodynamical simulations. In a combined approach we include quarks and the Polyakov-loop field in the calculation and study chiral symmetry restoration and the deconfinement transition.
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. Large-scale projects were recently launched with the aim of providing infrastructure for brain simulations. These projects will increase the need for a precise understanding of brain structure, e.g., through statistical analysis and models.
From the articles in this Research Topic, we identify three main themes that clearly illustrate how new quantitative approaches are helping to advance our understanding of neural structure and function. First, new approaches to reconstructing neurons and circuits from empirical data are aiding neuroanatomical mapping. Second, methods are introduced to improve understanding of the underlying principles of organization. Third, by combining existing knowledge from lower levels of organization, models can be used to make testable predictions about higher levels of organization where knowledge is absent or poor. This latter approach is useful for examining the statistical properties of specific network connectivity when current experimental methods cannot yet fully reconstruct whole circuits of more than a few hundred neurons.
Abstract: Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power-law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple, biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both the structure and function of cortical synaptic wiring.
Author Summary: The problem of how the brain wires itself up has important implications for the understanding of both brain development and cognition. The microscopic structure of the circuits of the adult neocortex, often considered the seat of our highest cognitive abilities, is still poorly understood. Recent experiments have provided a first set of findings on the structural features of these circuits, but it is unknown how these features come about and how they are maintained. Here we present a neural network model that shows how they might come about. It gives rise to numerous connectivity features that have been observed in experiments but never before simultaneously produced by a single model. Our model explains the development of these structural features as the result of a process of self-organization. The results imply that only a few simple mechanisms and constraints are required to produce, at least to a first approximation, various characteristic features of a typical fragment of brain microcircuitry. In the absence of any one of these mechanisms, the simultaneous production of all the desired features fails, suggesting a minimal set of mechanisms necessary for their production.
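One of the non-random features named above, above-chance bidirectionality, is typically quantified as the ratio of observed reciprocal connection pairs to the Erdős–Rényi expectation. A minimal sketch (our illustration, not the authors' analysis code):

```python
# Sketch: over-representation of bidirectional connections in a directed
# adjacency matrix A, relative to an Erdos-Renyi null with the same
# connection probability (our illustration, not the authors' analysis).
import numpy as np

def bidirectionality_ratio(A):
    n = A.shape[0]
    p = A.sum() / (n * (n - 1))          # empirical connection probability
    recip = np.sum(A * A.T) / 2          # observed reciprocal pairs
    expected = (n * (n - 1) / 2) * p**2  # chance level for a random graph
    return recip / expected

rng = np.random.default_rng(0)
A = (rng.random((200, 200)) < 0.1).astype(int)
np.fill_diagonal(A, 0)
print(bidirectionality_ratio(A))  # ~1 for a random graph; cortical slice
                                  # data give ratios well above 1
```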
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning, and learning in mammals. However, our understanding of how probabilistic computations could be organized in the brain, and of how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition, in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity at the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate to Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on the correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
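A toy sketch in the spirit of the described scheme, under strong simplifications of our own (this is not the paper's exact network or its derived update rules): a soft winner-take-all layer samples one "spike" from a softmax posterior over membrane potentials, afferent weights follow a Hebbian rule, and excitabilities follow a homeostatic intrinsic plasticity rule. All sizes, rates, and the input ensemble are placeholders.

```python
# Toy soft winner-take-all sampler with Hebbian and homeostatic updates
# (our simplification; sizes, rates, and the input ensemble are made up).
import numpy as np

rng = np.random.default_rng(1)
K, D, eta = 8, 64, 0.05            # experts, input dimension, learning rate

def step(x, W, b):
    """One inference-plus-learning step on a binary input vector x."""
    u = W @ x + b                          # membrane potentials
    p = np.exp(u - u.max()); p /= p.sum()  # softmax ~ variational posterior
    k = rng.choice(len(b), p=p)            # lateral inhibition: one spike
    W[k] += eta * (x - W[k])               # Hebbian: pull weights to input
    fired = np.zeros(len(b)); fired[k] = 1.0
    b += eta * (1.0 / len(b) - fired)      # homeostasis toward equal rates

W = rng.normal(0.0, 0.1, (K, D))   # afferent weights of the implicit model
b = np.zeros(K)                    # excitabilities (intrinsic plasticity)
protos = (rng.random((2, D)) < 0.3).astype(float)  # two input patterns
for _ in range(2000):
    x = protos[rng.integers(2)].copy()
    flip = rng.random(D) < 0.05    # 5 % pixel noise
    x[flip] = 1.0 - x[flip]
    step(x, W, b)
# after training, distinct experts respond to the two input patterns
```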
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
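A minimal SORN-style sketch, assuming binary threshold units and the plasticity rules named above (STDP plus homeostatic mechanisms, here synaptic normalization and intrinsic plasticity); all parameters are illustrative and the external input drive of the full model is omitted.

```python
# Minimal SORN-style network: binary threshold units with STDP, synaptic
# normalization and intrinsic plasticity (illustrative parameters; the
# external input drive of the full model is omitted).
import numpy as np

rng = np.random.default_rng(2)
NE, NI = 200, 40                          # excitatory / inhibitory units
eta_stdp, eta_ip, h_ip = 1e-3, 1e-3, 0.1  # rates and target activity

WEE = rng.random((NE, NE)) * (rng.random((NE, NE)) < 0.05)
np.fill_diagonal(WEE, 0.0)
WEI = rng.random((NE, NI)) * 0.5
WIE = rng.random((NI, NE)) * (rng.random((NI, NE)) < 0.2)
TE, TI = rng.random(NE), rng.random(NI) * 0.5
x, y = (rng.random(NE) < h_ip).astype(float), np.zeros(NI)

for t in range(5000):
    x_new = ((WEE @ x - WEI @ y - TE) > 0).astype(float)
    y = ((WIE @ x_new - TI) > 0).astype(float)
    # STDP: potentiate pre-before-post pairs, depress the reverse order
    WEE += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    WEE = np.clip(WEE, 0.0, None)
    # synaptic normalization: incoming excitatory weights sum to one
    s = WEE.sum(axis=1, keepdims=True); s[s == 0.0] = 1.0
    WEE /= s
    # intrinsic plasticity: thresholds track the target firing rate
    TE += eta_ip * (x_new - h_ip)
    x = x_new
```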
Tumour hypoxia plays a pivotal role in cancer therapy for most therapeutic approaches, from radiotherapy to immunotherapy. Detailed and accurate knowledge of the oxygen distribution in a tumour is necessary in order to determine the right treatment strategy. Still, due to the limited spatial and temporal resolution of imaging methods, as well as a lacking fundamental understanding of internal oxygenation dynamics in tumours, a precise oxygen distribution map is rarely available for treatment planning. We employ an agent-based in silico tumour spheroid model in order to study the complex, localized and fast oxygen dynamics in tumour micro-regions which are induced by radiotherapy. A lattice-free, 3D, agent-based approach for cell representation is coupled with a high-resolution diffusion solver that includes a tissue-density-dependent diffusion coefficient. This allows us to assess the space- and time-resolved reoxygenation response of a small subvolume of tumour tissue following radiotherapy. In response to irradiation, the tumour nodule exhibits characteristic reoxygenation and re-depletion dynamics, which we resolve with high spatio-temporal resolution. The reoxygenation follows specific timings, which should be respected in treatment in order to maximise the benefit of oxygen enhancement effects. Oxygen dynamics within the tumour create windows of opportunity for the use of adjuvant chemotherapeutics and hypoxia-activated drugs. Overall, we show that modelling makes it possible to follow the oxygenation dynamics beyond common resolution limits and to predict beneficial strategies for therapy and in vitro verification. Models of cell-cycle and oxygen dynamics in tumours should in the future be combined with imaging techniques to allow for a systematic experimental study of possible improved schedules and, ultimately, to extend the reach of oxygenation monitoring available in clinical treatment.
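Our reading of the diffusion component, reduced to one dimension for illustration (the paper's solver is 3D and coupled to an agent-based cell model): explicit finite differences for ∂c/∂t = ∂/∂x(D(ρ) ∂c/∂x) − q(c), with a density-dependent diffusion coefficient and Michaelis–Menten-type consumption. All constants are placeholders.

```python
# 1D sketch of the reaction-diffusion component: explicit finite
# differences for dc/dt = d/dx( D(rho) dc/dx ) - q(c), with a density-
# dependent diffusion coefficient and Michaelis-Menten consumption.
import numpy as np

nx, dx, dt = 200, 2.0e-6, 1.0e-4         # grid spacing (m), time step (s)
D0, q_max, k_m = 2.0e-9, 2.0e-2, 1.0e-3  # placeholder constants
rho = np.linspace(0.2, 1.0, nx)          # tissue density profile
D = D0 * (1.0 - 0.5 * rho)               # denser tissue diffuses slower
c = np.full(nx, 0.05)                    # initial oxygen level

for _ in range(20_000):
    Dface = 0.5 * (D[:-1] + D[1:])       # diffusivity at cell faces
    flux = Dface * np.diff(c) / dx
    dc = np.zeros(nx)
    dc[1:-1] = (flux[1:] - flux[:-1]) / dx \
        - q_max * c[1:-1] / (k_m + c[1:-1])  # Michaelis-Menten uptake
    c += dt * dc
    c[0] = c[-1] = 0.2                   # fixed oxygen at the vessels
print(c.min())  # hypoxic minimum in the tissue interior
```

The explicit scheme is stable here because dt·D0/dx² = 0.05, well below the usual 0.5 limit.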
Sparse coding is a popular approach to modelling natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where in the probabilistic view the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With this prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, together with a distribution over non-zero pixel intensities. With the nonlinearity (a nonlinear max combination rule), the idea is to capture occlusions: dictionary elements correspond to image components that can occlude each other. The model assumptions of the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, so as a main contribution we design an exact Gibbs sampler for efficient inference, which we can apply to higher-dimensional data using latent-variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively approximate and characterize the true generative process.
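A generative sketch of the described model as we read it from the abstract (not the authors' code): spike-and-slab latents are combined by a pointwise max, mimicking occlusion, with the classical linear superposition computed alongside for contrast. All dimensions and hyperparameters are placeholders.

```python
# Generative sketch: spike-and-slab latents combined by a pointwise max
# (occlusion-like), with linear superposition alongside for contrast.
import numpy as np

rng = np.random.default_rng(3)
H, D = 6, 16 * 16                    # components, pixels
W = rng.random((H, D))               # dictionary of image components
pi_h, sigma = 0.3, 1.0               # spike probability, slab std

z = rng.random(H) < pi_h             # spike: which components are present
s = rng.normal(0.0, sigma, H)        # slab: their intensities if present
contrib = (z * s)[:, None] * W       # per-component contributions
img_max = contrib.max(axis=0)        # nonlinear max combination
img_lin = contrib.sum(axis=0)        # classical linear superposition
img = img_max + rng.normal(0.0, 0.1, D)  # observed image with pixel noise
```

Inverting this generative process, i.e. inferring z and s from img, is what the paper's exact Gibbs sampler with latent-variable preselection is designed to do.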