Frankfurt Institute for Advanced Studies (FIAS)
Fuzziness at the horizon
(2010)
We study the stability of the noncommutative Schwarzschild black hole interior by analysing the propagation of a massless scalar field between the two horizons. We show that the spacetime fuzziness triggered by the field's higher momenta can cure the classical exponential blue-shift divergence, suppressing the emergence of infinite energy density in the region near the Cauchy horizon.
In this Letter, we propose a new scenario emerging from the conjectured presence of a minimal length ℓ in the spacetime fabric, on the one hand, and the existence of a new scale-invariant, continuous mass spectrum of un-particles on the other. We introduce the concept of the un-spectral dimension DU of a d-dimensional, Euclidean (quantum) spacetime, as the spectral dimension measured by an “un-particle” probe. We find a general expression for the un-spectral dimension DU labelling different spacetime phases: a semi-classical phase, where the ordinary spectral dimension gets a contribution from the scaling dimension dU of the un-particle probe; a critical “Planckian phase”, where four-dimensional spacetime can be effectively considered two-dimensional when dU=1; and a “trans-Planckian phase”, accessible to un-particle probes only, where spacetime as we currently understand it loses its physical meaning.
Quarkyonic or baryquark matter? On the dynamical generation of momentum space shell structure
(2023)
We study the equation of state of a mixture of (quasi-)free constituent quarks and nucleons with hard-core repulsion at zero temperature. Two opposite scenarios for the realization of the Pauli exclusion principle are considered: (i) a Fermi sea of quarks surrounded by a shell of baryons – the quarkyonic matter, and (ii) a Fermi sea of nucleons surrounded by a shell of quarks, which we call baryquark matter. In both scenarios, the sizes of the Fermi sea and shell are fixed through energy minimization at fixed baryon number density. While both cases yield a qualitatively similar transition from hadronic to quark matter, we find that baryquark matter is energetically favored in this setup and yields a physically acceptable behavior of the speed of sound without the need to introduce an infrared regulator. Retaining the theoretically more appealing quarkyonic matter as the preferred form of dense QCD matter will thus require modifications to the existing dynamical generation mechanisms, such as, for example, the introduction of momentum-dependent nuclear interactions.
The production of the hypertriton (3ΛH) and its antiparticle has been measured for the first time in Pb–Pb collisions at √sNN = 2.76 TeV with the ALICE experiment at the LHC. The pT-integrated 3ΛH yield in one unit of rapidity, dN/dy × B.R.(3ΛH → 3He, π−) = (3.86±0.77(stat.)±0.68(syst.))×10−5 in the 0–10% most central collisions, is consistent with the predictions from a statistical thermal model using the same temperature as for the light hadrons. The coalescence parameter B3 shows a dependence on the transverse momentum, similar to the B2 of deuterons and the B3 of 3He nuclei. The ratio of yields S3 = 3ΛH/(3He × Λ/p) was measured to be S3 = 0.60±0.13(stat.)±0.21(syst.) in 0–10% centrality events; this value is compared to different theoretical models. The measured S3 is compatible with thermal model predictions. The measured 3ΛH lifetime, τ = 181−39+54(stat.)±33(syst.) ps, is in agreement within 1σ with the world average value.
The ALICE Collaboration has made the first measurement at the LHC of J/ψ photoproduction in ultra-peripheral Pb–Pb collisions at √sNN = 2.76 TeV. The J/ψ is identified via its dimuon decay in the forward rapidity region with the muon spectrometer for events where the hadronic activity is required to be minimal. The analysis is based on an event sample corresponding to an integrated luminosity of about 55 μb−1. The cross section for coherent J/ψ production in the rapidity interval −3.6<y<−2.6 is measured to be dσJ/ψcoh/dy = 1.00±0.18(stat)−0.26+0.24(syst) mb. The result is compared to theoretical models for coherent J/ψ production and found to be in good agreement with those models which include nuclear gluon shadowing.
Convolutional neural networks (CNNs) are one of the most successful computer vision systems to solve object recognition. Furthermore, CNNs have major applications in understanding the nature of visual representations in the human brain. Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans. Specifically, there is a major debate about the question of whether CNNs primarily rely on surface regularities of objects, or whether they are capable of exploiting the spatial arrangement of features, similar to humans. Here, we develop a novel feature-scrambling approach to explicitly test whether CNNs use the spatial arrangement of features (i.e. object parts) to classify objects. We combine this approach with a systematic manipulation of effective receptive field sizes of CNNs as well as minimal recognizable configurations (MIRCs) analysis. In contrast to much previous literature, we provide evidence that CNNs are in fact capable of using relatively long-range spatial relationships for object classification. Moreover, the extent to which CNNs use spatial relationships depends heavily on the dataset, e.g. texture vs. sketch. In fact, CNNs even use different strategies for different classes within heterogeneous datasets (ImageNet), suggesting CNNs have a continuous spectrum of classification strategies. Finally, we show that CNNs learn the spatial arrangement of features only up to an intermediate level of granularity, which suggests that intermediate rather than global shape features provide the optimal trade-off between sensitivity and specificity in object classification. These results provide novel insights into the nature of CNN representations and the extent to which they rely on the spatial arrangement of features for object classification.
In this paper, we present a family of regular black hole solutions in the presence of charge and angular momentum. We also discuss the related thermodynamics and comment on the black hole life cycle during the balding and spin-down phases. Interestingly, the static solution resembles the Ayón-Beato–García spacetime, provided the T-duality scale is redefined in terms of the electric charge, l0→Q. The key factor at the basis of our derivation is the employment of Padmanabhan's propagator to calculate static potentials. Such a propagator encodes string T-duality effects. This means that the regularity of the spacetimes presented here can open a new window on string theory phenomenology.
In gastric cancer (GC), there are four molecular subclasses that indicate whether patients respond to chemotherapy or immunotherapy, according to the TCGA. In clinical practice, however, not every patient undergoes molecular testing. Many laboratories have used well-implemented in situ techniques (IHC and EBER-ISH) to determine the subclasses in their cohorts. Although multiple stains are used, we show that a staining approach is unable to correctly discriminate all subclasses. As an alternative, we trained an ensemble convolutional neural network using bagging that can predict the molecular subclass directly from hematoxylin–eosin histology. We also identified patients with predicted intra-tumoral heterogeneity or with features from multiple subclasses, which challenges the postulated TCGA-based decision tree for GC subtyping. In the future, deep learning may enable targeted testing for molecular subtypes and targeted therapy for a broader group of GC patients. © 2022 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) can show variable histological growth patterns and present remarkable overlap with T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL). Previous studies suggest that NLPHL histological variants represent progression forms of NLPHL and THRLBCL transformation in aggressive disease. Since molecular studies of both lymphomas are limited due to the low number of tumor cells, the present study aimed to learn if a better understanding of these lymphomas is possible via detailed measurements of nuclear and cell size features in 2D and 3D sections. Whereas no significant differences were visible in 2D analyses, a slightly increased nuclear volume and a significantly enlarged cell size were noted in 3D measurements of the tumor cells of THRLBCL in comparison to typical NLPHL cases. Interestingly, not only was the size of the tumor cells increased in THRLBCL but also the nuclear volume of concomitant T cells in the reactive infiltrate when compared with typical NLPHL. Particularly CD8+ T cells had frequent contacts to tumor cells of THRLBCL. However, the nuclear volume of B cells was comparable in all cases. These results clearly demonstrate that 3D tissue analyses are superior to conventional 2D analyses of histological sections. Furthermore, the results point to a strong activation of T cells in THRLBCL, representing a cytotoxic response against the tumor cells with unclear effectiveness, resulting in enhanced swelling of the tumor cell bodies and limiting proliferative potential. Further molecular studies combining 3D tissue analyses and molecular data will help to gain profound insight into these ill-defined cellular processes.
The impact of GABAergic transmission on neuronal excitability depends on the Cl- gradient across membranes. However, Cl- fluxes through GABAA receptors alter the intracellular Cl- concentration ([Cl-]i) and in turn attenuate GABAergic responses, a process termed ionic plasticity. Recently, it has been shown that coincident glutamatergic inputs significantly affect ionic plasticity. Yet how the [Cl-]i changes depend on the properties of glutamatergic inputs and their spatiotemporal relation to GABAergic stimuli is unknown. To investigate this issue, we used compartmental biophysical models of Cl- dynamics simulating either a simple ball-and-stick topology or a reconstructed CA3 neuron. These computational experiments demonstrated that glutamatergic co-stimulation enhances GABA receptor-mediated Cl- influx at low initial [Cl-]i and attenuates or reverses the Cl- efflux at high initial [Cl-]i. The size of the glutamatergic influence on GABAergic Cl- fluxes depends on the conductance, decay kinetics, and localization of glutamatergic inputs. Surprisingly, the glutamatergic shift in GABAergic Cl- fluxes is invariant to latencies between GABAergic and glutamatergic inputs over a substantial interval. In agreement with experimental data, simulations in a reconstructed CA3 pyramidal neuron with physiological patterns of correlated activity revealed that coincident glutamatergic synaptic inputs contribute significantly to the activity-dependent [Cl-]i changes. Whereas the influence of spatial correlation between distributed glutamatergic and GABAergic inputs was negligible, their temporal correlation played a significant role. In summary, our results demonstrate that glutamatergic co-stimulation has a substantial impact on the ionic plasticity of GABAergic responses, enhancing the attenuation of GABAergic inhibition in the mature nervous system but suppressing GABAergic [Cl-]i changes in the immature brain. The glutamatergic shift in GABAergic Cl- fluxes should therefore be considered a relevant factor in short-term plasticity.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low dimensional subspace for natural but not for synthetic images. Critically, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low dimensional subspace, suggesting an alignment between the attentional and natural stimulus variance. The alignment was pronounced for late evoked responses but not for early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Sharp wave-ripples (SPW-Rs) are a hippocampal network phenomenon critical for memory consolidation and planning. SPW-Rs have been extensively studied in the adult brain, yet their developmental trajectory is poorly understood. While SPWs have been recorded in rodents shortly after birth, the time point and mechanisms of ripple emergence are still unclear. Here, we combine in vivo electrophysiology with optogenetics and chemogenetics in 4 to 12 days-old mice to address this knowledge gap. We show that ripples are robustly detected and induced by light stimulation of ChR2-transfected CA1 pyramidal neurons only from postnatal day (P) 10 onwards. Leveraging a spiking neural network model, we mechanistically link the maturation of inhibition and ripple emergence. We corroborate these findings by reducing ripple rate upon chemogenetic silencing of CA1 interneurons. Finally, we show that early SPW-Rs elicit a more robust prefrontal cortex response than SPWs lacking ripples. Thus, the development of inhibition promotes ripple emergence.
We present a model for the autonomous learning of active binocular vision using a recently developed biomechanical model of the human oculomotor system. The model is formulated in the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model simultaneously learns how to efficiently encode binocular images and how to generate accurate vergence eye movements that facilitate efficient encoding of the visual input. In order to resolve the redundancy problem arising from the actuation of the eyes through antagonistic muscle pairs, we consider the metabolic costs associated with eye movements. We show that the model successfully learns to trade off vergence accuracy against the associated metabolic costs, producing high fidelity vergence eye movements obeying Sherrington’s law of reciprocal innervation.
Motivation: Partial differential equations (PDEs) are a well-established and powerful tool for simulating multi-cellular biological systems. However, freely available tools for validating such models against data have been lacking. The PDEparams module provides flexible functionality in Python for parameter estimation in PDE models.
Results: The PDEparams module provides a flexible interface and readily accommodates different parameter analysis tools for PDE models, such as computation of likelihood profiles and parametric bootstrapping, along with direct visualisation of the results. To our knowledge, it is the first open, freely available tool for parameter fitting of PDE models.
Availability and implementation: The PDEparams module is distributed under the MIT license. The source code, usage instructions and step-by-step examples are freely available on GitHub at github.com/systemsmedicine/PDE_params.
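The parameter-fitting workflow that such a tool automates can be sketched in plain Python. This is a minimal illustration, not the PDEparams API: the solver, cost function, and parameter values below are assumptions chosen for a toy 1-D diffusion model.

```python
# Sketch of PDE parameter estimation (illustrative, NOT the PDEparams API):
# fit the diffusion coefficient D of u_t = D u_xx to synthetic noisy data.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(D, nx=50, nt=200, dx=0.1, dt=0.001):
    """Explicit finite-difference solution of the 1-D heat equation."""
    x = np.arange(nx) * dx
    u = np.exp(-((x - 2.5) ** 2))              # Gaussian initial condition
    for _ in range(nt):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

true_D = 1.0
data = simulate(true_D) + np.random.default_rng(0).normal(0, 1e-3, 50)

def sse(D):
    return np.sum((simulate(D) - data) ** 2)   # sum-of-squares cost

fit = minimize_scalar(sse, bounds=(0.1, 5.0), method="bounded")
print(f"estimated D = {fit.x:.3f}")            # close to the true value 1.0
```

A likelihood profile, as mentioned above, would follow from evaluating the same cost over a grid of fixed parameter values rather than a single minimization.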
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both, repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
We estimate the temperature dependence of the bulk viscosity in a relativistic hadron gas. Employing the Green–Kubo formalism in the SMASH (Simulating Many Accelerated Strongly-interacting Hadrons) transport approach, we study different hadronic systems in increasing order of complexity. We analyze the (in)validity of the single exponential relaxation ansatz for the bulk-channel correlation function and the strong influence of the resonances and their lifetimes. We discuss the difference between the inclusive bulk viscosity of an equilibrated, long-lived system and the effective bulk viscosity of a short-lived mixture like the hadronic phase of relativistic heavy-ion collisions, where the processes whose inverse relaxation rates exceed the fireball duration are excluded from the analysis. This clarifies the differences between previous approaches which computed the bulk viscosity including/excluding the very slow processes in the hadron gas. We compare our final results with previous hadron gas calculations and confirm a decreasing trend of the inclusive bulk viscosity over entropy density as temperature increases, whereas the effective bulk viscosity to entropy ratio, while being lower than the inclusive one, shows no strong dependence on temperature.
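The Green–Kubo logic can be illustrated with a toy signal: a transport coefficient is proportional to the time integral of an equilibrium autocorrelation function, and for a single-exponential relaxation that integral is simply C(0)·τ. Here an Ornstein–Uhlenbeck process stands in for the real SMASH pressure fluctuations; all parameter values are assumptions for illustration.

```python
# Toy Green-Kubo estimate: integrate the autocorrelation of a synthetic,
# exponentially correlated signal (Ornstein-Uhlenbeck process).
import numpy as np

rng = np.random.default_rng(1)
dt, n, tau = 0.01, 100_000, 0.5            # time step, samples, relaxation time
noise = rng.normal(0.0, np.sqrt(dt), n)
x = np.zeros(n)
for i in range(1, n):                      # OU update: dx = -x/tau dt + dW
    x[i] = x[i - 1] * (1.0 - dt / tau) + noise[i]

def autocorr(sig, max_lag):
    """Unnormalized autocorrelation C(l*dt) = <sig(t) sig(t + l*dt)>."""
    m = len(sig)
    return np.array([np.mean(sig[: m - l] * sig[l:]) for l in range(max_lag)])

c = autocorr(x, 500)                       # lags up to 5.0 = 10 * tau
gk = np.sum(c) * dt                        # Green-Kubo time integral of C(t)
# Single-exponential ansatz: the integral should equal C(0) * tau.
print(gk, c[0] * tau)
```

The abstract's point about slow processes is visible here: lags much longer than the system lifetime would have to be dropped from the integral, reducing the "effective" coefficient relative to the inclusive one.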
Individual differences in perception are widespread. Considering inter-individual variability, synesthetes experience stable additional sensations, while schizophrenia patients suffer perceptual deficits, e.g. in perceptual organization (alongside hallucinations and delusions). Is there a unifying principle explaining inter-individual variability in perception? There is good reason to believe perceptual experience results from inferential processes whereby sensory evidence is weighted by prior knowledge about the world. Different perceptual phenotypes may result from different precision weighting of sensory evidence and prior knowledge. We tested this hypothesis by comparing visibility thresholds in a perceptual hysteresis task across medicated schizophrenia patients, synesthetes, and controls. Participants rated the subjective visibility of stimuli embedded in noise while we parametrically manipulated the availability of sensory evidence. Additionally, precise long-term priors in synesthetes were leveraged by presenting either synesthesia-inducing or neutral stimuli. Schizophrenia patients showed increased visibility thresholds, consistent with overreliance on sensory evidence. In contrast, synesthetes exhibited lowered thresholds exclusively for synesthesia-inducing stimuli, suggesting high-precision long-term priors. Additionally, in both synesthetes and schizophrenia patients, explicit short-term priors – introduced during the hysteresis experiment – lowered thresholds but did not normalize perception. Our results imply that distinct perceptual phenotypes might result from differences in the precision afforded to prior beliefs and sensory evidence, respectively.
Cyclophilins, or immunophilins, are proteins found in many organisms including bacteria, plants and humans. Most of them display peptidyl-prolyl cis-trans isomerase activity, and play roles as chaperones or in signal transduction. Here, we show that cyclophilin anaCyp40 from the cyanobacterium Anabaena sp. PCC 7120 is enzymatically active, and seems to be involved in general stress responses and in assembly of photosynthetic complexes. The protein is associated with the thylakoid membrane and interacts with phycobilisome and photosystem components. Knockdown of anacyp40 leads to growth defects under high-salt and high-light conditions, and reduced energy transfer from phycobilisomes to photosystems. Elucidation of the anaCyp40 crystal structure at 1.2-Å resolution reveals an N-terminal helical domain with similarity to PsbQ components of plant photosystem II, and a C-terminal cyclophilin domain with a substrate-binding site. The anaCyp40 structure is distinct from that of other multi-domain cyclophilins (such as Arabidopsis thaliana Cyp38), and presents features that are absent in single-domain cyclophilins.
Glutathione (GSH) is the main determinant of intracellular redox potential and participates in multiple cellular signaling pathways. Achieving a detailed understanding of intracellular GSH trafficking and regulation depends on the development of tools to map GSH compartmentalization and intra-organelle fluctuations. Herein, we present a new GSH sensing platform, TRaQ-G, for live-cell imaging. This small-molecule/protein hybrid sensor possesses a unique reactivity turn-on mechanism that ensures that the small molecule is only sensitive to GSH in the desired location. Furthermore, TRaQ-G can be fused to a fluorescent protein of choice to give a ratiometric response. Using TRaQ-G-mGold, we demonstrated that the nuclear and cytosolic GSH pools are independently regulated during cell proliferation. We also used this sensor, in combination with roGFP, to quantify redox potential and GSH concentration simultaneously in the endoplasmic reticulum. Finally, by exchanging the fluorescent protein, we created a near-infrared, targetable and quantitative GSH sensor.
The novel coronavirus (SARS-CoV-2), identified in China at the end of December 2019 and causing the disease COVID-19, has meanwhile led to outbreaks all over the globe, with about 571,700 confirmed cases and about 26,500 deaths as of March 28th, 2020. We present here the preliminary results of a mathematical study directed at informing on the possible application or lifting of control measures in Germany. The developed mathematical models allow us to study the spread of COVID-19 among the population in Germany and to assess the impact of non-pharmaceutical interventions.
The novel coronavirus (SARS-CoV-2), identified in China at the end of December 2019 and causing the disease COVID-19, has meanwhile led to outbreaks all over the globe, with about 2.2 million confirmed cases and more than 150,000 deaths as of April 17, 2020 [37]. In view of most recent information on testing activity [32], we present here an update of our initial work [4]. In this work, mathematical models have been developed to study the spread of COVID-19 among the population in Germany and to assess the impact of non-pharmaceutical interventions. Systems of differential equations of SEIR type are extended here to account for undetected infections, as well as for stages of infection and age groups. The models are calibrated on data up to April 5; data from April 6 to 14 are used for model validation. We simulate different possible strategies for the mitigation of the current outbreak, slowing down the spread of the virus and thus reducing the peak in daily diagnosed cases, the demand for hospitalization or intensive care unit admissions, and eventually the number of fatalities. Our results suggest that a partial (and gradual) lifting of introduced control measures could soon be possible if accompanied by further increased testing activity, strict isolation of detected cases, and reduced contact to risk groups.
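The basic SEIR skeleton that such models extend can be sketched in a few lines. The rates below are assumed, illustrative values, not the paper's calibrated parameters, and the undetected-infection, stage, and age-group extensions are omitted.

```python
# Minimal SEIR sketch: S -> E -> I -> R with transmission beta,
# incubation rate sigma, and recovery rate gamma (all values assumed).
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.4, 1 / 5.5, 1 / 7   # transmission, incubation, recovery
N = 83e6                                   # approximate population of Germany

def seir(t, y):
    S, E, I, R = y
    new_inf = beta * S * I / N             # force of infection
    return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

y0 = [N - 100, 0.0, 100.0, 0.0]            # start with 100 infectious cases
sol = solve_ivp(seir, (0, 300), y0, t_eval=np.linspace(0, 300, 301))
peak_day = sol.t[np.argmax(sol.y[2])]
print(f"infection peak around day {peak_day:.0f}")
```

Interventions enter such a model by lowering beta over given time windows, which flattens and delays the infectious peak.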
To understand the neural mechanisms underlying brain function, neuroscientists aim to quantify causal interactions between neurons, for instance by perturbing the activity of neuron A and measuring the effect on neuron B. Recently, manipulating neuron activity using light-sensitive opsins, optogenetics, has increased the specificity of neural perturbation. However, with widefield optogenetic interventions, multiple neurons are usually perturbed at once, producing a confound: any of the stimulated neurons may have affected the postsynaptic neuron, making it challenging to discern which neuron produced the causal effect. Here, we show how such confounds produce large biases in interpretations. We explain how confounding can be reduced by combining instrumental variables (IV) and difference-in-differences (DiD) techniques from econometrics. Combined, these methods can estimate (causal) effective connectivity by exploiting the weak, approximately random signal resulting from the interaction between stimulation and the absolute refractory period of the neuron. In simulated neural networks, we find that estimates using ideas from IV and DiD outperform naive techniques, suggesting that methods from causal inference can be useful for disentangling neural interactions in the brain.
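The core instrumental-variable idea can be sketched on simulated data. The variables below are generic illustrations, not the paper's neural estimators: a hidden confounder biases the naive regression, while an instrument that influences the treatment but not the outcome directly (analogous to the stimulation/refractory-period interaction) recovers the causal effect.

```python
# IV (Wald estimator) vs. naive regression on simulated confounded data.
import numpy as np

rng = np.random.default_rng(0)
n, true_effect = 100_000, 0.8
u = rng.normal(size=n)                     # unobserved confounder
z = rng.normal(size=n)                     # instrument: affects x, not y directly
x = 0.5 * z + u + rng.normal(size=n)       # "treatment" (e.g. presynaptic spiking)
y = true_effect * x + 2.0 * u + rng.normal(size=n)   # "outcome"

naive = np.cov(x, y)[0, 1] / np.var(x)             # OLS slope, biased upward by u
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]       # Wald/IV estimator
print(f"naive {naive:.2f} vs IV {iv:.2f} (true {true_effect})")
```

The IV ratio is unbiased because the instrument is uncorrelated with the confounder, so its covariance with the outcome flows only through the treatment.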
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
Recent advances in artificial neural networks enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts so as to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task, and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we will introduce RL and present recent advances in RL applied to robotics. We will review Intrinsically Motivated (IM) learning, a special form of RL, and we will apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also propose an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (<1.5% of a 10-min resting-state fMRI scan) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
Very-long-baseline interferometry (VLBI) observations of active galactic nuclei at millimetre wavelengths have the power to reveal the launching and initial collimation region of extragalactic radio jets, down to 10–100 gravitational radii (rg ≡ GM/c2) scales in nearby sources. Centaurus A is the closest radio-loud source to Earth. It bridges the gap in mass and accretion rate between the supermassive black holes (SMBHs) in Messier 87 and our Galactic Centre. A large southern declination of −43° has, however, prevented VLBI imaging of Centaurus A below a wavelength of 1 cm thus far. Here we show the millimetre VLBI image of the source, which we obtained with the Event Horizon Telescope at 228 GHz. Compared with previous observations, we image the jet of Centaurus A at a tenfold higher frequency and sixteen times sharper resolution and thereby probe sub-lightday structures. We reveal a highly collimated, asymmetrically edge-brightened jet as well as the fainter counterjet. We find that the source structure of Centaurus A resembles the jet in Messier 87 on ~500 rg scales remarkably well. Furthermore, we identify the location of Centaurus A’s SMBH with respect to its resolved jet core at a wavelength of 1.3 mm and conclude that the source’s event horizon shadow should be visible at terahertz frequencies. This location further supports the universal scale invariance of black holes over a wide range of masses.
The cortical networks that underlie behavior exhibit an orderly functional organization at local and global scales, which is readily evident in the visual cortex of carnivores and primates [1-6]. Here, neighboring columns of neurons represent the full range of stimulus orientations and contribute to distributed networks spanning several millimeters [2,7-11]. However, the principles governing functional interactions that bridge this fine-scale functional architecture and distant network elements are unclear, and the emergence of these network interactions during development remains unexplored. Here, by using in vivo wide-field and 2-photon calcium imaging of spontaneous activity patterns in mature ferret visual cortex, we find widespread and specific modular correlation patterns that accurately predict the local structure of visually-evoked orientation columns from the spontaneous activity of neurons that lie several millimeters away. The large-scale networks revealed by correlated spontaneous activity show abrupt ‘fractures’ in continuity that are in tight register with evoked orientation pinwheels. Chronic in vivo imaging demonstrates that these large-scale modular correlation patterns and fractures are already present at early stages of cortical development and predictive of the mature network structure. Silencing feed-forward drive through either retinal or thalamic blockade does not affect network structure, suggesting a cortical origin for this large-scale correlated activity, despite the immaturity of long-range horizontal network connections in the early cortex. Using a circuit model containing only local connections, we demonstrate that such a circuit is sufficient to generate large-scale correlated activity, while also producing correlated networks showing strong fractures, a reduced dimensionality, and an elongated local correlation structure, all in close agreement with our empirical data.
These results demonstrate the precise local and global organization of cortical networks revealed through correlated spontaneous activity and suggest that local connections in early cortical circuits may generate structured long-range network correlations that underlie the subsequent formation of visually-evoked distributed functional networks.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this ‘nature-nurture transform’ using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The development of binocular vision is an active learning process comprising the development of disparity tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions, the model develops disparity tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control affects the statistics of orientation tuning in visual cortical neurons.
Mounting evidence suggests that perception depends on a largely feedforward brain network. However, the discrepancy between (i) the latency of the corresponding feedforward responses (150-200 ms) and (ii) the time it takes human subjects to recognize brief images (often >500 ms) suggests that recurrent neuronal activity is critical to visual processing. Here, we use magneto-encephalography to localize, track and decode the feedforward and recurrent responses elicited by brief presentations of variably-ambiguous letters and digits. We first confirm that these stimuli trigger, within the first 200 ms, a feedforward response in the ventral and dorsal cortical pathways. The subsequent activity is distributed across temporal, parietal and prefrontal cortices and leads to a slow and incremental cascade of representations culminating in action-specific motor signals. We introduce an analytical framework to show that these brain responses are best accounted for by a hierarchy of recurrent neural assemblies. An accumulation of computational delays across specific processing stages explains subjects’ reaction times. Finally, the slow convergence of neural representations towards perceptual categories is quickly followed by all-or-none motor decision signals. Together, these results show how recurrent processes generate, over extended time periods, a cascade of hierarchical decisions that ultimately predicts subjects’ perceptual reports.
The spike protein (S) of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the primary focus for vaccine development. In this study, we combined cryo–electron tomography, subtomogram averaging, and molecular dynamics simulations to structurally analyze S in situ. Compared with the recombinant S, the viral S was more heavily glycosylated and occurred mostly in the closed prefusion conformation. We show that the stalk domain of S contains three hinges, giving the head unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and potentially to the development of safe vaccines.
The spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the major focus for vaccine development. We combine cryo-electron tomography, subtomogram averaging and molecular dynamics simulations to structurally analyze S in situ. Compared to recombinant S, the viral S is more heavily glycosylated and occurs predominantly in a closed pre-fusion conformation. We show that the stalk domain of S contains three hinges that give the globular domain unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and the development of safe vaccines. The large-scale tomography data set of SARS-CoV-2 used for this study is sufficient to resolve structural features to below 5 Ångström and is publicly available at EMPIAR-10453.
Abstract
The primary immunological target of COVID-19 vaccines is the SARS-CoV-2 spike (S) protein. S is exposed on the viral surface and mediates viral entry into the host cell. To identify possible antibody binding sites, we performed multi-microsecond molecular dynamics simulations of a 4.1 million atom system containing a patch of viral membrane with four full-length, fully glycosylated and palmitoylated S proteins. By mapping steric accessibility, structural rigidity, sequence conservation, and generic antibody binding signatures, we recover known epitopes on S and reveal promising epitope candidates for structure-based vaccine design. We find that the extensive and inherently flexible glycan coat shields a surface area larger than expected from static structures, highlighting the importance of structural dynamics. The protective glycan shield and the high flexibility of its hinges give the stalk overall low epitope scores. Our computational epitope-mapping procedure is general and should thus prove useful for other viral envelope proteins whose structures have been characterized.
Author summary
The SARS-CoV-2 virus has caused a global health crisis. The spike protein exposed at its surface is key for infection and the primary antibody target. However, spike is covered by highly mobile glycan molecules that could impair antibody binding. To identify accessible epitopes, we performed molecular dynamics simulations of an atomistic model of glycosylated spike embedded in a membrane. By combining extensive simulations with bioinformatics analyses, we recovered known antibody binding sites and identified several epitope candidates as targets for further vaccine development.
Neural computations emerge from recurrent neural circuits that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, it is challenging to predict which spiking network connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions for the computational state of cortical spiking circuits are diverse, including the balanced state where excitatory and inhibitory inputs balance almost perfectly or the inhibition-stabilized state (ISN) where the excitatory part of the circuit is unstable. It remains an open question whether these states can co-exist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We established a mapping between the stabilized supralinear network (SSN) and spiking activity which allowed us to pinpoint the location in parameter space where these activity regimes occur. Notably, we found that biologically-sized spiking networks can have irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input and we showed that the dynamic firing rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
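The stabilized supralinear network referred to above can be illustrated with a minimal two-unit (E, I) rate model; the weights and gain parameters below are illustrative assumptions, not those mapped to spiking networks in the paper:

```python
def ssn_rates(h, W, k=0.04, n=2.0, tau=(20.0, 10.0), dt=0.1, steps=20000):
    """Euler-integrate a 2-unit (E, I) stabilized supralinear network (SSN):
    tau_a * dr_a/dt = -r_a + k * max(sum_b W[a][b] * r_b + h, 0) ** n."""
    rE, rI = 0.0, 0.0
    for _ in range(steps):
        zE = max(W[0][0] * rE + W[0][1] * rI + h, 0.0)
        zI = max(W[1][0] * rE + W[1][1] * rI + h, 0.0)
        rE += dt * (-rE + k * zE ** n) / tau[0]
        rI += dt * (-rI + k * zI ** n) / tau[1]
    return rE, rI

# illustrative weights: rows = (E, I) targets, columns = (E, I) sources
W = [[1.25, -0.65], [1.2, -0.5]]
rE_lo, rI_lo = ssn_rates(2.0, W)   # weak input: weakly coupled regime
rE_hi, rI_hi = ssn_rates(10.0, W)  # strong input: inhibition-dominated regime
```

With these parameters the strong-input fixed point has an unstable excitatory subcircuit stabilized by inhibition, the ISN regime mentioned in the abstract.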
Autophagosome biogenesis requires a localized perturbation of lipid membrane dynamics and a unique protein-lipid conjugate. Autophagy-related (ATG) proteins catalyze this biogenesis on cellular membranes, but the underlying molecular mechanism remains unclear. Focusing on the final step of the protein-lipid conjugation reaction, ATG8/LC3 lipidation, we show how membrane association of the conjugation machinery is organized and fine-tuned at the atomistic level. Amphipathic α-helices in ATG3 proteins (AHATG3) are found to have low hydrophobicity and to be less bulky. Molecular dynamics simulations reveal that AHATG3 regulates the dynamics and accessibility of the thioester bond of the ATG3∼LC3 conjugate to lipids, allowing covalent lipidation of LC3. Live cell imaging shows that the transient membrane association of ATG3 with autophagic membranes is governed by the less bulky-hydrophobic feature of AHATG3. Collectively, the unique properties of AHATG3 facilitate protein-lipid bilayer association leading to the remodeling of the lipid bilayer required for the formation of autophagosomes.
Human lymph nodes play a central part in the immune defense against infectious agents and tumor cells. Lymphoid follicles are spherical compartments of the lymph node that are mainly filled with B cells, cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and the spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and the development stage of a disease. We report a new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immuno-stained with actin. The method is divided into two phases: (1) shock filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
Afterimages result from prolonged exposure to still visual stimuli. They are best detectable when viewed against uniform backgrounds and can persist for multiple seconds. Consequently, the dynamics of afterimages appear to be slow by their very nature. On the contrary, we report here that about 50% of an afterimage's intensity can be erased rapidly, within less than a second. The prerequisite is that subjects view rich visual content to erase the afterimage; fast erasure of afterimages does not occur if subjects view a blank screen. Moreover, we find evidence that fast removal of afterimages is a skill learned with practice, as our subjects were always more effective in cleaning up afterimages in later parts of the experiment. These results can be explained by a tri-level hierarchy of adaptive mechanisms, as proposed by the theory of practopoiesis.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in postexposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
The COVID-19 pandemic is a major public health threat, with unanswered questions regarding the role of the immune system in the severity of the disease. In this paper, based on antibody kinetic data of patients with different disease severity, topological data analysis highlights clear differences in the shape of antibody dynamics between three groups of patients: non-severe, severe, and one intermediate case of severity. Subsequently, different mathematical models were developed to quantify the dynamics between the different severity groups. The best model was the one with the lowest median value of the Akaike Information Criterion across all groups of patients. Although high IgG levels have been reported in severe patients, our findings suggest that IgG antibodies in severe patients may be less effective than those of non-severe patients due to early B cell production and early activation of the seroconversion process from IgM to IgG antibody.
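Model selection by the Akaike Information Criterion, as used above, can be sketched on toy data; the two candidate models and the synthetic decay data below are illustrative stand-ins, not the antibody models of the paper:

```python
import math

# deterministic synthetic decay data (toy): y = 10 * exp(-0.3 t) plus a small
# alternating perturbation standing in for measurement noise
ts = list(range(20))
ys = [10.0 * math.exp(-0.3 * t) + 0.1 * (-1) ** t for t in ts]

def aic(rss, n, n_params):
    # least-squares form of AIC: n * ln(RSS / n) + 2 * k
    return n * math.log(rss / n) + 2 * n_params

# model 1: one-parameter exponential y = y0 * exp(-k t); amplitude y0 is
# pinned to the first observation and k is found by a coarse grid search
y0 = ys[0]
best_rss = min(
    sum((y - y0 * math.exp(-(k / 1000.0) * t)) ** 2 for t, y in zip(ts, ys))
    for k in range(1, 1001)
)
aic_exp = aic(best_rss, len(ys), 1)

# model 2: constant level (sample mean), also one fitted parameter
mean = sum(ys) / len(ys)
rss_const = sum((y - mean) ** 2 for y in ys)
aic_const = aic(rss_const, len(ys), 1)
```

The model whose AIC is lowest (here the exponential, since the data are generated by decay) is preferred; with several patient groups one would compare the median AIC per model, as the abstract describes.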
Untangling the cell immune response dynamic for severe and critical cases of SARS-CoV-2 infection
(2021)
COVID-19 is a global pandemic causing high death tolls worldwide day by day. Clinical evidence suggests that COVID-19 patients can be classified as non-severe, severe and critical cases. In particular, studies have highlighted the relationship between lymphopenia and the severity of the illness, where CD8+ T cells have the lowest levels in critical cases. In this work, we aim to elucidate the key parameters that determine whether the course of the disease deviates from severe to critical. To this end, several mathematical models are proposed to represent the dynamics of the immune response in patients with SARS-CoV-2 infection. The best model had a good fit to the reported experimental data, with parameter values in accordance with those found in the literature. Our results suggest that a rapid proliferation of CD8+ T cells is decisive in the severity of the disease.
Tracking influenza A virus infection in the lung from hematological data with machine learning
(2022)
The tracking of pathogen burden and host responses with minimally invasive methods during respiratory infections is central for monitoring disease development and guiding treatment decisions. Utilizing a standardized murine model of respiratory influenza A virus (IAV) infection, we developed and tested different supervised machine learning models to predict viral burden and immune response markers, i.e. cytokines and leukocytes in the lung, from hematological data. We performed independent in vivo infection experiments to acquire extensive data for training and testing the models. We show here that lung viral load, neutrophil counts, cytokines such as IFN-γ and IL-6, and other lung infection markers can be predicted from hematological data. Furthermore, feature analysis of the models shows that blood granulocytes and platelets play a crucial role in prediction and are highly involved in the immune response against IAV. The proposed in silico tools pave the path towards improved tracking and monitoring of influenza infections, and possibly other respiratory infections, based on minimally invasively obtained hematological parameters.
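As a toy analogue of predicting lung infection markers from blood data, a plain k-nearest-neighbour regressor on hypothetical hematological features; the paper's actual models, features, and data are not reproduced here:

```python
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, target); plain k-NN regression by
    averaging the targets of the k closest training points."""
    ranked = sorted(train, key=lambda ft: math.dist(ft[0], query))
    return sum(target for _, target in ranked[:k]) / k

# hypothetical training set: two blood features ('neutrophil' and 'platelet'
# scores) mapped to a toy 'lung viral load' by a made-up linear rule
train = [((n, p), 2.0 * n + 0.5 * p)
         for n in range(1, 11) for p in range(1, 11)]

pred = knn_predict(train, (4.5, 7.2))  # true toy value would be 12.6
```

A real pipeline would add feature scaling, cross-validation, and a feature-importance analysis as described in the abstract.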
Abstract
Co-infections by multiple pathogens have important implications in many aspects of health, epidemiology and evolution. However, how to disentangle the contributing factors of the immune response when two infections take place at the same time is largely unexplored. Using data sets of the immune response during influenza-pneumococcal co-infection in mice, we employ here topological data analysis to simplify and visualise high dimensional data sets.
We identified persistent shapes of the simplicial complexes of the data in the three infection scenarios: single viral infection, single bacterial infection, and co-infection. The immune response was found to be distinct for each of the infection scenarios, and we uncovered that the immune response during the co-infection has three phases and two transition points. During the first phase, its dynamics are inherited from the response to the primary (viral) infection. The immune response then undergoes an early transition (a few hours post co-infection) and modulates its response to finally react against the secondary (bacterial) infection. Between 18 and 26 hours post co-infection the nature of the immune response changes again and no longer resembles either of the single infection scenarios.
Author summary
The mapper algorithm is a topological data analysis technique used for the qualitative analysis, simplification and visualisation of high dimensional data sets. It generates a low-dimensional image that captures topological and geometric information of the data set in high dimensional space, which can highlight groups of data points of interest and can guide further analysis and quantification.
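A minimal sketch of the mapper pipeline described above, under simplifying assumptions: a one-dimensional lens, fixed-size overlapping intervals, and naive single-linkage clustering (parameter values are illustrative):

```python
import math
from collections import defaultdict

def cluster(indices, points, eps):
    """Naive single-linkage clustering: merge points closer than eps (union-find)."""
    parent = {i: i for i in indices}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in indices:
        for b in indices:
            if a < b and math.dist(points[a], points[b]) < eps:
                parent[find(a)] = find(b)
    groups = defaultdict(set)
    for i in indices:
        groups[find(i)].add(i)
    return list(groups.values())

def mapper(points, lens, n_intervals=4, overlap=0.5, eps=0.25):
    """1-D mapper: cover the lens range with overlapping intervals, cluster
    each preimage, and connect clusters that share points."""
    lo, hi = min(lens), max(lens)
    length = (hi - lo) / (n_intervals * (1 - overlap) + overlap)
    step = length * (1 - overlap)
    nodes = []  # each node is a set of point indices
    for k in range(n_intervals):
        a = lo + k * step
        b = a + length
        member = [i for i, f in enumerate(lens)
                  if a <= f < b or (k == n_intervals - 1 and b <= f <= hi)]
        nodes.extend(cluster(member, points, eps))
    edges = sorted((i, j) for i in range(len(nodes))
                   for j in range(i + 1, len(nodes)) if nodes[i] & nodes[j])
    return nodes, edges

# a line of points with the x-coordinate as lens yields a path-shaped graph
points = [(i * 0.1, 0.0) for i in range(101)]
nodes, edges = mapper(points, [pt[0] for pt in points])
```

The output graph is the low-dimensional image the summary refers to: nodes are clusters of data points, and edges record shared points between overlapping preimages.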
To understand how the immune system evolves during the co-infection between viruses and bacteria, and the role of specific cytokines as contributing factors for these severe infections, we use Topological Data Analysis (TDA) along with an extensive semi-unsupervised parameter value grid search, and k-nearest neighbour analysis.
We find persistent shapes of the data in the three infection scenarios: single viral infection, single bacterial infection, and co-infection. The immune response is shown to be distinct for each of the infection scenarios, and we uncover that the immune response during the co-infection has three phases and two transition points, a previously unknown property of the dynamics of the immune response during co-infection.
Learning in the eyes: specific changes in gaze patterns track explicit and implicit visual learning
(2020)
What is the link between eye movements and sensory learning? Although some theories have argued for a permanent and automatic interaction between what we know and where we look, which continuously modulates human information-gathering behavior during both implicit and explicit learning, there exists surprisingly little evidence supporting such an ongoing interaction. We used a pure form of implicit learning called visual statistical learning and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, eye movement patterns systematically changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount of knowledge the observers acquired. Our results provide the first evidence for an ongoing and specific interaction between hitherto accumulated knowledge and eye movements during both implicit and explicit learning.
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities, which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (<3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast and against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, such as network-defining high-cofluctuation states, but is rather reflected across the entire length of the brain connectivity time series.
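The edge-wise cofluctuation decomposition described above (coactivation of region pairs over time) can be sketched as follows; the synthetic signals and the selection of top frames are toy stand-ins, and CMEP itself is not implemented here:

```python
import math

def zscore(x):
    m = sum(x) / len(x)
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / len(x))
    return [(v - m) / sd for v in x]

def cofluctuation(signals):
    """signals: list of N region time series of length T. Returns the per-frame
    cofluctuation amplitude (RMS over edges) and the edge-wise time series
    c_ij(t) = z_i(t) * z_j(t)."""
    z = [zscore(s) for s in signals]
    N, T = len(z), len(z[0])
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    c = {e: [z[e[0]][t] * z[e[1]][t] for t in range(T)] for e in pairs}
    amp = [math.sqrt(sum(c[e][t] ** 2 for e in pairs) / len(pairs))
           for t in range(T)]
    return amp, c

# synthetic example: regions 0 and 1 coupled, region 2 independent
T = 200
s0 = [math.sin(0.10 * t) for t in range(T)]
s1 = [math.sin(0.10 * t + 0.1) for t in range(T)]
s2 = [math.sin(0.37 * t + 1.0) for t in range(T)]
amp, c = cofluctuation([s0, s1, s2])

# full-length FC estimate per edge (equals the Pearson correlation) versus an
# estimate restricted to the 16 highest-cofluctuation frames
fc_full = {e: sum(series) / T for e, series in c.items()}
top = sorted(range(T), key=lambda t: amp[t], reverse=True)[:16]
fc_top = {e: sum(series[t] for t in top) / len(top) for e, series in c.items()}
```

Averaging the edge time series over all frames recovers ordinary functional connectivity; restricting the average to a few high-amplitude frames gives the sparse estimates the abstract evaluates.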
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
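One plausible minimal form of such a pool-competition model can be sketched as follows; the equations and parameters are illustrative assumptions, not necessarily those of the paper. Potentiating one synapse (raising its receptor-binding rate) transiently draws receptors from the shared pool and depresses its neighbour, the heterosynaptic effect described above:

```python
def simulate_receptors(alpha, beta=1.0, delta=1.0, gamma=1.0,
                       w0=1.0, p0=1.0, t_end=100.0, dt=0.01):
    """Euler-integrate a toy receptor-competition model (hypothetical form):
       dw_i/dt = alpha_i * p - beta * w_i            (binding / unbinding)
       dp/dt   = beta * sum(w) - p * sum(alpha) + delta - gamma * p
    Synapses w_i draw receptors from a shared dendritic pool p."""
    w = [w0] * len(alpha)
    p = p0
    history = []
    for _ in range(int(t_end / dt)):
        dp = beta * sum(w) - p * sum(alpha) + delta - gamma * p
        for i in range(len(w)):
            w[i] += dt * (alpha[i] * p - beta * w[i])
        p += dt * dp
        history.append(list(w))
    return w, p, history

# two identical synapses start at the steady state (w1 = w2 = p = 1) of
# alpha = [1, 1]; doubling alpha_1 models potentiation of synapse 1
w, p, hist = simulate_receptors(alpha=[2.0, 1.0])
w2_min = min(h[1] for h in hist)  # transient heterosynaptic depression of w2
```

At steady state each efficacy is proportional to its binding rate times the shared pool level, which yields the multiplicative scaling behaviour the abstract mentions; the dip of w2 below baseline is the transient heterosynaptic plasticity.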
Bacteria of the genera Photorhabdus and Xenorhabdus produce a plethora of natural products to support their similar symbiotic lifecycles. For many of these compounds, the specific bioactivities are unknown. One common challenge in natural product research when trying to prioritize research efforts is the rediscovery of identical (or highly similar) compounds from different strains. Linking genome sequence to metabolite production can help in overcoming this problem. However, sequences are typically not available for entire collections of organisms. Here we perform a comprehensive metabolic screening using HPLC-MS data associated with a 114-strain collection (58 Photorhabdus and 56 Xenorhabdus) from across Thailand and explore the metabolic variation among the strains, matched with several abiotic factors. We utilize machine learning in order to rank the importance of individual metabolites in determining all given metadata. With this approach, we were able to prioritize metabolites in the context of natural product investigations, leading to the identification of previously unknown compounds. The top three highest-ranking features were associated with Xenorhabdus and attributed to the same chemical entity, cyclo(tetrahydroxybutyrate). This work addresses the need for prioritization in high-throughput metabolomic studies and demonstrates the viability of such an approach in future research.
Antimicrobial resistance is a major threat to global health and food security today. Scheduling cycling therapies by targeting phenotypic states associated with specific mutations can help us to eradicate pathogenic variants in chronic infections. In this paper, we introduce a logistic switching model to abstract mutation networks of collateral resistance. We find particular conditions under which the unstable zero equilibrium of the logistic maps can be stabilized through a switching signal. That is, persistent populations can be eradicated through tailored switching regimens.
Starting from an optimal-control formulation, the switching policies show their potential for stabilizing the zero equilibrium of dynamics governed by logistic maps. However, such switching strategies deserve a specific characterization in terms of their limit behaviour. Ultimately, we use evolutionary and control algorithms to find both optimal and sub-optimal switching policies. Simulation results show the applicability of Parrondo's paradox to the design of cycling therapies against drug resistance.
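The stabilization principle above can be illustrated with a toy periodic switching signal between two logistic maps; the growth rates and the switching rule are illustrative assumptions, not the conditions or policies derived in the paper:

```python
def logistic_step(x, r):
    # one iteration of the logistic map x -> r * x * (1 - x)
    return r * x * (1.0 - x)

def simulate(x0, rates, steps):
    """Iterate x_{n+1} = r_{sigma(n)} * x_n * (1 - x_n), with the switching
    signal sigma cycling periodically through `rates`."""
    x = x0
    for n in range(steps):
        x = logistic_step(x, rates[n % len(rates)])
    return x

# a strain that persists on its own: r > 1 makes the zero equilibrium unstable
x_persist = simulate(0.5, [1.6], 200)

# periodic switching with a second regime: near zero the switched system
# contracts when the geometric mean of the growth rates is below 1
# (here sqrt(1.6 * 0.5) ~ 0.89), so the population is driven to extinction
x_switched = simulate(0.5, [1.6, 0.5], 200)
```

With r = 1.6 alone the population settles near the positive fixed point 1 - 1/r ≈ 0.375, while the switched schedule stabilizes the otherwise unstable zero equilibrium, i.e. a tailored regimen eradicates a persistent population.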
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications remains largely debated. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza disease at the within-host and population levels. Analytical and simulation results indicate that even assuming unrealistically high efficacies for NAIs, drug intake starting at the onset of symptoms has a negligible effect on an individual's viral load and symptoms score. Increasing NAI doses does not provide a better outcome, as is generally believed. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the use of NAIs in general to treat influenza, as well as the respective stockpiling by regulatory authorities.
The successful elimination of bacteria such as Streptococcus pneumoniae from a host involves the coordination between different parts of the immune system. Previous studies have explored the effects of the initial pneumococcal load (bacterial dose) on different representations of innate immunity, finding that pathogenic outcomes can vary with the size of the bacterial dose. However, others yield support to the notion of dose-independent factors contributing to bacterial clearance. In this paper, we seek to provide a deeper understanding of the immune responses associated with the pneumococcus. To this end, we formulate a model that realizes an abstraction of the innate-regulatory immune host response. Stability and bifurcation analyses of the model reveal the following trichotomy of pneumococcal outcomes determined by the bifurcation parameters: (i) dose-independent clearance; (ii) dose-independent persistence; and (iii) dose-limited clearance. Bistability, where the bacteria-free equilibrium co-stabilizes with the most substantial steady-state bacterial load, is the specific result behind dose-limited clearance. The trichotomy of pneumococcal outcomes here described integrates all previously observed bacterial fates into a unified framework.
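The dose-limited clearance regime above can be illustrated with a toy one-dimensional model; the specific functional form (logistic growth opposed by a saturating immune-killing term) and all parameter values are illustrative assumptions, not the paper's model:

```python
def simulate_bacteria(b0, r=1.0, K=10.0, a=2.0, dt=0.01, t_end=80.0):
    """Euler-integrate dB/dt = r*B*(1 - B/K) - a*B/(1 + B): logistic growth
    opposed by a saturating (innate-immune) killing term. With a > r the
    bacteria-free equilibrium is stable, yet a stable high-load equilibrium
    coexists with it (bistability)."""
    b = b0
    for _ in range(int(t_end / dt)):
        b += dt * (r * b * (1.0 - b / K) - a * b / (1.0 + b))
        b = max(b, 0.0)  # populations stay non-negative
    return b

low = simulate_bacteria(0.5)   # small dose, below the threshold: cleared
high = simulate_bacteria(3.0)  # large dose: persists near the high equilibrium
```

For these parameters the unstable threshold sits at B = (9 - √41)/2 ≈ 1.30 and the stable carrier state at (9 + √41)/2 ≈ 7.70, so the fate of the infection depends only on which side of the threshold the initial dose falls.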
The COVID-19 pandemic has underlined the threat that emergent pathogens pose to human health. Quantitative approaches to advance comprehension of the current outbreak are urgently needed to tackle this severe disease. In this work, several mathematical models are proposed to represent SARS-CoV-2 dynamics in infected patients. Considering different starting times of infection, parameter sets that represent the infectivity of SARS-CoV-2 are computed and compared with those of other viral infections that can also cause pandemics.
Based on the target cell model, SARS-CoV-2 infects susceptible cells much more slowly (mean of approximately 30 days) than reported for Ebola (about 3 times slower) and influenza (60 times slower). The within-host reproductive number for SARS-CoV-2 is consistent with the values for influenza infection (1.7-5.35). The model that best fit the data included immune responses, suggesting a slow cell-mediated response that peaks between 5 and 10 days post onset of symptoms. A model with an eclipse phase, in which infected cells pass through a latent stage before becoming productively infected, was not supported. Interestingly, both the target cell model and the model with immune responses predict that the virus may replicate very slowly in the first days after infection and may remain below detection levels during the first 4 days post infection. A quantitative comprehension of SARS-CoV-2 dynamics and the estimation of standard viral infection parameters are the key contributions of this pioneering work.
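The target-cell-limited model referenced above is standardly written as three ODEs for target cells T, infected cells I, and virus V, with within-host reproductive number R0 = beta*p*T0/(c*delta). The sketch below uses illustrative parameter values, not the fitted estimates from the study.

```python
import numpy as np

def basic_r0(beta, p, T0, c, delta):
    """Within-host reproductive number of the target cell model."""
    return beta * p * T0 / (c * delta)

def target_cell_model(T0=1e4, beta=1e-4, delta=0.5, p=10.0, c=5.0,
                      V0=1.0, dt=0.01, days=60):
    """Euler integration of the target-cell-limited model:
    dT/dt = -beta*T*V, dI/dt = beta*T*V - delta*I, dV/dt = p*I - c*V.
    Returns the viral load trajectory."""
    T, I, V = T0, 0.0, V0
    trace = []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        trace.append(V)
    return np.array(trace)

# With these illustrative parameters R0 = 4: the viral load expands,
# peaks as target cells are consumed, then resolves.
viral_load = target_cell_model()
```

The same structure, with an extra compartment for cells in a latent stage, gives the eclipse-phase variant that the study found unsupported.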
The severity of the COVID-19 pandemic, caused by the SARS-CoV-2 coronavirus, calls for the urgent development of a vaccine. The primary immunological target is the SARS-CoV-2 spike (S) protein. S is exposed on the viral surface to mediate viral entry into the host cell. To identify possible antibody binding sites not shielded by glycans, we performed multi-microsecond molecular dynamics simulations of a 4.1 million atom system containing a patch of viral membrane with four full-length, fully glycosylated and palmitoylated S proteins. By mapping steric accessibility, structural rigidity, sequence conservation and generic antibody binding signatures, we recover known epitopes on S and reveal promising epitope candidates for vaccine development. We find that the extensive and inherently flexible glycan coat shields a surface area larger than expected from static structures, highlighting the importance of structural dynamics in epitope mapping.
In particle collider experiments, elementary particle interactions with large momentum transfer produce quarks and gluons (known as partons) whose evolution is governed by the strong force, as described by the theory of quantum chromodynamics (QCD) [1]. These partons subsequently emit further partons in a process that can be described as a parton shower [2], which culminates in the formation of detectable hadrons. Studying the pattern of the parton shower is one of the key experimental tools for testing QCD. This pattern is expected to depend on the mass of the initiating parton, through a phenomenon known as the dead-cone effect, which predicts a suppression of the gluon spectrum emitted by a heavy quark of mass mQ and energy E, within a cone of angular size mQ/E around the emitter [3]. Previously, a direct observation of the dead-cone effect in QCD had not been possible, owing to the challenge of reconstructing the cascading quarks and gluons from the experimentally accessible hadrons. We report the direct observation of the QCD dead cone by using new iterative declustering techniques [4,5] to reconstruct the parton shower of charm quarks. This result confirms a fundamental feature of QCD. Furthermore, the measurement of a dead-cone angle constitutes a direct experimental observation of the non-zero mass of the charm quark, which is a fundamental constant in the standard model of particle physics.
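As a quick numerical illustration of the angular scale involved (the mass and energy values here are assumed for illustration, not taken from the measurement): with a charm-quark mass of about 1.27 GeV, a 10 GeV charm quark suppresses gluon radiation within a cone of roughly 0.13 rad.

```python
def dead_cone_angle(m_q_gev, e_gev):
    """Angular size (radians) of the dead cone, theta ~ mQ/E,
    around a heavy quark of mass m_Q and energy E."""
    return m_q_gev / e_gev

theta = dead_cone_angle(1.27, 10.0)  # ~0.127 rad, about 7 degrees
```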
Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and that variations in response correlations across stimuli are independent of variations in response means. Moreover, we demonstrate that the stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference, where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
In meditation practices that involve focused attention on a specific object, novice practitioners often experience moments of distraction (i.e., mind wandering). Previous studies have investigated the neural correlates of mind wandering during meditation practice through electroencephalography (EEG) using linear metrics (e.g., oscillatory power). However, their results are not fully consistent. Since the brain is known to be a chaotic/nonlinear system, it is possible that linear metrics cannot fully capture the complex dynamics present in the EEG signal. In this study, we assess whether nonlinear EEG signatures can be used to characterize mind wandering during breath focus meditation in novice practitioners. For that purpose, we adopted an experience sampling paradigm in which 25 participants were iteratively interrupted during meditation practice to report whether they were focusing on the breath or thinking about something else. We compared the complexity of EEG signals during mind wandering and breath focus states using three different algorithms: Higuchi's fractal dimension (HFD), Lempel-Ziv complexity (LZC), and sample entropy (SampEn). Our results showed that EEG complexity was generally reduced during mind wandering relative to breath focus states. We conclude that EEG complexity metrics are appropriate for disentangling mind wandering from breath focus states in novice meditation practitioners, and could therefore be used in future EEG neurofeedback protocols to facilitate meditation practice.
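To illustrate one of the three metrics: Lempel-Ziv complexity is typically computed by binarizing the signal around its median and counting distinct phrases in the resulting sequence. The sketch below uses a simple LZ78-style phrase count, a common variant of the measure; it is not the authors' exact implementation, and the signals are synthetic stand-ins for EEG.

```python
import random

def binarize(signal):
    """Threshold a signal at its median to obtain a binary string."""
    median = sorted(signal)[len(signal) // 2]
    return "".join("1" if x > median else "0" for x in signal)

def lz_phrase_count(s):
    """LZ78-style parsing: count distinct phrases in a binary string.
    Regular sequences parse into few phrases, irregular ones into many."""
    phrases, phrase = set(), ""
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

# A periodic sequence yields far fewer phrases than an irregular one,
# mirroring the lower complexity reported during mind wandering.
periodic = "01" * 1000
random.seed(0)
irregular = "".join(random.choice("01") for _ in range(2000))
```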
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation-selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model reproduces a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
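The dendritic normalisation described in both abstracts amounts to dividing each unit's summed input by its number of afferent contacts. A minimal NumPy sketch of a sparsely connected layer with this normalisation follows; the layer sizes and connection probability are arbitrary choices for illustration, not values from the studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, p_connect = 100, 10, 0.2

# Sparse connectivity: a random mask zeroes out most weights.
mask = rng.random((n_in, n_out)) < p_connect
weights = rng.normal(0.0, 1.0, (n_in, n_out)) * mask

def forward(x, weights, mask):
    """Dendritically normalised pre-activation: each output unit's summed
    input is divided by the number of its afferent contacts, so units are
    driven by the proportion, not the absolute number, of active inputs."""
    n_afferents = np.maximum(mask.sum(axis=0), 1)  # guard against empty columns
    return (x @ weights) / n_afferents

x = rng.random(n_in)
y = forward(x, weights, mask)
```

With all weights equal, the normalised pre-activation reduces to the mean of the inputs regardless of in-degree, which is the size-invariance property the abstracts describe.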
Dendritic spines are crucial for excitatory synaptic transmission, as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither the induction of extrinsic synaptic plasticity nor the blockage of presynaptic activity degrades the lognormal-like distribution, but both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate the lognormal-like distribution of spine sizes, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of stochastic growth and retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable, as seen in their voltage responses to current injections in the soma. However, the impact of a neuron's size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron's response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies, irrespective of any particular arborisation complexity. Also, spiking models produce morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron's excitability in response to synaptic inputs is not affected by total dendrite length. Rather, it provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure that any given connection can be implemented. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
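A common form for such an estimate, consistent with the dependencies the abstract names, makes the expected number of potential contacts proportional to the product of axonal and dendritic cable lengths and the spine reach, divided by the shared volume: E[n] ≈ 2 s L_axon L_dend / V. This is an assumed Peters'-rule-style form for illustration; the exact equation in the study may differ.

```python
def expected_contacts(axon_len_um, dend_len_um, spine_reach_um, shared_vol_um3):
    """Expected potential synapses between two neurons: linear in each
    cable length and in maximum spine length, inversely proportional to
    the overlapping volume (assumed functional form)."""
    return 2.0 * spine_reach_um * axon_len_um * dend_len_um / shared_vol_um3

# Illustrative numbers: 4 mm of axon and 5 mm of dendrite sharing a
# (100 um)^3 volume, with 2 um spine reach.
n = expected_contacts(4000.0, 5000.0, 2.0, 100.0**3)  # = 80.0
```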
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by preceding spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
Background: Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of µ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.
Objective: We further characterize the effect of µ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.
Methods: A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain-states corresponding to µ-rhythm surface positive and negative peaks with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.
Results: MEP amplitude is modulated by µ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP modulation was observed for weak intensities, the largest absolute MEP modulation for intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the µ-rhythm negative peak.
Conclusion: The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by µ-phase. Lower stimulation intensities enable more efficient differentiation of EEG µ-phase-defined brain states.
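The reported leftward shift of the IO curve during the µ-rhythm negative peak can be captured with a simple sigmoid model of MEP amplitude versus stimulation intensity, where phase shifts the curve's midpoint. All parameter values below are illustrative assumptions, not the fitted values from the study.

```python
import math

def mep_amplitude(intensity, midpoint, slope=0.2, max_amp=2.0):
    """Sigmoid IO curve: MEP amplitude (mV) as a function of
    stimulation intensity, with midpoint at the curve's inflection."""
    return max_amp / (1.0 + math.exp(-slope * (intensity - midpoint)))

# Negative µ-peak: higher excitability -> lower midpoint (leftward shift).
mid_pos, mid_neg = 55.0, 50.0

# Relative modulation (ratio of the two curves) is largest at weak
# intensities; absolute modulation peaks on the steep midsection.
ratios = {i: mep_amplitude(i, mid_neg) / mep_amplitude(i, mid_pos)
          for i in (40, 50, 60, 70)}
```

The ratio at the weakest intensity exceeds the ratio at the strongest, while the absolute difference between the two curves is maximal at intermediate intensities, matching the pattern described in the Results.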
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to co-ordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Epilepsy can have many different causes and its development (epileptogenesis) involves a bewildering complexity of interacting processes. Here, we present a first-of-its-kind computational model to better understand the role of neuroimmune interactions in the development of acquired epilepsy. Our model describes the interactions between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model is validated using data from animal models that mimic human epileptogenesis caused by infection, status epilepticus, and blood-brain barrier disruption. The mathematical model successfully explains characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries, or its dependence on the intensity of an injury. Furthermore, stochasticity in the model captures the variability of epileptogenesis outcomes in individuals exposed to identical injury. Notably, in line with the concept of degeneracy, our simulations reveal multiple routes towards epileptogenesis with neuronal loss as a sufficient but non-necessary component. We show that our framework allows for in silico predictions of therapeutic strategies, providing information on injury-specific therapeutic targets and optimal time windows for intervention.
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to image large numbers of dendritic spines simultaneously in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines, yet no automated method currently exists that comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized for the spine detection task. This transfer learning approach is data efficient while achieving a high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the active efficient coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
Models of perceptual decision making have historically been designed to maximally explain behaviour and brain activity, independently of their ability to actually perform tasks. More recently, performance-optimized models have been shown to correlate with brain responses to images and thus present a complementary approach to understanding perceptual processes. In the present study, we compare how these two approaches account for the spatio-temporal organization of neural responses elicited by ambiguous visual stimuli. Forty-six healthy human subjects performed perceptual decisions on briefly flashed stimuli constructed from ambiguous characters. The stimuli were designed to have 7 orthogonal properties, ranging from low sensory levels (e.g. the spatial location of the stimulus) to conceptual (whether the stimulus is a letter or a digit) and task levels (i.e. the required hand movement). Magneto-encephalography source and decoding analyses revealed that these 7 levels of representation are sequentially encoded by the cortical hierarchy and actively maintained until the subject responds. This hierarchy appeared poorly correlated with normative, drift-diffusion, and 5-layer convolutional neural network (CNN) models optimized to accurately categorize alphanumeric characters, but partially matched the sequence of activations of 3/6 state-of-the-art CNNs trained for natural image labeling (VGG-16, VGG-19, MobileNet). Additionally, we identify several systematic discrepancies between these CNNs and brain activity, revealing the importance of single-trial learning and recurrent processing. Overall, our results strengthen the notion that performance-optimized algorithms can converge towards the computational solution implemented by the human visual system, and open possible avenues to improve artificial perceptual decision making.
Polarization of Λ and ¯Λ hyperons along the beam direction in Pb-Pb collisions at √sNN=5.02 TeV
(2022)
The polarization of the Λ and ¯Λ hyperons along the beam (z) direction, Pz, has been measured in Pb-Pb collisions at √sNN=5.02 TeV recorded with ALICE at the Large Hadron Collider (LHC). The main contribution to Pz comes from elliptic flow-induced vorticity and can be characterized by the second Fourier sine coefficient Pz,s2=⟨Pz sin(2φ−2Ψ2)⟩, where φ is the hyperon azimuthal emission angle and Ψ2 is the elliptic flow plane angle. We report the measurement of Pz,s2 for different collision centralities and, in the 30%–50% centrality interval, as a function of the hyperon transverse momentum and rapidity. The measured Pz,s2 is positive, similar to the measurement by the STAR Collaboration in Au-Au collisions at √sNN=200 GeV, with a somewhat smaller amplitude in semicentral collisions. This is the first experimental evidence of a nonzero hyperon Pz in Pb-Pb collisions at the LHC. The comparison of the measured Pz,s2 with hydrodynamic model calculations shows sensitivity to the competing contributions from thermal and the recently found shear-induced vorticity, as well as to whether the polarization is acquired in the quark-gluon plasma or in the hadronic phase.
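As a numerical aside, the sine coefficient defined above, Pz,s2 = ⟨Pz sin(2φ−2Ψ2)⟩, is a simple event average. The sketch below extracts it from toy data with a built-in sin(2φ−2Ψ2) modulation; the amplitude, noise level, and sample size are invented for illustration, not ALICE values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sample: hyperon azimuthal emission angles phi and longitudinal
# polarizations P_z with a built-in sin(2*(phi - Psi2)) modulation of
# amplitude A plus measurement noise (all numbers are illustrative).
n, A, Psi2 = 200_000, 0.002, 0.3  # Psi2: elliptic flow plane angle
phi = rng.uniform(0.0, 2.0 * np.pi, n)
P_z = A * np.sin(2.0 * (phi - Psi2)) + rng.normal(0.0, 0.05, n)

# Second Fourier sine coefficient: P_{z,s2} = <P_z sin(2phi - 2Psi2)>.
# For a pure amplitude-A modulation this average equals A * <sin^2> = A/2.
P_z_s2 = np.mean(P_z * np.sin(2.0 * (phi - Psi2)))

print(f"extracted P_z,s2 = {P_z_s2:.4f} (input A/2 = {A / 2:.4f})")
```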
Two types of particles exist in the atmosphere, primary and secondary particles. While primary particles such as soot, mineral dust, sea salt particles or pollen are introduced directly as particles into the atmosphere, secondary particles are formed in the atmosphere by condensation of gases. The formation of such new aerosol particles takes place frequently and at a broad variety of atmospheric conditions and geographic locations. A considerable fraction of the atmospheric particles is formed by such nucleation processes. The newly formed particles may grow by condensation to sizes where they are large enough to act as cloud condensation nuclei and therefore may affect cloud properties. The fundamental processes of aerosol nucleation are described and typical atmospheric observations are discussed. Two recent studies are introduced that potentially change our current understanding of atmospheric nucleation substantially.
ALICE (A Large Ion Collider Experiment) is one of the four large-scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real time. The most computing-intensive task is the reconstruction of particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors, such as the TPC, which are sensitive to environmental conditions like ambient pressure and temperature; a precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction, including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
The dynamics of strange pseudoscalar and vector mesons in hot and dense nuclear matter is studied within a chiral unitary framework in coupled channels. Our results provide the starting point for implementations in microscopic transport approaches to heavy-ion collisions, particularly at the conditions of the forthcoming experiments at GSI/FAIR and NICA-Dubna. In the K̄N sector we focus on the calculation of (off-shell) transition rates for the most relevant binary reactions involved in strangeness production close to threshold energies, with special attention to the excitation of sub-threshold hyperon resonances and isospin effects (e.g. K̄p vs K̄n). We also give an overview of recent theoretical developments regarding the dynamics of strange vector mesons (K*, K̄* and ϕ) in the nuclear medium, in connection with experimental activity from heavy-ion collisions and nuclear production reactions. We emphasize the role of hadronic decay modes and the excitation of hyperon resonances as the driving mechanisms modifying the properties of vector mesons.
We introduce a top-down stylized model to analyse the impact of a transition to a European power system based only on wind and solar power. Wind and solar power generation is calculated from high-resolution weather data. Based on the country-specific electricity demand alone, we introduce a model of the conventional power system that facilitates simple spatio-temporal modelling of its macroscopic behavior without direct reference to the underlying technological, economical, and political developments in the system. Using this model, we find that wind and solar power generation can replace conventional power generation and power capacity to a large degree if power transmission across the continent is made possible.
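The role of the conventional system in such a stylized model can be illustrated with a residual-load calculation: conventional plants cover whatever demand wind and solar do not. The toy hourly series below are invented placeholders, not the weather-data-driven time series of the actual study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hourly toy series for one year (8760 h); all magnitudes are
# illustrative placeholders in GW, not model outputs.
hours = np.arange(8760)
demand = 400 + 50 * np.sin(2 * np.pi * hours / 24)            # daily cycle
solar = np.clip(150 * np.sin(2 * np.pi * hours / 24), 0, None)  # daytime only
wind = np.clip(120 + 60 * rng.standard_normal(8760), 0, None)   # fluctuating

# Conventional generation balances the residual load; surplus
# renewable generation (negative residual) is curtailed here.
residual = demand - (wind + solar)
conventional = np.clip(residual, 0.0, None)

# Macroscopic indicators of the stylized model: how much conventional
# energy the renewables replace, and how much dispatchable capacity
# must still be kept available.
energy_share = 1.0 - conventional.sum() / demand.sum()
needed_capacity = conventional.max()
print(f"share of demand covered by wind+solar: {energy_share:.2f}")
print(f"required conventional capacity: {needed_capacity:.0f} GW")
```

The gap between the renewable energy share and the still-large required backup capacity is exactly the kind of macroscopic behavior such a top-down model is meant to expose.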
Fluctuations of anisotropic flow in lead-lead collisions at LHC energies arising in the HYDJET++ model are studied. It is shown that intrinsic fluctuations of the flow, which appear mainly because of fluctuations of particle multiplicity, momenta and coordinates, are insufficient to match the measured experimental data, provided the eccentricity of the freeze-out hypersurface is fixed at any given impact parameter b. However, when the variations of the eccentricity in HYDJET++ are taken into account, the agreement between the model results and the data is drastically improved. Both model calculations and the data are filtered through the unfolding procedure. This procedure eliminates the non-flow fluctuations to a higher degree, thus indicating a dynamical origin of the flow fluctuations in the HYDJET++ event generator.
We apply the HYDJET++ model, which contains the treatment of both soft and hard processes, to study heavy-ion collisions at LHC energies. The interplay of parametrised hydrodynamics and jets describes many features of the development of particle anisotropic flow, including the break-up of the mass hierarchy of elliptic and triangular flow, the falloff of the flow at a certain transverse momentum, and the violation of the number-of-constituent-quark (NCQ) scaling at LHC energies compared to lower ones. Other signals, such as long-range dihadron correlations (ridge) and event-by-event (EbyE) fluctuations of the flow, are also discussed. Model calculations demonstrate good agreement with the available experimental data.
Preface
(2012)
The production of charmonia in antiproton-nucleus reactions at plab = 3–10 GeV/c is studied within the Glauber model and the generalized eikonal approximation. The main reaction channel is charmonium formation in an antiproton-proton collision. The target mass dependence of the charmonium transparency ratio allows one to determine the charmonium-nucleon cross section. The polarization effects in the production of χc2 states are evaluated.
We study primary and secondary reactions induced by 600 MeV proton beams in monolithic cylindrical targets made of natural tungsten and uranium by using Monte Carlo simulations with the Geant4 toolkit [1–3]. The Bertini intranuclear cascade model, the binary cascade model, and the IntraNuclear Cascade Liège (INCL) with the ABLA model [4] were used as calculational options to describe nuclear reactions. Fission cross sections, neutron multiplicities and mass distributions of fragments for 238U fission induced by 25.6 and 62.9 MeV protons are calculated and compared to recent experimental data [5]. Time distributions of neutron leakage from the targets and heat depositions are also calculated.
We find that true ternary fission with the formation of a heavy third fragment (a new kind of radioactivity) is quite possible for superheavy nuclei, owing to strong shell effects that lead to a three-body clusterization with two doubly magic tin-like cores. The three-body quasifission process could be even more pronounced for giant nuclear systems formed in collisions of heavy actinide nuclei. In this case, a three-body clusterization might be proved experimentally by the detection of two coincident lead-like fragments in low-energy U+U collisions.
Using an advanced version of the hadron resonance gas model, we have found several remarkable irregularities at chemical freeze-out. The most prominent of them are two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, the total pion number per baryon, and the thermal pion number per baryon, which we found at center-of-mass energies of 3.6–4.9 GeV and 7.6–10 GeV. The low-energy set of quasi-plateaus was predicted a long time ago. On the basis of the generalized shock-adiabat model, we demonstrate that the low-energy correlated quasi-plateaus give evidence for the anomalous thermodynamic properties of the mixed phase at its boundary to the quark-gluon plasma. The question is whether the high-energy correlated quasi-plateaus are also related to some kind of mixed phase. In order to answer this question, we employ the results of a systematic meta-analysis of the quality of data description of 10 existing event generators of nucleus-nucleus collisions in the range of center-of-mass collision energies from 3.1 GeV to 17.3 GeV. These generators are divided into two groups: the first group includes the generators which account for quark-gluon plasma formation during nuclear collisions, while the second group includes the generators which do not assume quark-gluon plasma formation in such collisions. Comparing the quality of data description of more than a hundred different data sets of strange hadrons by these two groups of generators, we find two regions of equal quality of data description, located at center-of-mass collision energies of 4.3–4.9 GeV and 10–13.5 GeV. We interpret these two regions of equal data-description quality as regions of hadron-quark-gluon mixed phase formation. Such a conclusion is strongly supported by the irregularities in the collision energy dependence of the experimental ratios of the Lambda hyperon number per proton and the positive kaon number per Lambda hyperon.
Although it is at the moment unclear whether these regions belong to the same mixed phase or not, there are arguments that the most probable collision energy range to probe the (tri)critical endpoint of the QCD phase diagram is 12–14 GeV.
Cysteine cross-linking in native membranes establishes the transmembrane architecture of Ire1
(2021)
The ER is a key organelle of membrane biogenesis and crucial for the folding of both membrane and secretory proteins. Sensors of the unfolded protein response (UPR) monitor the unfolded protein load in the ER and convey effector functions for maintaining ER homeostasis. Aberrant compositions of the ER membrane, referred to as lipid bilayer stress, are equally potent activators of the UPR. How the distinct signals from lipid bilayer stress and unfolded proteins are processed by the conserved UPR transducer Ire1 remains unknown. Here, we have generated a functional, cysteine-less variant of Ire1 and performed systematic cysteine cross-linking experiments in native membranes to establish its transmembrane architecture in signaling-active clusters. We show that the transmembrane helices of two neighboring Ire1 molecules adopt an X-shaped configuration independent of the primary cause for ER stress. This suggests that different forms of stress converge in a common, signaling-active transmembrane architecture of Ire1.
One of the important consequences of the Hagedorn statistical bootstrap model is the prediction of a limiting temperature Tcrit for hadron systems, colloquially known as the Hagedorn temperature. According to Hagedorn, this effect should be observed in hadron spectra obtained in infinite equilibrated nuclear matter rather than in relativistic heavy-ion collisions. We present results of microscopic model calculations for infinite nuclear matter, simulated by a box with periodic boundary conditions. The limiting temperature indeed appears in the model calculations. Its origin is traced to strings and many-body decays of resonances.
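The box simulation mentioned above relies on periodic boundary conditions: a particle leaving the box re-enters on the opposite side, and separations are measured to the nearest periodic image, which mimics an infinite, equilibrated medium. A generic numpy sketch (the box size and coordinates are arbitrary illustrative values, not parameters of the study):

```python
import numpy as np

L = 10.0  # box side length in fm (illustrative value)

def wrap(positions):
    """Map coordinates back into the box [0, L) after propagation."""
    return positions % L

def minimum_image(r_ij):
    """Shortest separation vector between two particles under PBC."""
    return r_ij - L * np.round(r_ij / L)

# A particle leaving the box re-enters from the opposite side.
x = np.array([9.5, 0.2, 5.0])
x_new = wrap(x + np.array([1.0, -0.5, 0.0]))   # propagate one time step
assert np.allclose(x_new, [0.5, 9.7, 5.0])

# Distances are measured to the nearest periodic image.
d = minimum_image(np.array([9.0, 0.0, 0.0]) - np.array([1.0, 0.0, 0.0]))
print(d)  # -> [-2.  0.  0.]: the nearest images are only 2 fm apart
```

In transport calculations of this type, the same wrapping is applied every time step and the minimum-image convention is used for all two-body collision criteria.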
These proceedings will cover various studies of hadronic resonances within the UrQMD transport model. After a brief explanation of the model, various observables will be highlighted and the chances for resonance reconstruction in hadronic channels will be discussed. Possible signals of chiral symmetry restoration will be investigated for feasibility.
We propose an effective theory of SU(3) gluonic matter in which interactions between color-electric and color-magnetic gluons are constrained by the center and scale symmetries. Through matching to dimensionally reduced magnetic theories, we show that the magnetic gluon condensate qualitatively changes its thermal behavior above the critical temperature. We discuss its phenomenological consequences for the thermodynamics, in particular the dynamical breaking of scale invariance.
Resonances from PHSD
(2012)
The multi-strange baryon and vector meson resonance production in relativistic nucleus-nucleus collisions is studied within the parton-hadron-string dynamics (PHSD) approach, which incorporates explicit partonic degrees of freedom in terms of strongly interacting quasiparticles (quarks and gluons), in line with an equation of state from lattice QCD, as well as dynamical hadronization and hadronic collision dynamics in the final reaction phase. We find a significant effect of the partonic phase on the production of multi-strange antibaryons at SPS energies due to a slightly enhanced pair production from massive time-like gluon decay and a larger formation of antibaryons in the hadronization process. We furthermore obtain visible in-medium effects in the low-mass dilepton sector from dynamical vector-meson spectral functions from SIS to SPS energies, whereas at RHIC and LHC energies such medium effects become more moderate. In the intermediate mass regime from 1.1 to 3 GeV, pronounced traces of the partonic degrees of freedom are found at SPS energies, which supersede the hadronic (multi-meson) channels as well as the correlated and uncorrelated semi-leptonic D-meson decays. The dilepton production from the strongly interacting quark-gluon plasma (sQGP) becomes visible already at top SPS energies and more pronounced at RHIC and LHC energies.
The so-called Pygmy Dipole Resonance, an additional structure of low-lying electric dipole strength, has attracted strong interest in recent years. Different experimental approaches have been used in the last decade to investigate this interesting new nuclear excitation mode. In this contribution, an overview of the available experimental data is given.