The study of (anti-)deuteron production in pp collisions has proven to be a powerful tool to investigate the formation mechanism of loosely bound states in high-energy hadronic collisions. In this paper the production of (anti-)deuterons is studied as a function of the charged-particle multiplicity in inelastic pp collisions at √s = 13 TeV using the ALICE experiment. Thanks to the large number of accumulated minimum-bias events, it has been possible to measure (anti-)deuteron production in pp collisions up to the same charged-particle multiplicity (dNch/dη ∼ 26) as measured in p–Pb collisions at similar centre-of-mass energies. Within the uncertainties, the deuteron yield in pp collisions resembles the one in p–Pb interactions, suggesting a common formation mechanism behind the production of light nuclei in hadronic interactions. In this context the measurements are compared with the expectations of coalescence and statistical hadronisation models (SHM).
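The coalescence picture referred to above can be made concrete with a toy calculation: the invariant deuteron yield is the coalescence parameter B2 times the square of the proton yield, evaluated at half the deuteron momentum. The spectrum shape, normalization, and B2 value below are invented for illustration; this is a sketch of the generic relation, not ALICE's analysis.

```python
import numpy as np

# Toy coalescence relation: E_d dN_d/dp^3 = B2 * (E_p dN_p/dp^3)^2,
# with the proton yield evaluated at half the deuteron momentum.
def deuteron_yield(proton_invariant_yield, B2):
    return B2 * proton_invariant_yield**2

pt_per_nucleon = np.linspace(0.5, 3.0, 6)            # GeV/c (toy values)
proton_yield = 1.2 * np.exp(-pt_per_nucleon / 0.5)   # toy exponential spectrum
B2 = 2e-2                                            # GeV^2/c^3, typical order of magnitude

d_yield = deuteron_yield(proton_yield, B2)
print(d_yield)
```

Because the deuteron yield goes as the proton yield squared, the quadratic dependence on multiplicity is what measurements like the one above probe.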
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) can show variable histological growth patterns and present remarkable overlap with T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL). Previous studies suggest that NLPHL histological variants represent progression forms of NLPHL and THRLBCL transformation in aggressive disease. Since molecular studies of both lymphomas are limited due to the low number of tumor cells, the present study aimed to learn if a better understanding of these lymphomas is possible via detailed measurements of nuclear and cell size features in 2D and 3D sections. Whereas no significant differences were visible in 2D analyses, a slightly increased nuclear volume and a significantly enlarged cell size were noted in 3D measurements of the tumor cells of THRLBCL in comparison to typical NLPHL cases. Interestingly, not only was the size of the tumor cells increased in THRLBCL but also the nuclear volume of concomitant T cells in the reactive infiltrate when compared with typical NLPHL. Particularly CD8+ T cells had frequent contacts to tumor cells of THRLBCL. However, the nuclear volume of B cells was comparable in all cases. These results clearly demonstrate that 3D tissue analyses are superior to conventional 2D analyses of histological sections. Furthermore, the results point to a strong activation of T cells in THRLBCL, representing a cytotoxic response against the tumor cells with unclear effectiveness, resulting in enhanced swelling of the tumor cell bodies and limiting proliferative potential. Further molecular studies combining 3D tissue analyses and molecular data will help to gain profound insight into these ill-defined cellular processes.
Aims: The examination of histological sections is still the gold standard in diagnostic pathology. Important histopathological diagnostic criteria are nuclear shapes and chromatin distribution as well as nucleus-cytoplasm relation and immunohistochemical properties of surface and intracellular proteins. The aim of this investigation was to evaluate the benefits and drawbacks of three-dimensional imaging of CD30+ cells in classical Hodgkin Lymphoma (cHL) in comparison to CD30+ lymphoid cells in reactive lymphoid tissues.
Materials and results: Using immunofluorescence confocal microscopy and computer-based analysis, we compared CD30+ neoplastic cells in Nodular Sclerosis cHL (NScHL) and Mixed Cellularity cHL (MCcHL) with reactive CD30+ cells in Adenoids (AD) and Lymphadenitis (LAD). We confirmed that the percentage of CD30+ cell volume can be calculated. The amount in lymphadenitis was approx. 1.5%, in adenoids around 2%, and in MCcHL up to 4.5%, whereas the values for NScHL rose to more than 8% of the total cell cytoplasm. In addition, CD30+ tumour cells (HRS cells) in cHL had larger volumes and more protrusions compared to CD30+ reactive cells. Furthermore, the formation of large cell networks turned out to be a typical characteristic of NScHL.
Conclusion: In contrast to 2D histology, 3D laser scanning offers a visualisation of complete cells, their network interactions and their spatial distribution in the tissue. The possibility to differentiate cells with regard to volume, surface, shape, and cluster formation enables a new view on further diagnostic and biological questions. 3D imaging also yields an increased amount of information as a basis for bioinformatic calculations.
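As a rough illustration of how a volume percentage like the ones reported above can be obtained from confocal data: once a z-stack has been segmented into a boolean mask of CD30+ voxels, the fraction reduces to a voxel count weighted by the voxel volume. The stack size, voxel spacings, and positivity rate below are invented, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical segmented confocal stack: 20 slices of 64x64 voxels,
# with ~4.5% CD30+ voxels (roughly the MCcHL level quoted above)
cd30_mask = rng.random((20, 64, 64)) < 0.045
voxel_volume_um3 = 0.2 * 0.1 * 0.1          # z * y * x spacing in micrometres

cd30_volume = cd30_mask.sum() * voxel_volume_um3
total_volume = cd30_mask.size * voxel_volume_um3
fraction_percent = 100.0 * cd30_volume / total_volume
print(f"CD30+ volume fraction: {fraction_percent:.2f}%")
```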
The Karl Schwarzschild Meeting 2017 (KSM2017) was the third instalment of the conference dedicated to the great Frankfurt scientist, who derived the first black hole solution of Einstein's equations about 100 years ago.
The event was a five-day meeting in the field of black holes, AdS/CFT correspondence and gravitational physics. Like the two previous instalments, the conference continued to attract a stellar ensemble of participants from the world's most renowned institutions. The core of the meeting was a series of invited talks by eminent experts (keynote speakers), complemented by plenary research talks by students and junior speakers.
The conference photo and poster, sponsor and funding acknowledgements, committees, and the list of participants are available in this PDF.
The production of the hypertriton ³ΛH and the anti-hypertriton ³Λ̄H̄ has been measured for the first time in Pb–Pb collisions at √sNN = 2.76 TeV with the ALICE experiment at the LHC. The pT-integrated ³ΛH yield in one unit of rapidity, dN/dy × B.R.(³ΛH → ³He + π⁻) = (3.86 ± 0.77 (stat.) ± 0.68 (syst.)) × 10⁻⁵ in the 0–10% most central collisions, is consistent with the predictions from a statistical thermal model using the same temperature as for the light hadrons. The coalescence parameter B₃ shows a dependence on the transverse momentum, similar to the B₂ of deuterons and the B₃ of ³He nuclei. The ratio of yields S₃ = ³ΛH / (³He × Λ/p) was measured to be S₃ = 0.60 ± 0.13 (stat.) ± 0.21 (syst.) in 0–10% centrality events; this value is compared to different theoretical models. The measured S₃ is compatible with thermal model predictions. The measured ³ΛH lifetime, τ = 181 (+54/−39) (stat.) ± 33 (syst.) ps, is in agreement within 1σ with the world average value.
Poster presentation: Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to characterize the relationship between neural spiking activity and the features of putative stimuli. These methods include: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and more recently, the point process generalized linear model (GLM). We compared the performance of these three approaches in estimating receptive field properties and orientation tuning of 251 V1 neurons recorded from 2 monkeys during a fixation period in response to a moving bar. The GLM consisted of two formulations of the conditional intensity function for a point process characterization of the spiking activity: one with a stimulus-only component and one with the stimulus and spike history. We fit the GLMs by maximum likelihood using GLMfit in Matlab. Goodness-of-fit was assessed using cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem to evaluate the accuracy with which each model predicts the spiking activity of individual neurons and for each movement direction (4016 models in total, for 251 neurons and 16 different directions). The GLMs that considered spike history of up to 35 ms accurately predicted neuronal spiking activity (95% confidence intervals for KS test) with a performance of 97.0% (3895/4016) for the training data, and 96.5% (3876/4016) for the test data. If spike history was not considered, performance dropped to 73.1% in the training and 71.3% in the testing data. In contrast, the WVK and the STA predicted spiking accurately for 24.2% and 44.5% of the test data examples respectively. The receptive field size estimates obtained from the GLM (with and without history), WVK and STA were comparable. Relative to the GLM, orientation tuning was underestimated on average by a factor of 0.45 by the WVK and the STA.
The main reason for using the STA and WVK approaches is their apparent simplicity. However, our analyses suggest that more accurate spike prediction as well as more credible estimates of receptive field size and orientation tuning can be computed easily using GLMs implemented in Matlab with standard functions such as GLMfit.
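For readers without Matlab, the stimulus-plus-history GLM can be sketched in a few lines of Python. This is a generic Poisson regression with a log link fitted by Newton's method (the computation GLMfit performs); the stimulus, ground-truth filters, and history depth below are invented, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_hist = 5000, 5                       # number of 1 ms bins, history depth
stim = rng.standard_normal(T)             # toy stimulus
true_b = np.concatenate(([-3.0, 1.0], -0.5 * np.ones(n_hist)))  # bias, stim, history

# simulate spike counts whose rate depends on the stimulus and spike history
spikes = np.zeros(T)
for t in range(T):
    hist = np.array([spikes[t - k] if t - k >= 0 else 0.0
                     for k in range(1, n_hist + 1)])
    lam = np.exp(true_b[0] + true_b[1] * stim[t] + true_b[2:] @ hist)
    spikes[t] = rng.poisson(lam)

# design matrix: intercept, stimulus, and the n_hist previous bins
X = np.zeros((T, 2 + n_hist))
X[:, 0] = 1.0
X[:, 1] = stim
for k in range(1, n_hist + 1):
    X[k:, 1 + k] = spikes[:-k]

# Newton-Raphson maximum likelihood for the Poisson GLM with log link
b = np.zeros(X.shape[1])
for _ in range(25):
    lam = np.exp(np.clip(X @ b, -20, 20))
    b = b + np.linalg.solve(X.T @ (X * lam[:, None]), X.T @ (spikes - lam))

print(b)  # roughly recovers true_b
```

The negative history weights reproduce the refractory suppression that, per the abstract, is what lifts prediction performance from ~73% to ~97%.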
Poster presentation: Introduction We here focus on constructing a hierarchical neural system for position-invariant recognition, which is one of the most fundamental forms of invariant recognition achieved in visual processing [1,2]. Invariant recognition has been hypothesized to be accomplished by matching the sensory image of a particular object projected onto the retina to the most suitable representation stored in memory in the higher visual cortical areas. Here a general problem arises: in such visual processing, the position of the object image on the retina is initially uncertain. Furthermore, the retinal activities carrying the sensory information are far removed from those in the higher areas, where part of the sensory object information is lost. Nevertheless, despite such recognition ambiguity, a particular object can be recognized effortlessly and easily. Our aim in this work is to attempt to resolve this general recognition problem. ...
Derived from a biophysical model for the motion of a crawling cell, the evolution system

(⋆)  ut = Δu − ∇·(u∇v),  0 = Δv − kv + u,

is investigated in a finite domain Ω ⊂ Rⁿ, n ≥ 2, with k ≥ 0. Whereas a comprehensive literature is available for cases in which (⋆) describes chemotaxis-driven population dynamics and hence is accompanied by homogeneous Neumann-type boundary conditions for both components, the presently considered modeling context, besides requiring the flux ∂νu − u∂νv to vanish on ∂Ω, inherently involves homogeneous Dirichlet boundary conditions for the attractant v, which in the current setting corresponds to the cell's cytoskeleton being free of pressure at the boundary. This modification in the boundary setting is shown to go along with a substantial change with respect to the potential to support the emergence of singular structures: It is, inter alia, revealed that in contexts of radial solutions in balls there exist two critical mass levels, distinct from each other whenever k>0 or n≥3, that separate ranges within which (i) all solutions are global in time and remain bounded, (ii) both global bounded and exploding solutions exist, or (iii) all nontrivial solutions blow up. While critical mass phenomena distinguishing between regimes of type (i) and (ii) belong to the well-understood characteristics of (⋆) when posed under classical no-flux boundary conditions in planar domains, the discovery of a distinct secondary critical mass level related to the occurrence of (iii) seems to have no nearby precedent. In the planar case with the domain being a disk, the analytical results are supplemented with some numerical illustrations, and it is discussed how the findings can be interpreted biophysically for the situation of a cell on a flat substrate.
A new method of event characterization based on Deep Learning is presented. The PointNet models can be used for fast, online event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV which are then used to train and evaluate PointNet based architectures. The models can be trained on features like the hit position of particles in the CBM detector planes, tracks reconstructed from the hits or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event classification is shown to be more accurate and less model dependent than conventional methods and can utilize the performance boost of modern GPUs.
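The property that makes PointNet suitable for unordered detector hits is permutation invariance: a shared per-point MLP followed by a symmetric (max) pooling. A stripped-down stand-in with random weights and toy hits (not the trained CBM model) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(42)

def shared_mlp(points, W1, W2):
    """Apply the same two-layer ReLU MLP to every point: (N, 3) -> (N, F)."""
    h = np.maximum(points @ W1, 0.0)
    return np.maximum(h @ W2, 0.0)

def pointnet_regress(points, W1, W2, w_out):
    feats = shared_mlp(points, W1, W2)   # per-point features
    global_feat = feats.max(axis=0)      # symmetric max pooling over all points
    return float(global_feat @ w_out)    # scalar output, e.g. impact parameter

W1 = rng.standard_normal((3, 32))        # random stand-in weights
W2 = rng.standard_normal((32, 64))
w_out = rng.standard_normal(64)

hits = rng.standard_normal((200, 3))     # toy detector hits (x, y, z)
b1 = pointnet_regress(hits, W1, W2, w_out)
b2 = pointnet_regress(hits[rng.permutation(200)], W1, W2, w_out)
print(b1, b2)  # identical: the output does not depend on hit ordering
```

Because the pooling is symmetric, the network's prediction is unchanged under any reordering of the hits, which is exactly the situation for raw detector data.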
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells we show that denervation-induced dendritic atrophy also subserves homeostatic functions: By shortening their dendritic tree, granule cells compensate for the loss of inputs by a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
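The excitability argument above can be caricatured with a single-compartment estimate (all values below are assumed round numbers; the paper's results come from full 3D-reconstructed morphologies): shrinking the dendritic tree reduces membrane area, which raises the input resistance R_in = R_m / A, so fewer surviving synapses are needed to depolarize the cell to threshold.

```python
# Back-of-the-envelope sketch of denervation-induced homeostasis.
R_m = 20000.0      # specific membrane resistance, ohm * cm^2 (assumed)
A_intact = 2.0e-4  # membrane area of the intact cell, cm^2 (assumed)
A_denerv = 1.4e-4  # area after losing ~30% of the dendritic tree

I_syn = 20e-12     # steady-state current per surviving synapse, A (assumed)
V_thresh = 0.015   # 15 mV depolarization to reach threshold

def synapses_to_threshold(area):
    R_in = R_m / area                 # input resistance rises as area shrinks
    return V_thresh / (R_in * I_syn)  # synapses needed to reach threshold

n_intact = synapses_to_threshold(A_intact)
n_denerv = synapses_to_threshold(A_denerv)
print(n_intact, n_denerv)  # the atrophic cell needs ~30% fewer inputs
```

The ratio of required inputs equals the ratio of membrane areas, which is the intuition behind "precise adjustment of excitability" by dendritic shortening.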
Using more than a million randomly generated equations of state that satisfy theoretical and observational constraints, we construct a novel, scale-independent description of the sound speed in neutron stars, where the latter is expressed in a unit cube spanning the normalized radius, r/R, and the mass normalized to the maximum one, M/MTOV. From this generic representation, a number of interesting and surprising results can be deduced. In particular, we find that light (heavy) stars have stiff (soft) cores and soft (stiff) outer layers, and that the maximum of the sound speed is located at the center of light stars but moves to the outer layers for stars with M/MTOV ≳ 0.7, reaching a constant value of cs² = 1/2 as M → MTOV. We also show that the sound speed decreases below the conformal limit cs² = 1/3 at the center of stars with M = MTOV. Finally, we construct an analytic expression that accurately describes the radial dependence of the sound speed as a function of the neutron-star mass, thus providing an estimate of the maximum sound speed expected in a neutron star.
Poster presentation: Introduction We here address the problem of integrating information about multiple objects and their positions in a visual scene. The primate visual system has little difficulty in rapidly achieving such integration, given only a few objects. Unfortunately, computer vision still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism for handling information about objects and their positions in parallel. Elaborating on this idea, we propose a neurally plausible mechanism for combining local decisions about "what" and "where" information into global multi-object recognition. ...
The coordinate and momentum space configurations of the net baryon number in heavy ion collisions that undergo spinodal decomposition, due to a first-order phase transition, are investigated using state-of-the-art machine-learning methods. Coordinate space clumping, which appears in the spinodal decomposition, leaves strong characteristic imprints on the spatial net density distribution in nearly every event which can be detected by modern machine learning techniques. On the other hand, the corresponding features in the momentum distributions cannot clearly be detected, by the same machine learning methods, in individual events. Only a small subset of events can be systematically differentiated if only the momentum space information is available. This is due to the strong similarity of the two event classes, with and without spinodal decomposition. In such scenarios, conventional event-averaged observables like the baryon number cumulants signal a spinodal non-equilibrium phase transition. Indeed the third-order cumulant, the skewness, does exhibit a peak at the beam energy (Elab = 3–4 A GeV), where the transient hot and dense system created in the heavy ion collision reaches the first-order phase transition.
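The event-averaged observable mentioned above, cumulants of the net baryon number up to third order (the skewness), can be sketched directly from per-event multiplicities. The two toy event samples below are invented; the "spinodal-like" class is mimicked by a broader, two-component distribution.

```python
import numpy as np

def cumulants(n):
    """First three cumulants and the skewness of per-event net-baryon numbers."""
    c1 = n.mean()
    d = n - c1
    c2 = (d**2).mean()          # variance
    c3 = (d**3).mean()          # third central moment
    return c1, c2, c3, c3 / c2**1.5

rng = np.random.default_rng(7)
without_spinodal = rng.normal(100.0, 5.0, size=100_000)          # single phase
with_spinodal = np.concatenate([rng.normal(95.0, 5.0, size=70_000),
                                rng.normal(115.0, 8.0, size=30_000)])  # two "phases"

_, _, _, s_no = cumulants(without_spinodal)
_, _, _, s_yes = cumulants(with_spinodal)
print(s_no, s_yes)  # the two-component (spinodal-like) sample is visibly skewed
```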
The development of epilepsy (epileptogenesis) involves a complex interplay of neuronal and immune processes. Here, we present a first-of-its-kind mathematical model to better understand the relationships among these processes. Our model describes the interaction between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model reproduces the available data from three animal models. The model successfully describes characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries or the existence of qualitatively different outcomes for varying injury intensity. In line with the concept of degeneracy, our simulations reveal multiple routes toward epilepsy with neuronal loss as a sufficient but non-necessary component. Finally, we show that our model allows for in silico predictions of therapeutic strategies, revealing injury-specific therapeutic targets and optimal time windows for intervention.
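As a caricature of such a model (the variables, couplings, and parameters below are invented for illustration, not the paper's equations), a few coupled nonlinear ODEs already reproduce the qualitative point that a brief, transient insult can trigger a slow, self-sustained progression: inflammation drives neuronal loss, loss drives remodeling, remodeling raises seizure propensity, and seizures feed back onto inflammation.

```python
import numpy as np

def step(state, injury, dt=0.01):
    """One forward-Euler step of a toy epileptogenesis model."""
    I, N, R, S = state                        # inflammation, loss, remodeling, seizures
    dI = injury + 2.0 * S - 0.5 * I           # inflammation: insult + seizure feedback
    dN = 0.1 * I * (1.0 - N)                  # cumulative, saturating neuronal loss
    dR = 0.2 * (N - R)                        # slow remodeling tracks the loss
    dS = 0.3 * R - 0.4 * S                    # seizure propensity from remodeling
    return state + dt * np.array([dI, dN, dR, dS])

state = np.zeros(4)
trajectory = []
for t in range(20_000):                       # 200 time units
    injury = 1.0 if t < 500 else 0.0          # short, transient insult only
    state = step(state, injury)
    trajectory.append(state.copy())
trajectory = np.array(trajectory)
print(trajectory[-1])  # remodeling and seizure propensity outlast the injury
```

Even in this toy version, the feedback loop keeps the pathology growing long after the insult ends, illustrating the paradoxically long timescales mentioned in the abstract.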
Poster presentation: How can two distant neural assemblies synchronize their firings at zero lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more evidence is being accumulated in favor of its functional role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such a dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that when two distant neural assemblies interact not directly but by relaying their dynamics via a third mediating neuron or population, they can eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms the distant neural populations can synchronize their firings at zero lag with millisecond precision after the exchange of a few spikes. The role of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations is also studied, confirming the robustness of this synchronization mechanism. The proposed network module is densely embedded within the complex functional architecture of the brain, especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication.
In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous work, neither inhibition, gap junctions, nor complex network topologies need to be invoked to provide a stable mechanism of zero-phase correlated activity of neural populations in the presence of large conduction delays.
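The relay motif can be demonstrated with a toy phase-oscillator version (the abstract's simulations use Hodgkin-Huxley neurons; the Kuramoto phases, coupling strength, and delay below are stand-ins): two outer units interact only through a middle relay unit, with a conduction delay on every connection, yet their phases end up aligned at zero lag because they receive a common, symmetric drive.

```python
import numpy as np

dt, tau, K, omega = 0.01, 0.5, 1.0, 1.0   # step, delay, coupling, natural frequency
delay_steps = int(tau / dt)
steps = 40_000

theta = np.zeros((steps, 3))
theta[:delay_steps + 1] = [0.0, 1.0, 0.5]  # frozen history: distinct initial phases

for t in range(delay_steps, steps - 1):
    d = theta[t - delay_steps]                         # delayed phases
    c0 = K * np.sin(d[2] - theta[t, 0])                # outer 1 <- relay
    c1 = K * np.sin(d[2] - theta[t, 1])                # outer 2 <- relay
    c2 = K * (np.sin(d[0] - theta[t, 2])               # relay <- both outer units
              + np.sin(d[1] - theta[t, 2]))
    theta[t + 1] = theta[t] + dt * (omega + np.array([c0, c1, c2]))

lag = np.angle(np.exp(1j * (theta[-1, 0] - theta[-1, 1])))
print(lag)  # ~0: the two outer units synchronize at zero lag despite the delay
```

The relay itself locks at a nonzero lag to the outer units; only the outer pair, by symmetry, is driven to zero phase difference.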
Background: Cognitive dysfunctions represent a core feature of schizophrenia and a predictor for clinical outcomes. One possible mechanism for cognitive impairments could involve an impairment in the experience-dependent modifications of cortical networks.
Methods: To address this issue, we employed magnetoencephalography (MEG) during a visual priming paradigm in a sample of chronic patients with schizophrenia (n = 14), and in a group of healthy controls (n = 14). We obtained MEG-recordings during the presentation of visual stimuli that were presented three times either consecutively or with intervening stimuli. MEG-data were analyzed for event-related fields as well as spectral power in the 1–200 Hz range to examine repetition suppression and repetition enhancement. We defined regions of interest in occipital and thalamic regions and obtained virtual-channel data.
Results: Behavioral priming did not differ between groups. However, patients with schizophrenia showed prominently reduced oscillatory responses to novel stimuli in the gamma-frequency band as well as significantly reduced repetition suppression of gamma-band activity and reduced repetition enhancement of beta-band power in occipital cortex, both for consecutive repetitions and for repetitions with intervening stimuli. Moreover, schizophrenia patients were characterized by a significant deficit in suppression of the C1m component in occipital cortex and thalamus as well as of the late positive component (LPC) in occipital cortex.
Conclusions: These data provide novel evidence for impaired repetition suppression in cortical and subcortical circuits in schizophrenia. Although behavioral priming was preserved, patients with schizophrenia showed deficits in repetition suppression as well as repetition enhancement in thalamic and occipital regions, suggesting that experience-dependent modification of neural circuits is impaired in the disorder.
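The spectral-power step described above (band power of a virtual-channel trace) can be sketched with an FFT periodogram. The simulated trace below is synthetic, a 40 Hz gamma-band oscillation in noise, not patient data.

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)    # 2 s virtual-channel segment
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
psd = np.abs(np.fft.rfft(signal))**2 / (fs * t.size)   # one-sided periodogram

def band_power(f_lo, f_hi):
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

beta = band_power(13, 30)
gamma = band_power(30, 90)
print(beta, gamma)  # the gamma band dominates, as built into this toy signal
```

In practice one would average periodograms over trials (Welch's method) and compare band power across repetitions to quantify suppression and enhancement.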
In this Letter we study the radiation measured by an accelerated detector, coupled to a scalar field, in the presence of a fundamental minimal length. The latter is implemented by means of a modified momentum space Green's function. After calibrating the detector, we find that the net flux of field quanta is negligible, and that there is no Planckian spectrum. We discuss possible interpretations of this result, and we comment on experimental implications in heavy ion collisions and atomic systems.
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies including recent experiments investigating motor sequence learning in adult human subjects have produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects.
This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
We developed a Monte Carlo event generator for production of nucleon configurations in complex nuclei consistently including effects of nucleon–nucleon (NN) correlations. Our approach is based on the Metropolis search for configurations satisfying essential constraints imposed by short- and long-range NN correlations, guided by the findings of realistic calculations of one- and two-body densities for medium-heavy nuclei. The produced event generator can be used for Monte Carlo (MC) studies of pA and AA collisions. We perform several tests of consistency of the code and comparison with previous models, in the case of high energy proton–nucleus scattering on an event-by-event basis, using nucleus configurations produced by our code and Glauber multiple scattering theory both for the uncorrelated and the correlated configurations; fluctuations of the average number of collisions are shown to be affected considerably by the introduction of NN correlations in the target nucleus. We also use the generator to estimate maximal possible gluon nuclear shadowing in a simple geometric model.
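The Metropolis idea described above can be sketched on a toy scale: sample nucleon positions from a Woods-Saxon one-body density while a hard-core pair constraint mimics the short-range NN repulsion. The nucleus size, diffuseness, core radius, and step sizes below are generic textbook-style values, not the generator's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
A, R0, a, r_core = 40, 4.2, 0.55, 0.9   # nucleons; radius, diffuseness, core (fm)

def log_density(pos):
    """Log Woods-Saxon weight of a configuration; -inf if any cores overlap."""
    r = np.linalg.norm(pos, axis=1)
    dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    if dists.min() < r_core:
        return -np.inf
    return -np.sum(np.logaddexp(0.0, (r - R0) / a))   # log 1/(1+e^{(r-R0)/a})

pos = rng.uniform(-R0, R0, size=(A, 3))               # dilute random start
logp = log_density(pos)
while np.isinf(logp):                                  # redraw until cores separate
    pos = rng.uniform(-R0, R0, size=(A, 3))
    logp = log_density(pos)

for _ in range(20_000):                                # Metropolis updates
    i = rng.integers(A)
    trial = pos.copy()
    trial[i] += rng.normal(scale=0.4, size=3)
    logp_trial = log_density(trial)
    if np.log(rng.random()) < logp_trial - logp:
        pos, logp = trial, logp_trial

d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
np.fill_diagonal(d, np.inf)
print(d.min())  # >= r_core: the short-range correlation hole is built in
```

The accepted configurations can then be fed, event by event, into a Glauber-type multiple-scattering calculation, which is where the correlation-induced change in collision-number fluctuations shows up.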
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different cell types in motor cortex due to transcranial magnetic stimulation. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict detailed neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to predict activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also predicts differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
Following a brief review of current efforts to identify the neuronal correlates of conscious processing (NCCP), an attempt is made to bridge the gap between the material neuronal processes and the immaterial dimensions of subjective experience. It is argued that this "hard problem" of consciousness research cannot be solved by only considering the neuronal underpinnings of cognition. The proposal is that the hard problem can be treated within a naturalistic framework if one considers not only the biological but also the socio-cultural dimensions of evolution. The argument is based on the following premises: perceptions are the result of a constructivist process that depends on priors. This applies both to perceptions of the outer world and to the perception of oneself. Social interactions between agents endowed with the cognitive abilities of humans generated immaterial realities, addressed as social or cultural realities. This novel class of realities assumed the role of priors for the perception of oneself and the embedding world. A natural consequence of these extended perceptions is a dualist classification of observables into material and immaterial phenomena, nurturing the concept of ontological substance dualism. It is argued that perceptions shaped by socio-cultural priors lead to the construction of a self-model that has both a material and an immaterial dimension. As priors are implicit and not amenable to conscious recollection, the perceived immaterial dimension is experienced as veridical and not derivable from material processes, which is the hallmark of the hard problem. These considerations make the hard problem appear as the result of cognitive constructs that are amenable to naturalistic explanations in an evolutionary framework.
Convolutional neural networks (CNNs) are among the most successful computer vision systems for object recognition. Furthermore, CNNs have major applications in understanding the nature of visual representations in the human brain. Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from those of humans. Specifically, there is a major debate about whether CNNs primarily rely on surface regularities of objects, or whether they are capable of exploiting the spatial arrangement of features, similar to humans. Here, we develop a novel feature-scrambling approach to explicitly test whether CNNs use the spatial arrangement of features (i.e. object parts) to classify objects. We combine this approach with a systematic manipulation of the effective receptive field sizes of CNNs as well as minimal recognizable configurations (MIRCs) analysis. In contrast to much previous literature, we provide evidence that CNNs are in fact capable of using relatively long-range spatial relationships for object classification. Moreover, the extent to which CNNs use spatial relationships depends heavily on the dataset, e.g. texture vs. sketch. In fact, CNNs even use different strategies for different classes within heterogeneous datasets (ImageNet), suggesting CNNs have a continuous spectrum of classification strategies. Finally, we show that CNNs learn the spatial arrangement of features only up to an intermediate level of granularity, which suggests that intermediate rather than global shape features provide the optimal trade-off between sensitivity and specificity in object classification. These results provide novel insights into the nature of CNN representations and the extent to which they rely on the spatial arrangement of features for object classification.
We introduce a novel technique that utilizes a physics-driven deep learning method to reconstruct the dense matter equation of state (EoS) from neutron star observables, particularly the masses and radii. The proposed framework involves two neural networks: one that optimizes the EoS using automatic differentiation in an unsupervised learning scheme, and a pre-trained network that solves the Tolman–Oppenheimer–Volkoff (TOV) equations. The gradient-based optimization process incorporates a Bayesian picture into the proposed framework. The reconstructed EoS is shown to be consistent with the results from conventional methods. Furthermore, the resulting tidal deformability is in agreement with the limits obtained from the gravitational wave event GW170817.
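The pre-trained network in this framework stands in for a conventional TOV integrator. A minimal sketch of such a conventional solver is given below, assuming a simple Γ = 2 polytrope in geometrized units (G = c = M☉ = 1); the parameter values (K = 100, central density 1.28e-3) are a classic textbook test case, not the EoS of the paper:

```python
import numpy as np

def tov_rhs(r, y, K=100.0, gamma=2.0):
    """Right-hand side of the TOV equations for a polytropic EoS.

    y = (P, m): pressure and enclosed gravitational mass at radius r.
    Geometrized units G = c = M_sun = 1; the EoS is P = K * rho^gamma,
    with total energy density eps = rho + P / (gamma - 1).
    """
    P, m = y
    if P <= 0.0:
        return np.array([0.0, 0.0])
    rho = (P / K) ** (1.0 / gamma)
    eps = rho + P / (gamma - 1.0)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return np.array([dPdr, dmdr])

def solve_tov(rho_c, dr=1e-3, K=100.0, gamma=2.0):
    """Integrate outward with RK4 until the pressure (nearly) vanishes."""
    r = 1e-6
    Pc = K * rho_c**gamma
    y = np.array([Pc, 4.0 / 3.0 * np.pi * r**3 * rho_c])
    while y[0] > 1e-12 * Pc:
        k1 = tov_rhs(r, y, K, gamma)
        k2 = tov_rhs(r + dr / 2, y + dr / 2 * k1, K, gamma)
        k3 = tov_rhs(r + dr / 2, y + dr / 2 * k2, K, gamma)
        k4 = tov_rhs(r + dr, y + dr * k3, K, gamma)
        y = y + dr / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return r, y[1]  # stellar radius and gravitational mass (code units)

R, M = solve_tov(rho_c=1.28e-3)
```

For this standard configuration the integration yields a star of roughly 1.4 solar masses, which is why it is widely used as a correctness check for TOV solvers.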
Using an advanced version of the hadron resonance gas model we have found several remarkable irregularities at chemical freeze-out. The most prominent of them are two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, total pion number per baryon, and thermal pion number per baryon, which we found at center-of-mass energies of 3.6–4.9 GeV and 7.6–10 GeV. The low-energy set of quasi-plateaus was predicted a long time ago. On the basis of the generalized shock adiabat model we demonstrate that the low-energy correlated quasi-plateaus give evidence for the anomalous thermodynamic properties of the mixed phase at its boundary to the quark-gluon plasma. The question is whether the high-energy correlated quasi-plateaus are also related to some kind of mixed phase. To answer this question we employ the results of a systematic meta-analysis of the quality of data description of 10 existing event generators of nucleus-nucleus collisions in the range of center-of-mass collision energies from 3.1 GeV to 17.3 GeV. These generators are divided into two groups: the first group includes the generators which account for quark-gluon plasma formation during nuclear collisions, while the second group includes those which do not assume quark-gluon plasma formation in such collisions. Comparing the quality of data description of more than a hundred different data sets of strange hadrons by these two groups of generators, we find two regions of equal quality of data description, located at center-of-mass collision energies of 4.3–4.9 GeV and 10.0–13.5 GeV. We interpret these two regions of equal quality of data description as regions of hadron-quark-gluon mixed phase formation. Such a conclusion is strongly supported by the irregularities in the collision energy dependence of the experimental ratios of the Lambda hyperon number per proton and the positive kaon number per Lambda hyperon.
Although it is currently unclear whether these regions belong to the same mixed phase or not, there are arguments that the most probable collision energy range to probe the (tri)critical endpoint of the QCD phase diagram is 12–14 GeV.
Neurons collect their inputs from other neurons by sending out arborized dendritic structures. However, the relationship between the shape of dendrites and the precise organization of synaptic inputs in the neural tissue remains unclear. Inputs could be distributed in tight clusters, entirely randomly or else in a regular grid-like manner. Here, we analyze dendritic branching structures using a regularity index R, based on average nearest neighbor distances between branch and termination points, characterizing their spatial distribution. We find that the distributions of these points depend strongly on cell types, indicating possible fundamental differences in synaptic input organization. Moreover, R is independent of cell size and we find that it is only weakly correlated with other branching statistics, suggesting that it might reflect features of dendritic morphology that are not captured by commonly studied branching statistics. We then use morphological models based on optimal wiring principles to study the relation between input distributions and dendritic branching structures. Using our models, we find that branch point distributions correlate more closely with the input distributions while termination points in dendrites are generally spread out more randomly with a close to uniform distribution. We validate these model predictions with connectome data. Finally, we find that in spatial input distributions with increasing regularity, characteristic scaling relationships between branching features are altered significantly. In summary, we conclude that local statistics of input distributions and dendrite morphology depend on each other leading to potentially cell type specific branching features.
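A nearest-neighbour regularity index of the kind described above can be illustrated with the classic Clark–Evans statistic on synthetic 2D point sets (a simplification of the 3D branch/termination-point analysis; the point patterns below are illustrative, not data from the study):

```python
import numpy as np

def regularity_index(points):
    """Clark-Evans regularity index R for a 2D point set.

    R = (mean nearest-neighbour distance) / (expectation for a uniform
    random pattern of the same density, 1 / (2 * sqrt(density))).
    R ~ 1: random; R > 1: regular/grid-like; R < 1: clustered.
    """
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    mean_nn = dist.min(axis=1).mean()
    area = np.prod(pts.max(0) - pts.min(0))   # bounding-box density estimate
    density = len(pts) / area
    return mean_nn * 2.0 * np.sqrt(density)

rng = np.random.default_rng(0)
# Three synthetic patterns: grid-like, uniform random, and clustered.
grid = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
random_pts = rng.uniform(0, 10, size=(100, 2))
clusters = rng.normal(rng.uniform(0, 10, (5, 1, 2)), 0.1, size=(5, 20, 2)).reshape(-1, 2)
# Expected ordering: R(grid) > R(random) ~ 1 > R(clusters)
```

For a perfect unit grid the index is about 2, uniform random points give values near 1 (slightly inflated by edge effects), and tight clusters push it well below 1, which is the qualitative distinction the index is used for.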
We explore the parameter space of the two-flavor thermal quark–meson model and its Polyakov loop-extended version under the influence of a constant external magnetic field B. We investigate the behavior of the pseudo critical temperature for chiral symmetry breaking taking into account the likely dependence of two parameters on the magnetic field: the Yukawa quark–meson coupling and the parameter T0 of the Polyakov loop potential. Under the constraints that magnetic catalysis is realized at zero temperature and the chiral transition at B=0 is a crossover, we find that the quark–meson model leads to thermal magnetic catalysis for the whole allowed parameter space, in contrast to the present picture stemming from lattice QCD.
Poster presentation: Introduction Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regularly firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (such as pharmacological treatment or genetic manipulations) on these neuronal discharge patterns may be subtle. Applying a stochastic model offers a quantitative approach to revealing these changes. ...
A small-world network has been suggested to be an efficient solution for achieving both modular and global processing, a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to estimate the strength of coupling from the degree of synchrony. Visual responses were recorded in the visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of "hubs" in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding.
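The small-world signature invoked here (path lengths as short as in random networks, but higher clustering) can be sketched on synthetic graphs; the graph size and shortcut count below are illustrative, not the recorded networks:

```python
import random
from collections import deque

def clustering(adj):
    """Average clustering coefficient: the fraction of each node's
    neighbour pairs that are themselves connected, averaged over nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        nb = list(nbrs)
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n=60, k=4):
    """Ring of n nodes, each linked to its k nearest neighbours."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

random.seed(1)
sw = ring_lattice()            # high clustering, long paths
for _ in range(15):            # a few random shortcuts -> small world
    a, b = random.sample(range(60), 2)
    sw[a].add(b)
    sw[b].add(a)
# Small-world signature: clustering stays high, path length drops sharply
```

A handful of shortcuts collapses the average path length of the lattice while leaving most of its local clustering intact, which is exactly the coexistence of local and global processing the abstract describes.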
Working memory and conscious perception are thought to share similar brain mechanisms, yet recent reports of non-conscious working memory challenge this view. Combining visual masking with magnetoencephalography, we investigate the reality of non-conscious working memory and dissect its neural mechanisms. In a spatial delayed-response task, participants reported the location of a subjectively unseen target above chance-level after several seconds. Conscious perception and conscious working memory were characterized by similar signatures: a sustained desynchronization in the alpha/beta band over frontal cortex, and a decodable representation of target location in posterior sensors. During non-conscious working memory, such activity vanished. Our findings contradict models that identify working memory with sustained neural firing, but are compatible with recent proposals of ‘activity-silent’ working memory. We present a theoretical framework and simulations showing how slowly decaying synaptic changes allow cell assemblies to go dormant during the delay, yet be retrieved above chance-level after several seconds.
The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years as a means to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T from chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of their relative connection probabilities between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to areas in the frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.
The interaction between Λ baryons and kaons/antikaons is a crucial ingredient for the strangeness S=0 and S=−2 sector of the meson–baryon interaction at low energies. In particular, the ΛK‾ interaction might help in understanding the origin of states such as the Ξ(1620), whose nature and properties are still under debate. Experimental data on Λ–K and Λ–K‾ systems are scarce, leading to large uncertainties and tension between the available theoretical predictions constrained by such data. In this Letter we present the measurements of Λ–K+ ⊕ Λ‾–K− and Λ–K− ⊕ Λ‾–K+ correlations obtained in the high-multiplicity triggered data sample in pp collisions at √s = 13 TeV recorded by ALICE at the LHC. The correlation function for both pairs is modeled using the Lednický–Lyuboshits analytical formula and the corresponding scattering parameters are extracted. The Λ–K− ⊕ Λ‾–K+ correlations show the presence of several structures at relative momenta k⁎ above 200 MeV/c, compatible with the Ω baryon and the Ξ(1690) and Ξ(1820) resonances decaying into Λ–K− pairs. The low-k⁎ region in Λ–K− ⊕ Λ‾–K+ also exhibits the presence of the Ξ(1620) state, expected to couple strongly to the measured pair. The presented data give access to the ΛK+ and ΛK− strong interaction with unprecedented precision and deliver the first experimental observation of the Ξ(1620) decaying into ΛK−.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of stochastic growth and retraction achieve dendritic trees that are efficient in terms of both wiring and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the active efficient coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Poster presentation: Our work deals with the self-organization [1] of a memory structure that includes multiple hierarchical levels with massive recurrent communication within and between them. Such structure has to provide a representational basis for the relevant objects to be stored and recalled in a rapid and efficient way. Assuming that the object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the process of the structure formation and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits receiving common afferents and bounded by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, all types of synapses – bottom-up, lateral and top-down – being plastic. During the iterative learning, the lower layer of the system is exposed to the Gabor filter banks extracted from local points on the face images. Facing an unsupervised learning problem, the system is able to develop synaptic structure capturing local features and their relations on the lower level, as well as the global identity of the person at the higher level of processing, improving gradually its recognition performance with learning time. ...
Hypofunction of the N-methyl-D-aspartate receptor (NMDAR) has been implicated as a possible mechanism underlying cognitive deficits and aberrant neuronal dynamics in schizophrenia. To test this hypothesis, we first administered a sub-anaesthetic dose of S-ketamine (0.006 mg/kg/min) or saline in a single-blind crossover design in 14 participants while magnetoencephalographic data were recorded during a visual task. In addition, magnetoencephalographic data were obtained in a sample of unmedicated first-episode psychosis patients (n = 10) and in patients with chronic schizophrenia (n = 16) to allow for comparisons of neuronal dynamics in clinical populations versus NMDAR hypofunctioning. Magnetoencephalographic data were analysed at source-level in the 1–90 Hz frequency range in occipital and thalamic regions of interest. In addition, directed functional connectivity analysis was performed using Granger causality and feedback and feedforward activity was investigated using a directed asymmetry index. Psychopathology was assessed with the Positive and Negative Syndrome Scale. Acute ketamine administration in healthy volunteers led to similar effects on cognition and psychopathology as observed in first-episode and chronic schizophrenia patients. However, the effects of ketamine on high-frequency oscillations and their connectivity profile were not consistent with these observations. Ketamine increased amplitude and frequency of gamma-power (63–80 Hz) in occipital regions and upregulated low frequency (5–28 Hz) activity. Moreover, ketamine disrupted feedforward and feedback signalling at high and low frequencies leading to hypo- and hyper-connectivity in thalamo-cortical networks. In contrast, first-episode and chronic schizophrenia patients showed a different pattern of magnetoencephalographic activity, characterized by decreased task-induced high-gamma band oscillations and predominantly increased feedforward/feedback-mediated Granger causality connectivity. 
Accordingly, the current data have implications for theories of cognitive dysfunctions and circuit impairments in the disorder, suggesting that acute NMDAR hypofunction does not recreate alterations in neural oscillations during visual processing observed in schizophrenia.
Adjuvanted influenza vaccines constitute a key element towards inducing neutralizing antibody responses in populations with reduced responsiveness, such as infants and elderly subjects, as well as in devising antigen-sparing strategies. In particular, squalene-containing adjuvants have been observed to induce enhanced antibody responses, as well as having an influence on cross-reactive immunity. To explore the effects of adjuvanted vaccine formulations on antibody response and their relation to protein-specific immunity, we propose different mathematical models of antibody production dynamics in response to influenza vaccination. Data from ferrets immunized with commercial H1N1pdm09 vaccine antigen alone or formulated with different adjuvants were instrumental in adjusting the model parameters. While the complexity of the affinity maturation process is abridged, the proposed model is able to recapitulate the essential features of the observed dynamics. Our numerical results suggest that there exists a qualitative shift in protein-specific antibody response, with enhanced production of antibodies targeting the NA protein in adjuvanted versus non-adjuvanted formulations, in conjunction with a protein-independent boost that is over one order of magnitude larger for squalene-containing adjuvants. Furthermore, simulations predict that vaccines formulated with squalene-containing adjuvants are able to induce sustained antibody titers in a robust way, with little impact of the time interval between immunizations.
Evidence from anatomical and functional imaging studies has highlighted major modifications of cortical circuits during adolescence. These include reductions of gray matter (GM), increases in the myelination of cortico-cortical connections and changes in the architecture of large-scale cortical networks. It is currently unclear, however, how the ongoing developmental processes impact the folding of the cerebral cortex and how changes in gyrification relate to the maturation of gray and white matter (WM) volume, thickness and surface area. In the current study, we acquired high-resolution (3 Tesla) magnetic resonance imaging (MRI) data from 79 healthy subjects (34 males and 45 females) between the ages of 12 and 23 years and performed whole-brain analysis of cortical folding patterns with the gyrification index (GI). In addition to GI values, we obtained estimates of cortical thickness, surface area, and GM and WM volume, which permitted correlations with changes in gyrification. Our data show pronounced and widespread reductions in GI values during adolescence in several cortical regions, including precentral, temporal and frontal areas. Decreases in gyrification overlap only partially with changes in the thickness, volume and surface of GM and were characterized overall by a linear developmental trajectory. Our data suggest that the observed reductions in GI values represent an additional, important modification of the cerebral cortex during late brain maturation which may be related to cognitive development.
A novel method for identifying the nature of QCD transitions in heavy-ion collision experiments is introduced. PointNet based Deep Learning (DL) models are developed to classify the equation of state (EoS) that drives the hydrodynamic evolution of the system created in Au-Au collisions at 10 AGeV. The DL models were trained and evaluated in different hypothetical experimental situations. A decreased performance is observed when more realistic experimental effects (acceptance cuts and decreased resolutions) are taken into account. It is shown that the performance can be improved by combining multiple events to make predictions. The PointNet based models trained on the reconstructed tracks of charged particles from the CBM detector simulation discriminate a crossover transition from a first order phase transition with an accuracy of up to 99.8%. The models were subjected to several tests to evaluate the dependence of their performance on the centrality of the collisions and the physical parameters of the fluid dynamic simulations. The models are shown to work over a broad range of centralities (b = 0–7 fm). However, the performance is found to improve for central collisions (b = 0–3 fm). There is a drop in performance when the model parameters lead to a reduced duration of the fluid dynamic evolution or when a smaller fraction of the medium undergoes the transition. These effects are due to the limitations of the underlying physics, and the DL models are shown to be superior in their discrimination performance in comparison to conventional mean observables.
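The gain from combining multiple events can be seen with a simple binomial argument: if a single-event classifier is correct with probability p, a majority vote over N independent events is correct far more often. The values of p and N below are illustrative, not the paper's numbers:

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent per-event predictions
    (each correct with probability p) yields the correct class label.
    Assumes odd n, so there are no ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

single = 0.8                                   # per-event accuracy (illustrative)
combined = majority_vote_accuracy(single, 11)  # vote over 11 events
# combining 11 events pushes the accuracy close to 1
```

Note that the argument assumes the per-event errors are independent; correlated errors (e.g. from a shared detector bias) would reduce the improvement.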
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such a primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in the relativistic hydrodynamic simulations of heavy ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such EoS-meter is model-independent and insensitive to other simulation inputs including the initial conditions for hydrodynamic simulations.
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
Relying on the existing estimates for the production cross sections of mini black holes in models with large extra dimensions, we review strategies for identifying those objects at collider experiments. We further consider a possible stable final state of such black holes and discuss their characteristic signatures.
Charged-particle spectra at midrapidity are measured in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV and presented in centrality classes ranging from most central (0–5%) to most peripheral (95–100%) collisions. Possible medium effects are quantified using the nuclear modification factor (RAA) by comparing the measured spectra with those from proton–proton collisions, scaled by the number of independent nucleon–nucleon collisions obtained from a Glauber model. At large transverse momenta (8 < pT < 20 GeV/c), the average RAA is found to increase from about 0.15 in 0–5% central to a maximum value of about 0.8 in 75–85% peripheral collisions, beyond which it falls off strongly to below 0.2 for the most peripheral collisions. Furthermore, RAA initially exhibits a positive slope as a function of pT in the 8–20 GeV/c interval, while for collisions beyond the 80% class the slope is negative. To reduce uncertainties related to event selection and normalization, we also provide the ratio of RAA in adjacent centrality intervals. Our results in peripheral collisions are consistent with a PYTHIA-based model without nuclear modification, demonstrating that biases caused by the event selection and collision geometry can lead to the apparent suppression in peripheral collisions. This explains the unintuitive observation that RAA is below unity in peripheral Pb–Pb, but equal to unity in minimum-bias p–Pb collisions despite similar charged-particle multiplicities.
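For reference, the nuclear modification factor used in this measurement follows the standard definition, with ⟨Ncoll⟩ taken from the Glauber model as stated above:

```latex
R_{\mathrm{AA}}(p_{\mathrm{T}}) =
  \frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}p_{\mathrm{T}}}
       {\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{\mathrm{pp}}/\mathrm{d}p_{\mathrm{T}}}
```

so RAA = 1 corresponds to Pb–Pb spectra behaving like an incoherent superposition of independent nucleon–nucleon collisions, while RAA < 1 indicates suppression.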
Dirac spectrum representations of the Polyakov loop fluctuations are derived on the temporally odd-number lattice, where the temporal length is odd with the periodic boundary condition. We investigate the Polyakov loop fluctuations based on these analytical relations. It is semi-analytically and numerically found that the low-lying Dirac eigenmodes contribute little to the Polyakov loop fluctuations, which are a sensitive probe of quark deconfinement. Our results suggest no direct one-to-one correspondence between quark confinement and chiral symmetry breaking in QCD.
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. One of the central questions in neuroscience is how neural activity is organized across different spatial and temporal scales. As larger populations oscillate and synchronize at lower frequencies and smaller ensembles are active at higher frequencies, a cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. Although various experiments have revealed amplitude-to-amplitude and phase-to-phase coupling, the most common and most celebrated result is that the phase of the lower frequency component modulates the amplitude of the higher frequency component. Over the past five years, the number of experimental works finding such phase-amplitude coupling in LFP, ECoG, EEG and MEG has grown tremendously (summarized in [1]). We suggest that although the mechanism of cross-frequency coupling (CFC) is theoretically very tempting, the current analysis methods might overestimate any physiological CFC actually evident in the signals of LFP, ECoG, EEG and MEG. In particular, we point out three conceptual problems in assessing the components of a time series and their correlations. Although we focus on phase-amplitude coupling, most of our argument is relevant for any type of coupling. 1) The first conceptual problem is related to isolating physiological frequency components of the recorded signal. The key point is to notice that there are many different mathematical representations of a time series, but the physical interpretation we make of them depends on the choice of the components to be analyzed. In particular, when one isolates the components by Fourier-representation based filtering, it is the width of the filtering bands that defines what we consider as our components and how their power or group phase changes in time.
We will discuss clear-cut examples where the interpretation of the existence of CFC depends on the width of the filtering process. 2) A second problem concerns the origin of spectral correlations as detected by current cross-frequency analysis. It is known that non-stationarities are associated with spectral correlations in Fourier space. Therefore, there are two possibilities regarding the interpretation of any observed CFC. One scenario is that basic neuronal mechanisms indeed generate an interaction across different time scales (or frequencies), resulting in processes with non-stationary features. The other, problematic possibility is that unspecific non-stationarities can also be associated with spectral correlations, which in turn will be detected by cross-frequency measures even if, physiologically, there is no causal interaction between the frequencies. 3) We discuss the role of non-linearities as generators of cross-frequency interactions. As an example, we performed a phase-amplitude coupling analysis of two nonlinearly related signals, atmospheric noise and its square (Figure 1), observing an enhancement of phase-amplitude coupling in the second signal while no pattern is observed in the first. Finally, we discuss some minimal conditions that need to be tested to resolve some of the ambiguities noted here. In summary, we simply want to point out that finding a significant cross-frequency pattern does not always imply that there indeed is a physiological cross-frequency interaction in the brain.
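The kind of phase-amplitude coupling analysis discussed above can be sketched as follows: extract band-limited analytic signals by FFT masking, then compute a normalized mean-vector-length modulation index. All signal parameters here are synthetic, chosen only to contrast a coupled with an uncoupled signal:

```python
import numpy as np

def band_analytic(x, fs, f_lo, f_hi):
    """Analytic signal restricted to [f_lo, f_hi]: zero all other FFT
    bins, double the retained positive frequencies, inverse-transform."""
    spec = np.fft.fft(x)
    freqs = np.fft.fftfreq(len(x), 1.0 / fs)
    out = np.zeros_like(spec)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    out[band] = 2.0 * spec[band]
    return np.fft.ifft(out)

def modulation_index(x, fs, phase_band, amp_band):
    """Normalized mean-vector-length index: ~0 = no coupling, up to ~1."""
    phase = np.angle(band_analytic(x, fs, *phase_band))
    amp = np.abs(band_analytic(x, fs, *amp_band))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

fs = 1000.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(t.size)
slow = np.sin(2 * np.pi * 6 * t)                    # 6 Hz phase-providing signal
# Coupled: 60 Hz amplitude modulated by the 6 Hz phase; uncoupled: constant envelope.
coupled = slow + 0.5 * (1 + np.cos(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 60 * t) + noise
uncoupled = slow + 0.5 * np.sin(2 * np.pi * 60 * t) + noise
mi_c = modulation_index(coupled, fs, (4, 8), (50, 70))
mi_u = modulation_index(uncoupled, fs, (4, 8), (50, 70))
```

Note how the chosen band edges (4–8 Hz and 50–70 Hz) enter the result directly, which is precisely the filter-width dependence criticized in point 1 above.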
Anisotropic flow and flow fluctuations of identified hadrons in Pb–Pb collisions at √sNN = 5.02 TeV
(2023)
The first measurements of elliptic flow of π±, K±, p+p¯¯¯, K0S, Λ+Λ¯¯¯¯, ϕ, Ξ−+Ξ+, and Ω−+Ω+ using multiparticle cumulants in Pb−Pb collisions at sNN−−−√ = 5.02 TeV are presented. Results obtained with two- (v2{2}) and four-particle cumulants (v2{4}) are shown as a function of transverse momentum, pT, for various collision centrality intervals. Combining the data for both v2{2} and v2{4} also allows us to report the first measurements of the mean elliptic flow, elliptic flow fluctuations, and relative elliptic flow fluctuations for various hadron species. These observables probe the event-by-event eccentricity fluctuations in the initial state and the contributions from the dynamic evolution of the expanding quark-gluon plasma. The characteristic features observed in previous pT-differential anisotropic flow measurements for identified hadrons with two-particle correlations, namely the mass ordering at low pT and the approximate scaling with the number of constituent quarks at intermediate pT, are similarly present in the four-particle correlations and the combinations of v2{2} and v2{4}. In addition, a particle species dependence of flow fluctuations is observed that could indicate a significant contribution from final state hadronic interactions. The comparison between experimental measurements and CoLBT model calculations, which combine the various physics processes of hydrodynamics, quark coalescence, and jet fragmentation, illustrates their importance over a wide pT range.
Fluctuations of anisotropic flow in lead-lead collisions at LHC energies arising in HYDJET++model are studied. It is shown that intrinsic fluctuations of the flow which appear mainly because of the fluctuations of particle multiplicity, momenta and coordinates are insufficient to match the measured experimental data, provided the eccentricity of the freeze-out hypersurface is fixed at any given impact parameter b. However, when the variations of the eccentricity in HYDJET++ are taken into account, the agreement between the model results and the data is drastically improved. Both model calculations and the data are filtered through the unfolding procedure. This procedure eliminates the non-flow fluctuations to a higher degree, thus indicating a dynamical origin of the flow fluctuations in HYDJET++ event generator.
The elliptic, v2, triangular, v3, and quadrangular, v4, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at √sNN=2.76 TeV with the ALICE detector at the Large Hadron Collider. Results obtained with the event plane and four-particle cumulant methods are reported for the pseudo-rapidity range |η|<0.8 at different collision centralities and as a function of transverse momentum, pT, out to pT=20 GeV/c. The observed non-zero elliptic and triangular flow depends only weakly on transverse momentum for pT>8 GeV/c. The small pT dependence of the difference between elliptic flow results obtained from the event plane and four-particle cumulant methods suggests a common origin of flow fluctuations up to pT=8 GeV/c. The magnitude of the (anti-)proton elliptic and triangular flow is larger than that of pions out to at least pT=8 GeV/c indicating that the particle type dependence persists out to high pT.
We report the first results of elliptic (v2), triangular (v3) and quadrangular flow (v4) of charged particles in Pb-Pb collisions at sNN−−−√=5.02 TeV with the ALICE detector at the CERN Large Hadron Collider. The measurements are performed in the central pseudorapidity region |η|<0.8 and for the transverse momentum range 0.2<pT<5 GeV/c. The anisotropic flow is measured using two-particle correlations with a pseudorapidity gap greater than one unit and with the multi-particle cumulant method. Compared to results from Pb-Pb collisions at sNN−−−√=2.76 TeV, the anisotropic flow coefficients v2, v3 and v4 are found to increase by (3.0±0.6)%, (4.3±1.4)% and (10.2±3.8)%, respectively, in the centrality range 0-50%. This increase can be attributed mostly to an increase of the average transverse momentum between the two energies. The measurements are found to be compatible with hydrodynamic model calculations. This comparison provides a unique opportunity to test the validity of the hydrodynamic picture and the power to further discriminate between various possibilities for the temperature dependence of shear viscosity to entropy density ratio of the produced matter in heavy-ion collisions at the highest energies.
Measurements of elliptic (v2) and triangular (v3) flow coefficients of π±, K±, p+p¯¯¯, K0S, and Λ+Λ¯¯¯¯ obtained with the scalar product method in Xe-Xe collisions at sNN−−−√ = 5.44 TeV are presented. The results are obtained in the rapidity range |y| < 0.5 and reported as a function of transverse momentum, pT, for several collision centrality classes. The flow coefficients exhibit a particle mass dependence for pT < 3 GeV/c, while a grouping according to particle type (i.e., meson and baryon) is found at intermediate transverse momenta (3 < pT < 8 GeV/c). The magnitude of the baryon v2 is larger than that of mesons up to pT = 6 GeV/c. The centrality dependence of the shape evolution of the pT-differential v2 is studied for the various hadron species. The v2 coefficients of π±, K±, and p+p¯¯¯ are reproduced by MUSIC hydrodynamic calculations coupled to a hadronic cascade model (UrQMD) for pT < 1 GeV/c. A comparison with vn measurements in the corresponding centrality intervals in Pb-Pb collisions at sNN−−−√ = 5.02 TeV yields an enhanced v2 in central collisions and diminished value in semicentral collisions.
The elliptic (v2), triangular (v3), and quadrangular (v4) flow coefficients of π±, K±, p+p¯¯¯,Λ+Λ¯¯¯¯,K0S, and the ϕ-meson are measured in Pb-Pb collisions at s√NN=5.02 TeV. Results obtained with the scalar product method are reported for the rapidity range |y| < 0.5 as a function of transverse momentum, pT, at different collision centrality intervals between 0–70%, including ultra-central (0–1%) collisions for π±, K±, and p+p¯¯¯. For pT < 3 GeV/c, the flow coefficients exhibit a particle mass dependence. At intermediate transverse momenta (3 < pT < 8–10 GeV/c), particles show an approximate grouping according to their type (i.e., mesons and baryons). The ϕ-meson v2, which tests both particle mass dependence and type scaling, follows p+p¯¯¯ v2 at low pT and π± v2 at intermediate pT. The evolution of the shape of vn(pT) as a function of centrality and harmonic number n is studied for the various particle species. Flow coefficients of π±, K±, and p+p¯¯¯ for pT < 3 GeV/c are compared to iEBE-VISHNU and MUSIC hydrodynamical calculations coupled to a hadronic cascade model (UrQMD). The iEBE-VISHNU calculations describe the results fairly well for pT < 2.5 GeV/c, while MUSIC calculations reproduce the measurements for pT < 1 GeV/c. A comparison to vn coefficients measured in Pb-Pb collisions at sNN−−−√=2.76 TeV is also provided.
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a pT region inaccessible by direct jet identification. In these measurements pseudorapidity (Δη) and azimuthal (Δφ) differences are used to extract the shape of the near-side peak formed by particles associated to a higher pT trigger particle (1<pT,trig< 8 GeV/c). A combined fit of the near-side peak and long-range correlations is applied to the data allowing the extraction of the centrality evolution of the peak shape in Pb-Pb collisions at sNN−−−√ = 2.76 TeV. A significant broadening of the peak in the Δη direction at low pT is found from peripheral to central collisions, which vanishes above 4 GeV/c, while in the Δφ direction the peak is almost independent of centrality. For the 10% most central collisions and 1<pT,assoc< 2 GeV/c, 1<pT,trig< 3 GeV/c a novel feature is observed: a depletion develops around the centre of the peak. The results are compared to pp collisions at the same centre of mass energy and to AMPT model simulations. The comparison to the investigated models suggests that the broadening and the development of the depletion is connected to the strength of radial and longitudinal flow.
Observations show that, at the beginning of their existence, neutron stars are accelerated briskly to velocities of up to a thousand kilometers per second. We argue that this remarkable effect can be explained as a manifestation of quantum anomalies on astrophysical scales. To theoretically describe the early stage in the life of neutron stars we use hydrodynamics as a systematic effective-field-theory framework. Within this framework, anomalies of the Standard Model of particle physics as underlying microscopic theory imply the presence of a particular set of transport terms, whose form is completely fixed by theoretical consistency. The resulting chiral transport effects in proto-neutron stars enhance neutrino emission along the internal magnetic field, and the recoil can explain the order of magnitude of the observed kick velocities.
We investigate charmonium production in Pb + Pb collisions at LHC beam energy Elab=2.76A TeV at fixed-target experiment (√sNN = 72 GeV). In the frame of a transport approach including cold and hot nuclear matter effects on charmonium evolution, we focus on the antishadowing effect on the nuclear modification factors RAA and rAA for the J/ψ yield and transverse momentum. The yield is more suppressed at less forward rapidity (ylab ≃ 2) than that at very forward rapidity (ylab ≃ 4) due to the shadowing and antishadowing in different rapidity bins.
One of important consequences of Hagedorn statistical bootstrap model is the prediction of limiting temperature Tcrit for hadron systems colloquially known as Hagedorn temperature. According to Hagedorn, this effect should be observed in hadron spectra obtained in infinite equilibrated nuclear matter rather than in relativistic heavy-ion collisions. We present results of microscopic model calculations for the infinite nuclear matter, simulated by a box with periodic boundary conditions. The limiting temperature indeed appears in the model calculations. Its origin is traced to strings and many-body decays of resonances.
The search for short-lived particles is usually the final stage in the chain of event reconstruction and precedes event selection when operating in online mode or physics analysis when operating in offline mode. Most often such short-lived particles are neutral and their search and reconstruction is carried out using their daughter charged particles resulting from their decay.
The use of the missing mass method makes it possible to find and analyze also decays of charged short-lived particles, when one of the daughter particles is neutral and is not registered in the detector system. One of the most known examples of such decays is the decay Σ− → nπ−.
In this paper, we discuss in detail the missing mass method, which was implemented as part of the KF Particle Finder package for the search and analysis of short-lived particles, and describe the use of the method in the STAR experiment (BNL, USA).
The method was used to search for pion (π± → μ±ν) and kaon (K± → μ±ν and K± → π±π0) decays online on the HLT farm in the express production chain. An important feature of the express production chain in the STAR experiment is that it allows one to start calibration, production, and analysis of the data immediately after receiving them.
Here, the particular features and results of the real-time application of the method within the express processing of data obtained in the BES-II program at a beam energy of 3.85 GeV/n when working with a fixed target are presented and discussed.
Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response is observed since reverse correlation is used in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
Author Summary: The statistics of our visual world is dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized through evolution and throughout our lifespan for such stimuli. Yet, the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on predicted response properties of simple cells which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large quantities of such cells are observed since new techniques (reverse correlation) are used. Without occlusions, they are only obtained for specific settings and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response naturally emerges as soon as occlusions are considered. In comparison with recent in vivo experiments we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
Visual selective attention and visual working memory (WM) share the same capacity-limited resources. We investigated whether and how participants can cope with a task in which these 2 mechanisms interfere. The task required participants to scan an array of 9 objects in order to select the target locations and to encode the items presented at these locations into WM (1 to 5 shapes). Determination of the target locations required either few attentional resources (“popout condition”) or an attention-demanding serial search (“non pop-out condition”). Participants were able to achieve high memory performance in all stimulation conditions but, in the non popout conditions, this came at the cost of additional processing time. Both empirical evidence and subjective reports suggest that participants invested the additional time in memorizing the locations of all target objects prior to the encoding of their shapes into WM. Thus, they seemed to be unable to interleave the steps of search with those of encoding. We propose that the memory for target locations substitutes for perceptual pop-out and thus may be the key component that allows for flexible coping with the common processing limitations of visual WM and attention. The findings have implications for understanding how we cope with real-life situations in which the demands on visual attention and WM occur simultaneously. Keywords: attention, working memory, interference, encoding strategies
In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left or rightward) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.
Correction to: Nature Communications https://doi.org/10.1038/s41467-017-01045-x, published online 31 October 2017
It has come to our attention that we did not specify whether the stimulation magnitudes we report in this Article are peak amplitudes or peak-to-peak. All references to intensity given in mA in the manuscript refer to peak-to-peak amplitudes, except in Fig. 2, where the model is calibrated to 1 mA peak amplitude, as stated. In the original version of the paper we incorrectly calibrated the computational models to 1 mA peak-to-peak, rather than 1 mA peak amplitude. This means that we divided by a value twice as large as we should have. The correct estimated fields are therefore twice as large as shown in the original Fig. 2 and Supplementary Fig. 11. The corrected figures are now properly calibrated to 1mA peak amplitude. Furthermore, the sentence in the first paragraph of the Results section ‘Intensity ranged from 0.5 to 2.5 mA (current density 0.125–0.625 mA mA/cm2), which is stronger than in previous reports’, should have read ‘Intensity ranged from 0.5 to 2.5 mA peak to peak (peak current density 0.0625–0.3125 mA/cm2), which is stronger than in previous reports.’ These errors do not affect any of the Article’s conclusions. Correct versions of Fig. 2 and Supplementary Fig. 11 are presented below as Figs. 1, 2.
Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal pattern, had been believed to be a major component of neuronal activity. However, the discussion if coordinated activity really exists remained heated and controversial. A major uncertainty was that many analysis approaches either ignored the auto-structure of the spiking activity, assumed a very simplified model (poissonian firing), or changed the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity is occurring beyond chance can be made false if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike-trains modeled as renewal processes. We considered Gamma processes with different shape parameters as well as renewal processes in which the ISI distribution is log-normal. For Gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width and also investigated how they depend on the firing rate, and on rate difference between the neurons. We used Monte-Carlo simulations to estimate the whole distribution for these parameters and also for other values of gamma. Moreover, we considered the effect of dithering for both of these processes and saw that while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence count distribution depends very critically and in a non-trivial way on the detailed properties of the inter-spike interval distribution, 2) the dependencies of the Fano factor on the coefficient of variation of the ISI distribution are complex and mostly non-monotonic. 
Moreover, the Fano factor depends on the very detailed properties of the individual point processes, and cannot be predicted by the CV alone. Hence, given a recorded data set, the estimated value of CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence count distribution, and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases and especially for complex synchronous and spatiotemporal pattern across many neurons, spike jittering increased the likelihood of false positive finding very strongly. Last, we discuss a procedure [1] that considers the complete auto-structure of each individual spike-train for testing whether synchrony firing occurs at chance and therefore overcomes the danger of an increased level of false positives.
The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson– Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H10/C10 chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stemloop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs.
We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model learns how to efficiently encode the incoming visual signals generated by an object moving in 3-D through sparse coding. Simultaneously, it learns how to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach on the humanoid robot iCub using simulations. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally having the same objective of maximizing the coding efficiency of the visual system and that they can be learned and calibrated jointly through AEC.
We present measurements of the azimuthal dependence of charged jet production in central and semi-central √sNN=2.76 TeV Pb–Pb collisions with respect to the second harmonic event plane, quantified as v2ch jet. Jet finding is performed employing the anti-kT algorithm with a resolution parameter R=0.2 using charged tracks from the ALICE tracking system. The contribution of the azimuthal anisotropy of the underlying event is taken into account event-by-event. The remaining (statistical) region-to-region fluctuations are removed on an ensemble basis by unfolding the jet spectra for different event plane orientations independently. Significant non-zero v2ch jet is observed in semi-central collisions (30–50% centrality) for 20<pTch jet<90 GeV/c. The azimuthal dependence of the charged jet production is similar to the dependence observed for jets comprising both charged and neutral fragments, and compatible with measurements of the v2 of single charged particles at high pT. Good agreement between the data and predictions from JEWEL, an event generator simulating parton shower evolution in the presence of a dense QCD medium, is found in semi-central collisions.
Angular correlations between heavy-flavour decay electrons and charged particles at mid-rapidity (|η|<0.8) are measured in p-Pb collisions at sNN−−−√ = 5.02 TeV. The analysis is carried out for the 0-20% (high) and 60-100% (low) multiplicity ranges. The jet contribution in the correlation distribution from high-multiplicity events is removed by subtracting the distribution from low-multiplicity events. An azimuthal modulation remains after removing the jet contribution, similar to previous observations in two-particle angular correlation measurements for light-flavour hadrons. A Fourier decomposition of the modulation results in a positive second-order coefficient (v2) for heavy-flavour decay electrons in the transverse momentum interval 1.5<pT<4 GeV/c in high-multiplicity events, with a significance larger than 5σ. The results are compared with those of charged particles at mid-rapidity and of inclusive muons at forward rapidity. The v2 measurement of open heavy-flavour particles at mid-rapidity in small collision systems could provide crucial information to help interpret the anisotropies observed in such systems.
The azimuthal (Δφ) correlation distributions between heavy-flavor decay electrons and associated charged particles are measured in pp and p−Pb collisions at sNN−−−√=5.02 TeV. Results are reported for electrons with transverse momentum 4<pT<16 GeV/c and pseudorapidity |η|<0.6. The associated charged particles are selected with transverse momentum 1<pT<7 GeV/c, and relative pseudorapidity separation with the leading electron |Δη|<1. The correlation measurements are performed to study and characterize the fragmentation and hadronization of heavy quarks. The correlation structures are fitted with a constant and two von Mises functions to obtain the baseline and the near- and away-side peaks, respectively. The results from p−Pb collisions are compared with those from pp collisions to study the effects of cold nuclear matter. In the measured trigger electron and associated particle kinematic regions, the two collision systems give consistent results. The Δφ distribution and the peak observables in pp and p−Pb collisions are compared with calculations from various Monte Carlo event generators.
The measurement of the azimuthal-correlation function of prompt D mesons with charged particles in pp collisions at s√=5.02 TeV and p–Pb collisions at sNN−−−√=5.02 TeV with the ALICE detector at the LHC is reported. The D0, D+, and D∗+ mesons, together with their charge conjugates, were reconstructed at midrapidity in the transverse momentum interval 3<pT<24 GeV/c and correlated with charged particles having pT>0.3 GeV/c and pseudorapidity |η|<0.8. The properties of the correlation peaks appearing in the near- and away-side regions (for Δφ≈0 and Δφ≈π, respectively) were extracted via a fit to the azimuthal correlation functions. The shape of the correlation functions and the near- and away-side peak features are found to be consistent in pp and p–Pb collisions, showing no modifications due to nuclear effects within uncertainties. The results are compared with predictions from Monte Carlo simulations performed with the PYTHIA, POWHEG+PYTHIA, HERWIG, and EPOS 3 event generators.
High shares of intermittent renewable power generation in a European electricity system will require flexible backup power generation on the dominant diurnal, synoptic, and seasonal weather timescales. The same three timescales are already covered by today’s dispatchable electricity generation facilities, which are able to follow the typical load variations on the intra-day, intra-week, and seasonal timescales. This work aims to quantify the changing demand for those three backup flexibility classes in emerging large-scale electricity systems, as they transform from low to high shares of variable renewable power generation. A weather-driven modelling is used, which aggregates eight years of wind and solar power generation data as well as load data over Germany and Europe, and splits the backup system required to cover the residual load into three flexibility classes distinguished by their respective maximum rates of change of power output. This modelling shows that the slowly flexible backup system is dominant at low renewable shares, but its optimized capacity decreases and drops close to zero once the average renewable power generation exceeds 50% of the mean load. The medium flexible backup capacities increase for modest renewable shares, peak at around a 40% renewable share, and then continuously decrease to almost zero once the average renewable power generation becomes larger than 100% of the mean load. The dispatch capacity of the highly flexible backup system becomes dominant for renewable shares beyond 50%, and reach their maximum around a 70% renewable share. For renewable shares above 70% the highly flexible backup capacity in Germany remains at its maximum, whereas it decreases again for Europe. This indicates that for highly renewable large-scale electricity systems the total required backup capacity can only be reduced if countries share their excess generation and backup power.
Bardeen black hole chemistry
(2019)
In the present paper we try to connect the Bardeen black hole with the concept of the recently proposed black hole chemistry. We study thermodynamic properties of the regular black hole with an anti-deSitter background. The negative cosmological constant Λ plays the role of the positive thermodynamic pressure of the system. After studying the thermodynamic variables, we derive the corresponding equation of state and we show that a neutral Bardeen-anti-deSitter black hole has similar phenomenology to the chemical Van der Waals fluid. This is equivalent to saying that the system exhibits criticality and a first order small/large black hole phase transition reminiscent of the liquid/gas coexistence.
Baryonic models of ultra-low-mass compact stars for the central compact object in HESS J1731-347
(2023)
The recent attempt on mass and radius inference of the central compact object within the supernova remnant HESS J1731-347 suggests for this object an unusually low mass of M=0.77−0.17+0.20M⊙ and a small radius of R=10.4−0.78+0.86km. We explore the ways such a result can be accommodated within models of dense matter with heavy baryonic degrees of freedom which are constrained by the multi-messenger observations. We find that to do so using only purely nucleonic models, one needs to assume a rather small value of the slope of symmetry energy Lsym. Once heavy baryons are included higher values of the slope Lsym become acceptable at the cost of a slightly reduced maximum mass of static configuration. These two scenarios are distinguished by the particle composition and will undergo different cooling scenarios. In addition, we show that the universalities of the I-Love-Q relations for static configurations can be extended to very low masses without loss in their accuracy.
We have investigated the systematic differences introduced when performing a Bayesian-inference analysis of the equation of state (EOS) of neutron stars employing either variable- or constant-likelihood functions. The former has the advantage of retaining the full information on the distributions of the measurements, making exhaustive usage of the data. The latter, on the other hand, has the advantage of a much simpler implementation and reduced computational costs. In both approaches, the EOSs have identical priors and have been built using the sound speed parameterization method so as to satisfy the constraints from X-ray and gravitational-wave observations, as well as those from chiral effective theory and perturbative quantum chromodynamics. In all cases, the two approaches lead to very similar results and the 90% confidence levels essentially overlap. Some differences do appear, but in regions where the probability density is extremely small, and are mostly due to the sharp cutoff on the binary tidal deformability, Λ̃ ≤ 720, set in the constant-likelihood approach. Our analysis has also produced two additional results. First, an inverse correlation between the normalized central number density, nc,TOV/ns, and the radius of a maximally massive star, RTOV. Second, and most importantly, it has confirmed the relation between the chirp mass and the binary tidal deformability. The importance of this result is that it relates the chirp mass, which is measured very accurately, and Λ̃, which contains important information on the EOS. Hence, when the chirp mass is measured in future detections, our relation can be used to set tight constraints on Λ̃.
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities, and can even do so if reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
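The Bayes-optimal benchmark the learner is compared against can be stated compactly: for independent Gaussian cues, each cue is weighted by its inverse variance (its reliability). A minimal sketch; the function name and interface are illustrative:

```python
import numpy as np

def integrate_cues(means, sigmas):
    """Bayes-optimal combination of independent Gaussian cues:
    weights proportional to inverse variances (reliabilities)."""
    means = np.asarray(means, dtype=float)
    inv_var = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = inv_var / inv_var.sum()
    estimate = float(np.dot(weights, means))
    sigma_combined = float(1.0 / np.sqrt(inv_var.sum()))
    return estimate, sigma_combined
```

With sigmas of (1, 10) the second cue receives a hundred times less weight than the first, which is the down-weighting of unreliable cues that the reinforcement learner is shown to acquire.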
The goal of heavy ion reactions at low beam energies is to explore the QCD phase diagram at high net baryon chemical potential. To relate experimental observations with a first order phase transition or a critical endpoint, dynamical approaches for the theoretical description have to be developed. In this summary of the corresponding plenary talk, the status of the dynamical modeling including the most recent advances is presented. The remaining challenges are highlighted and promising experimental measurements are pointed out.
Radiation damage following the ionising radiation of tissue has different scenarios and mechanisms depending on the projectiles or radiation modality. We investigate the radiation damage effects due to shock waves produced by ions. We analyse the strength of the shock wave capable of directly producing DNA strand breaks and, depending on the ion's linear energy transfer, estimate the radius from the ion's path, within which DNA damage by the shock wave mechanism is dominant. At much smaller values of linear energy transfer, the shock waves turn out to be instrumental in propagating reactive species formed close to the ion's path to large distances, successfully competing with diffusion.
Synchronous neuronal firing has been proposed as a potential neuronal code. To determine whether synchronous firing is really involved in different forms of information processing, one needs to directly compare the amount of synchronous firing due to various factors, such as different experimental or behavioral conditions. In order to address this issue, we present an extended version of the previously published method, NeuroXidence. The improved method incorporates bi- and multivariate testing to determine whether different factors result in synchronous firing occurring above the chance level. We demonstrate through the use of simulated data sets that bi- and multivariate NeuroXidence reliably and robustly detects joint-spike-events across different factors.
In this Letter we derive the gravity field equations by varying the action for an ultraviolet complete quantum gravity. Then we consider the case of a static source term and determine an exact black hole solution. As a result we find a regular spacetime geometry: in place of the conventional curvature singularity, extreme energy fluctuations of the gravitational field at small length scales provide an effective cosmological constant in a region locally described in terms of a de Sitter space. We show that the new metric coincides with the noncommutative-geometry-inspired Schwarzschild black hole. Indeed, we show that the ultraviolet complete quantum gravity generated by ordinary matter is the dual theory of ordinary Einstein gravity coupled to noncommutative smeared matter. In other words, we obtain further insights into the quantum gravity mechanism which improves Einstein gravity in the vicinity of curvature singularities. This corroborates the existing literature on the physics and phenomenology of noncommutative black holes.
Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction.
In this paper, we discuss the damping of density oscillations in dense nuclear matter in the temperature range relevant to neutron star mergers. This damping is due to bulk viscosity arising from the weak interaction “Urca” processes of neutron decay and electron capture. The nuclear matter is modelled in the relativistic density functional approach. The bulk viscosity reaches a resonant maximum close to the neutrino trapping temperature, then drops rapidly as temperature rises into the range where neutrinos are trapped in neutron stars. We investigate the bulk viscous dissipation timescales in a post-merger object and identify regimes where these timescales are as short as the characteristic timescale ∼10 ms, and, therefore, might affect the evolution of the post-merger object. Our analysis indicates that bulk viscous damping would be important at not too high temperatures of the order of a few MeV and densities up to a few times saturation density.
The procedure for the energy calibration of the high granularity electromagnetic calorimeter PHOS of the ALICE experiment is presented. The methods used to perform the relative gain calibration, to evaluate the geometrical alignment and the corresponding correction of the absolute energy scale, to obtain the nonlinearity correction coefficients and, finally, to calculate the time-dependent calibration corrections are discussed and illustrated by the PHOS performance in proton-proton (pp) collisions at √s = 13 TeV. After applying all corrections, the achieved mass resolutions for π0 and η mesons for pT > 1.7 GeV/c are σ(mπ0) = 4.56 ± 0.03 MeV/c² and σ(mη) = 15.3 ± 1.0 MeV/c², respectively.
In binocular rivalry, presentation of different images to the separate eyes leads to conscious perception alternating between the two possible interpretations every few seconds. During perceptual transitions, a stimulus emerging into dominance can spread in a wave-like manner across the visual field. These traveling waves of rivalry dominance have been successfully related to the cortical magnification properties and functional activity of early visual areas, including the primary visual cortex (V1). Curiously however, these traveling waves undergo a delay when passing from one hemifield to another. In the current study, we used diffusion tensor imaging (DTI) to investigate whether the strength of interhemispheric connections between the left and right visual cortex might be related to the delay of traveling waves across hemifields. We measured the delay in traveling wave times (ΔTWT) in 19 participants and repeated this test 6 weeks later to evaluate the reliability of our behavioral measures. We found large interindividual variability but also good test–retest reliability for individual measures of ΔTWT. Using DTI in connection with fiber tractography, we identified parts of the corpus callosum connecting functionally defined visual areas V1–V3. We found that individual differences in ΔTWT were reliably predicted by the diffusion properties of transcallosal fibers connecting left and right V1, but observed no such effect for neighboring transcallosal visual fibers connecting V2 and V3. Our results demonstrate that the anatomical characteristics of topographically specific transcallosal connections predict the individual delay of interhemispheric traveling waves, providing further evidence that V1 is an important site for neural processes underlying binocular rivalry.
Cell fate clusters in ICM organoids arise from cell fate heredity and division: a modelling approach
(2020)
During the mammalian preimplantation phase, cells undergo two subsequent cell fate decisions. During the first decision, the trophectoderm and the inner cell mass are formed. Subsequently, the inner cell mass segregates into the epiblast and the primitive endoderm. Inner cell mass organoids represent an experimental model system, mimicking the second cell fate decision. It has been shown that cells of the same fate tend to cluster stronger than expected for random cell fate decisions. Three major processes are hypothesised to contribute to the cell fate arrangements: (1) chemical signalling; (2) cell sorting; and (3) cell proliferation. In order to quantify the influence of cell proliferation on the observed cell lineage type clustering, we developed an agent-based model accounting for mechanical cell–cell interaction, i.e. adhesion and repulsion, cell division, stochastic cell fate decision and cell fate heredity. The model supports the hypothesis that initial cell fate acquisition is a stochastically driven process, taking place in the early development of inner cell mass organoids. Further, we show that the observed neighbourhood structures can emerge solely due to cell fate heredity during cell division.
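The role of cell fate heredity alone can be illustrated with a deliberately minimal 1-D toy model (not the paper's agent-based model, which also includes adhesion and repulsion mechanics): daughters inherit the mother's fate and remain adjacent to her, so same-fate clusters emerge without any signalling or sorting.

```python
import random

def grow_with_heredity(n_init=8, divisions=56, p_epi=0.5, rng=None):
    """Toy 1-D ICM: initial cells pick a fate stochastically (True/False);
    each division inserts a daughter next to the mother with her fate."""
    rng = rng or random.Random(0)
    cells = [rng.random() < p_epi for _ in range(n_init)]
    for _ in range(divisions):
        i = rng.randrange(len(cells))
        cells.insert(i, cells[i])  # daughter inherits fate, stays adjacent
    return cells

def same_fate_neighbour_fraction(cells):
    """Fraction of adjacent pairs sharing a fate - a simple clustering score."""
    pairs = list(zip(cells, cells[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)
```

Comparing the neighbour fraction of a grown population with a shuffled copy of the same cells separates the clustering caused by heredity from that expected for random arrangements of the same fate frequencies.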
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at √sNN = 5.02 TeV with the ALICE detector. Centrality classes are determined via the energy deposit in zero-degree neutron calorimeters, close to the beam direction, to minimise dynamical biases of the selection. The corresponding number of participants or binary nucleon-nucleon collisions is determined based on the particle production in the Pb-going rapidity region. Jets have been reconstructed in the central rapidity region from charged particles with the anti-kT algorithm for resolution parameters R=0.2 and R=0.4 in the transverse momentum range 20 to 120 GeV/c. The reconstructed jet momentum and yields have been corrected for detector effects and underlying-event background. In the five centrality bins considered, the charged jet production in p-Pb collisions is consistent with the production expected from binary scaling from pp collisions. The ratio of jet yields reconstructed with the two different resolution parameters is also independent of the centrality selection, demonstrating the absence of major modifications of the radial jet structure in the reported centrality classes.
The inclusive transverse momentum (pT) distributions of primary charged particles are measured in the pseudo-rapidity range |η|<0.8 as a function of event centrality in Pb–Pb collisions at √sNN = 2.76 TeV with ALICE at the LHC. The data are presented in the pT range 0.15<pT<50 GeV/c for nine centrality intervals from 70–80% to 0–5%. The results in Pb–Pb are presented in terms of the nuclear modification factor RAA using a pp reference spectrum measured at the same collision energy. We observe that the suppression of high-pT particles strongly depends on event centrality. The yield is most suppressed in central collisions (0–5%) with RAA≈0.13 at pT=6–7 GeV/c. Above pT=7 GeV/c, there is a significant rise in the nuclear modification factor, which reaches RAA≈0.4 for pT>30 GeV/c. In peripheral collisions (70–80%), only moderate suppression (RAA=0.6–0.7) and a weak pT dependence are observed. The measured nuclear modification factors are compared to other measurements and model calculations.
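The quantity behind these results, the nuclear modification factor, is defined as the per-event Pb–Pb yield divided by the binary-collision-scaled pp yield, R_AA(pT) = (dN_AA/dpT) / (⟨N_coll⟩ · dN_pp/dpT). A sketch of that bookkeeping, with illustrative array names:

```python
import numpy as np

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """R_AA(pT): per-event Pb-Pb yield over <N_coll> times the per-event
    pp yield in the same pT bins. R_AA = 1 corresponds to binary scaling
    (no medium effect); R_AA < 1 signals suppression."""
    yield_aa = np.asarray(yield_aa, dtype=float)
    yield_pp = np.asarray(yield_pp, dtype=float)
    return yield_aa / (n_coll * yield_pp)
```

In this convention the central-collision value RAA ≈ 0.13 quoted above means the measured yield is roughly one eighth of the binary-scaled pp expectation.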
The nuclear modification factor, RAA, of the prompt charmed mesons D0, D+ and D∗+, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass energy √sNN = 2.76 TeV in two transverse momentum intervals, 5<pT<8 GeV/c and 8<pT<16 GeV/c, and in six collision centrality classes. The RAA shows a maximum suppression of a factor of 5-6 in the 10% most central collisions. The suppression and its centrality dependence are compatible within uncertainties with those of charged pions. A comparison with the RAA of non-prompt J/ψ from B meson decays, measured by the CMS Collaboration, hints at a larger suppression of D mesons in the most central collisions.
We present a measurement of inclusive J/ψ production in p-Pb collisions at √sNN = 5.02 TeV as a function of the centrality of the collision, as estimated from the energy deposited in the Zero Degree Calorimeters. The measurement is performed with the ALICE detector down to zero transverse momentum, pT, in the backward (−4.46<ycms<−2.96) and forward (2.03<ycms<3.53) rapidity intervals in the dimuon decay channel and in the mid-rapidity region (−1.37<ycms<0.43) in the dielectron decay channel. The backward and forward rapidity intervals correspond to the Pb-going and p-going direction, respectively. The pT-differential J/ψ production cross section at backward and forward rapidity is measured for several centrality classes, together with the corresponding average ⟨pT⟩ and ⟨pT²⟩ values. The nuclear modification factor, QpPb, is presented as a function of centrality for the three rapidity intervals, and, additionally, at backward and forward rapidity, as a function of pT for several centrality classes. At mid- and forward rapidity, the J/ψ yield is suppressed up to 40% compared to that in pp interactions scaled by the number of binary collisions. The degree of suppression increases towards central p-Pb collisions at forward rapidity, and with decreasing pT of the J/ψ. At backward rapidity, the QpPb is compatible with unity within the total uncertainties, with an increasing trend from peripheral to central p-Pb collisions.
The inclusive production of the J/ψ and ψ(2S) charmonium states is studied as a function of centrality in p-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 8.16 TeV at the LHC. The measurement is performed in the dimuon decay channel with the ALICE apparatus in the centre-of-mass rapidity intervals −4.46 < ycms < −2.96 (Pb-going direction) and 2.03 < ycms < 3.53 (p-going direction), down to zero transverse momentum (pT). The J/ψ and ψ(2S) production cross sections are evaluated as a function of the collision centrality, estimated through the energy deposited in the zero degree calorimeter located in the Pb-going direction. The pT-differential J/ψ production cross section is measured at backward and forward rapidity for several centrality classes, together with the corresponding average ⟨pT⟩ and ⟨pT²⟩ values. The nuclear effects affecting the production of both charmonium states are studied using the nuclear modification factor. In the p-going direction, a suppression of the production of both charmonium states is observed, which seems to increase from peripheral to central collisions. In the Pb-going direction, however, the centrality dependence is different for the two states: the nuclear modification factor of the J/ψ increases from below unity in peripheral collisions to above unity in central collisions, while for the ψ(2S) it stays below or consistent with unity for all centralities with no significant centrality dependence. The results are compared with measurements in p-Pb collisions at √sNN = 5.02 TeV and no significant dependence on the energy of the collision is observed. Finally, the results are compared with theoretical models implementing various nuclear matter effects.
We report on the measurement of freeze-out radii for pairs of identical-charge pions measured in Pb–Pb collisions at √sNN = 2.76 TeV as a function of collision centrality and the average transverse momentum of the pair, kT. Three-dimensional sizes of the system (femtoscopic radii), as well as direction-averaged one-dimensional radii are extracted. The radii decrease with kT, following a power-law behavior. This is qualitatively consistent with expectations from a collectively expanding system, produced in hydrodynamic calculations. The radii also scale linearly with ⟨dNch/dη⟩^(1/3). This behaviour is compared to world data on femtoscopic radii in heavy-ion collisions. While the dependence is qualitatively similar to results at smaller √sNN, a decrease in the Rout/Rside ratio is seen, which is in qualitative agreement with specific predictions from hydrodynamic models. The results provide further evidence for the production of a collective, strongly coupled system in heavy-ion collisions at the LHC.
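The quoted power-law behaviour, R(kT) ∝ kT^(−α), is conventionally extracted by a straight-line fit in log-log space; a minimal sketch of that extraction (function and variable names are illustrative):

```python
import numpy as np

def fit_power_law(kt, radii):
    """Fit R(kT) = A * kT**(-alpha) via linear regression of
    log R on log kT; the slope gives -alpha, the intercept log A."""
    slope, intercept = np.polyfit(np.log(kt), np.log(radii), 1)
    return float(np.exp(intercept)), float(-slope)  # (A, alpha)
```

The same recipe applies to the linear scaling with ⟨dNch/dη⟩^(1/3), where a fitted exponent consistent with 1/3 in the multiplicity confirms the expected volume-like growth of the source.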
The centrality dependence of the p/π ratio measured by the ALICE Collaboration in 5.02 TeV Pb-Pb collisions indicates a statistically significant suppression with the increase of the charged particle multiplicity once the centrality-correlated part of the systematic uncertainty is eliminated from the data. We argue that this behavior can be attributed to baryon annihilation in the hadronic phase. By implementing the BB̄ ↔ 5π reaction within a generalized partial chemical equilibrium framework, we estimate the annihilation freeze-out temperature at different centralities, which decreases with increasing charged particle multiplicity and yields Tann = 132 ± 5 MeV in the 0-5% most central collisions. This value is considerably below the hadronization temperature of Thad ∼ 160 MeV but above the thermal (kinetic) freeze-out temperature of Tkin ∼ 100 MeV. Baryon annihilation reactions thus remain relevant in the initial stage of the hadronic phase but freeze out before (pseudo-)elastic hadronic scatterings. One experimentally testable consequence of this picture is a suppression of various light nuclei to proton ratios in central collisions of heavy ions.
The pseudorapidity density of charged particles (dNch/dη) at mid-rapidity in Pb-Pb collisions has been measured at a center-of-mass energy per nucleon pair of √sNN = 5.02 TeV. It increases with centrality and reaches a value of 1943±54 in |η|<0.5 for the 5% most central collisions. A rise in dNch/dη as a function of √sNN for the most central collisions is observed, steeper than that observed in proton-proton collisions and following the trend established by measurements at lower energy. The centrality dependence of dNch/dη as a function of the average number of participant nucleons, ⟨Npart⟩, calculated in a Glauber model, is compared with the previous measurement at lower energy. A constant factor of about 1.2 describes the increase in ⟨dNch/dη⟩/(⟨Npart⟩/2) from √sNN = 2.76 TeV to √sNN = 5.02 TeV for all centrality intervals, within the measured range of 0-80% centrality. The results are also compared to models based on different mechanisms for particle production in nuclear collisions.
Transverse momentum (pT) spectra of pions, kaons, and protons up to pT=20 GeV/c have been measured in Pb-Pb collisions at √sNN = 2.76 TeV using the ALICE detector for six different centrality classes covering 0-80%. The proton-to-pion and the kaon-to-pion ratios both show a distinct peak at pT≈3 GeV/c in central Pb-Pb collisions that decreases towards more peripheral collisions. For pT>10 GeV/c, the nuclear modification factor is found to be the same for all three particle species in each centrality interval within systematic uncertainties of 10-20%. This suggests there is no direct interplay between the energy loss in the medium and the particle species composition in the hard core of the quenched jet. For pT<10 GeV/c, the data provide important constraints for models aimed at describing the transition from soft to hard physics.
The inclusive production of the ψ(2S) charmonium state was studied as a function of centrality in p-Pb collisions at the nucleon-nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was performed with the ALICE detector in the center of mass rapidity ranges −4.46<ycms<−2.96 and 2.03<ycms<3.53, down to zero transverse momentum, by reconstructing the ψ(2S) decay to a muon pair. The ψ(2S) production cross section σψ(2S) is presented as a function of the collision centrality, which is estimated through the energy deposited in forward rapidity calorimeters. The relative strength of nuclear effects on the ψ(2S) and on the corresponding 1S charmonium state J/ψ is then studied by means of the double ratio of cross sections [σψ(2S)/σJ/ψ]pPb/[σψ(2S)/σJ/ψ]pp between p-Pb and pp collisions, and by the values of the nuclear modification factors for the two charmonium states. The results show a large suppression of ψ(2S) production relative to the J/ψ at backward (negative) rapidity, corresponding to the flight direction of the Pb-nucleus, while at forward (positive) rapidity the suppressions of the two states are comparable. Finally, comparisons to results from lower energy experiments and to available theoretical models are presented.
The centrality dependence of the charged-particle pseudorapidity density measured with ALICE in Pb–Pb collisions at √sNN=2.76 TeV over a broad pseudorapidity range is presented. This Letter extends the previous results reported by ALICE to more peripheral collisions. No strong change of the overall shape of charged-particle pseudorapidity density distributions with centrality is observed, and when normalised to the number of participating nucleons in the collisions, the evolution over pseudorapidity with centrality is likewise small. The broad pseudorapidity range (−3.5<η<5) allows precise estimates of the total number of produced charged particles which we find to range from 162±22(syst.) to 17170±770(syst.) in 80–90% and 0–5% central collisions, respectively. The total charged-particle multiplicity is seen to approximately scale with the number of participating nucleons in the collision. This suggests that hard contributions to the charged-particle multiplicity are limited. The results are compared to models which describe dNch/dη at mid-rapidity in the most central Pb–Pb collisions and it is found that these models do not capture all features of the distributions.
We report on measurements of a charge-dependent flow using a novel three-particle correlator with ALICE in Pb-Pb collisions at the LHC, and discuss the implications for observation of local parity violation and the Chiral Magnetic Wave (CMW) in heavy-ion collisions. Charge-dependent flow is reported for different collision centralities as a function of the event charge asymmetry. While our results are in qualitative agreement with expectations based on the CMW, the nonzero signal observed in higher harmonics correlations indicates a possible significant background contribution. We also present results on a differential correlator, where the flow of positive and negative charges is reported as a function of the mean charge of the particles and their pseudorapidity separation. We argue that this differential correlator is better suited to distinguish the differences in positive and negative charges expected due to the CMW and the background effects, such as local charge conservation coupled with strong radial and anisotropic flow.
In this paper, we present a family of regular black hole solutions in the presence of charge and angular momentum. We also discuss the related thermodynamics and we comment about the black hole life cycle during the balding and spin down phases. Interestingly the static solution resembles the Ayón-Beato–García spacetime, provided the T-duality scale is redefined in terms of the electric charge, l0→Q. The key factor at the basis of our derivation is the employment of Padmanabhan's propagator to calculate static potentials. Such a propagator encodes string T-duality effects. This means that the regularity of the spacetimes here presented can open a new window on string theory phenomenology.
We report the differential charged jet cross section and jet fragmentation distributions measured with the ALICE detector in proton-proton collisions at a centre-of-mass energy √s = 7 TeV. Jets with pseudo-rapidity |η|<0.5 are reconstructed from charged particles using the anti-kT jet finding algorithm with a resolution parameter R = 0.4. The jet cross section is measured in the transverse momentum interval 5 ≤ pTch jet < 100 GeV/c. Jet fragmentation is studied measuring the scaled transverse momentum spectra of the charged constituents of jets in four intervals of jet transverse momentum between 5 GeV/c and 30 GeV/c. The measurements are compared to calculations from the PYTHIA model as well as next-to-leading order perturbative QCD calculations with POWHEG + PYTHIA8. The charged jet cross section is described by POWHEG for the entire measured range of pTch jet. For pTch jet > 40 GeV/c, the PYTHIA calculations also agree with the measured charged jet cross section. PYTHIA6 simulations describe the fragmentation distributions to within 15%. Larger discrepancies are observed for PYTHIA8.
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy √s = 7 TeV using the ALICE detector at the LHC. Jets are reconstructed from charged particle momenta in the mid-rapidity region using the sequential recombination kT and anti-kT as well as the SISCone jet finding algorithms with several resolution parameters in the range R=0.2 to 0.6. Differential jet production cross sections measured with the three jet finders are in agreement in the transverse momentum (pT) interval 20 < pTjet,ch < 100 GeV/c. They are also consistent with prior measurements carried out at the LHC by the ATLAS collaboration. The jet charged particle multiplicity rises monotonically with increasing jet pT, in qualitative agreement with prior observations at lower energies. The transverse profiles of leading jets are investigated using radial momentum density distributions as well as distributions of the average radius containing 80% (⟨R80⟩) of the reconstructed jet pT. The fragmentation of leading jets with R=0.4 using scaled pT spectra of the jet constituents is studied. The measurements are compared to model calculations from event generators (PYTHIA, PHOJET, HERWIG). The measured radial density distributions and ⟨R80⟩ distributions are well described by the PYTHIA model (tune Perugia-2011). The fragmentation distributions are better described by HERWIG.
We present the charged-particle multiplicity distributions over a wide pseudorapidity range (−3.4<η<5.0) for pp collisions at √s = 0.9, 7, and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector and the Forward Multiplicity Detector of ALICE, extending the pseudorapidity coverage of the earlier publications and the high-multiplicity reach. The measurements are compared to results from the CMS experiment and to PYTHIA, PHOJET and EPOS LHC event generators, as well as IP-Glasma calculations.