A measurement of the transverse momentum spectra of jets in Pb-Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R of 0.2 and 0.3 in pseudo-rapidity |η| < 0.5. The transverse momentum pT of charged particles is measured down to 0.15 GeV/c, which gives access to the low-pT fragments of the jet. Jets found in heavy-ion collisions are corrected event-by-event for the average background density and on an inclusive basis (via unfolding) for residual background fluctuations and detector effects. A strong suppression of jet production in central events with respect to peripheral events is observed. The suppression is found to be similar to the suppression of charged hadrons, which suggests that substantial energy is radiated at angles larger than the jet resolution parameter R = 0.3 considered in the analysis. The fragmentation bias introduced by selecting jets with a high-pT leading particle, which rejects jets with a soft fragmentation pattern, has a similar effect on the jet yield for central and peripheral events. The ratio of jet spectra with R = 0.2 and R = 0.3 is found to be similar in Pb-Pb and simulated PYTHIA pp events, indicating no strong broadening of the radial jet structure in the reconstructed jets with R < 0.3.
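The event-by-event background correction mentioned above follows the standard area-based subtraction used in heavy-ion jet analyses. Below is a minimal sketch (hypothetical names and inputs, not the ALICE analysis code), assuming the background density ρ is estimated from the median pT density of clusters in the event:

    # Sketch of area-based background subtraction: pT_corr = pT_raw - rho * A.
    # Names and inputs are hypothetical; not the ALICE analysis code.
    import numpy as np

    def estimate_rho(cluster_pts, cluster_areas):
        """Median pT density of clusters, excluding the two hardest
        (a common choice to avoid biasing rho with signal jets)."""
        density = np.asarray(cluster_pts) / np.asarray(cluster_areas)
        return float(np.median(np.sort(density)[:-2]))

    def correct_jet_pt(jet_pt_raw, jet_area, rho):
        """Subtract the average background expected inside the jet area."""
        return jet_pt_raw - rho * jet_area

Residual fluctuations around this average, together with detector effects, are then handled statistically via unfolding, as the abstract notes.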
Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, the occlusion of image components, is not considered by these models. Here we ask whether occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encodings and receptive fields predicted by the two models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed in experimental studies since reverse correlation came into use. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
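The single point on which the two compared models differ, the component superposition assumption, can be made concrete in a few lines. A minimal sketch with hypothetical names, assuming a dictionary W with one column per component and a vector s of activations:

    # The one assumption separating the two compared models:
    # how active components combine into an image. Names are hypothetical.
    import numpy as np

    def linear_superposition(W, s):
        """Standard linear model: components add up pixel-wise."""
        return W @ s  # W: (n_pixels, n_components), s: (n_components,)

    def occlusive_superposition(W, s):
        """Occlusive model: at every pixel the strongest weighted component
        wins, a pointwise max mimicking one component occluding another."""
        return np.max(W * s[np.newaxis, :], axis=1)

Everything else in the comparison (receptive field estimation, sparsity, noise) is kept identical, so differences in the predicted fields can be attributed to this superposition rule.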
Author Summary: The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized, through evolution and throughout our lifespan, for such stimuli. Yet the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on the predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and the predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large numbers of such cells have been observed since newer techniques (reverse correlation) came into use. Without occlusions, they are only obtained for specific settings, and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, this new type of response emerges naturally as soon as occlusions are considered. In comparison with recent in vivo experiments, we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
Sparse coding is a popular approach to modelling natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. the absence of an image component such as an edge, together with a distribution over non-zero pixel intensities. With the nonlinearity (the max combination rule), the idea is to target occlusions: dictionary elements correspond to image components that can occlude each other. The model assumptions made by the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, so as a main contribution we design an exact Gibbs sampler for efficient inference, which we can apply to higher-dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively approximate and characterize the true generating process well.
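The two ingredients just described can be read generatively: coefficients drawn from a spike-and-slab prior, then combined through the pointwise max (the same rule sketched for the occlusive model above). A minimal sketch with hypothetical parameter names, not the paper's inference code:

    # Generative sketch: spike-and-slab coefficients, then occlusive max.
    # Parameter names (pi, mu, sigma, noise_sigma) are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_spike_and_slab(n_components, pi=0.1, mu=1.0, sigma=0.5):
        """Exactly zero (spike) with probability 1 - pi; otherwise drawn
        from a Gaussian slab over non-zero intensities."""
        active = rng.random(n_components) < pi
        return active * rng.normal(mu, sigma, size=n_components)

    def generate_image(W, s, noise_sigma=0.05):
        """Strongest weighted component wins per pixel, plus observation noise."""
        clean = np.max(W * s[np.newaxis, :], axis=1)
        return clean + rng.normal(0.0, noise_sigma, size=clean.shape)

Inverting this process is what makes learning hard: the max renders the posterior analytically intractable, hence the exact Gibbs sampler with latent variable preselection described above.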
Transverse momentum spectra of π±, K± and p(p̄) up to pT = 20 GeV/c at mid-rapidity in pp, peripheral (60–80%) and central (0–5%) Pb–Pb collisions at √sNN = 2.76 TeV have been measured using the ALICE detector at the Large Hadron Collider. The proton-to-pion and the kaon-to-pion ratios both show a distinct peak at pT ≈ 3 GeV/c in central Pb–Pb collisions. Below the peak (pT < 3 GeV/c), both ratios are in good agreement with hydrodynamical calculations, suggesting that the peak itself is dominantly the result of radial flow rather than anomalous hadronization processes. For pT > 10 GeV/c, the particle ratios in pp and Pb–Pb collisions are in agreement, and the nuclear modification factors for π±, K± and p(p̄) indicate that, within the systematic and statistical uncertainties, the suppression is the same. This suggests that the chemical composition of leading particles from jets in the medium is similar to that of vacuum jets.
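For reference, the nuclear modification factor compared across species above follows the standard definition (not spelled out in the abstract): the yield in Pb–Pb divided by the binary-collision-scaled pp yield,

    R_{\mathrm{AA}}(p_{\mathrm{T}}) =
      \frac{\mathrm{d}^2 N_{\mathrm{AA}} / \mathrm{d}p_{\mathrm{T}}\,\mathrm{d}\eta}
           {\langle N_{\mathrm{coll}} \rangle \; \mathrm{d}^2 N_{pp} / \mathrm{d}p_{\mathrm{T}}\,\mathrm{d}\eta}

so R_AA = 1 corresponds to the absence of nuclear effects, while values well below 1 at high pT signal the suppression discussed here.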
The human brain is an unparalleled system: through millions of years of evolution and during a lifespan of learning, our brains have developed remarkable abilities for dealing with incoming sensory data, extracting structure and useful information, and finally drawing the conclusions that result in the actions we take. Understanding the principles behind this machinery and building artificial systems that mimic at least some of these capabilities is a long-standing goal in both the scientific and the engineering communities. While this goal still seems unreachable, we have seen tremendous progress in training data-driven algorithms on vast amounts of training data, e.g. to learn an optimal data model and its parameters in order to accomplish some task. Such algorithms are now omnipresent: they are part of recommender systems, they perform speech recognition, and they generally build the foundation for many semi-autonomous systems. They are becoming an integral part of many technical systems that modern societies rely on for their everyday functioning.

Many of these algorithms were originally inspired by biological systems or act as models for sensory data processing in mammalian brains. The response properties of a certain population of neurons in the first stages of the mammalian visual pathway, for example, can be modeled by algorithms such as Sparse Coding (SC), Independent Component Analysis (ICA) or Factor Analysis (FA). These well-established learning algorithms typically assume linear interactions between the variables of the model. Most often these relationships are expressed in the form of a matrix-vector product between a matrix of learned dictionary elements (basis vectors as column vectors) and the latent variables of the model. While this linear interaction can sometimes be justified by the physical process for which the machine learning model is proposed, it is often chosen simply because of its mathematical and practical convenience. From an optimal coding point of view, though, one would generally expect the ideal model to closely reflect the core interactions of the system it is modeling. In vision, for example, one of the dominant processes giving rise to our sensory percepts is occlusion. Occluding objects are omnipresent in visual scenes, and it would not be surprising if the mammalian visual system were optimized to process occluding structures in the visual data stream. Yet the established mathematical models of the first stages of the visual processing pathway (e.g., SC, ICA or FA) all assume linear interactions between the active image components.

In this thesis we will discuss new models that aim to approximate the effects of occluding components by assuming nonlinear interactions between their activated dictionary elements. We will present learning algorithms that infer optimal parameters for these models given data. In the experiments, we will validate the algorithms on artificial ground-truth data and demonstrate their ability to recover the correct model parameters. We will show that the predictions made by these nonlinear models correspond better to the experimental data measured in vivo than the predictions made by the established linear models. Furthermore, we systematically explore and compare a large space of plausible combinations of hyperparameters and preprocessing schemes in order to rule out artefacts as the source of the observed results.
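To make the contrast concrete: in classical linear Sparse Coding, inference amounts to finding sparse latents whose matrix-vector product with the dictionary reconstructs the image. A minimal sketch (hypothetical names, not the thesis code) of MAP inference under a Laplace prior, using the standard ISTA iteration:

    # Linear SC inference sketch: min_s 0.5*||y - W s||^2 + lam*||s||_1 (ISTA).
    # Names (y, W, lam) are hypothetical; this is illustrative, not thesis code.
    import numpy as np

    def ista(y, W, lam=0.1, n_iter=100):
        """Iterative shrinkage-thresholding for the L1-penalized linear model."""
        L = np.linalg.norm(W, 2) ** 2          # Lipschitz constant of the gradient
        s = np.zeros(W.shape[1])
        for _ in range(n_iter):
            g = s - (W.T @ (W @ s - y)) / L    # gradient step on the quadratic term
            s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return s

The occlusive models studied in the thesis replace the linear product W @ s inside this reconstruction with a nonlinear combination of the activated dictionary elements, which is what makes their inference substantially harder.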
Training nonlinear sparse coding models is computationally more demanding than training linear models. In order to perform the numerical experiments described in this thesis, we developed a software framework that facilitates the implementation of massively parallel expectation maximization (EM) based learning algorithms. This infrastructure was used for all experiments described here, as well as by collaborators in projects we will not discuss. Some of the experiments required more than 10^17 floating-point operations and were run on a computer cluster using up to 5000 CPU cores in parallel. Our parallel framework enabled these experiments to be performed.
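The parallelization pattern such a framework exploits is generic to EM: the E-step factorizes over data points, so per-datapoint sufficient statistics can be computed independently and reduced before each M-step. A minimal sketch of this map-reduce structure with process-level parallelism (hypothetical names and a placeholder statistic; not the thesis framework):

    # Data-parallel E-step sketch; compute_posterior_stats is a placeholder.
    from multiprocessing import Pool
    import numpy as np

    def compute_posterior_stats(y, W):
        """Placeholder per-datapoint E-step; a real model would evaluate or
        sample the posterior over latents here and return its statistics."""
        s = W.T @ y
        return np.outer(y, s)

    def e_step_chunk(args):
        data_chunk, W = args
        # Sufficient statistics accumulate additively over data points.
        return sum(compute_posterior_stats(y, W) for y in data_chunk)

    def parallel_e_step(data, W, n_workers=8):
        chunks = np.array_split(data, n_workers)
        with Pool(n_workers) as pool:
            partial = pool.map(e_step_chunk, [(c, W) for c in chunks])
        return sum(partial)  # reduced statistics feed the M-step update

On a cluster, the same structure would typically be realized with MPI across nodes rather than a single-machine process pool; the thesis does not specify its mechanism here, only the scale.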