The nature of spontaneous brain activity during wakefulness and sleep: a complex systems approach
(2014)
In this thesis we study the organization of spontaneous brain activity during wakefulness and all stages of human non-rapid eye movement sleep using an approach based on developments and tools from the theory of complex systems. After a brief introduction to sleep physiology and different theoretical models of consciousness, we study how the organization of cortical and sub-cortical interactions is modified during the sleep cycle. Our results, obtained by modeling global brain activity as a complex functional interaction network, show that the capacity of the human brain to integrate different segregated functional modules is diminished during deep sleep, in line with an information-integration account of consciousness. We then show that integration is impaired not only across space but also in the temporal domain, by assessing the emergence of long-range temporal correlations in brain activity and how they are modified during sleep. We propose an encompassing explanation for this observation, namely, that the brain operates at different dynamical regimes during different states of consciousness. Finally, we gather massive amounts of data from different collaborative projects and apply machine learning techniques to reveal that the "resting state" cannot be considered a pure brain state and is in fact a mixture containing different levels of conscious awareness. This last result has deep implications for future attempts to develop a discovery science of brain function both in health and disease.
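As an illustration of how long-range temporal correlations of the kind discussed above can be quantified, here is a minimal detrended fluctuation analysis (DFA) sketch; window sizes and data are generic placeholders, not the thesis's actual pipeline.

```python
import numpy as np

# Minimal DFA sketch: a standard estimator of long-range temporal
# correlations (parameters and data are placeholders, not from the thesis).
def dfa(signal, scales):
    """Return the fluctuation function F(n) for each window size n."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated signal
    F = []
    for n in scales:
        n_win = len(profile) // n
        segments = profile[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        residuals = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segments]
        F.append(np.sqrt(np.mean(np.square(residuals))))
    return np.array(F)

# The scaling exponent alpha is the slope of log F(n) vs. log n:
# alpha ~ 0.5 for uncorrelated noise, alpha > 0.5 for long-range correlations.
x = np.random.default_rng(0).standard_normal(10000)   # placeholder signal
scales = np.array([16, 32, 64, 128, 256, 512])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"DFA exponent alpha = {alpha:.2f}")
```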
Neurons are cells with a highly complex morphology; their dendritic arbor spans up to thousands of micrometers. This extended arbor poses a challenge for the logistics of neuronal processes: mRNA, proteins, and organelles have to be transported to dendrites, hundreds of micrometers away from the soma. This thesis aims to calculate the minimum number of proteins needed to populate the dendritic trees for different scenarios.
In chapter 2, I analyzed the ability of different mechanisms to populate the dendritic arbor. I started from the solution of the diffusion equation in Sec. 2.1, then included the contribution of active transport in Sec. 2.2 and showed how it can either increase the effective diffusion coefficient or introduce a bias in the diffusion process. In Sec. 2.3 I studied the spatial distribution of locally synthesized protein for actively and passively transported mRNA. In Sec. 2.5, I derived the boundary condition for branches, showing a qualitatively different behavior of surface and cytoplasmic proteins induced by the dimensionality of the medium in which they diffuse.
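To make the transport picture concrete, a small sketch under assumed parameter values: the steady-state density of proteins produced at the soma and spreading by diffusion decays exponentially along the dendrite, with a length scale that an active-transport drift extends.

```python
import numpy as np

# Minimal sketch with assumed (hypothetical) parameter values: steady-state
# density of proteins produced at the soma (x = 0), spreading along an
# unbranched dendrite by diffusion (D) plus an optional drift (v), and
# degrading at rate k. The steady state of D p'' - v p' - k p = 0 decays as
# p(x) ~ exp(-x / lambda), with lambda = 2 D / (sqrt(v^2 + 4 D k) - v),
# reducing to sqrt(D / k) for pure diffusion (v = 0).
D, v, k = 10.0, 0.1, 1e-4      # um^2/s, um/s, 1/s (assumed values)

lam_diff = np.sqrt(D / k)                              # pure diffusion
lam_drift = 2 * D / (np.sqrt(v**2 + 4 * D * k) - v)    # diffusion + drift
print(f"diffusion-only length scale: {lam_diff:.0f} um")
print(f"with active transport:       {lam_drift:.0f} um")

x = np.arange(0.0, 1000.0, 1.0)        # positions along the dendrite (um)
profile = np.exp(-x / lam_drift)       # normalized steady-state shape
```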
In chapter 3, I introduced the concept of protein requirement, defined as the minimum number of proteins that the neuron needs to produce in order to provide at least one protein to each micrometer of the dendritic arbor. In Sec. 3.1, I derived the protein requirement for diffusive proteins under somatic translation and under constant translation in the dendritic arbor. In Sec. 3.2, I analyzed numerically the protein requirement in the case of actively transported proteins synthesized in the soma and, in Sec. 3.3, in the case of actively transported proteins synthesized in the dendritic arbor. In Sec. 3.4, I analyzed the protein requirement of proteins synthesized in the dendrite according to the mRNA distributions described in Secs. 3.2 and 3.3. In Sec. 3.5, I derived the protein requirement for a single branch and purely diffusive proteins.
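A hedged sketch of the protein-requirement definition as stated above (profile shape and numbers are illustrative, not the thesis's results): with a normalized steady-state density per micrometer, the worst-supplied micrometer sets the total number of proteins needed.

```python
import numpy as np

# Illustrative sketch (assumed profile and lengths): given the fraction of all
# proteins found in each 1-um bin, the most poorly supplied bin determines the
# minimum total number N so that every micrometer holds at least one protein.
lam, L = 300.0, 1000.0                 # diffusion length, dendrite length (um)
x = np.arange(0.0, L, 1.0)             # 1-um bins
rho = np.exp(-x / lam)                 # exponential profile (somatic synthesis)
rho_hat = rho / rho.sum()              # fraction of proteins per bin

N_required = int(np.ceil(1.0 / rho_hat.min()))   # set by the distal tip
print(f"protein requirement: ~{N_required} proteins")
```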
In chapter 4, I analyzed the relation between the radii of the three dendrites meeting at a branch point, their lengths, and the diffusion length of a protein. In Sec. 4.1 I derived the optimal ratio between the radii of the daughter dendrites that minimizes the protein requirement. In Sec. 4.3 I introduced the 3/2 Rall rule and in Sec. 4.5 its generalization. Finally, I used those rules to estimate the fraction of proteins diffusing away from and toward the soma.
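For reference, the 3/2 Rall rule relates the mother radius to the daughter radii at a branch point; a minimal check, with assumed radii and a placeholder for the generalized exponent:

```python
# The 3/2 Rall rule at a branch point: r_mother^(3/2) = r_1^(3/2) + r_2^(3/2).
# Minimal illustration with assumed radii:
r_mother = 1.0
r1 = r2 = r_mother / 2 ** (2 / 3)   # symmetric branch satisfying the rule
assert abs(r1 ** 1.5 + r2 ** 1.5 - r_mother ** 1.5) < 1e-12

# A generalized rule replaces the exponent 3/2 by a fitted exponent eta:
def rall_residual(r_mother, r1, r2, eta):
    """Deviation from r_mother^eta = r1^eta + r2^eta (zero if the rule holds)."""
    return r1 ** eta + r2 ** eta - r_mother ** eta
```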
In chapter 5, I analyzed the radius distributions for three categories of neurons: cultured hippocampal neurons in Sec. 5.1, stomatogastric ganglion neurons in Sec. 5.2, and 3D-EM-reconstructed prefrontal pyramidal neurons in Sec. 5.3. For each of these three classes, I analyzed the distribution of radii, the Rall exponents, and the probability ratio. For most of them, I found that the probability of a protein diffusing away from the soma is higher for surface proteins than for cytoplasmic ones. I quantified this with a parameter called the surface bias.
In chapter 6, I analyzed the fluorescence ratio imaged by our collaborator Anne-Sophie Hafner for a surface protein, GFP::Nlg, and a soluble one, GFP, in cultured hippocampal neurons, and I compared the fluorescence ratio with the probability ratio obtained in Sec. 5.1, finding that they are in good agreement.
In chapter 7, I compared the real dendritic morphologies imaged by one of our collaborators, Ali Karimi, with the optimal branching rule obtained in Sec. 4.1 and calculated the cost of not having optimal branching radii.
Finally, in chapter 8, I used the branching statistics gathered in Sec. 5.3 to simulate the protein profile for three different classes of neurons: pyramidal neurons, granule neurons, and Purkinje neurons. For each morphology, I compared the protein profiles for surface and cytoplasmic proteins for two different values of the diffusion length, λ = 109 µm and λ = 473 µm, both for optimized radii and for symmetrical radii. I showed how the radius optimization reduces the protein requirement by a factor of 10⁴ for pyramidal neurons.
This dissertation connects two independent fields of theoretical neuroscience: on the one hand, the self-organization of topographic connectivity patterns, and on the other hand, invariant object recognition, that is, the recognition of objects independently of their various possible retinal representations (for example due to translations or scalings). In the presented approach, the topographic representation is used as a coordinate system, which then allows for the implementation of invariance transformations. Hence this study shows that it is possible for the brain to self-organize before birth so that it is able to invariantly recognize objects immediately after birth. Besides the core hypothesis that links prenatal work with object recognition, advancements in both fields themselves are also presented. In the beginning of the thesis, a novel analytically solvable probabilistic generative model for topographic maps is introduced. At the end of the thesis, a model that integrates classical feature-based ideas with the normalization-based approach is presented. This bilinear model makes use of sparseness as well as slowness to implement "optimal" topographic representations. It is therefore a good candidate for hierarchical processing in the brain and for future research.
Navigating a complex environment is assumed to require stable cortical representations of environmental stimuli. Previous experimental studies, however, show substantial ongoing remodeling at the level of synaptic connections, even under behaviorally and environmentally stable conditions. It remains unclear how these changes affect sensory representations at the level of neuronal populations under basal conditions and how learning influences these dynamics.
Our approach is a joint effort between the analysis of experimental data and theory. We analyze chronic neuronal population activity data – acquired by our collaborators in Mainz – to describe population activity dynamics under basal conditions and during learning (fear conditioning). The data analysis is complemented by the analysis of a circuit model investigating the link between a neural network's activity and changes in its underlying structure.
Using chronic two-photon imaging data recorded in awake mouse auditory cortex, we reproduce previous findings that responses of neuronal populations to short complex sounds typically cluster into a near-discrete set of possible responses. This means that different stimuli evoke basically the same response and are thus grouped together into one of a small set of possible response modes. The near-discrete set of response modes can be utilized as a sensitive and robust means to detect and track changes in population activity over time. Doing so, we find that sound representations are subject to significant ongoing remodeling across the time span of days under basal conditions. Auditory-cued fear conditioning introduces a bias into these ongoing dynamics, resulting in a differential generalization both on the level of neuronal populations and on the behavioral level. This means that sounds that are perceived as similar to the conditioned stimulus (CS+) show an increased co-mapping to the same response mode the CS+ is mapped to. This differential generalization is also observed in animal behavior, where sounds similar to the CS+ result in the same freezing behavior as the CS+, whereas dissimilar sounds do not. These observations could provide a potential mechanism for stimulus generalization, one of the most common phenomena associated with post-traumatic stress disorder, on the level of neuronal populations.
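One simple way to extract such response modes, shown here with synthetic data of assumed shape (the analysis pipeline used in the thesis may differ), is to cluster single-trial population response vectors:

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch with hypothetical data shapes: cluster trial-by-trial population
# response vectors into a small set of "response modes"; stimuli whose trials
# fall into the same cluster are grouped onto one mode.
rng = np.random.default_rng(0)
responses = rng.poisson(2.0, size=(500, 100)).astype(float)  # trials x neurons

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(responses)
modes = kmeans.labels_   # mode index per trial
# Tracking which mode each stimulus maps to across days then gives a robust
# readout of representational drift and of CS+ co-mapping after conditioning.
```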
To investigate how the aforementioned changes in neuronal population activity are linked to changes in the underlying synaptic connectivity, we devised a circuit model of excitatory and inhibitory neurons. We studied this firing-rate model to investigate the effect of gradual changes in the network's connectivity on its activity. Apart from an input-dominated uni-stable regime (one response per stimulus, independent of the network) and a network-dominated uni-stable regime (one response per network, independent of the stimulus), we also find a multi-stable regime for strong recurrent connectivity and a high ratio of inhibition to excitation. In this regime the model reproduces properties of neural population activity in mouse auditory cortex, including sparse activity, a broad distribution of firing rates, and clustering of stimuli into a near-discrete set of response modes. This clustering in the multi-stable regime means that not only can identical stimuli evoke different responses, depending on the network's initial condition, but different stimuli can also evoke the same response.
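A minimal firing-rate sketch in the spirit of this circuit model, with all parameters assumed rather than taken from the thesis:

```python
import numpy as np

# Hypothetical-parameter sketch: N rate units with random excitatory and
# inhibitory recurrent weights, relaxed to an (approximate) fixed point.
rng = np.random.default_rng(1)
N, frac_inh, g = 200, 0.2, 4.0          # size, inhibitory fraction, E/I ratio
W = np.abs(rng.normal(0, 1 / np.sqrt(N), (N, N)))
inh = rng.random(N) < frac_inh
W[:, inh] *= -g                          # inhibitory columns, scaled by g

def phi(x):                              # saturating, non-negative rates
    return np.tanh(np.maximum(x, 0.0))

def run(stimulus, r0, dt=0.1, steps=2000, tau=1.0):
    r = r0.copy()
    for _ in range(steps):
        r += dt / tau * (-r + phi(W @ r + stimulus))
    return r

h = rng.normal(0, 1, N)                  # one "stimulus" input pattern
r_a = run(h, r0=np.zeros(N))
r_b = run(h, r0=rng.random(N))           # different initial condition
# In a multi-stable regime, r_a and r_b can differ: the same stimulus evokes
# different response modes depending on the network's initial condition.
```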
Applying gradual drift to the network connectivity, we find periods of stable responses, interrupted by abrupt transitions altering the stimulus-response mapping. We study the mechanism underlying these transitions by analyzing changes in the fixed points of this network model, employing a method to numerically find all the fixed points of the system. We find that such abrupt transitions typically cannot be explained by the mere displacement of existing fixed points, but involve qualitative changes in the fixed-point structure in the vicinity of the response trajectory. We conclude that gradual synaptic drift can lead to abrupt transitions in stimulus responses and that qualitative changes in the network's fixed-point topology underlie such transitions.
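A sketch of one common numerical strategy for enumerating fixed points (the thesis's exact method may differ): solve the fixed-point condition from many random initial conditions and keep the distinct converged solutions.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy rate network (all parameters assumed); fixed points satisfy
# F(r) = -r + phi(W r + h) = 0.
rng = np.random.default_rng(2)
N = 20
W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))
h = rng.normal(0, 1, N)
F = lambda r: -r + np.tanh(W @ r + h)

fixed_points = []
for _ in range(200):                            # many random restarts
    r_star, _, ok, _ = fsolve(F, rng.uniform(-1, 1, N), full_output=True)
    if ok == 1 and not any(np.linalg.norm(r_star - f) < 1e-4 for f in fixed_points):
        fixed_points.append(r_star)             # keep only distinct solutions
print(f"found {len(fixed_points)} distinct fixed points")
# Repeating this while gradually perturbing W reveals whether a response change
# reflects a displaced fixed point or a qualitative change in the fixed-point set.
```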
In summary, we find that cortical networks display ongoing representational drift under basal conditions that is biased towards a differential generalization during fear conditioning. A circuit model is able to reproduce key characteristics of auditory cortex, including a clustering of stimulus responses into a near-discrete set of response modes. Implementing synaptic drift in this model leads to periods of stable responses interrupted by abrupt transitions towards new responses.
This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception, and sequence learning are modeled in a unifying computational framework while staying close to experimental data in the developmental psychology of infancy.

In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence, and occlusion perception in infancy. It is shown that these different phenomena can be unified in a single theoretical framework, thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) different neuronal populations processing different motion directions of visual stimuli in the visual cortex of the newborn infant, which are supported by neuroscientific evidence, and (2) available learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that no innate force notions, motion analysis modules, common motion detectors, specific perceptual rules, or abilities to "reason" about entities, all of which have been widely postulated in the developmental literature, are necessary for the explanation of the discussed phenomena.

Since the prediction of future events turned out to be fruitful for the theoretical explanation of various developmental phenomena and a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organising, fully recurrent neural network model that forms internal representations of input sequences and maps them onto eye movements is proposed. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modelling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing-dependent plasticity (STDP), intrinsic plasticity (IP), and synaptic scaling. It is also novel from the technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place - questions that are difficult to assess experimentally.
Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events. This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.
In our daily life, we carry out many tasks, such as typing, playing tennis, and playing the piano, without even noticing that sequence learning is involved. No matter how simple or complex they are, these tasks require the sequential planning and execution of a series of movements. As an ability of primary importance in one's life, and one that everyone manages to learn, action-sequence learning has been studied by researchers from different fields: psychologists, neurophysiologists, as well as roboticists. Within the study of sequence learning, perceptual and motor learning, as well as implicit and explicit learning, have been studied and discussed independently.
We are interested in infancy research because infants, with underdeveloped brain functions and limited motor abilities, have little experience with the world and have not yet built internal models that presume how to interpret it. A series of infant experiments in the 1980s provided evidence that infants can rapidly develop anticipatory eye movements for visual events. Even when infants have no control over those spatio-temporal patterns, they can respond prior to the onset of the visual event, which is referred to as "anticipation".
In this work, we applied a gaze-contingent paradigm using real-time eye tracking to put 6- and 8-month-old infants in direct control of their visual surroundings. This paradigm allows the infant to change an image on a screen by looking at a peripheral red disc, which functions as a switch. We found that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in an early stage of the experiment.
Shifting attention from one learned stimulus to the next novel stimulus is important in sequence learning. For the test phase of infant visual habituation with two objects, we propose a new theory explaining the familiarity-to-novelty shift. In our view, an infant's interest in a stimulus is related to its learning progress, the improvement of performance. As a consequence, infants prefer the stimulus for which their current learning progress is maximal, naturally giving rise to a familiarity-to-novelty shift in certain situations. Our network model predicts that the familiarity-to-novelty shift only emerges for complex stimuli that produce bell-shaped learning curves after brief familiarization, but not for simple stimuli that produce exponentially decreasing learning curves or for long familiarization times, which is consistent with experimental results. This research suggests that an infant's interest in a stimulus may be related to its current learning progress, which can give rise to a dynamic familiarity-to-novelty shift depending on both the infant's learning efficiency and the task complexity.
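A toy illustration of the learning-progress principle (all curves are hypothetical, not the model's actual dynamics): if interest follows the derivative of performance, a sigmoidal learning curve yields bell-shaped progress and hence a familiarity-to-novelty shift, while a saturating exponential curve does not.

```python
import numpy as np

# Hypothetical learning curves; "learning progress" is their time derivative.
t = np.linspace(0, 10, 400)
perf_complex = 1 / (1 + np.exp(-(t - 5)))   # sigmoid -> bell-shaped progress
perf_simple = 1 - np.exp(-t)                # saturating -> decaying progress
progress_complex = np.gradient(perf_complex, t)
progress_simple = np.gradient(perf_simple, t)
# Preference at any moment goes to the stimulus with the larger current
# progress: for the complex stimulus, progress first rises (familiar stimulus
# preferred) and then falls (novel stimulus wins) - the familiarity-to-novelty
# shift; for the simple stimulus, progress only decays and no shift emerges.
```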
We know that, for both infants and adults, performance on certain motor-sequence tasks can be improved through practice. However, adults usually have to perform complex tasks in complicated environments; for example, learning multiple tasks is unavoidable in our daily life. In existing research, learning multiple tasks has produced puzzling and seemingly contradictory results. On the one hand, a wide variety of proactive and retroactive interference effects have been observed when multiple tasks have to be learned. On the other hand, some studies have reported facilitation and transfer of learning between different tasks.
In order to characterize the interactions in multiple-task learning and to find an optimal training schedule, we use a recurrent neural network to model a series of experiments on movement sequence learning. The network model learns to carry out the correct movement sequences through training and reproduces differences between training schedules, such as blocked versus random training, observed in psychophysics experiments. The network model also shows a striking similarity to human performance and makes predictions for task similarity and different training schedules.
In conclusion, this thesis studies the learning of action sequences in infants and in recurrent neural networks. We carried out a gaze-contingent experiment to study infants' rapid anticipation of their own action outcomes, and we constructed two recurrent neural network models, one explaining infant attention shifts in visual habituation and the other addressing task similarity and training schedules in motor sequence control in adults.
This dissertation addresses the influence of homeostatic adaptation on information processing and learning processes in neuronal systems. The term homeostasis denotes the ability of a dynamical system to keep certain internal variables in a dynamic equilibrium by means of regulatory mechanisms. A classic example of neuronal homeostasis is the dynamic scaling of synaptic weights, which keeps the activity, or firing rate, of individual neurons constant on temporal average. The models we consider implement a dual form of neuronal homeostasis, meaning that for each neuron two internal parameters are coupled to an intrinsic variable such as the aforementioned mean activity or the membrane potential. A distinctive feature of this dual adaptation is that it allows controlling not only the temporal mean of a dynamical variable but also its temporal variance, that is, the magnitude of the fluctuations around the mean. This work considers two neuronal systems in which this aspect comes into play.
The first system considered is a so-called echo state network, which falls into the category of recurrent networks. Recurrent neural networks generally have the property that a population of neurons possesses synaptic connections that project back onto the population itself, i.e. feed back. Recurrent networks can thus be regarded as autonomous (if no additional external synaptic connections exist) or non-autonomous dynamical systems that, owing to this feedback, exhibit complex dynamical properties. Depending on the structure of the recurrent synaptic connections, information from external input can, for example, be stored over an extended period of time. Likewise, dynamical fixed points as well as periodic or chaotic activity patterns can arise. This dynamical versatility is also found in the recurrent networks omnipresent in the brain, where it serves, for example, the processing of sensory information or the execution of motor movement patterns. The echo state network we consider is characterized by the fact that its recurrent synaptic connections are generated randomly and are not subject to synaptic plasticity. During a learning process, only the connections projecting from this so-called dynamic reservoir onto output neurons are modified. Despite the fact that this greatly simplifies learning, the reservoir's ability to process time-dependent inputs depends strongly on the statistical distribution used to generate the recurrent connections. In particular, the variance, i.e. the scaling of the weights, is of great importance here. A measure of this scaling is the spectral radius of the recurrent weight matrix.
Previous theoretical work has shown that, for the system considered, a spectral radius just below the critical value of 1 leads to good performance. Above this value, chaotic dynamical behavior arises in the autonomous case, which negatively affects information processing. The dual adaptation mechanism we introduce, termed flow control, aims to regulate the spectral radius towards the desired target value by scaling the synaptic weights. Essential here is that, in the interest of biological plausibility, the adaptation dynamics draws only on local quantities. In the case of flow control, this is achieved by regulating the fluctuations occurring in the cell's membrane potential. When evaluating the effectiveness of flow control, we found that the spectral radius can be controlled very precisely if the activities of the neurons in the recurrent population are only weakly correlated. Correlations can be induced, for example, by external input that is strongly synchronized across neurons, which accordingly degrades the precision of the adaptation mechanism.
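For orientation, the standard global (and hence non-local) way to set a reservoir's spectral radius is shown below; flow control achieves a comparable effect using only local membrane-potential fluctuations, which this baseline sketch does not implement.

```python
import numpy as np

# Global baseline (not flow control): rescale a random sparse recurrent weight
# matrix so its spectral radius sits just below the critical value of 1.
# Size, sparsity, and target value are assumed, illustrative choices.
rng = np.random.default_rng(3)
N, target_rho = 500, 0.95
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)   # sparse recurrence
rho = np.max(np.abs(np.linalg.eigvals(W)))                  # spectral radius
W *= target_rho / rho                                       # rescale weights
```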
When testing the network in a learning scenario, however, this effect did not degrade performance: optimal performance was achieved, independent of the strength of the correlated input, for a spectral radius slightly below the critical value of 1. This leads us to conclude that flow control is able to regulate recurrent networks into an operating regime that is optimal for information processing, independent of the strength of external stimulation.
The second model considered is a two-compartment neuron model patterned after the specific anatomy of cortical pyramidal neurons. While a basal compartment lumps together synaptic input arriving at dendrites close to the soma, the second, apical compartment represents the complex dendritic tree structure found in the cortex. Earlier experiments have shown that temporally correlated stimulation of both the basal and the apical compartment can evoke considerably higher neuronal activity than is possible by stimulating only one of the two compartments. In our model we can show that this coincidence-detection effect allows the input to the apical compartment to be used as a teaching signal for synaptic plasticity in the basal compartment. Dual homeostasis comes into play here as well, since in both compartments it ensures that the synaptic input lies, in terms of temporal mean and variance, within the range required for the learning process. Using a learning scenario consisting of a linear binary classification, we can show that the described framework is suitable for biologically plausible supervised learning.
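A toy sketch of the coincidence-detection effect (gains and thresholds are assumed, not taken from the model): apical input alone barely drives the cell, while coincident basal and apical input is strongly amplified, which is what lets the apical signal gate plasticity of basal synapses.

```python
# Hypothetical-parameter illustration of two-compartment coincidence detection.
def rate(basal, apical, gain=2.0, theta=1.0):
    drive = basal + gain * basal * apical   # multiplicative coincidence term
    return max(drive - theta, 0.0)          # threshold-linear output

print(rate(basal=1.2, apical=0.0))   # basal alone: weak response
print(rate(basal=0.0, apical=1.2))   # apical alone: no response
print(rate(basal=1.2, apical=1.2))   # coincident input: amplified response
```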
The two models considered exemplify the relevance of dual homeostasis with respect to two aspects. One is the regulation of recurrent neural networks into a dynamical state that is optimal for information processing; here the effect of adaptation manifests itself in the behavior of the network as a whole. On the other hand, as shown in the second model, dual homeostasis can also be important for plasticity and learning processes at the level of individual neurons. While neuronal homeostasis in the classical sense is limited to regulating parts of the system as precisely as possible towards a desired mean value, the discussed models demonstrate that controlling the magnitude of fluctuations can likewise influence the functionality of neuronal systems.
Cryo-electron tomography (CET) is a unique technique for visualizing biological objects under close-to-native conditions at near-atomic resolution. CET provides three-dimensional (3D) snapshots of the cellular proteome, in which the spatial relations between macromolecular complexes in their near-native cellular context can be explored. Due to the limited electron dose applicable to biological samples, the achievable resolution of a tomogram is restricted to a few nanometers; higher resolution can be achieved by averaging structures that occur in multiple copies. For this purpose, computational techniques such as template matching, sub-tomogram averaging, and classification are essential for a meaningful processing of CET data.
This thesis introduces the techniques of template matching and sub-tomogram averaging and their application to real biological data sets. Subsequently, the problem of reference bias, which restricts the applicability of those techniques, is addressed. Two methods that estimate the reference bias, in Fourier and in real space, are demonstrated. The real-space method, which we have named the "M-free" score, provides a reliable estimate of the reference bias and thus gives access to the reliability of the template matching or sub-tomogram averaging process. The "M-free" score therefore makes those approaches more applicable to structural biology. Furthermore, a classification algorithm based on neural networks (NN) called "KerDenSOM3D" is introduced, which is implemented in 3D and compensates for the missing wedge. This approach helps to extract different structural states of macromolecular complexes or to increase the class purity of data sets by eliminating outliers. A comprehensive comparison with other classification methods shows the superior performance of KerDenSOM3D.
The brain is a large, complex system that is remarkably good at maintaining stability under a wide range of input patterns and intensities. In addition, this stable dynamical state is able to sustain essential functions, including the encoding of information about the external environment and the storing of memories. In order to succeed in these challenging tasks, neural circuits rely on a variety of plasticity mechanisms that act as self-organizational rules and regulate their dynamics. Based on toy models of self-organized criticality, this stable state has been proposed to be a phase transition point, poised between distinct types of unhealthy dynamics, in what has become known as the critical brain hypothesis. It is not yet known, however, if and how self-organization could drive biological neural networks towards a critical state while maintaining or improving their learning and memory functions.
Here, we investigate the emergence of criticality signatures in the form of neuronal avalanches due to self-organizational plasticity rules in a recurrent neural network. We show that power-law distributions of events, widely observed in experiments, arise from a combination of biologically inspired synaptic and homeostatic plasticity but are highly dependent on the external drive. Additionally, we describe how learning abilities and fading memory emerge and are improved by the same self-organizational processes. We finally propose an application of these enhanced functions, focusing on sequence and simple language learning tasks.
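A minimal sketch of the standard avalanche analysis behind such power-law statistics (binning and data are placeholders, not the thesis's recordings): an avalanche is a run of consecutive nonempty time bins, and its size is the total event count.

```python
import numpy as np

# Standard avalanche extraction from binned population activity; criticality
# signatures appear as a power law P(size) ~ size^(-3/2).
def avalanche_sizes(spike_counts):
    sizes, current = [], 0
    for c in spike_counts:          # spike_counts: events per time bin
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)   # a quiet bin ends the avalanche
            current = 0
    return np.array(sizes)

counts = np.random.default_rng(4).poisson(0.5, 100000)   # placeholder activity
sizes = avalanche_sizes(counts)
print(f"{len(sizes)} avalanches, mean size {sizes.mean():.1f}")
```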
Taken together, our results suggest that the same self-organizational processes can be responsible for improving the brain’s spatio-temporal learning abilities and memory capacity while also giving rise to criticality signatures under particular input conditions, thus proposing a novel link between such abilities and neuronal avalanches. Although criticality was not verified, the detailed study of self-organization towards critical dynamics further elucidates its potential emergence and functions in the brain.
Cortical circuits exhibit highly dynamic and complex neural activity. Intriguingly, cortical activity consistently exhibits two key features across species and brain areas. First, individual neurons tend to be co-active in spatially localized domains, forming orderly arranged, modular layouts with a typical spatial scale. Second, cortical elements are correlated in their activity over large distances, reflecting long-range network interactions distributed over several millimeters. Currently, it is unclear how these two fundamental properties emerge in early developing cortical activity.
Here, I aim to fill this gap by combining analyses of chronic imaging data and network models of developing cortical activity. Neural recordings of spontaneous and visually evoked activity in the primary visual cortex of ferrets during early cortical development were obtained using in vivo 2-photon and widefield epi-fluorescence calcium imaging. Spontaneous activity was used to probe the early state of cortical networks, as its spatiotemporal organization is independent of a stimulus-imposed structure and it is already present early in cortical development, prior to reliably evoked responses. To characterize the mature functional organization of distributed networks in cortex, the tuning of neural responses to stimulus features, in particular to the orientation of an edge-like stimulus, was assessed. Cortical responses to moving gratings of varying orientations form an orderly arranged layout of orientation domains extending over several millimeters.
To begin with, I showed that spontaneous activity correlations extend over several millimeters, supporting the assumption of using spontaneous activity to assess distributed networks in cortex.
Next, I asked how distributed networks in the mature visual cortex - assessed by spontaneous activity correlations - are related to its fine-scale functional organization. I found that the spatially extended and modular spontaneous correlation patterns accurately predict the fine spatial structure of visually evoked orientation domains several millimeters away. These results suggest a close relation between spontaneous correlations and visually evoked responses on a fine spatial scale and across large spatial distances.
As the principles governing the functional organization and development of distributed network interactions in the neocortex remain poorly understood, I next asked how long-range correlated activity arises early in development. I found that key features of mature spontaneous activity introduced in this work, including long-range spontaneous correlations, were already present early in cortical development, prior to the maturation of long-range horizontal connections and to the predicted mature orientation preference layout. Even after silencing the feed-forward input drive by inactivating the retina or thalamus, long-range correlated and modular activity robustly emerged in early cortex. These results suggest that local recurrent connections in early cortical circuits can generate structured long-range network correlations that guide the formation of visually evoked distributed functional networks.
To investigate how these large-scale cortical networks emerge prior to the maturation and elaboration of long-range horizontal connectivity, I examined a statistical network model describing an ensemble of spatially extended spontaneous activity patterns. I found a direct relationship between the dimensionality of this ensemble of activity patterns and the decay of its correlation structure. Specifically, reducing the dimensionality of the ensemble leads to an increase in the spatial range of the correlation structure.
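A common way to quantify the dimensionality of such a pattern ensemble is the participation ratio of the covariance eigenvalues; a hedged sketch, assuming the thesis uses a measure of this kind:

```python
import numpy as np

# Participation-ratio dimensionality of an ensemble of activity patterns:
#   dim = (sum_i lambda_i)^2 / sum_i lambda_i^2,
# where lambda_i are the eigenvalues of the pattern covariance matrix.
def participation_ratio(patterns):
    """patterns: (n_patterns, n_pixels) ensemble of activity patterns."""
    ev = np.linalg.eigvalsh(np.cov(patterns, rowvar=False))
    return ev.sum() ** 2 / (ev ** 2).sum()

rng = np.random.default_rng(5)
patterns = rng.normal(size=(300, 400)) @ rng.normal(size=(400, 400))  # placeholder
print(f"effective dimensionality: {participation_ratio(patterns):.1f}")
# Lower values mean fewer effective dimensions; in the model described above,
# this goes along with correlations extending over a larger spatial range.
```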
To test whether this mechanism could generate a long-range correlation structure in cortical circuits, I studied a dynamical network model implementing a dimensionality reduction mechanism. Based on previous work demonstrating that network heterogeneity reduces the dimensionality of activity patterns, I showed that by increasing the degree of heterogeneity in the network, the dimensionality of the ensemble of activity patterns decreases and in turn their correlations extend over a greater range. A comparison to experimental data revealed a quantitative match between the network model and the observations in vivo in several of the key features of the early cortex including the spatial scale of correlations. Low dimensionality of spontaneous activity thus might provide an organizational principle explaining the observed long-range correlation structure in the early cortex.
Finally, I asked whether a network with a biologically plausible architecture can generate modular activity. Several classical models showed that modular activity patterns can emerge via an intracortical mechanism involving lateral inhibition. However, this assumption appears to be in conflict with current experimental evidence. Moreover, these network models have so far not been tested experimentally. Here, I showed by using linear stability analysis that spatially localized self-inhibition relaxes the constraints on the connectivity structure in a network model, such that biologically more plausible network motifs with shorter-ranging inhibition than excitation can robustly generate modular activity.
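A compact sketch of the classical version of this linear stability argument (parameters assumed, and without the thesis's self-inhibition extension): the fastest-growing Fourier mode of the linearized dynamics is the one maximizing the Fourier transform of the lateral connectivity kernel.

```python
import numpy as np

# For dr/dt = -r + w * r (spatial convolution), a Fourier mode k grows when
# w_hat(k) > 1; a maximum of w_hat at k* > 0 selects a modular pattern with
# wavelength 2*pi/k*. Kernel widths and amplitudes below are assumed values
# for the classical case of longer-ranged inhibition than excitation.
sig_e, sig_i, a_e, a_i = 0.1, 0.3, 1.5, 1.0
k = np.linspace(0.01, 60, 2000)
w_hat = (a_e * np.exp(-(k * sig_e) ** 2 / 2)
         - a_i * np.exp(-(k * sig_i) ** 2 / 2))   # FT of a difference of Gaussians
k_star = k[np.argmax(w_hat)]
print(f"max growth at k* = {k_star:.2f}, w_hat(k*) = {w_hat.max():.2f} (>1: unstable)")
print(f"pattern wavelength ~ {2 * np.pi / k_star:.2f} (arbitrary units)")
# The thesis's point: adding spatially localized self-inhibition relaxes the
# requirement sig_i > sig_e, so shorter-range inhibition can also pattern.
```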
Importantly, I also provided several model predictions to make the class of network models experimentally testable in view of recent technological advancements in imaging and manipulation of cortical circuits. A critical prediction of the model is the decrease in spacing of active domains when the total amount of inhibition increases. These results provide a novel mechanism of how cortical circuits with short-range inhibition can form modular activity.
Taken together, this thesis provides evidence that the two described fundamental features of neural activity are already present in the early cortex and shows that activity with those features can be generated in network models with an architecture consistent with the early cortex using basic principles.