The brain is arguably the most complex structure on Earth that humans study. It consists of a vast network of nerve cells that processes incoming sensory information to build a meaningful representation of the environment, and it coordinates the actions of the organism to interact with that environment. The brain has the remarkable ability both to store information and to adapt continually to changing conditions over an entire lifetime, which is essential for humans and animals to develop and to learn. The basis of this lifelong learning is the plasticity of the brain, which constantly adapts and rewires its vast network of neurons. The changes to synaptic connections and to the intrinsic excitability of each neuron take place through self-organized mechanisms and optimize the behavior of the organism as a whole. The phenomenon of neural plasticity has occupied neuroscience and other disciplines for several decades. Intrinsic plasticity denotes the ongoing adjustment of a neuron's excitability that maintains a balanced, homeostatic operating range. Synaptic plasticity in particular, the change in strength of existing connections, has been studied under many different conditions and has turned out to be more complex with every new study. It is induced by an intricate interplay of biophysical mechanisms, depends on factors such as the frequency of action potentials, their timing, and the membrane potential, and in addition shows a metaplastic dependence on past events. Ultimately, synaptic plasticity shapes the signal processing and computation of single neurons and of neuronal networks.
The aim of this thesis is to advance the understanding of the biological mechanisms behind the observed plasticity phenomena, and of their consequences, by means of a more unified theory. To this end, I postulate two functional objectives for neural plasticity, derive learning rules from them, and analyze their consequences and predictions.
Chapter 3 examines the discriminability of population activity in networks as a functional objective for neural plasticity. The hypothesis is that, in recurrent but also in feed-forward networks, the population activity can be optimized as a representation of the input signals if similar inputs are given representations that are as distinct as possible, making them easier to distinguish for subsequent processing. The functional objective is therefore to maximize this discriminability through changes in connection strengths and neuronal excitability, using local, self-organized learning rules. From this functional objective, a number of standard learning rules for artificial neural networks can be derived within a common framework.
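One plausible formalization of such a discriminability objective (an illustrative sketch in notation of my choosing, not necessarily the thesis's exact formulation) is to maximize the pairwise separation of population responses,

\[ \max_{W,\,\theta} \; \sum_{i \neq j} \bigl\| \mathbf{r}(\mathbf{x}_i; W, \theta) - \mathbf{r}(\mathbf{x}_j; W, \theta) \bigr\|^2 , \]

where the \(\mathbf{x}_i\) are input signals, \(\mathbf{r}\) is the population response, \(W\) denotes the connection strengths, and \(\theta\) the excitability parameters; local learning rules then follow from gradient ascent on this quantity.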
Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. The objective is to maximize, through local synaptic learning rules, a sparse, strongly skewed distribution of synaptic strengths, as has repeatedly been found experimentally. From this functional approach, all major phenomena of synaptic plasticity can be explained. Simulations of the learning rule in a realistic neuron model with full morphology reproduce the data from timing-, rate-, and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, which agrees with experimental results. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions. The approach predicts a new form of metaplasticity that affects timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: First, it shows that the diverse phenomena of synaptic plasticity can be understood as consequences of a single functional objective. Second, the approach bridges the gap between functional and mechanistic descriptions. The proposed functional objective leads to a learning rule with a biophysical formulation that can be related to established theories of the biological mechanisms. Furthermore, the objective of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic signal transmission and optimized coding.
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly discuss models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that, I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results from various tasks and setups which show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretically optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform in line with the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also discuss neurobiological findings on cells receiving input from multiple receptors, both in dedicated multisensory brain areas and in primary sensory areas, and proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First, I will present the original theory, including the two main approaches, model-free and model-based reinforcement learning; the important variables will be introduced, as well as different algorithmic implementations. Second, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal-difference errors. I will motivate why I think this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model, which learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also handle more complex environments in which it must consider multiple possible underlying generative models (perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer performing model averaging. Additional experiments with various alterations to the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging.
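For reference, reliability-weighted averaging combines two single-cue estimates \(\hat{s}_1, \hat{s}_2\) with noise variances \(\sigma_1^2, \sigma_2^2\) as

\[ \hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2} , \]

which coincides with the Bayes-optimal estimate only under independent Gaussian noise and a single common cause for both cues.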
The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and that often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach addresses the neuronal substrates of the learning process for cue integration. I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introductory section discussing recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented, together with an analysis of the influence of different plasticity mechanisms on it. Again, benefits, shortcomings, and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
The nature of spontaneous brain activity during wakefulness and sleep: a complex systems approach
(2014)
In this thesis we study the organization of spontaneous brain activity during wakefulness and all stages of human non-rapid eye movement sleep, using an approach based on developments and tools from the theory of complex systems. After a brief introduction to sleep physiology and different theoretical models of consciousness, we study how the organization of cortical and sub-cortical interactions is modified during the sleep cycle. Our results, obtained by modeling global brain activity as a complex functional interaction network, show that the capacity of the human brain to integrate different segregated functional modules is diminished during deep sleep, in line with an information-integration account of consciousness. We then show that integration is impaired not only across space but also in the temporal domain, by assessing the emergence of long-range temporal correlations in brain activity and how they are modified during sleep. We propose an encompassing explanation for this observation, namely, that the brain operates at different dynamical regimes during different states of consciousness. Finally, we gather massive amounts of data from different collaborative projects and apply machine learning techniques to reveal that the "resting state" cannot be considered a pure brain state and is in fact a mixture containing different levels of conscious awareness. This last result has deep implications for future attempts to develop a discovery science of brain function in both health and disease.
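Long-range temporal correlations of this kind are commonly quantified with detrended fluctuation analysis (DFA); the following is a minimal sketch of that standard procedure, not necessarily the exact pipeline used in the thesis.

```python
import numpy as np

def dfa_exponent(signal, window_sizes):
    """Detrended fluctuation analysis: returns the scaling exponent alpha.

    alpha ~ 0.5 indicates uncorrelated noise; 0.5 < alpha < 1 indicates
    long-range temporal correlations.
    """
    # Integrate the mean-subtracted signal (the "profile").
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for n in window_sizes:
        # Split the profile into non-overlapping windows of length n.
        n_windows = len(profile) // n
        windows = profile[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        # Remove the linear trend in each window, measure the residual RMS.
        rms = []
        for w in windows:
            coeffs = np.polyfit(t, w, 1)
            detrended = w - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        fluctuations.append(np.mean(rms))
    # The DFA exponent is the slope of log F(n) versus log n.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha
```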
Machine learning (ML) techniques have evolved rapidly in recent years and have shown impressive capabilities in feature extraction, pattern recognition, and causal inference. Increasing attention has been paid to applying ML in medicine, for example to medical diagnosis, drug discovery, personalized medicine, and numerous other problems. ML-based methods have the advantage of being able to process vast amounts of data.
With an ever-increasing amount of medical data being collected and large inter-subject variability in these data, automated processing pipelines are highly desirable, since relying solely on human processing is laborious, expensive, and error-prone. ML methods have the potential to uncover interesting patterns, unravel correlations between complex features, learn patient-specific representations, and make accurate predictions. Motivated by these promising aspects, in this thesis I present studies in which I implemented deep neural networks for the early diagnosis of epilepsy based on electroencephalography (EEG) data and for brain tumor detection based on magnetic resonance spectroscopy (MRS) data.
The project on early diagnosis of epilepsy deals with one of the most common neurological disorders, characterized by recurrent unprovoked seizures. Epilepsy can be triggered by a variety of initial brain injuries and manifests itself after a time window called the latent period. During this period, a cascade of structural and functional brain alterations takes place, leading to an increased seizure susceptibility.
The development and extension of brain tissue capable of generating spontaneous seizures is defined as epileptogenesis (EPG).
Detecting the presence of EPG provides a precious opportunity for targeted early medical interventions and can thus slow down or even halt disease progression. To study brain signals in this latent window, animal epilepsy models are used to provide valuable data, since it is extremely difficult to obtain such data from human patients. The aim of this study is to discover biomarkers of EPG using animal models and then to find their counterparts in data from human patients. However, the EEG features of EPG are not well understood, and there is no sufficiently large amount of annotated data for ML-based algorithms. To approach this problem, I first utilized the timestamp information of EEG recorded from an animal epilepsy model in which epilepsy is induced by electrical stimulation; the timestamp serves as a form of weak supervision, i.e., before versus after the stimulation. Second, I implemented a deep residual neural network and trained it on a binary classification task to distinguish the EEG signals of these two phases. After obtaining high discriminative ability on the binary classification task, I proposed to further divide the time span after the stimulation for a three-class classification, aiming to detect possible stages of the progression of the latent EPG phase. I have shown that the model can distinguish EEG signals at different stages of EPG with high accuracy and generalization ability, and I have demonstrated that some of the features learned by the network are clinically relevant.
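As an illustration of the kind of building block such an EEG classifier stacks, here is a minimal 1-D residual block in PyTorch; the channel count and kernel size are placeholders, not the architecture used in the thesis.

```python
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    """A basic 1-D residual block for raw EEG time series (illustrative)."""

    def __init__(self, channels, kernel_size=7):
        super().__init__()
        padding = kernel_size // 2  # keep the temporal length unchanged
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Identity shortcut: the block learns a residual correction to x.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)
```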
In the task of detecting brain tumors from MRS data, I first proposed to apply a deep neural network to MRS data collected from over 400 patients for a binary classification task. To combat the challenge of noisy labels, I developed a distillation step that filters out relatively "cleanly" labeled samples. A mixing-based data augmentation method was also implemented to expand the size of the training set. All experiments were designed to be conducted with a leave-patient-out scheme to ensure the generalization ability of the model. Averaged across all leave-patient-out cross-validation sets, the proposed method performed on par with human neuroradiologists while outperforming other baseline methods. I demonstrated the distillation effect on the MNIST data set with manually introduced label noise, and I visualized the influence of the input on the final classification through a class activation map method.
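A common form of mixing-based augmentation is a mixup-style convex combination of sample pairs and their labels; this sketch illustrates the idea, though it is an assumption that the thesis uses exactly this variant.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixing-based augmentation (mixup-style, illustrative): a convex
    combination of two samples and of their labels yields a new sample."""
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2  # labels assumed one-hot / probabilistic
    return x, y
```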
Moreover, I proposed to aggregate information at the subject level, which can provide additional information and insights. This is inspired by the concept of multiple instance learning, where instance-level labels are not required and which is more tolerant to noisy labeling. I proposed to generate data bags consisting of instances from each patient and introduced two modules to ensure permutation invariance, i.e., an attention module and a pooling module. I compared the performance of the network in different settings, i.e., with and without permutation-invariant modules, with and without data augmentation, and single-instance-based versus multiple-instance-based learning, and showed that neural networks equipped with the proposed attention or pooling modules can outperform human experts.
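In the spirit of attention-based multiple instance learning, a permutation-invariant attention pooling module can be sketched as follows; the dimensions and the two-layer scoring network are illustrative choices, not the thesis's exact design.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Permutation-invariant attention pooling over a bag of instance
    embeddings (illustrative sketch of attention-based MIL)."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, bag):                        # bag: (n_instances, dim)
        a = torch.softmax(self.score(bag), dim=0)  # attention weights, sum to 1
        return (a * bag).sum(dim=0)                # weighted mean: order-independent
```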
The brain is a large complex system which is remarkably good at maintaining stability under a wide range of input patterns and intensities. In addition, such a stable dynamical state is able to sustain essential functions, including the encoding of information about the external environment and storing memories. In order to succeed in these challenging tasks, neural circuits rely on a variety of plasticity mechanisms that act as self-organizational rules and regulate their dynamics. Based on toy models of self-organized criticality, this stable state has been proposed to be a phase transition point, poised between distinct types of unhealthy dynamics, in what has become known as the critical brain hypothesis. It is not yet known, however, if and how self-organization could drive biological neural networks towards a critical state while maintaining or improving their learning and memory functions.
Here, we investigate the emergence of criticality signatures in the form of neuronal avalanches due to self-organizational plasticity rules in a recurrent neural network. We show that power-law distributions of events, widely observed in experiments, arise from a combination of biologically inspired synaptic and homeostatic plasticity but are highly dependent on the external drive. Additionally, we describe how learning abilities and fading memory emerge and are improved by the same self-organizational processes. We finally propose an application of these enhanced functions, focusing on sequence and simple language learning tasks.
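For concreteness, the standard operational definition of a neuronal avalanche, a maximal run of consecutive non-empty time bins of population activity, can be extracted as follows (a generic sketch, not this work's analysis code).

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Extract avalanche sizes from a binned population spike-count series.

    An avalanche is a maximal run of consecutive non-empty time bins; its
    size is the total number of spikes in the run. Criticality analyses then
    test whether these sizes follow a power-law distribution.
    """
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c           # avalanche continues
        elif current > 0:
            sizes.append(current)  # an empty bin ends the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)
```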
Taken together, our results suggest that the same self-organizational processes can be responsible for improving the brain’s spatio-temporal learning abilities and memory capacity while also giving rise to criticality signatures under particular input conditions, thus proposing a novel link between such abilities and neuronal avalanches. Although criticality was not verified, the detailed study of self-organization towards critical dynamics further elucidates its potential emergence and functions in the brain.
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural models. In the present thesis, we introduce several recurrent network models of threshold units that combine spike timing dependent plasticity with homeostatic plasticity mechanisms like intrinsic plasticity or synaptic normalization. We investigate how these different forms of plasticity shape the dynamics and computational properties of recurrent networks. The networks receive input sequences composed of different symbols and learn the structure embedded in these sequences in an unsupervised manner. Information is encoded in the form of trajectories through a high-dimensional state space reminiscent of recent biological findings on cortical coding. We find that these self-organizing plastic networks are able to represent and "understand" the spatio-temporal patterns in their inputs while maintaining their dynamics in a healthy regime suitable for learning. The emergent properties are not easily predictable on the basis of the individual plasticity mechanisms at work. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
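A minimal sketch of one update step of such a network, combining binary threshold units with spike-timing-dependent plasticity, synaptic normalization, and intrinsic plasticity, is given below; the learning rates and target rate are illustrative placeholders, not the parameters of the thesis models.

```python
import numpy as np

def plastic_network_step(x, W, T, u, eta_stdp=0.004, eta_ip=0.01, h_target=0.1):
    """One update of a binary recurrent network with three plasticity rules.

    x: previous binary state (n,); W: recurrent weights (n, n), W[i, j] is
    the connection from unit j to unit i; T: thresholds (n,); u: input (n,).
    """
    x_new = (W @ x + u - T > 0).astype(float)        # binary threshold units
    # STDP: strengthen pre-before-post pairs, weaken post-before-pre pairs.
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, None)
    # Synaptic normalization: incoming weights of each unit sum to one.
    W /= W.sum(axis=1, keepdims=True) + 1e-12
    # Intrinsic plasticity: nudge each threshold toward a target firing rate.
    T += eta_ip * (x_new - h_target)
    return x_new, W, T
```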
Cryo-electron tomography (CET) is a unique technique for visualizing biological objects under near-to-native conditions at near-atomic resolution. CET provides three-dimensional (3D) snapshots of the cellular proteome, in which the spatial relations between macromolecular complexes in their near-native cellular context can be explored. Owing to the limited electron dose applicable to biological samples, the achievable resolution of a tomogram is restricted to a few nanometers; higher resolution can be achieved by averaging structures that occur in multiple copies. For this purpose, computational techniques such as template matching, sub-tomogram averaging, and classification are essential for a meaningful processing of CET data.
This thesis introduces the techniques of template matching and sub-tomogram averaging and their application to real biological data sets. Subsequently, the problem of reference bias, which restricts the applicability of those techniques, is addressed. Two methods that estimate the reference bias, in Fourier and in real space, are demonstrated. The real-space method, which we have named the “M-free” score, provides a reliable estimate of the reference bias and thereby of the reliability of the template matching or sub-tomogram averaging process. Thus, the “M-free” score makes those approaches more applicable to structural biology. Furthermore, a classification algorithm based on neural networks (NN) called “KerDenSOM3D” is introduced, which is implemented in 3D and compensates for the missing wedge. This approach helps to extract different structural states of macromolecular complexes and to increase the class purity of data sets by eliminating outliers. A comprehensive comparison with other classification methods shows the superior performance of KerDenSOM3D.
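At its core, template matching scores a candidate subvolume against a template by normalized cross-correlation; the sketch below shows only this bare-bones score, whereas real CET pipelines additionally apply a mask and compensate for the missing wedge.

```python
import numpy as np

def cross_correlation_score(subvolume, template):
    """Normalized cross-correlation between a tomogram subvolume and a
    template (bare-bones sketch; masking and missing-wedge weighting, as
    used in practice, are omitted)."""
    v = (subvolume - subvolume.mean()) / (subvolume.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float((v * t).mean())  # 1.0 for a perfect match
```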
At present, there is a huge gap between artificial and biological information processing systems in terms of their capability to learn. This gap could certainly be reduced by gaining more insight into the higher functions of the brain, such as learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired by experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains to a large extent a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing is made up of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks, and without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral, and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts, given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to generate activity spontaneously in the absence of external stimuli.
In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, the recognition performance is boosted tremendously after this off-line memory reprocessing. The performance boost is more pronounced for those face views that deviate more from the original view shown during learning, indicating that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach: it forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising point of departure for further studies aiming at deeper insight into the learning mechanisms employed by the brain and their implementation in artificial adaptive systems for solving complex tasks that have so far been intractable.
Neurons are cells with a highly complex morphology; their dendritic arbor spans up to thousands of micrometers. This extended arbor poses a challenge for the logistics of neuronal processes: mRNAs, proteins, and organelles have to be transported into dendrites, hundreds of micrometers away from the soma. This thesis aims to calculate the minimum number of proteins needed to populate the dendritic tree under different scenarios.
In chapter 2, I analyzed the ability of different mechanisms to populate the dendritic arbor. I started from the solution of the diffusion equation in Sec. 2.1; in Sec. 2.2 I included the contribution of active transport and showed how it can either increase the effective diffusion coefficient or introduce a bias into the diffusion process. In Sec. 2.3 I studied the spatial distribution of locally synthesized protein for both actively and passively transported mRNA. In Sec. 2.5, I derived the boundary condition at branch points, showing a qualitatively different behavior of surface and cytoplasmic proteins induced by the dimensionality of the medium in which they diffuse.
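For orientation, the purely diffusive baseline can be summarized by the standard steady-state result: for a protein with diffusion coefficient \(D\) and degradation rate \(k\), the concentration along an unbranched dendrite obeys

\[ D\,\frac{d^2 c}{dx^2} - k\,c = 0 \quad\Longrightarrow\quad c(x) = c(0)\,e^{-x/\lambda}, \qquad \lambda = \sqrt{D/k} , \]

so the diffusion length \(\lambda\) sets the spatial reach of somatically produced protein; the \(\lambda\) values used in chapter 8 are diffusion lengths of this kind.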
In chapter 3, I introduced the concept of protein requirement, defined as the minimum number of proteins that the neuron needs to produce to provide at least one protein to each micrometer of the dendritic arbor. In Sec. 3.1, I derived the protein requirement for diffusive proteins under somatic translation and under constant translation throughout the dendritic arbor. In Sec. 3.2, I analyzed numerically the protein requirement for actively transported proteins synthesized in the soma and, in Sec. 3.3, for actively transported proteins synthesized in the dendritic arbor. In Sec. 3.4, I analyzed the protein requirement of proteins synthesized in the dendrite according to the mRNA distributions described in Secs. 3.2 and 3.3. In Sec. 3.5, I derived the protein requirement for a single branch and purely diffusive proteins.
In chapter 4, I analyzed the relation between the radii of the three dendrites meeting at a branch point, their length, and the diffusion length of a protein. In Sec. 4.1 I derived the optimal ratio between the radii of the daughter dendrites that minimizes the protein requirement. In Sec. 4.3 I introduced the 3/2 Rall rule and in Sec. 4.5 its generalization. Finally, I used those rules to estimate the fraction of proteins diffusing away from and toward the soma.
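For reference, Rall's rule relates the parent radius \(r_p\) to the daughter radii \(r_1, r_2\) at a branch point through a 3/2-power law,

\[ r_p^{3/2} = r_1^{3/2} + r_2^{3/2} , \]

and its generalization replaces the exponent 3/2 by a free Rall exponent \(\eta\), \( r_p^{\eta} = r_1^{\eta} + r_2^{\eta} \). Presumably, the fractions of protein diffusing into each daughter then follow from the daughters' relative cross-sections for cytoplasmic proteins and relative perimeters for surface proteins, reflecting the dimensionality effect noted above.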
In chapter 5, I analyzed the distribution of radii for three categories of neurons: cultured hippocampal neurons in Sec. 5.1, stomatogastric ganglion neurons in Sec. 5.2, and 3D-EM-reconstructed prefrontal pyramidal neurons in Sec. 5.3. For each of these three classes, I analyzed the distribution of radii, the Rall exponents, and the probability ratio. For most of them, I found that the probability of a protein diffusing away from the soma is higher for surface proteins than for cytoplasmic ones; I quantified this with a parameter called the surface bias.
In chapter 6, I analyzed the fluorescence ratio imaged by our collaborator Anne-Sophie Hafner for a surface protein, GFP::Nlg, and a soluble one, GFP, in cultured hippocampal neurons, and I compared the fluorescence ratio with the probability ratio obtained in Sec. 5.1, finding that they are in good agreement.
In chapter 7, I compared real dendritic morphologies imaged by one of our collaborators, Ali Karimi, with the optimal branching rule obtained in Sec. 4.1, and I calculated the cost of not having optimal branching radii.
Finally, in chapter 8, I used the branching statistics gathered in Sec. 5.3 to simulate the protein profile in three different classes of neurons: pyramidal neurons, granule neurons, and Purkinje neurons. For each morphology, I compared the protein profiles of surface and cytoplasmic proteins for two different values of the diffusion length, λ = 109 µm and λ = 473 µm, both for optimized and for symmetrical radii. I showed that radius optimization reduces the protein requirement by a factor of 10⁴ for pyramidal neurons.
A framework for the analysis and visualization of multielectrode spike trains / by Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood, and the timescale on which the coding process evolves remains a matter of ongoing debate. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, the analysis and interpretation of multielectrode data are often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns: they were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and investigated their contribution to the encoding of stimulus-related information, with an emphasis on the timescale on which patterns evolve, covering the temporal scales from synchrony up to mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space. Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial, as these two represent only extreme cases of a continuum of timescales expressed in cortical dynamics. The results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) play a particularly salient role in coding, as patterns evolving on these timescales appear to be involved in the representation of stimuli with both slow and fast temporal dynamics.
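As an illustration of the first step, a generic self-organizing map over binned multi-neuron activity patterns can be trained as follows; the grid size and learning schedules are placeholders, not the parameters used in this work.

```python
import numpy as np

def train_som(patterns, grid_shape=(10, 10), epochs=20,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a self-organizing map on binned multi-neuron spike patterns.

    patterns: (n_samples, n_features) array, e.g. binned spike counts.
    Returns the learned prototype vectors, one per map node; recurring
    patterns are then identified by their best-matching node.
    """
    rng = np.random.default_rng(seed)
    n_nodes = grid_shape[0] * grid_shape[1]
    coords = np.array([(i, j) for i in range(grid_shape[0])
                       for j in range(grid_shape[1])], dtype=float)
    weights = rng.standard_normal((n_nodes, patterns.shape[1]))
    n_steps = epochs * len(patterns)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(patterns):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                 # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5     # shrinking neighborhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))      # neighborhood function
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights
```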