1D-3D hybrid modeling : from multi-compartment models to full resolution models in space and time
(2014)
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the associated computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial extent of each cell. For single cells or networks with relatively small numbers of neurons, general-purpose simulators allow space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience adds a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and methods-spanning simulation approaches, chosen depending on the network size, could establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models, e.g., in the NEURON simulator, and couples them to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry-, membrane potential- and intracellular concentration-mapping framework with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa.
Thus, established models and data based on general-purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
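The swc format mentioned above is a simple line-based morphology format; as a hedged illustration (not the paper's actual mapping framework), a minimal reader that recovers the 1D neuron graph on which a surface/volume mapping could then be built might look like:

```python
def read_swc(lines):
    """Minimal reader for the SWC morphology format: each data line holds
    `id type x y z radius parent_id` (parent_id -1 marks the root).
    Returns a dict id -> (type, (x, y, z), radius, parent_id)."""
    nodes = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        i, t, x, y, z, r, p = line.split()[:7]
        nodes[int(i)] = (int(t), (float(x), float(y), float(z)),
                         float(r), int(p))
    return nodes

def edges(nodes):
    """(child, parent) edges of the 1D graph morphology, the starting
    point for mapping onto a 3D surface/volume representation."""
    return [(i, n[3]) for i, n in nodes.items() if n[3] != -1]
```

A reader like this yields the graph structure; the actual mapping to full surface and volume meshes, and the exchange of boundary conditions between 1D and 3D simulations, is the substance of the framework described in the paper.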
The production of the hypertriton (³ΛH) and the anti-hypertriton has been measured for the first time in Pb–Pb collisions at √sNN = 2.76 TeV with the ALICE experiment at the LHC. The pT-integrated ³ΛH yield in one unit of rapidity, dN/dy × B.R.(³ΛH → ³He + π⁻) = (3.86 ± 0.77 (stat.) ± 0.68 (syst.)) × 10⁻⁵ in the 0–10% most central collisions, is consistent with predictions from a statistical thermal model using the same temperature as for the light hadrons. The coalescence parameter B₃ shows a dependence on the transverse momentum, similar to the B₂ of deuterons and the B₃ of ³He nuclei. The ratio of yields S₃ = ³ΛH / (³He × Λ/p) was measured to be S₃ = 0.60 ± 0.13 (stat.) ± 0.21 (syst.) in 0–10% centrality events; this value is compared to different theoretical models and is compatible with thermal model predictions. The measured ³ΛH lifetime, τ = 181 +54/−39 (stat.) ± 33 (syst.) ps, is in agreement within 1σ with the world average value.
50 years of amino acid hydrophobicity scales : revisiting the capacity for peptide classification
(2016)
Background: Physicochemical properties are frequently analyzed to characterize protein sequences of known and unknown function. Especially the hydrophobicity of amino acids is often used for structural prediction or for the detection of membrane-associated or membrane-embedded β-sheets and α-helices. For this purpose, many scales classifying amino acids according to their physicochemical properties have been defined over the past decades. In parallel, several hydrophobicity parameters have been defined for the calculation of peptide properties. We analyzed the performance of separating sequence pools using 98 hydrophobicity scales and five different hydrophobicity parameters, namely the overall hydrophobicity, the hydrophobic moment for detection of α-helical and β-sheet membrane segments, the alternating hydrophobicity, and the exact β-strand score.
Results: Most of the scales are capable of discriminating between transmembrane α-helices and transmembrane β-sheets, but assignment of peptides to pools of soluble peptides of different secondary structures is not achieved at the same quality. The separation capacity, as a measure of the discrimination between different structural elements, is best when using all five hydrophobicity parameters, although adding the alternating hydrophobicity does not provide a large benefit. An in silico evolutionary approach shows that the separation capacity of the scales is generally limited, with a maximal threshold of about 0.6. We observed that scales derived from the evolutionary approach performed best in separating the different peptide pools when the values for arginine and tyrosine were largely distinct from the value for glutamate. Finally, the separation of secondary-structure pools via hydrophobicity can be supported by specific detectable patterns of four amino acids.
Conclusion: The results suggest that the separation capacity of a given scale depends on the spacing of the hydrophobicity values of certain amino acids. Despite the wealth of hydrophobicity scales, no single scale separates all kinds of secondary structures, or soluble from transmembrane peptides, reflecting that properties other than hydrophobicity also affect secondary-structure formation. Nevertheless, applying hydrophobicity scales allows distinguishing between peptides with transmembrane α-helices and those with transmembrane β-sheets. Furthermore, the overall separation-capacity score of 0.6 obtained with the different hydrophobicity parameters can be assisted by pattern search at the protein-sequence level for specific peptides with a length of four amino acids.
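Two of the hydrophobicity parameters named above, the overall hydrophobicity and the hydrophobic moment, have standard textbook definitions; a minimal sketch using the Kyte–Doolittle scale (one possible choice among the 98 scales the paper compares) could be:

```python
import math

# Kyte–Doolittle hydropathy values, one common scale (assumption: the
# paper evaluates many scales; this is just an illustrative choice).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def overall_hydrophobicity(seq):
    """Mean hydropathy of the peptide."""
    return sum(KD[a] for a in seq) / len(seq)

def hydrophobic_moment(seq, angle_deg=100.0):
    """Eisenberg-style hydrophobic moment: magnitude of the hydropathy
    vector sum with 100 degrees per residue for an alpha-helix
    (roughly 180 degrees would probe a beta-strand)."""
    delta = math.radians(angle_deg)
    x = sum(KD[a] * math.cos(i * delta) for i, a in enumerate(seq))
    y = sum(KD[a] * math.sin(i * delta) for i, a in enumerate(seq))
    return math.hypot(x, y) / len(seq)
```

A high moment at 100 degrees flags an amphipathic helix; the alternating hydrophobicity and exact β-strand score used in the paper are refinements of the same vector-sum idea.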
We consider isolated spelling error correction as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy-channel model of Brill and Moore for spelling error correction. We also consider elementary combination techniques for our models, such as language-model-weighted majority voting and center-string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.
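The dictionary-lookup half of the ‘k-best decoding plus dictionary lookup’ strategy can be sketched in a much-simplified form: generate candidates within one edit, keep those found in a dictionary, and pick the most frequent. This is an illustrative stand-in (closer to a plain edit-distance baseline than to the transformation models the paper evaluates):

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, dictionary):
    """Filter candidates through the dictionary (a word -> frequency map)
    and return the most frequent hit; fall back to the input word."""
    if word in dictionary:
        return word
    candidates = [w for w in edits1(word) if w in dictionary]
    return max(candidates, key=dictionary.get) if candidates else word
```

In the paper's setting the candidate list comes from k-best decoding of a learned string-to-string model rather than raw single edits, but the lookup-and-rerank step is the same shape.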
A new method of event characterization based on Deep Learning is presented. PointNet models can be used for fast, online, event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV, which are then used to train and evaluate PointNet-based architectures. The models can be trained on features like the hit positions of particles in the CBM detector planes, tracks reconstructed from the hits, or combinations thereof. The Deep Learning models reconstruct impact parameters from 2–14 fm with a mean error varying from −0.33 to 0.22 fm. For impact parameters in the range of 5–14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4–9% and a mean error of −0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4–10% and a mean error of −0.18 to 0.22 fm. This new method of event characterization is shown to be more accurate and less model-dependent than conventional methods and can exploit the performance of modern GPUs.
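The key property that makes PointNet suitable for detector hits is permutation invariance: a shared per-point transformation followed by a symmetric pooling, so the output does not depend on the order in which hits are listed. A toy, untrained sketch of that structure (not the CBM architectures themselves) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TinyPointNet:
    """Minimal PointNet-style regressor (illustrative, untrained):
    a shared per-point linear+ReLU layer, a max-pool over points,
    then a dense head producing one scalar (here standing in for an
    impact-parameter estimate)."""
    def __init__(self, in_dim=3, feat=32):
        self.W1 = rng.normal(0.0, 0.1, (in_dim, feat))
        self.b1 = np.zeros(feat)
        self.W2 = rng.normal(0.0, 0.1, (feat, 1))
        self.b2 = np.zeros(1)

    def __call__(self, points):                 # points: (n_points, in_dim)
        h = relu(points @ self.W1 + self.b1)    # same weights for every point
        g = h.max(axis=0)                       # symmetric (order-invariant) pooling
        return float(g @ self.W2 + self.b2)
```

Because the pooling is a max over points, shuffling the hit list leaves the output bit-for-bit unchanged, which is exactly what a per-event regression on an unordered point cloud requires.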
Aging of biological systems is controlled by various processes which have a potential impact on gene expression. Here we report a genome-wide transcriptome analysis of the fungal aging model Podospora anserina. Total RNA of three individuals of defined age was pooled and analyzed by SuperSAGE (serial analysis of gene expression). A bioinformatics analysis identified several molecular pathways affected during aging. While the abundance of transcripts linked to ribosomes and to the proteasome quality control system was found to decrease during aging, that of transcripts associated with autophagy increases, suggesting that autophagy may act as a compensatory quality control pathway. Transcript profiles associated with energy metabolism, including mitochondrial functions, were found to fluctuate during aging. Comparison of wild-type transcripts that are continuously down-regulated during aging with those down-regulated in the long-lived copper-uptake mutant grisea validated the relevance of age-related changes in cellular copper metabolism. Overall, we (i) present a unique age-related data set from a longitudinal study of the experimental aging model P. anserina, which represents a reference resource for future investigations in a variety of organisms, (ii) suggest autophagy to be a key quality control pathway that becomes active once other pathways fail, and (iii) present testable predictions for subsequent experimental investigations.
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking the multivariate nature of interactions: it is neglected that investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and that combinations of sources may act jointly on a given target. Bivariate analyses therefore produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios.
Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible.
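One plausible reading of the timing-signature idea is that a direct edge is suspect when its reconstructed delay matches the summed delays of an indirect two-edge path through a third node (a cascade). A heavily simplified sketch of that tagging rule (an illustration only; the paper's MATLAB reference implementation is more elaborate) could be:

```python
def flag_spurious(delays, tol=1.0):
    """Flag directed edges whose delay matches an indirect two-edge path
    within `tol` time units. `delays` maps (src, dst) -> reconstructed
    interaction delay. Returns the set of potentially spurious edges.
    (Simplified stand-in for a timing-signature heuristic.)"""
    flagged = set()
    for (u, v), d_uv in delays.items():
        for (a, w), d_aw in delays.items():
            if a != u or w == v:          # need a path u -> w -> v, w != v
                continue
            d_wv = delays.get((w, v))
            if d_wv is not None and abs(d_aw + d_wv - d_uv) <= tol:
                flagged.add((u, v))       # consistent with a cascade via w
                break
    return flagged
```

Pruning the flagged edges yields a conservative approximation in the spirit of the algorithm: edges that could be explained by the surrounding network's timing are removed rather than kept.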
We present a hierarchy of polynomial-time lattice basis reduction algorithms that stretch from Lenstra–Lenstra–Lovász reduction to Korkine–Zolotareff reduction. Let λ(L) be the length of a shortest nonzero element of a lattice L. We present an algorithm which, for k ∈ N, finds a nonzero lattice vector b with |b|² ≤ (6k²)^(n/k) · λ(L)². This algorithm uses O(n²(k^(k+o(k)) + n²) log B) arithmetic operations on O(n log B)-bit integers, provided that the given basis vectors b₁, …, bₙ ∈ Zⁿ are integral and have length bound B. The algorithm successively applies Korkine–Zolotareff reduction to blocks of length k of the lattice basis. We also improve Kannan's algorithm for Korkine–Zolotareff reduction.
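The base of this hierarchy is LLL reduction, which the block algorithm generalizes. A textbook LLL sketch over exact rationals (illustrative and unoptimized; the paper's block variant applies Korkine–Zolotareff reduction to length-k blocks instead) looks like:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction with exact rational arithmetic.
    Input: list of integer basis vectors. Output: an LLL-reduced basis
    of the same lattice (unimodular transformations only)."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                            # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]   # condition failed: swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

With δ = 3/4 this guarantees |b₁|² ≤ 2^(n−1) λ(L)²; the block hierarchy above trades running time (growing with k) for the much stronger (6k²)^(n/k) approximation factor.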
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to train properly, and to the present day there is a lack of vast brain datasets which extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high-temporal-resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the recorded EEG data image conditions in a zero-shot fashion, using EEG responses synthesized for hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions and the trial repetitions of the EEG dataset contribute to the trained models' prediction accuracy. Fourth, we built encoding models whose predictions generalize well to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
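A linearizing encoding model of the kind trained here is, at its core, a regularized linear map from image features to measured responses; a minimal closed-form ridge sketch (illustrative only; feature extraction and cross-validated choice of the regularization strength are omitted) is:

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: learn W minimizing
    ||X W - Y||^2 + alpha * ||W||^2.
    X: (n_stimuli, n_features) image features;
    Y: (n_stimuli, n_outputs) responses, e.g. flattened
    EEG channels x time points per stimulus."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def predict(X, W):
    """Synthesize responses for new stimuli from their features."""
    return X @ W
```

Predicted responses for held-out images can then drive zero-shot identification: the candidate image whose synthesized response best correlates with the recorded EEG is taken as the identification.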
Poster presentation.
Introduction: The ability of neurons to emit different firing patterns is considered relevant for neuronal information processing. In dopaminergic neurons, prominent patterns include highly regular pacemakers with separate spikes and stereotyped intervals, processes with repetitive bursts and partial regularity, and irregular spike trains with nonstationary properties. In order to model and quantify these processes and the variability of their patterns with respect to pharmacological and cellular properties, we aim to describe the two dimensions of burstiness and regularity in a single model framework.
Methods: We present a stochastic spike train model in which the degree of burstiness and the regularity of the oscillation are described independently, each by a single parameter. In this model, a background oscillation with independent and normally distributed intervals gives rise to Poissonian spike packets with a Gaussian firing intensity. The variability of inter-burst intervals and the average number of spikes in each burst indicate regularity and burstiness, respectively. These parameters can be estimated by fitting the model to the autocorrelogram. This allows us to assign every spike train a position in the two-dimensional space spanned by regularity and burstiness and thus to investigate the dependence of the firing patterns on different experimental conditions. Finally, burst detection in single spike trains is possible within the model because the parameter estimates determine the appropriate bandwidth for burst identification.
Results and Discussion: We applied the model to a sample data set obtained from dopaminergic substantia nigra and ventral tegmental area neurons recorded extracellularly in vivo and studied differences between the firing activity of dopaminergic neurons in wild-type and K-ATP channel knock-out mice.
The model is able to represent a variety of discharge patterns and to describe pharmacologically induced changes. It provides a simple and objective scheme for classifying the observed spike trains into pacemaker, irregular, and bursty processes. Beyond this classification, changes in the parameters can be studied quantitatively, including the properties related to bursting behavior. Interestingly, the proposed burst detection algorithm may also be applicable to spike trains with nonstationary firing rates if the remaining parameters are unaffected. Thus, the proposed model and its burst detection algorithm can be useful for describing and investigating neuronal firing patterns and their variability with cellular and experimental conditions.
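The generative process described under Methods, a Gaussian-interval background oscillation whose cycles emit Poisson-distributed spike packets with Gaussian jitter, can be sketched as follows (parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def simulate_spike_train(n_cycles=200, mean_isi=1.0, sd_isi=0.1,
                         spikes_per_burst=3.0, burst_sd=0.05, seed=0):
    """Illustrative sketch of the two-dimensional model:
    - regularity is governed by sd_isi (spread of the independent,
      normally distributed background intervals);
    - burstiness by spikes_per_burst (mean Poisson count per packet).
    Each packet scatters its spikes around the cycle time with a
    Gaussian firing intensity of width burst_sd."""
    rng = np.random.default_rng(seed)
    centers = np.cumsum(rng.normal(mean_isi, sd_isi, n_cycles))
    spikes = []
    for c in centers:
        n = rng.poisson(spikes_per_burst)          # burstiness dimension
        spikes.extend(c + rng.normal(0.0, burst_sd, n))
    return np.sort(np.array(spikes))
```

With spikes_per_burst near 1 and small sd_isi the train degenerates to a pacemaker; large spikes_per_burst with large sd_isi gives irregular bursting, spanning the two-dimensional pattern space the abstract describes.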