SpikeShip: a method for fast, unsupervised discovery of high-dimensional neural spiking patterns
(2023)
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure for spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost needed to make all the relative spike-timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike-timing differences. SpikeShip can be effectively computed for high-dimensional neuronal ensembles, has a low computational cost that is linear in the spike count, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike-time distribution, is not affected by firing-rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding-window approach. We compare SpikeShip with other measures, such as the SPIKE and Victor-Purpura distances, highlighting their differences and relative advantages. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried information complementary to conventional firing-rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
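The decomposition described above can be illustrated with a deliberately simplified sketch. This is not the published SpikeShip implementation: it assumes equal spike counts per neuron in both patterns, and the function name and dissimilarity summary (mean absolute neuron-specific flow) are illustrative choices.

```python
from statistics import median

def pattern_dissimilarity(a, b):
    """Illustrative optimal-transport-style spike-pattern dissimilarity
    (a simplified sketch, not the published SpikeShip algorithm).

    a, b: dict mapping neuron_id -> list of spike times, with equal
    spike counts per neuron in both patterns.
    """
    # 1-D optimal transport between equal-sized point sets matches
    # spikes in sorted order; each per-spike transport is a time shift.
    shifts = []          # all per-spike shifts, for the global term
    per_neuron = {}      # neuron -> mean shift for that neuron
    for n in a:
        d = [tb - ta for ta, tb in zip(sorted(a[n]), sorted(b[n]))]
        per_neuron[n] = sum(d) / len(d)
        shifts.extend(d)
    # Global rigid translation: the median shift minimizes total L1 cost,
    # capturing a pure latency shift of the whole pattern.
    g = median(shifts)
    # Neuron-specific flows: what remains after removing the global shift;
    # these reflect inter-neuronal spike-timing differences.
    flows = {n: s - g for n, s in per_neuron.items()}
    # Summary dissimilarity: zero for a pure global latency shift,
    # positive when relative timing across neurons differs.
    return sum(abs(f) for f in flows.values()) / len(flows)
```

A pattern that is a rigidly delayed copy of another yields zero dissimilarity, while a change in relative timing between neurons yields a positive value, mirroring the latency-shift/flow decomposition in the text.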
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to simultaneously image large numbers of dendritic spines in 3D volumes of neural tissue. Exploiting such datasets, however, requires new tools for the fully automated detection and analysis of large numbers of spines, and no currently available automated method comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network that was pretrained on a general-purpose image library and then optimized for the spine detection task. This transfer-learning approach is data-efficient while achieving high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate, and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets and achieves high detection performance even without any retraining or adjustment of model parameters.
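Combining annotations from several experts, as done for the labelled dataset above, is typically a spatial majority vote. The sketch below is hypothetical (the function name, tolerance, and vote threshold are illustrative assumptions, not taken from the paper): spine markers from different annotators are clustered by proximity, and clusters supported by a majority become consensus spines.

```python
def consensus_spines(annotations, tol=1.0, min_votes=3):
    """Hedged sketch of majority-vote consensus labelling.

    annotations: list (one per annotator) of lists of (x, y, z) markers.
    Markers within `tol` of a cluster's first marker are grouped; each
    annotator contributes at most one vote per cluster. Clusters with
    at least `min_votes` votes become consensus spines (mean position).
    """
    def close(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) <= tol ** 2

    clusters = []  # each cluster: list of (annotator_index, point)
    for i, pts in enumerate(annotations):
        for p in pts:
            for c in clusters:
                # one vote per annotator per cluster
                if all(j != i for j, _ in c) and close(p, c[0][1]):
                    c.append((i, p))
                    break
            else:
                clusters.append([(i, p)])
    out = []
    for c in clusters:
        if len(c) >= min_votes:
            pts = [p for _, p in c]
            out.append(tuple(sum(v) / len(v) for v in zip(*pts)))
    return out
```

With five annotators, a threshold such as three out of five turns individual variability into a single, more reliable ground-truth set for training and validation.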
Motivation: DNA CpG methylation (CpGm) has proven to be a crucial epigenetic factor in the gene regulatory system. Assessment of DNA CpG methylation values via whole-genome bisulfite sequencing (WGBS) is, however, computationally extremely demanding.
Results: We present FAst MEthylation calling (FAME), the first approach to quantify CpGm values directly from bulk or single-cell WGBS reads without intermediate output files. FAME is very fast but as accurate as standard methods, which first produce BS alignment files before computing CpGm values. We present experiments on bulk and single-cell bisulfite datasets showing that data analysis can be significantly sped up, helping to address the current WGBS analysis bottleneck for large-scale datasets without compromising accuracy.
Availability: An implementation of FAME is open source and licensed under GPL-3.0 at https://github.com/FischerJo/FAME.
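The underlying methylation call exploits standard bisulfite chemistry: unmethylated cytosines are converted to thymines, so at a CpG site a read showing C is counted as methylated and T as unmethylated. The sketch below only illustrates this counting step on reads already placed on the reference; FAME itself works without producing intermediate alignment files, which this simplified, hypothetical function does not capture.

```python
def cpg_methylation(ref, reads):
    """Illustrative forward-strand bisulfite methylation calling
    (a sketch of the general principle, not the FAME implementation).

    ref:   reference sequence string
    reads: list of (start, sequence) pairs, already placed on ref
    Returns {cpg_position: methylation fraction} for covered CpGs.
    """
    # CpG sites on the forward strand of the reference.
    cpgs = [i for i in range(len(ref) - 1) if ref[i:i + 2] == "CG"]
    counts = {i: [0, 0] for i in cpgs}  # position -> [methylated, total]
    for start, seq in reads:
        for i in cpgs:
            if start <= i < start + len(seq):
                base = seq[i - start]
                if base == "C":       # unconverted C: methylated
                    counts[i][0] += 1
                    counts[i][1] += 1
                elif base == "T":     # converted C: unmethylated
                    counts[i][1] += 1
    return {i: m / t for i, (m, t) in counts.items() if t > 0}
```

The per-site fraction methylated/total is the CpGm value that, at whole-genome scale and across millions of reads, makes WGBS analysis so computationally demanding.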
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world, each of which administered at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models that adjusted for common covariates. After these corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance, by 37%, while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across AVLTs agreed with held-out data from dually administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
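The core idea of site-effect removal can be shown with a deliberately stripped-down sketch: align each site's score distribution to a common location and scale. Real empirical Bayes harmonization (ComBat-style, as used in the study) additionally shrinks per-site estimates toward a pooled prior and preserves covariate effects via regression; none of that is modeled here, and the function name and reference parameters are illustrative assumptions.

```python
from statistics import mean, stdev

def harmonize_sites(scores, sites, ref_mean=0.0, ref_sd=1.0):
    """Minimal location-scale harmonization sketch (NOT empirical Bayes):
    standardize scores within each site, then rescale to a common
    reference mean and standard deviation.

    scores: list of raw test scores
    sites:  list of site labels, parallel to `scores`
    """
    # Group scores by acquisition site.
    by_site = {}
    for x, s in zip(scores, sites):
        by_site.setdefault(s, []).append(x)
    # Per-site location and scale estimates (no shrinkage applied).
    params = {s: (mean(v), stdev(v)) for s, v in by_site.items()}
    # Map every score onto the common reference distribution.
    return [ref_mean + ref_sd * (x - params[s][0]) / params[s][1]
            for x, s in zip(scores, sites)]
```

After such an adjustment, a constant offset between two sites no longer drives between-site variance, which is the quantity the study reports reducing by 37%.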
The Calderón problem with finitely many unknowns is equivalent to convex semidefinite optimization
(2023)
We consider the inverse boundary value problem of determining a coefficient function in an elliptic partial differential equation from knowledge of the associated Neumann-Dirichlet operator. The unknown coefficient function is assumed to be piecewise constant with respect to a given pixel partition, and upper and lower bounds are assumed to be known a priori.
We show that this Calderón problem with finitely many unknowns can be equivalently formulated as a minimization problem for a linear cost functional with a convex nonlinear semidefinite constraint. We also prove error estimates for noisy data and extend the result to the practically relevant case of finitely many measurements, where the coefficient is to be reconstructed from a finite-dimensional Galerkin projection of the Neumann-Dirichlet operator.
Our result builds on previous work on Loewner monotonicity and convexity of the Neumann-Dirichlet operator and on the technique of localized potentials. It connects the emerging fields of inverse coefficient problems and semidefinite optimization.
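The shape of such a reformulation can be sketched schematically. The notation below (pixel values $\gamma \in \mathbb{R}^n$, measured operator $\hat\Lambda$, cost vector $c$) and the orientation of the Loewner order are illustrative assumptions, not taken from the paper:

```latex
% Schematic convex reformulation (illustrative notation):
% \gamma \in \mathbb{R}^n collects the pixelwise coefficient values,
% \Lambda(\gamma) is the Neumann-Dirichlet operator, \hat\Lambda the
% measured data, and a, b the a priori known bounds.
\min_{\gamma \in \mathbb{R}^n} \; c^\top \gamma
\quad \text{subject to} \quad
a \le \gamma \le b, \qquad \Lambda(\gamma) \preceq \hat\Lambda .
```

The cost is linear in $\gamma$, and the semidefinite constraint is convex whenever $\gamma \mapsto \Lambda(\gamma)$ is Loewner-convex, which is the structural property of the Neumann-Dirichlet operator that the abstract builds on.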