Orthologs document the evolution of genes and metabolic capacities encoded in extant and ancient genomes. Orthologous genes that are detected across the full diversity of contemporary life allow reconstruction of the gene set of LUCA, the last universal common ancestor. These genes presumably represent the functional repertoire common to – and necessary for – all living organisms. The design of artificial life has the potential to test this. Recently, a minimal gene (MG) set for a self-replicating cell was determined experimentally; surprisingly many of its genes have unknown functions and are not represented in LUCA. However, as similarity between orthologs decays with time, it eventually becomes insufficient to infer common ancestry, leaving ancient gene set reconstructions incomplete and distorted to an unknown extent. Here we introduce evolutionary traceability, together with the software protTrace, which quantifies, for each protein, the evolutionary distance beyond which the sensitivity of the ortholog search becomes limiting. We show that the LUCA set comprises only high-traceability proteins, most of which have catalytic functions. We further show that proteins in the MG set lacking orthologs outside bacteria mostly have low traceability, leaving open whether their eukaryotic orthologs have simply been overlooked. Using the example of REC8, a protein essential for chromosome cohesion, we demonstrate how a traceability-informed adjustment of the search sensitivity identifies hitherto missed orthologs in the fast-evolving microsporidia. Taken together, evolutionary traceability helps to differentiate between true absence and non-detection of orthologs, and thus improves our understanding of the evolutionary conservation of functional protein networks.
Bacteria of the genera Photorhabdus and Xenorhabdus produce a plethora of natural products to support their similar symbiotic lifecycles. For many of these compounds, the specific bioactivities are unknown. A common challenge in natural product research, when trying to prioritize research efforts, is the rediscovery of identical (or highly similar) compounds from different strains. Linking genome sequence to metabolite production can help to overcome this problem; however, sequences are typically not available for entire collections of organisms. Here we perform a comprehensive metabolic screening using HPLC-MS data associated with a 114-strain collection (58 Photorhabdus and 56 Xenorhabdus) from across Thailand and explore the metabolic variation among the strains, matched with several abiotic factors. We use machine learning to rank the importance of individual metabolites in predicting the given metadata. With this approach, we were able to prioritize metabolites in the context of natural product investigations, leading to the identification of previously unknown compounds. The three highest-ranking features were associated with Xenorhabdus and attributed to the same chemical entity, cyclo(tetrahydroxybutyrate). This work addresses the need for prioritization in high-throughput metabolomic studies and demonstrates the viability of such an approach in future research.
Antimicrobial resistance is a major threat to global health and food security today. Scheduling cycling therapies by targeting phenotypic states associated with specific mutations can help eradicate pathogenic variants in chronic infections. In this paper, we introduce a logistic switching model to abstract mutation networks of collateral resistance. We found particular conditions under which the unstable zero-equilibrium of the logistic maps can be stabilized through a switching signal; that is, persistent populations can be eradicated through tailored switching regimens.
Starting from an optimal-control formulation, the switching policies show their potential for stabilizing the zero-equilibrium of dynamics governed by logistic maps. However, such switching strategies deserve a specific characterization in terms of their limit behaviour. Finally, we use evolutionary and control algorithms to find both optimal and sub-optimal switching policies. Simulation results show the applicability of Parrondo's Paradox to the design of cycling therapies against drug resistance.
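The eradication-by-switching idea can be illustrated with a toy two-strain, two-drug system (hypothetical growth rates, not the paper's fitted model): under each drug one phenotype is suppressed (r < 1) while the collaterally sensitive partner grows (r > 1), yet alternating the drugs drives both strains to extinction.

```python
import numpy as np

# Hypothetical per-drug logistic growth rates for (strain 1, strain 2):
# each drug suppresses one strain and permits the other.
R = {"A": (0.5, 1.6), "B": (1.6, 0.5)}

def step(x, r):
    """One logistic-map update x <- r*x*(1-x) per strain, clipped to [0, 1]."""
    return np.clip(r * x * (1 - x), 0.0, 1.0)

def simulate(schedule, x0=(0.2, 0.2), steps=200):
    x = np.array(x0)
    for t in range(steps):
        x = step(x, np.array(R[schedule[t % len(schedule)]]))
    return x

mono = simulate(["A"])        # constant drug A: strain 2 persists
cycle = simulate(["A", "B"])  # alternating A/B: both strains collapse
```

Near zero, the two-step growth factor under the alternating schedule is 0.5 × 1.6 = 0.8 < 1 for each strain, which is the linearized condition behind the Parrondo-like eradication, even though each strain survives one of the two drugs on its own.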
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications remains debatable. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza disease at the within-host and population levels. Analytical and simulation results indicate that, even assuming unrealistically high efficacies for NAIs, drug intake starting at the onset of symptoms has a negligible effect on an individual's viral load and symptom score. Increasing NAI doses does not provide a better outcome, contrary to what is generally believed. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the use of NAIs to treat influenza in general, as well as the respective stockpiling by regulatory authorities.
The successful elimination of bacteria such as Streptococcus pneumoniae from a host involves coordination between different parts of the immune system. Previous studies have explored the effects of the initial pneumococcal load (bacterial dose) on different representations of innate immunity, finding that pathogenic outcomes can vary with the size of the bacterial dose. However, other studies support the notion of dose-independent factors contributing to bacterial clearance. In this paper, we seek to provide a deeper understanding of the immune responses associated with the pneumococcus. To this end, we formulate a model that realizes an abstraction of the innate-regulatory immune host response. Stability and bifurcation analyses of the model reveal the following trichotomy of pneumococcal outcomes determined by the bifurcation parameters: (i) dose-independent clearance; (ii) dose-independent persistence; and (iii) dose-limited clearance. Bistability, in which the bacteria-free equilibrium co-stabilizes with the largest steady-state bacterial load, is the specific mechanism behind dose-limited clearance. The trichotomy of pneumococcal outcomes described here integrates all previously observed bacterial fates into a unified framework.
The COVID-19 pandemic has underlined the impact of emergent pathogens as a major threat to human health. The development of quantitative approaches to advance comprehension of the current outbreak is urgently needed to tackle this severe disease. In this work, several mathematical models are proposed to represent SARS-CoV-2 dynamics in infected patients. Considering different starting times of infection, parameter sets that represent the infectivity of SARS-CoV-2 are computed and compared with those of other viral infections that can also cause pandemics.
Based on the target cell model, the time for SARS-CoV-2 to infect susceptible cells (mean of approximately 30 days) is much longer than that reported for Ebola (about 3 times longer) and influenza (60 times longer). The within-host reproductive number for SARS-CoV-2 is consistent with the values for influenza infection (1.7–5.35). The model that best fit the data included immune responses, suggesting a slow cell response peaking between 5 and 10 days post onset of symptoms. The model with an eclipse phase, in which cells pass through a latent phase before becoming productively infected, was not supported. Interestingly, both the target cell model and the model with immune responses predict that the virus may replicate very slowly in the first days after infection, and that it could be below detection levels during the first 4 days post infection. A quantitative comprehension of SARS-CoV-2 dynamics and the estimation of standard parameters of viral infections are the key contributions of this pioneering work.
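For readers unfamiliar with the target cell model referred to above, a minimal sketch looks like this (illustrative parameter values, not the paper's fitted estimates); the within-host reproductive number follows as R0 = p·beta·T0/(c·delta).

```python
import numpy as np

# Target-cell-limited model:
#   T' = -beta*T*V,  I' = beta*T*V - delta*I,  V' = p*I - c*V
# Parameters below are made up for illustration only.
beta, delta, p, c = 5e-8, 0.6, 10.0, 3.0
T, I, V = 1e7, 0.0, 1.0                 # initial target cells, infected, virus
R0 = p * beta * T / (c * delta)         # within-host reproductive number

dt, days = 0.01, 40                     # simple forward-Euler integration
traj = []
for _ in range(int(days / dt)):
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    traj.append(V)

peak_day = (np.argmax(traj) + 1) * dt   # day of peak viral load
```

With these toy parameters R0 ≈ 2.8, inside the influenza-like range quoted in the abstract, and the viral load grows slowly for days before peaking, qualitatively matching the slow early replication the models predict.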
Background: Biological psychiatry aims to understand mental disorders in terms of altered neurobiological pathways. However, for one of the most prevalent and disabling mental disorders, Major Depressive Disorder (MDD), patients differ only marginally from healthy individuals at the group level. Whether Precision Psychiatry can resolve this discrepancy and provide specific, reliable biomarkers remains unclear, as current Machine Learning (ML) studies suffer from methodological and data-related shortcomings that lead to substantial over- as well as underestimation of true model accuracy.
Methods: Addressing these issues, we quantify classification accuracy on a single-subject level in N=1,801 patients with MDD and healthy controls employing an extensive multivariate approach across a comprehensive range of neuroimaging modalities in a well-curated cohort, including structural and functional Magnetic Resonance Imaging, Diffusion Tensor Imaging as well as a polygenic risk score for depression.
Findings: Training and testing a total of 2.4 million ML models, we find accuracies for diagnostic classification between 48.1% and 62.0%. Multimodal data integration of all neuroimaging modalities does not improve model performance. Similarly, training ML models on individuals stratified based on age, sex, or remission status does not lead to better classification. Even under simulated conditions of perfect reliability, performance does not substantially improve. Importantly, model error analysis identifies symptom severity as one potential target for MDD subgroup identification.
Interpretation: Although multivariate neuroimaging markers increase predictive power compared to univariate analyses, single-subject classification – even under conditions of extensive, best-practice Machine Learning optimization in a large, harmonized sample of patients diagnosed using state-of-the-art clinical assessments – does not reach clinically relevant performance. Based on this evidence, we sketch a course of action for Precision Psychiatry and future MDD biomarker research.
Transport of lipids across membranes is fundamental for diverse biological pathways in cells. Multiple ion-coupled transporters participate in lipid translocation, but their mechanisms remain largely unknown. Major facilitator superfamily (MFS) lipid transporters play central roles in cell wall synthesis, brain development and function, lipid recycling, and cell signaling. Recent structures of MFS lipid transporters revealed overlapping architectural features pointing towards a common mechanism. Here we used cysteine disulfide trapping, molecular dynamics simulations, mutagenesis analysis, and transport assays in vitro and in vivo, to investigate the mechanism of LtaA, a proton-dependent MFS lipid transporter essential for lipoteichoic acid synthesis in the pathogen Staphylococcus aureus. We reveal that LtaA displays asymmetric lateral openings with distinct functional relevance and that cycling through outward- and inward-facing conformations is essential for transport activity. We demonstrate that while the entire amphipathic central cavity of LtaA contributes to lipid binding, its hydrophilic pocket dictates substrate specificity. We propose that LtaA catalyzes lipid translocation by a ‘trap-and-flip’ mechanism that might be shared among MFS lipid transporters.
The severity of the COVID-19 pandemic, caused by the SARS-CoV-2 coronavirus, calls for the urgent development of a vaccine. The primary immunological target is the SARS-CoV-2 spike (S) protein. S is exposed on the viral surface to mediate viral entry into the host cell. To identify possible antibody binding sites not shielded by glycans, we performed multi-microsecond molecular dynamics simulations of a 4.1 million atom system containing a patch of viral membrane with four full-length, fully glycosylated and palmitoylated S proteins. By mapping steric accessibility, structural rigidity, sequence conservation and generic antibody binding signatures, we recover known epitopes on S and reveal promising epitope candidates for vaccine development. We find that the extensive and inherently flexible glycan coat shields a surface area larger than expected from static structures, highlighting the importance of structural dynamics in epitope mapping.
Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and variations in response correlations across stimuli are independent of variations in response means. Moreover, we demonstrate that stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
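As a toy illustration of how stimulus-dependent SCCs can arise (this is not the paper's inference model): two Poisson neurons share a trial-to-trial gain fluctuation whose strength we let depend on a hypothetical stimulus, and the spike count correlation tracks that shared-gain strength while the mean rates stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def scc(rate1, rate2, shared_sd, trials=20000):
    """Spike count correlation of two Poisson neurons with a shared gain."""
    gain = 1 + shared_sd * rng.standard_normal(trials)  # shared modulation
    gain = np.clip(gain, 0.1, None)                     # keep rates positive
    n1 = rng.poisson(rate1 * gain)                      # counts, neuron 1
    n2 = rng.poisson(rate2 * gain)                      # counts, neuron 2
    return np.corrcoef(n1, n2)[0, 1]

r_weak = scc(10, 12, shared_sd=0.05)   # "stimulus A": weak shared gain
r_strong = scc(10, 12, shared_sd=0.3)  # "stimulus B": strong shared gain
```

Because the mean rates are identical in both conditions, the difference between `r_weak` and `r_strong` mirrors the abstract's observation that correlation structure can vary across stimuli independently of response means.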
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Summary: We introduce fsbrain, an R package for the visualization of neuroimaging data. The package can be used to visualize vertex-wise and region-wise morphometry data, parcellations, labels and statistical results on brain surfaces in three dimensions (3D). Voxel data can be displayed in lightbox mode. The fsbrain package offers various customization options and produces publication quality plots which can be displayed interactively, saved as bitmap images, or integrated into R notebooks.
Availability and Implementation: The software, source code and documentation are available under the MIT license at https://github.com/dfsp-spirit/fsbrain. Releases can be installed directly from the Comprehensive R Archive Network (CRAN).
Grasping the meaning of everyday visual events is a fundamental feat of human intelligence that hinges on diverse neural processes ranging from vision to higher-level cognition. Deciphering the neural basis of visual event understanding requires rich, extensive, and appropriately designed experimental data. However, this type of data is hitherto missing. To fill this gap, we introduce the BOLD Moments Dataset (BMD), a large dataset of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips and accompanying metadata. We show visual events interface with an array of processes, extending even to memory, and we reveal a match in hierarchical processing between brains and video-computable deep neural networks. Furthermore, we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. BMD thus establishes a critical groundwork for investigations of the neural basis of visual event understanding.
Visual scene perception is mediated by a set of cortical regions that respond preferentially to images of scenes, including the occipital place area (OPA) and parahippocampal place area (PPA). However, the differential contribution of OPA and PPA to scene perception remains an open research question. In this study, we take a deep neural network (DNN)-based computational approach to investigate the differences in OPA and PPA function. In a first step we search for a computational model that predicts fMRI responses to scenes in OPA and PPA well. We find that DNNs trained to predict scene components (e.g., wall, ceiling, floor) explain higher variance uniquely in OPA and PPA than a DNN trained to predict scene category (e.g., bathroom, kitchen, office). This result is robust across several DNN architectures. On this basis, we then determine whether particular scene components predicted by DNNs differentially account for unique variance in OPA and PPA. We find that variance in OPA responses uniquely explained by the navigation-related floor component is higher compared to the variance explained by the wall and ceiling components. In contrast, PPA responses are better explained by the combination of wall and floor, that is scene components that together contain the structure and texture of the scene. This differential sensitivity to scene components suggests differential functions of OPA and PPA in scene processing. Moreover, our results further highlight the potential of the proposed computational approach as a general tool in the investigation of the neural basis of human scene perception.
The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
In meditation practices that involve focused attention to a specific object, novice practitioners often experience moments of distraction (i.e., mind wandering). Previous studies have investigated the neural correlates of mind wandering during meditation practice through Electroencephalography (EEG) using linear metrics (e.g., oscillatory power). However, their results are not fully consistent. Since the brain is known to be a chaotic/nonlinear system, it is possible that linear metrics cannot fully capture complex dynamics present in the EEG signal. In this study, we assess whether nonlinear EEG signatures can be used to characterize mind wandering during breath focus meditation in novice practitioners. For that purpose, we adopted an experience sampling paradigm in which 25 participants were iteratively interrupted during meditation practice to report whether they were focusing on the breath or thinking about something else. We compared the complexity of EEG signals during mind wandering and breath focus states using three different algorithms: Higuchi’s fractal dimension (HFD), Lempel-Ziv complexity (LZC), and Sample entropy (SampEn). Our results showed that EEG complexity was generally reduced during mind wandering relative to breath focus states. We conclude that EEG complexity metrics are appropriate to disentangle mind wandering from breath focus states in novice meditation practitioners, and therefore, they could be used in future EEG neurofeedback protocols to facilitate meditation practice.
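Of the three metrics above, Lempel-Ziv complexity is the easiest to sketch: binarize the signal (e.g. around its median) and count the number of new phrases in the LZ76 parsing; irregular signals yield many phrases, periodic ones few. A minimal, unnormalized version (our sketch, not the authors' exact implementation):

```python
import numpy as np

def lempel_ziv_complexity(bits):
    """Number of distinct phrases in the LZ76 parsing of a 0/1 sequence."""
    s = "".join(str(int(b)) for b in bits)
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # grow the current phrase while it already occurs in the preceding text
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1          # phrase boundary reached: one new phrase counted
        i += k
    return c

rng = np.random.default_rng(1)
noise = rng.integers(0, 2, 1000)   # irregular signal: many new phrases
regular = np.tile([0, 1], 500)     # periodic signal: parsing saturates quickly
lzc_noise = lempel_ziv_complexity(noise)
lzc_regular = lempel_ziv_complexity(regular)
```

In an EEG setting each epoch would first be binarized around its median; dividing the phrase count by n/log2(n) makes values comparable across epoch lengths.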
Living cells constantly remodel the shape of their lipid membranes. In the endoplasmic reticulum (ER), the reticulon homology domain (RHD) of the reticulophagy regulator 1 (RETR1/FAM134B) forms dense autophagic puncta that are associated with membrane removal by ER-phagy. In molecular dynamics (MD) simulations, we find that FAM134B-RHD spontaneously forms clusters, driven in part by curvature-mediated attraction. At a critical size, the FAM134B-RHD clusters induce the formation of membrane buds. The kinetics of budding depends sensitively on protein concentration and bilayer asymmetry. Our MD simulations shed light on the role of FAM134B-RHD in ER-phagy and show that membrane asymmetry can be used to modulate the kinetic barrier for membrane remodeling.
Gasdermin-D (GSDMD) is the ultimate effector of pyroptosis, a form of programmed cell death associated with pathogen invasion and inflammation. After proteolytic cleavage by caspases activated by the inflammasome, the GSDMD N-terminal domain (GSDMDNT) assembles on the inner leaflet of the plasma membrane and induces the formation of large membrane pores. We use atomistic molecular dynamics simulations to study GSDMDNT monomers, oligomers, and rings in an asymmetric plasma membrane mimetic. We identify distinct interaction motifs of GSDMDNT with phosphatidylinositol-4,5-bisphosphate (PI(4,5)P2) and phosphatidylserine (PS) head-groups and describe differential lipid binding between the pore and prepore conformations. Oligomers are stabilized by shared lipid binding sites between neighboring monomers acting akin to double-sided tape. We show that already small GSDMDNT oligomers form stable, water-filled and ion-conducting membrane pores bounded by curled beta-sheets. In large-scale simulations, we resolve the process of pore formation by lipid detachment from GSDMDNT arcs and lipid efflux from partial rings. We find that high-order GSDMDNT oligomers can crack under the line tension of 86 pN created by an open membrane edge to form the slit pores or closed GSDMDNT rings seen in experiment. Our simulations provide a detailed view of key steps in GSDMDNT-induced plasma membrane pore formation, including sublytic pores that explain nonselective ion flux during early pyroptosis.
Nuclear pore complexes (NPCs) mediate nucleocytoplasmic transport. Their intricate 120 MDa architecture remains incompletely understood. Here, we report a near-complete structural model of the human NPC scaffold with explicit membrane and in multiple conformational states. We combined AI-based structure prediction with in situ and in cellulo cryo-electron tomography and integrative modeling. We show that linker Nups spatially organize the scaffold within and across subcomplexes to establish the higher-order structure. Microsecond-long molecular dynamics simulations suggest that the scaffold is not required to stabilize the inner and outer nuclear membrane fusion, but rather widens the central pore. Our work exemplifies how AI-based modeling can be integrated with in situ structural biology to understand subcellular architecture across spatial organization levels.
Ribosomes catalyze protein synthesis by cycling through various functional states. These states have been extensively characterized in vitro, yet their distribution in actively translating human cells remains elusive. Here, we optimized a cryo-electron tomography-based approach and resolved ribosome structures inside human cells with a local resolution of up to 2.5 angstroms. These structures revealed the distribution of functional states of the elongation cycle, a Z tRNA binding site and the dynamics of ribosome expansion segments. In addition, we visualized structures of Homoharringtonine, a drug for chronic myeloid leukemia treatment, within the active site of the ribosome and found that its binding reshaped the landscape of translation. Overall, our work demonstrates that structural dynamics and drug effects can be assessed at near-atomic detail within human cells.
Precise estimates of genome sizes are important parameters for both theoretical and practical biodiversity genomics. We present here a fast, easy-to-implement and precise method to estimate genome size from the number of bases sequenced and the mean sequence coverage. To estimate the latter, we take advantage of the fact that a precise estimation of the Poisson distribution parameter lambda is possible from truncated data, restricted to the part of the coverage distribution that represents the true underlying distribution. Using simulations, we show that reasonable genome size estimates can be obtained even from low-coverage (10X), highly discontinuous genome drafts. Comparison of estimates from a wide range of taxa and sequencing strategies with flow-cytometry estimates of the same individuals showed a very good fit and suggested that both methods yield comparable, interchangeable results.
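The core of the method can be sketched as follows (a simulation with made-up numbers, not the paper's code): fit a Poisson by maximum likelihood to the coverage histogram truncated to a window around its mode, then divide the total number of sequenced bases by the fitted lambda.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

# Simulated per-base coverage: a 1 Mb genome at a true mean coverage of 10X,
# plus 100 kb of artifact positions with spurious low coverage that would
# bias a naive mean-coverage estimate.
cov = np.concatenate([rng.poisson(10.0, 1_000_000),
                      rng.poisson(1.0, 100_000)])
total_bases = cov.sum()

# Truncated-Poisson MLE on the window [lo, hi] around the histogram mode,
# where the counts reflect the true underlying distribution.
lo, hi = 5, 15
ks = np.arange(lo, hi + 1)
counts = np.bincount(cov, minlength=hi + 1)[lo:hi + 1]
lgam = np.array([lgamma(k + 1) for k in ks])

def trunc_loglik(lam):
    logpmf = ks * np.log(lam) - lam - lgam
    return float((counts * (logpmf - np.log(np.exp(logpmf).sum()))).sum())

grid = np.linspace(8.0, 12.0, 801)
lam_hat = grid[np.argmax([trunc_loglik(l) for l in grid])]

naive_size = total_bases / cov.mean()  # inflated by the artifact positions
est_size = total_bases / lam_hat       # truncated-fit genome size estimate
```

The naive estimate counts every artifact position as genome, while the truncated fit recovers a lambda close to the true 10X and hence a genome size close to 1 Mb.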
De novo fatty acid biosynthesis in humans is accomplished by a multidomain protein, the type I fatty acid synthase (FAS). Although FAS is ubiquitously expressed in all tissues, fatty acid synthesis is not essential in normal healthy cells owing to the sufficient supply of fatty acids by the diet. However, FAS is overexpressed in cancer cells and correlates with tumor malignancy, which makes it an attractive selective therapeutic target in tumorigenesis. Herein, we present a crystal structure of the condensing part of murine FAS, which is highly homologous to human FAS, with octanoyl moieties covalently bound to the transferase (MAT) and the condensation (KS) domains. The MAT domain binds the octanoyl moiety in a novel conformation, which reflects the pronounced conformational dynamics of the substrate binding site responsible for the MAT domain's substrate promiscuity. In contrast, the KS binding pocket only subtly adapts to the octanoyl moiety upon substrate binding. Beyond the rigid domain structure, a comprehensive enzyme kinetic study revealed a positive cooperative effect in substrate binding by the KS domain. These structural and mechanistic findings contribute significantly to our understanding of the mode of action of FAS and may guide future rational inhibitor design.
Cyclic di-AMP is the only known essential second messenger in bacteria and archaea, regulating different proteins indispensable for numerous physiological processes. In particular, it controls various potassium and osmolyte transporters involved in osmoregulation. In Bacillus subtilis, the K+/H+ symporter KimA of the KUP family is inactivated by c-di-AMP. KimA sustains survival under potassium limitation at low external pH by mediating K+ uptake. However, at elevated intracellular K+ concentrations, further K+ accumulation would be toxic. In this study, we reveal the molecular basis of how c-di-AMP binding inhibits KimA. We report cryo-EM structures of KimA with bound c-di-AMP in detergent solution and reconstituted in amphipols. By combining structural data with functional assays and molecular dynamics simulations, we reveal how c-di-AMP modulates transport. We show that an intracellular loop in the transmembrane domain interacts with c-di-AMP bound to the adjacent cytosolic domain. This reduces the mobility of transmembrane helices at the cytosolic side of the K+ binding site and therefore traps KimA in an inward-occluded conformation.
Modular polyketide synthases (PKSs) produce complex, bioactive secondary metabolites in assembly line-like multistep reactions. Longstanding efforts to produce novel, biologically active compounds by recombining intact modules into new modular PKSs have mostly resulted in poorly active chimeras and decreased product yields. Recent findings demonstrate that the low efficiencies of chimeric modular PKSs also result from rate limitations in the transfer of the growing polyketide chain across the non-cognate module:module interface and in the processing of the non-native polyketide substrate by the ketosynthase (KS) domain. In this study, we aim to dissect and understand the low efficiency of chimeric modular PKSs and to establish guidelines for modular PKS engineering. To do so, we work with a bimodular PKS testbed and systematically vary substrate specificity, substrate identity, and domain:domain interfaces of the KS-mediated reactions. We observe that the KS domains employed in our chimeric bimodular PKSs are bottlenecks with regard to both substrate specificity and interaction with the ACP. Overall, our systematic study can explain in quantitative terms why early, oversimplified engineering strategies based on the plain shuffling of modules mostly failed, and why more recent approaches show improved success rates. We moreover identify two mutations of the KS domain that significantly increased turnover rates in chimeric systems and interpret this finding in mechanistic detail.
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
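Our reading of the proposed normalisation, as a minimal NumPy sketch (connectivity and activity statistics are made up): dividing each unit's afferent weights by its own fan-in makes the summed drive track the fraction of active inputs, independent of how many afferents a unit happens to receive.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_out, p_connect = 2000, 4, 0.1
mask = rng.random((n_in, n_out)) < p_connect     # sparse random connectivity
w = rng.normal(1.0, 0.1, (n_in, n_out)) * mask   # afferent weights
fan_in = np.maximum(mask.sum(axis=0), 1)         # afferents per output unit
w_norm = w / fan_in                              # divide by input count

x = (rng.random(n_in) < 0.2).astype(float)       # 20% of inputs are active
drive = x @ w_norm    # ~0.2 for every unit, regardless of its fan-in
drive_raw = x @ w     # scales with the absolute number of active afferents
```

The unnormalised drive grows with connection density, whereas the normalised drive stays near the active fraction, which is the size-invariance property the abstract attributes to biological firing rates.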
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers, independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model reproduces a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
There is increasing evidence that rapid phenotypic adaptation of quantitative traits is not uncommon in nature. However, the circumstances under which rapid adaptation of polygenic traits occurs are not yet understood. Building on previous concepts of soft selection, i.e., frequency- and density-dependent selection, I developed and tested the hypothesis that the adaptation speed of a polygenic trait depends on the number of offspring per breeding pair in a randomly mating diploid population.
Using individual-based modelling on a range of offspring numbers per parent (2–200) in populations of various sizes (100–10,000 individuals), I could show that by far the largest proportion of variance (42%) was explained by offspring number, regardless of genetic trait architecture (10–50 loci, different locus contribution distributions). In addition, it was possible to identify the majority of the responsible loci and account for even more of the observed phenotypic change with a moderate population size.
The simulation results suggest that offspring number may be a crucial factor for the adaptation speed of quantitative traits. Moreover, as large offspring numbers translate into large phenotypic variance among the offspring of each parental pair, this genetic bet-hedging strategy increases the chance to contribute to the next generation in unpredictable environments.
Mutations are the ultimate basis of evolution, yet their occurrence rate is known for only a few species. We directly estimated the spontaneous mutation rate and the mutational spectrum in the non-biting midge C. riparius with a new approach. Individuals from ten mutation accumulation lines over five generations were deep genome-sequenced to count de novo mutations (DNMs) that were not present in a pool of F1 individuals representing the parental genotypes. We identified 51 new mutations, of which 25 were insertions or deletions and 26 were single point mutations. This shift in the mutational spectrum compared to other organisms was explained by the high A/T content of the species. We estimated a haploid mutation rate of 2.1 × 10−9 (95% confidence interval: 1.4 × 10−9 – 3.1 × 10−9), which is in the range of recent estimates for other insects and supports the drift-barrier hypothesis. We show that accurate mutation rate estimation from a high number of observed mutations is feasible with moderate effort even for non-model species.
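As a rough illustration of how such a per-site rate is assembled from mutation accumulation (MA) data: the mutation count, line number, and generation number follow the abstract, but the callable genome size and the normal approximation to the Poisson interval below are our assumptions, not the authors' method.

```python
import math

n_mutations = 51          # de novo mutations observed (from the abstract)
n_lines = 10              # MA lines (from the abstract)
n_generations = 5         # generations per line (from the abstract)
callable_sites = 2.0e8    # ASSUMED haploid callable sites, illustrative only

# Each diploid line transmits two haploid genome copies per generation.
site_generations = 2 * callable_sites * n_lines * n_generations
rate = n_mutations / site_generations

# Normal approximation to the Poisson 95% interval on the count.
half_width = 1.96 * math.sqrt(n_mutations)
ci = ((n_mutations - half_width) / site_generations,
      (n_mutations + half_width) / site_generations)
print(f"haploid rate ~ {rate:.2e} per site per generation, "
      f"95% CI {ci[0]:.1e} to {ci[1]:.1e}")
```

With these placeholder inputs the estimate lands near the order of magnitude reported in the abstract; an exact chi-square-based Poisson interval would widen the bounds slightly for a count of 51.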
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither the induction of extrinsic synaptic plasticity nor the blockage of presynaptic activity degrades the lognormal-like distribution, but both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient for the generation, while a combination of intrinsic and extrinsic synaptic plasticity maintains the lognormal-like distribution of spine sizes.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve dendritic trees that are efficient in terms of both wiring and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
Dendrites display a striking variety of neuronal type-specific morphologies, but the mechanisms and principles underlying such diversity remain elusive. A major player in defining the morphology of dendrites is the neuronal cytoskeleton, including evolutionarily conserved actin-modulatory proteins (AMPs). Still, we lack a clear understanding of how AMPs might support developmental phenomena such as neuron-type specific dendrite dynamics. To address precisely this level of in vivo specificity, we concentrated on a defined neuronal type, the class III dendritic arborisation (c3da) neuron of Drosophila larvae, displaying actin-enriched short terminal branchlets (STBs). Computational modelling reveals that the main branches of c3da neurons follow a general growth model based on optimal wiring, but the STBs do not. Instead, model STBs are defined by a short reach and a high affinity to grow towards the main branches. We thus concentrated on c3da STBs and developed new methods to quantitatively describe dendrite morphology and dynamics based on in vivo time-lapse imaging of mutants lacking individual AMPs. In this way, we extrapolated the role of these AMPs in defining STB properties. We propose that dendrite diversity is supported by the combination of a common step, refined by a neuron type-specific second level. For c3da neurons, we present a molecular model of how the combined action of multiple AMPs in vivo define the properties of these second level specialisations, the STBs.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
Neuronal hyperexcitability is a feature of Alzheimer’s disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. Therefore, we performed biophysically realistic multi-compartmental modelling of excitability in reconstructed CA1 pyramidal neurons of wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability-promoting effects of dendritic degeneration are cancelled out by the excitability-decreasing effects of synaptic loss. We find an interesting balance of excitability regulation with enhanced degeneration in the basal dendrites of APP/PS1 cells, potentially leading to increased excitation by the apical but decreased excitation by the basal Schaffer collateral pathway. Furthermore, our simulations reveal that three additional pathomechanistic scenarios can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice. Scenario 1: increased excitatory burst input; scenario 2: enhanced E/I ratio; and scenario 3: alteration of intrinsic ion channels (I_AHP down-regulated; I_NaP, I_Na and I_CaT up-regulated) in addition to an enhanced E/I ratio. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality and degeneracy, according to which multiple different disruptions are separately sufficient but no single disruption is necessary for neuronal hyperexcitability.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies irrespective of any particular arborisation complexity. Also, spiking models result in morphology invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. It rather provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
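A single-compartment caricature of this length invariance, consistent with the abstract's claim though not taken from the paper itself: for a cylinder of diameter d and length L, the membrane area A = πdL carries both the leak (specific conductance g_m) and, at synapse density ρ per area with single-synapse conductance g_s and reversal E_syn, the synaptic drive, so the steady-state depolarisation is

```latex
V_\infty \;=\; \frac{G_\mathrm{syn}\,E_\mathrm{syn}}{G_\mathrm{leak} + G_\mathrm{syn}}
\;=\; \frac{\rho\,g_\mathrm{s}\,A\,E_\mathrm{syn}}{g_\mathrm{m}\,A + \rho\,g_\mathrm{s}\,A}
\;=\; \frac{\rho\,g_\mathrm{s}\,E_\mathrm{syn}}{g_\mathrm{m} + \rho\,g_\mathrm{s}}
```

The area, and with it the total dendritic length, cancels. In the full cable treatment of the paper the same cancellation survives, with the average diameter and the intrinsic conductivity entering through the electrotonic length constant.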
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding that potential connectivity increases linearly with cable length and maximum spine length and decreases with the overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
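An estimate with exactly these dependencies (linear in the two cable lengths and the maximum spine length, inverse in the shared volume) is the classic 2·s·La·Ld/V expected-potential-synapse count; whether this matches the paper's exact equation is our assumption, and the numbers below are purely illustrative:

```python
def expected_potential_synapses(axon_len_um, dend_len_um,
                                spine_len_um, overlap_vol_um3):
    """Expected number of potential contacts between an axon of length
    axon_len_um and a dendrite of length dend_len_um inside a shared
    volume overlap_vol_um3, where a contact is possible whenever the two
    cables pass within one maximum spine length of each other."""
    return 2.0 * spine_len_um * axon_len_um * dend_len_um / overlap_vol_um3

# Example: 5 mm of axon and 4 mm of dendrite sharing a (200 µm)^3 volume,
# with a 2 µm maximum spine length.
n = expected_potential_synapses(5000.0, 4000.0, 2.0, 200.0**3)
print(n)  # → 10.0
```

The linear growth with either cable length makes the reported excess of neurite intuitive: doubling the dendritic cable in the same volume doubles the candidate contacts available to implement any given connection.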
Disordered proteins and nucleic acids can condense into droplets that resemble the membraneless organelles observed in living cells. Molecular dynamics (MD) simulations offer a unique tool to characterize the molecular interactions governing the formation of these biomolecular condensates, their physico-chemical properties, and the factors controlling their composition and size. However, biopolymer condensation depends sensitively on the balance between different energetic and entropic contributions. Here, we develop a general strategy to fine-tune the potential energy function for MD simulations of biopolymer phase separation. We rebalance protein-protein interactions against solvation and entropic contributions to match the excess free energy of transferring proteins between dilute solution and condensate. We illustrate this formalism by simulating liquid droplet formation of the FUS low complexity domain (LCD) with a rebalanced MARTINI model. By scaling the strength of the nonbonded interactions in the coarse-grained MARTINI potential energy function, we map out a phase diagram in the plane of protein concentration and interaction strength. Above a critical scaling factor of αc ≈ 0.6, FUS LCD condensation is observed, where α = 1 and 0 correspond to full and repulsive interactions in the MARTINI model, respectively. For a scaling factor α = 0.65, we recover the experimental densities of the dilute and dense phases, and thus the excess protein transfer free energy into the droplet and the saturation concentration at which FUS LCD condenses. In the region of phase separation, we simulate FUS LCD droplets of four different sizes in stable equilibrium with the dilute phase and slabs of condensed FUS LCD for tens of microseconds, and over one millisecond in aggregate. We determine surface tensions in the range of 0.01 to 0.4 mN/m from the fluctuations of the droplet shape and from the capillary-wave-like broadening of the interface between the two phases. From the dynamics of the protein end-to-end distance, we estimate shear viscosities from 0.001 to 0.02 Pa·s for the FUS LCD droplets with scaling factors α in the range of 0.625 to 0.75, where we observe liquid droplets. Significant hydration of the interior of the droplets keeps the proteins mobile and the droplets fluid.
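Mechanically, a uniform scaling of the protein-protein interaction strength amounts to multiplying the Lennard-Jones well depths by α while leaving the bead sizes untouched. A minimal stand-in for that operation, with a toy parameter table whose values are illustrative and not real MARTINI parameters:

```python
def scale_protein_interactions(lj_params, alpha):
    """Scale the protein-protein Lennard-Jones well depths (epsilon) by a
    uniform factor alpha, leaving sigma untouched -- a minimal sketch of
    rebalancing nonbonded interactions in a coarse-grained force field.
    lj_params: {(bead_i, bead_j): (sigma_nm, epsilon_kJ_per_mol)}"""
    return {pair: (sigma, alpha * eps)
            for pair, (sigma, eps) in lj_params.items()}

# Toy bead-pair table (illustrative values only)
params = {("P1", "P1"): (0.47, 4.5), ("P1", "C1"): (0.47, 2.7)}
scaled = scale_protein_interactions(params, alpha=0.65)
```

In an actual workflow the scaled table would be written back into the force-field topology before rerunning the simulation; the sweep over α then traces out the phase diagram described above.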
The protein Atg2 has been proposed to form a membrane tether that mediates lipid transfer from the ER to the phagophore in autophagy. However, recent kinetic measurements on the human homolog ATG2A indicated a transport rate of only about one lipid per minute, which would be far too slow to deliver the millions of lipids required to form a phagophore on a physiological time scale. Here, we revisit the analysis of the fluorescence quenching experiments. We develop a detailed kinetic model of the lipid transfer between two membranes bridged by a tether that forms a conduit for lipids. The model provides an excellent fit to the fluorescence experiments, with a lipid transfer rate of about 100 lipids per second per protein. At this rate, Atg2-mediated transfer can supply a significant fraction of the lipids required in autophagosome biogenesis. Our kinetic model is generally applicable to lipid-transfer experiments, in particular to proteins forming organelle contact sites in cells.
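A back-of-envelope check shows why the revised rate matters. The lipid count and tether copy number below are order-of-magnitude assumptions for illustration, not values from the paper:

```python
lipids_needed = 3.0e6   # ASSUMED order of magnitude for a phagophore
tethers = 100           # ASSUMED Atg2 copies at the contact site

times = {}
for rate_per_s in (1 / 60.0, 100.0):   # ~1 lipid/min vs ~100 lipids/s
    seconds = lipids_needed / (rate_per_s * tethers)
    times[rate_per_s] = seconds
    print(f"{rate_per_s:g} lipids/s per protein -> {seconds / 60:.0f} min")
```

At one lipid per minute the delivery time is on the order of weeks, whereas at 100 lipids per second it drops to minutes, i.e. into the physiological range for autophagosome biogenesis.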
Binding of the spike protein of SARS-CoV-2 to the human angiotensin converting enzyme 2 (ACE2) receptor triggers translocation of the virus into cells. Both the ACE2 receptor and the spike protein are heavily glycosylated, including at sites near their binding interface. We built fully glycosylated models of the ACE2 receptor bound to the receptor binding domain (RBD) of the SARS-CoV-2 spike protein. Using atomistic molecular dynamics (MD) simulations, we found that the glycosylation of the human ACE2 receptor contributes substantially to the binding of the virus. Interestingly, the glycans at two glycosylation sites, N90 and N322, have opposite effects on spike protein binding. The glycan at the N90 site partly covers the binding interface of the spike RBD. Therefore, this glycan can interfere with the binding of the spike protein and protect against docking of the virus to the cell. By contrast, the glycan at the N322 site interacts tightly with the RBD of the ACE2-bound spike protein and strengthens the complex. Remarkably, the N322 glycan binds into a conserved region of the spike protein identified previously as a cryptic epitope for a neutralizing antibody. By mapping the glycan binding sites, our MD simulations aid in the targeted development of neutralizing antibodies and SARS-CoV-2 fusion inhibitors.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Abstract Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes through exposure to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
Background Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of µ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.
Objective We further characterize the effect of µ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.
Methods A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain-states corresponding to µ-rhythm surface positive and negative peaks with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.
Results MEP amplitude is modulated by µ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP-modulation was observed for weak intensities, the largest absolute MEP-modulation for intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the µ-rhythm negative peak.
Conclusion The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by µ-phase. Lower stimulation intensities enable more efficient differentiation of EEG µ-phase-defined brain states.
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to co-ordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Epilepsy can have many different causes and its development (epileptogenesis) involves a bewildering complexity of interacting processes. Here, we present a first-of-its-kind computational model to better understand the role of neuroimmune interactions in the development of acquired epilepsy. Our model describes the interactions between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model is validated using data from animal models that mimic human epileptogenesis caused by infection, status epilepticus, and blood-brain barrier disruption. The mathematical model successfully explains characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries, or its dependence on the intensity of an injury. Furthermore, stochasticity in the model captures the variability of epileptogenesis outcomes in individuals exposed to identical injury. Notably, in line with the concept of degeneracy, our simulations reveal multiple routes towards epileptogenesis with neuronal loss as a sufficient but non-necessary component. We show that our framework allows for in silico predictions of therapeutic strategies, providing information on injury-specific therapeutic targets and optimal time windows for intervention.
SARS-CoV-2 infections are rapidly spreading around the globe. The rapid development of therapies is of major importance. However, our lack of understanding of the molecular processes and host cell signaling events underlying SARS-CoV-2 infection hinders therapy development. We employed a SARS-CoV-2 infection system in permissive human cells to study signaling changes by phospho-proteomics. We identified viral protein phosphorylation and defined phosphorylation-driven host cell signaling changes upon infection. Growth factor receptor (GFR) signaling and downstream pathways were activated. Drug-protein network analyses revealed GFR signaling as a key pathway targetable by approved drugs. Inhibition of GFR downstream signaling by five compounds prevented SARS-CoV-2 replication in cells, as assessed by cytopathic effect, viral dsRNA production, and viral RNA release into the supernatant. This study describes host cell signaling events upon SARS-CoV-2 infection and reveals GFR signaling as a central pathway essential for SARS-CoV-2 replication. It provides novel strategies for COVID-19 treatment.
The measurement of protein dynamics by proteomics to study cell remodeling has attracted increasing attention in recent years. This development is largely driven by a number of technological advances in proteomics methods. Pulsed stable isotope labeling in cell culture (SILAC) combined with tandem mass tag (TMT) labeling has evolved as a gold standard for profiling protein synthesis and degradation. While the experimental setup is similar to typical proteomics experiments, the data analysis proves more difficult: after peptide identification through search engines, data extraction requires either custom scripted pipelines or tedious manual table manipulations to extract the TMT-labeled heavy and light peaks of interest. To overcome this limitation, which deters researchers from profiling protein dynamics by proteomics, we developed a user-friendly, browser-based application that allows easy and reproducible data analysis without the need for scripting experience. In addition, we provide a Python package that can be implemented in established data analysis pipelines. We anticipate that this tool will ease data analysis and spark further research aimed at monitoring protein translation and degradation by proteomics.
The survivin suppressant YM155 is a drug candidate for neuroblastoma. Here, we tested YM155 in 101 neuroblastoma cell lines (19 parental cell lines, 82 drug-adapted sublines). Of these, 77 cell lines displayed YM155 IC50s in the range of clinical YM155 concentrations. ABCB1 was an important determinant of YM155 resistance. The activity of the ABCB1 inhibitor zosuquidar ranged from being similar to that of the structurally different ABCB1 inhibitor verapamil to being 65-fold higher. ABCB1 sequence variations may be responsible for this, suggesting that the design of variant-specific ABCB1 inhibitors may be possible. Further, we showed that ABCC1 confers YM155 resistance. Previously, p53 depletion had resulted in decreased YM155 sensitivity. However, TP53-mutant cells were not generally less sensitive to YM155 than TP53 wild-type cells in this study. Finally, YM155 cross-resistance profiles differed between cells adapted to drugs as similar as cisplatin and carboplatin. In conclusion, the large cell line panel was necessary to reveal an unanticipated complexity of the YM155 response in neuroblastoma cell lines with acquired drug resistance. Novel findings include that ABCC1 mediates YM155 resistance and that YM155 cross-resistance profiles differ between cell lines adapted to drugs as similar as cisplatin and carboplatin.
SARS-CoV-2 is a novel coronavirus currently causing a pandemic. We show that the majority of amino acid positions, which differ between SARS-CoV-2 and the closely related SARS-CoV, are differentially conserved suggesting differences in biological behaviour. In agreement, novel cell culture models revealed differences between the tropism of SARS-CoV-2 and SARS-CoV. Moreover, cellular ACE2 (SARS-CoV-2 receptor) and TMPRSS2 (enables virus entry via S protein cleavage) levels did not reliably indicate cell susceptibility to SARS-CoV-2. SARS-CoV-2 and SARS-CoV further differed in their drug sensitivity profiles. Thus, only drug testing using SARS-CoV-2 reliably identifies therapy candidates. Therapeutic concentrations of the approved protease inhibitor aprotinin displayed anti-SARS-CoV-2 activity. The efficacy of aprotinin and of remdesivir (currently under clinical investigation against SARS-CoV-2) was further enhanced by therapeutic concentrations of the proton pump inhibitor omeprazole (aprotinin 2.7-fold, remdesivir 10-fold). Hence, our study has also identified anti-SARS-CoV-2 therapy candidates that can be readily tested in patients.
Doxorubicin-loaded human serum albumin nanoparticles overcome transporter-mediated drug resistance
(2019)
Resistance to systemic drug therapies is a major reason for the failure of anti-cancer therapies. Here, we tested doxorubicin-loaded human serum albumin (HSA) nanoparticles in the neuroblastoma cell line UKF-NB-3 and its ABCB1-expressing sublines adapted to vincristine (UKF-NB-3rVCR1) and doxorubicin (UKF-NB-3rDOX20). Doxorubicin-loaded nanoparticles displayed increased anti-cancer activity in UKF-NB-3rVCR1 and UKF-NB-3rDOX20 cells relative to doxorubicin solution, but not in UKF-NB-3 cells. UKF-NB-3rVCR1 cells were re-sensitised by nanoparticle-encapsulated doxorubicin to the level of UKF-NB-3 cells. UKF-NB-3rDOX20 cells displayed a more pronounced resistance phenotype than UKF-NB-3rVCR1 cells and were not re-sensitised by doxorubicin-loaded nanoparticles to the level of parental cells. ABCB1 inhibition using zosuquidar resulted in effects similar to those of nanoparticle incorporation, indicating that doxorubicin-loaded nanoparticles circumvent ABCB1-mediated drug efflux. The limited re-sensitisation of UKF-NB-3rDOX20 cells to doxorubicin by circumvention of ABCB1-mediated efflux is probably due to the presence of multiple doxorubicin resistance mechanisms. So far, ABCB1 inhibitors have failed in clinical trials, probably because systemic ABCB1 inhibition results in a modified body distribution of its many substrates including drugs, xenobiotics, and other molecules. HSA nanoparticles may provide an alternative, more specific way to overcome transporter-mediated resistance.
SARS-CoV-2 is the causative agent of COVID-19. Severe COVID-19 disease has been associated with disseminated intravascular coagulation and thrombosis, but the mechanisms underlying COVID-19-related coagulopathy remain unknown. Since the risk of severe COVID-19 disease is higher in males than in females and increases with age, we combined proteomics data from SARS-CoV-2-infected cells with human gene expression data from the Genotype-Tissue Expression (GTEx) database to identify gene products involved in coagulation that change with age, differ in their levels between females and males, and are regulated in response to SARS-CoV-2 infection. This resulted in the identification of transferrin as a candidate coagulation promoter, whose levels increase with age, are higher in males than in females, and rise upon SARS-CoV-2 infection. A systematic investigation of gene products associated with the GO term “blood coagulation” did not reveal further high-confidence candidates likely to contribute to COVID-19-related coagulopathy. In conclusion, the role of transferrin should be considered in the course of COVID-19 disease and further examined in ongoing clinico-pathological investigations.
SAMHD1 is discussed as a tumour suppressor protein, but its potential role in cancer has only been investigated in very few cancer types. Here, we performed a systematic analysis of the TCGA (adult cancer) and TARGET (paediatric cancer) databases, the results of which did not suggest that SAMHD1 should be regarded as a bona fide tumour suppressor. SAMHD1 mutations that interfere with SAMHD1 function were not associated with poor outcome, which would be expected for a tumour suppressor. High SAMHD1 tumour levels were associated with increased survival in some cancer entities and reduced survival in others. Moreover, the data suggested differences in the role of SAMHD1 between males and females and between different races. Often, there was no significant relationship between SAMHD1 levels and cancer outcome. Taken together, our results indicate that SAMHD1 may exert pro- or anti-tumourigenic effects and that SAMHD1 is involved in the oncogenic process in a minority of cancer cases. These findings are at odds with a perception and narrative forming in the field suggesting that SAMHD1 is a tumour suppressor. A systematic literature review confirmed that most of the available scientific articles focus on a potential role of SAMHD1 as a tumour suppressor. The reasons for this remain unclear but may include confirmation bias and publication bias. Our findings emphasise that hypotheses, perceptions, and assumptions need to be continuously challenged by using all available data and evidence.
Objectives Omeprazole was shown to improve the anti-cancer effect of the nucleoside-analogue 5-fluorouracil. Here, we investigated the effects of omeprazole on the activities of the antiviral nucleoside analogues ribavirin and acyclovir.
Methods West Nile virus-infected Vero cells and influenza A H1N1-infected MDCK cells were treated with omeprazole and/or ribavirin. Herpes simplex virus 1 (HSV-1)- or HSV-2-infected Vero or HaCat cells were treated with omeprazole and/or acyclovir. Antiviral effects were determined by examination of cytopathogenic effects (CPE), immune staining, and virus yield assay. Cell viability was investigated by MTT assay.
Results Omeprazole concentrations up to 80μg/mL did not affect the antiviral effects of ribavirin. In contrast, omeprazole increased the acyclovir-mediated effects on HSV-1- and HSV-2-induced CPE formation in a dose-dependent manner in Vero and HaCat cells. Addition of omeprazole 80μg/mL resulted in a 10.8-fold reduction of the acyclovir concentration that reduces CPE formation by 50% (IC50) in HSV-1-infected Vero cells and in a 47.7-fold acyclovir IC50 reduction in HSV-1-infected HaCat cells. In HSV-2-infected cells, omeprazole reduced the acyclovir IC50 by 7.3-fold (Vero cells) or by 12.9-fold (HaCat cells). Omeprazole also enhanced the acyclovir-mediated effects on viral antigen expression and virus replication in HSV-1- and HSV-2-infected cells. In HSV-1-infected HaCat cells, omeprazole 80μg/mL reduced the virus titre in the presence of acyclovir 1μg/mL by 1.6×105-fold. In HSV-2-infected HaCat cells, omeprazole 80μg/mL reduced the virus titre in the presence of acyclovir 2μg/mL by 9.2×103-fold. The investigated drug concentrations did not affect cell viability, either alone or in combination.
Conclusions Omeprazole increases the anti-HSV activity of acyclovir. As a clinically well-established and well-tolerated drug, omeprazole is a candidate for antiviral therapies in combination with acyclovir.
Spatial attention increases both inter-areal synchronization and spike rates across the visual hierarchy. To investigate whether these attentional changes reflect distinct or common mechanisms, we performed simultaneous laminar recordings of identified cell classes in macaque V1 and V4. Enhanced V4 spike rates were expressed by both excitatory neurons and fast-spiking interneurons, and were most prominent and arose earliest in time in superficial layers, consistent with a feedback modulation. By contrast, V1-V4 gamma-synchronization reflected feedforward communication and surprisingly engaged only fast-spiking interneurons in the V4 input layer. In mouse visual cortex, we found a similar motif for optogenetically identified inhibitory-interneuron classes. Population decoding analyses further indicated that feedback-related increases in spike rates encoded attention more reliably than feedforward-related increases in synchronization. These findings reveal distinct, cell-type-specific feedforward and feedback pathways for the attentional modulation of inter-areal synchronization and spike rates, respectively.
The new variant of concern (VOC) of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Omicron (B.1.1.529), is genetically very different from other VOCs. We compared Omicron with the preceding VOC Delta (B.1.617.2) and the wild-type strain (B.1) with respect to their interactions with the antiviral type I interferon (IFN-alpha/beta) response in infected cells. Our data indicate that Omicron has gained an elevated capability to suppress IFN-beta induction upon infection and to better withstand the antiviral state imposed by exogenously added IFN-alpha.
The SARS-CoV-2 Omicron variant is currently causing a large number of infections in many countries. A number of antiviral agents are approved or in clinical testing for the treatment of COVID-19. Despite the high number of mutations in the Omicron variant, we here show that Omicron isolates display similar sensitivity to eight of the most important anti-SARS-CoV-2 drugs and drug candidates (including remdesivir, molnupiravir, and PF-07321332, the active compound in paxlovid), which is of timely relevance for the treatment of the increasing number of Omicron patients. Most importantly, we also found that the Omicron variant displays a reduced capability of antagonising the host cell interferon response. This provides a potential mechanistic explanation for the clinically observed reduced pathogenicity of Omicron variant viruses compared to Delta variant viruses.
Recently, we have shown that SARS-CoV-2 Omicron virus isolates are less effective at inhibiting the host cell interferon response than Delta viruses. Here, we present further evidence that reduced interferon-antagonising activity explains at least in part why Omicron variant infections are inherently less severe than infections with other SARS-CoV-2 variants. Most importantly, we here also show that Omicron variant viruses display enhanced sensitivity to interferon treatment, which makes interferons promising therapy candidates for Omicron patients, in particular in combination with other antiviral agents.
Developmental loss of ErbB4 in PV interneurons disrupts state-dependent cortical circuit dynamics
(2020)
GABAergic inhibition plays an important role in the establishment and maintenance of cortical circuits during development. Neuregulin 1 (Nrg1) and its interneuron-specific receptor ErbB4 are key elements of a signaling pathway critical for the maturation and proper synaptic connectivity of interneurons. Using conditional deletions of the ERBB4 gene in mice, we tested the role of this signaling pathway at two developmental timepoints in parvalbumin-expressing (PV) interneurons, the largest subpopulation of cortical GABAergic cells. Loss of ErbB4 in PV interneurons during embryonic, but not late postnatal, development leads to alterations in the activity of excitatory and inhibitory cortical neurons, along with severe disruption of cortical temporal organization. These impairments emerge by the end of the second postnatal week, prior to the complete maturation of the PV interneurons themselves. Early loss of ErbB4 in PV interneurons also results in profound dysregulation of excitatory pyramidal neuron dendritic architecture and a redistribution of spine density at the apical dendritic tuft. In association with these deficits, excitatory cortical neurons exhibit normal tuning for sensory inputs, but a loss of state-dependent modulation of the gain of sensory responses. Together, these data support a key role for early developmental Nrg1/ErbB4 signaling in PV interneurons as a powerful mechanism underlying the maturation of both the inhibitory and excitatory components of cortical circuits.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what can be conveyed by isolated signals. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three awake common marmosets and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
Rhythmic flicker stimulation has gained interest as a treatment for neurodegenerative diseases and a method for frequency tagging neural activity in human EEG/MEG recordings. Yet, little is known about the way in which flicker-induced synchronization propagates across cortical levels and impacts different cell types. Here, we used Neuropixels to simultaneously record from LGN, V1, and CA1 while presenting visual flicker stimuli at different frequencies. LGN neurons showed strong phase locking up to 40 Hz, whereas phase locking was substantially weaker in V1 units and absent in CA1 units. Laminar analyses revealed an attenuation of phase locking at 40 Hz for each processing stage, with substantially weaker phase locking in the superficial layers of V1. Gamma-rhythmic flicker predominantly entrained fast-spiking interneurons. Optotagging experiments showed that these neurons correspond to either PV+ or narrow-waveform Sst+ neurons. A computational model could explain the observed differences in phase locking based on the neurons’ capacitive low-pass filtering properties. In summary, the propagation of synchronized activity and its effect on distinct cell types strongly depend on its frequency.
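The frequency-dependent attenuation described in this abstract is consistent with first-order (RC-type) low-pass filtering by the membrane. The following minimal sketch computes the gain of such a filter; the 20 ms time constant is an illustrative assumption, not a value from the study's fitted model.

```python
import math

def rc_gain(freq_hz: float, tau_s: float) -> float:
    """Gain of a first-order RC low-pass filter at a given frequency.

    |H(f)| = 1 / sqrt(1 + (2*pi*f*tau)^2)
    """
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * freq_hz * tau_s) ** 2)

# Illustrative membrane time constant of 20 ms (an assumption).
tau = 0.020
gain_10hz = rc_gain(10.0, tau)  # ~0.62
gain_40hz = rc_gain(40.0, tau)  # ~0.20
# A passive membrane attenuates 40 Hz flicker far more than 10 Hz flicker,
# in line with the weaker phase locking at higher stimulation frequencies.
```

Fast-spiking interneurons with shorter effective time constants would show less attenuation at 40 Hz under this picture, consistent with their preferential entrainment.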
SpikeShip: a method for fast, unsupervised discovery of high-dimensional neural spiking patterns
(2023)
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure for spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost to make all the relative spike timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike timing differences. SpikeShip can be effectively computed for high-dimensional neuronal ensembles, has a low (linear) computational cost of the same order as the spike count, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike time distributions, is not affected by firing rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding window approach. We compare the advantages and differences between SpikeShip and other measures like the SPIKE and Victor–Purpura distances. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried complementary information to conventional firing rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
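The decomposition described in the abstract — a global rigid translation plus neuron-specific transport flows — can be illustrated with a deliberately simplified sketch that assumes one spike per neuron. This is a toy illustration of the decomposition only, not the published SpikeShip algorithm, which handles full spike trains with optimal transport weights.

```python
from statistics import median

def spikeship_toy(times_a, times_b):
    """Toy decomposition of spike-timing differences (one spike per neuron).

    The per-neuron timing differences between two patterns are split into
    a shared rigid translation (the median difference, capturing a global
    latency shift) and neuron-specific residual flows; the cost averages
    the absolute flows. Simplified illustration of SpikeShip's
    decomposition, not the published algorithm.
    """
    diffs = [b - a for a, b in zip(times_a, times_b)]
    shift = median(diffs)               # global latency shift
    flows = [d - shift for d in diffs]  # neuron-specific transport flows
    cost = sum(abs(f) for f in flows) / len(flows)
    return shift, flows, cost

# Two patterns that differ only by a 5 ms global latency have identical
# relative spike timing, so the dissimilarity cost is zero.
a = [10.0, 12.0, 17.0, 25.0]
b = [t + 5.0 for t in a]
shift, flows, cost = spikeship_toy(a, b)
```

The key property illustrated is invariance to global latency shifts: only inter-neuronal timing differences contribute to the cost.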
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely-moving primates with concurrent monitoring of three-dimensional head and gaze stances. We recorded neurons and local field potentials across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking their head and eye. Theta band activity was intermittently present at movement onset and modulated by saccades. Many cells were phase-locked to theta, with few showing theta phase precession. Most hippocampal neurons encoded a mixture of spatial variables beyond place fields and a negligible number showed prominent grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional space using a multiplexed code, with head orientation and eye movement properties dominating over simple place and grid coding during free exploration.
Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.
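The distinction between the two latent control dynamics can be made concrete with a small Euler-integration sketch (illustrative only, not the study's motion-cueing algorithm): under velocity control the joystick sets velocity directly, whereas under acceleration control the command must be integrated twice to obtain position — and only the latter produces the sustained accelerations that the vestibular organs sense.

```python
def integrate_path(joystick, dt=0.1, control="velocity"):
    """Integrate a 1-D joystick command stream into a travelled distance.

    control="velocity":     joystick sets velocity directly.
    control="acceleration": joystick sets acceleration, which is
                            integrated into velocity first.
    Illustrative sketch of the two latent control dynamics, not the
    paper's motion-cueing algorithm.
    """
    pos, vel = 0.0, 0.0
    for u in joystick:
        if control == "velocity":
            vel = u
        else:
            vel += u * dt
        pos += vel * dt
    return pos

# A constant unit command held for 1 s: velocity control covers 1 m,
# acceleration control roughly half that (forward-Euler approximation
# of 0.5*a*t^2).
cmd = [1.0] * 10
```

A participant holding an internal model for the wrong dynamics would misestimate the distance travelled from the same command stream, which is the kind of bias the study characterizes.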
Olivo-cerebellar loops, where anatomical patches of the cerebellar cortex and inferior olive project one onto the other, form an anatomical unit of cerebellar computation. Here, we investigated how successive computational steps map onto olivo-cerebellar loops. Lobules IX-X of the cerebellar vermis, i.e. the nodulus and uvula, implement an internal model of the inner ear’s graviceptor, the otolith organs. We have previously identified two populations of Purkinje cells that participate in this computation: Tilt-selective cells transform egocentric rotation signals into allocentric tilt velocity signals, to track head motion relative to gravity, and translation-selective cells encode otolith prediction error. Here we show that, despite very distinct simple spike response properties, both types of Purkinje cells emit complex spikes that are proportional to sensory prediction error. This indicates that both cell populations comprise a single olivo-cerebellar loop, in which only translation-selective cells project to the inferior olive. We propose a neural network model where sensory prediction errors computed by translation-selective cells are used as a teaching signal for both populations, and demonstrate that this network can learn to implement an internal model of the otoliths.
Neuroscience studies in non-human primates (NHP) often follow the rule of thumb that results observed in one animal must be replicated in at least one other. However, we lack a statistical justification for this rule of thumb, or an analysis of whether including three or more animals is better than including two. Yet, a formal statistical framework for experiments with few subjects would be crucial for experimental design, ethical justification, and data analysis. Also, including three or four animals in a study creates the possibility that the results observed in one animal will differ from those observed in the others: we need a statistically justified rule to resolve such situations. Here, I present a statistical framework to address these issues. This framework assumes that conducting an experiment will produce a similar result for a large proportion of the population (termed ‘representative’), but will produce spurious results for a substantial proportion of animals (termed ‘outliers’); the fractions of ‘representative’ and ‘outlier’ animals being defined by a prior distribution. I propose a procedure in which experimenters collect results from M animals and accept results that are observed in at least N of them (‘N-out-of-M’ procedure). I show how to compute the risks α (of reaching an incorrect conclusion) and β (of failing to reach a conclusion) for any prior distribution, and as a function of N and M. Strikingly, I find that the N-out-of-M model leads to a similar conclusion across a wide range of prior distributions: recording from two animals lowers the risk α and therefore ensures a reliable result, but leaves a large risk β; and recording from three animals and accepting results observed in two of them strikes an efficient balance between acceptable risks α and β.
This framework gives a formal justification for the rule of thumb of using at least two animals in NHP studies, suggests that recording from three animals when possible markedly improves statistical power, provides a statistical solution for situations where results are not consistent between all animals, and may apply to other types of studies involving few animals.
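Under the simplest version of this framework — each animal independently yields the representative result with probability p — the probability that an N-out-of-M procedure accepts is a binomial tail, which is the building block for the α and β computations. A minimal sketch (the paper additionally integrates this over a prior distribution on p):

```python
from math import comb

def accept_prob(p: float, n: int, m: int) -> float:
    """P(result is observed in at least n of m animals), assuming each
    animal independently shows the representative result with prob. p.

    Building block for the alpha/beta risk computations; the full
    framework averages this quantity over a prior distribution on p.
    """
    return sum(comb(m, k) * p**k * (1 - p) ** (m - k)
               for k in range(n, m + 1))

# With p = 0.8, a 2-out-of-2 rule accepts with probability 0.64,
# while a 2-out-of-3 rule accepts with probability 0.896 -- adding a
# third animal substantially lowers the risk of failing to conclude.
```

This illustrates why 2-out-of-3 balances the two risks better than 2-out-of-2: the extra animal recovers cases where one recording is an outlier.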
The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can record brain activity from a significantly large portion of such an audience simultaneously. Although the technology for such a system has been readily available for a long time, the high cost as well as the large overhead in human resources and logistic planning have prohibited its development. However, in recent years, reductions in technology cost and size have led to the emergence of low-cost, consumer-oriented EEG systems, developed primarily for recreational use. Here, by combining such a low-cost EEG system with other off-the-shelf hardware and tailor-made software, we developed such a scalable EEG hyper-scanning system in the lab and tested it in a cinema. The system has a robust and stable performance and achieves accurate, unambiguous alignment of the recorded data across the different EEG headsets. These characteristics, combined with short preparation time and low cost, make it an ideal candidate for recording large portions of audiences.
Research on psychopathy has so far been largely limited to the investigation of high-level processes, such as emotion perception and regulation. In the present work, we investigate whether psychopathy has an effect on the estimation of fundamental physical parameters, which are computed in the brain during early stages of sensory processing. We employed a simple task in which participants had to estimate their interpersonal distance from a moving avatar and stop it at a given distance. The facial expressions of the avatars were positive, negative, or neutral. Participants carried out the task online on their home computers. We measured the psychopathy level via a self-report questionnaire. Regardless of the degree of psychopathy, the facial expression of the avatars showed no effect on distance estimation. Our results show that individuals with a high degree of psychopathy underestimate the distance of approaching avatars significantly less (let the avatar approach them significantly closer) than did participants with a lesser degree of psychopathy. Moreover, participants who scored high in Self-Centered Impulsivity underestimated the distance to approaching avatars significantly less (let the avatar approach closer) than participants with a low score. Distance estimation is considered an automatic process performed at early stages of visual processing. Therefore, our results imply that psychopathy affects basic early sensory processes, such as feature extraction, in the visual cortex.
Moving in synchrony to external rhythmic stimuli is an elementary function that humans regularly engage in. It is termed “sensorimotor synchronization” and is governed by two main parameters, the period and the phase of the movement with respect to the external rhythm. There has been an extensive body of research on the characteristics of these parameters, primarily once the movement synchronization has reached a steady-state level. Particular interest has been shown in how these parameters are corrected when there are deviations from the steady-state level. However, little is known about the initial “tuning-in” interval, when one aligns the movement to the external rhythm from rest. The current work investigates this “tuning-in” period for each of the four limbs and makes several novel contributions to the understanding of sensorimotor synchronization. The results suggest that phase and period alignment are separate processes. Phase alignment involves limb-specific somatosensory memory on the order of minutes, while period alignment has very limited memory usage. Phase alignment is the primary initial task, but the brain then switches to period alignment, on which it spends most of its resources. Overall, this work suggests a central, cognitive role for period alignment and a peripheral, sensorimotor role for phase alignment.
Temporal anticipation is a fundamental process underlying complex neural functions such as associative learning, decision-making, and motor preparation. Here we study event anticipation in its simplest form in human participants using magnetoencephalography. We distributed events in time according to different probability density functions and presented the stimuli separately in two different sensory modalities. We found that the temporal dynamics in right parietal cortex correlate with reaction times to anticipated events. Specifically, after an event occurred, event probability was represented in right parietal activity, hinting at a functional role of the event-related potential component P300 in temporal expectancy. The results are consistent across both visual and auditory modalities. The right parietal cortex seems to play a central role in the processing of event probability density. Overall, this work contributes to the understanding of the neural processes involved in the anticipation of events in time.
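Temporal anticipation of this kind is commonly formalized via the hazard rate — the probability that the event occurs now, given that it has not occurred yet — computed from the experimenter-chosen probability density. A small sketch under that standard formalization (illustrative; not necessarily the authors' exact analysis):

```python
def hazard(pdf):
    """Discrete hazard rate h[i] = pdf[i] / P(event has not yet occurred).

    pdf: list of event probabilities per time bin (summing to <= 1).
    Standard formalization of temporal expectancy; illustrative only.
    """
    h, survivor = [], 1.0
    for p in pdf:
        h.append(p / survivor if survivor > 1e-12 else 0.0)
        survivor -= p
    return h

# For a uniform density over 4 bins the hazard rises over time:
# 0.25, 0.333..., 0.5, 1.0 -- later bins are increasingly "due",
# which is why reaction times typically shorten with elapsed time.
```

Different probability density functions thus induce different hazard profiles, which is what makes them useful for dissociating anticipation-related neural dynamics.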
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to image large numbers of dendritic spines simultaneously in 3D volumes of neural tissue. Exploiting such datasets requires tools for the fully automated detection and analysis of large numbers of spines; however, no automated method currently exists that comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized for the spine detection task. This transfer learning approach is data efficient while achieving high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate, and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets and achieves high detection performance even without retraining or adjustment of model parameters.
With the emergence of immunotherapies, the understanding of functional HLA class I antigen presentation to T cells is more relevant than ever. Current knowledge on antigen presentation is based on decades of research in a wide variety of cell types with varying antigen presentation machinery (APM) expression patterns, proteomes and HLA haplotypes. This diversity complicates the establishment of individual APM contributions to antigen generation, selection and presentation. Therefore, we generated a novel Panel of APM Knockout Cell lines (PAKC) from the same genetic origin. After CRISPR/Cas9 genome-editing of ten individual APM components in a human cell line, we derived clonal cell lines and confirmed their knockout status and phenotype. We then show how PAKC will accelerate research on the functional interplay between APM components and their role in antigen generation and presentation. This will lead to improved understanding of peptide-specific T cell responses in infection, cancer and autoimmunity.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
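The alternation dynamics described in this abstract can be caricatured with a reduced firing-rate model: two populations inhibit each other, and slow adaptation of the dominant population eventually lets the suppressed one take over. The study itself uses a spiking network with inhibitory plasticity; the sketch below is a simplified rate model with illustrative parameters, shown only to make the rivalry mechanism concrete.

```python
def simulate_rivalry(steps=8000, dt=0.5, tau=10.0, tau_a=300.0,
                     beta=2.0, g=1.2, drive=(1.0, 1.0)):
    """Reduced firing-rate model of binocular rivalry with adaptation.

    Two populations inhibit each other (weight beta); slow adaptation
    (gain g, time constant tau_a, in ms) weakens the dominant eye until
    the suppressed eye takes over, producing alternations. A caricature
    of the paper's spiking network with illustrative parameters.
    Returns the number of dominance switches.
    """
    relu = lambda x: x if x > 0.0 else 0.0
    r = [0.6, 0.4]   # firing rates of the two eye-specific populations
    a = [0.0, 0.0]   # slow adaptation variables
    switches, dominant = 0, 0
    for _ in range(steps):
        inp = [relu(drive[i] - beta * r[1 - i] - a[i]) for i in (0, 1)]
        r = [r[i] + dt / tau * (-r[i] + inp[i]) for i in (0, 1)]
        a = [a[i] + dt / tau_a * (-a[i] + g * r[i]) for i in (0, 1)]
        now = 0 if r[0] > r[1] else 1
        if now != dominant:
            switches += 1
            dominant = now
    return switches
```

Lowering the drive to one population — a crude analogue of monocular deprivation, or of the inhibitory rebalancing the model describes — biases the relative dominance durations in favour of the other eye.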
Motivation DNA CpG methylation (CpGm) has proven to be a crucial epigenetic factor in the gene regulatory system. Assessment of DNA CpG methylation values via whole-genome bisulfite sequencing (WGBS) is, however, computationally extremely demanding.
Results We present FAst MEthylation calling (FAME), the first approach to quantify CpGm values directly from bulk or single-cell WGBS reads without intermediate output files. FAME is very fast but as accurate as standard methods, which first produce BS alignment files before computing CpGm values. We present experiments on bulk and single-cell bisulfite datasets showing that data analysis can be significantly sped up, helping to address the current WGBS analysis bottleneck for large-scale datasets without compromising accuracy.
Availability An implementation of FAME is open source and licensed under GPL-3.0 at https://github.com/FischerJo/FAME.
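The CpGm value that FAME quantifies is, at its core, the per-site methylation level: the fraction of bisulfite reads reporting an unconverted cytosine at a CpG position. A minimal sketch of that per-site computation (illustrating the quantity only — FAME itself computes it directly from WGBS reads without alignment files):

```python
def cpg_methylation_level(calls):
    """Methylation level of one CpG site from bisulfite read base calls.

    In bisulfite sequencing, unmethylated cytosine is converted to
    thymine, so a 'C' call supports methylation and a 'T' call supports
    no methylation. Returns methylated / (methylated + unmethylated),
    or None if the site is uncovered. Illustrative of the CpGm quantity,
    not of FAME's implementation.
    """
    meth = sum(1 for c in calls if c == "C")
    unmeth = sum(1 for c in calls if c == "T")
    total = meth + unmeth
    return meth / total if total else None

# Three reads calling 'C' and one calling 'T' at a CpG site
# -> CpGm value of 0.75.
```

The computational burden FAME addresses comes from producing these calls for tens of millions of CpG sites across billions of reads, not from the per-site arithmetic itself.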
Multiplex families with a high prevalence of a psychiatric disorder are often examined to identify rare genetic variants with large effect sizes. In the present study, we analysed whether the risk for bipolar disorder (BD) in BD multiplex families is influenced by common genetic variants. Furthermore, we investigated whether this risk is conferred mainly by BD-specific risk variants or by variants also associated with the susceptibility to schizophrenia or major depression. In total, 395 individuals from 33 Andalusian BD multiplex families as well as 438 subjects from an independent, sporadic BD case-control cohort were analysed. Polygenic risk scores (PRS) for BD, schizophrenia, and major depression were calculated and compared between the cohorts. Both the familial BD cases and unaffected family members had significantly higher PRS for all three psychiatric disorders than the independent controls, suggesting a high baseline risk for several psychiatric disorders in the families. Moreover, familial BD cases showed significantly higher BD PRS than unaffected family members and sporadic BD cases. A plausible hypothesis is that, in multiplex families with a general increase in risk for psychiatric disease, BD development is attributable to a high burden of common variants that confer a specific risk for BD. The present analyses, therefore, demonstrated that common genetic risk variants for psychiatric disorders are likely to contribute to the high incidence of affective psychiatric disorders in the multiplex families. The PRS explained only part of the observed phenotypic variance and rare variants might have also contributed to disease development.
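A polygenic risk score of the kind compared above is, computationally, a weighted allele count: for each risk variant, the individual's allele dosage (0, 1, or 2 copies of the risk allele) multiplied by the variant's effect size from a genome-wide association study, summed over variants. A minimal sketch of that core computation (illustrative; real PRS pipelines additionally perform LD clumping, p-value thresholding, and standardization):

```python
def polygenic_risk_score(dosages, effect_sizes):
    """PRS = sum over variants of (risk-allele dosage x GWAS effect size).

    dosages: risk-allele counts in {0, 1, 2} per variant for one person.
    Illustrative core computation only; production pipelines also handle
    LD clumping, p-value thresholds and score standardization.
    """
    if len(dosages) != len(effect_sizes):
        raise ValueError("one effect size per variant required")
    return sum(d * b for d, b in zip(dosages, effect_sizes))

# An individual carrying 2, 0 and 1 risk alleles at three variants with
# effect sizes 0.10, 0.05 and 0.20:
# score = 2*0.10 + 0*0.05 + 1*0.20 = 0.40
```

Comparing the distributions of such scores between familial cases, unaffected relatives, and sporadic cases is what underlies the group contrasts reported in the abstract.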
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world which measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models which adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across AVLTs (auditory verbal learning tests) agreed with held-out data of dually-administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
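The harmonization step can be pictured with a deliberately simplified location/scale sketch. This is a stand-in illustration only, not the empirical Bayes procedure (ComBat-style) used in the study, which additionally shrinks per-site estimates toward a common prior and protects covariate effects:

```python
# Simplified site-effect removal: rescale each site's scores to the pooled
# mean and standard deviation. A location/scale stand-in for empirical
# Bayes harmonization; the real method also shrinks per-site estimates
# toward a common prior and preserves covariate effects.
from statistics import mean, stdev

def harmonize(scores_by_site: dict) -> dict:
    """scores_by_site: site id -> list of raw test scores."""
    pooled = [s for scores in scores_by_site.values() for s in scores]
    pooled_mean, pooled_sd = mean(pooled), stdev(pooled)
    harmonized = {}
    for site, scores in scores_by_site.items():
        site_mean, site_sd = mean(scores), stdev(scores)
        harmonized[site] = [(s - site_mean) / site_sd * pooled_sd + pooled_mean
                            for s in scores]
    return harmonized
```

After this rescaling every site shares the pooled location and scale, which is the intuition behind "removing site effects" before fitting the IRT model.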
Mapping cortical brain asymmetry in 17,141 healthy individuals worldwide via the ENIGMA Consortium
(2017)
Models of perceptual decision making have historically been designed to maximally explain behaviour and brain activity independently of their ability to actually perform tasks. More recently, performance-optimized models have been shown to correlate with brain responses to images and thus present a complementary approach to understanding perceptual processes. In the present study, we compare how these two approaches account for the spatio-temporal organization of neural responses elicited by ambiguous visual stimuli. Forty-six healthy human subjects performed perceptual decisions on briefly flashed stimuli constructed from ambiguous characters. The stimuli were designed to have 7 orthogonal properties, ranging from low sensory levels (e.g. spatial location of the stimulus) to conceptual (whether the stimulus is a letter or a digit) and task levels (i.e. the required hand movement). Magnetoencephalography source and decoding analyses revealed that these 7 levels of representation are sequentially encoded by the cortical hierarchy and actively maintained until the subject responds. This hierarchy appeared poorly correlated with normative, drift-diffusion, and 5-layer convolutional neural networks (CNNs) optimized to accurately categorize alphanumeric characters, but partially matched the sequence of activations of 3 out of 6 state-of-the-art CNNs trained for natural image labeling (VGG-16, VGG-19, MobileNet). Additionally, we identify several systematic discrepancies between these CNNs and brain activity, revealing the importance of single-trial learning and recurrent processing. Overall, our results strengthen the notion that performance-optimized algorithms can converge towards the computational solution implemented by the human visual system, and open possible avenues to improve artificial perceptual decision making.
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "accidental" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and empirically establish accidental and canonical viewpoints. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in colour (1a) and in grayscale (1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on the performance in Experiments 1a and 1b, we determined canonical (0°-rotation) and non-canonical (120°-rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task; however, now the objects were paired with scene backgrounds which could be either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency in that object recognition was less affected by viewpoint when consistent scene information was provided, compared to inconsistent information. Our results show that viewpoint dependence and scene context effects generalize to depth-rotated 3D objects. This supports the important role object-scene processing plays for object constancy.
Bipolar disorder (BD) is a genetically complex mental illness characterized by severe oscillations of mood and behavior. Genome-wide association studies (GWAS) have identified several risk loci that together account for a small portion of the heritability. To identify additional risk loci, we performed a two-stage meta-analysis of >9 million genetic variants in 9,784 bipolar disorder patients and 30,471 controls, the largest GWAS of BD to date. To increase power, we used ~2,000 lithium-treated cases with a long-term diagnosis of BD from the Consortium on Lithium Genetics, excess controls, and analytic methods optimized for markers on the X chromosome. In addition to four known loci, results revealed genome-wide significant associations at two novel loci: an intergenic region on 9p21.3 (rs12553324, p = 5.87×10^-9; odds ratio = 1.12) and markers within ERBB2 (rs2517959, p = 4.53×10^-9; odds ratio = 1.13). No significant X-chromosome associations were detected, and X-linked markers explained very little BD heritability. The results add to a growing list of common autosomal variants involved in BD and illustrate the power of comparing well-characterized cases to an excess of controls in GWAS.
i-TED is an innovative detection system which exploits Compton imaging techniques to achieve a superior signal-to-background ratio in (n,γ) cross-section measurements using the time-of-flight technique. This work presents the first experimental validation of the i-TED apparatus for high-resolution time-of-flight experiments and demonstrates for the first time the concept proposed for background rejection. To this aim, both 197Au(n,γ) and 56Fe(n,γ) reactions were measured at CERN n_TOF using an i-TED demonstrator based on only three position-sensitive detectors. Two C6D6 detectors were also used to benchmark the performance of i-TED. The i-TED prototype built for this study shows a factor of ∼3 higher detection sensitivity than state-of-the-art C6D6 detectors in the ∼10 keV neutron energy range of astrophysical interest. This paper also explores the prospects of further performance enhancement attainable with the final i-TED array, consisting of twenty position-sensitive detectors, and with new analysis methodologies based on machine-learning techniques.
In this work, inhomogeneous chiral phases are studied in a variety of Four-Fermion and Yukawa models in 2+1 dimensions at zero and non-zero temperature and chemical potentials. Employing the mean-field approximation, we do not find indications for an inhomogeneous phase in any of the studied models. We show that the homogeneous phases are stable against inhomogeneous perturbations. At zero temperature, full analytic results are presented.
We deal with the reconstruction of inclusions in elastic bodies based on monotonicity methods and establish conditions under which a resolution for a given partition can be achieved. These conditions take into account the background error as well as the measurement noise. Our main result shows that the resolution guarantees depend heavily on the Lamé parameter μ and only marginally on λ.
Effective spectral functions of the ρ meson are reconstructed by considering its lifetimes inside different media using the hadronic transport SMASH (Simulating Many Accelerated Strongly-interacting Hadrons). Due to inelastic scatterings, resonance lifetimes are dynamically shortened (collisional broadening), even though the employed approach assumes vacuum resonance properties. Analyzing the ρ meson lifetimes allows one to quantify an effective broadening of the decay width and spectral function, which is important in order to distinguish dynamical effects from additional genuine medium modifications of the spectral functions, indicating e.g. an onset of chiral symmetry restoration. The broadening of the spectral function in a thermalized system is shown to be consistent with other theoretical calculations. The effective ρ meson spectral function is also presented for the dynamical evolution of heavy-ion collisions, where a clear correlation of the broadening with system size is found and explained by an observed dependence of the width on the local hadron density. Furthermore, the difference in the results between the thermal system and the full collision dynamics is explored, which may point to non-equilibrium effects.
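Schematically, the effective spectral function discussed above can be pictured as a relativistic Breit-Wigner form whose width is broadened by the inverse collisional lifetime. This is a textbook sketch; the symbols below are generic, not the exact parametrization used in SMASH:

```latex
% Schematic relativistic Breit-Wigner spectral function of the rho meson.
% The effective width adds the inverse collisional lifetime to the vacuum
% decay width; generic symbols, not the SMASH parametrization.
A(m) \;=\; \frac{2}{\pi}\,
  \frac{m^{2}\,\Gamma_{\mathrm{eff}}(m)}
       {\bigl(m^{2}-m_{\rho}^{2}\bigr)^{2} + m^{2}\,\Gamma_{\mathrm{eff}}(m)^{2}},
\qquad
\Gamma_{\mathrm{eff}} \;=\; \Gamma_{\mathrm{vac}} + \Gamma_{\mathrm{coll}},
\quad \Gamma_{\mathrm{coll}} \sim \tau_{\mathrm{coll}}^{-1}
```

A shorter in-medium lifetime thus translates directly into a larger effective width and a broader spectral function, which is the correlation with local hadron density reported above.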
The Calderón problem with finitely many unknowns is equivalent to convex semidefinite optimization
(2023)
We consider the inverse boundary value problem of determining a coefficient function in an elliptic partial differential equation from knowledge of the associated Neumann-Dirichlet operator. The unknown coefficient function is assumed to be piecewise constant with respect to a given pixel partition, and upper and lower bounds are assumed to be known a priori.
We show that this Calderón problem with finitely many unknowns can be equivalently formulated as a minimization problem for a linear cost functional with a convex non-linear semidefinite constraint. We also prove error estimates for noisy data, and extend the result to the practically relevant case of finitely many measurements, where the coefficient is to be reconstructed from a finite-dimensional Galerkin projection of the Neumann-Dirichlet operator.
Our result builds on previous work on the Loewner monotonicity and convexity of the Neumann-Dirichlet operator and on the technique of localized potentials. It connects the emerging fields of inverse coefficient problems and semidefinite optimization.
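The Loewner monotonicity underlying this line of work can be stated schematically as follows. This is a standard result in the monotonicity-method literature; sign conventions depend on whether the Neumann-Dirichlet or the Dirichlet-Neumann operator is used:

```latex
% Loewner monotonicity of the Neumann-Dirichlet operator Lambda(sigma):
% a pointwise larger conductivity yields a smaller operator in the sense
% of quadratic forms, for every applied boundary current g.
\sigma_{1} \le \sigma_{2} \ \text{a.e.\ in } \Omega
\quad\Longrightarrow\quad
\int_{\partial\Omega} g\,\bigl(\Lambda(\sigma_{1})-\Lambda(\sigma_{2})\bigr)g \,\mathrm{d}s \;\ge\; 0
```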
The exploration of hot and dense nuclear matter: Introduction to relativistic heavy-ion physics
(2022)
This article summarizes our present knowledge about nuclear matter at the highest energy densities and its formation in relativistic heavy ion collisions. We review what is known about the structure and properties of the quark-gluon plasma and survey the observables that are used to glean information about it from experimental data.
The idea of slow-neutron capture nucleosynthesis formulated in 1957 triggered a tremendous experimental effort in different laboratories worldwide to measure the relevant nuclear physics input quantities, namely (n,γ) cross sections over the stellar temperature range (from a few eV up to several hundred keV) for most of the isotopes involved from Fe up to Bi. A brief historical review focused on total energy detectors is presented to illustrate how advances in instrumentation have led, over the years, to the assessment and discovery of many new aspects of s-process nucleosynthesis and to the progressive refinement of theoretical models of stellar evolution. A summary is presented of current efforts to develop new detection concepts, such as the Total-Energy Detector with γ-ray imaging capability (i-TED). The latter is based on the combination of Compton imaging with neutron time-of-flight (TOF) techniques, in order to achieve a superior level of sensitivity and selectivity in the measurement of stellar neutron capture rates.
The production of prompt Λ+c baryons at midrapidity (|y| < 0.5) was measured in central (0-10%) and mid-central (30-50%) Pb-Pb collisions at the center-of-mass energy per nucleon-nucleon pair √sNN = 5.02 TeV with the ALICE detector. The Λ+c production yield, the Λ+c/D0 production ratio, and the Λ+c nuclear modification factor RAA are reported. The results are more precise and more differential in transverse momentum (pT) and centrality with respect to previous measurements. The Λ+c/D0 ratio, which is enhanced with respect to the pp measurement for 4 < pT < 8 GeV/c, is described by theoretical calculations that model the charm-quark transport in the quark-gluon plasma and include hadronization via both coalescence and fragmentation mechanisms.
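The nuclear modification factor quoted above has the standard definition, with ⟨TAA⟩ the average nuclear overlap function:

```latex
% Nuclear modification factor: Pb-Pb yield relative to the pp cross section
% scaled by the average nuclear overlap function <T_AA>.
R_{\mathrm{AA}}(p_{\mathrm{T}}) \;=\;
  \frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}p_{\mathrm{T}}}
       {\langle T_{\mathrm{AA}}\rangle\,\mathrm{d}\sigma_{pp}/\mathrm{d}p_{\mathrm{T}}}
```

In the absence of nuclear effects RAA = 1, so deviations from unity quantify the medium modification of heavy-flavour production.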
The purpose of this paper is to initiate the development of the theory of Newton–Okounkov bodies of curve classes. Our definition is based on making a fundamental property of Newton–Okounkov bodies hold also in the curve case: the volume of the Newton–Okounkov body of a curve is a volume-type function of the original curve. This construction allows us to conjecture a new relation between Newton–Okounkov bodies; we prove it in certain cases.
The present article proposes a re-reading of what "inclusion" into the sphere of the historical actually means in modern European historical discourse. It argues that this re-reading permits challenging a powerful, but problematic norm of ontological homogeneity as something to be achieved in and by historical discourse. At least some of the more conceptually profound challenges that accounts of "deep history" - of very distant pasts - pose to historical discourse have to do with pursuits of this norm. Historical theory has the potential of responding to some of these challenges and actually reverting them back at the practice of accounting for deep times in historical writing. The argument proceeds, in a first step, by analyzing the ties between modern European mortuary cultures and historical writing. In a second step, the history of humanitarian moralities is brought to bear on the analysis, in order to make visible, thirdly, the fractured presences of deep time in modern-era and contemporary historical writing. The fractures in question emerge, the article argues, from the ontological heterogeneity of historical knowledge. So in the end, a position beyond ontological homogeneity is adumbrated.
Release of neuropeptides from dense core vesicles (DCVs) is essential for neuromodulation. Compared to the release of small neurotransmitters, much less is known about the mechanisms and proteins contributing to neuropeptide release. By optogenetics, behavioral analysis, electrophysiology, electron microscopy, and live imaging, we show that synapsin SNN-1 is required for cAMP-dependent neuropeptide release in Caenorhabditis elegans hermaphrodite cholinergic motor neurons. In synapsin mutants, behaviors induced by the photoactivated adenylyl cyclase bPAC, which we previously showed to depend on acetylcholine and neuropeptides (Steuer Costa et al., 2017), are altered like in animals with reduced cAMP. Synapsin mutants have slight alterations in synaptic vesicle (SV) distribution, however, a defect in SV mobilization was apparent after channelrhodopsin-based photostimulation. DCVs were largely affected in snn-1 mutants: DCVs were ∼30% reduced in synaptic terminals, and not released following bPAC stimulation. Imaging axonal DCV trafficking, also in genome-engineered mutants in the serine-9 protein kinase A phosphorylation site, showed that synapsin captures DCVs at synapses, making them available for release. SNN-1 co-localized with immobile, captured DCVs. In synapsin deletion mutants, DCVs were more mobile and less likely to be caught at release sites, and in non-phosphorylatable SNN-1B(S9A) mutants, DCVs traffic less and accumulate, likely by enhanced SNN-1 dependent tethering. Our work establishes synapsin as a key mediator of neuropeptide release.
Taking shame as its guiding thread, the present contribution seeks to gain access to Agamben's theory of subjectivity, in order to subject the theoretical and historical presuppositions of his ethics to an examination that can at the same time connect with Thomä's critique. The starting point of the following reflections is Agamben's study of the 'homo sacer'. A second step turns to the theory of shame put forward in "Was von Auschwitz bleibt" (Remnants of Auschwitz). The critical discussion of Agamben's ethics opens the engagement with the key witness whom "Was von Auschwitz bleibt" presents: Primo Levi. This engagement is continued and intensified through the way in which Levi's question "Is this a man?" is outdone in Imre Kertész's "Roman eines Schicksallosen" (Fatelessness). Against the background of the central importance of shame in Primo Levi and Imre Kertész, the final part returns to Agamben's ethics in order to revise its foundations by recourse to Aristotle.
If projection and transference represent similar terms that imply a fundamental form of ignorance, the aim of this investigation cannot be to draw a sharp distinction between projection and transference. Of course, the dialectic of inside and outside does not play the central role in transference that it does in projection. In a certain way, the notion of projection concerns all forms of perception and seems to be wider than the notion of transference. On the other hand, the notion of transference as a poetic act of creating metaphorical analogies seems to be wider than that of projection. My interest in the following lines lies not in attempting to draw a firm distinction between the two terms, but in looking at their interplay in a novel that discusses all forms of archaism, primitivism and regression commonly linked with projection, a novel that at the same time tries to explain the foundation of modern art. Thomas Mann's "Doktor Faustus" offers an insight not only into the combination of projection and love, but also into ignorance as the common ground of projection and transference. I will therefore first try to determine the modernity of Thomas Mann's novel with regard to the abounding intertextual dimension that characterizes the text, and then closely examine the central scene of the novel, the confrontation between Adrian Leverkühn and the obscure figure of the devil.
As Rolf Parr makes clear in his essay 'Liminale und andere Übergänge. Theoretische Modellierung von Grenzzonen, Normalitätsaspekten, Schwellen, Übergängen und Zwischenräumen in Literatur- und Kulturwissenschaft', the theory of intertextuality and intermediality that he advocates, following the work of Michel Foucault and Jürgen Link, is essentially shaped by a moment of border-crossing. Clearly contoured borders are replaced by thresholds as "spatial-topographical zones of undecidedness" that simultaneously function as temporal thresholds of memory. Drawing on Foucault, Parr directs his attention not only to discursive limits of the sayable imposed by mechanisms of exclusion, prohibitions, etc. He also points to Foucault's early concept of heterotopia, in which Foucault relates acts of boundary-drawing to particular spatial structures. Parr's own interest here lies in translating Foucault's discourse-theoretical work into a theory of interdiscourse, which would have to cross precisely the thresholds of individual discourses. I would like to set a different accent and work out the significance of threshold experiences in Foucault himself. I first concentrate on the concept of the historical a priori from 'Die Ordnung der Dinge' (The Order of Things), and then turn to the concept of heterotopia, which in a certain way accompanies and complements the genesis of 'Die Ordnung der Dinge' in the 1960s. Comparing Foucault's thinking of thresholds with that of Walter Benjamin should at the same time make it possible to identify the theme of the liminal, in Parr's sense, as a fundamental motif of Foucault's thought.
The question of what literature is seems to be not only the most fundamental one facing literary studies; it is at the same time its most abyssal. It is fundamental because it asks about the essence of literature and thereby invokes something seemingly self-evident that accompanies every engagement with literature. It is abyssal because even the apparently most self-evident definitions of literature have so far not led to a unified conception of literature's essence. Thus literary studies faces, already with the first question posed to it, a seemingly irresolvable dilemma. Asked about the object that belongs to it, and which accordingly could attest to its legitimacy as a discipline, it remains in the dark.
Is literature, understood as a deviation from or a fulfilment of the expressive function of language, a form of discourse with access to the realm of truth, or does it rather obstruct any systematic access to truth? And what is gained at all by relating literature and truth to each other? To have given these questions a new urgency, one reaching beyond the opposition between analytic philosophy and deconstruction, is the achievement of Stanley Cavell's work. In what follows, the question of truth in literature is posed once more through an engagement with Cavell's writings, in order to determine both the reach and the limits of philosophical discourse about literature. [...] What is fundamentally at issue for Cavell is, on the one hand, the knowledge philosophy can have of the world and, on the other, the knowledge philosophy and literature can gain from each other in their shared and yet different confrontations with scepticism. To the extent that he searches for ways of overcoming scepticism, Cavell first acknowledges specific forms of not-knowing, which he addresses in the context of philosophical and literary texts alike. A special place here is occupied by the repeated recourse to Shakespeare, which culminates in "Der Anspruch der Vernunft" (The Claim of Reason) in a reading of "Othello" that, by analyzing the tragedy as an expression of and answer to scepticism, allows the problem of knowledge and not-knowing to be grasped. It is therefore fitting to subject Cavell's reflections on the connection between tragedy and scepticism to a critical reading that, within the framework of his own inquiry, asks once again about the fundamental relationship between literature and the philosophical pursuit of truth.
At the centre of the text, it seems, stands the mournful working-through of an event long past, and with it remembrance and farewell as fundamental motifs of Droste-Hülshoff's work, as they also find expression in other texts such as "Meine Toten" or the Byron poem "Lebt Wohl". In "Die Taxuswand", Droste-Hülshoff traverses a long span of time, eighteen years, lying between the encounter and its poetic treatment. The question that arises in this connection concerns the fundamental relationship between poetic acts of remembrance and biographical experience in the work of Annette von Droste-Hülshoff. That the two, in a manner similar to Baudelaire, do not simply coincide but diverge is the conjecture pursued in what follows.
We tested 6–7-year-olds, 18–22-year-olds, and 67–74-year-olds on an associative memory task that consisted of knowledge-congruent and knowledge-incongruent object–scene pairs that were highly familiar to all age groups. We compared the three age groups on their memory congruency effect (i.e., better memory for knowledge-congruent associations) and on a schema bias score, which measures the participants' tendency to commit knowledge-congruent memory errors. We found that prior knowledge similarly benefited memory for items encoded in a congruent context in all age groups. However, for associative memory, older adults and, to a lesser extent, children overrelied on their prior knowledge, as indicated by both an enhanced congruency effect and schema bias. Functional Magnetic Resonance Imaging (fMRI) performed during memory encoding revealed an age-independent memory × congruency interaction in the ventromedial prefrontal cortex (vmPFC). Furthermore, the magnitude of vmPFC recruitment correlated positively with the schema bias. These findings suggest that older adults are most prone to rely on their prior knowledge for episodic memory decisions, but that children can also rely heavily on prior knowledge that they are well acquainted with. Furthermore, the fMRI results suggest that the vmPFC plays a key role in the assimilation of new information into existing knowledge structures across the entire lifespan. vmPFC recruitment leads to better memory for knowledge-congruent information but also to a heightened susceptibility to commit knowledge-congruent memory errors, in particular in children and older adults.
During the first two days of August 2016 a seismic crisis occurred on Brava, Cape Verde, which, according to observations based on a local seismic network, was characterized by more than a thousand volcano-seismic signals. Brava is considered an active volcanic island, although it has not experienced any historic eruptions. Seismicity significantly exceeded the usual level during the crisis. We report results based on data from a temporary seismic-array deployment on the neighbouring island of Fogo, at a distance of about 35 km. The array was in operation from October 2015 to December 2016 and recorded a total of 1343 earthquakes, 355 of which were localized. On 1 and 2 August we observed 54 earthquakes, 25 of which could be located beneath Brava. We further evaluate the observations with regard to possible precursors to the crisis and its continuation. Our analysis shows a migration of seismicity around Brava, but no distinct precursory pattern. However, the observations suggest that similar earthquake swarms commonly occur close to Brava. The results further confirm the advantages of seismic arrays as tools for the remote monitoring of regions with limited station coverage or access.
In the last decades, energy modelling has supported energy planning by offering insights into the dynamics between energy access, resource use, and sustainable development. Especially in recent years, there has been an attempt to strengthen the science-policy interface and increase the involvement of society in energy planning processes. This has, both in the EU and worldwide, led to the development of open-source and transparent energy modelling practices. This paper describes the role of an open-source energy modelling tool in the energy planning process and highlights its importance for society. Specifically, it describes the existence and characteristics of the relationship between developing an open-source, freely available tool and its application, dissemination and use for policy making. Using the example of the Open Source energy Modelling System (OSeMOSYS), this work focuses on practices that were established within the community and that made the framework's development and application both relevant and scientifically grounded.
Keywords: Energy system modelling tool, Open-source software, Model-based public policy, Software development practice, Outreach practice
Introduction: In the development of bio-enabling formulations, innovative in vivo predictive tools to understand and predict the in vivo performance of such formulations are needed. Etravirine, a non-nucleoside reverse transcriptase inhibitor, is currently marketed as an amorphous solid dispersion (Intelence® tablets). The aims of this study were 1) to investigate and discuss the advantages of using biorelevant in vitro setups in simulating the in vivo performance of Intelence® 100 mg and 200 mg tablets, in the fed state, 2) to build a Physiologically Based Pharmacokinetic (PBPK) model by combining experimental data and literature information with the commercially available in silico software Simcyp® Simulator V17.1 (Certara UK Ltd.), and 3) to discuss the challenges when predicting the in vivo performance of an amorphous solid dispersion and identify the parameters which influence the pharmacokinetics of etravirine most.
Methods: Solubility, dissolution and transfer experiments were performed in various biorelevant media simulating the fasted and fed state environment in the gastrointestinal tract. An in silico PBPK model for healthy volunteers was developed in the Simcyp® Simulator, using in vitro results and data available from the literature as input. The impact of pre- and post-absorptive parameters on the pharmacokinetics of etravirine was investigated using simulations of various scenarios.
Results: In vitro experiments indicated a large effect of naturally occurring solubilizing agents on the solubility of etravirine. Interestingly, supersaturated concentrations of etravirine were observed over the entire duration of dissolution experiments on Intelence® tablets. Coupling the in vitro results with the PBPK model provided the opportunity to investigate two possible absorption scenarios, i.e. with or without implementation of precipitation. The results from the simulations suggested that a scenario in which etravirine does not precipitate is more representative of the in vivo data. On the post-absorptive side, it appears that the concentration dependency of the unbound fraction of etravirine in plasma has a significant effect on etravirine pharmacokinetics.
Conclusions: The present study underlines the importance of combining in vitro and in silico biopharmaceutical tools to advance our knowledge in the field of bio-enabling formulations. Future studies on other bio-enabling formulations can be used to further explore this approach to support rational formulation design as well as robust prediction of clinical outcomes.