Frankfurt Institute for Advanced Studies (FIAS)
We study the phase diagram of a generalized chiral SU(3)-flavor model in mean-field approximation. In particular, we investigate the influence of the baryon resonances, and of their couplings to the scalar and vector fields, on the characteristics of the chiral phase transition as a function of temperature and baryon chemical potential. Present and future finite-density lattice calculations might constrain the couplings of the fields to the baryons. The results are compared to recent lattice QCD calculations, and it is shown that reproducing them while simultaneously obtaining stable cold nuclear matter is non-trivial.
The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) approach. The commonly adopted non-interacting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, yet without taking any interactions into account. In contrast, the chiral SU(3) model predicts temperature- and density-dependent effective hadron masses and effective chemical potentials in the medium, and a transition to a chirally restored phase at high temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase-transition behaviour, are investigated. We show that if a chiral phase transition occurred in those collisions, freezing of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those obtained in simple non-interacting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free-gas approximation, and the in-medium masses turn out to differ by up to 150 MeV from their vacuum values.
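For orientation, the "non-interacting gas" baseline that this abstract contrasts with can be sketched in a few lines. In the Boltzmann approximation the density of a hadron species is n = g/(2π²) m² T K₂(m/T) e^{μ/T} (natural units). The snippet below computes an illustrative π⁺/p ratio with vacuum masses and hand-picked values of T and μ_B, using a hand-rolled modified Bessel function K₂; it is a toy version of the free-gas calculation, not the chiral SU(3) model of the paper, whose point is precisely that in-medium masses and effective chemical potentials modify such ratios.

```python
import math

def bessel_k2(x, tmax=20.0, n=4000):
    # Modified Bessel function of the second kind, K_2(x), via the
    # integral representation K_2(x) = ∫_0^∞ cosh(2t) exp(-x cosh t) dt,
    # evaluated with a simple trapezoid rule (fine for x ~ 0.1..10).
    h = tmax / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.cosh(2 * t) * math.exp(-x * math.cosh(t))
    return total * h

def boltzmann_density(mass, g, mu, T):
    # Number density of one species in a non-interacting Boltzmann gas
    # (natural units, hbar = c = k_B = 1):  n = g/(2π²) m² T K_2(m/T) e^{μ/T}
    return g / (2 * math.pi ** 2) * mass ** 2 * T * bessel_k2(mass / T) * math.exp(mu / T)

# Illustrative freeze-out parameters (GeV); vacuum masses throughout.
T, mu_B = 0.160, 0.025
n_pi = boltzmann_density(0.138, 1, 0.0, T)    # pi+: spin 0 (g=1), zero baryon number
n_p  = boltzmann_density(0.938, 2, mu_B, T)   # proton: spin 1/2 (g=2), mu = mu_B
ratio = n_pi / n_p
```

Swapping in temperature- and density-dependent effective masses and chemical potentials in place of the vacuum values would mimic, very crudely, the in-medium shifts that the chiral model predicts.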
Compelling evidence for the creation of a new form of matter has been claimed in Pb+Pb collisions at the SPS. We discuss the uniqueness of frequently proposed experimental signatures of quark-matter formation in relativistic heavy-ion collisions. It is demonstrated that, so far, none of the proposed signals, such as J/ψ production/suppression, strangeness enhancement, dileptons, and directed flow, unambiguously shows that a phase of deconfined matter has been formed in SPS Pb+Pb collisions. We emphasize the need for systematic future measurements that search for simultaneous irregularities in the excitation functions of several observables, in order to come closer to pinning down the properties of hot, dense QCD matter from data.
The advent of improved experimental and theoretical techniques has drawn considerable attention to the electric dipole (E1) response of atomic nuclei over the last decade. These extensive studies have led to the observation, in many nuclei, of a concentration of E1 strength energetically below the Giant Dipole Resonance, a phenomenon commonly denoted the Pygmy Dipole Resonance (PDR). This contribution summarizes the most important results obtained with different experimental probes, defines the challenges in gaining a deeper understanding of these excitations, and discusses the newest experimental developments.
Elliptic flow analysis at RHIC with the Lee-Yang Zeroes method in a relativistic transport approach
(2006)
The Lee-Yang zeroes method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt(s_NN) = 200 GeV with the UrQMD model. In this transport approach, the true event plane is known, and both non-flow effects and event-by-event v_2 fluctuations exist. Although the low resolution prohibits the application of the method to the most central and most peripheral collisions, the integral and differential elliptic flow from the Lee-Yang zeroes method agree very well with the exact v_2 values for semi-central collisions.
The cumulant method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt(s_NN) = 200 GeV with the UrQMD model. In this approach, the true event plane is known, and both non-flow effects and event-by-event spatial (epsilon) and v_2 fluctuations exist. Qualitatively, the hierarchy of v_2 values from two-, four- and six-particle cumulants is consistent with the STAR data; however, the magnitude of v_2 in the UrQMD model is only 60% of the data. We find that the four- and six-particle cumulants are good measures of the real elliptic flow over a wide range of centralities, except for the most central and very peripheral events, where the cumulant method is affected by the v_2 fluctuations. In mid-central collisions, the four- and six-particle cumulants are shown to give a good estimate of the true differential v_2, especially at large transverse momentum, where the two-particle cumulant method is heavily affected by non-flow effects.
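As a minimal illustration of the two-particle cumulant mentioned above, the following sketch estimates v_2{2} from per-event second-harmonic Q-vectors. The toy event generator, the multiplicities, and the built-in v_2 = 0.05 are assumptions for the demonstration, not the UrQMD setup used in the study.

```python
import cmath
import math
import random

def v2_two_particle_cumulant(events):
    """Estimate v2{2} = sqrt(<<2>>) from second-harmonic Q-vectors.

    events: list of per-event lists of azimuthal angles (radians).
    Self-correlations are removed via <2> = (|Q2|^2 - M) / (M (M - 1)).
    """
    num = den = 0.0
    for phis in events:
        m = len(phis)
        if m < 2:
            continue
        q2 = sum(cmath.exp(2j * phi) for phi in phis)
        num += abs(q2) ** 2 - m          # sum over genuine pairs only
        den += m * (m - 1)
    return math.sqrt(max(num / den, 0.0))

def sample_event(n, v2=0.05):
    """Toy event: accept-reject sampling from dN/dphi ~ 1 + 2 v2 cos(2 phi)."""
    phis = []
    while len(phis) < n:
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.random() < (1.0 + 2.0 * v2 * math.cos(2.0 * phi)) / (1.0 + 2.0 * v2):
            phis.append(phi)
    return phis

random.seed(1)
events = [sample_event(300) for _ in range(400)]
print(round(v2_two_particle_cumulant(events), 3))
```

On events with a built-in v_2 of 0.05 and no non-flow correlations, the estimator recovers a value close to 0.05; non-flow and fluctuations, as discussed in the abstract, would bias this two-particle estimate.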
We propose to measure correlations of heavy-flavor hadrons to address the status of thermalization of light quarks and gluons at the partonic stage in high-energy nuclear collisions, using azimuthal correlations of D-Dbar pairs as an example. We show that hadronic interactions at the late stage cannot disturb these correlations significantly. Thus, a decrease or the complete absence of these initial correlations indicates frequent interactions of heavy-flavor quarks in the partonic stage. Therefore, early thermalization of light quarks is likely to have been reached. PACS numbers: 25.75.-q
Poster presentation: Introduction We study the problem of object recognition invariant to transformations such as translation, rotation and scale. A system is underdetermined if its degrees of freedom (the number of possible transformations and potential objects) exceed the available information (the image size). Regularization theory solves this problem by adding constraints [1]. It is unclear what constraints biological systems use. We suggest that, rather than seeking constraints, an underdetermined system can make decisions based on the available information by grouping its variables. We propose a dynamical system as a minimal system for invariant recognition to demonstrate this strategy. ...
Poster presentation A central problem in neuroscience is to bridge local synaptic plasticity and the global behavior of a system. It has been shown that Hebbian learning of connections in a feedforward network performs PCA on its inputs [1]. In a recurrent Hopfield network with binary units, the Hebbian-learnt patterns form the attractors of the network [2]. Starting from a random recurrent network, Hebbian learning reduces system complexity from chaotic to fixed-point dynamics [3]. In this paper, we investigate the effect of Hebbian plasticity on the attractors of a continuous dynamical system. In a Hopfield network with binary units, it can be shown that Hebbian learning of an attractor stabilizes it, with a deepened energy landscape and a larger basin of attraction. We are interested in how these properties carry over to continuous dynamical systems. Consider a system of the form dx_i/dt = -x_i + Σ_j T_ij f_j(g_j x_j) (1), where x_i is a real variable and f_j a nondecreasing nonlinear function with range [-1,1] and gain g_j. T is the synaptic matrix, which is assumed to have been learned from orthogonal binary ({1,-1}) patterns ξ^μ by the Hebbian rule T_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ. As in the continuous Hopfield network [4], the ξ^μ are no longer attractors unless the gains g_i are big. Assume that the system settles down to an attractor X*, and undergoes Hebbian plasticity: T' = T + εX*X*^T, where ε > 0 is the learning rate. We study how the attractor dynamics change following this plasticity. We show that, in system (1) under certain general conditions, Hebbian plasticity makes the attractor move towards its corner of the hypercube. Linear stability analysis around the attractor shows that the maximum eigenvalue becomes more negative with learning, indicating a deeper landscape. This in a way improves the system's ability to retrieve the corresponding stored binary pattern, although the attractor itself is not stabilized the way it is in binary Hopfield networks.
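The attractor shift described above can be checked numerically. The sketch below is a hedged illustration rather than the authors' analysis: the network size, the two orthogonal patterns, the tanh nonlinearity, the gain g = 2, and the learning rate ε = 0.05 are all assumed for the demonstration.

```python
import math

N = 8
xi1 = [1, 1, 1, 1, -1, -1, -1, -1]
xi2 = [1, -1, 1, -1, 1, -1, 1, -1]      # orthogonal to xi1
g = 2.0                                  # gain (assumed value)

# Hebbian synaptic matrix: T_ij = (1/N) * sum_mu xi_i^mu xi_j^mu
T = [[(xi1[i] * xi1[j] + xi2[i] * xi2[j]) / N for j in range(N)] for i in range(N)]

def relax(T, x, steps=4000, dt=0.05):
    """Euler-integrate dx_i/dt = -x_i + sum_j T_ij tanh(g x_j) to a fixed point."""
    n = len(x)
    for _ in range(steps):
        drive = [sum(T[i][j] * math.tanh(g * x[j]) for j in range(n)) for i in range(n)]
        x = [x[i] + dt * (-x[i] + drive[i]) for i in range(n)]
    return x

x_star = relax(T, [0.5 * v for v in xi1])        # attractor near pattern 1

eps = 0.05                                        # learning rate (assumed)
# Hebbian plasticity on the attractor: T' = T + eps * X* X*^T
T2 = [[T[i][j] + eps * x_star[i] * x_star[j] for j in range(N)] for i in range(N)]
x_star2 = relax(T2, x_star)                       # attractor after plasticity

# the attractor moves towards its corner of the hypercube
print(min(abs(v) for v in x_star), min(abs(v) for v in x_star2))
```

In this toy setting every component of the fixed point grows in magnitude after the Hebbian update, consistent with the claimed movement towards the hypercube corner.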
We investigate charmonium production in Pb + Pb collisions at the LHC beam energy Elab = 2.76A TeV in a fixed-target configuration (√sNN = 72 GeV). In the framework of a transport approach including cold and hot nuclear matter effects on charmonium evolution, we focus on the antishadowing effect on the nuclear modification factors RAA and rAA for the J/ψ yield and transverse momentum. The yield is more suppressed at less forward rapidity (ylab ≃ 2) than at very forward rapidity (ylab ≃ 4), due to shadowing and antishadowing acting in different rapidity bins.
We have built quasi-equilibrium models for uniformly rotating quark stars in general relativity. The conformal flatness approximation is employed and the Compact Object CALculator (cocal) code is extended to treat rotating stars with surface density discontinuity. In addition to the widely used MIT bag model, we have considered a strangeon star equation of state (EoS), suggested by Lai and Xu, that is based on quark clustering and results in a stiff EoS. We have investigated the maximum mass of uniformly rotating axisymmetric quark stars. We have also built triaxially deformed solutions for extremely fast rotating quark stars and studied the possible gravitational wave emission from such configurations.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
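The rich-get-richer dynamics with homeostatic competition described above can be caricatured by a minimal urn-style simulation; this is a drastic simplification of the paper's spiking network (weight-proportional potentiation standing in for STDP, total-weight normalization standing in for homeostasis), and all parameter values are assumed.

```python
import random

random.seed(0)
n_syn = 200
w = [1.0] * n_syn                 # identical initial weights
total = sum(w)
delta = 0.5                       # additive potentiation step (assumed)

for _ in range(20000):
    # Rich-get-richer potentiation: a synapse is chosen with probability
    # proportional to its current weight (a stand-in for STDP in a network).
    r = random.random() * total
    acc = 0.0
    k = n_syn - 1
    for i, wi in enumerate(w):
        acc += wi
        if acc >= r:
            k = i
            break
    w[k] += delta
    # Homeostatic competition: renormalize the total synaptic weight.
    s = sum(w)
    w = [wi * total / s for wi in w]

w_sorted = sorted(w)
print(round(w_sorted[-1], 2), round(w_sorted[n_syn // 2], 3))
```

Even this caricature produces a right-skewed weight distribution from identical initial conditions: a few synapses grow much larger than the median while the total synaptic weight stays fixed, qualitatively mirroring the long-tailed distributions discussed in the abstract.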
One of the important consequences of Hagedorn's statistical bootstrap model is the prediction of a limiting temperature T_crit for hadronic systems, colloquially known as the Hagedorn temperature. According to Hagedorn, this effect should be observed in hadron spectra obtained in infinite equilibrated nuclear matter rather than in relativistic heavy-ion collisions. We present results of microscopic model calculations for infinite nuclear matter, simulated by a box with periodic boundary conditions. The limiting temperature indeed appears in the model calculations. Its origin is traced to strings and many-body decays of resonances.
A small-world network has been suggested to be an efficient solution for achieving both modular and global processing, a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to assess the strength of coupling from the degree of synchrony. Visual responses were recorded in the visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of "hubs" in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding.
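For reference, the two small-world quantities used above, the clustering coefficient and the average shortest-path length, can be computed directly from an adjacency structure. The sketch below uses a toy ring lattice, not the recorded cat data.

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph.

    adj: list of neighbour sets, one per node.
    """
    cs = []
    for nbrs in adj:
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        # count edges among the node's neighbours (each unordered pair once)
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def average_path_length(adj):
    """Mean shortest-path length over node pairs (BFS; assumes connectivity)."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for t, d in dist.items() if t != s)
    return total / (n * (n - 1))

# toy ring lattice: each node links to its two nearest neighbours on each side
n = 12
adj = [{(v + d) % n for d in (-2, -1, 1, 2)} for v in range(n)]
print(clustering_coefficient(adj), round(average_path_length(adj), 2))
```

A small-world network is one whose average path length is close to that of a random graph of equal size and degree while its clustering coefficient is much larger; the lattice above is highly clustered but not yet small-world until shortcuts are added.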
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes.
In order to investigate the involvement of primary visual cortex (V1) in working memory (WM), parallel multisite recordings of multiunit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of working memory contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli. Responses to tests that matched expectations were reduced, which agrees with concepts of predictive coding.
In the present work, the problem of protein folding is addressed from the point of view of equilibrium thermodynamics. The conformation of a globular protein in solution at common temperatures is quite complicated, without any geometrical symmetry, but it is an ordered state in the sense of its biological activity. This complicated conformation of a single protein molecule is destroyed upon increasing the temperature or by the addition of appropriate chemical agents, as revealed by the loss of its activity, changes in its physical properties, and so on. Once the complicated native structures having biological activity are lost, it would be natural to suppose that the native structure could hardly be restored. Nevertheless, pioneers such as Anson and Mirsky recognized as early as 1925 that this was not always the case. If one defines the folded and unfolded states of a protein as two distinct phases of a system, then under the variation of temperature the system is transformed from one phase state into another and vice versa. The process of protein folding is accompanied by the release or absorption of a certain amount of energy, corresponding to first-order phase transitions in the bulk. Knowing the partition function of the system, one can evaluate its energy and heat capacity at different temperatures. This task was performed in this work. The results of the developed statistical mechanics model were compared with the results of molecular dynamics simulations of alanine polypeptides. In particular, the temperature dependences of the total energy and heat capacity were compared for alanine polypeptides consisting of 21, 30, 40, 50 and 100 amino acids. The good correspondence between the results of the theoretical model and the results of molecular dynamics simulations allowed us to validate the assumptions made about the system and to establish the accuracy range of the theory.
Comparing the results of the theoretical model with molecular dynamics simulations requires an efficient analysis of the simulation results. This task was also addressed in the present work. In particular, different ways to obtain the temperature dependence of the heat capacity from molecular dynamics simulations are discussed, and the most efficient one is proposed. The present thesis reports the results of molecular dynamics simulations not only for alanine polypeptides but also for valine and leucine polypeptides. In valine and leucine polypeptides, it is also possible to observe the helix↔random coil transition with increasing temperature. The current thesis presents a work that starts with the investigation of the fundamental degrees of freedom in polypeptides that are responsible for the conformational transitions. This knowledge is then applied to the statistical mechanics description of helix↔coil transitions in polypeptides. Finally, the theoretical formalism is generalized to the case of proteins in a water environment, and the results of the statistical mechanics model are compared with experimental measurements of the temperature dependence of the heat capacity for two globular proteins. The presented formalism is based on fundamental physical properties of the system and provides the possibility to describe the folding↔unfolding transitions quantitatively. The combination of these two facts is the major novelty of the presented approach in comparison to existing ones. The "transparent" physical nature of the formalism makes it possible to apply it further to a large variety of systems and processes. For instance, it can be used to investigate the influence of mutations in proteins on their stability. This task is of primary importance for the design of novel proteins and drug-delivering molecules in medicine.
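A minimal illustration of the thermodynamic route described above: for a caricature two-state (folded/unfolded) system, the heat capacity follows from the energy fluctuations, C = (⟨E²⟩ − ⟨E⟩²)/T², and peaks near the transition temperature T_m = ΔH/ΔS. The two-state reduction and all parameter values are assumptions for illustration; the thesis's formalism is considerably more detailed.

```python
import math

# Caricature two-state model in reduced units (k_B = 1): the folded state has
# energy 0; the unfolded state has energy dH and entropy gain dS (degeneracy
# exp(dS)). All values are illustrative, not fitted to any protein.
dH, dS = 2000.0, 10.0            # transition temperature T_m = dH/dS = 200

def heat_capacity(T):
    # occupancy of the unfolded state from the two-state partition function
    p = 1.0 / (1.0 + math.exp(dH / T - dS))
    # fluctuation formula: C = (<E^2> - <E>^2) / T^2 = (dH/T)^2 * p * (1 - p)
    return (dH / T) ** 2 * p * (1.0 - p)

Ts = [150.0 + 0.1 * i for i in range(1001)]
T_peak = max(Ts, key=heat_capacity)
print(round(T_peak, 1))          # the peak sits close to T_m = dH/dS
```

The sharp heat-capacity peak at the folding temperature is the signature of the first-order-like transition discussed in the text; the same fluctuation formula is what one evaluates on energy samples from molecular dynamics trajectories.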
It can provide further insights into the problem of protein aggregation and the formation of amyloids. The problem of protein aggregation is closely associated with various illnesses such as Alzheimer's disease and mad cow disease. With certain modifications, the presented theoretical method can be applied to the description of the protein crystallization process, which is important for the determination of protein structures with X-rays. There are many other possible applications of the ideas described in the thesis. For instance, a similar formalism can be developed for the description of melting and unzipping of DNA, growth of nanotubes, formation of fullerenes, etc.
Cyclophilins, or immunophilins, are proteins found in many organisms including bacteria, plants and humans. Most of them display peptidyl-prolyl cis-trans isomerase activity, and play roles as chaperones or in signal transduction. Here, we show that cyclophilin anaCyp40 from the cyanobacterium Anabaena sp. PCC 7120 is enzymatically active, and seems to be involved in general stress responses and in assembly of photosynthetic complexes. The protein is associated with the thylakoid membrane and interacts with phycobilisome and photosystem components. Knockdown of anacyp40 leads to growth defects under high-salt and high-light conditions, and reduced energy transfer from phycobilisomes to photosystems. Elucidation of the anaCyp40 crystal structure at 1.2-Å resolution reveals an N-terminal helical domain with similarity to PsbQ components of plant photosystem II, and a C-terminal cyclophilin domain with a substrate-binding site. The anaCyp40 structure is distinct from that of other multi-domain cyclophilins (such as Arabidopsis thaliana Cyp38), and presents features that are absent in single-domain cyclophilins.
Human lymph nodes play a central part in immune defense against infectious agents and tumor cells. Lymphoid follicles are compartments of the lymph node which are spherical and mainly filled with B cells. B cells are cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and the spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and the development stage of a disease. We report our new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immuno-stained with actin. The method is divided into two phases: (1) shock filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
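The Zijdenbos similarity index used for validation is, in essence, the Dice overlap between the automatic and the manual segmentation masks. A minimal sketch on hypothetical toy pixel sets:

```python
def zijdenbos_similarity(a, b):
    """Zijdenbos similarity index ZSI = 2|A ∩ B| / (|A| + |B|), i.e. the
    Dice coefficient, for two sets of foreground pixel coordinates."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# hypothetical 10x10-pixel follicular regions, offset by 3 pixels
auto = {(x, y) for x in range(10) for y in range(10)}       # detected mask
manual = {(x, y) for x in range(3, 13) for y in range(10)}  # annotated mask
print(zijdenbos_similarity(auto, manual))
```

A value of 1.0 means identical masks and 0.0 no overlap; the toy example's 0.7 is in the range of the 0.71 average reported above.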
Poster presentation: Background To test the importance of synchronous neuronal firing for information processing in the brain, one has to investigate whether the strength of synchronous firing is correlated with the experimental conditions. This requires a tool that can compare the strength of synchronous firing across different conditions while correcting for other features of neuronal firing, such as spike-rate modulation or the auto-structure of the spike trains, that might co-occur with synchronous firing. Here we present the bi- and multivariate extension of the previously developed method NeuroXidence [1,2], which allows the amount of synchronous firing to be compared between different conditions. ...
Background Synchronous neuronal firing has been discussed as a potential neural code. A large set of tools has been developed to test, first, whether synchronous firing exists; second, whether it is modulated by the behaviour; and third, whether it occurs above chance level. However, to test whether synchronous neuronal firing is really involved in information processing, one needs a direct comparison of the amount of synchronous firing across factors such as experimental or behavioural conditions. To this end, we present an extended version of the previously published method NeuroXidence [1], which tests, based on a bi- and multivariate test design, whether the amount of synchronous firing above the chance level differs between factors.
Network or graph theory has become a popular tool to represent and analyze large-scale interaction patterns in the brain. To derive a functional network representation from experimentally recorded neural time series, one has to identify the structure of the interactions between these time series. In neuroscience, this is often done by pairwise bivariate analysis, because a fully multivariate treatment is typically not possible due to limited data and excessive computational cost. Furthermore, a true multivariate analysis would consist of the analysis of the combined effects, including information-theoretic synergies and redundancies, of all possible subsets of network components. Since these subsets form the power set of the network components, their number grows exponentially with network size, leading to a combinatorial explosion (i.e. a problem that is computationally intractable). In contrast, a pairwise bivariate analysis of interactions is typically feasible but introduces the possibility of false detection of spurious interactions between network components, especially due to cascade and common drive effects. These spurious connections in a network representation may introduce a bias to subsequently computed graph theoretical measures (e.g. clustering coefficient or centrality), as these measures depend on the reliability of the graph representation from which they are computed. Strictly speaking, graph theoretical measures are meaningful only if the underlying graph structure can be guaranteed to consist of one type of connection only, i.e. connections in the graph are guaranteed to be non-spurious. ...
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data.
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
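The ensemble idea can be sketched for the simplest case: binary signals, history length one, and a plug-in estimator that pools observations across trials at a fixed time point rather than over time. This is only a toy stand-in for the paper's estimator, which handles continuous data with nearest-neighbour techniques; the lag-one copy coupling in the toy data is likewise assumed.

```python
import math
import random
from collections import Counter

def ensemble_te(trials_x, trials_y, t):
    """Plug-in transfer entropy TE(Y->X) = I(X_{t+1}; Y_t | X_t) at a fixed
    time point t, estimated across an ensemble of trials (binary signals,
    history length 1) instead of pooling over time."""
    triples = Counter()
    for x, y in zip(trials_x, trials_y):
        triples[(x[t + 1], x[t], y[t])] += 1
    n = sum(triples.values())
    def marginal(keys):
        m = Counter()
        for k, v in triples.items():
            m[tuple(k[i] for i in keys)] += v
        return m
    c_py = marginal((1, 2))   # counts of (x_t, y_t)
    c_np = marginal((0, 1))   # counts of (x_{t+1}, x_t)
    c_p = marginal((1,))      # counts of (x_t,)
    te = 0.0
    for (xn, xp, yp), v in triples.items():
        te += v / n * math.log2(v * c_p[(xp,)] / (c_py[(xp, yp)] * c_np[(xn, xp)]))
    return te

# toy ensemble: Y is i.i.d. binary noise, X copies Y with lag one
random.seed(3)
trials_y = [[random.randint(0, 1) for _ in range(6)] for _ in range(400)]
trials_x = [[random.randint(0, 1)] + y[:-1] for y in trials_y]

te_yx = ensemble_te(trials_x, trials_y, 2)   # driven direction, near 1 bit
te_xy = ensemble_te(trials_y, trials_x, 2)   # reverse direction, near 0 bit
print(round(te_yx, 2), round(te_xy, 3))
```

Because the estimate at each time point uses only cross-trial observations, no stationarity over time is required; the asymmetry between the two directions recovers the true direction of coupling.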
The dissertation deals with the general problem of how the brain can establish correspondences between neural patterns stored in different cortical areas. Although an important capability in many cognitive areas like language understanding, abstract reasoning, or motor control, this thesis concentrates on invariant object recognition as application of correspondence finding. One part of the work presents a correspondence-based, neurally plausible system for face recognition. Other parts address the question of visual information routing over several stages by proposing optimal architectures for such routing ('switchyards') and deriving ontogenetic mechanisms for the growth of switchyards. Finally, the idea of multi-stage routing is united with the object recognition system introduced before, making suggestions of how the so far distinct feature-based and correspondence-based approaches to object recognition could be reconciled.
The behavior of hadronic matter at high baryon densities is studied within Ultrarelativistic Quantum Molecular Dynamics (UrQMD). Baryonic stopping is observed for Au+Au collisions from SIS up to SPS energies. The excitation function of flow shows strong sensitivity to the underlying equation of state (EOS), allowing for systematic studies of the EOS. Effects of a density-dependent pole of the rho-meson propagator on dilepton spectra are studied for different systems and centralities at CERN energies.
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modalitiy specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality specific information.
Recent advances in artificial neural networks have enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task, and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we introduce RL and present recent advances in RL applied to robotics. We review Intrinsically Motivated (IM) learning, a special form of RL, and apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also provide an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality and most synchronization measures do not distinguish causal and non-causal interactions among remote brain areas, i.e. determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive Magneto-/Electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction leading to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and not influenced by volume conduction. ...
TRENTOOL : an open source toolbox to estimate neural directed interactions with transfer entropy
(2011)
To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement of the prediction of the future of a time series X from its own past by the incorporation of information from the past of a second time series Y is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle, such as Granger causality, modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems such as the brain, nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique.
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the Fieldtrip data format. ...
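TRENTOOL itself relies on data-efficient Kraskov-type estimators plus surrogate statistics; as a minimal, self-contained illustration of the underlying quantity only, the following sketch uses an assumed toy plug-in estimator (histogram-based, history length 1) to compute TE(Y → X) and recover the direction of a simulated coupling:

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Plug-in transfer entropy TE(Y -> X) with history length 1 (Schreiber 2000).
    A toy histogram estimator for illustration, not TRENTOOL's Kraskov estimator."""
    # Discretize both signals into `bins` symbols at the empirical quantiles.
    edges = lambda s: np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    x_next, x_past, y_past = xd[1:], xd[:-1], yd[:-1]

    def entropy(*cols):
        # Shannon entropy (bits) of the empirical joint distribution.
        _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE = H(X_t, X_past) - H(X_past) - H(X_t, X_past, Y_past) + H(X_past, Y_past)
    return (entropy(x_next, x_past) - entropy(x_past)
            - entropy(x_next, x_past, y_past) + entropy(x_past, y_past))

rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.1 * rng.normal(size=5000)   # X is driven by Y's past
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True: Y -> X dominates
```

What the toolbox adds on top of such an estimate is exactly what this sketch omits: embedding-parameter selection and nonparametric tests against surrogate data.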
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
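The core idea of the delay estimator can be sketched with a toy example (parameters and the crude binary estimator below are assumptions; the paper's method uses proper state-space embedding of the target): scan candidate delays u and keep the u that maximizes the source-delayed transfer entropy while conditioning on the target's immediately previous state.

```python
import numpy as np

def te_delayed(x, y, u):
    """Plug-in transfer entropy TE(Y -> X) where the source enters with delay u,
    i.e. it uses Y_{t-u} while still conditioning on the target state X_{t-1}."""
    xb = (x > np.median(x)).astype(int)   # binarize for a simple histogram estimator
    yb = (y > np.median(y)).astype(int)
    xn, xp, yp = xb[u:], xb[u - 1:-1], yb[:-u]
    def H(*cols):
        _, c = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = c / c.sum()
        return -np.sum(p * np.log2(p))
    return H(xn, xp) - H(xp) - H(xn, xp, yp) + H(xp, yp)

rng = np.random.default_rng(1)
y = rng.normal(size=8000)
x = np.empty_like(y)
x[:3] = rng.normal(size=3)
x[3:] = y[:-3] + 0.2 * rng.normal(size=len(y) - 3)   # true interaction delay: 3 samples
te = [te_delayed(x, y, u) for u in range(1, 7)]
best = 1 + int(np.argmax(te))
print(best)  # the scan recovers the delay of 3 samples
```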
The timing of feedback to early visual cortex in the perception of long-range apparent motion
(2008)
When two visual stimuli are presented one after another in different locations, they are often perceived as a single moving object. Feedback from the human motion complex hMT/V5+ to V1 has been hypothesized to play an important role in this illusory perception of motion. We measured event-related responses to illusory motion stimuli of varying apparent motion (AM) content and retinal location using electroencephalography. Detectable cortical stimulus processing started around 60 ms poststimulus in area V1. This component was insensitive to AM content and sequential stimulus presentation. Sensitivity to AM content was observed starting around 90 ms after the second stimulus of a sequence and most likely originated in area hMT/V5+. This AM-sensitive response was insensitive to retinal stimulus position. The stimulus-sequence-related response became sensitive to retinal stimulus position at a longer latency of 110 ms. We interpret our findings as evidence for feedback from area hMT/V5+ or a related motion-processing area to early visual cortices (V1, V2, V3).
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results on various forms of tasks and setups, which show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretical optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform similarly to the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced as well as different algorithmic implementations. Secondly, a short review on the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors.
I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which it has to handle multiple possible underlying generating models (perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations to the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion on the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach tries to address the neuronal substrates of the learning process for cue integration.
I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section to discuss recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented together with an analysis of the influence of different plasticity mechanisms on it. Again benefits and shortcomings and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises, how it is learned. Here we investigated whether reward dependent learning, that is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities and even do so if reliabilities are changing. We also consider the case of causal inference where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward mediated learning could be a driving force for the development of cue integration and causal inference.
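The Bayesian benchmark against which the learner is compared is reliability-weighted averaging of Gaussian cues, where each cue is weighted by its inverse variance; a minimal sketch with assumed toy numbers:

```python
import numpy as np

def combine(mu1, s1, mu2, s2):
    """Reliability-weighted (Bayesian) combination of two Gaussian cues:
    each cue's weight is its reliability, the inverse of its variance."""
    w1, w2 = 1 / s1**2, 1 / s2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sigma = np.sqrt(1 / (w1 + w2))     # fused estimate is more reliable than either cue
    return mu, sigma

# hypothetical example: an uncertain visual cue and a precise auditory cue
mu, sigma = combine(10.0, 2.0, 14.0, 1.0)
# the fused estimate (13.2) lies closer to the more reliable cue,
# and its standard deviation (~0.89) is below both single-cue values
```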
Background Objects in our environment are often partly occluded, yet we effortlessly perceive them as whole and complete. This phenomenon is called visual amodal completion. Psychophysical investigations suggest that the process of completion starts from a representation of the (visible) physical features of the stimulus and ends with a completed representation of the stimulus. The goal of our study was to investigate both stages of the completion process by localizing both brain regions involved in processing the physical features of the stimulus as well as brain regions representing the completed stimulus. Results Using fMRI adaptation we reveal clearly distinct regions in the visual cortex of humans involved in processing of amodal completion: early visual cortex - presumably V1 - processes the local contour information of the stimulus whereas regions in the inferior temporal cortex represent the completed shape. Furthermore, our data suggest that at the level of inferior temporal cortex information regarding the original local contour information is not preserved but replaced by the representation of the amodally completed percept. Conclusion These findings provide neuroimaging evidence for a multiple step theory of amodal completion and further insights into the neuronal correlates of visual perception.
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t–distributed stochastic neighbour embedding (t–SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results would suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30, 000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
Synchronous neuronal firing has been proposed as a potential neuronal code. To determine whether synchronous firing is really involved in different forms of information processing, one needs to directly compare the amount of synchronous firing due to various factors, such as different experimental or behavioral conditions. In order to address this issue, we present an extended version of the previously published method, NeuroXidence. The improved method incorporates bi- and multivariate testing to determine whether different factors result in synchronous firing occurring above the chance level. We demonstrate through the use of simulated data sets that bi- and multivariate NeuroXidence reliably and robustly detects joint-spike-events across different factors.
We study the effects of isovector-scalar meson delta on the equation of state (EOS) of neutron star matter in strong magnetic fields. The EOS of neutron-star matter and nucleon effective masses are calculated in the framework of Lagrangian field theory, which is solved within the mean-field approximation. From the numerical results one can find that the delta-field leads to a remarkable splitting of proton and neutron effective masses. The strength of delta-field decreases with the increasing of the magnetic field and is little at ultrastrong field. The proton effective mass is highly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. The EOS turns out to be stiffer at B < 10^15G but becomes softer at stronger magnetic field after including the delta-field. The AMM terms can affect the system merely at ultrastrong magnetic field(B > 10^19G). In the range of 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar with those without magnetic fields.
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast and against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from a few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather is reflected across the entire length of the brain connectivity time series.
Highlights
• Brain connectivity states identified by cofluctuation strength.
• CMEP as new method to robustly predict human traits from brain imaging data.
• Network-identifying connectivity ‘events’ are not predictive of cognitive ability.
• Sixteen temporally independent fMRI time frames allow for significant prediction.
• Neuroimaging-based assessment of cognitive ability requires sufficient scan lengths.
Abstract
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of a 10 min resting-state fMRI scan) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from a few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather is reflected across the entire length of the brain connectivity time series.
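Cofluctuation states of this kind are commonly computed as "edge time series": z-score each region's signal and multiply pairs element-wise, so that averaging an edge over time recovers the Pearson correlation. A minimal sketch under that assumption (synthetic data; not the CMEP framework itself):

```python
import numpy as np

def cofluctuation(ts):
    """Edge time series from a (time x regions) array: z-score each region,
    multiply pairwise; the per-frame root sum of squares gives the
    cofluctuation amplitude used to define high/low-cofluctuation states."""
    z = (ts - ts.mean(0)) / ts.std(0)
    n_t, n_r = z.shape
    i, j = np.triu_indices(n_r, k=1)        # unique region pairs (edges)
    edge_ts = z[:, i] * z[:, j]             # time x edges
    rss = np.sqrt((edge_ts**2).sum(1))      # cofluctuation amplitude per frame
    return edge_ts, rss

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 5))              # toy data: 200 frames, 5 regions
edge_ts, rss = cofluctuation(ts)
# time-averaging the edge time series recovers the correlation matrix exactly
corr = np.corrcoef(ts.T)
i, j = np.triu_indices(5, k=1)
print(np.allclose(edge_ts.mean(0), corr[i, j]))  # True
```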
EEG microstate periodicity explained by rotating phase patterns of resting-state alpha oscillations
(2020)
Spatio-temporal patterns in electroencephalography (EEG) can be described by microstate analysis, a discrete approximation of the continuous electric field patterns produced by the cerebral cortex. Resting-state EEG microstates are largely determined by alpha frequencies (8-12 Hz) and we recently demonstrated that microstates occur periodically with twice the alpha frequency.
To understand the origin of microstate periodicity, we analyzed the analytic amplitude and the analytic phase of resting-state alpha oscillations independently. In continuous EEG data we found rotating phase patterns organized around a small number of phase singularities which varied in number and location. The spatial rotation of phase patterns occurred with the underlying alpha frequency. Phase rotors coincided with periodic microstate motifs involving the four canonical microstate maps. The analytic amplitude showed no oscillatory behaviour and was almost static across time intervals of 1-2 alpha cycles, resulting in the global pattern of a standing wave.
In n=23 healthy adults, time-lagged mutual information analysis of microstate sequences derived from amplitude and phase signals of awake eyes-closed EEG records showed that only the phase component contributed to the periodicity of microstate sequences. Phase sequences showed mutual information peaks at multiples of 50 ms and the group average had a main peak at 100 ms (10 Hz), whereas amplitude sequences had a slow and monotonous information decay. This result was confirmed by an independent approach combining temporal principal component analysis (tPCA) and autocorrelation analysis.
We reproduced our observations in a generic model of EEG oscillations composed of coupled non-linear oscillators (Stuart-Landau model). Phase-amplitude dynamics similar to experimental EEG occurred when the oscillators underwent a supercritical Hopf bifurcation, a common feature of many computational models of the alpha rhythm.
These findings explain our previous description of periodic microstate recurrence and its relation to the time scale of alpha oscillations. Moreover, our results corroborate the predictions of computational models and connect experimentally observed EEG patterns to properties of critical oscillator networks.
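The analytic amplitude and analytic phase used above are standard Hilbert-transform quantities; a self-contained FFT-based sketch (equivalent to scipy.signal.hilbert) on an assumed toy 10 Hz "alpha" signal shows the separation of a slowly varying envelope from a phase rotating at the alpha rate:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal x + i*H[x] via the FFT (as scipy.signal.hilbert does):
    zero out the negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 250.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# 10 Hz carrier with a slow amplitude modulation, mimicking an alpha burst
alpha = (1 + 0.3 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 10 * t)
z = analytic_signal(alpha)
amplitude = np.abs(z)                       # analytic amplitude (slow envelope)
phase = np.angle(z)                         # analytic phase
inst_freq = np.diff(np.unwrap(phase)) * fs / (2 * np.pi)
print(round(float(np.median(inst_freq)), 1))  # 10.0: the phase rotates at the alpha rate
```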
What is the energy function guiding behavior and learning? Representation-based approaches like maximum entropy, generative models, sparse coding, or slowness principles can account for unsupervised learning of biologically observed structure in sensory systems from raw sensory data. However, they do not relate to behavior. Behavior-based approaches like reinforcement learning explain animal behavior in well-described situations. However, they rely on high-level representations which they cannot extract from raw sensory data. Combining multiple goal functions seems the methodology of choice for understanding the complexity of the brain. But what is the set of possible goals? ...
A deep convolutional neural network (CNN) is developed to study symmetry energy (Esym(ρ)) effects by learning the mapping between the symmetry energy and the two-dimensional (transverse momentum and rapidity) distributions of protons and neutrons in heavy-ion collisions. Supervised training is performed with a labeled data set from ultrarelativistic quantum molecular dynamics (UrQMD) model simulations. It is found that, by using proton spectra on an event-by-event basis as input, the accuracy for classifying the soft and stiff Esym(ρ) is about 60% due to large event-by-event fluctuations, while by using event-summed proton spectra as input, the classification accuracy increases to 98%. The accuracies for the 5-label (5 different Esym(ρ)) classification task are about 58% and 72% using proton and neutron spectra, respectively. For the regression task, the mean absolute errors (MAE), which measure the average magnitude of the absolute differences between the predicted and actual L (the slope parameter of Esym(ρ)), are about 20.4 and 14.8 MeV using proton and neutron spectra, respectively. Fingerprints of the density-dependent nuclear symmetry energy on the transverse momentum and rapidity distributions of protons and neutrons can thus be identified by the convolutional neural network.
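The per-event input to such a network is a 2D (transverse momentum, rapidity) histogram. The hypothetical sketch below builds one from toy spectra and passes it through a single hand-rolled convolution + ReLU stage; bin counts, ranges, and the toy distributions are assumptions, and the paper's trained CNN is of course much deeper:

```python
import numpy as np

def event_image(pt, rap, bins=(32, 32), ranges=((0.0, 2.0), (-2.0, 2.0))):
    """2D (transverse momentum, rapidity) histogram of one event, the input
    'image' a CNN would classify (bin counts and ranges are assumptions)."""
    img, _, _ = np.histogram2d(pt, rap, bins=bins, range=ranges)
    return img / max(img.sum(), 1.0)          # normalize to a probability map

def conv_relu(img, kernel):
    """One 'valid' convolution + ReLU stage, the basic CNN building block
    (no padding, stride 1; real implementations vectorize this loop)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return np.maximum(out, 0.0)               # ReLU activation

rng = np.random.default_rng(0)
pt = rng.exponential(0.5, size=1000)          # toy proton transverse momenta
rap = rng.normal(0.0, 0.8, size=1000)         # toy rapidities
img = event_image(pt, rap)
feat = conv_relu(img, rng.normal(size=(3, 3)))
print(img.shape, feat.shape)                  # (32, 32) (30, 30)
```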
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies including recent experiments investigating motor sequence learning in adult human subjects have produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects.
This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
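One update step of the three SORN plasticity mechanisms can be sketched as follows; learning rates, the target rate, and the simplified update forms are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

# One plasticity step for a SORN-style network of binary threshold units.
rng = np.random.default_rng(0)
N = 50
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse excitatory weights
np.fill_diagonal(W, 0.0)                              # no self-connections
T = rng.uniform(0.0, 0.5, N)                          # per-unit thresholds
x_prev = (rng.random(N) < 0.1).astype(float)          # activity at time t-1
x = (W @ x_prev > T).astype(float)                    # activity at time t

eta_stdp, eta_ip, h_ip = 0.001, 0.01, 0.1             # assumed rates / target rate

# STDP: potentiate w_ij when presynaptic j (t-1) precedes postsynaptic i (t),
# depress it when the order is reversed; weights stay non-negative.
W += eta_stdp * (np.outer(x, x_prev) - np.outer(x_prev, x))
W = np.clip(W, 0.0, None)

# Synaptic normalization (SN): rescale each unit's incoming weights to sum to 1.
row = W.sum(axis=1, keepdims=True)
W = np.divide(W, row, out=np.zeros_like(W), where=row > 0)

# Intrinsic plasticity (IP): raise a unit's threshold when it fired, lower it
# when silent, driving each unit's firing rate toward the target h_ip.
T += eta_ip * (x - h_ip)
```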
Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors
(2011)
Background: Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. Methodology/Principal Findings: We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. Conclusions/Significance: Twelve compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
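Why norms of spherical-harmonics coefficients yield rotation invariance: within each degree l, a rotation only mixes the coefficients c_{l,m}, so the per-degree norm ||c_l|| is unchanged. The sketch below is an assumed toy demonstration for l = 1 with a bump function on the sphere (not the paper's molecular-surface parameterization; grid sizes and the bump are arbitrary choices):

```python
import numpy as np

# Quadrature grid on the unit sphere: theta is the polar angle, phi the azimuth.
theta = np.linspace(0.0, np.pi, 200)
phi = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])
U = np.stack([np.sin(TH) * np.cos(PH),     # unit direction vectors on the sphere
              np.sin(TH) * np.sin(PH),
              np.cos(TH)])

def degree1_norm(axis):
    """||c_1|| for the bump f(u) = exp(3 u.axis). The real degree-1 harmonics
    are sqrt(3/4pi) * (x, y, z)/r, so c_1 rotates together with f and its
    norm is rotation invariant."""
    f = np.exp(3.0 * np.einsum("kij,k->ij", U, axis))
    c1 = np.sqrt(3.0 / (4.0 * np.pi)) * np.array(
        [np.sum(f * U[k] * dOmega) for k in range(3)])
    return np.linalg.norm(c1)

n_z = degree1_norm(np.array([0.0, 0.0, 1.0]))   # bump pointing along z
n_x = degree1_norm(np.array([1.0, 0.0, 0.0]))   # the same bump, rotated to x
print(abs(n_z - n_x) / n_z < 1e-2)              # True: the descriptor is invariant
```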
We compiled an NMR data set consisting of exact nuclear Overhauser enhancement (eNOE) distance limits, residual dipolar couplings (RDCs) and scalar (J) couplings for GB3, which together form one of the largest and most diverse data sets for the structural characterization of a protein to date. All data have small experimental errors, which are carefully estimated. We use the data in the research article Vogeli et al., 2015, Complementarity and congruence between exact NOEs and traditional NMR probes for spatial decoding of protein dynamics, J. Struct. Biol., 191, 3, 306–317, doi:10.1016/j.jsb.2015.07.008 [1] for cross-validation in multiple-state structural ensemble calculation. We advocate this set to be an ideal test case for molecular dynamics simulations and structure calculations.
Cysteine cross-linking in native membranes establishes the transmembrane architecture of Ire1
(2021)
The ER is a key organelle of membrane biogenesis and crucial for the folding of both membrane and secretory proteins. Sensors of the unfolded protein response (UPR) monitor the unfolded protein load in the ER and convey effector functions for maintaining ER homeostasis. Aberrant compositions of the ER membrane, referred to as lipid bilayer stress, are equally potent activators of the UPR. How the distinct signals from lipid bilayer stress and unfolded proteins are processed by the conserved UPR transducer Ire1 remains unknown. Here, we have generated a functional, cysteine-less variant of Ire1 and performed systematic cysteine cross-linking experiments in native membranes to establish its transmembrane architecture in signaling-active clusters. We show that the transmembrane helices of two neighboring Ire1 molecules adopt an X-shaped configuration independent of the primary cause for ER stress. This suggests that different forms of stress converge in a common, signaling-active transmembrane architecture of Ire1.