Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC), a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so-called "contrastive analysis"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.
In order to investigate the involvement of primary visual cortex (V1) in working memory (WM), parallel, multisite recordings of multiunit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of working memory contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli. Responses to tests that matched expectations were reduced, which agrees with concepts of predictive coding.
We compiled an NMR data set consisting of exact nuclear Overhauser enhancement (eNOE) distance limits, residual dipolar couplings (RDCs) and scalar (J) couplings for GB3, which forms one of the largest and most diverse data sets for structural characterization of a protein to date. All data have small experimental errors, which are carefully estimated. We use the data in the research article Vogeli et al., 2015, Complementarity and congruence between exact NOEs and traditional NMR probes for spatial decoding of protein dynamics, J. Struct. Biol., 191, 3, 306–317, doi:10.1016/j.jsb.2015.07.008 [1] for cross-validation in multiple-state structural ensemble calculation. We advocate this set to be an ideal test case for molecular dynamics simulations and structure calculations.
Structural rearrangements play a central role in the organization and function of complex biomolecular systems. In principle, Molecular Dynamics (MD) simulations enable us to investigate these thermally activated processes with an atomic level of resolution. In practice, an exponentially large fraction of computational resources must be invested to simulate thermal fluctuations in metastable states. Path sampling methods focus the computational power on sampling the rare transitions between states. One of their outstanding limitations is to efficiently generate paths that visit significantly different regions of the conformational space. To overcome this issue, we introduce a new algorithm for MD simulations that integrates machine learning and quantum computing. First, using functional integral methods, we derive a rigorous low-resolution spatially coarse-grained representation of the system’s dynamics, based on a small set of molecular configurations explored with machine learning. Then, we use a quantum annealer to sample the transition paths of this low-resolution theory. We provide a proof-of-concept application by simulating a benchmark conformational transition with all-atom resolution on the D-Wave quantum computer. By exploiting the unique features of quantum annealing, we generate uncorrelated trajectories at every iteration, thus addressing one of the challenges of path sampling. Once larger quantum machines become available, the interplay between quantum and classical resources may emerge as a new paradigm of high-performance scientific computing. In this work, we provide a platform to implement this integrated scheme in the field of molecular simulations.
Determining the structure and mechanisms of all individual functional modules of cells at high molecular detail has often been seen as equal to understanding how cells work. Recent technical advances have led to a flood of high-resolution structures of various macromolecular machines, but despite this wealth of detailed information, our understanding of cellular function remains incomplete. Here, we discuss present-day limitations of structural biology and highlight novel technologies that may enable us to analyze molecular functions directly inside cells. We predict that the progression toward structural cell biology will involve a shift toward conceptualizing a 4D virtual reality of cells using digital twins. These will capture cellular segments in a highly enriched molecular detail, include dynamic changes, and facilitate simulations of molecular processes, leading to novel and experimentally testable predictions. Transferring biological questions into algorithms that learn from the existing wealth of data and explore novel solutions may ultimately unveil how cells work.
Residual connections have been proposed as an architecture-based inductive bias that mitigates the problem of exploding and vanishing gradients and increases task performance in both feed-forward and recurrent networks (RNNs) when trained with the backpropagation algorithm. Yet, little is known about how residual connections in RNNs influence their dynamics and fading memory properties. Here, we introduce weakly coupled residual recurrent networks (WCRNNs) in which residual connections result in well-defined Lyapunov exponents and allow for studying properties of fading memory. We investigate how the residual connections of WCRNNs influence their performance, network dynamics, and memory properties on a set of benchmark tasks. We show that several distinct forms of residual connections yield effective inductive biases that result in increased network expressivity. In particular, those are residual connections that (i) result in network dynamics at the proximity of the edge of chaos, (ii) allow networks to capitalize on characteristic spectral properties of the data, and (iii) result in heterogeneous memory properties. In addition, we demonstrate how our results can be extended to non-linear residuals and introduce a weakly coupled residual initialization scheme that can be used for Elman RNNs.
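The basic weakly coupled residual update can be sketched as follows; the coupling strength `alpha`, the network sizes, and the Gaussian initialization are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and coupling strength (assumptions, not the paper's values).
n_hidden, n_input, alpha = 64, 8, 0.1

W = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
U = rng.normal(scale=1.0 / np.sqrt(n_input), size=(n_hidden, n_input))

def wcrnn_step(h, x):
    """One weakly coupled residual update: h_{t+1} = h_t + alpha * tanh(W h_t + U x_t).

    For small alpha the identity (residual) path dominates, so the Jacobian
    stays close to the identity and the largest Lyapunov exponent stays near
    zero, i.e. the dynamics sit near the edge of chaos.
    """
    return h + alpha * np.tanh(W @ h + U @ x)

h = np.zeros(n_hidden)
for t in range(100):
    h = wcrnn_step(h, rng.normal(size=n_input))
print(h.shape)  # → (64,)
```

Varying `alpha` interpolates between a pure identity map (`alpha = 0`, infinite memory, no expressivity) and a standard Elman update, which is the knob the memory-versus-chaos trade-off turns on.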
From August to November 2017, Madagascar endured an outbreak of plague. A total of 2417 cases of plague were confirmed, causing a death toll of 209. Public health intervention efforts were introduced and successfully stopped the epidemic at the end of November. The plague, however, is endemic in the region and occurs annually, posing the risk of future outbreaks. To understand plague transmission, we collected real-time data from official reports, described the outbreak's characteristics, and estimated transmission parameters using statistical and mathematical models. The pneumonic plague epidemic curve exhibited multiple peaks, coinciding with sporadic introductions of new bubonic cases. Optimal climate conditions for rat fleas to flourish were observed during the epidemic. The estimated basic reproduction number during the large wave of the epidemic was high, ranging from 5 to 7 depending on model assumptions. The incubation and infection periods for bubonic and pneumonic plague were 4.3 and 3.4 days and 3.8 and 2.9 days, respectively. Parameter estimation suggested that even with a small fraction of the population exposed to infected rat fleas (1/10,000) and a small probability of transition from a bubonic case to a secondary pneumonic case (3%), the high human-to-human transmission rate can still generate a large outbreak. Controlling rodents and fleas can prevent new index cases, but managing human-to-human transmission is key to preventing large-scale outbreaks.
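A minimal SEIR sketch shows how the reported pneumonic-plague parameters (incubation 3.4 days, infectious period 2.9 days, a basic reproduction number in the 5-7 range) translate into outbreak size; the population size, seeding, and forward-Euler integration below are illustrative assumptions, not the models actually fitted in the study.

```python
# Illustrative SEIR sketch for pneumonic (human-to-human) plague transmission.
# Only the three rate parameters come from the abstract; everything else is
# an assumption for demonstration.
N = 1_000_000          # assumed population size
incubation = 3.4       # mean pneumonic incubation period (days), from the abstract
infectious = 2.9       # mean pneumonic infectious period (days), from the abstract
R0 = 6.0               # within the 5-7 range reported in the abstract

sigma = 1.0 / incubation
gamma = 1.0 / infectious
beta = R0 * gamma      # transmission rate implied by R0 = beta / gamma

def seir(days, dt=0.1):
    """Forward-Euler integration of the standard SEIR equations."""
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N
        S -= new_inf * dt
        E += (new_inf - sigma * E) * dt
        I += (sigma * E - gamma * I) * dt
        R += gamma * I * dt
    return S, E, I, R

final = seir(365)
attack_rate = 1 - final[0] / N
print(attack_rate > 0.9)  # → True: with R0 ~ 6, nearly everyone is eventually infected
```

The point of the sketch is the one the abstract makes: with a reproduction number this high, an uncontrolled human-to-human chain infects most of a susceptible population, so interventions on human transmission dominate the outcome.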
Ebola virus (EBOV) infection causes a high death toll, killing a high proportion of EBOV-infected patients within 7 days. Comprehensive data on EBOV infection are fragmented, hampering efforts to develop therapeutics and vaccines against EBOV. Under these circumstances, mathematical models become valuable resources to explore potential control strategies. In this paper, we employed experimental data from EBOV-infected nonhuman primates (NHPs) to construct a mathematical framework for determining windows of opportunity for treatment and vaccination. Considering a prophylactic vaccine based on recombinant vesicular stomatitis virus expressing the EBOV glycoprotein (rVSV-EBOV), vaccination could be protective if a subject is vaccinated during a period from one week to four months before infection. For the case of a therapeutic vaccine based on monoclonal antibodies (mAbs), a single dose might resolve the invasive EBOV replication even if it were administered as late as four days after infection. Our mathematical models can be used as building blocks for evaluating therapeutic and vaccine modalities as well as for evaluating public health intervention strategies in outbreaks. Future laboratory experiments will help to validate and refine the estimates of the windows of opportunity proposed here.
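A target-cell-limited sketch illustrates the kind of within-host model such a framework builds on; all parameter values and the antibody efficacy below are assumptions for demonstration, not the values fitted to the NHP data.

```python
# Illustrative target-cell-limited model: target cells T are infected at rate
# beta*T*V, infected cells I die at rate delta and produce virus V at rate p,
# and virus is cleared at rate c. An antibody treatment given at day t_rx
# blocks new infection with efficacy eps. All numbers are assumptions.
beta, delta, p, c = 1e-8, 1.0, 100.0, 5.0
T0, V0 = 1e7, 1.0

def simulate(t_rx, eps=0.95, days=20, dt=1e-3):
    """Euler-integrate the model and return the peak viral load."""
    T, I, V = T0, 0.0, V0
    peak = V
    for step in range(int(days / dt)):
        t = step * dt
        b = beta * (1 - eps) if t >= t_rx else beta
        dT = -b * T * V
        dI = b * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt; I += dI * dt; V += dV * dt
        peak = max(peak, V)
    return peak

# Earlier treatment caps the viral peak at a lower level, which is the logic
# behind a "window of opportunity" for mAb administration.
print(simulate(t_rx=2.0) < simulate(t_rx=6.0))  # → True
```

In this toy setting treatment makes the infection subcritical from `t_rx` on, so the achievable peak load is essentially the load already reached at treatment time; a fitted model turns the same comparison into a quantitative treatment window.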
The search for materials with topological properties is an ongoing effort. In this article we propose a systematic statistical method, supported by machine learning techniques, that is capable of constructing topological models for a generic lattice without prior knowledge of the phase diagram. By sampling tight-binding parameter vectors from a random distribution, we obtain data sets that we label with the corresponding topological index. This labeled data is then analyzed to extract those parameters most relevant for the topological classification and to find their most likely values. We find that the marginal distributions of the parameters already define a topological model. Additional information is hidden in correlations between parameters. Here we present as a proof of concept the prediction of the Haldane model as the prototypical topological insulator for the honeycomb lattice in Altland-Zirnbauer (AZ) class A. The algorithm is straightforwardly applicable to any other AZ class or lattice, and could be generalized to interacting systems.
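The sampling-and-labeling step described above can be sketched for the Haldane model itself; the parameter ranges, the Brillouin-zone grid resolution, and the Fukui-Hatsugai method used to compute the label are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

# Sample Haldane-model tight-binding parameters at random and label each
# sample with the Chern number of the lower band. Parameter ranges and grid
# size are illustrative assumptions.
A1 = np.array([1.5, np.sqrt(3) / 2])    # Bravais vectors of the honeycomb lattice
A2 = np.array([1.5, -np.sqrt(3) / 2])   # (nearest-neighbour distance set to 1)
G1 = np.array([2 * np.pi / 3, 2 * np.pi / np.sqrt(3)])   # reciprocal vectors:
G2 = np.array([2 * np.pi / 3, -2 * np.pi / np.sqrt(3)])  # G_i . A_j = 2*pi*delta_ij
NNN = [A1, A2 - A1, -A2]  # next-nearest-neighbour vectors entering the flux term

def haldane_h(k, t1, t2, phi, M):
    """Bloch Hamiltonian of the Haldane model (periodic gauge; the identity
    part of the NNN hopping is dropped since it does not affect the topology)."""
    f = 1 + np.exp(1j * k @ A1) + np.exp(1j * k @ A2)
    dz = M - 2 * t2 * np.sin(phi) * sum(np.sin(k @ v) for v in NNN)
    return np.array([[dz, t1 * f], [t1 * np.conj(f), -dz]])

def chern_number(t1, t2, phi, M, n=24):
    """Fukui-Hatsugai Chern number of the lower band on an n x n BZ grid.
    Nonzero (+-1) in the topological phase |M| < 3*sqrt(3)*t2*|sin(phi)|."""
    u = np.empty((n, n, 2), dtype=complex)
    for i in range(n):
        for j in range(n):
            k = i / n * G1 + j / n * G2
            _, vecs = np.linalg.eigh(haldane_h(k, t1, t2, phi, M))
            u[i, j] = vecs[:, 0]  # lower-band eigenvector
    flux = 0.0
    for i in range(n):
        for j in range(n):
            u00, u10 = u[i, j], u[(i + 1) % n, j]
            u11, u01 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            prod = (np.vdot(u00, u10) * np.vdot(u10, u11)
                    * np.vdot(u11, u01) * np.vdot(u01, u00))
            flux += np.angle(prod)  # Berry flux through this plaquette
    return round(flux / (2 * np.pi))

# The labeling step: random parameter vectors -> topological index.
rng = np.random.default_rng(1)
samples = [(1.0, rng.uniform(0, 0.5), rng.uniform(-np.pi, np.pi), rng.uniform(-1, 1))
           for _ in range(5)]
labels = [chern_number(*s) for s in samples]
print(labels)  # each entry is -1, 0, or +1
```

A data set of such (parameter vector, label) pairs is exactly what the statistical analysis in the abstract consumes: marginal distributions of the parameters conditioned on a nontrivial label already single out the Haldane-type terms.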
The elliptic flow (v2) of D0 mesons from beauty-hadron decays (non-prompt D0) was measured in mid-central (30-50%) Pb-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV with the ALICE detector at the LHC. The D0 mesons were reconstructed at midrapidity (|y| < 0.8) from their hadronic decay D0 → K−π+, in the transverse momentum interval 2 < pT < 12 GeV/c. The result indicates a positive v2 for non-prompt D0 mesons with a significance of 2.7σ. The non-prompt D0-meson v2 is lower than that of prompt non-strange D mesons with 3.2σ significance in 2 < pT < 8 GeV/c, and compatible with the v2 of beauty-decay electrons. Theoretical calculations of beauty-quark transport in a hydrodynamically expanding medium describe the measurement within uncertainties.
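As background on the observable itself: v2 is the second Fourier coefficient of the azimuthal particle distribution relative to the event plane, dN/dφ ∝ 1 + 2 v2 cos 2(φ − Ψ), so v2 = ⟨cos 2(φ − Ψ)⟩. A toy extraction on synthetic data (all numbers illustrative, and a known event plane is assumed, unlike in a real analysis) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
v2_true, psi, n = 0.05, 0.3, 200_000  # illustrative flow coefficient and event plane

# Draw azimuthal angles from dN/dphi ∝ 1 + 2*v2*cos(2*(phi - psi))
# by rejection sampling against a uniform proposal.
phi = []
while len(phi) < n:
    x = rng.uniform(0, 2 * np.pi, n)
    keep = rng.uniform(0, 1 + 2 * v2_true, n) < 1 + 2 * v2_true * np.cos(2 * (x - psi))
    phi.extend(x[keep])
phi = np.array(phi[:n])

# The flow coefficient is the mean second-harmonic projection.
v2_est = np.mean(np.cos(2 * (phi - psi)))
print(abs(v2_est - v2_true) < 0.01)  # → True
```

The measurement in the abstract is far harder than this sketch (the event plane must itself be estimated, and the non-prompt D0 yield is extracted statistically from decay candidates), but the underlying Fourier definition of v2 is the same.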