Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking their multivariate nature: investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios.
Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible.
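The core tagging idea can be sketched in a few lines: an edge is potentially spurious if its reconstructed interaction delay is consistent with the summed delays of an indirect path through the surrounding network. The following is a minimal illustrative sketch, not the paper's MATLAB reference implementation; the dictionary layout and tolerance parameter are assumptions for the example.

```python
# Sketch of the post-hoc tagging idea: an edge A -> C is potentially
# spurious (a cascade effect) if its reconstructed delay matches the
# summed delays of an indirect path A -> B -> C within a tolerance.
# Names (delays dict, tol) are illustrative, not the paper's API.

def tag_spurious(delays, tol=1.0):
    """delays: {(source, target): delay_ms} for directed bivariate edges.
    Returns the set of edges tagged as potentially spurious."""
    tagged = set()
    for (a, c), d_ac in delays.items():
        for (a2, b), d_ab in delays.items():
            if a2 != a or b == c:
                continue
            d_bc = delays.get((b, c))
            if d_bc is not None and abs(d_ab + d_bc - d_ac) <= tol:
                tagged.add((a, c))  # timing consistent with indirect path
                break
    return tagged

# Toy network: A drives B with a 5 ms delay, B drives C with 3 ms;
# the bivariate analysis also finds A -> C with an ~8 ms delay.
delays = {("A", "B"): 5.0, ("B", "C"): 3.0, ("A", "C"): 8.0}
print(tag_spurious(delays))  # {('A', 'C')}
```

Pruning the tagged edges yields the conservative approximation described above: edges whose timing can be explained by an indirect path are removed, at the cost of possibly discarding some true direct interactions.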
This paper provides a theoretical assessment of gestures in the context of authoring image-related hypertexts by example of the museum information system WikiNect. To this end, a first implementation of gestural writing based on image schemata is provided (Lakoff in Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago, 1987). Gestural writing is defined as a sort of coding in which propositions are only expressed by means of gestures. In this respect, it is shown that image schemata allow for bridging between natural language predicates and gestural manifestations. Further, it is demonstrated that gestural writing primarily focuses on the perceptual level of image descriptions (Hollink et al. in Int J Hum Comput Stud 61(5):601–626, 2004). By exploring the metaphorical potential of image schemata, it is finally illustrated how to extend the expressiveness of gestural writing in order to reach the conceptual level of image descriptions. In this context, the paper paves the way for implementing museum information systems like WikiNect as systems of kinetic hypertext authoring based on full-fledged gestural writing.
We provide elementary algorithms for two preservation theorems for first-order sentences (FO) on the class ℭd of all finite structures of degree at most d: For each FO-sentence that is preserved under extensions (homomorphisms) on ℭd, a ℭd-equivalent existential (existential-positive) FO-sentence can be constructed in 5-fold (4-fold) exponential time. This is complemented by lower bounds showing that a 3-fold exponential blow-up of the computed existential (existential-positive) sentence is unavoidable. Both algorithms can be extended (while maintaining the upper and lower bounds on their time complexity) to input first-order sentences with modulo m counting quantifiers (FO+MODm). Furthermore, we show that for an input FO-formula, a ℭd-equivalent Feferman-Vaught decomposition can be computed in 3-fold exponential time. We also provide a matching lower bound.
Viruses rely completely on the hosts' machinery for translation of viral transcripts. However, for most viruses infecting humans, codon usage preferences (CUPrefs) do not match those of the host. Human papillomaviruses (HPVs) are a showcase to tackle this paradox: they present a large genotypic diversity and a broad range of phenotypic presentations, from asymptomatic infections to productive lesions and cancer. By applying phylogenetic inference and dimensionality reduction methods, we demonstrate first that genes in HPVs are poorly adapted to the average human CUPrefs, the only exception being capsid genes in viruses causing productive lesions. Phylogenetic relationships between HPVs explained only a small proportion of CUPrefs variation. Instead, the most important explanatory factor for viral CUPrefs was infection phenotype, as orthologous genes in viruses with similar clinical presentation displayed similar CUPrefs. Moreover, viral genes with similar spatiotemporal expression patterns also showed similar CUPrefs. Our results suggest that CUPrefs in HPVs reflect either variations in the mutation bias or differential selection pressures depending on the clinical presentation and expression timing. We propose that poor viral CUPrefs may be central to a trade-off between strong viral gene expression and the potential for eliciting protective immune response.
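Comparisons of codon usage preferences like those above rest on quantifying how often each synonymous codon is used. A standard measure for this (one common choice, not necessarily the exact metric of the study) is the relative synonymous codon usage (RSCU); the amino-acid table below is a small illustrative subset of the standard genetic code.

```python
# Relative synonymous codon usage (RSCU): observed codon count divided
# by the count expected if all synonymous codons were used uniformly.
# SYNONYMS is a small illustrative subset of the standard genetic code.
from collections import Counter

SYNONYMS = {
    "F": ["TTT", "TTC"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
}

def rscu(seq):
    codons = Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))
    out = {}
    for aa, syn in SYNONYMS.items():
        total = sum(codons[c] for c in syn)
        if total == 0:
            continue  # amino acid absent from this sequence
        for c in syn:
            out[c] = codons[c] * len(syn) / total
    return out

# Toy sequence with two TTT codons and one TTC codon (both encode F):
print(rscu("TTTTTTTTC"))  # {'TTT': 1.333..., 'TTC': 0.666...}
```

An RSCU above 1 marks a preferred codon, below 1 a disfavored one; vectors of such values per gene are the kind of input the dimensionality reduction mentioned above would operate on.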
Background: Microarray analysis represents a powerful way to test scientific hypotheses on the functionality of cells. The measurements consider the whole genome, and the large amount of generated data requires sophisticated analysis. To date, no gold standard for the analysis of microarray images has been established. Due to the lack of a standard approach there is a strong need to identify new processing algorithms.
Methods: We propose a novel approach based on hyperbolic partial differential equations (PDEs) for unsupervised spot segmentation. Prior to segmentation, morphological operations were applied for the identification of co-localized groups of spots. A grid alignment was performed to determine the borderlines between rows and columns of spots. PDEs were applied to detect the inflection points within each column and row; vertical and horizontal luminance profiles were evolved respectively. The inflection points of the profiles determined borderlines that confined a spot within adapted rectangular areas. A subsequent k-means clustering determined the pixels of each individual spot and its local background.
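The final clustering step can be illustrated in isolation: within one rectangular spot region, pixels are split into "spot" and "local background" by intensity with k = 2. The sketch below uses plain 1-D Lloyd iterations on a toy array; it is an assumption-laden illustration, not the paper's implementation.

```python
# Minimal sketch of the k-means step (k = 2) on one rectangular spot
# region: partition pixels into "spot" and "local background" by
# intensity. Toy data; not the paper's pipeline.
import numpy as np

def split_spot(pixels, iters=20):
    vals = pixels.ravel().astype(float)
    lo, hi = vals.min(), vals.max()  # initial centroids
    for _ in range(iters):
        mask = np.abs(vals - hi) < np.abs(vals - lo)  # True -> spot pixel
        lo = vals[~mask].mean()   # background centroid
        hi = vals[mask].mean()    # spot centroid
    return mask.reshape(pixels.shape), lo, hi

# Toy 4x4 region: a bright 2x2 spot on a dim background.
region = np.array([[10, 12, 11, 10],
                   [11, 90, 95, 12],
                   [10, 92, 97, 11],
                   [12, 10, 11, 10]])
mask, bg, spot = split_spot(region)
print(int(mask.sum()), round(spot, 1))  # 4 93.5
```

The spot intensity would then be derived from the masked pixels, with the background centroid available for local background subtraction.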
Results: We evaluated the approach for a data set of microarray images taken from the Stanford Microarray Database (SMD). The data set is based on two studies on global gene expression profiles of Arabidopsis thaliana. We computed values for spot intensity, regression ratio, and coefficient of determination. For spots with irregular contours and inner holes, we found intensity values that were significantly different from those determined by the GenePix Pro microarray analysis software. We determined the set of differentially expressed genes from our intensities and identified more activated genes than were predicted by the GenePix software.
Conclusions: Our method represents a worthwhile alternative and complement to standard approaches used in industry and academia. We highlight the importance of our spot segmentation approach, which identified additional important genes and thus helps to better explain the molecular mechanisms activated in the defense response to virus and pathogen infection.
This paper shows equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on letrec-free expressions.
A measurement of dijet correlations in p–Pb collisions at √sNN = 5.02 TeV with the ALICE detector is presented. Jets are reconstructed from charged particles measured in the central tracking detectors and neutral energy deposited in the electromagnetic calorimeter. The transverse momentum of the full jet (clustered from charged and neutral constituents) and charged jet (clustered from charged particles only) is corrected event-by-event for the contribution of the underlying event, while corrections for underlying event fluctuations and finite detector resolution are applied on an inclusive basis. A projection of the dijet transverse momentum, k_Ty = p_T,jet^ch+ne sin(ϕdijet), with ϕdijet the azimuthal angle between a full and a charged jet and p_T,jet^ch+ne the transverse momentum of the full jet, is used to study nuclear matter effects in p–Pb collisions. This observable is sensitive to the acoplanarity of dijet production and its potential modification in p–Pb collisions with respect to pp collisions. Measurements of the dijet k_Ty as a function of the transverse momentum of the full and recoil charged jet, and the event multiplicity are presented. No significant modification of k_Ty due to nuclear matter effects in p–Pb collisions with respect to the event multiplicity or a PYTHIA8 reference is observed.
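The observable defined above is a simple projection; a short sketch makes the geometry concrete. The function and the numerical values below are purely illustrative assumptions, not ALICE analysis code.

```python
# Illustrative computation of the dijet observable from the abstract:
# k_Ty = p_T,jet^ch+ne * sin(phi_dijet), the component of the full-jet
# transverse momentum perpendicular to the recoiling charged-jet axis.
import math

def k_ty(pt_full_jet, phi_full, phi_charged):
    phi_dijet = abs(phi_full - phi_charged)  # azimuthal angle between jets
    return pt_full_jet * math.sin(phi_dijet)

# A perfectly back-to-back dijet (phi_dijet = pi) gives k_Ty = 0 up to
# floating point; acoplanarity shifts it away from zero.
print(round(k_ty(40.0, 0.0, math.pi - 0.1), 3))  # 3.993
```

The width of the resulting k_Ty distribution is what carries the sensitivity to acoplanarity and its possible modification by nuclear matter effects.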
Charged jet production cross sections in p–Pb collisions at √sNN = 5.02 TeV measured with the ALICE detector at the LHC are presented. Using the anti-kT algorithm, jets have been reconstructed in the central rapidity region from charged particles with resolution parameters R = 0.2 and R = 0.4. The reconstructed jets have been corrected for detector effects and the underlying event background. To calculate the nuclear modification factor, RpPb, of charged jets in p–Pb collisions, a pp reference was constructed by scaling previously measured charged jet spectra at √s = 7 TeV. In the transverse momentum range 20 ≤ pT,ch jet ≤ 120 GeV/c, RpPb is found to be consistent with unity, indicating the absence of strong nuclear matter effects on jet production. Major modifications to the radial jet structure are probed via the ratio of jet production cross sections reconstructed with the two different resolution parameters. This ratio is found to be similar to the measurement in pp collisions at √s = 7 TeV and to the expectations from PYTHIA pp simulations and NLO pQCD calculations at √sNN = 5.02 TeV.
We have performed the first measurement of the coherent ψ(2S) photo-production cross section in ultra-peripheral PbPb collisions at the LHC. This charmonium excited state is reconstructed via the ψ(2S)→l+l− and ψ(2S)→J/ψπ+π− decays, where the J/ψ decays into two leptons. The analysis is based on an event sample corresponding to an integrated luminosity of about 22 μb−1. The cross section for coherent ψ(2S) production in the rapidity interval −0.9<y<0.9 is dσ_coh/dy = 0.83 ± 0.19 (stat+syst) mb. The ψ(2S) to J/ψ coherent cross section ratio is 0.34 +0.08 −0.07 (stat+syst). The obtained results are compared to predictions from theoretical models.
The elliptic flow, v2, of muons from heavy-flavour hadron decays at forward rapidity (2.5<y<4) is measured in Pb–Pb collisions at √sNN=2.76 TeV with the ALICE detector at the LHC. The scalar product, two- and four-particle Q cumulants and Lee–Yang zeros methods are used. The dependence of the v2 of muons from heavy-flavour hadron decays on the collision centrality, in the range 0–40%, and on transverse momentum, pT, is studied in the interval 3<pT<10 GeV/c. A positive v2 is observed with the scalar product and two-particle Q cumulants in semi-central collisions (10–20% and 20–40% centrality classes) for the pT interval from 3 to about 5 GeV/c with a significance larger than 3σ, based on the combination of statistical and systematic uncertainties. The v2 magnitude tends to decrease towards more central collisions and with increasing pT. It becomes compatible with zero in the interval 6<pT<10 GeV/c. The results are compared to models describing the interaction of heavy quarks and open heavy-flavour hadrons with the high-density medium formed in high-energy heavy-ion collisions.
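Of the methods listed above, the two-particle Q-cumulant has a particularly compact form: Q2 = Σ_k exp(i 2φ_k) over the M particles of an event, ⟨2⟩ = (|Q2|² − M) / (M(M−1)), and v2{2} = √⟨2⟩. The sketch below applies it to a single toy event with a known cos(2φ) modulation; the sampling scheme and numbers are illustrative assumptions, not the ALICE analysis.

```python
# Sketch of the two-particle Q-cumulant estimate of v2:
#   Q2 = sum_k exp(i*2*phi_k)
#   <2> = (|Q2|^2 - M) / (M*(M-1))
#   v2{2} = sqrt(<2>)
# Single toy event; real analyses average <2> over many events.
import cmath, math, random

def v2_two_particle(phis):
    m = len(phis)
    q2 = sum(cmath.exp(2j * phi) for phi in phis)
    two = (abs(q2) ** 2 - m) / (m * (m - 1))
    return math.sqrt(max(two, 0.0))

# Toy event: azimuthal angles drawn with a cos(2*phi) modulation of
# strength v2_true = 0.1 via simple rejection sampling.
random.seed(1)
v2_true, phis = 0.1, []
while len(phis) < 20000:
    phi = random.uniform(0, 2 * math.pi)
    if random.random() < (1 + 2 * v2_true * math.cos(2 * phi)) / (1 + 2 * v2_true):
        phis.append(phi)
print(v2_two_particle(phis))  # expected to be close to v2_true = 0.1
```

Subtracting M in the numerator removes the self-correlation of each particle with itself, which is why the estimator recovers the input modulation; two-particle non-flow correlations, however, bias it, which is why the four-particle cumulant and Lee–Yang zeros methods are used as cross-checks.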