Background: It is assumed that different pain phenotypes are based on varying molecular pathomechanisms. Distinct ion channels seem to be associated with the perception of cold pain; in particular, TRPM8 and TRPA1 have been highlighted previously. The present study analyzed the distribution of cold pain thresholds with a focus on describing its multimodality, based on the hypothesis that it reflects the contribution of distinct ion channels.
Methods: Cold pain thresholds (CPT) were available from 329 healthy volunteers (aged 18 - 37 years; 159 men) enrolled in previous studies. The distribution of the pooled and log-transformed threshold data was described using kernel density estimation (Pareto Density Estimation, PDE); subsequently, the log data were modeled as a mixture of Gaussian distributions, using the expectation maximization (EM) algorithm to optimize the fit.
Results: CPTs were clearly multimodally distributed. Fitting a Gaussian Mixture Model (GMM) to the log-transformed threshold data revealed that the best fit was obtained with a three-component mixture. The modes of the three identified Gaussian distributions, retransformed from the log domain to the mean stimulation temperatures at which the subjects had indicated pain thresholds, were obtained at 23.7 °C, 13.2 °C and 1.5 °C for Gaussians #1, #2 and #3, respectively.
Conclusions: The localization of the first and second Gaussians was interpreted as reflecting the contribution of two different cold sensors. From the calculated localization of the modes of the first two Gaussians, the hypothesis of an involvement of TRPM8, sensing temperatures from 25 - 24 °C, and TRPA1, sensing cold from 17 °C, can be derived. In that case, subjects belonging to either Gaussian would possess a dominance of one or the other receptor at the skin area where the cold stimuli had been applied. The findings therefore support the suitability of complex analytical approaches for detecting mechanistically determined patterns in pain phenotype data.
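The mixture-modeling step described in the Methods can be sketched as follows. This is a minimal illustration on simulated log-domain data with invented values, not the study's actual pipeline (which chose the number of components via Pareto Density Estimation):

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200):
    """Minimal 1-D Gaussian mixture fit via expectation maximization (EM)."""
    x = np.asarray(x, dtype=float)
    # Initialize means at spread-out quantiles, shared variance, equal weights.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities.
        nk = resp.sum(axis=0)
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Simulated log-transformed thresholds drawn from three separated modes.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(m, 0.15, 120) for m in (1.0, 2.0, 3.0)])
w, mu, var = fit_gmm_1d(data, k=3)
print(np.sort(mu))
```

Retransforming the fitted modes from the log domain, as done in the study, then yields the stimulation temperatures at which the mixture components peak.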
Computed ABC analysis for rational selection of most informative variables in multivariate data
(2015)
Objective: Multivariate data sets often differ in several factors or derived statistical parameters, which have to be selected for a valid interpretation. Basing this selection on traditional statistical limits occasionally leads to the perception of losing information from a data set. This paper proposes a novel method for calculating precise limits for the selection of parameter sets.
Methods: The algorithm is based on an ABC analysis and calculates these limits from the mathematical properties of the distribution of the analyzed items. The limits implement the aim of any ABC analysis, i.e., comparing the increase in yield to the required additional effort. In particular, the limit for set A, the "important few", is chosen such that both the effort and the yield for the other sets (B and C) are minimized and the additional gain is optimized.
Results: As a typical example from biomedical research, the feasibility of the ABC analysis as an objective replacement for classical subjective limits to select highly relevant variance components of pain thresholds is presented. The proposed method improved the biological interpretation of the results and increased the fraction of valid information that was obtained from the experimental data.
Conclusions: The method is applicable to many further biomedical problems, including the creation of complex diagnostic biomarkers or short screening tests from comprehensive test batteries. Thus, the ABC analysis can be proposed as a mathematically valid replacement for traditional limits to maximize the information obtained from multivariate research data.
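The general idea of a yield-versus-effort split can be illustrated with a toy sketch. Note the hedge: the published algorithm derives its set limits from the mathematical properties of the item distribution, whereas the stand-in rule below simply cuts at the point of the normalized yield curve closest to the ideal of full yield at zero effort, and the numbers are invented:

```python
import numpy as np

def abc_split(values):
    """Split positive contributions into sets A, B and C along the yield curve.
    Stand-in rule (not the published algorithm): cut at the point closest to
    the ideal (0 % effort, 100 % yield); reuse the rule on the remainder."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    def cut(x):
        effort = np.arange(1, len(x) + 1) / len(x)   # fraction of items used
        yld = np.cumsum(x) / x.sum()                 # fraction of total obtained
        return int(np.argmin(np.hypot(effort, yld - 1.0))) + 1
    a = cut(v)
    b = a + cut(v[a:])
    return v[:a], v[a:b], v[b:]

# Invented example: a few dominant variance components among many small ones.
comps = [40, 25, 12, 6, 5, 4, 3, 2, 2, 1]
A, B, C = abc_split(comps)
print(len(A), len(B), len(C))
```

Set A, the "important few", collects the dominant components; B and C absorb the diminishing remainder.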
Process pharmacology: a pharmacological data science approach to drug development and therapy
(2016)
A novel functional-genomics-based concept of pharmacology is proposed that uses artificial intelligence techniques for mining and knowledge discovery in "big data" providing comprehensive information about the drugs' targets and their functional genomics. In "process pharmacology", drugs are associated with biological processes. This puts the disease, regarded as alterations in the activity of one or several cellular processes, at the focus of drug therapy. In this setting, the molecular drug targets are merely intermediates. The identification of drugs for therapeutic use or repurposing is based on similarities in the high-dimensional space of the biological processes that a drug influences. Applying this principle to data associated with lymphoblastic leukemia identified a short list of candidate drugs, including one that was recently proposed as a novel rescue medication for lymphocytic leukemia. The pharmacological data science approach provides successful selections of drug candidates within development and repurposing tasks.
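The similarity-based selection can be illustrated with a toy sketch; the drug names and process annotations below are invented for illustration only, and the paper works in a high-dimensional process space rather than with plain sets:

```python
def jaccard(a, b):
    """Set-overlap similarity of two drugs' biological-process profiles
    (a simplification: the paper uses a high-dimensional process space)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented, purely illustrative process annotations.
drug_x = {"apoptosis", "B-cell activation", "DNA repair"}
drug_y = {"apoptosis", "B-cell activation", "cell cycle arrest"}
print(round(jaccard(drug_x, drug_y), 2))
```

Ranking candidate drugs by such a similarity to the disease-altered process profile yields the short list described above.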
The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity owing to its lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of the incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33%, corresponding to the Gini index of the uniform distribution, and secondly, that the Lorenz curves of these distributions are consistent with the Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution, as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.
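For reference, the classical Gini index and the closed-form Gini of a log-normal distribution, which underlies the log-normal approximation mentioned above, can be sketched as follows; the UGini standardization itself is not reproduced here:

```python
import numpy as np
from math import erf, sqrt

def gini(x):
    """Empirical Gini index, equivalent to the area-based Lorenz-curve definition."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def gini_lognormal(sigma):
    """Exact Gini index of a log-normal distribution: 2 * Phi(sigma / sqrt(2)) - 1."""
    phi = 0.5 * (1 + erf(sigma / sqrt(2) / sqrt(2)))  # standard normal CDF at sigma/sqrt(2)
    return 2 * phi - 1

# The empirical Gini of a log-normal sample approaches the closed form.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.8, size=200_000)
print(gini(sample), gini_lognormal(0.8))
```

The closed form makes a Lorenz curve representable by a single parameter sigma, which is what allows a dominance-preserving standardization of the kind described.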
Background: The quantification of global DNA methylation has been established in epigenetic screening. As more practicable alternatives to the HPLC-based gold standard, the methylation analysis of CpG islands in repetitive elements (LINE-1) and the luminometric methylation assay (LUMA) of overall 5-methylcytosine content in "CCGG" recognition sites are most widely used. Both methods are applied as virtually equivalent, despite hints that their results only partly agree. This triggered the present agreement assessments.
Results: Three different human cell types (cultured MCF7 and SHSY5Y cell lines treated with different chemical modulators of DNA methylation, and whole blood drawn from pain patients and healthy volunteers) were submitted to the global DNA methylation assays employing LINE-1 or LUMA-based pyrosequencing measurements. The agreement between the two bioassays was assessed using generally accepted approaches to the statistics of laboratory method comparison studies. Although global DNA methylation levels measured by the two methods correlated, five different lines of statistical evidence consistently rejected the assumption of complete agreement. Specifically, a bias was observed between the two methods. In addition, both the magnitude and direction of the bias were tissue-dependent. Interassay differences could be grouped based on Bayesian statistics, and these groups, in turn, allowed re-identification of the originating tissue.
Conclusions: Although providing partly correlated measurements of DNA methylation, interchangeability of the quantitative results obtained with LINE-1 and LUMA was jeopardized by a consistent bias between the results. Moreover, the present analyses strongly indicate a tissue specificity of the differences between the two methods.
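One standard tool from the laboratory method comparison literature for such agreement assessments is the Bland-Altman analysis; the sketch below uses invented paired readings (the study itself applied several lines of statistical evidence, of which this is only one typical example):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics: mean bias between two methods and the
    95% limits of agreement (bias +/- 1.96 * sd of the paired differences)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired global-methylation readings (%) with a systematic offset.
line1 = np.array([62.1, 70.4, 58.3, 75.0, 66.2, 69.8])
luma = np.array([58.0, 66.9, 55.1, 70.3, 62.5, 65.2])
bias, lo, hi = bland_altman(line1, luma)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A nonzero bias whose limits of agreement exclude zero, as in this toy case, is the kind of evidence against interchangeability reported above.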
Advances in flow cytometry enable the acquisition of large and high-dimensional data sets per patient. Novel computational techniques allow the visualization of structures in these data and, finally, the identification of relevant subgroups. Correct data visualizations and projections from the high-dimensional space to the visualization plane require the correct representation of the structures in the data. This work shows that frequently used techniques are unreliable in this respect. One of the most important methods for data projection in this area is t-distributed stochastic neighbor embedding (t-SNE). We analyzed its performance on artificial and real biomedical data sets. t-SNE introduced a cluster structure into homogeneously distributed data that did not contain any subgroup structure. In other data sets, t-SNE occasionally suggested the wrong number of subgroups or projected data points belonging to different subgroups as if they belonged to the same subgroup. As an alternative approach, emergent self-organizing maps (ESOM) were used in combination with U-matrix methods. This approach allowed the correct identification of homogeneous data, while in sets containing distance- or density-based subgroup structures, the number of subgroups and the data point assignments were correctly displayed. The results highlight possible pitfalls in the use of a currently widely applied algorithmic technique for the detection of subgroups in high-dimensional cytometric data and suggest a robust alternative.
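One simple guard against reading cluster structure into homogeneous data is a cluster-tendency measure such as the Hopkins statistic. This is a standard measure, not one used in the paper, and the data below are simulated; the sketch only illustrates the pitfall described above:

```python
import numpy as np

def hopkins(X, m=50, seed=0):
    """Hopkins statistic for cluster tendency: ~0.5 for homogeneous data,
    approaching 1 for clustered data."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, dim = X.shape
    idx = rng.choice(n, size=m, replace=False)
    # Random probe points drawn uniformly over the data's bounding box.
    uni = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, dim))
    # Nearest-neighbor distances of sampled data points (excluding themselves).
    d_s = np.linalg.norm(X[idx][:, None] - X[None], axis=2)
    d_s[np.arange(m), idx] = np.inf
    w = d_s.min(axis=1)
    # Nearest-neighbor distances of the uniform probes to the data.
    u = np.linalg.norm(uni[:, None] - X[None], axis=2).min(axis=1)
    return u.sum() / (u.sum() + w.sum())

rng = np.random.default_rng(1)
uniform_data = rng.uniform(size=(500, 4))                       # no subgroups
clustered = np.concatenate([rng.normal(c, 0.05, (250, 4))       # two subgroups
                            for c in (0.2, 0.8)])
print(round(hopkins(uniform_data), 2), round(hopkins(clustered), 2))
```

A value near 0.5 for the homogeneous set warns that any clusters appearing in a subsequent projection are artifacts of the projection method.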
Computational analyses of the functions of gene sets obtained in microarray analyses or by topical database searches are increasingly important in biology. To understand their functions, the sets are usually mapped to Gene Ontology knowledge bases by means of over-representation analysis (ORA). Its result represents the specific knowledge of the functionality of the gene set. However, the specific ontology typically consists of many terms and relationships, hindering the understanding of the ‘main story’. We developed a methodology to identify a comprehensibly small number of GO terms as "headlines" of the specific ontology, allowing all central aspects of the roles of the involved genes to be understood. The Functional Abstraction method finds a set of headlines that is specific enough to cover all details of a specific ontology and abstract enough for human comprehension. This method exceeds classical approaches to ORA abstraction and, by focusing on information rather than decorrelation of GO terms, directly targets human comprehension. Functional abstraction provides, with a maximum of certainty, information value, coverage and conciseness, a representation of the biological functions in which a gene set plays a role. This is the necessary means to interpret complex Gene Ontology results, thus strengthening the role of functional genomics in biomarker and drug discovery.
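ORA, as mentioned above, commonly scores a GO term with a one-sided hypergeometric test; a minimal sketch with invented counts:

```python
from math import comb

def ora_pvalue(N, K, n, k):
    """One-sided hypergeometric p-value for over-representation analysis:
    probability of drawing >= k genes annotated to a term, given K of N
    background genes carry the term and the gene set has size n."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Invented example: 40 of 1000 background genes carry a term; 6 of our 20 do.
p = ora_pvalue(N=1000, K=40, n=20, k=6)
print(p < 0.001)
```

The many significant terms such a test returns per gene set are exactly what makes the "headline" abstraction described above necessary.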
Increasing evidence about the central nervous representation of pain in the brain suggests that the operculo-insular cortex is a crucial part of the pain matrix. The pain-specificity of a brain region may be tested by administering nociceptive stimuli while controlling for unspecific activations by administering non-nociceptive stimuli. We applied this paradigm to nasal chemosensation, delivering trigeminal or olfactory stimuli, to verify the pain-specificity of the operculo-insular cortex. In detail, brain activations due to intranasal stimulation induced by non-nociceptive olfactory stimuli of hydrogen sulfide (5 ppm) or vanillin (0.8 ppm) were used to mask brain activations due to somatosensory, clearly nociceptive trigeminal stimulations with gaseous carbon dioxide (75% v/v). Functional magnetic resonance imaging (fMRI) data were recorded from 12 healthy volunteers in a 3T head scanner during stimulus administration using an event-related design. We found that significantly more activations following nociceptive than non-nociceptive stimuli were localized bilaterally in two restricted clusters in the brain containing the primary and secondary somatosensory areas and the insular cortices, consistent with the operculo-insular cortex. However, these activations completely disappeared when eliminating activations associated with the administration of olfactory stimuli, which were small but measurable. While the present experiments verify that the operculo-insular cortex plays a role in the processing of nociceptive input, they also show that it is not a pain-exclusive brain region and allow, in the experimental context, for the interpretation that the operculo-insular cortex plays a major role in the detection of and response to salient events, whether or not these events are nociceptive or painful.
Background: Cannabis has proved to be effective in pain relief, but one major side effect is its influence on memory in humans. Therefore, the role of memory in the central processing of nociceptive information was investigated in healthy volunteers.
Methods: In a placebo-controlled cross-over study including 22 healthy subjects, the effect of 20 mg oral Δ9-tetrahydrocannabinol (THC) on memory involving nociceptive sensations was studied, using a delayed stimulus discrimination task (DSDT). To control for nociceptive specificity, a similar DSDT-based study was performed in a subgroup of thirteen subjects, using visual stimuli.
Results: For each nociceptive stimulus pair, the second stimulus was associated with stronger and more extended brain activations than the first stimulus. These differences disappeared after THC administration. The THC effects were mainly located in two clusters comprising the insula and inferior frontal cortex in the right hemisphere, and the caudate nucleus and putamen bilaterally. These cerebral effects were accompanied in the DSDT by a significant reduction of correct ratings from 41.61% to 37.05% after THC administration (rm-ANOVA interaction "drug" by "measurement": F(1,21) = 4.685, p = 0.042). Rating performance was also reduced for the visual DSDT (69.87% to 54.35%; rm-ANOVA interaction "drug" by "measurement": F(1,12) = 13.478, p = 0.003), and this was reflected in a reduction of stimulus-related brain deactivations in the bilateral angular gyrus.
Conclusions: Results suggest that part of the effect of THC on pain may be related to memory effects. THC reduced the performance in DSDT of nociceptive and visual stimuli, which was accompanied by significant effects on brain activations. However, a pain specificity of these effects cannot be deduced from the data presented.
The measurement of concentrations of drugs and endogenous substances is widely used in basic and clinical pharmacology research and service tasks. Using data science-derived visualizations of laboratory data, it is demonstrated on a real-life example that basic statistical exploration of laboratory assay results, or advised standard visual methods of data inspection, may fall short in detecting systematic laboratory errors. For example, data pathologies such as always generating the same value in all probes of a particular assay run may pass undetected when using standard methods of data quality checking. It is shown that the use of different data visualizations that emphasize different views of the data may enhance the detection of systematic laboratory errors. A dotplot of single data points in the order of assay is proposed that provides an overview of the data range, outliers, and a particular type of systematic error in which similar values are wrongly measured in all probes.
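The "same value in all probes" pathology described above can also be screened for numerically, complementing the proposed dotplot; a minimal sketch with invented readings:

```python
def longest_constant_run(values):
    """Length of the longest run of identical consecutive values in assay order,
    a simple screen for a frozen-value assay run."""
    run = best = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Invented assay results in measurement order: one stretch returns a frozen value.
readings = [4.1, 3.8, 5.0, 2.2, 2.2, 2.2, 2.2, 2.2, 4.7, 3.9]
print(longest_constant_run(readings))
```

A run length far exceeding what repeated true concentrations would plausibly produce flags the affected assay run for inspection, just as it would stand out visually in the dotplot.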