Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these risks, technologies for so-called “AI fairness” have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In their setting, the authors find that AI fairness successfully manages to adhere to fairness requirements while reducing the overall prediction performance only slightly. However, they find that AI fairness also results in an increase in financial cost. The paper’s findings thus contribute to designing information systems on the basis of AI fairness.
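The abstract does not disclose which fairness technique the authors adapted; purely as an illustration, the sketch below shows how a demographic-parity constraint could be imposed on a coupon-allocation classifier using the open-source fairlearn library. The data, feature layout, and hyperparameters are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: impose a demographic-parity constraint on a
# coupon-allocation classifier. Data and the choice of fairlearn are
# assumptions, not the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 8))             # clickstream session features (synthetic)
sensitive = rng.integers(0, 2, size=n)  # a binary sensitive attribute (synthetic)
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Unconstrained baseline classifier
baseline = LogisticRegression(max_iter=1000).fit(X, y)

# Fairness-constrained classifier: demographic parity via exponentiated gradient
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=sensitive)

for name, pred in [("baseline", baseline.predict(X)),
                   ("constrained", mitigator.predict(X))]:
    gap = demographic_parity_difference(y, pred, sensitive_features=sensitive)
    print(f"{name}: coupon-rate gap between groups = {gap:.3f}")
```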
Nerve tissue contains a high density of chemical synapses, about 1 per µm3 in the mammalian cerebral cortex. Thus, even for small blocks of nerve tissue, dense connectomic mapping requires the identification of millions to billions of synapses. While the focus of connectomic data analysis has been on neurite reconstruction, synapse detection becomes limiting when datasets grow in size and dense mapping is required. Here, we report SynEM, a method for automated detection of synapses from conventionally en-bloc stained 3D electron microscopy image stacks. The approach is based on a segmentation of the image data and focuses on classifying borders between neuronal processes as synaptic or non-synaptic. SynEM yields 97% precision and recall in binary cortical connectomes with no user interaction. It scales to large volumes of cortical neuropil, plausibly even whole-brain datasets. SynEM removes the burden of manual synapse annotation for large densely mapped connectomes.
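SynEM itself is a published pipeline built around volume-EM segmentations; the following is only a schematic sketch of its core idea, classifying interfaces between segmented neurites as synaptic or non-synaptic from per-interface features, using a generic random-forest classifier and invented feature values rather than SynEM's actual feature set.

```python
# Schematic sketch of the core idea: treat each interface between two segmented
# neurites as a sample, describe it by texture/shape features, and train a
# binary classifier (synaptic vs. non-synaptic). Features, labels, and the
# random-forest choice are illustrative assumptions, not SynEM's design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)
n_interfaces = 10_000
features = rng.normal(size=(n_interfaces, 12))                      # synthetic interface features
is_synaptic = (features[:, 0] + features[:, 1] > 1.0).astype(int)   # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(
    features, is_synaptic, test_size=0.3, random_state=0, stratify=is_synaptic
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```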
Background: The prevalence of multimorbidity has been increasing in recent years, and patients with multimorbidity often experience a reduced quality of life and require more health care. The aim of this study was to explore the evolution of multimorbidity, taking the sequence of diseases into consideration.
Methods: We used a Belgian database collected by extracting coded parameters and more than 100 chronic conditions from the Electronic Health Records of general practitioners to study patients older than 40 years with multiple diagnoses between 1991 and 2015 (N = 65 939). We applied Markov chains to estimate the probability of developing another condition in the next state after a diagnosis. Weighted Association Rule Mining (WARM) allowed us to identify strong associations among multiple conditions.
Results: About 66.9% of the selected patients had multimorbidity. Conditions with high prevalence, such as hypertension and depressive disorder, were likely to occur after the diagnosis of most conditions. Patterns in several disease groups were apparent based on the results of both Markov chain and WARM, such as musculoskeletal diseases and psychological diseases. Psychological diseases were frequently followed by irritable bowel syndrome.
Conclusions: Our study is the first to use Markov chains and WARM to provide a comprehensive view of the relations among 103 chronic conditions, taking sequential chronology into consideration. Some strong associations among specific conditions were detected, and the results were consistent with current knowledge in the literature, suggesting that the approaches are valid for use on larger data sets, such as those of national health care systems or private insurers.
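As an illustration of the Methods described above, the sketch below estimates a first-order Markov transition matrix from per-patient diagnosis sequences. The sequences and condition names are synthetic placeholders; the study's own preprocessing and the WARM step are not reproduced.

```python
# Minimal sketch: estimate first-order Markov transition probabilities between
# chronic conditions from per-patient diagnosis sequences (synthetic data).
from collections import defaultdict

patient_sequences = [
    ["hypertension", "diabetes", "depressive disorder"],
    ["low back pain", "depressive disorder", "irritable bowel syndrome"],
    ["hypertension", "depressive disorder"],
    ["diabetes", "hypertension", "depressive disorder"],
]

counts = defaultdict(lambda: defaultdict(int))
for seq in patient_sequences:
    for current, nxt in zip(seq, seq[1:]):
        counts[current][nxt] += 1

# Normalize counts into transition probabilities P(next condition | current condition)
transition = {
    current: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for current, nxts in counts.items()
}

for current, nxts in transition.items():
    for nxt, p in sorted(nxts.items(), key=lambda kv: -kv[1]):
        print(f"P({nxt} | {current}) = {p:.2f}")
```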
Purpose: To determine whether machine learning-assisted texture analysis of multi-energy virtual monochromatic image (VMI) datasets from dual-energy CT (DECT) can be used to differentiate metastatic head and neck squamous cell carcinoma (HNSCC) lymph nodes from lymphoma, inflammatory, or normal lymph nodes.
Materials and methods: A retrospective evaluation of 412 cervical nodes from 5 different patient groups (50 patients in total) having undergone DECT of the neck between 2013 and 2015 was performed: (1) HNSCC with pathology proven metastatic adenopathy, (2) HNSCC with pathology proven benign nodes (controls for (1)), (3) lymphoma, (4) inflammatory, and (5) normal nodes (controls for (3) and (4)). Texture analysis was performed with TexRAD® software using two independent sets of contours to assess the impact of inter-rater variation. Two machine learning algorithms (Random Forests (RF) and Gradient Boosting Machine (GBM)) were used with independent training and testing sets and determination of accuracy, sensitivity, specificity, PPV, NPV, and AUC.
Results: In the independent testing (prediction) sets, the accuracy for distinguishing different groups of pathologic nodes or normal nodes ranged between 80% and 95%. The models generated using texture data extracted from the independent contour sets had substantial to almost perfect agreement. The accuracy, sensitivity, specificity, PPV, and NPV for correctly classifying a lymph node as malignant (i.e. metastatic HNSCC or lymphoma) versus benign were 92%, 91%, 93%, 95%, and 87%, respectively.
Conclusion: Machine learning-assisted DECT texture analysis can help distinguish different nodal pathologies and normal nodes with high accuracy.
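The modelling step described in the Materials and methods can be sketched roughly as below; the texture feature matrix, labels, and hyperparameters are synthetic placeholders, and the TexRAD feature extraction itself is not reproduced.

```python
# Illustrative sketch of the classification step: Random Forest and Gradient
# Boosting on per-node texture features with an independent test set and the
# reported metrics. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(412, 20))                    # texture features per lymph node
y = (X[:, 0] + 0.8 * X[:, 1] > 0.3).astype(int)   # 1 = malignant, 0 = benign (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for name, model in [("RF", RandomForestClassifier(n_estimators=300, random_state=0)),
                    ("GBM", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: acc={(tp + tn) / len(y_te):.2f} sens={sens:.2f} "
          f"spec={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f} AUC={auc:.2f}")
```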
Background: Enhancers play a fundamental role in orchestrating cell state and development. Although several methods have been developed to identify enhancers, linking them to their target genes is still an open problem. Several theories have been proposed on the functional mechanisms of enhancers, which triggered the development of various methods to infer promoter–enhancer interactions (PEIs). The advancement of high-throughput techniques describing the three-dimensional organization of chromatin has paved the way to pinpoint long-range PEIs. Here we investigated whether including PEIs in computational models for the prediction of gene expression improves performance and interpretability.
Results: We have extended our TEPIC framework to include DNA contacts deduced from chromatin conformation capture experiments and compared various methods to determine PEIs using predictive modelling of gene expression from chromatin accessibility data and predicted transcription factor (TF) motif data. We designed a novel machine learning approach that allows the prioritization of TFs binding to distal loop and promoter regions with respect to their importance for gene expression regulation. Our analysis revealed a set of core TFs that are part of enhancer–promoter loops involving YY1 in different cell lines.
Conclusion: We present a novel approach that can be used to prioritize TFs involved in distal and promoter-proximal regulatory events by integrating chromatin accessibility, conformation, and gene expression data. We show that the integration of chromatin conformation data can improve gene expression prediction and aid model interpretability.
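The TEPIC code itself is not reproduced here; the sketch below only illustrates the general modelling idea of regressing gene expression on TF affinity features computed separately for promoter regions and for distal regions connected by chromatin loops, with a sparse linear model whose coefficients serve as TF priorities. The feature names, the lasso choice, and all data are assumptions.

```python
# Conceptual sketch (not the TEPIC framework): predict gene expression from TF
# features aggregated over promoters and over loop-connected distal regions;
# non-zero coefficients prioritize TFs. All data are synthetic.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_genes, n_tfs = 2_000, 50
tf_promoter = rng.gamma(2.0, size=(n_genes, n_tfs))   # TF affinities in promoters
tf_loops = rng.gamma(2.0, size=(n_genes, n_tfs))      # TF affinities in looped enhancers
expression = 1.5 * tf_promoter[:, 0] + 0.8 * tf_loops[:, 1] + rng.normal(size=n_genes)

X = StandardScaler().fit_transform(np.hstack([tf_promoter, tf_loops]))
model = LassoCV(cv=5).fit(X, expression)

names = [f"TF{i}_promoter" for i in range(n_tfs)] + [f"TF{i}_loop" for i in range(n_tfs)]
ranked = sorted(zip(names, model.coef_), key=lambda kv: -abs(kv[1]))[:5]
for name, coef in ranked:
    print(f"{name}: coefficient {coef:+.2f}")
```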
Purpose: To develop and validate a CT-based radiomics signature for the prognosis of loco-regional tumour control (LRC) in patients with locally advanced head and neck squamous cell carcinoma (HNSCC) treated by primary radiochemotherapy (RCTx) based on retrospective data from 6 partner sites of the German Cancer Consortium - Radiation Oncology Group (DKTK-ROG).
Material and methods: Pre-treatment CT images of 318 patients with locally advanced HNSCC were collected. Four hundred forty-six features were extracted from each primary tumour volume and then filtered through stability analysis and clustering. First, a baseline signature was developed from demographic and tumour-associated clinical parameters. This signature was then supplemented by CT imaging features. A final signature was derived using repeated 3-fold cross-validation on the discovery cohort. Performance in external validation was assessed by the concordance index (C-Index). Furthermore, calibration and patient stratification into groups with low and high risk for loco-regional recurrence were analysed.
Results: For the clinical baseline signature, only the primary tumour volume was selected. The final signature combined the tumour volume with two independent radiomics features. It achieved moderately good discriminatory performance (C-Index [95% confidence interval]: 0.66 [0.55–0.75]) on the validation cohort along with significant patient stratification (p = 0.005) and good calibration.
Conclusion: We identified and validated a clinical-radiomics signature for LRC of locally advanced HNSCC using a multi-centric retrospective dataset. Prospective validation will be performed on the primary cohort of the HNprädBio trial of the DKTK-ROG once follow-up is completed.
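As a hedged illustration of the survival-modelling and validation step, the sketch below fits a Cox proportional-hazards model on a tumour-volume plus radiomics-feature table and reports a concordance index on held-out data using the lifelines library. The feature names and data are placeholders, not the DKTK-ROG cohorts or the paper's actual signature.

```python
# Illustrative sketch of the survival-modelling step: Cox regression on tumour
# volume plus two radiomics features, evaluated by the concordance index on a
# held-out set. Data and feature names are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
n = 318
df = pd.DataFrame({
    "tumour_volume": rng.lognormal(mean=3.0, size=n),
    "radiomics_1": rng.normal(size=n),
    "radiomics_2": rng.normal(size=n),
})
risk = 0.01 * df["tumour_volume"] + 0.5 * df["radiomics_1"]
df["time"] = rng.exponential(scale=np.exp(-risk) * 24)   # months to loco-regional event
df["event"] = rng.integers(0, 2, size=n)                 # 1 = recurrence observed

train, test = df.iloc[:200], df.iloc[200:]
cph = CoxPHFitter().fit(train, duration_col="time", event_col="event")
scores = -cph.predict_partial_hazard(test.drop(columns=["time", "event"]))
print(f"validation C-index: {concordance_index(test['time'], scores, test['event']):.2f}")
```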
The hierarchical feature regression (HFR) is a novel graph-based regularized regression estimator, which mobilizes insights from the domains of machine learning and graph theory to estimate robust parameters for a linear regression. The estimator constructs a supervised feature graph that decomposes parameters along its edges, adjusting first for common variation and successively incorporating idiosyncratic patterns into the fitting process. The graph structure has the effect of shrinking parameters towards group targets, where the extent of shrinkage is governed by a hyperparameter, and group compositions as well as shrinkage targets are determined endogenously. The method offers rich resources for the visual exploration of the latent effect structure in the data, and demonstrates good predictive accuracy and versatility when compared to a panel of commonly used regularization techniques across a range of empirical and simulated regression tasks.
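The HFR estimator is not reproduced here; purely as a toy illustration of the underlying idea of shrinking coefficients toward group targets with a strength hyperparameter, the sketch below penalizes deviations of each coefficient from its group mean. The groups are assumed known for the illustration, whereas HFR determines group compositions and targets endogenously from a supervised feature graph.

```python
# Toy illustration of shrinkage towards group targets (not the HFR algorithm
# itself, which builds the groups endogenously via a supervised feature graph).
# Coefficients are penalized for deviating from their group mean; the strength
# is controlled by the hyperparameter lam.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 1.1, 0.9, -0.5, -0.4, -0.6])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # assumed known in this toy example
lam = 5.0                                             # shrinkage strength hyperparameter

def objective(beta):
    fit = np.sum((y - X @ beta) ** 2)
    shrink = sum(np.sum((beta[g] - beta[g].mean()) ** 2) for g in groups)
    return fit + lam * shrink

beta_hat = minimize(objective, x0=np.zeros(p)).x
print("estimated coefficients:", np.round(beta_hat, 2))
```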
Pattern recognition approaches, such as the Support Vector Machine (SVM), have been successfully used to classify groups of individuals based on their patterns of brain activity or structure. However, these approaches focus on finding group differences and are not applicable to situations where one is interested in assessing deviations from a specific class or population. In the present work we propose an application of the one-class SVM (OC-SVM) to investigate whether patterns of fMRI response to sad facial expressions in depressed patients would be classified as outliers in relation to patterns of healthy control subjects. We defined features based on whole-brain voxels and anatomical regions. In both cases we found a significant correlation between the OC-SVM predictions and the patients' Hamilton Rating Scale for Depression (HRSD) scores, i.e., the more depressed the patients were, the more of an outlier they were. In addition, the OC-SVM split the patient group into two subgroups whose membership was associated with future response to treatment. When applied to region-based features, the OC-SVM classified 52% of patients as outliers. However, among the patients classified as outliers, 70% did not respond to treatment, and among those classified as non-outliers, 89% responded to treatment. In addition, 89% of the healthy controls were classified as non-outliers.
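A minimal sketch of the described analysis, assuming generic per-subject feature vectors: train a one-class SVM on healthy controls only, score patients by their signed distance to the decision boundary, and correlate that score with HRSD. The feature matrix, sample sizes, and nu/gamma settings are placeholders, not the study's fMRI features.

```python
# Minimal sketch of the OC-SVM analysis: fit on healthy controls only, then
# score patients; negative decision values indicate outliers relative to the
# control class. Features and hyperparameters are placeholders.
import numpy as np
from sklearn.svm import OneClassSVM
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
controls = rng.normal(size=(19, 100))             # healthy-control feature vectors (synthetic)
patients = rng.normal(loc=0.3, size=(19, 100))    # patient feature vectors (synthetic)
hrsd = rng.integers(15, 35, size=19)              # Hamilton depression scores (synthetic)

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(controls)

scores = ocsvm.decision_function(patients)        # < 0 means "outlier" w.r.t. controls
is_outlier = ocsvm.predict(patients) == -1
r, p = pearsonr(scores, hrsd)
print(f"outlier fraction: {is_outlier.mean():.2f}, correlation with HRSD: r={r:.2f} (p={p:.3f})")
```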
Sustainability orientation has a positive effect on startups' initial valuation and a negative effect on their post-funding financial performance. All else equal, improving sustainability orientation by one standard deviation increases startups' funding amount by 28% and decreases investors' abnormal returns per post-funding year by 16%. The results hold in a large sample of blockchain-based crowdfunding campaigns, also known as Initial Coin Offerings (ICOs) or token offerings. A key contribution is a machine-learning approach to assess startups' Environmental, Social, and Governance (ESG) properties from textual data, which we make readily available at www.SustainableEntrepreneurship.org.
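The paper's released ESG model is available at the linked site; the sketch below is only a generic stand-in showing how a sustainability-orientation score could be derived from startup texts with a bag-of-words classifier. The training snippets and labels are invented for illustration.

```python
# Generic stand-in (not the paper's released model): score the sustainability
# orientation of startup texts with a TF-IDF bag-of-words logistic regression.
# Training snippets and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "our protocol funds renewable energy projects and carbon offsets",
    "token holders govern the treasury with transparent on-chain voting",
    "high-frequency trading bot maximizing arbitrage profits",
    "play-to-earn casino game with leveraged token rewards",
]
train_labels = [1, 1, 0, 0]  # 1 = sustainability-oriented, 0 = not (hypothetical)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_whitepaper = "we build decentralized solar micro-grids for rural communities"
score = model.predict_proba([new_whitepaper])[0, 1]
print(f"estimated sustainability orientation: {score:.2f}")
```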
Motivation: Gaussian mixture models (GMMs) are probabilistic models commonly used in biomedical research to detect subgroup structures in data sets with one-dimensional information. Reliable model parameterization requires that the number of modes, i.e., states of the generating process, is known. However, this is rarely the case for empirically measured biomedical data. Several implementations are available that estimate GMM parameters differently. This work aims to provide a comparative evaluation of automated GMM fitting methods.
Results and conclusions: The performance of commonly used algorithms for automatic parameterization and mode number determination was compared with respect to reproducing the ground truth of generated data derived from multiple normal distributions. Four main variants of Gaussian mode number detection algorithms and five variants of GMM parameter estimation methods were tested in a combinatory scenario. The combination of the best-performing mode number determination algorithms and GMM parameter estimation methods was then tested on artificial and real-life data sets known to display a GMM structure. None of the tested methods correctly determined the underlying data structure consistently. The likelihood ratio test had the best performance in identifying the mode number associated with the best GMM fit of the data distribution, while the Markov chain Monte Carlo (MCMC) algorithm was best for GMM parameter estimation. The combination of these two methods was consistently among the best and overall outperformed the available implementations.
Implementation: An automated tool for the detection of GMM based structures in (biomedical) datasets was created based on the present results and made freely available in the R library “opGMMassessment” at https://cran.r-project.org/package=opGMMassessment.
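The released R tool is referenced above; purely as a simple Python illustration of the general task (not the recommended likelihood-ratio-test plus MCMC combination), the sketch below selects the number of Gaussian modes by BIC and estimates the parameters by EM using scikit-learn on synthetic one-dimensional data.

```python
# Simple illustration of automated GMM fitting (not the paper's recommended
# LRT + MCMC combination): choose the number of modes by BIC and estimate the
# parameters by EM with scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                       rng.normal(1.5, 0.8, 200)]).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(data))
best = fits[best_k]
print("selected number of modes:", best_k)
print("means:", best.means_.ravel().round(2))
print("weights:", best.weights_.round(2))
```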