Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and the representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world that measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models that adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were highly consistent across memory tests. IRT methods for equating scores across AVLTs agreed with held-out data from dually administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
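The site-effect removal step described above can be illustrated with a simplified location-scale harmonization: each site's mean and variance are aligned to the pooled values. This is a stand-in for the full empirical Bayes (ComBat-style) procedure, which additionally shrinks per-site parameters toward a common prior; function and variable names are illustrative, not the study's code.

```python
import numpy as np

def harmonize_sites(scores, sites):
    """Align each site's mean and variance to the pooled mean and
    variance. A simplified sketch of ComBat-style harmonization,
    omitting the empirical Bayes shrinkage of site parameters."""
    scores = np.asarray(scores, dtype=float)
    sites = np.asarray(sites)
    grand_mean = scores.mean()
    grand_std = scores.std()
    out = np.empty_like(scores)
    for s in np.unique(sites):
        m = sites == s
        mu, sd = scores[m].mean(), scores[m].std()
        out[m] = (scores[m] - mu) / sd * grand_std + grand_mean
    return out
```

After this step, between-site differences in location and scale are removed by construction, so any remaining variance reflects subject-level differences and covariates rather than acquisition site.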
The emerging disciplines of lipidomics and metabolomics show great potential for the discovery of diagnostic biomarkers, but appropriate pre-analytical sample-handling procedures are critical because several analytes are prone to ex vivo distortions during sample collection. To test how the intermediate storage temperature and storage period of plasma samples from K3EDTA whole-blood collection tubes affect analyte concentrations, we assessed samples from non-fasting healthy volunteers (n = 9) for a broad spectrum of metabolites, including lipids and lipid mediators, using a well-established LC-MS-based platform. We used a fold change-based approach as a relative measure of analyte stability to evaluate 489 analytes, employing a combination of targeted LC-MS/MS and LC-HRMS screening. The concentrations of many analytes were found to be reliable, often justifying less strict sample handling; however, certain analytes were unstable, supporting the need for meticulous processing. We make four data-driven recommendations for sample-handling protocols with varying degrees of stringency, based on the maximum number of analytes and the feasibility of routine clinical implementation. These protocols also enable the simple evaluation of biomarker candidates based on their analyte-specific vulnerability to ex vivo distortions. In summary, pre-analytical sample handling has a major effect on the suitability of certain metabolites as biomarkers, including several lipids and lipid mediators. Our sample-handling recommendations will increase the reliability and quality of samples when such metabolites are necessary for routine clinical diagnosis.
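The fold change-based stability screen described above can be sketched as follows; the 20% cutoff, symmetric criterion, and array layout are assumptions for illustration, not the thresholds used in the study.

```python
import numpy as np

def flag_unstable(baseline, stored, fc_limit=1.2):
    """Flag analytes whose median ex vivo fold change (stored vs.
    immediately processed samples) falls outside a symmetric band.
    Rows are subjects, columns are analytes; fc_limit=1.2 flags a
    >20% median increase or decrease (illustrative cutoff)."""
    baseline = np.asarray(baseline, dtype=float)
    stored = np.asarray(stored, dtype=float)
    fold_change = np.median(stored / baseline, axis=0)
    unstable = (fold_change > fc_limit) | (fold_change < 1.0 / fc_limit)
    return unstable, fold_change
```

Using the median fold change across subjects makes the screen robust to single outlier samples, which matters with a small cohort (n = 9).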
Background: Trauma may be associated with significant to life-threatening blood loss, which in turn may increase the risk of complications and death, particularly in the absence of adequate treatment. Hydroxyethyl starch (HES) solutions are used for volume therapy to treat hypovolemia due to acute blood loss, maintaining or re-establishing hemodynamic stability with the ultimate goal of avoiding organ hypoperfusion and cardiovascular collapse. The current study compares a 6% HES 130 solution (Volulyte 6%) with an electrolyte solution (Ionolyte) for volume replacement therapy in adult patients with traumatic injuries, as requested by the European Medicines Agency to gain more insight into the safety and efficacy of HES in trauma care.
Methods: TETHYS is a pragmatic, prospective, randomized, controlled, double-blind, multicenter, multinational trial performed in two parallel groups. Eligible consenting adults ≥ 18 years, with an estimated blood loss of ≥ 500 ml, and in whom initial surgery is deemed necessary within 24 h after blunt or penetrating trauma, will be randomized to receive intravenous treatment at an individualized dose with either a 6% HES 130 or an electrolyte solution, for a maximum of 24 h or until reaching the maximum daily dose of 30 ml/kg body weight, whichever occurs first. Sample size is estimated as 175 patients per group, 350 patients total (α = 0.025 one-tailed, power 1 − β = 0.8). The composite primary endpoint, evaluated in an exploratory manner, will be 90-day mortality and 90-day renal failure, defined as AKIN stage ≥ 2, RIFLE injury/failure stage, or use of renal replacement therapy (RRT) during the first 3 months. Secondary efficacy and safety endpoints are fluid administration and balance, changes in vital signs and hemodynamic status, changes in laboratory parameters including renal function, coagulation, and inflammation biomarkers, incidence of adverse events during the treatment period, hospital and intensive care unit (ICU) length of stay, fitness for ICU or hospital discharge, and duration of mechanical ventilation and/or RRT.
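A per-group sample size of this kind follows from the standard normal-approximation calculation for comparing two proportions. The sketch below uses only the standard library; the event rates passed in are placeholders, since the protocol's assumed rates are not given in the abstract.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.025, power=0.8):
    """Normal-approximation sample size per group for detecting a
    difference between two proportions with a one-sided test at
    significance level alpha and the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)   # critical value for one-tailed alpha
    z_beta = z.inv_cdf(power)        # quantile corresponding to power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)
```

Smaller assumed differences between groups drive the required n up quadratically, which is why the protocol's assumed event rates dominate the 175-per-group figure.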
Discussion: This pragmatic study will increase the evidence on safety and efficacy of 6% HES 130 for treatment of hypovolemia secondary to acute blood loss in trauma patients.
Trial registration: Registered in EudraCT, No. 2016-002176-27 (21 April 2017), and ClinicalTrials.gov, ID NCT03338218 (9 November 2017).
Background & Aims: In patients with ACLF, adequate risk stratification is essential, especially for liver transplant allocation, since ACLF is associated with high short-term mortality. The CLIF-C ACLF score is the best prognostic model for predicting outcome in ACLF patients. While lung failure is generally regarded as a signum malum in ICU care, this study aims to evaluate and quantify the role of pulmonary impairment on outcome in ACLF patients.
Methods: In this retrospective study, 498 patients with liver cirrhosis admitted to the IMC/ICU were included. ACLF was defined according to EASL-CLIF criteria. Pulmonary impairment was classified into three groups: unimpaired ventilation, need for mechanical ventilation, and defined pulmonary failure. These factors were analysed in different cohorts, including a propensity score-matched ACLF cohort.
Results: Mechanical ventilation and pulmonary failure were identified as independent risk factors for increased short-term mortality. In matched ACLF patients, the presence of pulmonary failure showed the highest 28-day mortality (83.7%), whereas mortality rates in ACLF with mechanical ventilation (67.3%) and ACLF without pulmonary impairment (38.8%) were considerably lower (p < .001). Especially in patients with pulmonary impairment, the CLIF-C ACLF score showed poor predictive accuracy. Adjusting the CLIF-C ACLF score for the grade of pulmonary impairment improved the prediction significantly.
Conclusions: This study highlights that not only pulmonary failure but also mechanical ventilation is associated with worse prognosis in ACLF patients. The grade of pulmonary impairment should be considered in the risk assessment in ACLF patients. The new score may be useful in the selection of patients for liver transplantation.
(1) Background: The aim of our study was to identify specific risk factors for fatal outcome in critically ill COVID-19 patients. (2) Methods: Our data set consisted of 840 patients enrolled in the LEOSS registry. Using lasso regression for variable selection, a multifactorial logistic regression model was fitted to the response variable survival. Specific risk factors and their odds ratios were derived. A nomogram was developed as a graphical representation of the model. (3) Results: 14 variables were identified as independent factors contributing to the risk of death for critically ill COVID-19 patients: age (OR 1.08, CI 1.06–1.10), cardiovascular disease (OR 1.64, CI 1.06–2.55), pulmonary disease (OR 1.87, CI 1.16–3.03), baseline statin treatment (OR 0.54, CI 0.33–0.87), oxygen saturation (unit = 1%, OR 0.94, CI 0.92–0.96), leukocytes (unit 1000/μL, OR 1.04, CI 1.01–1.07), lymphocytes (unit 100/μL, OR 0.96, CI 0.94–0.99), platelets (unit 100,000/μL, OR 0.70, CI 0.62–0.80), procalcitonin (unit ng/mL, OR 1.11, CI 1.05–1.18), kidney failure (OR 1.68, CI 1.05–2.70), congestive heart failure (OR 2.62, CI 1.11–6.21), severe liver failure (OR 4.93, CI 1.94–12.52), and a quick SOFA score of 3 (OR 1.78, CI 1.14–2.78). The nomogram graphically displays the importance of these factors for mortality. (4) Conclusions: There are risk factors that are specific to the subpopulation of critically ill COVID-19 patients.
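The variable-selection step above, lasso (L1-penalized) logistic regression with odds ratios read off the exponentiated coefficients, can be sketched on synthetic data. The feature count, penalty strength C, and data are illustrative, not the LEOSS variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 candidate predictors, only the first two truly
# affect the (binary) outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))

# L1 penalty drives coefficients of uninformative predictors to
# exactly zero, performing variable selection.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
selected = np.flatnonzero(model.coef_[0] != 0)
odds_ratios = np.exp(model.coef_[0][selected])
```

In the study's setting, the selected coefficients would then be refitted or displayed directly as the nomogram's axes, with exp(coefficient) giving each variable's odds ratio per unit.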
Simple Summary: Acute myeloid leukemia (AML) is a genetically heterogeneous disease. Clinical phenotypes of frequent mutations and their impact on patient outcome are well established. However, the role of rare mutations often remains elusive. We retrospectively analyzed 1529 newly diagnosed and intensively treated AML patients for mutations of BCOR and BCORL1. We report a distinct co-mutational pattern that suggests a role in disease progression rather than initiation, especially affecting mechanisms of DNA-methylation. Further, we found loss-of-function mutations of BCOR to be independent markers of poor outcomes in multivariable analysis. Therefore, loss-of-function mutations of BCOR need to be considered for AML management, as they may influence risk stratification and subsequent treatment allocation.
Abstract: Acute myeloid leukemia (AML) is characterized by recurrent genetic events. Mutations of the BCL6 corepressor (BCOR) and its homolog, the BCL6 corepressor-like 1 (BCORL1), have been reported to be rare but recurrent in AML. Previously, smaller studies have reported conflicting results regarding their impact on outcomes. Here, we retrospectively analyzed a large cohort of 1529 patients with newly diagnosed and intensively treated AML. BCOR and BCORL1 mutations were found in 71 (4.6%) and 53 patients (3.5%), respectively. Frequently co-mutated genes were DNMT3A, TET2 and RUNX1. Mutated BCORL1 and loss-of-function mutations of BCOR were significantly more common in the ELN2017 intermediate-risk group. Patients harboring loss-of-function mutations of BCOR had significantly reduced median event-free survival (HR = 1.464 (95% confidence interval (CI): 1.005–2.134), p = 0.047) and relapse-free survival (HR = 1.904 (95% CI: 1.163–3.117), p = 0.01), and a trend toward reduced overall survival (HR = 1.495 (95% CI: 0.990–2.258), p = 0.056) in multivariable analysis. Our study establishes a novel role for loss-of-function mutations of BCOR in risk stratification in AML, which may influence treatment allocation.
Purpose: While advanced COVID-19 necessitates medical intervention and hospitalization, mild COVID-19 does not. Identifying patients at risk of progressing to advanced COVID-19 may guide treatment decisions, particularly by better prioritizing patients in need of hospitalization.
Methods: We developed a machine learning-based predictor for deriving a clinical score identifying patients with asymptomatic/mild COVID-19 at risk of progressing to advanced COVID-19. Clinical data from SARS-CoV-2 positive patients from the multicenter Lean European Open Survey on SARS-CoV-2 Infected Patients (LEOSS) were used for discovery (2020-03-16 to 2020-07-14) and validation (data from 2020-07-15 to 2021-02-16).
Results: The LEOSS dataset contains 473 baseline patient parameters measured at the first patient contact. After training the predictor model on a training dataset comprising 1233 patients, 20 of the 473 parameters were selected for the predictor model. From the predictor model, we delineated a composite predictive score (SACOV-19, Score for the prediction of an Advanced stage of COVID-19) with eleven variables. In the validation cohort (n = 2264 patients), we observed good prediction performance with an area under the curve (AUC) of 0.73 ± 0.01. Besides temperature, age, body mass index and smoking habit, variables indicating pulmonary involvement (respiration rate, oxygen saturation, dyspnea), inflammation (CRP, LDH, lymphocyte counts), and acute kidney injury at diagnosis were identified. For better interpretability, the predictor was translated into a web interface.
Conclusion: We present a machine learning-based predictor model and a clinical score for identifying patients at risk of developing advanced COVID-19.
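The reported discrimination metric, an AUC of 0.73, has a simple rank interpretation: the probability that a randomly chosen progressing patient receives a higher score than a randomly chosen non-progressing one. A minimal stand-alone sketch (illustrative names, not the SACOV-19 implementation):

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counted half."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    # compare every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, so 0.73 ± 0.01 on the held-out validation cohort indicates moderate but useful discrimination.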
Autophagy is a core molecular pathway for the preservation of cellular and organismal homeostasis. Pharmacological and genetic interventions impairing autophagy responses promote or aggravate disease in a plethora of experimental models. Consistently, mutations in autophagy-related processes cause severe human pathologies. Here, we review and discuss preclinical data linking autophagy dysfunction to the pathogenesis of major human disorders including cancer as well as cardiovascular, neurodegenerative, metabolic, pulmonary, renal, infectious, musculoskeletal, and ocular disorders.
Peri-implantitis: summary and consensus statements of group 3. The 6th EAO Consensus Conference 2021
Objective: To evaluate the influence of implant and prosthetic components on peri-implant tissue health. A further aim was to evaluate peri-implant soft-tissue changes following surgical peri-implantitis treatment. Materials and methods: Group discussions based on two systematic reviews (SR) and one critical review (CR) addressed (i) the influence of implant material and surface characteristics on the incidence and progression of peri-implantitis, (ii) implant and restorative design elements and the associated risk for peri-implant diseases, and (iii) peri-implant soft-tissue level changes and patient-reported outcomes following peri-implantitis treatment. Consensus statements, clinical recommendations, and implications for future research were discussed within the group and approved during plenary sessions. Results: Data from preclinical in vivo studies demonstrated significantly greater radiographic bone loss and an increased area of inflammatory infiltrate at modified compared to non-modified surface implants. Limited clinical data did not show differences between modified and non-modified implant surfaces in the incidence or progression of peri-implantitis (SR). There is some evidence that restricted accessibility for oral hygiene and an emergence angle of >30° combined with a convex emergence profile of the abutment/prosthesis are associated with an increased risk for peri-implantitis (CR). Reconstructive therapy for peri-implantitis resulted in significantly less soft-tissue recession when compared with access flap surgery. Implantoplasty or the adjunctive use of a barrier membrane had no influence on the extent of peri-implant mucosal recession following peri-implantitis treatment (SR).