Background: Patients with liver cirrhosis have a highly elevated risk of developing bacterial infections that significantly decrease survival rates. One of the most relevant infections is spontaneous bacterial peritonitis (SBP). Recently, NOD2 germline variants were found to be potential predictors of the development of infectious complications and mortality in patients with cirrhosis. The aim of the INCA (Impact of NOD2 genotype-guided antibiotic prevention on survival in patients with liver Cirrhosis and Ascites) trial is to investigate whether survival of this genetically defined high-risk group of patients with cirrhosis, defined by the presence of NOD2 variants, is improved by primary antibiotic prophylaxis of SBP.
Methods/Design: The INCA trial is a double-blind, placebo-controlled clinical trial with two parallel treatment arms (arm 1: norfloxacin 400 mg once daily; arm 2: placebo once daily; 12-month treatment and observational period). Balanced randomization of 186 eligible patients with stratification for the protein content of the ascites (<15 versus ≥15 g/L) and the study site is planned. In this multicenter national study, patients are recruited in at least 13 centers throughout Germany. The key inclusion criterion is the presence of a NOD2 risk variant in patients with decompensated liver cirrhosis. The most important exclusion criteria are current SBP or previous history of SBP and any long-term antibiotic prophylaxis. The primary endpoint is overall survival after 12 months of treatment. Secondary objectives are to evaluate whether the frequencies of SBP and other clinically relevant infections necessitating antibiotic treatment, as well as the total duration of unplanned hospitalization due to cirrhosis, differ in both study arms. Recruitment started in February 2014.
Discussion: Preventive strategies are required to avoid life-threatening infections in patients with liver cirrhosis, but unselected use of antibiotics can trigger resistant bacteria and worsen outcome. Thus, individualized approaches that direct intervention only to patients with the highest risk are urgently needed. This trial meets this need by suggesting stratified prevention based on genetic risk assessment. To our knowledge, the INCA trial is the first in the field of hepatology aimed at rapidly transferring and validating information on individual genetic risk into clinical decision algorithms.
Trial registrations: German Clinical Trials Register DRKS00005616. Registered 22 January 2014. EU Clinical Trials Register EudraCT 2013-001626-26. Registered 26 January 2015.
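For illustration, here is a minimal sketch of how the stratified, balanced 1:1 allocation described in the protocol could be generated with permuted blocks. The block size, the number of blocks and all names are assumptions for the example, not specifications from the trial.

```python
import random

def make_allocation_list(n_blocks, block_size=4, seed=0):
    """Permuted-block 1:1 allocation (norfloxacin vs. placebo) for one
    stratum; the block size of 4 is an illustrative assumption."""
    rng = random.Random(seed)
    arms = []
    for _ in range(n_blocks):
        block = ["norfloxacin"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)  # randomize order within the block
        arms.extend(block)
    return arms

# One independent allocation list per stratum:
# ascites protein (<15 vs. >=15 g/L) x study site (13 sites assumed).
strata = {}
for protein in ("<15 g/L", ">=15 g/L"):
    for site in range(1, 14):
        strata[(protein, site)] = make_allocation_list(n_blocks=4, seed=len(strata))
```

Each incoming patient would then be assigned the next free slot in the list matching their stratum, keeping the arms balanced within every protein-content/site combination.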
Testing for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) by RT-PCR is a vital public health tool in the pandemic. Self-collected samples are increasingly used as an alternative to nasopharyngeal swabs. Several studies suggested that they are sufficiently sensitive to be a useful alternative. However, there are limited data directly comparing several different types of self-collected materials to determine which material is preferable. A total of 102 predominantly symptomatic adults with a confirmed SARS-CoV-2 infection self-collected native saliva, a tongue swab, a mid-turbinate nasal swab, saliva obtained by chewing a cotton pad, and gargle lavage within 48 h of initial diagnosis. Sample collection was unsupervised. Both native saliva and gargling with tap water had high diagnostic sensitivity of 92.8% and 89.1%, respectively. Nasal swabs had a sensitivity of 85.1%, which was not significantly inferior to saliva (p = 0.092), but 16.6% of participants reported difficulty in self-collecting this sample. A tongue swab and saliva obtained by chewing a cotton pad had significantly lower sensitivities of 74.2% and 70.2%, respectively. Diagnostic sensitivity was related neither to the presence of clinical symptoms nor to age. Comparing the different self-collected materials, saliva, gargle lavage or mid-turbinate nasal swabs may be considered for most symptomatic patients. However, complementary experiments are required to verify that the differences in performance observed among the five sampling modes were not attributable to impaired collection.
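Because every participant provided all five specimen types, sensitivities are compared on paired samples; an exact McNemar test on the discordant pairs is one standard way to make such a comparison. The sketch below is illustrative and not taken from the paper; function and variable names are assumptions.

```python
from scipy.stats import binomtest

def sensitivity(detected):
    """Fraction of PCR-confirmed patients detected; `detected` is a list
    of 0/1 results for one specimen type."""
    return sum(detected) / len(detected)

def mcnemar_exact(a, b):
    """Exact McNemar test for paired binary detection results a and b
    (same patients, two specimen types): a binomial test on the
    discordant pairs."""
    b01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
    b10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    return binomtest(b01, b01 + b10, 0.5).pvalue
```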
Rationale: Postinfectious bronchiolitis obliterans (PIBO) is a rare, chronic respiratory condition that follows an acute insult due to a severe infection of the lower airways. Objectives: The objective of this study was to investigate the long-term course of bronchial inflammation and pulmonary function testing in children with PIBO. Methods: Medical charts of 21 children with PIBO were analyzed retrospectively at the Children's University Hospital Frankfurt/Main, Germany. Pulmonary function tests (PFTs) with an interval of at least 1 month were studied between 2002 and 2019. A total of 382 PFTs were analyzed retrospectively; per year, the two best PFTs (217 in total) were evaluated. Additionally, 56 sputum analyses were assessed and the sputum neutrophils were evaluated. Results: The evaluation of the 217 PFTs showed a decrease in FEV1 with a loss of 1.07% and a loss in z score of −0.075 per year. FEV1/FVC decreased by 1.44 per year. FVC remained stable, showing a nonsignificant increase of 0.006 in z score per year. However, absolute FEV1 and FVC increased significantly with height, by 0.032 L and 0.048 L per cm, respectively. Sputum neutrophils showed a significant increase of 2.12% per year. Conclusion: Our results demonstrate that pulmonary function in patients with PIBO decreased significantly, showing persistent obstruction over an average follow-up period of 8 years. However, persistent lung growth was revealed. In addition, pulmonary inflammation clearly persisted, with an increasing proportion of neutrophils in induced sputum. Patients did not present with a general susceptibility to respiratory infections.
The coronavirus pandemic continues to challenge global healthcare. Severely affected patients often need high doses of analgesics and sedatives. Sedation requirements were studied in critically ill coronavirus disease 2019 (COVID-19) patients in this prospective monocentric analysis. COVID-19 acute respiratory distress syndrome (ARDS) patients admitted between 1 April and 1 December 2020 were enrolled in the study. A statistical analysis of impeded sedation using mixed-effect linear regression models was performed. Overall, 114 patients were enrolled, requiring unusually high doses of sedatives. During 67.9% of the observation period, a combination of sedatives was required in addition to continuous analgesia. During ARDS therapy, 85.1% (n = 97) underwent prone positioning. Veno-venous extracorporeal membrane oxygenation (vv-ECMO) was required in 20.2% (n = 23) of all patients. vv-ECMO patients showed significantly higher sedation needs (p < 0.001). Patients with hepatic (p = 0.01) or renal (p = 0.01) dysfunction showed significantly lower sedation requirements. Apart from patient age (p = 0.01), we could not find any significant influence of pre-existing conditions. Age, vv-ECMO therapy and additional organ failure were demonstrated to be factors influencing sedation needs. Young patients and those receiving vv-ECMO usually require increased sedation for intensive care therapy. However, further studies are needed to elucidate the causes and mechanisms of impeded sedation.
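A hedged sketch of the kind of mixed-effect linear regression named above, with a random intercept per patient to account for repeated measurements; the column names, model formula and synthetic data are hypothetical, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_obs = 800  # hypothetical long-format data: repeated measurements per patient
df = pd.DataFrame({
    "patient_id": rng.integers(0, 114, n_obs),
    "age": rng.normal(60, 12, n_obs),
    "vv_ecmo": rng.integers(0, 2, n_obs),
    "hepatic_dysfunction": rng.integers(0, 2, n_obs),
    "sedative_dose": rng.normal(10, 3, n_obs),
})

# Random intercept per patient captures within-patient correlation.
model = smf.mixedlm("sedative_dose ~ age + vv_ecmo + hepatic_dysfunction",
                    data=df, groups=df["patient_id"])
print(model.fit().summary())
```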
High sedation needs of critically ill COVID-19 ARDS patients - a monocentric observational study
(2021)
Background: Therapy of severely affected coronavirus patients requiring intubation and sedation is still challenging. Recently, difficulties in sedating these patients have been discussed. This study aims to describe sedation practices in patients with coronavirus disease 2019 (COVID-19)-induced acute respiratory distress syndrome (ARDS). Methods: We performed a retrospective monocentric analysis of sedation regimens in critically ill intubated patients with respiratory failure who required sedation in our mixed 32-bed university intensive care unit. All mechanically ventilated adults with COVID-19-induced ARDS requiring continuously infused sedative therapy admitted between April 4, 2020, and June 30, 2020, were included. We recorded demographic data, sedative dosages, prone positioning, sedation levels and duration. Descriptive data analysis was performed; for additional analysis, a logistic regression with mixed effects was used. Results: In total, 56 patients (mean age 67 (±14) years) were included. The mean observed sedation period was 224 (±139) hours. To achieve the prescribed sedation level, two or three sedatives were needed in 48.7% and 12.8% of the cases, respectively. In cases with a triple sedation regimen, the combination of clonidine, esketamine and midazolam was observed most often (75.7%). Analgesia was achieved using sufentanil in 98.6% of the cases. The analysis showed that the majority of COVID-19 patients required unusually high sedation doses compared to those reported in the literature. Conclusion: The global pandemic continues to affect patients severely enough to require ventilation and sedation, but optimal sedation strategies are still lacking. Our observations suggest unusually high dosages of sedatives in mechanically ventilated patients with COVID-19. Prescribed sedation levels appear to be achievable only with combinations of several sedatives in most critically ill patients suffering from COVID-19-induced ARDS, and an association with the frequently required advanced critical care measures, including prone positioning and ECMO treatment, seems conceivable.
Introduction: Recommendations for venous thromboembolism (VTE) and deep venous thrombosis (DVT) prophylaxis using graduated compression stockings (GCS) are historically based and have been critically examined in recent publications. Existing guidelines are inconclusive as to whether the general use of GCS should be recommended.
Patients/Methods: 24 273 in-patients (general surgery and orthopedic patients) undergoing surgery between 2006 and 2016 were included in a retrospective single-center analysis. From January 2006 to January 2011, perioperative GCS were employed in addition to drug prophylaxis; from February 2011 to March 2016, patients received drug prophylaxis alone. According to German guidelines, all patients received venous thromboembolism prophylaxis with weight-adapted LMWH. Risk stratification (low risk, moderate risk, high risk) was based on the guideline of the American College of Chest Physicians. Data analysis was performed before and after propensity matching (PM). The defined primary endpoint was the incidence of symptomatic or fatal pulmonary embolism (PE). A secondary endpoint was the incidence of deep venous thrombosis (DVT).
Results: After risk stratification (low risk n = 16 483; moderate risk n = 4464; high risk n = 3326), a total of 24 273 patients were analyzed. Before PM, the relative risk for the occurrence of a PE or DVT was not increased by abstaining from GCS. After PM, two groups of 11 312 patients each, one with and one without GCS application, were formed. When comparing the two groups, the relative risk (RR) for the occurrence of a pulmonary embolism was: low risk 0.99 [CI95% 0.998–1.000]; moderate risk 0.999 [CI95% 0.95–1.003]; high risk 0.996 [CI95% 0.992–1.000] (p > 0.05). The incidence of PE in the group receiving LMWH alone was 0.1% (n = 16). In the group receiving LMWH + GCS, the incidence was 0.3% (n = 29). RR after PM was 0.999 [CI95% 0.998–1.00].
Conclusion: In contrast to prior studies with only small numbers of patients, our trial shows in a large group of patients at moderate and high risk of developing VTE that abstaining from GCS use does not increase the incidence of symptomatic or fatal PE or of symptomatic DVT.
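The study's exact matching specification is not stated in the abstract, so the following is only a simplified sketch of propensity matching followed by a relative-risk estimate: a logistic-regression propensity score and greedy 1:1 nearest-neighbour matching without replacement.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def matched_relative_risk(X, treated, event):
    """Greedy 1:1 nearest-neighbour propensity matching, then relative risk.
    X: covariate matrix; treated: 1 = LMWH + GCS, 0 = LMWH alone;
    event: 1 = PE/DVT occurred."""
    treated, event = np.asarray(treated), np.asarray(event)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in np.flatnonzero(treated == 1):
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[i] - ps[c]))  # closest score
        controls.remove(j)
        pairs.append((i, j))
    risk_treated = np.mean([event[i] for i, _ in pairs])
    risk_control = np.mean([event[j] for _, j in pairs])
    return risk_treated / risk_control
```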
Background: Epileptic seizures are common clinical features in patients with acute subdural hematoma (aSDH); however, diagnostic feasibility and therapeutic monitoring remain limited. Surface electroencephalography (EEG) is the major diagnostic tool for the detection of seizures, but it might not be sensitive enough to detect all subclinical or nonconvulsive seizures or status epilepticus. Therefore, we have planned a clinical trial to evaluate a novel treatment modality by perioperatively implanting subdural EEG electrodes to diagnose seizures; we will then treat the seizures under therapeutic monitoring and analyze the clinical benefit.
Methods: In a prospective nonrandomized trial, we aim to include 110 patients with aSDH. Only patients undergoing surgical removal of aSDH will be included; one arm will be treated according to the guidelines of the Brain Trauma Foundation, while the other arm will additionally receive a subdural grid electrode. The study's primary outcome is the comparison of incidence of seizures and time-to-seizure between the interventional and control arms. Invasive therapeutic monitoring will guide treatment with antiseizure drugs (ASDs). The secondary outcome will be the functional outcome for both groups as assessed via the Glasgow Outcome Scale and modified Rankin Scale both at discharge and during 6 months of follow-up. The tertiary outcome will be the evaluation of chronic epilepsy within 2-4 years of follow-up.
Discussion: The implantation of a subdural EEG grid electrode in patients with aSDH is expected to be effective in diagnosing seizures in a timely manner, facilitating treatment with ASDs and monitoring of treatment success. Moreover, the occurrence of epileptiform discharges prior to the manifestation of seizure patterns could be evaluated in order to identify high-risk patients who might benefit from prophylactic treatment with ASDs.
Trial registration: ClinicalTrials.gov identifier no. NCT04211233.
Seroconversion rates following influenza vaccination in patients with hematologic malignancies after hematopoietic stem cell transplantation (HSCT) are known to be lower compared to healthy adults. The aim of our diagnostic study was to determine the rate of seroconversion after 1 or 2 doses of a novel split-virion, inactivated, AS03-adjuvanted pandemic H1N1 influenza vaccine (A/California/7/2009) in HSCT recipients (ClinicalTrials.gov Identifier: NCT01017172). Blood samples were taken before and 21 days after a first dose and 21 days after a second dose of the vaccine. Antibody (AB) titers were determined by hemagglutination inhibition assay. Seroconversion was defined by either an AB titer of ≤1:10 before and ≥1:40 after vaccination, or ≥1:10 before and a ≥4-fold increase in AB titer 21 days after vaccination. Seventeen patients (14 allogeneic, 3 autologous HSCT) received 1 dose, and 11 of these patients received 2 doses of the vaccine. The rate of seroconversion was 41.2% (95% confidence interval [CI] 18.4-67.1) after the first and 81.8% (95% CI 48.2-97.7) after the second dose. Patients who failed to seroconvert after 1 dose of the vaccine were more likely to be receiving an immunosuppressive agent (P = .003), whereas time elapsed since HSCT, type of HSCT, age, sex, and chronic graft-versus-host disease did not differ compared to patients with seroconversion. In patients with hematologic malignancies after HSCT, the rate of seroconversion after a first dose of an adjuvanted H1N1 influenza A vaccine was poor, but increased after a second dose.
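The seroconversion rule quoted above translates directly into code; a minimal sketch, with titers passed as reciprocal values (10 for 1:10, 40 for 1:40):

```python
def seroconverted(pre_titer, post_titer):
    """Seroconversion per the definition above: pre <= 1:10 and
    post >= 1:40, or pre >= 1:10 with a >= 4-fold titer rise."""
    if pre_titer <= 10:
        return post_titer >= 40
    return post_titer >= 4 * pre_titer
```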
Intrahepatic cholangiocarcinoma (iCCA) is the most frequent subtype of cholangiocarcinoma (CCA), and the incidence has globally increased in recent years. In contrast to surgically treated iCCA, data on the impact of fibrosis on survival in patients undergoing palliative chemotherapy are missing. We retrospectively analyzed the cases of 70 patients diagnosed with iCCA between 2007 and 2020 in our tertiary hospital. Histopathological assessment of fibrosis was performed by an expert hepatobiliary pathologist. Additionally, the fibrosis-4 score (FIB-4) was calculated as a non-invasive surrogate marker for liver fibrosis. For overall survival (OS) and progression-free survival (PFS), Kaplan–Meier curves and Cox-regression analyses were performed. Subgroup analyses revealed a median OS of 21 months (95% CI = 16.7–25.2 months) and 16 months (95% CI = 7.6–24.4 months) for low and high fibrosis, respectively (p = 0.152). In non-cirrhotic patients, the median OS was 21.8 months (95% CI = 17.1–26.4 months), compared with 9.5 months (95% CI = 4.6–14.3 months) in cirrhotic patients (p = 0.007). In conclusion, patients with iCCA and cirrhosis receiving palliative chemotherapy have decreased OS rates, while fibrosis has no significant impact on OS or PFS. These patients should not be prevented from state-of-the-art first-line chemotherapy.
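The FIB-4 score used above as a non-invasive fibrosis surrogate has a standard published formula; a minimal implementation (the cutoffs in the comment are the commonly cited ones, not values from this study):

```python
from math import sqrt

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT)).
    Values > 3.25 are commonly read as suggesting advanced fibrosis,
    values < 1.45 as making it unlikely."""
    return (age_years * ast_u_l) / (platelets_10e9_l * sqrt(alt_u_l))
```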
Standard monitoring of heart rate, blood pressure and arterial oxygen saturation during endoscopy is recommended by current guidelines on procedural sedation. A number of studies indicated a reduction of hypoxic (arterial oxygenation < 90% for > 15 s) and severe hypoxic events (arterial oxygenation < 85%) by additional use of capnography. Therefore, U.S. and European guidelines comment that additional capnography monitoring can be considered in long or deep sedation. The Integrated Pulmonary Index® (IPI) is an algorithm-based monitoring parameter that combines oxygenation measured by pulse oximetry (arterial oxygenation, heart rate) and ventilation measured by capnography (respiratory rate, apnea > 10 s, partial pressure of end-tidal carbon dioxide [PetCO2]). The aim of this paper was to analyze the value of the IPI as a parameter for monitoring the respiratory status of patients receiving propofol sedation during the PEG procedure. Patients presenting for PEG placement under sedation were randomized 1:1 into either a standard monitoring group (SM) or a capnography monitoring group including IPI (IM). Heart rate, blood pressure and arterial oxygen saturation were monitored in SM. In IM, additional monitoring of PetCO2, respiratory rate and IPI was performed. Capnography and IPI values were recorded for all patients but were only visible to the endoscopic team for the IM group. IPI values range between 1 and 10 (10 = normal; 8–9 = within normal range; 7 = close to normal range, requires attention; 5–6 = requires attention and may require intervention; 3–4 = requires intervention; 1–2 = requires immediate intervention). Results on capnography versus standard monitoring of the same study population were published previously. A total of 147 patients (74 in SM and 73 in IM) were included in the present study. Hypoxic events occurred in 62 patients (42%) and severe hypoxic events in 44 patients (29%). Baseline characteristics were equally distributed in both groups. IPI = 1 and IPI < 7, as well as the parameters PetCO2 = 0 mmHg and apnea > 10 s, had a high sensitivity for hypoxic and severe hypoxic events, respectively (IPI = 1: 81%/81% [hypoxic/severe hypoxic event], IPI < 7: 82%/88%, PetCO2: 69%/68%, apnea > 10 s: 84%/84%). All four parameters had a low specificity for both hypoxic and severe hypoxic events (IPI = 1: 13%/12%, IPI < 7: 7%/7%, PetCO2: 29%/27%, apnea > 10 s: 7%/7%). In multivariate analysis, only SM and PetCO2 = 0 mmHg were independent risk factors for hypoxia. The IPI (IPI = 1 and IPI < 7) as well as the individual parameters PetCO2 = 0 mmHg and apnea > 10 s allow rapid and convenient assessment of the respiratory status in a morbid patient population. Sensitivity is good for most parameters, but specificity is poor. In conclusion, the IPI can be a useful metric to assess respiratory status during propofol sedation for PEG placement. However, the IPI was not superior to PetCO2 and apnea > 10 s.
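Sensitivity and specificity of each alarm parameter against hypoxic events can be computed from paired binary series; a minimal sketch with hypothetical inputs:

```python
def sens_spec(alarm, event):
    """Sensitivity and specificity of a binary alarm series (e.g. IPI < 7)
    against a reference event series (e.g. SpO2 < 90% for > 15 s)."""
    tp = sum(1 for a, e in zip(alarm, event) if a and e)
    fn = sum(1 for a, e in zip(alarm, event) if not a and e)
    tn = sum(1 for a, e in zip(alarm, event) if not a and not e)
    fp = sum(1 for a, e in zip(alarm, event) if a and not e)
    return tp / (tp + fn), tn / (tn + fp)
```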
The immune response is known to wane after vaccination with BNT162b2, but the role of age, morbidity and body composition is not well understood. We conducted a cross-sectional study in long-term care facilities (LTCFs) for the elderly. All study participants had completed two-dose vaccination with BNT162b2 five to seven months before sample collection. In 298 residents (median age 86 years, range 75–101), anti-SARS-CoV-2 receptor-binding domain IgG antibody (anti-RBD-IgG) concentrations were low (mean 51.60 BAU/ml) and inversely correlated with age. We compared the results to health care workers (HCWs) aged 18–70 years (n = 114, median age: 53 years), who had a higher mean anti-RBD-IgG concentration of 156.99 BAU/ml. Neutralization against the Delta variant was low in both groups (9.5% in LTCF residents and 31.6% in HCWs). The Charlson Comorbidity Index was inversely correlated with anti-RBD-IgG, but the body mass index (BMI) was not. A control group of 14 LTCF residents with known breakthrough infection had significantly higher antibody concentrations (mean 3,199.65 BAU/ml), and 85.7% had detectable neutralization against the Delta variant. Our results demonstrate low but recoverable markers of immunity in LTCF residents five to seven months after vaccination.
Background: The development of robotic systems has provided an alternative to frame-based stereotactic procedures. The aim of this experimental phantom study was to compare the mechanical accuracy of the Robotic Surgery Assistant (ROSA) and the Leksell stereotactic frame by reducing clinical and procedural factors to a minimum.
Methods: To precisely compare mechanical accuracy, a stereotactic system was chosen as reference for both methods. A thin-layer CT scan was performed with an acrylic phantom fixed to the frame and a localizer enabling the software to recognize the coordinate system. For each of the five phantom targets, two different trajectories were planned, resulting in 10 trajectories. A series of five repetitions was performed, each time based on a new CT scan. Hence, 50 trajectories were analyzed for each method. X-rays of the final cannula position were fused with the planning data. The coordinates of the target point and the endpoint of the robot- or frame-guided probe were visually determined using the robotic software. The target point error (TPE) was calculated as the Euclidean distance between them. The depth deviation along the trajectory and the lateral deviation were calculated separately.
Results: Robotics was significantly more accurate, with an arithmetic TPE mean of 0.53 mm (95% CI 0.41–0.55 mm) compared to 0.72 mm (95% CI 0.63–0.8 mm) in stereotaxy (p < 0.05). In robotics, the mean depth deviation along the trajectory was −0.22 mm (95% CI −0.25 to −0.14 mm). The mean lateral deviation was 0.43 mm (95% CI 0.32–0.49 mm). In frame-based stereotaxy, the mean depth deviation amounted to −0.20 mm (95% CI −0.26 to −0.14 mm), the mean lateral deviation to 0.65 mm (95% CI 0.55–0.74 mm).
Conclusion: Both the robotic and frame-based approach proved accurate. The robotic procedure showed significantly higher accuracy. For both methods, procedural factors occurring during surgery might have a more relevant impact on overall accuracy.
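The error decomposition described in the Methods (Euclidean TPE, depth deviation along the planned trajectory, orthogonal lateral deviation) can be written compactly; a sketch assuming all points are given in one common coordinate frame:

```python
import numpy as np

def target_point_error(entry, target, probe_tip):
    """Euclidean TPE plus its decomposition into a signed depth deviation
    along the planned trajectory and an orthogonal lateral deviation,
    so that tpe**2 == depth**2 + lateral**2."""
    entry, target, probe_tip = map(np.asarray, (entry, target, probe_tip))
    direction = (target - entry) / np.linalg.norm(target - entry)
    err = probe_tip - target
    tpe = float(np.linalg.norm(err))
    depth = float(np.dot(err, direction))              # along trajectory
    lateral = float(np.linalg.norm(err - depth * direction))
    return tpe, depth, lateral
```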
Estimating intraoperative blood loss is one of the daily challenges for clinicians. Despite the known inaccuracy of visual estimation by anaesthetists and surgeons, it is still the mainstay for estimating surgical blood loss. This review aims at highlighting the strengths and weaknesses of currently used measurement methods. A systematic review of studies on the estimation of blood loss was carried out. Studies investigating the accuracy of techniques for quantifying blood loss in vivo and in vitro were included. We excluded nonhuman trials and studies using only monitoring parameters to estimate blood loss. A meta-analysis was performed to evaluate systematic measurement errors of the different methods. Only studies compared against a validated reference, e.g. a haemoglobin extraction assay, were included. 90 studies met the inclusion criteria for the systematic review and were analyzed. Six studies were included in the meta-analysis, as only these were conducted with a validated reference. The mixed-effect meta-analysis showed the highest correlation to the reference for colorimetric methods (0.93, 95% CI 0.91–0.96), followed by gravimetric (0.77, 95% CI 0.61–0.93) and finally visual methods (0.61, 95% CI 0.40–0.82). The bias in estimated blood loss (ml) was lowest for colorimetric methods (57.59, 95% CI 23.88–91.3) compared to the reference, followed by gravimetric (326.36, 95% CI 201.65–450.86) and visual methods (456.51, 95% CI 395.19–517.83). Of the many studies included, only a few were compared with a validated reference. The majority of the studies chose known imprecise procedures as the method of comparison. Colorimetric methods offer the highest degree of accuracy in blood loss estimation. Systems that use colorimetric techniques have a significant advantage in the real-time assessment of blood loss.
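The abstract does not state which random-effects estimator was used, so the following is illustrative only: a common way to pool correlation coefficients across studies is a DerSimonian-Laird random-effects model on the Fisher-z scale.

```python
import numpy as np

def pool_correlations(r, n):
    """DerSimonian-Laird random-effects pooling of correlations r
    from studies with sample sizes n, on the Fisher-z scale."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                   # Fisher z-transform
    v = 1.0 / (n - 3.0)                 # approximate variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)  # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
    return float(np.tanh(z_re)), ci     # pooled r and its 95% CI
```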
Background: IL28B gene polymorphism is the best baseline predictor of response to interferon alfa-based antiviral therapies in chronic hepatitis C. Recently, a new IFN-L4 polymorphism was identified as the first potentially functional variant for the induction of IL28B expression. Individualization of interferon alfa-based therapies based on a combination of IL28B/IFN-L4 polymorphisms may help to optimize virologic outcome and economic resources.
Methods: Optimization of treatment outcome prediction was assessed by combination of different IL28B and IFN-L4 polymorphisms in patients with chronic HCV genotype 1 (n = 385), 2/3 (n = 267), and 4 (n = 220) infection treated with pegylated interferon alfa (PEG-IFN) and ribavirin with (n = 79) or without telaprevir. Healthy people from Germany (n = 283) and Egypt (n = 96) served as controls.
Results: Frequencies of the beneficial IL28B rs12979860 C/C genotype were lower in HCV genotype 1/4 infected patients in comparison to controls (20–35% vs. 46–47%); this was also true for ss469415590 TT/TT (20–35% vs. 45–47%). Single interferon-lambda SNPs (rs12979860, rs8099917, ss469415590) correlated with sustained virologic response (SVR) in genotype 1, 3, and 4 infected patients, while no association was observed for genotype 2. Interestingly, in genotype 3 infected patients, the best SVR prediction was based on the IFN-L4 genotype. Prediction of SVR with high accuracy (71–96%) was possible in genotype 1, 2, 3 and 4 infected patients who received PEG-IFN/ribavirin combination therapy by selection of the beneficial IL28B rs12979860 C/C and/or ss469415590 TT/TT genotypes (p<0.001). For triple therapy with first-generation protease inhibitors (PIs) (boceprevir, telaprevir), prediction of high SVR rates (90%) was based on the presence of at least one beneficial genotype of the 3 IFN-lambda SNPs.
Conclusion: IFN-L4 seems to be the best single predictor of SVR in genotype 3 infected patients. For optimized prediction of SVR with dual combination or first-generation PI triple therapies, grouping of interferon-lambda haplotypes may be helpful, with positive predictive values of 71–96%.
Triple therapy of chronic hepatitis C virus (HCV) infection with boceprevir (BOC) or telaprevir (TVR) leads to virologic failure in many patients, which is often associated with the selection of resistance-associated variants (RAVs). These resistance profiles are of importance for the selection of potential rescue treatment options. In this study, we performed population-based sequencing of baseline NS3 RAVs and investigated the sensitivity of NS3 phenotypes in an HCV replicon assay, together with clinical factors, for a prediction of treatment response in a cohort of 165 German and Swiss patients treated with a BOC- or TVR-based triple therapy. Overall, the prevalence of baseline RAVs was low, although the frequency of RAVs was higher in patients with virologic failure compared to those who achieved a sustained virologic response (SVR) (7% versus 1%, P = 0.06). The occurrence of RAVs was associated with a resistant NS3 quasispecies phenotype (P<0.001), but the sensitivity of phenotypes was not associated with treatment outcome (P = 0.2). The majority of single viral and host predictors of SVR were only weakly associated with treatment response. In multivariate analyses, low AST levels, female sex and an IFNL4 CC genotype were independently associated with SVR. However, a combined analysis of negative predictors revealed a significantly lower overall number of negative predictors in patients with SVR in comparison to individuals with virologic failure (P<0.0001), and the presence of 2 or fewer negative predictors was indicative of SVR. These results demonstrate that most single baseline viral and host parameters have a weak influence on the response to triple therapy, whereas the overall number of negative predictors has a high predictive value for SVR.
Interleukin-22 predicts severity and death in advanced liver cirrhosis: a prospective cohort study
(2012)
Background: Interleukin-22 (IL-22), recently identified as a crucial parameter of pathology in experimental liver damage, may determine survival in clinical end-stage liver disease. Systematic analysis of serum IL-22 in relation to morbidity and mortality of patients with advanced liver cirrhosis has not been performed so far.
Methods: This is a prospective cohort study including 120 liver cirrhosis patients and 40 healthy donors to analyze systemic levels of IL-22 in relation to survival and hepatic complications.
Results: A total of 71% of patients displayed liver cirrhosis-related complications at study inclusion. A total of 23% of the patients died during a mean follow-up of 196 ± 165 days. Systemic IL-22 was detectable in 74% of patients but only in 10% of healthy donors (P <0.001). Elevated levels of IL-22 were associated with ascites (P = 0.006), hepatorenal syndrome (P <0.0001), and spontaneous bacterial peritonitis (P = 0.001). Patients with elevated IL-22 (>18 pg/ml, n = 57) showed significantly reduced survival compared to patients with regular (≤18 pg/ml) levels of IL-22 (321 days versus 526 days, P = 0.003). Other factors associated with overall survival were high CRP (≥2.9 mg/dl, P = 0.005, hazard ratio (HR) 0.314, confidence interval (CI) (0.141 to 0.702)), elevated serum creatinine (P = 0.05, HR 0.453, CI (0.203 to 1.012)), presence of liver-related complications (P = 0.028, HR 0.258, CI (0.077 to 0.862)), model for end-stage liver disease (MELD) score ≥20 (P = 0.017, HR 0.364, CI (0.159 to 0.835)) and age (P = 0.011, HR 1.047, CI (1.011 to 1.085)). Adjusted multivariate Cox proportional-hazards analysis identified elevated systemic IL-22 levels as independent predictors of reduced survival (P = 0.007, HR 0.218, CI (0.072 to 0.662)).
Conclusions: In patients with liver cirrhosis, elevated systemic IL-22 levels are predictive for reduced survival independently from age, liver-related complications, CRP, creatinine and the MELD score. Thus, processes that lead to a rise in systemic interleukin-22 may be relevant for prognosis of advanced liver cirrhosis.
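A sketch of the kind of Cox proportional-hazards analysis described above, using the lifelines library on synthetic data; the column names, covariate set and effect sizes are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "il22_high": rng.integers(0, 2, n),   # 1 if IL-22 > 18 pg/ml
    "age": rng.normal(55, 10, n),
    "meld_ge_20": rng.integers(0, 2, n),  # 1 if MELD score >= 20
})
# Synthetic survival times, shorter when IL-22 is elevated.
df["days"] = np.ceil(rng.exponential(500 / (1 + df["il22_high"].to_numpy())))
df["died"] = rng.integers(0, 2, n)        # event indicator

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="died")
print(cph.summary[["exp(coef)", "p"]])    # hazard ratios and p-values
```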
Background and Aims: In patients with advanced liver cirrhosis due to chronic hepatitis C virus (HCV) infection, antiviral therapy with peginterferon and ribavirin is feasible only in selected cases, due to potentially life-threatening side effects. However, predictive factors associated with hepatic decompensation during antiviral therapy are poorly defined.
Methods: In a retrospective cohort study, 68 patients with HCV-associated liver cirrhosis (mean MELD score 9.18±2.72) were treated with peginterferon and ribavirin. Clinical events indicating hepatic decompensation (onset of ascites, hepatic encephalopathy, upper gastrointestinal bleeding, hospitalization) as well as laboratory data were recorded at baseline and during a follow-up period of 72 weeks after initiation of antiviral therapy. To monitor long-term sequelae of end-stage liver disease, an extended follow-up for HCC development, transplantation and death was applied (240 ± 136 weeks).
Results: Eighteen patients (26.5%) achieved a sustained virologic response. During the observational period a hepatic decompensation was observed in 36.8%. Patients with hepatic decompensation had higher MELD scores (10.84 vs. 8.23, p<0.001) and higher mean bilirubin levels (26.74 vs. 14.63 µmol/l, p<0.001), as well as lower serum albumin levels (38.2 vs. 41.1 g/l, p = 0.015), mean platelets (102.64 vs. 138.95/nl, p = 0.014) and mean leukocytes (4.02 vs. 5.68/nl, p = 0.002) at baseline as compared to those without decompensation. In the multivariate analysis the MELD score remained independently associated with hepatic decompensation (OR 1.56, 1.18–2.07; p = 0.002). When the patients were grouped according to their baseline MELD scores, hepatic decompensation occurred in 22%, 59%, and 83% of patients with MELD scores of 6–9, 10–13, and >14, respectively. Baseline MELD score was significantly associated with the risk for transplantation/death (p<0.001).
Conclusions: Our data suggest that the baseline MELD score predicts the risk of hepatic decompensation during antiviral therapy and thus contributes to decision making when antiviral therapy is discussed in HCV patients with advanced liver cirrhosis.
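The abstract does not specify which MELD variant was computed; the classic UNOS formula, consistent with the score range reported above, is implemented below as a minimal sketch.

```python
from math import log

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl):
    """Classic (pre-MELD-Na) UNOS MELD score. By convention, lab values
    below 1.0 are floored at 1.0 and creatinine is capped at 4.0 mg/dl."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    return round(9.57 * log(crea) + 3.78 * log(bili) + 11.2 * log(inr) + 6.43)
```

Note that the abstract reports bilirubin in µmol/l, whereas the classic formula expects mg/dl, so a unit conversion (dividing by 17.1) would be needed first.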
After myocardial infarction in the adult heart, the remaining, non-infarcted tissue adapts to compensate for the loss of functional tissue. This adaptation requires changes in gene expression networks, which are mostly controlled by transcription-regulating proteins. Long non-coding transcripts (lncRNAs) are now recognized for taking part in fine-tuning such gene programs. We identified and characterized the cardiomyocyte-specific lncRNA Sweetheart RNA (Swhtr), an approximately 10 kb long transcript divergently expressed from the cardiac core transcription factor gene Nkx2-5. We show that Swhtr is dispensable for normal heart development and function, but becomes essential for the tissue adaptation process after myocardial infarction. Re-expressing Swhtr from an exogenous locus rescues the Swhtr null phenotype. Genes depending on Swhtr after cardiac stress are significantly occupied, and therefore most likely regulated, by NKX2-5. Our results indicate a synergistic role for Swhtr and the developmentally essential transcription factor NKX2-5 in tissue adaptation after myocardial injury.
Association of mortality and early tracheostomy in patients with COVID-19: a retrospective analysis
(2022)
COVID-19 adds to the complexity of optimal timing for tracheostomy. Over the course of the pandemic, with expanding knowledge of the disease, many centers have changed their operating procedures and performed an early tracheostomy. We studied the data on early and delayed tracheostomy with regard to patient outcomes such as mortality. We performed a retrospective analysis of all tracheostomies at our institution in patients diagnosed with COVID-19 from March 2020 to June 2021. Time from intubation to tracheostomy and mortality of early (≤ 10 days) vs. late (> 10 days) tracheostomy were the primary objectives of this study. We used mixed Cox regression models to calculate the effect of distinct variables on events. We studied 117 tracheostomies. The interval from intubation to tracheostomy shortened significantly over the course of the pandemic (Spearman's correlation coefficient; rho = −0.44, p ≤ 0.001). Early tracheostomy was associated with a significant increase in mortality in uni- and multivariate analysis (hazard ratio 1.83, 95% CI 1.07–3.17, p = 0.029). The timing of tracheostomy in COVID-19 patients has a potentially critical impact on mortality. The timing of tracheostomy has changed during this pandemic, with a tendency toward earlier performance. Future prospective research is necessary to substantiate these results.
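The reported trend is a Spearman rank correlation between case order over the study period and the intubation-to-tracheostomy interval; a runnable sketch on synthetic data with an imposed downward drift (all names and numbers hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
case_order = np.arange(117)  # chronological index of each tracheostomy
# Interval drifts downward over the pandemic, plus noise; floor at 2 days.
days_to_trach = np.clip(18 - 0.08 * case_order + rng.normal(0, 4, 117), 2, None)

rho, p = spearmanr(case_order, days_to_trach)
print(f"rho = {rho:.2f}, p = {p:.3g}")
```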
CD4+ T cell lymphopenia predicts mortality from Pneumocystis pneumonia in kidney transplant patients
(2020)
Background: Pneumocystis jirovecii pneumonia (PcP) remains a life-threatening opportunistic infection after solid organ transplantation, even in the era of Pneumocystis prophylaxis. The association between risk of developing PcP and low CD4+ T cell counts has been well established. However, it is unknown whether lymphopenia in the context of post-renal transplant PcP increases the risk of mortality. Methods: We carried out a retrospective analysis of a cohort of kidney transplant patients with PcP (n = 49) to determine the risk factors for mortality associated with PcP. We correlated clinical and demographic data with the outcome of the disease. For CD4+ T cell counts, we used the Wilcoxon rank sum test for in-hospital mortality and a Cox proportional-hazards regression model for 60-day mortality. Results: In univariate analyses, high CRP, high neutrophil counts, CD4+ T cell lymphopenia, mechanical ventilation, and a high Acute Kidney Injury Network stage were associated with in-hospital mortality following presentation with PcP. In a receiver-operating characteristic (ROC) analysis, an optimum cutoff of ≤200 CD4+ T cells/µL predicted in-hospital mortality; CD4+ T cell lymphopenia remained a risk factor in a Cox regression model. Conclusions: A low CD4+ T cell count in kidney transplant recipients is a biomarker for disease severity and a risk factor for in-hospital mortality following presentation with PcP.
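An optimum ROC cutoff as in the analysis above is typically chosen by maximizing the Youden index; a sketch that negates the CD4+ counts so that lower values score as higher mortality risk (names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cd4_cutoff(cd4_counts, died_in_hospital):
    """Youden-index cutoff on a ROC curve. CD4+ counts are negated because
    lower counts indicate higher risk; the sign is flipped back on return."""
    fpr, tpr, thresholds = roc_curve(died_in_hospital, -np.asarray(cd4_counts))
    best = np.argmax(tpr - fpr)   # maximize sensitivity + specificity - 1
    return -thresholds[best]      # e.g. ~200 cells/uL in the cohort above
```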
After myocardial infarction in the adult heart, the remaining, non-infarcted tissue adapts to compensate for the loss of functional tissue. This adaptation requires changes in gene expression networks, which are mostly controlled by transcription-regulating proteins. Long non-coding transcripts (lncRNAs) take part in fine-tuning such gene programs. We describe and characterize the cardiomyocyte-specific lncRNA Sweetheart RNA (Swhtr), an approximately 10 kb long transcript divergently expressed from the cardiac core transcription factor gene Nkx2-5. We show that Swhtr is dispensable for normal heart development and function but becomes essential for the tissue adaptation process after myocardial infarction in murine males. Re-expressing Swhtr from an exogenous locus rescues the Swhtr null phenotype. Genes that depend on Swhtr after cardiac stress are significantly occupied, and therefore most likely regulated, by NKX2-5. The Swhtr transcript interacts with NKX2-5 and disperses upon hypoxic stress in cardiomyocytes, indicating an auxiliary role of Swhtr for NKX2-5 function in tissue adaptation after myocardial injury.