Objective: Combination antiretroviral therapy (cART) has markedly increased survival and quality of life in people living with HIV. With the advent of new treatment options, including single-tablet regimens, durability and efficacy of first-line cART regimens are evolving.
Methods: We analyzed data from the prospective multicenter German Clinical Surveillance of HIV Disease (ClinSurv) cohort of the Robert-Koch Institute. Kaplan–Meier and Cox proportional hazards models were run to examine the factors associated with treatment modification. Recovery after treatment initiation was analyzed comparing pre-cART viral load and CD4+ T-cell counts with follow-up data.
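The durability analysis rests on standard survival machinery. As a minimal sketch (illustrative only, not the ClinSurv analysis code), a Kaplan–Meier estimator treating first-line modification as the event and end of follow-up as censoring can be written as:

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: months on first-line cART for each patient;
    events: 1 = regimen modified (event), 0 = censored
    (still on the first-line regimen at last follow-up)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []  # (time, S(t)) after each time with at least one event
    for t, group in groupby(data, key=lambda x: x[0]):
        group = list(group)
        d = sum(e for _, e in group)   # modifications at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(group)        # events and censorings leave the risk set
    return curve
```

The median time to modification reported below is the time at which this survival curve crosses 0.5; Cox regression generalizes the comparison by modeling covariate effects on the hazard of modification.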
Results: We included 8788 patients who initiated cART between 2005 and 2017. The sample was predominantly male (n = 7040; 80.1%), of whom 4470 (63.5%) reported sex with men as the transmission risk factor. Overall, 4210 (47.9%) patients modified their first-line cART after a median time of 63 months (IQR 59–66). Regimens containing integrase strand transfer inhibitors (INSTI) were associated with significantly lower rates of treatment modification (adjusted hazard ratio 0.44; 95% CI 0.39–0.50) compared to protease inhibitor (PI)-based regimens. Decreased durability of first-line cART was significantly associated with female sex, a low CD4+ T-cell count, cART initiation in the later period (2011–2017), and being on a multi-tablet regimen (MTR).
Conclusions: Drug class and MTRs are significantly associated with treatment modification. INSTI-based regimens proved more durable than PI-based regimens.
Background: Patients with chronic kidney disease (CKD) are at high risk of myocardial infarction. Cardiac troponins are the biomarkers of choice for the diagnosis of acute myocardial infarction (AMI) without ST‐segment elevation (NSTE). In patients with CKD, troponin levels are often chronically elevated, which reduces their diagnostic utility when NSTE‐AMI is suspected. The aim of this study was to derive a diagnostic algorithm for serial troponin measurements in patients with CKD and suspected NSTE‐AMI.
Methods and Results: Two cohorts, 1494 patients from a prospective cohort study with high‐sensitivity troponin I (hs‐cTnI) measurements and 7059 cases from a clinical registry with high‐sensitivity troponin T (hs‐cTnT) measurements, were analyzed. The prospective cohort comprised 280 CKD patients (estimated glomerular filtration rate <60 mL/min/1.73 m2). The registry data set contained 1581 CKD patients. In both cohorts, CKD patients were more likely to have adjudicated NSTE‐AMI than non‐CKD patients. The specificities of hs‐cTnI and hs‐cTnT to detect NSTE‐AMI were reduced with CKD (0.82 versus 0.91 for hs‐cTnI and 0.26 versus 0.73 for hs‐cTnT) but could be restored by applying optimized cutoffs to either the first or a second measurement after 3 hours. The best diagnostic performance was achieved with an algorithm that incorporates serial measurements and rules in or out AMI in 69% (hs‐cTnI) and 55% (hs‐cTnT) of CKD patients.
Conclusions: The diagnostic performance of high‐sensitivity cardiac troponins in patients with CKD with suspected NSTE‐AMI is improved by use of an algorithm based on admission troponin and dynamic changes in troponin concentration.
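The rule-in/rule-out logic of such a serial-measurement algorithm can be sketched as follows; the three cutoffs here are hypothetical placeholders, not the optimized values derived in the study:

```python
def classify_nste_ami(tn0, tn3h, rule_out_cut=10.0, rule_in_cut=50.0, delta_cut=20.0):
    """Triage for suspected NSTE-AMI from an admission troponin (tn0)
    and a second measurement after 3 hours (tn3h), both in ng/L.
    All three cutoff parameters are illustrative placeholders."""
    if tn0 < rule_out_cut and tn3h < rule_out_cut:
        return "rule-out"          # both values below the rule-out cutoff: AMI excluded
    if tn0 >= rule_in_cut or (tn3h - tn0) >= delta_cut:
        return "rule-in"           # high admission value or relevant dynamic rise
    return "observe"               # neither criterion met: further work-up needed
```

Patients in the "observe" zone are those the abstract counts as not ruled in or out; raising the rule-out cutoff (as done for CKD patients) shrinks that zone at the cost of specificity.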
An accurate quantification of low viremic HCV RNA plasma samples has gained importance since the approval of direct-acting antivirals and since a single measurement predicts the necessity of a prolonged or shortened therapy. As reported previously, HCV quantification assays such as Abbott RealTime HCV and Roche COBAS AmpliPrep/COBAS TaqMan HCV version 2 (CTM v2) may vary in sensitivity and precision, particularly in low-level viremia. Importantly, substantial variations were previously demonstrated between some of these assays and the Roche High Pure System/COBAS TaqMan (HPS) reference assay, which was used to establish the clinical decision points in clinical studies. In this study, the reproducibility of assay performance across several laboratories was assessed by analysing quantification results generated by six independent laboratories (3× RealTime, 3× CTM v2) in comparison with one HPS reference laboratory. The 4th WHO Standard was diluted to 100, 25 and 10 IU/ml, and aliquots were tested in triplicate in 5 independent runs by each assay in the different laboratories to assess assay precision and detection rates. In a second approach, 2 clinical samples (GT 1a & GT 1b) were diluted to 100 and 25 IU/ml and tested as described above. While the result range for WHO 100 IU/ml replicates across all laboratories was similar in this analysis, the CVs of the individual RealTime laboratories ranged from 19.3 to 25.6 % and were lower than the CVs of the CTM v2 laboratories (26.1–47.3 %) as well as the CV of the HPS reference laboratory (34.9 %). At the WHO standard dilution of 25 IU/ml, 24 replicates were quantified by RealTime compared to 8 replicates with CTM v2. Results of clinical samples again revealed a higher variation of CTM v2 results compared to RealTime values (CVs at 100 IU/ml: RealTime 13.1–21.0 % and CTM v2 15.0–32.3 %; CVs at 25 IU/ml: RealTime 17.6–34.9 % and CTM v2 28.2–54.9 %).
These findings confirm the superior precision of RealTime versus CTM v2 at low-level viremia even across different laboratories including the new clinical decision point at 25 IU/ml. A highly precise monitoring of HCV viral load during therapy will remain crucial for patient management with regard to futility rules, therapy efficacy and SVR.
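The precision metric compared throughout, the coefficient of variation (CV), is simply the sample standard deviation relative to the mean. A minimal computation (the replicate values below are made up for illustration, not study data):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation in percent: sample SD / mean x 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# five hypothetical replicate quantifications of a nominal 25 IU/ml dilution
replicates = [22.0, 28.0, 19.0, 31.0, 25.0]
cv = cv_percent(replicates)  # ~19 %, i.e. within the RealTime range reported above
```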
Background: 15-20% of all patients initially diagnosed with colorectal cancer develop metastatic disease, and surgical resection remains the only potentially curative treatment available. Current 5-year survival following R0-resection of liver metastases is 28-39%, but recurrence eventually occurs in up to 70%. To date, adjuvant chemotherapy has not improved clinical outcomes significantly. The primary objective of the ongoing LICC trial (L-BLP25 In Colorectal Cancer) is to determine whether L-BLP25, an active cancer immunotherapy, extends recurrence-free survival (RFS) time over placebo in colorectal cancer patients following R0/R1 resection of hepatic metastases. L-BLP25 targets MUC1 glycoprotein, which is highly expressed in hepatic metastases from colorectal cancer. In a phase IIB trial, L-BLP25 has shown acceptable tolerability and a trend towards longer survival in patients with stage IIIB locoregional non-small-cell lung cancer (NSCLC).
Methods: This is a multinational, phase II, multicenter, randomized, double-blind, placebo-controlled trial with a sample size of 159 patients from 20 centers in 3 countries. Patients with stage IV colorectal adenocarcinoma limited to liver metastases are included. Following curative-intent complete resection of the primary tumor and of all synchronous/metachronous metastases, eligible patients are randomized 2:1 to receive either L-BLP25 or placebo. Those allocated to L-BLP25 receive a single dose of 300 mg/m2 cyclophosphamide (CP) 3 days before the first L-BLP25 dose, then primary treatment with s.c. L-BLP25 930 µg once weekly for 8 weeks, followed by s.c. L-BLP25 930 µg maintenance doses at 6-week (years 1 and 2) and 12-week (year 3) intervals unless recurrence occurs. In the control arm, CP is replaced by saline solution and L-BLP25 by placebo. Primary endpoint is the comparison of recurrence-free survival (RFS) time between groups. Secondary endpoints are overall survival (OS) time, safety, tolerability, and RFS/OS in MUC1-positive cancers. Exploratory immune response analyses are planned. The primary endpoint will be assessed in Q3 2016. Follow-up will end Q3 2017. Interim analyses are not planned.
Discussion: The design and implementation of such a vaccination study in colorectal cancer is feasible. The study will provide unbiased estimates of recurrence-free and overall survival in both groups.
Trial registration: EudraCT number 2011-000218-20
Background: Few studies have evaluated the impact of pre-treatment drug resistance (PDR) on response to combination antiretroviral treatment (cART) in children. The objective of this joint EuroCoord-CHAIN-EPPICC/PENTA project was to assess the prevalence of PDR mutations and their association with virological outcome in the first year of cART in children.
Methods: HIV-infected children <18 years initiating cART between 1998 and 2008 were included if they had at least one genotypic resistance test prior to cART initiation. We used the World Health Organization 2009 resistance mutation list and the Stanford algorithm to infer resistance to prescribed drugs. Time to virological failure (VF) was defined as the first of two consecutive HIV-RNA measurements > 500 copies/mL after 6 months of cART and was assessed by Cox proportional hazards models. All models were adjusted for baseline demographic, clinical, immunological and virological characteristics, calendar period of cART start, and initial cART regimen.
Results: Of 476 children, 88 % were vertically infected. At cART initiation, median (interquartile range) age was 6.6 years (2.1–10.1), CD4 cell count 297 cells/mm3 (98–639), and HIV-RNA 5.2 log10 copies/mL (4.7–5.7). Of 37 children (7.8 %, 95 % confidence interval (CI) 5.5–10.6) harboring a virus with ≥1 PDR mutation, 30 had a virus resistant to ≥1 of the prescribed drugs. Overall, the cumulative Kaplan–Meier estimate for virological failure was 19.8 % (95 % CI, 16.4–23.9). The cumulative risk of VF tended to be higher among children harboring a virus with PDR and resistant to ≥1 prescribed drug than among those receiving fully active cART: 32.1 % (17.2–54.8) versus 19.4 % (15.9–23.6) (P = 0.095). In multivariable analysis, younger age at cART initiation was associated with a higher risk of VF, with a 12 % risk reduction per additional year of age (HR 0.88; 95 % CI, 0.82–0.95; P < 0.001).
Conclusions: PDR was not significantly associated with a higher risk of VF in children in the first year of cART. The risk of VF decreased by 12 % per additional year of age at treatment initiation, which may be due to fading of PDR mutations over time. Lack of appropriate formulations, in particular for the younger age group, may be an important determinant of virological failure.
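The per-year hazard ratio reported in this abstract translates into risk terms as follows (a worked interpretation of the published estimate, not an additional analysis):

```python
hr_per_year = 0.88  # adjusted hazard ratio for virological failure per additional year of age

# a HR of 0.88 corresponds to a 12% lower hazard per additional year of age
reduction_per_year_pct = (1.0 - hr_per_year) * 100.0

# hazard ratios compound multiplicatively: under this model, a child
# 5 years older at cART initiation has roughly half the hazard of VF
hr_5_years = hr_per_year ** 5  # ~0.53
```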
Despite the recent availability of vaccines against severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2), there is an urgent need for specific anti-SARS-CoV-2 drugs. Monoclonal neutralizing antibodies are an important drug class in the global fight against the SARS-CoV-2 pandemic due to their ability to convey immediate protection and their potential to be used as both prophylactic and therapeutic drugs. Clinically used neutralizing antibodies against respiratory viruses are currently injected intravenously, which can lead to suboptimal pulmonary bioavailability and thus to a lower effectiveness. Here we describe DZIF-10c, a fully human monoclonal neutralizing antibody that binds the receptor-binding domain of the SARS-CoV-2 spike protein. DZIF-10c displays an exceptionally high neutralizing potency against SARS-CoV-2, retains full activity against the variant of concern (VOC) B.1.1.7 and still neutralizes the VOC B.1.351, although with reduced potency. Importantly, not only systemic but also intranasal application of DZIF-10c abolished the presence of infectious particles in the lungs of SARS-CoV-2 infected mice and mitigated lung pathology when administered prophylactically. Along with a favorable pharmacokinetic profile, these results highlight DZIF-10c as a novel human SARS-CoV-2 neutralizing antibody with high in vitro and in vivo antiviral potency. The successful intranasal application of DZIF-10c paves the way for clinical trials investigating topical delivery of anti-SARS-CoV-2 antibodies.
Biomarkers and bacterial pneumonia risk in patients with treated HIV infection: a case-control study (2013)
Background: Despite advances in HIV treatment, bacterial pneumonia continues to cause considerable morbidity and mortality in patients with HIV infection. Studies of biomarker associations with bacterial pneumonia risk in treated HIV-infected patients do not currently exist.
Methods: We performed a nested, matched, case-control study among participants randomized to continuous combination antiretroviral therapy (cART) in the Strategies for Management of Antiretroviral Therapy trial. Patients who developed bacterial pneumonia (cases) and patients without bacterial pneumonia (controls) were matched 1:1 on clinical center, smoking status, age, and baseline cART use. Baseline levels of club cell secretory protein 16 (CC16), surfactant protein D (SP-D), high-sensitivity C-reactive protein (hsCRP), interleukin-6 (IL-6), and d-dimer were compared between cases and controls.
Results: Cases (n = 72) and controls (n = 72) were 25.7% female, 51.4% black, 65.3% current smokers, 9.7% diabetic, 36.1% co-infected with Hepatitis B/C, and 75.0% were on cART at baseline. Median (IQR) age was 45 (41, 51) years with a CD4+ count of 553 (436, 690) cells/mm3. Baseline CC16 and SP-D were similar between cases and controls, but hsCRP was significantly higher in cases than controls (2.94 µg/mL in cases vs. 1.93 µg/mL in controls; p = 0.02). IL-6 and d-dimer levels were also higher in cases compared to controls, though the differences were not statistically significant (p-values 0.06 and 0.10, respectively).
Conclusions: In patients with cART-treated HIV infection, higher levels of systemic inflammatory markers were associated with increased bacterial pneumonia risk, while two pulmonary-specific inflammatory biomarkers, CC16 and SP-D, were not associated with bacterial pneumonia risk.
Objectives: The rising prevalence of multidrug-resistant organisms (MDRO) is a major health problem in patients with liver cirrhosis. The impact of MDRO colonization on mortality in liver transplantation (LT) candidates and recipients has not been determined in detail.
Methods: Patients consecutively evaluated and listed for LT in a tertiary German liver transplant center from 2008 to 2018 underwent screening for MDRO colonization, including methicillin-resistant Staphylococcus aureus (MRSA), multidrug-resistant gram-negative bacteria (MDRGN), and vancomycin-resistant enterococci (VRE). MDRO colonization and infection status were obtained at LT evaluation, at planned and unplanned hospitalizations, three months after graft allocation, or at last follow-up on the waiting list.
Results: In total, 351 patients were listed for LT, of whom 164 (47%) underwent LT after a median of 249 (range 0–1662) days. The incidence of MDRO colonization increased during waiting time for LT, and MDRO colonization was associated with increased mortality on the waiting list (HR = 2.57, p<0.0001). One patient was colonized with a carbapenem-resistant strain at listing, 9 patients acquired carbapenem-resistant gram-negative bacteria (CRGN) on the waiting list, and 4 more after LT. In total, 10 of these 14 patients died.
Conclusions: Colonization with MDRO is associated with increased mortality on the waiting list, but not in short-term follow-up after LT. Moreover, colonization with CRGN appears to be associated with high mortality in liver transplant candidates and recipients.
Background: Common ECG criteria such as ST-segment changes are of limited value in patients with suspected acute myocardial infarction (AMI) and bundle branch block or wide QRS complex. A large proportion of these patients do not suffer from an AMI, whereas those with an ST-elevation myocardial infarction (STEMI)-equivalent AMI benefit from aggressive treatment. The aim of the present study was to evaluate the diagnostic information of cardiac troponin I (cTnI) in hemodynamically stable patients with wide QRS complex and suspected AMI.
Methods: A prolonged QRS duration was observed in 417 of 1818 patients with suspected AMI presenting consecutively between 01/2007 and 12/2008 in a prospective multicenter observational study. Of these, 117 showed significant obstructive coronary artery disease (CAD), which was used as the diagnostic outcome variable. cTnI was determined at admission.
Results: Patients with significant CAD had higher cTnI levels than individuals without (median 250 ng/L vs. 11 ng/L; p<0.01). To identify patients needing a coronary intervention, cTnI yielded an area under the receiver operating characteristic curve of 0.849. Optimized cut-offs for a sensitivity-driven rule-out and a specificity-driven rule-in strategy were established (40 ng/L and 96 ng/L, respectively). Application of the specificity-optimized cut-off value led to a positive predictive value of 71% compared to 59% when using the 99th percentile cut-off. The sensitivity-optimized cut-off value was associated with a negative predictive value of 93% compared to 89% provided by the 99th percentile threshold.
Conclusion: cTnI determined in hemodynamically stable patients with suspected AMI and wide QRS complex, using optimized diagnostic thresholds, improves both rule-in and rule-out of significant obstructive CAD.
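The trade-off between a sensitivity-driven and a specificity-driven cutoff can be made concrete with the standard 2x2 diagnostic metrics; a minimal sketch on toy data (the values and labels below are invented for illustration, not study data):

```python
def diagnostic_metrics(values, labels, cutoff):
    """Sensitivity, specificity, PPV and NPV of a single cTnI cutoff.
    values: troponin concentrations in ng/L; labels: 1 = significant
    obstructive CAD, 0 = no significant CAD."""
    tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 1)
    fp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 0)
    fn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 0)
    return {
        "sensitivity": tp / (tp + fn),  # maximized by the rule-out cutoff
        "specificity": tn / (tn + fp),  # maximized by the rule-in cutoff
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Raising the cutoff trades sensitivity (and NPV) for specificity (and PPV), which is why the study reports two separate thresholds rather than a single one.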