Objectives: In this early retrospective cohort study, 26 patients with SARS-CoV-2 were treated with bamlanivimab or casirivimab/imdevimab, and the reduction in viral load was analyzed in relation to the clinical symptoms that developed.
Methods: Patients in the intervention groups received bamlanivimab or casirivimab/imdevimab. Untreated patients served as controls. Outcomes were assessed by clinical symptoms and by the change in log viral load from baseline, based on the cycle threshold, over a period of 18 days.
Results: The median decline in log viral load was greater in both intervention groups after 3 and 6 days than in the control group. However, at later time points, the decline in viral load was more pronounced in the control group. Mild symptoms of COVID-19 were observed in 6.3% of the intervention groups and in no patient of the control group. No patient treated with bamlanivimab, 18.8% of those treated with casirivimab/imdevimab, and 14.2% of the control group developed moderate symptoms. Severe symptoms were recorded only in the control group (14.2%), including one related death.
Conclusion: Treatment with monoclonal SARS-CoV-2 antibodies seems to accelerate the decline of viral load, especially in the first 6 days after administration, compared to controls. This may be associated with a reduced likelihood of a severe course of COVID-19.
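As a rough sense of scale for the viral-load figures above: shifts in the RT-PCR cycle threshold (Ct) can be converted to approximate log10 viral-load changes, since each PCR cycle roughly doubles the signal, so a Ct increase of about 3.32 cycles corresponds to about a 1-log10 drop. The sketch below is a generic illustration under that assumption, not the study's actual analysis pipeline; all values are invented.

```python
import math

def log10_viral_load_change(ct_baseline, ct_followup, pcr_efficiency=2.0):
    """Approximate change in log10 viral load from the shift in RT-PCR
    cycle threshold (Ct). Assumes each cycle multiplies the signal by
    `pcr_efficiency` (2.0 = perfect doubling). Illustrative helper only,
    not the study's actual pipeline.
    """
    delta_ct = ct_followup - ct_baseline  # positive shift = less virus
    return -delta_ct * math.log10(pcr_efficiency)

# A Ct rise from 20 to 30 over follow-up corresponds to roughly a
# 3-log10 decline in viral load under perfect doubling:
change = log10_viral_load_change(20.0, 30.0)
```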
Compressive knee joint contact force during walking is thought to be related to the initiation and progression of knee osteoarthritis. However, joint loading is often evaluated with surrogate measures, such as the external knee adduction moment, because of the complexity of computing joint contact forces. Statistical models have shown promising correlations between medial knee joint contact forces and knee adduction moments, particularly in individuals with knee osteoarthritis or after total knee replacement (R2 = 0.44–0.60). The purpose of this study was to evaluate how accurately model-based predictions of peak medial and lateral knee joint contact forces during walking could be estimated by linear mixed-effects models that include joint moments, for children and adolescents with and without valgus malalignment. Peak knee joint moments were strongly correlated (R2 > 0.85, p < 0.001) with both peak medial and lateral knee joint contact forces. The knee flexion and adduction moments were significant covariates in the models, strengthening the understanding of the statistical relationship between both moments and the medial and lateral knee joint contact forces. In the future, these models could be used to estimate peak knee joint contact forces from peak joint moments obtained with motion capture software, obviating the need for time-consuming musculoskeletal simulations.
Background: Postoperative complication rates using 3D visualization are rarely reported. The primary aim of our study is to detect a possible advantage of 3D visualization on postoperative complication rates in a real-world setting.
Method: Based on a sample size calculation powered to detect a medium effect size (i.e., that 3D significantly reduces postoperative complications), data from 287 patients with 3D visualization and 832 with a 2D procedure were screened. The groups underwent exact propensity score matching to ensure comparability. The comprehensive complication index (CCI) was calculated for every procedure, and operation time was determined.
Results: Of the 1078 patients included in the study, 213 exact propensity score-matched pairs were established. There was no significant difference between the groups in overall CCI (3D: 5.70 ± 13.63 vs. 2D: 3.37 ± 9.89; p = 0.076) or operation time (3D: 103.98 ± 93.26 min vs. 2D: 88.60 ± 69.32 min; p = 0.2569).
Conclusion: Our study shows no advantage of 3D over 2D laparoscopy regarding postoperative complications in a real-world setting; the secondary endpoint, operation time, was likewise not influenced by 3D.
Keywords: 3D laparoscopy; Comprehensive complication index; Propensity score matching
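Propensity score matching, as used in the study above, pairs each 3D patient with a comparable 2D control. A minimal greedy nearest-neighbour sketch on precomputed scores follows; note the study used *exact* matching, and the scores and caliper here are hypothetical.

```python
def greedy_match(treated_scores, control_scores, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on precomputed propensity
    scores (e.g. probability of receiving 3D visualization). Each
    control is used at most once; pairs farther apart than `caliper`
    are discarded. Simplified sketch, not the study's exact matching.
    """
    available = dict(enumerate(control_scores))
    pairs = []
    # Process treated patients in ascending score order (greedy).
    for t_idx, t_score in sorted(enumerate(treated_scores), key=lambda kv: kv[1]):
        if not available:
            break
        c_idx = min(available, key=lambda i: abs(available[i] - t_score))
        if abs(available[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del available[c_idx]  # each control matched at most once
    return pairs

# Hypothetical scores: third treated patient finds no control in caliper.
pairs = greedy_match([0.30, 0.52, 0.70], [0.29, 0.55, 0.90, 0.31])
```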
Objectives: Regarding reactogenicity and immunogenicity, heterologous COVID-19 vaccination regimens are considered an alternative to conventional immunization schemes.
Methods: Individuals receiving either heterologous (ChAdOx1-S [AstraZeneca, Cambridge, UK]/BNT162b2 [Pfizer-BioNTech, Mainz, Germany]; n = 306) or homologous (messenger RNA [mRNA]-1273 [Moderna, Cambridge, Massachusetts, USA]; n = 139) vaccination were asked to participate when receiving their second dose. Reactogenicity was assessed after 1 month, immunogenicity after 1, 3, and/or 6 months, including a third dose, through SARS-CoV-2 antispike immunoglobulin G, surrogate virus neutralization test, and a plaque reduction neutralization test against the Delta (B.1.167.2) and Omicron (B.1.1.529; BA.1) variants of concern.
Results: The overall reactogenicity was lower after heterologous vaccination. In both cohorts, SARS-CoV-2 antispike immunoglobulin G concentrations waned over time, with heterologous vaccination demonstrating higher neutralizing activity than homologous mRNA vaccination after 3 months, declining to low neutralizing levels in the Delta plaque reduction neutralization test after 6 months. At this point, 3.2% of the heterologous and 11.4% of the homologous cohort yielded low neutralizing activity against Omicron. After a third dose of an mRNA vaccine, ≥99% of vaccinees demonstrated positive neutralizing activity against Delta. Depending on the vaccination scheme, 60% to 87.5% of vaccinees demonstrated positive neutralizing activity against Omicron.
Conclusion: ChAdOx1-S/BNT162b2 vaccination demonstrated an acceptable reactogenicity and immunogenicity profile. A third dose of an mRNA vaccine is necessary to maintain neutralizing activity against SARS-CoV-2. However, variants of concern-adapted versions of the vaccines would be desirable.
Background: The development of robotic systems has provided an alternative to frame-based stereotactic procedures. The aim of this experimental phantom study was to compare the mechanical accuracy of the Robotic Surgery Assistant (ROSA) and the Leksell stereotactic frame by reducing clinical and procedural factors to a minimum.
Methods: To precisely compare mechanical accuracy, a stereotactic system was chosen as the reference for both methods. A thin-slice CT scan was performed with an acrylic phantom fixed to the frame and a localizer enabling the software to recognize the coordinate system. For each of the five phantom targets, two different trajectories were planned, resulting in 10 trajectories. A series of five repetitions was performed, each time based on a new CT scan. Hence, 50 trajectories were analyzed for each method. X-rays of the final cannula position were fused with the planning data. The coordinates of the target point and the endpoint of the robot- or frame-guided probe were visually determined using the robotic software. The target point error (TPE) was calculated as the Euclidean distance. The depth deviation along the trajectory and the lateral deviation were calculated separately.
Results: Robotics was significantly more accurate, with an arithmetic TPE mean of 0.53 mm (95% CI 0.41–0.55 mm) compared to 0.72 mm (95% CI 0.63–0.8 mm) in stereotaxy (p < 0.05). In robotics, the mean depth deviation along the trajectory was −0.22 mm (95% CI −0.25 to −0.14 mm). The mean lateral deviation was 0.43 mm (95% CI 0.32–0.49 mm). In frame-based stereotaxy, the mean depth deviation amounted to −0.20 mm (95% CI −0.26 to −0.14 mm), the mean lateral deviation to 0.65 mm (95% CI 0.55–0.74 mm).
Conclusion: Both the robotic and frame-based approach proved accurate. The robotic procedure showed significantly higher accuracy. For both methods, procedural factors occurring during surgery might have a more relevant impact on overall accuracy.
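The error decomposition reported above, target point error (TPE) as a Euclidean distance split into a signed depth deviation along the trajectory and a lateral deviation perpendicular to it, follows from simple vector geometry. A sketch with invented coordinates (not study data):

```python
import math

def deviation_components(entry, target, actual):
    """Split the target point error (TPE) into depth and lateral parts.

    `entry` and `target` define the planned trajectory; `actual` is the
    measured probe tip. TPE is the Euclidean distance between target and
    actual; depth deviation is the signed component along the trajectory
    (negative = short of target); lateral deviation is the perpendicular
    remainder. Illustrative geometry only.
    """
    axis = [t - e for t, e in zip(target, entry)]
    norm = math.sqrt(sum(a * a for a in axis))
    unit = [a / norm for a in axis]                     # trajectory direction
    error = [a - t for a, t in zip(actual, target)]     # tip minus target
    tpe = math.sqrt(sum(e * e for e in error))
    depth = sum(e * u for e, u in zip(error, unit))     # signed along-axis
    lateral = math.sqrt(max(tpe * tpe - depth * depth, 0.0))
    return tpe, depth, lateral

# Probe ends 0.2 mm short of target and 0.5 mm off-axis:
tpe, depth, lateral = deviation_components(
    entry=(0, 0, 0), target=(0, 0, 100), actual=(0.3, 0.4, 99.8)
)
```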
Introduction: Patients undergoing heart valve surgery are predominantly transferred postoperatively to the intensive care unit (ICU) under continuous sedation. Volatile anaesthetics are an increasingly used treatment alternative to intravenous substances in the ICU. Owing to their inhalational uptake and elimination, their pharmacological benefits have been repeatedly demonstrated. Volatile anaesthetics therefore appear suitable to meet the growing demands of fast-track cardiac surgery. However, their use requires special preparation at the bedside and trained medical and nursing staff, which might limit the pharmacological benefits. The aim of our work is to assess whether the temporal advantages of recovery under volatile sedation outweigh the higher effort of special preparation.
Methods and analysis: The study is designed to evaluate the differences between intravenous sedatives (n=48) and volatile sedatives (n=48) in continued intensive care sedation. It will be conducted as a prospective, randomised, controlled, single-blinded, monocentre trial in consenting adult patients undergoing heart valve surgery at a German university hospital. The study will also examine the necessary preparation time, staff consultation and overall feasibility of the chosen sedation method. For this purpose, continuation of sedation in the ICU with volatile sedatives constitutes one study arm, with intravenous sedatives as the comparison group. Due to rapid elimination and quick awakening after the termination of sedation, closer consultation between the attending physician and the ICU nursing staff is required, in addition to a prolonged setup time. Study analysis will include the required setup time, time from admission to extubation as the primary outcome and neurocognitive assessability. In addition, possible operation-specific factors (blood loss, complications), treatment parameters (catecholamine dosages, lung function) and laboratory results (acute kidney injury, acid–base balance (lactataemia), liver failure) will be collected as influencing factors. The study-relevant data will be extracted from the continuous digital records of the patient data management system after the patient has been discharged from the ICU. For statistical evaluation, 95% CIs will be calculated for the median time to extubation and neurocognitive assessability, and the association will be assessed with a Cox regression model. In addition, secondary binary outcome measures will be evaluated using Fisher's exact tests. Further descriptive and exploratory statistical analyses are also planned.
Ethics and dissemination: The study was approved by the Institutional Ethics Board of the University of Frankfurt, Germany (#20-1050). Informed consent of all individual patients will be obtained before randomisation. Results will be disseminated via publication in peer-reviewed journals.
Association of mortality and early tracheostomy in patients with COVID-19: a retrospective analysis
(2022)
COVID-19 adds to the complexity of determining the optimal timing for tracheostomy. Over the course of the pandemic, with expanding knowledge of the disease, many centers have changed their operating procedures and performed tracheostomy earlier. We studied data on early and delayed tracheostomy with regard to patient outcomes such as mortality. We performed a retrospective analysis of all tracheostomies at our institution in patients diagnosed with COVID-19 from March 2020 to June 2021. Time from intubation to tracheostomy and mortality of early (≤ 10 days) vs. late (> 10 days) tracheostomy were the primary objectives of this study. We used mixed Cox regression models to calculate the effect of distinct variables on events. We studied 117 tracheostomies. The interval from intubation to tracheostomy shortened significantly (Spearman's correlation coefficient: rho = −0.44, p ≤ 0.001) over the course of the pandemic. Early tracheostomy was associated with a significant increase in mortality in uni- and multivariate analyses (hazard ratio 1.83, 95% CI 1.07–3.17, p = 0.029). The timing of tracheostomy in COVID-19 patients has a potentially critical impact on mortality. The timing of tracheostomy has changed during this pandemic, with tracheostomies tending to be performed earlier. Future prospective research is necessary to substantiate these results.
Background: In recent months, Omicron variants of SARS-CoV-2 have become dominant in many regions of the world, and case numbers with Omicron subvariants BA.1 and BA.2 continue to increase. Due to numerous mutations in the spike protein, the efficacy of currently available vaccines, which are based on the Wuhan-Hu-1 isolate of SARS-CoV-2, is reduced, leading to breakthrough infections. The efficacy of monoclonal antibody therapy is also likely impaired.
Methods: In our in vitro study using A549-AT cells constitutively expressing ACE2 and TMPRSS2, we determined the neutralizing capacity of vaccine-elicited sera, convalescent sera and monoclonal antibodies against authentic SARS-CoV-2 Omicron BA.1 and BA.2, and compared it with that against Delta.
Findings: Almost no neutralisation of Omicron BA.1 and BA.2 was observed using sera from individuals vaccinated with two doses 6 months earlier, regardless of the type of vaccine received. Shortly after the booster dose, most sera from triple-BNT162b2-vaccinated individuals were able to neutralise both Omicron variants. In line with waning antibody levels three months after the booster, only weak residual neutralisation was observed for BA.1 (26%, n = 34, median NT50 = 0) and BA.2 (44%, n = 34, median NT50 = 0). In addition, BA.1 but not BA.2 was resistant to the neutralising monoclonal antibodies casirivimab/imdevimab, while BA.2 exhibited almost complete evasion of the neutralisation induced by sotrovimab.
Interpretation: Both SARS-CoV-2 Omicron subvariants BA.1 and BA.2 escape antibody-mediated neutralisation elicited by vaccination, previous infection with SARS-CoV-2, and monoclonal antibodies. Waning immunity renders the majority of tested sera obtained three months after booster vaccination negative in BA.1 and BA.2 neutralisation. Omicron subvariant-specific resistance to the monoclonal antibodies casirivimab/imdevimab and sotrovimab emphasizes the importance of genotype surveillance and guided application.
Funding: This study was supported in part by the Goethe-Corona-Fund of the Goethe University Frankfurt (M.W.) and by the Federal Ministry of Education and Research (COVIDready; grant 02WRS1621C) (M.W.).
The coronavirus pandemic continues to challenge global healthcare. Severely affected patients often need high doses of analgesics and sedatives. The latter were studied in critically ill coronavirus disease 2019 (COVID-19) patients in this prospective monocentric analysis. COVID-19 acute respiratory distress syndrome (ARDS) patients admitted between 1 April and 1 December 2020 were enrolled in the study. A statistical analysis of impeded sedation using mixed-effect linear regression models was performed. Overall, 114 patients were enrolled, requiring unusually high levels of sedatives. During 67.9% of the observation period, a combination of sedatives was required in addition to continuous analgesia. During ARDS therapy, 85.1% (n = 97) underwent prone positioning. Veno-venous extracorporeal membrane oxygenation (vv-ECMO) was required in 20.2% (n = 23) of all patients. vv-ECMO patients showed significantly higher sedation needs (p < 0.001). Patients with hepatic (p = 0.01) or renal (p = 0.01) dysfunction showed significantly lower sedation requirements. Except for patient age (p = 0.01), we could not find any significant influence of pre-existing conditions. Age, vv-ECMO therapy and additional organ failure were demonstrated to be factors influencing sedation needs. Young patients and those receiving vv-ECMO usually require increased sedation for intensive care therapy. However, further studies are needed to elucidate the causes and mechanisms of impeded sedation.
The immune response is known to wane after vaccination with BNT162b2, but the role of age, morbidity and body composition is not well understood. We conducted a cross-sectional study in long-term care facilities (LTCFs) for the elderly. All study participants had completed two-dose vaccination with BNT162b2 five to 7 months before sample collection. In 298 residents (median age 86 years, range 75–101), anti-SARS-CoV-2 receptor-binding domain IgG antibody (anti-RBD-IgG) concentrations were low and inversely correlated with age (mean 51.60 BAU/ml). We compared the results to health care workers (HCWs) aged 18–70 years (n = 114, median age: 53 years), who had a higher mean anti-RBD-IgG concentration of 156.99 BAU/ml. Neutralization against the Delta variant was low in both groups (9.5% in LTCF residents and 31.6% in HCWs). The Charlson Comorbidity Index was inversely correlated with anti-RBD-IgG, but the body mass index (BMI) was not. A control group of 14 LTCF residents with known breakthrough infection had significantly higher antibody concentrations (mean 3,199.65 BAU/ml), and 85.7% had detectable neutralization against the Delta variant. Our results demonstrate low but recoverable markers of immunity in LTCF residents five to 7 months after vaccination.
Intrahepatic cholangiocarcinoma (iCCA) is the most frequent subtype of cholangiocarcinoma (CCA), and its incidence has increased globally in recent years. In contrast to surgically treated iCCA, data on the impact of fibrosis on survival in patients undergoing palliative chemotherapy are missing. We retrospectively analyzed the cases of 70 patients diagnosed with iCCA between 2007 and 2020 in our tertiary hospital. Histopathological assessment of fibrosis was performed by an expert hepatobiliary pathologist. Additionally, the fibrosis-4 score (FIB-4) was calculated as a non-invasive surrogate marker for liver fibrosis. For overall survival (OS) and progression-free survival (PFS), Kaplan–Meier curves and Cox regression analyses were performed. Subgroup analyses revealed a median OS of 21 months (95% CI = 16.7–25.2 months) and 16 months (95% CI = 7.6–24.4 months) for low and high fibrosis, respectively (p = 0.152). In non-cirrhotic patients, the median OS was 21.8 months (95% CI = 17.1–26.4 months), compared with 9.5 months (95% CI = 4.6–14.3 months) in cirrhotic patients (p = 0.007). In conclusion, patients with iCCA and cirrhosis receiving palliative chemotherapy have decreased OS rates, while fibrosis has no significant impact on OS or PFS. These patients should not be prevented from receiving state-of-the-art first-line chemotherapy.
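The FIB-4 score mentioned above as a non-invasive fibrosis surrogate is a simple closed-form formula: (age × AST) / (platelet count × √ALT). A sketch with hypothetical patient values:

```python
import math

def fib4(age_years, ast_u_per_l, alt_u_per_l, platelets_10e9_per_l):
    """FIB-4 score, a non-invasive surrogate marker for liver fibrosis:
    (age [years] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L])).
    """
    return (age_years * ast_u_per_l) / (
        platelets_10e9_per_l * math.sqrt(alt_u_per_l)
    )

# Hypothetical patient (not study data):
score = fib4(age_years=65, ast_u_per_l=80, alt_u_per_l=64,
             platelets_10e9_per_l=150)
```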
Aim: It can be challenging to distinguish COVID-19 in children from other common infections. We set out to determine the rate at which children consulting a primary care paediatrician with an acute infection are infected with SARS-CoV-2 and to compare distinct findings. Method: In seven out-patient clinics, children aged 0–13 years with any new respiratory or gastrointestinal symptoms and presumed infection were invited to be tested for SARS-CoV-2. Factors that were correlated with testing positive were determined. Samples were collected from 25 January 2021 to 01 April 2021. Results: Seven hundred and eighty-three children participated in the study (median age 3 years and 0 months, range 1 month to 12 years and 11 months). Three hundred and fifty-eight were female (45.7%). SARS-CoV-2 RNA was detected in 19 (2.4%). The most common symptoms in children with as well as without detectable SARS-CoV-2 RNA were rhinitis, fever and cough. Known recent exposure to a case of COVID-19 was significantly correlated with testing positive, but symptoms or clinical findings were not. Conclusion: COVID-19 among the children with symptoms of an acute infection was uncommon, and the clinical presentation did not differ significantly between children with and without evidence of an infection with SARS-CoV-2.
Estimating the age of the developmental stages of the blow fly Calliphora vicina (Diptera: Calliphoridae) is of forensic relevance for the determination of the minimum post-mortem interval (PMImin). Fly eggs and larvae can be aged using anatomical and morphological characters and their modification during development. However, such methods are difficult to apply to fly pupae. A previous study described age estimation of C. vicina pupae using gene expression, but only for pupae reared at constant temperatures; fluctuating temperatures, however, represent a more realistic scenario at a crime scene. Therefore, age-dependent gene expression of C. vicina pupae was compared at 3 fluctuating and 3 constant temperatures, the latter representing the mean values of the fluctuating profiles. The chosen marker genes showed uniform expression patterns during metamorphosis of C. vicina pupae bred at different temperature conditions (constant or fluctuating) but the same mean temperature (e.g. constant 10 °C vs. fluctuating 5–15 °C). We present an R-based statistical tool, which enables estimation of the age of an examined pupa based on the analysed gene expression data.
The long-term effects of direct-acting antiviral (DAA)-based eradication of hepatitis C virus (HCV) on cirrhosis and portal hypertension are still under debate. We analysed the dynamics of liver and spleen elastography to assess potential regression of cirrhosis and portal hypertension 3 years post-treatment. Fifty-four patients with HCV-associated cirrhosis and DAA-induced SVR were included. Liver and spleen stiffness were measured at baseline (BL), end of treatment (EOT), 24 weeks after EOT (FU24) and 1, 2 and 3 (FU144) years post-treatment by transient liver elastography (L-TE) and point shear wave elastography (pSWE) using acoustic radiation force impulse (ARFI) of the liver (L-ARFI) and spleen (S-ARFI). Biochemical, virological and clinical data were also obtained. Liver stiffness assessed by L-TE decreased between BL [median (range), 32.5(9.1–75) kPa] and EOT [21.3(6.7–73.5) kPa; p < .0001] and between EOT and FU144 [16(4.1–75) kPa; p = .006]. L-ARFI values improved between EOT [2.5(1.2–4.1) m/s] and FU144 [1.7(0.9–4.1) m/s; p = .001], while spleen stiffness remained unchanged. Overall, L-TE improved in 38 of 54 (70.4%) patients at EOT and declined further in 29 of 38 (76.3%) until FU144, whereas L-ARFI values decreased in 30/54 (55.6%) patients at EOT and continued to decrease in 28/30 (93.3%) patients at FU144. Low bilirubin and high albumin levels at BL were associated with improved L-ARFI values (p = .048) at EOT or regression of cirrhosis (<12.5 kPa) by L-TE at FU144 (p = .005), respectively. Liver stiffness, but not spleen stiffness, continued to decline in a considerable proportion of patients with advanced liver disease after HCV eradication.
Estimating intraoperative blood loss is one of the daily challenges for clinicians. Despite the known inaccuracy of visual estimation by anaesthetists and surgeons, it remains the mainstay of estimating surgical blood loss. This review aims to highlight the strengths and weaknesses of currently used measurement methods. A systematic review of studies on the estimation of blood loss was carried out. Studies investigating the accuracy of techniques for quantifying blood loss in vivo and in vitro were included. We excluded nonhuman trials and studies using only monitoring parameters to estimate blood loss. A meta-analysis was performed to evaluate systematic measurement errors of the different methods. Only studies compared against a validated reference (e.g., a haemoglobin extraction assay) were included. Ninety studies met the inclusion criteria for the systematic review and were analyzed. Six studies were included in the meta-analysis, as only these used a validated reference. The mixed-effect meta-analysis showed the highest correlation with the reference for colorimetric methods (0.93, 95% CI 0.91–0.96), followed by gravimetric (0.77, 95% CI 0.61–0.93) and visual methods (0.61, 95% CI 0.40–0.82). The bias for estimated blood loss (ml) was lowest for colorimetric methods (57.59, 95% CI 23.88–91.3) compared to the reference, followed by gravimetric (326.36, 95% CI 201.65–450.86) and visual methods (456.51, 95% CI 395.19–517.83). Of the many studies included, only a few were compared with a validated reference; the majority chose known imprecise procedures as the method of comparison. Colorimetric methods offer the highest degree of accuracy in blood loss estimation. Systems that use colorimetric techniques have a significant advantage in the real-time assessment of blood loss.
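The pooled correlations above come from a mixed-effect meta-analysis; the basic mechanics can be illustrated with the simpler fixed-effect inverse-variance method, in which each study is weighted by the inverse of its squared standard error. The estimates and standard errors below are invented, not the review's data.

```python
def pooled_estimate(estimates, standard_errors):
    """Fixed-effect inverse-variance pooling of per-study estimates --
    a simplified stand-in for the mixed-effect meta-analysis in the
    review. Returns the pooled estimate and its standard error.
    """
    weights = [1.0 / se ** 2 for se in standard_errors]  # 1/SE^2
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical correlations from three colorimetric-method studies:
pooled, se = pooled_estimate([0.95, 0.92, 0.93], [0.02, 0.04, 0.03])
```

Precise studies (small standard errors) dominate the pooled value, which is why the result sits closest to the 0.95 estimate with SE 0.02.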
Background: The objective of the STREAM Trial was to evaluate the effect of simulation training on process times in acute stroke care.
Methods: The multicenter prospective interventional STREAM Trial was conducted between 10/2017 and 04/2019 at seven tertiary care neurocenters in Germany with a pre- and post-interventional observation phase. We recorded patient characteristics, acute stroke care process times, stroke team composition and simulation experience for consecutive direct-to-center patients receiving intravenous thrombolysis (IVT) and/or endovascular therapy (EVT). The intervention consisted of a composite intervention centered around stroke-specific in situ simulation training. Primary outcome measure was the ‘door-to-needle’ time (DTN) for IVT. Secondary outcome measures included process times of EVT and measures taken to streamline the pre-existing treatment algorithm.
Results: The effect of the STREAM intervention on the process times of all acute stroke operations was neutral. However, secondary analyses showed a DTN reduction of 5 min from 38 min pre-intervention (interquartile range [IQR] 25–43 min) to 33 min (IQR 23–39 min, p = 0.03) post-intervention achieved by simulation-experienced stroke teams. Concerning EVT, we found significantly shorter door-to-groin times in patients who were treated by teams with simulation experience as compared to simulation-naive teams in the post-interventional phase (−21 min, simulation-naive: 95 min, IQR 69–111 vs. simulation-experienced: 74 min, IQR 51–92, p = 0.04).
Conclusion: An intervention combining workflow refinement and simulation-based stroke team training has the potential to improve process times in acute stroke care.
Testing for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) by RT-PCR is a vital public health tool in the pandemic. Self-collected samples are increasingly used as an alternative to nasopharyngeal swabs. Several studies have suggested that they are sufficiently sensitive to be a useful alternative. However, there are limited data directly comparing several different types of self-collected materials to determine which material is preferable. A total of 102 predominantly symptomatic adults with a confirmed SARS-CoV-2 infection self-collected native saliva, a tongue swab, a mid-turbinate nasal swab, saliva obtained by chewing a cotton pad and gargle lavage, within 48 h of initial diagnosis. Sample collection was unsupervised. Both native saliva and gargling with tap water had high diagnostic sensitivity of 92.8% and 89.1%, respectively. Nasal swabs had a sensitivity of 85.1%, which was not significantly inferior to saliva (p = 0.092), but 16.6% of participants reported difficulty with self-collection of this sample. A tongue swab and saliva obtained by chewing a cotton pad had significantly lower sensitivities of 74.2% and 70.2%, respectively. Diagnostic sensitivity was not related to the presence of clinical symptoms or to age. When comparing self-collected specimens of different materials, saliva, gargle lavage or mid-turbinate nasal swabs may be considered for most symptomatic patients. However, complementary experiments are required to verify that the differences in performance observed among the five sampling modes were not attributable to collection impairment.
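The diagnostic sensitivities reported above are simple proportions: of the infections confirmed by the reference test, how many a given self-collected material also detects. A sketch; the counts below are back-calculated approximations for illustration, not the study's exact denominators.

```python
def sensitivity(true_positives, false_negatives):
    """Diagnostic sensitivity: share of reference-confirmed infections
    that the index sample (e.g. self-collected saliva) also detects.
    """
    return true_positives / (true_positives + false_negatives)

# e.g. if native saliva detected roughly 90 of 97 evaluable infections,
# that corresponds to the reported ~92.8% (hypothetical counts):
saliva = sensitivity(90, 7)
```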
Rationale: Postinfectious bronchiolitis obliterans (PIBO) is a rare, chronic respiratory condition that follows an acute insult due to a severe infection of the lower airways. Objectives: The objective of this study was to investigate the long-term course of bronchial inflammation and pulmonary function testing in children with PIBO. Methods: Medical charts of 21 children with PIBO were analyzed retrospectively at the Children's University Hospital Frankfurt/Main, Germany. Pulmonary function tests (PFTs) with an interval of at least 1 month were studied between 2002 and 2019. A total of 382 PFTs were analyzed retrospectively; the two best PFTs per year (217 in total) were evaluated. Additionally, 56 sputum analyses were assessed and the sputum neutrophils were evaluated. Results: The evaluation of the 217 PFTs showed a decrease in FEV1 with a loss of 1.07% and a loss in z score of −0.075 per year. FEV1/FVC decreased by 1.44 per year. FVC remained stable, showing a nonsignificant increase of 0.006 in z score per year. However, FEV1 and FVC in L increased significantly, with FEV1 by 0.032 L and FVC by 0.048 L per cm of height. Sputum neutrophils showed a significant increase of 2.12% per year. Conclusion: Our results demonstrated that pulmonary function in patients with PIBO decreased significantly, showing persistent obstruction over an average follow-up period of 8 years. However, persistent lung growth was revealed. In addition, pulmonary inflammation persisted, clearly shown by an increasing amount of neutrophils in induced sputum. Patients did not present with a general susceptibility to respiratory infections.
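The z scores used above express a lung-function value as the number of reference standard deviations it lies from the predicted mean for a child of the same sex, age and height (as in GLI-style reference equations). A minimal sketch; the reference values are hypothetical, not the study's reference set.

```python
def z_score(measured, predicted_mean, predicted_sd):
    """Lung-function z score: (measured - predicted) / reference SD.
    Negative values indicate function below the reference mean.
    Illustrative only; real reference equations model mean and SD as
    functions of sex, age and height.
    """
    return (measured - predicted_mean) / predicted_sd

# Hypothetical child: measured FEV1 1.8 L vs. predicted 2.0 L (SD 0.25 L)
z = z_score(1.8, 2.0, 0.25)
```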
High sedation needs of critically ill COVID-19 ARDS patients - a monocentric observational study
(2021)
Background: Therapy of severely affected coronavirus patient, requiring intubation and sedation is still challenging. Recently, difficulties in sedating these patients have been discussed. This study aims to describe sedation practices in patients with 2019 coronavirus disease (COVID-19)-induced acute respiratory distress syndrome (ARDS). Methods: We performed a retrospective monocentric analysis of sedation regimens in critically ill intubated patients with respiratory failure who required sedation in our mixed 32-bed university intensive care unit. All mechanically ventilated adults with COVID-19-induced ARDS requiring continuously infused sedative therapy admitted between April 4, 2020, and June 30, 2020 were included. We recorded demographic data, sedative dosages, prone positioning, sedation levels and duration. Descriptive data analysis was performed; for additional analysis, a logistic regression with mixed effect was used. Results: In total, 56 patients (mean age 67 (±14) years) were included. The mean observed sedation period was 224 (±139) hours. To achieve the prescribed sedation level, we observed the need for two or three sedatives in 48.7% and 12.8% of the cases, respectively. In cases with a triple sedation regimen, the combination of clonidine, esketamine and midazolam was observed in most cases (75.7%). Analgesia was achieved using sufentanil in 98.6% of the cases. The analysis showed that the majority of COVID-19 patients required an unusually high sedation dose compared to those available in the literature. Conclusion: The global pandemic continues to affect patients severely requiring ventilation and sedation, but optimal sedation strategies are still lacking. The findings of our observation suggest unusual high dosages of sedatives in mechanically ventilated patients with COVID-19. 
Prescribed sedation levels appear to be achievable only with combinations of several sedatives in most critically ill patients suffering from COVID-19-induced ARDS, and a potential association with the often-required sophisticated critical care, including prone positioning and ECMO treatment, seems conceivable.
Background: Epileptic seizures are common clinical features in patients with acute subdural hematoma (aSDH); however, diagnostic feasibility and therapeutic monitoring remain limited. Surface electroencephalography (EEG) is the major diagnostic tool for the detection of seizures, but it might not be sensitive enough to detect all subclinical or nonconvulsive seizures or status epilepticus. Therefore, we have planned a clinical trial to evaluate a novel treatment modality by perioperatively implanting subdural EEG electrodes to diagnose seizures; we will then treat the seizures under therapeutic monitoring and analyze the clinical benefit.
Methods: In a prospective nonrandomized trial, we aim to include 110 patients with aSDH. Only patients undergoing surgical removal of aSDH will be included; one arm will be treated according to the guidelines of the Brain Trauma Foundation, while the other arm will additionally receive a subdural grid electrode. The study's primary outcome is the comparison of incidence of seizures and time-to-seizure between the interventional and control arms. Invasive therapeutic monitoring will guide treatment with antiseizure drugs (ASDs). The secondary outcome will be the functional outcome for both groups as assessed via the Glasgow Outcome Scale and modified Rankin Scale both at discharge and during 6 months of follow-up. The tertiary outcome will be the evaluation of chronic epilepsy within 2-4 years of follow-up.
Discussion: The implantation of a subdural EEG grid electrode in patients with aSDH is expected to be effective in diagnosing seizures in a timely manner, facilitating treatment with ASDs and monitoring of treatment success. Moreover, the occurrence of epileptiform discharges prior to the manifestation of seizure patterns could be evaluated in order to identify high-risk patients who might benefit from prophylactic treatment with ASDs.
Trial registration: ClinicalTrials.gov identifier no. NCT04211233.
CD4+ T cell lymphopenia predicts mortality from Pneumocystis pneumonia in kidney transplant patients
(2020)
Background: Pneumocystis jirovecii pneumonia (PcP) remains a life-threatening opportunistic infection after solid organ transplantation, even in the era of Pneumocystis prophylaxis. The association between risk of developing PcP and low CD4+ T cell counts has been well established. However, it is unknown whether lymphopenia in the context of post-renal transplant PcP increases the risk of mortality. Methods: We carried out a retrospective analysis of a cohort of kidney transplant patients with PcP (n = 49) to determine the risk factors for mortality associated with PcP. We correlated clinical and demographic data with the outcome of the disease. For CD4+ T cell counts, we used the Wilcoxon rank sum test for in-hospital mortality and a Cox proportional-hazards regression model for 60-day mortality. Results: In univariate analyses, high CRP, high neutrophils, CD4+ T cell lymphopenia, mechanical ventilation, and high acute kidney injury network stage were associated with in-hospital mortality following presentation with PcP. In a receiver-operating characteristic (ROC) analysis, an optimum cutoff of ≤200 CD4+ T cells/µL predicted in-hospital mortality, and CD4+ T cell lymphopenia remained a risk factor in a Cox regression model. Conclusions: Low CD4+ T cell count in kidney transplant recipients is a biomarker for disease severity and a risk factor for in-hospital mortality following presentation with PcP.
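An optimum cutoff of the kind reported here is commonly derived from the ROC curve by maximizing Youden's J statistic (sensitivity + specificity − 1). The following is a minimal sketch of that calculation with entirely hypothetical CD4+ counts and outcomes; the function name and data are illustrative assumptions, not the study's records or its actual method.

```python
# Illustrative sketch: finding an optimal ROC cutoff via Youden's J.
# All data below are hypothetical, not from the study.

def youden_optimal_cutoff(values, events):
    """Return the cutoff (value <= cutoff predicts the event) that
    maximizes Youden's J = sensitivity + specificity - 1."""
    best_cutoff, best_j = None, -1.0
    for cutoff in sorted(set(values)):
        tp = sum(1 for v, e in zip(values, events) if v <= cutoff and e)
        fn = sum(1 for v, e in zip(values, events) if v > cutoff and e)
        tn = sum(1 for v, e in zip(values, events) if v > cutoff and not e)
        fp = sum(1 for v, e in zip(values, events) if v <= cutoff and not e)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cutoff = j, cutoff
    return best_cutoff, best_j

# Hypothetical CD4+ counts (cells/uL) and in-hospital mortality (True = died)
cd4 = [80, 150, 200, 250, 400, 600, 120, 350, 500, 90]
died = [True, True, True, False, False, False, True, False, False, True]
cutoff, j = youden_optimal_cutoff(cd4, died)
```

In this toy data the cutoff separating the groups perfectly is 200 cells/µL; real cohorts will yield a J well below 1.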
Background: Many patients suffering from exercise-induced asthma (EIA) have normal lung function at rest and show symptoms and a decline in FEV1 when they do sports or during exercise challenge. It has been described that long-chain polyunsaturated fatty acids (LCPUFA) could exert a protective effect on EIA.
Methods: In this study, the protective effect of supplementation with a special combination of n-3 and n-6 LCPUFA (sc-LCPUFA) (total 1.19 g/day) was investigated in an EIA cold air provocation model. The primary outcome measure was the decrease in FEV1 after exercise challenge; the secondary outcome measure was anti-inflammatory effects monitored by exhaled NO (eNO) before and after sc-LCPUFA supplementation versus placebo.
Results: Ninety-nine patients with exercise-induced symptoms aged 10 to 45 years were screened by a standardized exercise challenge in a cold air chamber at 4 °C. Seventy-three patients fulfilled the inclusion criterion of a FEV1 decrease > 15% and were treated for 4 weeks in a double-blind, placebo-controlled design with either sc-LCPUFA or placebo. Thirty-two patients in each group completed the study. Mean FEV1 decrease after cold air exercise challenge and eNO were unchanged after 4 weeks of sc-LCPUFA supplementation.
Conclusion: Supplementation with sc-LCPUFA at a dose of 1.19 g/d did not have any broncho-protective or anti-inflammatory effects on EIA.
Trial registration: Clinical trial registration number: NCT02410096. Registered 7 February 2015 at ClinicalTrials.gov.
Standard monitoring of heart rate, blood pressure and arterial oxygen saturation during endoscopy is recommended by current guidelines on procedural sedation. A number of studies indicated a reduction of hypoxic (art. oxygenation < 90% for > 15 s) and severe hypoxic events (art. oxygenation < 85%) by additional use of capnography. Therefore, U.S. and European guidelines comment that additional capnography monitoring can be considered in long or deep sedation. The Integrated Pulmonary Index® (IPI) is an algorithm-based monitoring parameter that combines oxygenation measured by pulse oximetry (art. oxygenation, heart rate) and ventilation measured by capnography (respiratory rate, apnea > 10 s, partial pressure of end-tidal carbon dioxide [PetCO2]). The aim of this paper was to analyze the value of IPI as a parameter to monitor the respiratory status of patients receiving propofol sedation during the PEG procedure. Patients presenting for PEG placement under sedation were randomized 1:1 to either the standard monitoring group (SM) or the capnography monitoring group including IPI (IM). Heart rate, blood pressure and arterial oxygen saturation were monitored in SM. In IM, additional monitoring was performed by measuring PetCO2, respiratory rate and IPI. Capnography and IPI values were recorded for all patients but were visible to the endoscopic team only for the IM group. IPI values range between 1 and 10 (10 = normal; 8–9 = within normal range; 7 = close to normal range, requires attention; 5–6 = requires attention and may require intervention; 3–4 = requires intervention; 1–2 = requires immediate intervention). Results on capnography versus standard monitoring in the same study population were published previously. A total of 147 patients (74 in SM and 73 in IM) were included in the present study. Hypoxic events occurred in 62 patients (42%) and severe hypoxic events in 44 patients (29%), respectively. Baseline characteristics were equally distributed in both groups.
IPI = 1 and IPI < 7, as well as the parameters PetCO2 = 0 mmHg and apnea > 10 s, had a high sensitivity for hypoxic and severe hypoxic events, respectively (IPI = 1: 81%/81% [hypoxic/severe hypoxic event], IPI < 7: 82%/88%, PetCO2: 69%/68%, apnea > 10 s: 84%/84%). All four parameters had a low specificity for both hypoxic and severe hypoxic events (IPI = 1: 13%/12%, IPI < 7: 7%/7%, PetCO2: 29%/27%, apnea > 10 s: 7%/7%). In multivariate analysis, only SM and PetCO2 = 0 mmHg were independent risk factors for hypoxia. IPI (IPI = 1 and IPI < 7), as well as the individual parameters PetCO2 = 0 mmHg and apnea > 10 s, allow a fast and convenient assessment of the patient's respiratory status in a morbid patient population. Sensitivity is good for most parameters, but specificity is poor. In conclusion, IPI can be a useful metric to assess respiratory status during propofol sedation for PEG placement. However, IPI was not superior to PetCO2 = 0 mmHg and apnea > 10 s.
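Sensitivity and specificity figures like those above come from a standard 2×2 contingency table of alarm status (e.g. "IPI < 7 fired") against event status (hypoxic event occurred). A minimal sketch with invented alarm/event pairs; the function and data are illustrative assumptions, not the study's records.

```python
# Illustrative sketch: sensitivity and specificity of a binary
# monitoring alarm from a 2x2 contingency table. Hypothetical data.

def sens_spec(alarm, event):
    """alarm/event: parallel lists of booleans per patient."""
    tp = sum(a and e for a, e in zip(alarm, event))          # alarm fired, event occurred
    fn = sum((not a) and e for a, e in zip(alarm, event))    # missed event
    tn = sum((not a) and (not e) for a, e in zip(alarm, event))
    fp = sum(a and (not e) for a, e in zip(alarm, event))    # false alarm
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical: 10 patients, alarm fired in 8, event occurred in 5
alarm = [True, True, True, True, True, True, True, True, False, False]
event = [True, True, True, True, False, False, False, False, True, False]
sens, spec = sens_spec(alarm, event)
```

With this toy data the alarm catches most events (high sensitivity) while firing on many non-events (low specificity), mirroring the pattern the abstract reports for IPI and apnea > 10 s.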
Introduction: The recommendation of graduated compression stockings (GCS) for venous thromboembolism and deep venous thrombosis (DVT) prophylaxis is historically based and has been critically examined in recent publications. Existing guidelines are inconclusive as to whether general use of GCS should be recommended.
Patients/Methods: 24 273 in-patients (general surgery and orthopedic patients) undergoing surgery between 2006 and 2016 were included in a retrospective single-center analysis. From January 2006 to January 2011, perioperative GCS were used in addition to drug prophylaxis, and from February 2011 to March 2016, patients received drug prophylaxis alone. In accordance with German guidelines, all patients received venous thromboembolism prophylaxis with weight-adapted LMWH. Risk stratification (low risk, moderate risk, high risk) was based on the guideline of the American College of Chest Physicians. Data analysis was performed before and after propensity matching (PM). The defined primary endpoint was the incidence of symptomatic or fatal pulmonary embolism (PE). The secondary endpoint was the incidence of deep venous thrombosis (DVT).
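Propensity matching of the kind described is often implemented as greedy 1:1 nearest-neighbor matching on a propensity score within a caliper. The sketch below is a simplified illustration with hypothetical scores; the abstract does not specify the study's actual matching algorithm or caliper, so everything here is an assumption for illustration.

```python
# Illustrative sketch: greedy 1:1 nearest-neighbor propensity matching
# within a caliper. Scores and IDs below are hypothetical.

def greedy_match(treated, controls, caliper=0.05):
    """treated/controls: lists of (id, propensity_score).
    Returns matched (treated_id, control_id) pairs; each control
    is used at most once, and only matches within the caliper count."""
    pairs = []
    available = dict(controls)
    for tid, t_score in treated:
        best_id, best_dist = None, caliper
        for cid, c_score in available.items():
            dist = abs(t_score - c_score)
            if dist <= best_dist:
                best_id, best_dist = cid, dist
        if best_id is not None:
            pairs.append((tid, best_id))
            del available[best_id]  # each control matched once
    return pairs

# Hypothetical propensity scores
treated = [("t1", 0.30), ("t2", 0.70)]
controls = [("c1", 0.32), ("c2", 0.69), ("c3", 0.10)]
pairs = greedy_match(treated, controls)
```

Greedy matching is order-dependent; production analyses typically use dedicated packages and check covariate balance after matching.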
Results: After risk stratification (low risk n = 16 483; moderate risk n = 4464; high risk n = 3326), a total of 24 273 patients were analyzed. Before PM, the relative risk for the occurrence of a PE or DVT was not increased by abstaining from GCS. After PM, two groups of 11 312 patients each, one with and one without GCS, were formed. Comparing the two groups, the relative risk (RR) for the occurrence of a pulmonary embolism was: low risk 0.99 [95% CI 0.998–1.000]; moderate risk 0.999 [95% CI 0.95–1.003]; high risk 0.996 [95% CI 0.992–1.000] (p > 0.05). The incidence of PE in the total LMWH-alone group was 0.1% (n = 16). In the total LMWH + GCS group, the incidence was 0.3% (n = 29). The RR after PM was 0.999 [95% CI 0.998–1.00].
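Relative risks with 95% confidence intervals like those quoted are conventionally computed on the log scale (the Katz log method). A hedged sketch follows; the event counts echo the PE counts in the abstract, but the denominators are invented for illustration and this is not a reconstruction of the study's analysis.

```python
import math

# Illustrative sketch: relative risk with a 95% CI via the Katz
# log-transform method. Group sizes below are hypothetical.

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """RR of group A vs group B with a 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR), Katz method
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 16 PEs among 16 000 patients vs 29 among 11 000
rr, lo, hi = relative_risk_ci(16, 16000, 29, 11000)
```

Note that the log-scale interval is asymmetric around the point estimate, which is why published RRs can sit off-center in their CIs.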
Conclusion: In comparison with prior studies of only small numbers of patients, our trial in a large group of patients at moderate and high risk of developing VTE supports the view that abstaining from GCS use does not increase the incidence of symptomatic or fatal PE or of symptomatic DVT.