Introduction: Observational studies have demonstrated an association between vitamin D deficiency and increased morbidity and mortality in critically ill patients. Cohort studies and pilot trials have suggested promising beneficial effects of vitamin D replacement in the critically ill, at least in patients with severe vitamin D deficiency. As vitamin D supplementation is a simple, low-cost and safe intervention, it has the potential to improve survival in critically ill patients.
Methods and analysis: In this randomised, placebo-controlled, double-blind, multicentre, international trial, 2400 adult patients with severe vitamin D deficiency (25-hydroxyvitamin D ≤12 ng/mL) will be randomised in a 1:1 ratio by www.randomizer.at to receive either a loading dose of 540 000 IU cholecalciferol within 72 hours after intensive care unit (ICU) admission, followed by 4000 IU daily for 90 days, or placebo. Hypercalcaemia may occur as a side effect and is monitored by regular checks of the calcium level. The primary outcome is all-cause mortality at 28 days after randomisation. Secondary outcomes are: ICU, hospital, 90-day and 1-year mortality; hospital and ICU length of stay; change in organ dysfunction on day 5 as measured by the Sequential Organ Failure Assessment (SOFA) score; number of organ failures; hospital and ICU readmission until day 90; discharge destination; self-reported infections requiring antibiotics until day 90; and health-related quality of life. Recruitment is ongoing.
Ethics and dissemination: National ethical approval was obtained from the Ethics Committee of the University of Graz for Austria, Erasme University Brussels (Belgium) and University Hospital Frankfurt (Germany), and will further be sought according to individual national processes. On completion, results will be published in a peer-reviewed scientific journal. The study findings will be presented at national and international meetings, with abstracts available online.
Trial registration: NCT03188796, EudraCT-No: 2016-002460-13.
Background: Macrophage migration inhibitory factor (MIF) is highly elevated after cardiac surgery and impacts postoperative inflammation. The aim of this study was to analyze whether the polymorphisms CATT5–7 (rs5844572/rs3063368, "-794") and the G>C single-nucleotide polymorphism (rs755622, -173) in the MIF gene promoter are related to postoperative outcome.
Methods: In 1116 patients undergoing cardiac surgery, the MIF gene polymorphisms were analyzed, and serum MIF was measured by ELISA in 100 patients.
Results: Patients with at least one extended repeat allele (CATT7) had a significantly higher risk of acute kidney injury (AKI) than the others (23% vs. 13%; OR 2.01 (1.40–2.88), p = 0.0001). Carriers of CATT7 were also at higher risk of death (1.8% vs. 0.4%; OR 5.12 (0.99–33.14), p = 0.026). The GC genotype was associated with AKI (20% vs. GG/CC: 13%, OR 1.71 (1.20–2.43), p = 0.003). Multivariate analyses identified CATT7 as predictive of AKI (OR 2.13 (1.46–3.09), p < 0.001) and death (OR 5.58 (1.29–24.04), p = 0.021). CATT7 was associated with higher serum MIF before surgery (79.2 vs. 50.4 ng/mL, p = 0.008).
Conclusion: The CATT7 allele is associated with a higher risk of AKI and death after cardiac surgery, which might be related to chronically elevated serum MIF. Polymorphisms in the MIF gene may constitute a predisposition for postoperative complications, and their assessment may improve risk stratification and therapeutic guidance.
Background: Invasive off- or on-pump cardiac surgery (elective and emergency procedures, excluding transplants) is routinely performed to treat complications of ischaemic heart disease. Randomised controlled trials (RCTs) evaluate the effectiveness of treatments in the setting of cardiac surgery. However, the impact of RCTs is weakened by heterogeneity in outcome measurement and reporting, which hinders comparison across trials. Core outcome sets (COS; a set of outcomes that should be measured and reported, as a minimum, in clinical trials for a specific clinical field) help reduce this problem. In light of the above, we developed a COS for cardiac surgery effectiveness trials.
Methods: Potential core outcomes were identified a priori by analysing data on 371 RCTs of 58,253 patients. We reached consensus on core outcomes in an international three-round eDelphi exercise. Outcomes for which at least 60% of the participants chose the response option "no" and less than 20% chose the response option "yes" were excluded.
Results: Eighty-six participants from 23 different countries, including adult cardiac patients, cardiac surgeons, anaesthesiologists, nursing staff and researchers, contributed to this eDelphi. The panel reached consensus on four core outcomes to be included in adult cardiac surgery trials: 1) a measure of mortality, 2) a measure of quality of life, 3) a measure of hospitalisation and 4) a measure of cerebrovascular complications.
Conclusion: This study used robust research methodology to develop a minimum core outcome set for clinical trials evaluating the effectiveness of treatments in the setting of cardiac surgery. As a next step, appropriate outcome measurement instruments have to be selected.
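The eDelphi exclusion rule described in the Methods above can be sketched as a small vote-counting function. This is a hypothetical illustration only; the function name and the vote format are assumptions, not the tooling actually used in the exercise.

```python
def is_excluded(votes):
    """Apply the eDelphi drop rule: exclude an outcome when at least 60%
    of participants answered "no" AND fewer than 20% answered "yes"."""
    n = len(votes)
    no_share = votes.count("no") / n
    yes_share = votes.count("yes") / n
    return no_share >= 0.60 and yes_share < 0.20

# 65% "no", 15% "yes", 20% undecided: both thresholds met, outcome dropped
votes = ["no"] * 13 + ["yes"] * 3 + ["unsure"] * 4
print(is_excluded(votes))  # True
```

Note that both conditions must hold at once; a 50/50 split, for example, keeps the outcome in play for the next round.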
Introduction: The development of organ dysfunction or failure after the first days of intensive care unit (ICU) treatment, and the associated mortality by type of ICU admission, is poorly elucidated. We therefore analyzed the association of ICU mortality and admission type (medical (M), scheduled surgery (ScS) or unscheduled surgery (US)), mirrored by the occurrence of organ dysfunction/failure (OD/OF) after the first 72 h of ICU stay.
Methods: For this retrospective cohort study (23,795 patients; registry of the German Interdisciplinary Association for Intensive Care Medicine (DIVI)), organ dysfunction or failure was derived from the Sequential Organ Failure Assessment (SOFA) score (excluding the Glasgow Coma Scale). SOFA scores were collected on admission to the ICU and 72 h later. For patients with a length of stay of at least five days, a multivariate analysis was performed for individual OD/OF on day three.
Results: M patients had the lowest prevalence of cardiovascular failure (M 31%; ScS 35%; US 38%) and the highest prevalence of respiratory (M 24%; ScS 13%; US 17%) and renal failure (M 10%; ScS 6%; US 7%). Risk of death was highest in M and ScS patients with respiratory failure (OR: M 2.4; ScS 2.4; US 1.4) and in surgical patients with renal failure (OR: M 1.7; ScS 2.7; US 2.4).
Conclusion: The dynamic evolution of OD/OF within 72 h after ICU admission, and the associated mortality, differed between patients depending on their type of admission. This has to be considered to exclude a systematic bias in multi-center trials.
Background: Age and preoperative anaemia are risk factors for poor surgical outcome and blood transfusion. The aim of this study was to examine the effect of iron supplementation in iron-deficient (ID) elderly patients undergoing major surgery.
Method: In this single-centre observational study, patients ≥ 65 years undergoing major surgery were screened for anaemia and ID. Patients were assigned to the following groups: A− (no anaemia); A−,ID+,T+ (no anaemia, iron-deficient, intravenous iron supplementation); A+ (anaemia); and A+,ID+,T+ (anaemia, iron-deficient, intravenous iron supplementation).
Results: Of 4,381 patients screened at the anaemia walk-in clinic, 2,381 (54%) were ≥ 65 years old, and 2,191 cases were included in the analysis. The prevalence of ID was 63% in patients with haemoglobin (Hb) < 8 g/dl, 47.2% in patients with Hb 8.0–8.9 g/dl, and 44.3% in patients with Hb 9.0–9.9 g/dl. In severely anaemic patients, an Hb increase of 0.6 (0.4; 1.2) and 1.2 (0.7; 1.6) g/dl was detected with iron supplementation 6–10 and > 10 days before surgery, respectively. Overall, Hb increased by 0 (-0.1; 0) g/dl with iron supplementation 1–5 days before surgery, 0.2 (-0.1; 0.5) g/dl with supplementation 6–10 days before surgery, and 0.2 (-0.2; 1.1) g/dl with supplementation > 10 days before surgery (p < 0.001 for 1–5 vs. 6–10 days). Overall, 58% of A+,ID+,T+ patients showed an Hb increase of > 0.5 g/dl. The number of transfused red blood cell units was significantly lower in patients supplemented with iron (0 (0; 3)) than in non-treated anaemic patients (1 (0; 4)) (p = 0.03). Patients with iron supplementation > 6 days before surgery achieved mobility 2 days earlier than patients with iron supplementation < 6 days before surgery.
Conclusions: Intravenous iron supplementation increases Hb level and thereby reduces blood transfusion rate in elderly surgical patients with ID anaemia.
Patients at risk of ischemic injury, e.g. during circulatory arrest in cardiac surgery or after resuscitation, are subjected to therapeutic hypothermia. For aortic surgery, the body is traditionally cooled down to 18 °C and then rewarmed to body temperature. The role of hypothermia and the subsequent rewarming process in leukocyte-endothelial interactions and in the expression of junctional adhesion molecules (JAMs) has not yet been clarified. We therefore investigated, in an in vitro model, the influence of temperature modulation during activation and transendothelial migration of leukocytes through human endothelial cells. Additionally, we investigated the expression of JAMs during the rewarming phase. Exposure to low temperature during transmigration alone scarcely affects leukocyte extravasation, whereas hypothermia during both treatment and transendothelial migration improves leukocyte-endothelial interactions. Rewarming causes a significant up-regulation of transmigration, the more pronounced the lower the preceding temperature. JAM-A is significantly modulated during rewarming. Our data suggest that transendothelial migration of leukocytes is modulated not only by cell activation itself; the activation temperature and the rewarming process are also essential. Continued hypothermia significantly inhibits transendothelial migration, whereas the rewarming process strongly enhances transmigration. The expression of JAMs, especially JAM-A, is strongly modulated during the rewarming process. Endothelial protection prior to warm reperfusion, and mild hypothermic conditions that reduce the difference between the hypothermia and rewarming temperatures, should be considered.
Background: The ability of stroke volume variation (SVV), pulse pressure variation (PPV) and global end-diastolic volume (GEDV) to predict fluid responsiveness in the presence of pleural effusion is unknown. The aim of the present study was to assess the ability of SVV, PPV and GEDV to predict fluid responsiveness in a porcine model with pleural effusions.
Methods: Pigs were studied at baseline and after fluid loading with 8 ml kg−1 6% hydroxyethyl starch. After withdrawal of 8 ml kg−1 blood and induction of pleural effusion of up to 50 ml kg−1 on either side, measurements at baseline and after fluid loading were repeated. Cardiac output, stroke volume, central venous pressure (CVP) and pulmonary artery occlusion pressure (PAOP) were obtained by pulmonary artery thermodilution, whereas GEDV was determined by transpulmonary thermodilution. SVV and PPV were monitored continuously by pulse contour analysis.
Results: Pleural effusion was associated with significant changes in lung compliance, peak airway pressure and stroke volume in both responders and non-responders. At baseline, SVV, PPV and GEDV reliably predicted fluid responsiveness (area under the curve 0.85 (p < 0.001), 0.88 (p < 0.001) and 0.77 (p = 0.007), respectively). After induction of pleural effusion, the ability of SVV, PPV and GEDV to predict fluid responsiveness was well preserved, and PAOP also became predictive. Threshold values for SVV and PPV increased in the presence of pleural effusion.
Conclusions: In this porcine model, bilateral pleural effusion did not affect the ability of SVV, PPV and GEDV to predict fluid responsiveness.
Uncalibrated, semi-invasive, continuous monitoring of cardiac index (CI) has recently gained increasing interest. The aim of the present study was to compare the accuracy of CI determination based on arterial waveform analysis with transpulmonary thermodilution. Fifty patients scheduled for elective coronary surgery were studied after induction of anaesthesia, both before and after cardiopulmonary bypass (CPB). Each patient was monitored with a central venous line, the PiCCO system and the FloTrac/Vigileo system. Measurements included CI derived by transpulmonary thermodilution and by uncalibrated semi-invasive pulse contour analysis, and percentage changes of CI were calculated. There was a moderate but significant correlation between pulse contour CI and thermodilution CI both before (r² = 0.72, p < 0.0001) and after (r² = 0.62, p < 0.0001) CPB, with a percentage error of 31% and 25%, respectively. Changes in pulse contour CI showed a significant correlation with changes in thermodilution CI both before (r² = 0.52, p < 0.0001) and after (r² = 0.67, p < 0.0001) CPB. Our findings demonstrate that the uncalibrated semi-invasive monitoring system was able to measure CI reliably compared with transpulmonary thermodilution in patients undergoing elective coronary surgery. Furthermore, the semi-invasive monitoring device was able to track haemodynamic changes and trends.
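The percentage error quoted above is conventionally derived from Bland-Altman statistics (1.96 times the standard deviation of the paired bias, divided by the mean cardiac index of the reference method). A minimal sketch of that calculation, using made-up illustrative readings rather than the study's data:

```python
import statistics

def percentage_error(reference, test):
    """Critchley-style percentage error: 1.96 * SD of the paired bias,
    normalised by the mean of the reference method, in percent."""
    bias = [t - r for r, t in zip(reference, test)]
    return 100 * 1.96 * statistics.stdev(bias) / statistics.mean(reference)

# Illustrative thermodilution vs. pulse contour CI pairs (l/min/m2)
td = [2.0, 2.5, 3.0, 3.5, 4.0]
pc = [2.2, 2.4, 3.1, 3.3, 4.2]
print(round(percentage_error(td, pc), 1))  # 11.9
```

A percentage error below roughly 30% is the threshold commonly cited for clinical interchangeability of a new cardiac output method with thermodilution.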
Background: This study assessed the ability of mid-regional proadrenomedullin (MR-proADM) in comparison to conventional biomarkers (procalcitonin (PCT), lactate, C-reactive protein) and clinical scores to identify disease severity in patients with sepsis.
Methods: This is a secondary analysis of a randomised controlled trial in patients with severe sepsis or septic shock across 33 German intensive care units. The association between biomarkers and clinical scores with mortality was assessed by Cox regression analysis, area under the receiver operating characteristic and Kaplan-Meier curves. Patients were stratified into three severity groups (low, intermediate, high) for all biomarkers and scores based on cutoffs with either a 90% sensitivity or specificity.
Results: 1089 patients with a 28-day mortality rate of 26.9% were analysed. According to the Sepsis-3 definition, 41.2% and 58.8% fulfilled the criteria for sepsis and septic shock, with respective mortality rates of 20.0% and 32.1%. MR-proADM had the strongest association with mortality across all Sepsis-1 and Sepsis-3 subgroups and could facilitate a more accurate classification of low (e.g. MR-proADM vs. SOFA: N = 265 vs. 232; 9.8% vs. 13.8% mortality) and high (e.g. MR-proADM vs. SOFA: N = 161 vs. 155; 55.9% vs. 41.3% mortality) disease severity. Patients with decreasing PCT concentrations of either ≥ 20% (baseline to day 1) or ≥ 50% (baseline to day 4) but continuously high MR-proADM concentrations had a significantly increased mortality risk (HR (95% CI): 19.1 (8.0–45.9) and 43.1 (10.1–184.0)).
Conclusions: MR-proADM identifies disease severity and treatment response more accurately than established biomarkers and scores, adding additional information to facilitate rapid clinical decision-making and improve personalised sepsis treatment.
An early identification of sepsis patients likely to progress towards multiple organ failure is crucial in order to initiate targeted therapeutic strategies to decrease mortality. Our recent publication highlighted the greater accuracy of mid-regional proadrenomedullin (MR-proADM) compared with conventional biomarkers and clinical scores in predicting 28-day mortality in patients with initially low (≤7 points; N = 240) or moderate (8–13 points; N = 653) Sepsis-related Organ Failure Assessment (SOFA) scores, thus confirming results from smaller investigations. This additional post hoc analysis aimed to further describe the non-surviving patient population of both subgroups and identify those likely to progress towards sepsis-related multiple organ failure. ...
Background: Hemodynamic instability is frequent and outcome-relevant in critical illness. The understanding of complex hemodynamic disturbances, and their monitoring and management, plays an important role in the treatment of intensive care patients. An increasing number of treatment recommendations and guidelines in intensive care medicine emphasize hemodynamic goals that go beyond the measurement of blood pressures. Yet it is not known to what extent the infrastructural prerequisites for extended hemodynamic monitoring are given in intensive care units (ICUs), or how hemodynamic management is performed in clinical practice. Furthermore, it is still unclear which factors trigger the use of extended hemodynamic monitoring.
Methods: In this multicenter, 1-day (November 7, 2013, and the preceding 24 h) cross-sectional study, we retrieved data on patient monitoring from ICUs in Germany, Austria, and Switzerland by means of a web-based case report form. One hundred sixty-one intensive care units contributed detailed information on the availability of hemodynamic monitoring. In addition, detailed information on the hemodynamic monitoring of 1789 patients treated on the study day was collected, and independent factors triggering the use of extended hemodynamic monitoring were identified by multivariate analysis.
Results: Besides basic monitoring with electrocardiography (ECG), pulse oximetry, and blood pressure monitoring, the majority of patients received invasive arterial (77.9%) and central venous catheterization (55.2%). Overall, additional extended hemodynamic monitoring for assessment of cardiac output was performed in only 12.3% of patients, and echocardiographic examination was used in only 1.9%. The strongest independent predictors for the use of extended hemodynamic monitoring of any kind were mechanical ventilation, the need for catecholamine therapy, and treatment backed by protocols. In 71.6% of the patients in whom extended hemodynamic monitoring was added during the study period, this extension led to changes in treatment.
Conclusions: Extended hemodynamic monitoring, which goes beyond the measurement of blood pressures, to date plays a minor role in the surveillance of critically ill patients in German, Austrian, and Swiss ICUs. This also includes consensus-recommended diagnostic and monitoring applications such as echocardiography and cardiac output monitoring. Mechanical ventilation, the use of catecholamines, and treatment backed by protocols were identified as factors independently associated with greater use of extended hemodynamic monitoring.
Background: Approximately every third surgical patient is anemic. The most common form, iron deficiency anemia, results from persisting iron‐deficient erythropoiesis (IDE). Zinc protoporphyrin (ZnPP) is a promising parameter for diagnosing IDE, hitherto requiring blood drawing and laboratory workup.
Study design and methods: Noninvasive ZnPP (ZnPP‐NI) measurements were compared to the reference determination of the ZnPP/heme ratio by high‐performance liquid chromatography (ZnPP‐HPLC), and the analytical performance in detecting IDE was evaluated against traditional iron status parameters (ferritin, transferrin saturation [TSAT], soluble transferrin receptor [sTfR], soluble transferrin receptor–ferritin index [sTfR‐F]), likewise measured in blood. The study was conducted at the University Hospitals of Frankfurt and Zurich.
Results: Limits of agreement between ZnPP‐NI and ZnPP‐HPLC measurements for 584 cardiac and noncardiac surgical patients equaled 19.7 μmol/mol heme (95% confidence interval, 18.0–21.3; acceptance criterion, 23.2 μmol/mol heme; absolute bias, 0 μmol/mol heme). The analytical performance for detecting IDE (inferred from the area under the receiver operating characteristic curve) of the parameters measured in blood was: ZnPP‐HPLC (0.95), sTfR (0.92), sTfR‐F (0.89), TSAT (0.87), and ferritin (0.67). The noninvasive ZnPP‐NI measurement yielded an area under the curve of 0.90.
Conclusion: ZnPP‐NI appears well suited for an initial IDE screening, informing on the state of erythropoiesis at the point of care without blood drawing and laboratory analysis. Comparison with a multiparameter IDE test revealed that ZnPP‐NI values of 40 μmol/mol heme or less allow exclusion of IDE, whereas at 65 μmol/mol heme or greater, IDE is very likely if other causes of increased values are excluded. In these cases (77% of our patients), ZnPP‐NI may suffice for a diagnosis, while values in between require analysis of additional iron status parameters.
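The two decision thresholds in the conclusion translate directly into a three-way triage. A hypothetical sketch (the thresholds come from the abstract; the function and its return labels are illustrative, not a clinical tool):

```python
def znpp_triage(znpp_ni):
    """Classify a noninvasive ZnPP-NI reading (umol/mol heme)
    per the <=40 / >=65 thresholds reported in the study."""
    if znpp_ni <= 40:
        return "IDE excluded"
    if znpp_ni >= 65:
        return "IDE very likely (rule out other causes of elevation)"
    return "indeterminate: measure additional iron status parameters"

print(znpp_triage(38))  # IDE excluded
print(znpp_triage(70))  # IDE very likely (rule out other causes of elevation)
print(znpp_triage(52))  # indeterminate: measure additional iron status parameters
```

Per the abstract, the outer two branches covered 77% of the study's patients, so only the middle branch would trigger additional laboratory workup.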
Estimating intraoperative blood loss is one of the daily challenges for clinicians. Despite the known inaccuracy of visual estimation by anaesthetists and surgeons, it is still the mainstay for estimating surgical blood loss. This review aims to highlight the strengths and weaknesses of currently used measurement methods. A systematic review of studies on the estimation of blood loss was carried out, including studies investigating the accuracy of techniques for quantifying blood loss in vivo and in vitro. We excluded non-human trials and studies using only monitoring parameters to estimate blood loss. A meta-analysis was performed to evaluate systematic measurement errors of the different methods; only studies compared against a validated reference, e.g. a haemoglobin extraction assay, were included. Ninety studies met the inclusion criteria for the systematic review and were analyzed. Six studies were included in the meta-analysis, as only these were conducted with a validated reference. The mixed-effects meta-analysis showed the highest correlation with the reference for colorimetric methods (0.93, 95% CI 0.91–0.96), followed by gravimetric (0.77, 95% CI 0.61–0.93) and, finally, visual methods (0.61, 95% CI 0.40–0.82). The bias of the estimated blood loss (ml) against the reference was lowest for colorimetric methods (57.59, 95% CI 23.88–91.3), followed by gravimetric (326.36, 95% CI 201.65–450.86) and visual methods (456.51, 95% CI 395.19–517.83). Of the many studies included, only a few were compared with a validated reference; the majority chose known imprecise procedures as the method of comparison. Colorimetric methods offer the highest degree of accuracy in blood loss estimation, and systems that use colorimetric techniques have a significant advantage in the real-time assessment of blood loss.
Introduction: Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring the distribution of ventilation in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to represent well-defined physiological changes correctly. We defined a set of clinically relevant ventilation conditions and induced them experimentally in eight pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited.
Methods: We used the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We considered the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data.
Results and Conclusions: Our results indicate that, while the variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Several of the analysed algorithms perform well, while some others are significantly worse. Given its vintage and ad hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.
Background and objectives: Preoperative anaemia is an independent risk factor for higher morbidity and mortality, longer hospitalization and increased perioperative transfusion rates. Managing preoperative anaemia is the first of the three pillars of Patient Blood Management (PBM), a multidisciplinary concept to improve patient safety. While various studies provide medical information on (successful) anaemia treatment pathways, knowledge of the organizational details of the diagnosis and management of preoperative anaemia across Europe is scarce.
Materials and methods: To gain information on various aspects of preoperative anaemia management, including organization, financing, diagnostics and treatment, we conducted a survey (74 questions) covering the year 2016 in ten hospitals from seven European nations within the PaBloE (Patient Blood Management in Europe) working group.
Results: Organization and activity in the field of preoperative anaemia management were heterogeneous across the participating hospitals. Almost all hospitals had pathways for managing preoperative anaemia in place; however, only two nations had national guidelines. In six of the ten participating hospitals, preoperative anaemia management was organized by anaesthetists. Diagnostics and treatment focused on iron deficiency anaemia, which, in most hospitals, was corrected with intravenous iron.
Conclusion: Implementation of and approaches to preoperative anaemia management vary across Europe, with a primary focus on treating iron deficiency anaemia. The findings of this survey motivated the hospitals involved to evaluate their practice critically, and may also help other hospitals interested in PBM to develop action plans for the diagnosis and management of preoperative anaemia.
Background: Patient Blood Management (PBM) is a systematic quality improving clinical model to reduce anemia and avoid transfusions in all kinds of clinical settings. Here, we investigated the potential of PBM in oncologic surgery and hypothesized that PBM improves 2-year overall survival (OS).
Methods: We retrospectively analyzed patients treated during the 2 years before and the 2 years after PBM implementation. The primary endpoint was OS at 2 years after surgery. We calculated a required sample size of 824 to detect a 10% improvement in survival in the PBM group.
Results: The analysis comprised 836 patients who underwent oncologic surgery, 389 before and 447 after PBM was implemented. Patients in the PBM+ group presented with normal hemoglobin values before surgery significantly more frequently than those in the PBM− group (56.6 vs. 35.7%; p < 0.001). The number of transfusions was significantly reduced from 5.5 ± 11.1 to 3.0 ± 6.9 units/patient (p < 0.001); moreover, the percentage of patients transfused during the hospital stay was significantly reduced from 62.4 to 40.9% (p < 0.001). Two-year OS was significantly better in the PBM+ group, increasing from 67.0 to 80.1% (p = 0.001). A normal hemoglobin value (> 12 g/dl in women and > 13 g/dl in men) before surgery (HR 0.43, 95% CI 0.29–0.65, p < 0.001) was the only independent predictive factor positively affecting survival.
Conclusions: PBM is a quality improvement tool that is associated with a better mid-term surgical oncologic outcome. The root cause of the improvement is the increased proportion of patients entering surgery with normal hemoglobin values.
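The sex-specific definition of a normal preoperative hemoglobin used in the multivariate model above can be written as a one-line check. This is an illustrative sketch; the function name and argument convention are assumptions, only the two thresholds come from the abstract.

```python
def normal_hemoglobin(hb_g_dl, sex):
    """Normal Hb as defined in the study: > 12 g/dl for women,
    > 13 g/dl for men."""
    threshold = 12.0 if sex == "female" else 13.0
    return hb_g_dl > threshold

print(normal_hemoglobin(12.5, "female"))  # True
print(normal_hemoglobin(12.5, "male"))    # False
```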
Background: The use of cell salvage and autologous blood transfusion has become an important method of blood conservation. So far, there are no clinical data about the performance of the continuous autotransfusion device CATSmart.
Methods: In total, 74 patients undergoing either cardiac or orthopedic surgery were included in this prospective, bicenter, observational technical evaluation to validate the red cell separation process and washout quality of the CATSmart. The target for the red cell separation process was defined as a hematocrit of 55–75% in the packed red cell unit, and the washout-quality target as a removal ratio of 80–100%.
Results: Hematocrit values measured by the CATSmart and by laboratory analysis were 78.5% [71.3%; 84.0%] and 73.7% [67.5%; 75.5%], respectively. Removal ratios for platelets (94.7% [88.2%; 96.7%]), free hemoglobin (89.3% [85.2%; 94.9%]), albumin (97.9% [96.6%; 98.5%]), heparin (99.9% [99.9%; 100.0%]) and potassium (92.5% [90.8%; 95.0%]) were within the target range, while removal of white blood cells was slightly worse (72.4% [57.9%; 87.3%]).
Conclusion: The new autotransfusion device enables sufficient red cell separation and washout quality.
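A washout removal ratio of the kind reported above compares the amount of a substance (concentration times volume) entering the device with the amount remaining in the packed red cell product. A minimal sketch under that common definition; the abstract does not state the study's exact formula, so this is an assumption:

```python
def removal_ratio(c_in, v_in, c_out, v_out):
    """Percent of a solute removed during cell washing:
    100 * (1 - amount in the product / amount in the input)."""
    return 100.0 * (1.0 - (c_out * v_out) / (c_in * v_in))

# e.g. heparin: 10 IU/ml in 500 ml input, 0.01 IU/ml in a 250 ml product
print(round(removal_ratio(10.0, 500.0, 0.01, 250.0), 2))  # 99.95
```

Volumes matter here: because the product is more concentrated but much smaller than the input, comparing concentrations alone would understate the washout quality.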
Background: Intensive care resources are heavily utilized during the COVID-19 pandemic. However, risk stratification and prediction of the clinical outcomes of SARS-CoV-2 patients upon ICU admission remain inadequate. This study aimed to develop a machine learning model, based on retrospective and prospective clinical data, to stratify patient risk and predict ICU survival and outcomes.
Methods: A Germany-wide electronic registry was established to pseudonymously collect admission, therapeutic and discharge information of SARS-CoV-2 ICU patients, both retrospectively and prospectively. Machine learning approaches were evaluated for the accuracy and interpretability of their predictions, and the Explainable Boosting Machine approach was selected as the most suitable method. Individual, non-linear shape functions for predictive parameters and parameter interactions are reported.
Results: 1039 patients were included in the Explainable Boosting Machine model, 596 collected retrospectively and 443 prospectively. The model for prediction of general ICU outcome was shown to be more reliable in predicting survival. Age, inflammatory and thrombotic activity, and severity of ARDS at ICU admission were predictive of ICU survival. Patient age, pulmonary dysfunction and transfer from an external institution were predictors of ECMO therapy. The interaction of patient age with D-dimer levels on admission, and of creatinine levels with the SOFA score without GCS, were predictors of renal replacement therapy.
Conclusions: Using Explainable Boosting Machine analysis, we confirmed and weighed previously reported predictors and identified novel predictors of outcome in critically ill COVID-19 patients. Using this strategy, predictive modeling of COVID-19 ICU patient outcomes can be performed while overcoming the limitations of linear regression models.
Trial registration: ClinicalTrials.gov, NCT04455451.