BACKGROUND: To evaluate the impact of latest-generation automated attenuation-based tube potential selection (ATPS) on image quality and radiation dose in contrast-enhanced chest-abdomen-pelvis computed tomography (CT) examinations for gynaecologic cancer staging.
METHODS: This IRB-approved, single-centre, observer-blinded retrospective study with a waiver of informed consent included a total of 100 patients who underwent contrast-enhanced chest-abdomen-pelvis CT for gynaecologic cancer staging. All patients were examined with activated ATPS for adaptation of tube voltage to body habitus. 50 patients were scanned on a third-generation dual-source CT (DSCT) and another 50 patients on a second-generation DSCT. The predefined image quality setting was kept constant between both groups at 120 kV and a reference tube current of 210 mAs. Subjective image quality was assessed independently by two blinded readers. Attenuation and image noise were measured in several anatomic structures, and signal-to-noise ratio (SNR) was calculated. For the evaluation of radiation exposure, CT dose index (CTDIvol) values were compared.
RESULTS: Diagnostic image quality was obtained in all patients. The median CTDIvol (6.1 mGy, range 3.9-22 mGy) was 40 % lower with the latest-generation ATPS algorithm compared with the previous protocol on second-generation DSCT (median 10.2 mGy, range 5.8-22.8 mGy). Tube potential was reduced to 90 kV in 19 cases, to 100 kV in 23 cases, and to 110 kV in 3 cases of the experimental cohort. These patients received significantly lower radiation exposure compared with the formerly used protocol.
CONCLUSION: Latest-generation automated ATPS on third-generation DSCT provides good diagnostic image quality in chest-abdomen-pelvis CT while reducing the average radiation dose by 40 % compared with the former ATPS protocol on second-generation DSCT.
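As a quick check on the reported 40 % figure, the relative reduction in median CTDIvol follows directly from the two medians quoted above. The short Python snippet below only illustrates that arithmetic; the patient-level dose data are not available here.

```python
# Relative reduction in median CTDIvol between the two protocols,
# using the medians reported in the abstract (values in mGy).
median_ctdi_new = 6.1    # third-generation DSCT with latest-generation ATPS
median_ctdi_old = 10.2   # second-generation DSCT protocol

reduction = (median_ctdi_old - median_ctdi_new) / median_ctdi_old
print(f"Relative dose reduction: {reduction:.1%}")  # ~40 %
```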
Background: Low-dose (LD) computed tomography (CT) imaging is used to lower radiation exposure, especially in vascular imaging; in the current literature, however, this has mostly been evaluated on latest-generation high-end CT systems.
Purpose: To evaluate the effects of reduced tube current on objective and subjective image quality of CT pulmonary angiography (CTPA) performed on a 15-year-old 16-slice CT system.
Material and Methods: Sixty prospectively randomized patients (28 men, 32 women) underwent CTPA on a 15-year-old 16-slice CT scanner. Standard-dose (SD) settings were 100 kV and 150 mAs; LD settings were 100 kV and 50 mAs. Attenuation of the pulmonary trunk and of various anatomic landmarks as well as image noise were measured quantitatively; contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) were calculated. Three independent blinded radiologists subjectively rated each image series using a 5-point grading scale.
Results: The CT dose index (CTDI) of the LD series was 66.46% lower than that of the SD series (2.49 ± 0.55 mGy versus 7.42 ± 1.17 mGy). Attenuation of the pulmonary trunk was similar in both series (SD 409.55 ± 91.04 HU; LD 380.43 ± 93.11 HU; P = 0.768). Subjective image analysis showed no significant differences between SD and LD settings regarding the suitability for detection of central and peripheral pulmonary embolism (PE): central SD/LD, 4.88 (intra-class correlation coefficient [ICC], 0.894) / 4.83 (ICC, 0.745); peripheral SD/LD, 4.70 (ICC, 0.943) / 4.57 (ICC, 0.919); all P > 0.4.
Conclusion: The LD protocol, on a 15-year-old CT scanner system without current high-end hardware or post-processing tools, led to a dose reduction of approximately 67% with similar subjective image quality and delineation of central and peripheral pulmonary arteries.
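The Methods above state that SNR and CNR were calculated from the measured attenuation and image noise, but the exact definitions are not given in the abstract. The sketch below uses the definitions most commonly applied in CT image quality studies (SNR = mean ROI attenuation / noise; CNR = attenuation difference between target and background / noise); treat it as an assumption rather than the authors' implementation, and the ROI values as made-up examples.

```python
import numpy as np

def snr(roi_hu: np.ndarray, noise_sd: float) -> float:
    """Signal-to-noise ratio: mean ROI attenuation divided by image noise (SD in HU)."""
    return float(np.mean(roi_hu)) / noise_sd

def cnr(target_hu: np.ndarray, background_hu: np.ndarray, noise_sd: float) -> float:
    """Contrast-to-noise ratio: attenuation difference between a target (e.g. the
    pulmonary trunk) and a background (e.g. paraspinal muscle) divided by image noise."""
    return (float(np.mean(target_hu)) - float(np.mean(background_hu))) / noise_sd

# Example with illustrative ROI measurements (HU):
trunk = np.array([395.0, 410.0, 402.0])
muscle = np.array([52.0, 48.0, 50.0])
noise = 18.0  # SD of a homogeneous ROI
print(f"SNR = {snr(trunk, noise):.1f}, CNR = {cnr(trunk, muscle, noise):.1f}")
```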
Dual-energy CT (DECT) has entered clinical routine as an imaging technique with unique postprocessing capabilities that improve the evaluation of different body areas. The virtual non-calcium (VNCa) reconstruction algorithm has shown beneficial effects on the depiction of bone marrow pathologies such as bone marrow edema. Its main advantage is the ability to substantially increase the image contrast of structures that are normally obscured by calcium mineral, such as calcified vessels or bone marrow, and thus to depict a large number of traumatic, inflammatory, infiltrative, and degenerative disorders affecting either the spine or the appendicular skeleton. VNCa imaging therefore represents another step forward for DECT in imaging conditions and disorders that usually require more expensive and time-consuming techniques such as magnetic resonance imaging, positron emission tomography/CT, or bone scintigraphy. The aim of this review article is to explain the technical background of VNCa imaging, showcase its applicability in different body regions, and provide an updated outlook on the clinical impact of this technique, which goes beyond a mere improvement in image quality.
Objectives: To compare radiation dose and image quality of single-energy (SECT) and dual-energy (DECT) head and neck CT examinations performed with second- and third-generation dual-source CT (DSCT) in matched patient cohorts. Methods: 200 patients (mean age 55.1 ± 16.9 years) who underwent venous-phase head and neck CT with a vendor-preset protocol were retrospectively divided into four equal groups (n = 50) matched by gender and BMI: second-generation DSCT (Group A, SECT, 100 kV; Group B, DECT, 80/Sn140 kV) and third-generation DSCT (Group C, SECT, 100 kV; Group D, DECT, 90/Sn150 kV). Assessment of radiation dose was performed for an average scan length of 27 cm. Contrast-to-noise ratio measurements and dose-independent figure-of-merit calculations of the submandibular gland, thyroid, internal jugular vein, and common carotid artery were analyzed quantitatively. Qualitative image parameters were evaluated regarding overall image quality, artifacts, and reader confidence using 5-point Likert scales. Results: Effective radiation dose (ED) was not significantly different between SECT and DECT acquisition for each scanner generation (p = 0.10). Significantly lower effective radiation dose values (p < 0.01) were observed for third-generation DSCT groups C (1.1 ± 0.2 mSv) and D (1.0 ± 0.3 mSv) compared to second-generation DSCT groups A (1.8 ± 0.1 mSv) and B (1.6 ± 0.2 mSv). Figure-of-merit/contrast-to-noise ratio analysis revealed superior results for third-generation DECT Group D compared to all other groups. Qualitative image parameters showed non-significant differences between all groups (p > 0.06). Conclusion: Contrast-enhanced head and neck DECT can be performed with second- and third-generation DSCT systems without radiation penalty or impaired image quality compared with SECT, while third-generation DSCT is the most dose-efficient acquisition method. Advances in knowledge: Differences in radiation dose between SECT and DECT of the dose-vulnerable head and neck region using DSCT systems have not been evaluated so far. Therefore, this study directly compares radiation dose and image quality of standard SECT and DECT protocols of second- and third-generation DSCT platforms.
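The abstract above refers to a dose-independent figure of merit (FOM) but does not spell out its definition. A common convention in CT dose-efficiency studies is FOM = CNR² / ED; the snippet below illustrates that convention with made-up values and is an assumption, not necessarily the exact formula used by the authors.

```python
def figure_of_merit(cnr: float, effective_dose_msv: float) -> float:
    """Dose-independent figure of merit, here taken as CNR squared divided by
    effective dose (a common convention; the abstract does not state its formula)."""
    return cnr ** 2 / effective_dose_msv

# Illustrative values only: equal CNR at two different effective doses.
print(figure_of_merit(cnr=8.0, effective_dose_msv=1.0))  # lower-dose protocol -> higher FOM
print(figure_of_merit(cnr=8.0, effective_dose_msv=1.6))  # higher-dose protocol -> lower FOM
```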
Objectives: To determine the diagnostic accuracy of dual-energy CT (DECT) virtual noncalcium (VNCa) reconstructions for assessing thoracic disk herniation compared to standard grayscale CT. Methods: In this retrospective study, 87 patients (1131 intervertebral disks; mean age, 66 years; 47 women) who underwent third-generation dual-source DECT and 3.0-T MRI within 3 weeks between November 2016 and April 2020 were included. Five blinded radiologists analyzed standard DECT and color-coded VNCa images after a time interval of 8 weeks for the presence and degree of thoracic disk herniation and spinal nerve root impingement. Consensus reading of independently evaluated MRI series served as the reference standard, assessed by two separate experienced readers. Additionally, image ratings were carried out by using 5-point Likert scales. Results: MRI revealed a total of 133 herniated thoracic disks. Color-coded VNCa images yielded higher overall sensitivity (624/665 [94%; 95% CI, 0.89–0.96] vs 485/665 [73%; 95% CI, 0.67–0.80]), specificity (4775/4990 [96%; 95% CI, 0.90–0.98] vs 4066/4990 [82%; 95% CI, 0.79–0.84]), and accuracy (5399/5655 [96%; 95% CI, 0.93–0.98] vs 4551/5655 [81%; 95% CI, 0.74–0.86]) for the assessment of thoracic disk herniation compared to standard CT (all p < .001). Interrater agreement was excellent for VNCa and fair for standard CT (ϰ = 0.82 vs 0.37; p < .001). In addition, VNCa imaging achieved higher scores regarding diagnostic confidence, image quality, and noise compared to standard CT (all p < .001). Conclusions: Color-coded VNCa imaging yielded substantially higher diagnostic accuracy and confidence for assessing thoracic disk herniation compared to standard CT.
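The diagnostic performance above is reported as raw counts (sensitivity 624/665, specificity 4775/4990 for VNCa) and interrater agreement as a kappa statistic. For readers who want to reproduce such summaries from a confusion matrix, the sketch below shows the standard calculations; the VNCa counts are taken from the abstract, while the reader ratings in the kappa example are made up.

```python
from sklearn.metrics import cohen_kappa_score

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# VNCa counts from the abstract: 624/665 sensitive, 4775/4990 specific,
# which reproduces the reported accuracy of 5399/5655.
print(diagnostic_metrics(tp=624, fp=215, tn=4775, fn=41))

# Interrater agreement between two readers' binary ratings (illustrative data only).
reader_1 = [1, 0, 1, 1, 0, 0, 1]
reader_2 = [1, 0, 1, 0, 0, 0, 1]
print(cohen_kappa_score(reader_1, reader_2))
```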
Background: To assess the potential of radiomic features to quantify components of intraaortic blood and thereby non-invasively predict moderate-to-severe anemia on non-contrast-enhanced CT scans. Methods: One hundred patients (median age, 69 years; range, 19–94 years) who received CT scans of the thoracolumbar spine and blood testing for hemoglobin and hematocrit levels within ± 24 h between 08/2018 and 11/2019 were retrospectively included. Intraaortic blood was segmented using a spherical volume of interest of 1 cm diameter, followed by radiomic analysis with PyRadiomics software. Feature selection was performed by analysis of correlation and collinearity. The final feature set was used to differentiate moderate-to-severe anemia. Random forest machine learning was applied and predictive performance was assessed. A decision tree was built to propose a cut-off value in CT Hounsfield units (HU). Results: First-order radiomic features showed high correlation with hemoglobin and hematocrit levels (p < 0.001 to p = 0.032). The top 3 features showed high correlation with hemoglobin values (p) and minimal collinearity (r) with the top-ranked feature: Median (p < 0.001), Energy (p = 0.002, r = 0.387), and Minimum (p = 0.032, r = 0.437). Median (p < 0.001) and Minimum (p = 0.003) differed between moderate-to-severe anemia and the non-anemic state. In the random forest analysis, Median alone yielded superior predictive performance compared with the combination of Median and Minimum (p(AUC) = 0.015, p(precision) = 0.017, p(accuracy) = 0.612). A Median value ≤ 36.5 HU indicated moderate-to-severe anemia (accuracy = 0.90, precision = 0.80). Conclusions: First-order radiomic features correlate with hemoglobin levels and may be feasible for the prediction of moderate-to-severe anemia. Higher-dimensional radiomic features did not add value in our exemplary use case of intraluminal blood component assessment.
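The pipeline described above (spherical aortic VOI, PyRadiomics first-order features, correlation-based selection, random forest classification, and a single-feature decision-tree cutoff) can be sketched roughly as follows. File names, the label column, and the tree depth are assumptions for illustration; only the PyRadiomics and scikit-learn calls shown are real APIs, and this is not the authors' code.

```python
import pandas as pd
from radiomics import featureextractor                      # PyRadiomics
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# 1) Extract first-order features from the intraaortic VOI (paths are placeholders).
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
features = extractor.execute("ct_volume.nii.gz", "aortic_voi_mask.nii.gz")

# 2) Assemble a per-patient feature table and binary anemia labels (hypothetical CSV).
df = pd.read_csv("radiomics_features.csv")
X = df[["Median", "Energy", "Minimum"]]          # features kept after correlation/collinearity analysis
y = df["moderate_to_severe_anemia"]              # 1 = anemic, 0 = non-anemic

# 3) Random forest as the classifier, evaluated by cross-validated AUC.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())

# 4) A depth-1 decision tree on the Median feature yields an interpretable HU cutoff.
stump = DecisionTreeClassifier(max_depth=1).fit(X[["Median"]], y)
print("Proposed HU cutoff:", stump.tree_.threshold[0])
```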
Objectives: To compare dual-energy CT (DECT) and MRI for assessing presence and extent of traumatic bone marrow edema (BME) and fracture line depiction in acute vertebral fractures. Methods: Eighty-eight consecutive patients who underwent dual-source DECT and 3-T MRI of the spine were retrospectively analyzed. Five radiologists assessed all vertebrae for presence and extent of BME and for identification of acute fracture lines on MRI and, after 12 weeks, on DECT series. Additionally, image quality, image noise, and diagnostic confidence for overall diagnosis of acute vertebral fracture were assessed. Quantitative analysis of CT numbers was performed by a sixth radiologist. Two radiologists analyzed MRI and grayscale DECT series to define the reference standard. Results: For assessing BME presence and extent, DECT showed high sensitivity (89% and 84%, respectively) and specificity (98% in both), and similarly high diagnostic confidence compared to MRI (2.30 vs. 2.32; range 0–3) for the detection of BME (p = .72). For evaluating acute fracture lines, MRI achieved high specificity (95%), moderate sensitivity (76%), and a significantly lower diagnostic confidence compared to DECT (2.42 vs. 2.62, range 0–3) (p < .001). A cutoff value of − 0.43 HU provided a sensitivity of 89% and a specificity of 90% for diagnosing BME, with an overall AUC of 0.96. Conclusions: DECT and MRI provide high diagnostic confidence and image quality for assessing acute vertebral fractures. While DECT achieved high overall diagnostic accuracy in the analysis of BME presence and extent, MRI provided moderate sensitivity and lower confidence for evaluating fracture lines.
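The − 0.43 HU cutoff above is the kind of threshold typically derived from a receiver operating characteristic (ROC) analysis of virtual non-calcium attenuation values, for example by maximizing the Youden index. The abstract does not state how the cutoff was chosen, so the scikit-learn sketch below is an illustrative assumption using simulated data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated VNCa CT numbers (HU) per vertebra and ground-truth BME labels (1 = edema).
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(-40, 25, 200), rng.normal(20, 25, 60)])
bme = np.concatenate([np.zeros(200), np.ones(60)])

fpr, tpr, thresholds = roc_curve(bme, hu)   # higher VNCa attenuation -> more likely edema
youden = tpr - fpr
best = np.argmax(youden)                    # cutoff maximizing sensitivity + specificity - 1

print(f"AUC = {roc_auc_score(bme, hu):.2f}, proposed cutoff = {thresholds[best]:.1f} HU")
print(f"Sensitivity = {tpr[best]:.2f}, Specificity = {1 - fpr[best]:.2f}")
```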
Background: This prospective randomized trial was designed to compare the performance of conventional transarterial chemoembolization (cTACE) using Lipiodol only with cTACE using Lipiodol plus degradable starch microspheres (DSM) for hepatocellular carcinoma (HCC) in BCLC stage B, based on metric tumor response. Methods: Sixty-one patients (44 men; 17 women; age range 44–85 years) with HCC were evaluated in this IRB-approved, HIPAA-compliant study. The treatment protocol included three TACE sessions at 4-week intervals, in all cases with Mitomycin C as the chemotherapeutic agent. Multiparametric magnetic resonance imaging (MRI) was performed prior to the first and 4 weeks after the last TACE. Two treatment groups were determined using a randomization sheet: in 30 patients, TACE was performed using Lipiodol only (group 1); in 31 cases, Lipiodol was combined with DSM (group 2). Response according to tumor volume, diameter, mRECIST criteria, and the development of necrotic areas was analyzed and compared using the Mann–Whitney U test, Kruskal–Wallis H test, and Spearman's rho. Survival data were analyzed using the Kaplan–Meier estimator. Results: A mean overall tumor volume reduction of 21.45% (± 62.34%) was observed, with an average tumor volume reduction of 19.95% in group 1 vs. 22.95% in group 2 (p = 0.653). Mean diameter reduction was 6.26% (± 34.75%): 11.86% in group 1 vs. 4.06% in group 2 (p = 0.678). Regarding mRECIST criteria, group 1 versus group 2 showed complete response in 0 versus 3 cases, partial response in 2 versus 7 cases, stable disease in 21 versus 17 cases, and progressive disease in 3 versus 1 case (p = 0.010). Mean estimated overall survival was 33.4 months (95% CI 25.5–41.4) for cTACE with Lipiodol plus DSM and 32.5 months (95% CI 26.6–38.4) for cTACE with Lipiodol only (p = 0.844). Conclusions: The additional application of DSM during cTACE showed a significant benefit in tumor response according to mRECIST compared with cTACE using Lipiodol only. No benefit in survival time was observed.
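The group comparisons above rely on the Mann–Whitney U test and the Kaplan–Meier estimator. Below is a minimal sketch of that analysis pattern with made-up group data (not the trial data), assuming SciPy and the lifelines package are available.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from lifelines import KaplanMeierFitter

# Made-up per-patient tumor volume reductions (%) for the two treatment groups.
group1 = np.array([15.0, 30.2, -5.1, 22.4, 18.9])   # Lipiodol only
group2 = np.array([25.3, 10.8, 35.6, 19.7, 28.1])   # Lipiodol + DSM
stat, p = mannwhitneyu(group1, group2)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")

# Made-up survival data: follow-up time in months and event indicator (1 = death).
months = np.array([12, 30, 41, 8, 25, 36])
event = np.array([1, 0, 1, 1, 0, 0])
kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=event)
print("Median survival (months):", kmf.median_survival_time_)
```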
Objectives: To analyze the performance of radiological assessment categories and quantitative computational analysis of apparent diffusion coefficient (ADC) maps using various machine learning algorithms to differentiate clinically significant versus insignificant prostate cancer (PCa). Methods: Retrospectively, 73 patients were included in the study. The patients (mean age, 66.3 ± 7.6 years) were examined with multiparametric MRI (mpMRI) prior to radical prostatectomy (n = 33) or targeted biopsy (n = 40). The index lesion was annotated on the MRI ADC maps and on the corresponding histologic slides according to the highest Gleason Grade Group (GrG). Volumes of interest (VOIs) were determined for each lesion and for normal-appearing peripheral zone. VOIs were processed by radiomic analysis. For the classification of lesions according to their clinical significance (GrG ≥ 3), principal component (PC) analysis and univariate analysis (UA) with subsequent support vector machines, neural networks, and random forest analysis were performed. Results: PC analysis discriminated between benign and malignant prostate tissue but yielded no stratification of PCa lesions according to their clinical significance, whereas UA revealed differences in clinical assessment categories and radiomic features. We trained three classification models with fifteen feature subsets. We identified a subset of shape features that improved the diagnostic accuracy of the clinical assessment categories (maximum increase in diagnostic accuracy ΔAUC = + 0.05, p < 0.001), while other combinations of features and models reduced overall accuracy. Conclusions: The ability of radiomic features to differentiate PCa lesions according to their clinical significance remains controversial: it depends on feature selection and the employed machine learning algorithm and can result in either improvement or reduction of diagnostic performance.
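A hedged sketch of the classification step described above, comparing a support vector machine, a small neural network, and a random forest on a radiomic feature table via cross-validated AUC. The CSV file and column names are placeholders, and the exact feature subsets and model settings of the study are not reproduced; only scikit-learn is assumed.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("prostate_adc_radiomics.csv")          # hypothetical feature table
y = (df["gleason_grade_group"] >= 3).astype(int)        # clinically significant PCa
X = df.drop(columns=["gleason_grade_group"])

# Unsupervised view of the feature space (analogous to the PC analysis above).
pcs = make_pipeline(StandardScaler(), PCA(n_components=2)).fit_transform(X)

# Compare three classifiers on the same feature subset with cross-validated AUC.
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "NeuralNet": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000)),
    "RandomForest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.2f}")
```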
Myocardial fibrosis and inflammation by CMR predict cardiovascular outcome in people living with HIV
(2021)
Objectives: The goal of this study was to examine prognostic relationships between cardiac imaging measures and cardiovascular outcome in people living with human immunodeficiency virus (HIV) (PLWH) on highly active antiretroviral therapy (HAART).
Background: PLWH have a higher prevalence of cardiovascular disease and heart failure (HF) compared with the noninfected population. The pathophysiological drivers of myocardial dysfunction and worse cardiovascular outcome in HIV remain poorly understood.
Methods: This prospective observational longitudinal study included consecutive PLWH on long-term HAART undergoing cardiac magnetic resonance (CMR) examination for assessment of myocardial volumes and function, T1 and T2 mapping, perfusion, and scar. Time-to-event analysis was performed from the index CMR examination to the first single event per patient. The primary endpoint was an adjudicated adverse cardiovascular event (cardiovascular mortality, nonfatal acute coronary syndrome, an appropriate device discharge, or a documented HF hospitalization).
Results: A total of 156 participants (62% male; age [median, interquartile range]: 50 years [42 to 57 years]) were included. During a median follow-up of 13 months (9 to 19 months), 24 events were observed (4 HF deaths, 1 sudden cardiac death, 2 nonfatal acute myocardial infarctions, 1 appropriate device discharge, and 16 HF hospitalizations). Patients with events had higher native T1 (median [interquartile range]: 1,149 ms [1,115 to 1,163 ms] vs. 1,110 ms [1,075 to 1,138 ms]), native T2 (40 ms [38 to 41 ms] vs. 37 ms [36 to 39 ms]), left ventricular (LV) mass index (65 g/m2 [49 to 77 g/m2] vs. 57 g/m2 [49 to 64 g/m2]), and N-terminal pro–B-type natriuretic peptide (109 pg/l [25 to 337 pg/l] vs. 48 pg/l [23 to 82 pg/l]) (all p < 0.05). In multivariable analyses, native T1 was independently predictive of adverse events (chi-square test, 15.9; p < 0.001; native T1 [per 10 ms] hazard ratio [95% confidence interval]: 1.20 [1.08 to 1.33]; p = 0.001), followed by a model that also included LV mass (chi-square test, 17.1; p < 0.001). Traditional cardiovascular risk scores were not predictive of the adverse events.
Conclusions: Our findings reveal important prognostic associations of diffuse myocardial fibrosis and LV remodeling in PLWH. These results may support development of personalized approaches to screening and early intervention to reduce the burden of HF in PLWH (International T1 Multicenter Outcome Study; NCT03749343).
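The multivariable analysis above reports a hazard ratio per 10 ms of native T1 from a time-to-event model. A minimal sketch of that kind of analysis with the lifelines Cox proportional hazards model is shown below; the data frame and its column names are placeholders, and the rescaling by 10 ms only reproduces the "per 10 ms" reporting convention, not the authors' exact model.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: follow-up time, event indicator, and CMR measures.
df = pd.read_csv("plwh_cmr_outcomes.csv")               # placeholder file name
df["native_t1_per10ms"] = df["native_t1_ms"] / 10.0     # so the HR is expressed per 10 ms

cph = CoxPHFitter()
cph.fit(
    df[["followup_months", "event", "native_t1_per10ms", "lv_mass_index"]],
    duration_col="followup_months",
    event_col="event",
)
cph.print_summary()        # hazard ratios with 95% confidence intervals
print(cph.hazard_ratios_)  # e.g. HR per 10 ms native T1
```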