Purpose: To investigate the diagnostic performance of noise-optimized virtual monoenergetic images (VMI+) in dual-energy CT (DECT) of portal vein thrombosis (PVT) compared to standard reconstructions. Method: This retrospective, single-center study included 107 patients (68 men; mean age, 60.1 ± 10.7 years) with malignant or cirrhotic liver disease and suspected PVT who had undergone contrast-enhanced portal-phase DECT of the abdomen. Linearly blended (M_0.6) and virtual monoenergetic images were calculated using both standard VMI and noise-optimized VMI+ algorithms in 20 keV increments from 40 to 100 keV. Quantitative measurements were performed in the portal vein for objective contrast-to-noise ratio (CNR) calculation. The image series showing the greatest CNR were further assessed for subjective image quality and diagnostic accuracy of PVT detection by two blinded radiologists. Results: PVT was present in 38 subjects. VMI+ reconstructions at 40 keV revealed the best objective image quality (CNR, 9.6 ± 4.3) compared to all other image reconstructions (p < 0.01). In the standard VMI series, CNR peaked at 60 keV (CNR, 4.7 ± 2.1). Qualitative image parameters showed the highest image quality rating scores for the 60 keV VMI+ series (median, 4) (p ≤ 0.03). The greatest diagnostic accuracy for the diagnosis of PVT was found for the 40 keV VMI+ series (sensitivity, 96%; specificity, 96%) compared to M_0.6 images (sensitivity, 87%; specificity, 92%), 60 keV VMI (sensitivity, 87%; specificity, 97%), and 60 keV VMI+ reconstructions (sensitivity, 92%; specificity, 97%) (p ≤ 0.01). Conclusions: Low-keV VMI+ reconstructions resulted in significantly improved diagnostic performance for the detection of PVT compared to other DECT reconstruction algorithms.
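The abstract reports objective CNR values without stating the formula used; a minimal sketch of a common contrast-to-noise ratio definition (ROI contrast divided by image noise) is shown below. The definition and the measurement values are assumptions for illustration, not the study's documented protocol.

```python
def cnr(roi_mean_hu: float, background_mean_hu: float, background_sd_hu: float) -> float:
    """Contrast-to-noise ratio: attenuation difference between a vessel ROI
    and background tissue, divided by image noise (background SD)."""
    if background_sd_hu <= 0:
        raise ValueError("background SD must be positive")
    return abs(roi_mean_hu - background_mean_hu) / background_sd_hu

# Hypothetical portal-phase measurements (Hounsfield units):
# portal vein ROI, adjacent liver parenchyma, and background noise
print(round(cnr(250.0, 105.0, 15.0), 2))  # 9.67
```

Low-keV VMI+ series raise iodine attenuation (the numerator) while the noise-optimized algorithm limits growth of the noise term (the denominator), which is why CNR peaks at 40 keV in that series.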
Highlights
• MRI and ultrasound provided significant correlations between findings suggestive of vasculitis and the final diagnosis.
• Careful selection of available imaging techniques is warranted considering the time course, location, and clinical history.
• Considering its moderate diagnostic power to distinguish tracer uptake, a holistic view of PET/CT findings is essential.
Abstract
Purpose: To assess the diagnostic value of different imaging modalities in distinguishing systemic vasculitis from other internal and immunological diseases.
Methods: This retrospective study included 134 patients with suspected vasculitis who underwent ultrasound, magnetic resonance imaging (MRI), or 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) between 01/2010 and 01/2019, finally consisting of 70 individuals with vasculitis. The main study parameter was the confirmation of the diagnosis using one of the three different imaging modalities, with the adjudicated clinical and histopathological diagnosis as the gold standard. A secondary parameter was the morphological appearance of the vessel affected by vasculitis.
Results: Patients with systemic vasculitis had myriad clinical manifestations with joint pain as the most common symptom. We found significant correlations between different imaging findings suggestive of vasculitis and the final adjudicated clinical diagnosis. In this context, on MRI, vessel wall thickening, edema, and diameter differed significantly between vasculitis and non-vasculitis groups (p < 0.05). Ultrasound revealed different findings that may serve as red flags in identifying patients with vasculitis, such as vascular occlusion or halo sign (p = 0.02 vs. non-vasculitis group). Interestingly, comparing maximal standardized uptake values from PET/CT examinations with vessel wall thickening or vessel diameter did not result in significant differences (p > 0.05).
Conclusions: We observed significant correlations between different imaging findings suggestive of vasculitis on ultrasound or MRI and the final adjudicated diagnosis. While ultrasound and MRI were considered suitable imaging methods for detecting and discriminating typical vascular changes, 18F-FDG PET/CT requires careful timing and patient selection given its moderate diagnostic accuracy.
Purpose: To identify transjugular intrahepatic portosystemic shunt (TIPS) thrombosis in abdominal CT scans applying quantitative image analysis.
Materials and methods: We retrospectively screened 184 patients to include 20 patients (male, 8; female, 12; mean age, 60.7 ± 8.87 years) with (case, n = 10) and without (control, n = 10) in-TIPS thrombosis who underwent clinically indicated contrast-enhanced and unenhanced abdominal CT followed by conventional TIPS angiography between 08/2014 and 06/2020. First, images were scored visually. Second, region-of-interest (ROI)-based quantitative measurements of CT attenuation were performed in the inferior vena cava (IVC), the portal vein, and at four TIPS locations. Minimum, maximum, and average Hounsfield unit (HU) values were used as absolute and relative quantitative features. We analyzed the features with univariate testing.
Results: Subjective scores identified in-TIPS thrombosis in contrast-enhanced scans with an accuracy of 0.667–0.833. Patients with in-TIPS thrombosis had significantly lower average (p < 0.001), minimum (p < 0.001), and maximum HU (p = 0.043) in contrast-enhanced images. The in-TIPS/IVC ratio in contrast-enhanced images was significantly lower in patients with in-TIPS thrombosis (p < 0.001). No significant differences were found for unenhanced images. Analyzing the visually most suspicious ROI with consecutive calculation of its ratio to the IVC, all patients with a ratio < 1 suffered from in-TIPS thrombosis (p < 0.001; sensitivity and specificity, 100%).
Conclusion: Quantitative analysis of abdominal CT scans facilitates the stratification of in-TIPS thrombosis. In contrast-enhanced scans, an in-TIPS/IVC ratio < 1 could non-invasively identify all patients with in-TIPS thrombosis.
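The decision rule reported in this abstract reduces to a single attenuation ratio with a fixed cutoff. A minimal sketch, with hypothetical HU values (the function names and example readings are illustrative, not from the study):

```python
def in_tips_ivc_ratio(tips_hu: float, ivc_hu: float) -> float:
    """Ratio of mean attenuation in the TIPS lumen to that in the IVC."""
    return tips_hu / ivc_hu

def suspect_thrombosis(tips_hu: float, ivc_hu: float, threshold: float = 1.0) -> bool:
    """Flag possible in-TIPS thrombosis when the lumen enhances less
    than the reference vessel, i.e. the ratio falls below the threshold."""
    return in_tips_ivc_ratio(tips_hu, ivc_hu) < threshold

# Hypothetical contrast-enhanced measurements (HU)
print(suspect_thrombosis(80.0, 130.0))   # True: hypodense TIPS lumen
print(suspect_thrombosis(160.0, 130.0))  # False
```

Normalizing to the IVC rather than using absolute HU makes the feature robust to differences in contrast timing and injection protocol between scans.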
Background: Various studies have investigated the most effective and safest type of treatment for vertebral compression fractures (VCFs). Long-term results are needed for qualitative evaluation.
Purpose: The purpose of the study is to evaluate the effectiveness of percutaneous vertebroplasty (PVP) and percutaneous kyphoplasty (PKP) procedures for VCFs.
Materials and Methods: Forty-nine patients who received either PVP or PKP between 2002 and 2015 returned a specially developed questionnaire and were included in a cross-sectional outcome analysis. The questionnaire assessed pain development by use of a visual analog scale (VAS). Imaging data (CT scans) were retrospectively analyzed for identification of cement leakage.
Results: Patients’ VAS scores significantly decreased after treatment (7.0 ± 3.4 to 3.7 ± 3.4; p < 0.001). The average pain reduction was −3.3 ± 3.8 (median, −3.5; p < 0.001) in patients treated with PVP and −4.0 ± 3.9 (median, −4.5; p < 0.001) in patients treated with PKP. Fifteen patients (41.7%) receiving PVP and four patients (30.7%) receiving PKP experienced recurrence of pain. Cement leakage occurred in 10 patients (22.7%). Patients with cement leakage showed comparable VAS score reductions after treatment (6.8 ± 3.5 to 1.4 ± 1.6; p = 0.008). Thirty-nine patients (79.6%) reported an increase in mobility and 41 patients (83.7%) an improvement in quality of life.
Conclusion: Pain reduction by means of PVP or PKP in patients with VCFs was discernible over the period of observation. Both percutaneous vertebroplasty and PKP contribute to the desired treatment results; however, the achieved low pain level may not remain constant.
Simple Summary: Early and accurate diagnosis of breast cancer that has spread to other organs and tissues is crucial, as therapeutic decisions and outcome expectations might change. Computed tomography (CT) is often used to detect breast cancer’s spread, but this method has its weaknesses. The computer-assisted technique “radiomics” extracts grey-level patterns, so-called radiomic features, from medical images, which may reflect underlying biological processes. Our retrospective study therefore evaluated whether breast cancer spread can be predicted by radiomic features derived from iodine maps, an application available on a new generation of CT scanners that visualizes tissue blood flow. Based on 77 patients with newly diagnosed breast cancer, we found that this approach might indeed predict cancer spread to other organs/tissues. In the future, radiomics may serve as an additional tool for cancer detection and risk assessment.
Abstract: Dual-energy CT (DECT) iodine maps enable quantification of iodine concentrations as a marker for tissue vascularization. We investigated whether iodine map radiomic features derived from staging DECT enable prediction of breast cancer metastatic status, and whether textural differences exist between primary breast cancers and metastases. Seventy-seven treatment-naïve patients with biopsy-proven breast cancers were included retrospectively (41 non-metastatic, 36 metastatic). Radiomic features including first-, second-, and higher-order metrics as well as shape descriptors were extracted from volumes of interest on iodine maps. Following principal component analysis, a multilayer perceptron artificial neural network (MLP-NN) was used for classification (70% of cases for training, 30% validation). Histopathology served as reference standard. MLP-NN predicted metastatic status with AUCs of up to 0.94, and accuracies of up to 92.6% in the training and 82.6% in the validation datasets. The separation of primary tumor and metastatic tissue yielded AUCs of up to 0.87, with accuracies of up to 82.8% in the training and 85.7% in the validation dataset. DECT iodine map-based radiomic signatures may therefore predict metastatic status in breast cancer patients. In addition, microstructural differences between primary and metastatic breast cancer tissue may be reflected by differences in DECT radiomic features.
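The abstract does not specify the radiomics software or feature counts; as a minimal NumPy sketch of the dimensionality-reduction step that precedes the MLP-NN, the code below projects a hypothetical feature matrix onto its leading principal components via SVD. All dimensions and data here are invented for illustration.

```python
import numpy as np

def pca_reduce(features: np.ndarray, n_components: int) -> np.ndarray:
    """Project a (samples x features) matrix onto its leading principal
    components, computed from the SVD of the mean-centered data."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(42)
X = rng.normal(size=(77, 30))   # 77 patients, 30 hypothetical radiomic features
scores = pca_reduce(X, 5)
print(scores.shape)             # (77, 5)
```

Reducing correlated radiomic features to a few components before training helps avoid overfitting when, as here, the sample size (77 patients) is small relative to the feature count.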
Dual-energy CT (DECT) has become established in clinical routine as an imaging technique with unique postprocessing utilities that improve the evaluation of different body areas. The virtual non-calcium (VNCa) reconstruction algorithm has shown beneficial effects on the depiction of bone marrow pathologies such as bone marrow edema. Its main advantage is the ability to substantially increase the image contrast of structures that are usually covered with calcium mineral, such as calcified vessels or bone marrow, and to depict a large number of traumatic, inflammatory, infiltrative, and degenerative disorders affecting either the spine or the appendicular skeleton. Therefore, VNCa imaging represents another step forward for DECT to image conditions and disorders that usually require the use of more expensive and time-consuming techniques such as magnetic resonance imaging, positron emission tomography/CT, or bone scintigraphy. The aim of this review article is to explain the technical background of VNCa imaging, showcase its applicability in the different body regions, and provide an updated outlook on the clinical impact of this technique, which goes beyond the sole improvement in image quality.
Background: Dual-source dual-energy computed tomography (DECT) offers the potential for opportunistic osteoporosis screening by enabling phantomless bone mineral density (BMD) quantification. This study sought to assess the accuracy and precision of volumetric BMD measurement using dual-source DECT in comparison to quantitative CT (QCT). Methods: A validated spine phantom consisting of three lumbar vertebra equivalents with 50 (L1), 100 (L2), and 200 mg/cm3 (L3) calcium hydroxyapatite (HA) concentrations was scanned employing third-generation dual-source DECT and QCT. While BMD assessment based on QCT required an additional standardised bone density calibration phantom, the DECT technique operated by using a dedicated postprocessing software based on material decomposition without requiring calibration phantoms. Accuracy and precision of both modalities were compared by calculating measurement errors. In addition, correlation and agreement analyses were performed using Pearson correlation, linear regression, and Bland-Altman plots. Results: DECT-derived BMD values differed significantly from those obtained by QCT (p < 0.001) and were found to be closer to true HA concentrations. Relative measurement errors were significantly smaller for DECT in comparison to QCT (L1, 0.94% versus 9.68%; L2, 0.28% versus 5.74%; L3, 0.24% versus 3.67%, respectively). DECT demonstrated better BMD measurement repeatability compared to QCT (coefficient of variation < 4.29% for DECT, < 6.74% for QCT). Both methods correlated well to each other (r = 0.9993; 95% confidence interval 0.9984–0.9997; p < 0.001) and revealed substantial agreement in Bland-Altman plots. Conclusions: Phantomless dual-source DECT-based BMD assessment of lumbar vertebra equivalents using material decomposition showed higher diagnostic accuracy compared to QCT.
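The relative measurement errors quoted above compare a measured BMD against the phantom's known HA concentration. A short sketch of that calculation; the 100.28 mg/cm3 reading is a hypothetical value chosen to mirror the reported 0.28% error for the L2 insert, not a measurement from the study:

```python
def relative_error_percent(measured: float, true_value: float) -> float:
    """Absolute deviation of a measurement from the known value,
    expressed as a percentage of the known value."""
    return abs(measured - true_value) / true_value * 100.0

# Hypothetical DECT reading against the phantom's 100 mg/cm3 insert (L2)
print(round(relative_error_percent(100.28, 100.0), 2))  # 0.28
```

Because the phantom's HA concentrations are ground truth, this error directly measures accuracy, whereas the coefficient of variation across repeated scans measures precision.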
Background: This prospective randomized trial was designed to compare the performance of conventional transarterial chemoembolization (cTACE) using Lipiodol only with the additional use of degradable starch microspheres (DSM) for hepatocellular carcinoma (HCC) in BCLC stage B, based on metric tumor response. Methods: Sixty-one patients (44 men; 17 women; age range, 44–85 years) with HCC were evaluated in this IRB-approved, HIPAA-compliant study. The treatment protocol included three TACE sessions at 4-week intervals, in all cases with mitomycin C as the chemotherapeutic agent. Multiparametric magnetic resonance imaging (MRI) was performed prior to the first and 4 weeks after the last TACE. Two treatment groups were determined using a randomization sheet: in 30 patients, TACE was performed using Lipiodol only (group 1); in 31 cases, Lipiodol was combined with DSM (group 2). Response according to tumor volume, diameter, and mRECIST criteria as well as the development of necrotic areas were analyzed and compared using the Mann–Whitney U test, Kruskal–Wallis H test, and Spearman's rho. Survival data were analyzed using the Kaplan–Meier estimator. Results: A mean overall tumor volume reduction of 21.45% (± 62.34%) was observed, with an average tumor volume reduction of 19.95% in group 1 vs. 22.95% in group 2 (p = 0.653). Mean diameter reduction was 6.26% (± 34.75%): 11.86% in group 1 vs. 4.06% in group 2 (p = 0.678). Regarding mRECIST criteria, group 1 versus group 2 showed complete response in 0 versus 3 cases, partial response in 2 versus 7 cases, stable disease in 21 versus 17 cases, and progressive disease in 3 versus 1 cases (p = 0.010). Mean estimated overall survival was 33.4 months (95% CI 25.5–41.4) for cTACE with Lipiodol plus DSM and 32.5 months (95% CI 26.6–38.4) for cTACE with Lipiodol only (p = 0.844).
Conclusions: The additional application of DSM during cTACE showed a significant benefit in tumor response according to mRECIST compared to cTACE with Lipiodol only. No benefit in survival time was observed.
Myocardial fibrosis and inflammation by CMR predict cardiovascular outcome in people living with HIV
(2021)
Objectives: The goal of this study was to examine prognostic relationships between cardiac imaging measures and cardiovascular outcome in people living with human immunodeficiency virus (HIV) (PLWH) on highly active antiretroviral therapy (HAART).
Background: PLWH have a higher prevalence of cardiovascular disease and heart failure (HF) compared with the noninfected population. The pathophysiological drivers of myocardial dysfunction and worse cardiovascular outcome in HIV remain poorly understood.
Methods: This prospective observational longitudinal study included consecutive PLWH on long-term HAART undergoing cardiac magnetic resonance (CMR) examination for assessment of myocardial volumes and function, T1 and T2 mapping, perfusion, and scar. Time-to-event analysis was performed from the index CMR examination to the first single event per patient. The primary endpoint was an adjudicated adverse cardiovascular event (cardiovascular mortality, nonfatal acute coronary syndrome, an appropriate device discharge, or a documented HF hospitalization).
Results: A total of 156 participants (62% male; age [median, interquartile range]: 50 years [42 to 57 years]) were included. During a median follow-up of 13 months (9 to 19 months), 24 events were observed (4 HF deaths, 1 sudden cardiac death, 2 nonfatal acute myocardial infarctions, 1 appropriate device discharge, and 16 HF hospitalizations). Patients with events had higher native T1 (median [interquartile range]: 1,149 ms [1,115 to 1,163 ms] vs. 1,110 ms [1,075 to 1,138 ms]); native T2 (40 ms [38 to 41 ms] vs. 37 ms [36 to 39 ms]); left ventricular (LV) mass index (65 g/m2 [49 to 77 g/m2] vs. 57 g/m2 [49 to 64 g/m2]), and N-terminal pro–B-type natriuretic peptide (109 pg/l [25 to 337 pg/l] vs. 48 pg/l [23 to 82 pg/l]) (all p < 0.05). In multivariable analyses, native T1 was independently predictive of adverse events (chi-square test, 15.9; p < 0.001; native T1 [10 ms] hazard ratio [95% confidence interval]: 1.20 [1.08 to 1.33]; p = 0.001), followed by a model that also included LV mass (chi-square test, 17.1; p < 0.001). Traditional cardiovascular risk scores were not predictive of the adverse events.
Conclusions: Our findings reveal important prognostic associations of diffuse myocardial fibrosis and LV remodeling in PLWH. These results may support development of personalized approaches to screening and early intervention to reduce the burden of HF in PLWH (International T1 Multicenter Outcome Study; NCT03749343).
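The reported hazard ratio of 1.20 is scaled per 10 ms of native T1. As an illustrative calculation only (it assumes a log-linear Cox model, which the abstract does not state explicitly), the HR compounds multiplicatively over larger T1 differences:

```python
def scaled_hazard_ratio(hr_per_unit: float, n_units: float) -> float:
    """Hazard ratio for n_units increments of the covariate,
    assuming a log-linear (proportional hazards) model."""
    return hr_per_unit ** n_units

# HR per 10 ms of native T1 is 1.20; a 40 ms higher T1 spans 4 increments
print(round(scaled_hazard_ratio(1.20, 4), 2))  # 2.07
```

Under this assumption, the roughly 39 ms gap between the median T1 of patients with and without events would correspond to about a doubling of hazard.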
Objectives: To compare radiation dose and image quality of single-energy (SECT) and dual-energy (DECT) head and neck CT examinations performed with second- and third-generation dual-source CT (DSCT) in matched patient cohorts. Methods: 200 patients (mean age 55.1 ± 16.9 years) who underwent venous phase head and neck CT with a vendor-preset protocol were retrospectively divided into four equal groups (n = 50) matched by gender and BMI: second- (Group A, SECT, 100-kV; Group B, DECT, 80/Sn140-kV) and third-generation DSCT (Group C, SECT, 100-kV; Group D, DECT, 90/Sn150-kV). Assessment of radiation dose was performed for an average scan length of 27 cm. Contrast-to-noise ratio measurements and dose-independent figure-of-merit calculations of the submandibular gland, thyroid, internal jugular vein, and common carotid artery were analyzed quantitatively. Qualitative image parameters were evaluated regarding overall image quality, artifacts, and reader confidence using 5-point Likert scales. Results: Effective radiation dose (ED) was not significantly different between SECT and DECT acquisition for each scanner generation (p = 0.10). Significantly lower effective radiation dose values (p < 0.01) were observed for third-generation DSCT groups C (1.1 ± 0.2 mSv) and D (1.0 ± 0.3 mSv) compared to second-generation DSCT groups A (1.8 ± 0.1 mSv) and B (1.6 ± 0.2 mSv). Figure-of-merit/contrast-to-noise ratio analysis revealed superior results for third-generation DECT Group D compared to all other groups. Qualitative image parameters showed non-significant differences between all groups (p > 0.06). Conclusion: Contrast-enhanced head and neck DECT can be performed with second- and third-generation DSCT systems without a radiation penalty or impaired image quality compared with SECT, while third-generation DSCT is the most dose-efficient acquisition method.
Advances in knowledge: Differences in radiation dose between SECT and DECT of the dose-vulnerable head and neck region using DSCT systems have not been evaluated so far. Therefore, this study directly compares radiation dose and image quality of standard SECT and DECT protocols of second- and third-generation DSCT platforms.
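The dose-independent figure of merit used in such protocol comparisons is commonly defined as CNR squared divided by effective dose; this definition and the example values below are assumptions for illustration, since the abstract does not spell out the formula.

```python
def figure_of_merit(cnr: float, effective_dose_msv: float) -> float:
    """Dose-independent figure of merit: FOM = CNR^2 / effective dose.
    Higher values mean more image contrast per unit of radiation dose."""
    if effective_dose_msv <= 0:
        raise ValueError("effective dose must be positive")
    return cnr ** 2 / effective_dose_msv

# Hypothetical protocols with equal CNR but different effective dose
print(figure_of_merit(10.0, 1.0))                              # 100.0
print(figure_of_merit(10.0, 1.8) < figure_of_merit(10.0, 1.0))  # True
```

Squaring the CNR reflects that noise scales with the inverse square root of dose, so FOM stays comparable across protocols acquired at different dose levels.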