Background: Reducing scan time and contrast agent dose is an important goal for cost-efficient cardiovascular magnetic resonance (CMR) imaging. Limited information is available on the feasibility of evaluating left ventricular (LV) function after gadobutrol injection and on the lowest dose that still yields high-quality scar imaging. We sought to evaluate both aspects separately and systematically to provide an optimized protocol for contrast-enhanced CMR (CE-CMR) using gadobutrol.
Methods: This is a prospective, randomized, single-blind cross-over study performed in two different populations. The first population consisted of 30 patients with general indications for a rest CE-CMR who underwent cine-imaging before and immediately after intravenous administration of 0.1 mmol/kg body-weight of gadobutrol. Quantitative assessment of LV volumes and function was performed by the same reader in a randomized and blinded fashion. The second population was composed of 30 patients with an indication for late gadolinium enhancement (LGE) imaging, which was performed twice at different gadobutrol doses (0.1 mmol/kg vs. 0.2 mmol/kg) and at different time delays (5 and 10 min vs. 5, 10, 15 and 20 min), within a maximal interval of 21 days. LGE images were analysed qualitatively (contrast-to-noise ratio) and quantitatively (LGE%-of-mass).
Results: Excellent correlation between pre- and post-contrast cine-imaging was found, with no difference in LV stroke volume and ejection fraction (p = 0.538 and p = 0.095, respectively). End-diastolic and end-systolic volumes were significantly larger after contrast injection (p = 0.008 and p = 0.001, respectively), with mean differences of 3.7 ml and 2.9 ml, respectively. LGE imaging resulted in optimal contrast-to-noise ratios 10 min post-injection for a gadobutrol dose of 0.1 mmol/kg body-weight and 20 min for a dose of 0.2 mmol/kg body-weight. At these time points, LGE quantification did not differ significantly (0.1 mmol/kg: 11% (16.4); 0.2 mmol/kg: 12% (14.5); p = 0.059), showing excellent correlation (ICC = 0.957; p < 0.001).
Conclusion: A standardized CE-CMR rest protocol giving a dose of 0.1 mmol/kg of gadobutrol before cine-imaging and performing LGE 10 min after injection represents a fast low-dose protocol without significant loss of information compared to a longer protocol with cine-imaging before contrast injection and a higher dose of gadobutrol. This approach reduces examination time and costs and minimizes contrast-agent exposure.
Background: Myocardial perfusion imaging with cardiovascular magnetic resonance (CMR) is an established diagnostic test for the evaluation of myocardial ischaemia. For quantification purposes, the 16-segment American Heart Association (AHA) model is limited in extracting relevant information on the extent and severity of ischaemia, because perfusion deficits do not always fall within an individual segment; this reduces its diagnostic value and makes accurate assessment of outcome data or comparison of results across studies difficult. We hypothesised that dividing the myocardial segments into epi- and endocardial layers, with a further circumferential subdivision resulting in a total of 96 segments, would improve the accuracy of detecting myocardial hypoperfusion. Higher (sub-)subsegmental recording of perfusion abnormalities, defined relative to the normal reference using the (sub-)subsegment with the highest value, may improve the spatial encoding of myocardial blood flow based on a single stress perfusion acquisition. Objective: A proof-of-concept comparison of subsegmentation approaches based on transmural segments (16 AHA and 48 segments) vs. subdivision into epi- and endocardial (32) subsegments vs. further circumferential subdivision into 96 (sub-)subsegments, assessing diagnostic accuracy against invasively defined obstructive coronary artery disease (CAD). Methods: Thirty patients with obstructive CAD and 20 healthy controls underwent stress perfusion CMR imaging at 3 T during maximal adenosine vasodilation with a dual bolus injection of 0.1 mmol/kg gadobutrol. Using Fermi deconvolution for blood flow estimation, (sub-)subsegmental values were expressed relative to the (sub-)subsegment with the highest flow. In addition, endo-/epicardial flow ratios were calculated based on 32 and 96 (sub-)subsegments.
A receiver operating characteristic (ROC) curve analysis was performed to compare diagnostic performance in discriminating between patients with CAD and healthy controls. Observer reproducibility was assessed using Bland-Altman analyses. Results: Subdivision into more and smaller segments revealed greater accuracy for #32, #48 and #96 compared to the standard #16 approach (area under the curve (AUC): 0.937, 0.973 and 0.993 vs. 0.820; p < 0.05). The #96-based endo-/epicardial ratio was superior to the #32 endo-/epicardial ratio (AUC 0.979 vs. 0.932; p < 0.05). Measurements for the #16 model showed marginally better reproducibility compared to #32, #48 and #96 (mean difference ± standard deviation: 2.0 ± 3.6 vs. 2.3 ± 4.0 vs. 2.5 ± 4.4 vs. 4.1 ± 5.6). Conclusions: Subsegmentation of the myocardium improves diagnostic accuracy and facilitates an objective cutoff-based description of hypoperfusion, including the extent and severity of myocardial ischaemia. Quantification based on a single (stress-only) pass reduces the overall amount of gadolinium contrast agent required and the length of the diagnostic study.
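The normalisation described in the Methods, where each (sub-)subsegmental flow is expressed relative to the (sub-)subsegment with the highest flow and an endo-/epicardial ratio is derived, can be sketched as follows. This is a minimal illustration only; the function names, segment counts, and flow values are assumptions, not taken from the study.

```python
# Hypothetical sketch of relative-flow normalisation and the
# endo-/epicardial flow ratio; names and values are illustrative.

def relative_flows(flows):
    """Express each (sub-)subsegmental flow relative to the segment
    with the highest flow, yielding values in (0, 1]."""
    peak = max(flows)
    return [f / peak for f in flows]

def endo_epi_ratio(endo_flows, epi_flows):
    """Mean endocardial flow divided by mean epicardial flow."""
    return (sum(endo_flows) / len(endo_flows)) / (sum(epi_flows) / len(epi_flows))

# Illustrative stress flows (ml/g/min) for a small set of subsegments:
endo = [2.1, 2.3, 1.1, 2.2]
epi = [2.4, 2.5, 1.9, 2.4]
rel = relative_flows(endo + epi)   # hypoperfused subsegments score low
ratio = endo_epi_ratio(endo, epi)  # < 1 suggests subendocardial hypoperfusion
```

A ratio below 1 would flag relatively reduced subendocardial flow, which is the pattern the 32- and 96-segment endo-/epicardial ratios are intended to capture.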
Background: Bone age (BA) assessment performed by artificial intelligence (AI) is of growing interest due to improved accuracy, precision and time efficiency in daily routine. The aim of this study was to investigate the accuracy and efficiency of a novel AI software version for automated BA assessment in comparison to the Greulich-Pyle method.
Methods: Radiographs of 514 patients were analysed in this retrospective study. Total BA was assessed independently by three blinded radiologists applying the Greulich-Pyle (GP) method and by the AI software. The reference BA was defined by two blinded, experienced paediatric radiologists in consensus using the GP method; overall and gender-specific BA assessment results, as well as reading times of both approaches, were then compared.
Results: Mean absolute deviation (MAD) and root mean square deviation (RMSD) were significantly lower between AI-derived BA and reference BA (MAD 0.34 years, RMSD 0.38 years) than between reader-calculated BA and reference BA (MAD 0.79 years, RMSD 0.89 years; p < 0.001). The correlation between AI-derived BA and reference BA (r = 0.99) was significantly higher than between reader-calculated BA and reference BA (r = 0.90; p < 0.001). No statistical difference was found in reader agreement and correlation analyses regarding gender (p = 0.241). Mean reading times were reduced by 87% using the AI system.
Conclusions: A novel AI software enabled highly accurate automated BA assessment. It may improve efficiency in clinical routine by reducing reading times without compromising the accuracy compared with the Greulich-Pyle method.
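The two agreement metrics reported above, mean absolute deviation and root mean square deviation between an estimated and a reference bone age series, can be computed as follows. This is a generic sketch of the standard formulas; the example bone-age values are illustrative, not data from the study.

```python
import math

def mad(pred, ref):
    """Mean absolute deviation between two bone-age series (years)."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def rmsd(pred, ref):
    """Root mean square deviation between two bone-age series (years).
    Penalises large individual errors more strongly than MAD."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

# Illustrative bone ages (years); not data from the study:
ai_ba = [10.2, 7.9, 13.1]
ref_ba = [10.0, 8.3, 13.0]
print(mad(ai_ba, ref_ba), rmsd(ai_ba, ref_ba))
```

Because RMSD squares each error before averaging, an AI system with a lower RMSD than MAD-matched human readers also makes fewer large outlier errors, which is consistent with the reported 0.38-year vs. 0.89-year gap.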
Objectives: To evaluate the predictive value of volumetric bone mineral density (BMD) assessment of the lumbar spine derived from phantomless dual-energy CT (DECT)-based volumetric material decomposition as an indicator for the 2-year occurrence risk of osteoporosis-associated fractures. Methods: L1 of 92 patients (46 men, 46 women; mean age, 64 years; range, 19–103 years) who had undergone third-generation dual-source DECT between 01/2016 and 12/2018 was retrospectively analyzed. For phantomless BMD assessment, dedicated DECT postprocessing software using material decomposition was applied. Digital files of all patients were reviewed for 2 years following DECT to obtain the incidence of osteoporotic fractures. Receiver operating characteristic (ROC) analysis was used to calculate cut-off values, and logistic regression models were used to determine associations of BMD, sex, and age with the occurrence of osteoporotic fractures. Results: A DECT-derived BMD cut-off of 93.70 mg/cm3 yielded 85.45% sensitivity and 89.19% specificity for predicting the occurrence of one or more osteoporosis-associated fractures within 2 years after BMD measurement. DECT-derived BMD was significantly associated with the occurrence of new fractures (odds ratio of 0.8710, 95% CI, 0.091–0.9375, p < .001), indicating a protective effect of increased DECT-derived BMD values. Overall AUC was 0.9373 (CI, 0.867–0.977, p < .001) for the differentiation of patients who sustained osteoporosis-associated fractures within 2 years of BMD assessment. Conclusions: Retrospective DECT-based volumetric BMD assessment can accurately predict the 2-year risk of sustaining an osteoporosis-associated fracture in at-risk patients without requiring a calibration phantom. Lower DECT-based BMD values are strongly associated with an increased risk of sustaining fragility fractures.
Key Points: Dual-energy CT–derived assessment of bone mineral density can identify patients at risk of sustaining osteoporosis-associated fractures with a sensitivity of 85.45% and a specificity of 89.19%. The DECT-derived BMD threshold for identification of at-risk patients lies above the threshold recommended in the American College of Radiology (ACR) QCT guidelines for the identification of osteoporosis (93.70 mg/cm³ vs 80 mg/cm³).
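The sensitivity and specificity of a BMD cut-off like the one reported above can be computed from patient-level data with a simple "below-cutoff is test-positive" rule, since lower BMD indicates higher fracture risk. The sketch below is a generic illustration; the function name and the example BMD values and outcomes are hypothetical, not from the study.

```python
def sens_spec(bmd_values, fractured, cutoff):
    """Sensitivity and specificity of a 'BMD below cutoff' rule for
    predicting fracture occurrence (low BMD = test-positive)."""
    tp = sum(1 for b, f in zip(bmd_values, fractured) if b < cutoff and f)
    fn = sum(1 for b, f in zip(bmd_values, fractured) if b >= cutoff and f)
    tn = sum(1 for b, f in zip(bmd_values, fractured) if b >= cutoff and not f)
    fp = sum(1 for b, f in zip(bmd_values, fractured) if b < cutoff and not f)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative BMD values (mg/cm3) and 2-year fracture outcomes,
# evaluated at the cut-off reported in the abstract:
sens, spec = sens_spec([72.0, 101.5, 89.3, 120.0],
                       [True, False, True, False], 93.70)
```

Sweeping the cutoff over its range and plotting sensitivity against 1 − specificity is exactly what the reported ROC analysis does; the 93.70 mg/cm³ value is the point chosen from that curve.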
Objectives: To investigate the diagnostic accuracy of color-coded contrast-enhanced dual-energy CT virtual noncalcium (VNCa) reconstructions for the assessment of lumbar disk herniation compared to unenhanced VNCa imaging.
Methods: A total of 91 patients (mean age, 65 ± 16 years; 43 women) who had undergone third-generation dual-source dual-energy CT and 3.0-T MRI within an examination interval of up to 3 weeks between November 2019 and December 2020 were retrospectively evaluated. Eight weeks after assessing unenhanced color-coded VNCa reconstructions for the presence and degree of lumbar disk herniation, the same five radiologists independently analyzed the corresponding contrast-enhanced portal venous phase color-coded VNCa reconstructions. MRI series were additionally analyzed by one highly experienced musculoskeletal radiologist and served as the reference standard.
Results: MRI depicted 210 herniated lumbar disks in 91 patients. VNCa reconstructions derived from contrast-enhanced CT scans showed similarly high overall sensitivity (93% vs 95%), specificity (94% vs 95%), and accuracy (94% vs 95%) for the assessment of lumbar disk herniation compared to unenhanced VNCa images (all p > .05). Interrater agreement in VNCa imaging was excellent for both unenhanced and contrast-enhanced CT (κ = 0.84 vs κ = 0.86; p > .05). Moreover, ratings for diagnostic confidence, image quality, and noise did not differ significantly between unenhanced and contrast-enhanced VNCa series (all p > .05).
Conclusions: Color-coded VNCa reconstructions derived from contrast-enhanced dual-energy CT yield similar diagnostic accuracy for the depiction of lumbar disk herniation compared to unenhanced VNCa imaging and may therefore improve opportunistic retrospective lumbar disk herniation assessment, particularly in the case of staging CT examinations.
Key Points
• Color-coded dual-source dual-energy CT virtual noncalcium (VNCa) reconstructions derived from the portal venous phase yield similarly high diagnostic accuracy for the assessment of lumbar disk herniation compared to unenhanced VNCa CT series (94% vs 95%), with MRI serving as the standard of reference.
• Diagnostic confidence, image quality, and noise levels do not differ significantly between unenhanced and contrast-enhanced portal venous phase VNCa dual-energy CT series.
• Dual-source dual-energy CT might have the potential to improve opportunistic retrospective lumbar disk herniation assessment in CT examinations performed for other indications through reconstruction of VNCa images.
Abnormal venous atrial (VA) connections present a congenital heart disease (CHD) challenge for pediatric cardiologists. A full anatomical evaluation is very difficult in prenatal and perinatal follow-up, but it has a profound impact on surgical correction and outcome. Echocardiography is the first-line imaging modality and represents the gold standard for simple abnormal VA connections. CT and MRI are mandatory for more complex heart disease and “nightmare cases”. 3D post-processing of volumetric CT and MRI acquisitions helps to clarify anatomical relationships and allows for the creation of 3D-printed models that can become crucial in customizing surgical strategy.
Dual-energy CT (DECT) has entered clinical routine as an imaging technique with unique postprocessing utilities that improve the evaluation of different body areas. The virtual non-calcium (VNCa) reconstruction algorithm has shown beneficial effects on the depiction of bone marrow pathologies such as bone marrow edema. Its main advantage is the ability to substantially increase the image contrast of structures that are usually obscured by calcium mineral, such as calcified vessels or bone marrow, and to depict a large number of traumatic, inflammatory, infiltrative, and degenerative disorders affecting either the spine or the appendicular skeleton. Therefore, VNCa imaging represents another step forward for DECT in imaging conditions and disorders that usually require more expensive and time-consuming techniques such as magnetic resonance imaging, positron emission tomography/CT, or bone scintigraphy. The aim of this review article is to explain the technical background of VNCa imaging, showcase its applicability in different body regions, and provide an updated outlook on the clinical impact of this technique, which goes beyond a mere improvement in image quality.
Highlights
• MRI and ultrasound provided significant correlations between findings suggestive of vasculitis and the final diagnosis.
• Careful selection of available imaging techniques is warranted considering the time course, location, and clinical history.
• Considering its moderate diagnostic power to distinguish tracer uptake, a holistic view of PET/CT findings is essential.
Abstract
Purpose: To assess the diagnostic value of different imaging modalities in distinguishing systemic vasculitis from other internal and immunological diseases.
Methods: This retrospective study included 134 patients with suspected vasculitis who underwent ultrasound, magnetic resonance imaging (MRI), or 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) between 01/2010 and 01/2019; 70 of these individuals were ultimately diagnosed with vasculitis. The main study parameter was confirmation of the diagnosis using one of the three imaging modalities, with the adjudicated clinical and histopathological diagnosis as the gold standard. A secondary parameter was the morphological appearance of the vessels affected by vasculitis.
Results: Patients with systemic vasculitis had myriad clinical manifestations, with joint pain as the most common symptom. We found significant correlations between different imaging findings suggestive of vasculitis and the final adjudicated clinical diagnosis. On MRI, vessel wall thickening, edema, and vessel diameter differed significantly between the vasculitis and non-vasculitis groups (p < 0.05). Ultrasound revealed findings that may serve as red flags for identifying patients with vasculitis, such as vascular occlusion or the halo sign (p = 0.02 vs. the non-vasculitis group). Interestingly, comparing maximal standardized uptake values from PET/CT examinations with vessel wall thickening or vessel diameter did not yield significant differences (p > 0.05).
Conclusions: We observed significant correlations between different imaging findings suggestive of vasculitis on ultrasound or MRI and the final adjudicated diagnosis. While ultrasound and MRI were considered suitable imaging methods for detecting and discriminating typical vascular changes, 18F-FDG PET/CT requires careful timing and patient selection given its moderate diagnostic accuracy.
Objectives: To determine the diagnostic accuracy of dual-energy CT (DECT) virtual noncalcium (VNCa) reconstructions for assessing thoracic disk herniation compared to standard grayscale CT. Methods: In this retrospective study, 87 patients (1131 intervertebral disks; mean age, 66 years; 47 women) who underwent third-generation dual-source DECT and 3.0-T MRI within 3 weeks between November 2016 and April 2020 were included. Five blinded radiologists analyzed standard DECT and color-coded VNCa images, separated by a time interval of 8 weeks, for the presence and degree of thoracic disk herniation and spinal nerve root impingement. Consensus reading of independently evaluated MRI series, assessed by two separate experienced readers, served as the reference standard. Additionally, image ratings were carried out using 5-point Likert scales. Results: MRI revealed a total of 133 herniated thoracic disks. Color-coded VNCa images yielded higher overall sensitivity (624/665 [94%; 95% CI, 0.89–0.96] vs 485/665 [73%; 95% CI, 0.67–0.80]), specificity (4775/4990 [96%; 95% CI, 0.90–0.98] vs 4066/4990 [82%; 95% CI, 0.79–0.84]), and accuracy (5399/5655 [96%; 95% CI, 0.93–0.98] vs 4551/5655 [81%; 95% CI, 0.74–0.86]) for the assessment of thoracic disk herniation compared to standard CT (all p < .001). Interrater agreement was excellent for VNCa and fair for standard CT (κ = 0.82 vs 0.37; p < .001). In addition, VNCa imaging achieved higher scores for diagnostic confidence, image quality, and noise compared to standard CT (all p < .001). Conclusions: Color-coded VNCa imaging yielded substantially higher diagnostic accuracy and confidence for assessing thoracic disk herniation compared to standard CT.
This prospective study sought to evaluate potential savings of radiation dose to medical staff using real-time dosimetry coupled with visual radiation dose feedback during angiographic interventions. For this purpose, we analyzed a total of 214 angiographic examinations consisting of chemoembolizations and several other types of therapeutic interventions. The Unfors RaySafe i2 dosimeter was worn by the interventionalist at chest height over the lead protection. A total of 110 interventions were performed with real-time radiation dosimetry, allowing the interventionalist to react to higher x-ray exposure, and 104 examinations served as the comparison group without real-time radiation monitoring. With the real-time display during interventions, the overall mean operator radiation dose decreased from 3.67 (IQR, 0.95–23.01) to 2.36 μSv (IQR, 0.52–12.66) (−36%; p = 0.032), while operator exposure time was simultaneously reduced by 4.5 min (p = 0.071). Dividing interventions into chemoembolizations and other types of therapeutic interventions, radiation dose decreased from 1.31 (IQR, 0.46–3.62) to 0.95 μSv (IQR, 0.53–3.11) and from 24.39 (IQR, 12.14–63.0) to 10.37 μSv (IQR, 0.85–36.84), respectively, using live-screen dosimetry (p ≤ 0.005). Radiation dose reductions were also observed for the participating assistants, indicating that they could also benefit from real-time visual feedback dosimetry during interventions (−30%; p = 0.039). Integration of real-time dosimetry into clinical processes might be useful in reducing occupational radiation exposure during angiographic interventions. The real-time visual feedback raised the awareness of interventionalists and their assistants to the potential danger of prolonged radiation exposure, leading to the adoption of radiation-sparing practices. Therefore, it might create a safer environment for medical staff by keeping radiation exposure as low as possible.
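As a quick arithmetic check, the reported −36% follows directly from the two mean operator dose values above; the snippet is only a worked calculation, not part of the study's analysis.

```python
# Mean operator dose (μSv) without and with real-time visual feedback,
# as reported in the study summary above.
before, after = 3.67, 2.36
reduction_pct = round((before - after) / before * 100)  # the reported -36%
```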