Medizin
Background: Myocardial perfusion imaging with cardiovascular magnetic resonance (CMR) is an established diagnostic test for the evaluation of myocardial ischaemia. For quantification purposes, the 16-segment American Heart Association (AHA) model limits the extraction of relevant information on the extent and severity of ischaemia, because perfusion deficits do not always fall within an individual segment; this reduces its diagnostic value and makes accurate assessment of outcome data and comparison of results across studies difficult. We hypothesised that dividing the myocardial segments into epi- and endocardial layers, with a further circumferential subdivision resulting in a total of 96 segments, would improve the accuracy of detecting myocardial hypoperfusion. Higher (sub-)subsegmental recording of perfusion abnormalities, defined relative to the normal reference using the subsegment with the highest value, may improve the spatial encoding of myocardial blood flow based on a single stress perfusion acquisition.
Objective: A proof-of-concept comparison of subsegmentation approaches based on transmural segments (16 AHA and 48 segments) vs. subdivision into epi- and endocardial (32) subsegments vs. further circumferential subdivision into 96 (sub-)subsegments, assessed for diagnostic accuracy against invasively defined obstructive coronary artery disease (CAD).
Methods: Thirty patients with obstructive CAD and 20 healthy controls underwent stress perfusion CMR imaging at 3 T during maximal adenosine vasodilation with a dual-bolus injection of 0.1 mmol/kg gadobutrol. Using Fermi deconvolution for blood flow estimation, (sub-)subsegmental values were expressed relative to the (sub-)subsegment with the highest flow. In addition, endo-/epicardial flow ratios were calculated based on 32 and 96 (sub-)subsegments. A receiver operating characteristic (ROC) curve analysis was performed to compare the diagnostic performance of discriminating between patients with CAD and healthy controls. Observer reproducibility was assessed using Bland-Altman approaches.
Results: Subdivision into more and smaller segments revealed greater accuracy for the #32, #48 and #96 approaches compared with the standard #16 approach (area under the curve (AUC): 0.937, 0.973 and 0.993 vs. 0.820; p < 0.05). The #96-based endo-/epicardial ratio was superior to the #32 endo-/epicardial ratio (AUC 0.979 vs. 0.932; p < 0.05). Measurements for the #16 model showed marginally better reproducibility than #32, #48 and #96 (mean difference ± standard deviation: 2.0 ± 3.6 vs. 2.3 ± 4.0 vs. 2.5 ± 4.4 vs. 4.1 ± 5.6).
Conclusions: Subsegmentation of the myocardium improves diagnostic accuracy and facilitates an objective, cutoff-based description of hypoperfusion, including the extent and severity of myocardial ischaemia. Quantification based on a single (stress-only) pass reduces the overall amount of gadolinium contrast agent required and the length of the overall diagnostic study.
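The quantification steps described above (relative (sub-)subsegmental flow, endo-/epicardial flow ratios, and AUC-based discrimination between patients and controls) can be sketched as follows. This is a minimal illustration under common definitions, not the study's actual analysis pipeline; the function names are hypothetical and NumPy is assumed:

```python
import numpy as np

def relative_perfusion(flow):
    """Express each (sub-)subsegment's estimated blood flow relative to
    the (sub-)subsegment with the highest flow (the per-patient reference)."""
    flow = np.asarray(flow, dtype=float)
    return flow / flow.max()

def endo_epi_ratio(endo_flow, epi_flow):
    """Endo-/epicardial flow ratio, computed per circumferential position."""
    return np.asarray(endo_flow, dtype=float) / np.asarray(epi_flow, dtype=float)

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case scores higher than a negative one (ties count 0.5)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

With finer subdivision, a localized perfusion deficit occupies a larger fraction of the affected (sub-)subsegment, so its relative value drops further below the reference, which is the mechanism behind the larger AUCs reported for the #32, #48 and #96 models.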
Background: Bone age (BA) assessment performed by artificial intelligence (AI) is of growing interest due to improved accuracy, precision and time efficiency in daily routine. The aim of this study was to investigate the accuracy and efficiency of a novel AI software version for automated BA assessment in comparison to the Greulich-Pyle method.
Methods: Radiographs of 514 patients were analysed in this retrospective study. Total BA was assessed independently by three blinded radiologists applying the Greulich-Pyle (GP) method and by the AI software. Overall and gender-specific BA assessment results, as well as reading times of both approaches, were compared. The reference BA was defined in consensus by two blinded, experienced paediatric radiologists using the GP method.
Results: Mean absolute deviation (MAD) and root mean square deviation (RMSD) were significantly lower between AI-derived BA and reference BA (MAD 0.34 years, RMSD 0.38 years) than between reader-calculated BA and reference BA (MAD 0.79 years, RMSD 0.89 years; p < 0.001). The correlation between AI-derived BA and reference BA (r = 0.99) was significantly higher than between reader-calculated BA and reference BA (r = 0.90; p < 0.001). No statistical difference was found in reader agreement and correlation analyses regarding gender (p = 0.241). Mean reading times were reduced by 87% using the AI system.
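The three agreement metrics reported above (MAD, RMSD, and Pearson correlation between estimated and reference bone age) follow standard definitions and can be sketched in a few lines. This is an illustrative helper, not the study's statistical code; the function name is hypothetical and NumPy is assumed:

```python
import numpy as np

def agreement_stats(estimated_ba, reference_ba):
    """Agreement between estimated and reference bone age (in years):
    mean absolute deviation, root mean square deviation, and Pearson r."""
    est = np.asarray(estimated_ba, dtype=float)
    ref = np.asarray(reference_ba, dtype=float)
    diff = est - ref
    mad = np.abs(diff).mean()          # average magnitude of error
    rmsd = np.sqrt((diff ** 2).mean()) # penalizes large outliers more
    r = np.corrcoef(est, ref)[0, 1]    # linear correlation
    return mad, rmsd, r
```

Note that RMSD is always at least as large as MAD; the small gap between the two reported for the AI (0.34 vs. 0.38 years) suggests errors of fairly uniform magnitude rather than occasional large misses.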
Conclusions: A novel AI software enabled highly accurate automated BA assessment. It may improve efficiency in clinical routine by reducing reading times without compromising accuracy compared with the Greulich-Pyle method.
Background: Reducing examination time and contrast-agent dose are important goals in providing cost-efficient cardiovascular magnetic resonance (CMR) imaging. Limited information is available regarding the feasibility of evaluating left ventricular (LV) function after gadobutrol injection, as well as the lowest dose that still yields high-quality scar imaging. We sought to evaluate both aspects separately and systematically to provide an optimized protocol for contrast-enhanced CMR (CE-CMR) using gadobutrol.
Methods: This is a prospective, randomized, single-blind cross-over study performed in two different populations. The first population consisted of 30 patients with general indications for a rest CE-CMR who underwent cine-imaging before and immediately after intravenous administration of 0.1 mmol/kg body-weight of gadobutrol. Quantitative assessment of LV volumes and function was performed by the same reader in a randomized and blinded fashion. The second population was composed of 30 patients with an indication for late gadolinium enhancement (LGE) imaging, which was performed twice at different gadobutrol doses (0.1 mmol/kg vs. 0.2 mmol/kg) and at different time delays (5 and 10 min vs. 5, 10, 15 and 20 min), within a maximum interval of 21 days. LGE images were analysed qualitatively (contrast-to-noise ratio) and quantitatively (LGE%-of-mass).
Results: Excellent correlation between pre- and post-contrast cine-imaging was found, with no difference in LV stroke volume and ejection fraction (p = 0.538 and p = 0.095, respectively). End-diastolic and end-systolic volumes were significantly larger after contrast injection (p = 0.008 and p = 0.001, respectively), with mean differences of 3.7 ml and 2.9 ml, respectively. LGE imaging yielded optimal contrast-to-noise ratios 10 min post-injection for a gadobutrol dose of 0.1 mmol/kg body-weight and 20 min post-injection for a dose of 0.2 mmol/kg body-weight. At these time points, LGE quantification did not differ significantly (0.1 mmol/kg: 11% (16.4); 0.2 mmol/kg: 12% (14.5); p = 0.059) and showed excellent correlation (ICC = 0.957; p < 0.001).
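The two LGE readouts used above, contrast-to-noise ratio and LGE%-of-mass, can be sketched as simple pixel-level computations. This is an illustration under common definitions only; the study's exact formulas and enhancement thresholds are not specified in the abstract, and the function names are hypothetical:

```python
import numpy as np

def contrast_to_noise_ratio(signal_scar, signal_remote, noise_sd):
    """CNR between enhanced scar and remote (nulled) myocardium,
    normalised by the standard deviation of background noise."""
    return (signal_scar - signal_remote) / noise_sd

def lge_percent_of_mass(pixel_intensities, threshold):
    """Fraction of myocardial pixels above an enhancement threshold,
    expressed as a percentage (assuming uniform pixel mass)."""
    px = np.asarray(pixel_intensities, dtype=float)
    return 100.0 * (px > threshold).mean()
```

Framed this way, the finding is that CNR peaks later for the higher dose (20 min vs. 10 min post-injection), while LGE%-of-mass at the respective optimal time points is dose-independent.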
Conclusion: A standardized CE-CMR rest protocol, administering 0.1 mmol/kg of gadobutrol before cine-imaging and performing LGE imaging 10 min after injection, represents a fast, low-dose protocol without significant loss of information compared with a longer protocol using cine-imaging before contrast injection and a higher gadobutrol dose. This approach reduces examination time and costs while minimizing contrast-agent exposure.