Highlights
• The goal was to assess the intra- and inter-scanner reproducibility of qMRI data.
• Mean scan-rescan variations did not exceed 2.14%.
• Mean inter-scanner model deviations did not exceed 5.21%.
• Provided that identical acquisition sequences are used, discrepancies between qMRI data acquired with different scanner models are low.
Abstract
Background: Quantitative MRI (qMRI) techniques allow the assessment of cerebral tissue properties. However, previous studies on the accuracy of quantitative T1 and T2 mapping reported a scanner model bias of up to 10% for T1 and up to 23% for T2. Such differences would render multi-centre qMRI studies difficult and raise fundamental questions about the general precision of qMRI. A problem in previous studies was that different methods were used for qMRI parameter mapping or for measuring the transmitted radio frequency field B1, which is critical for qMRI techniques requiring corrections for B1 non-uniformities.
Aims: The goal was to assess the intra- and inter-scanner reproducibility of qMRI data at 3 T, using two different scanner models from the same vendor with exactly the same multiparametric acquisition protocol.
Methods: Proton density (PD), T1, T2* and T2 mapping was performed on healthy subjects and on a phantom, with each measurement performed twice on each of two scanner models. Although the scanners had different hardware and software versions, identical imaging sequences were used for PD, T1 and T2* mapping, adapting the code of an existing protocol on the older system line by line to match the software version of the newer scanner. For T2 mapping, the manufacturer's sequence provided with the respective software version was used; in this case, system-dependent corrections were applied. Reproducibility was assessed from average values in regions of interest.
Results: Mean scan-rescan variations did not exceed 2.14%, with average values of 1.23% and 1.56% for the new and old system, respectively. Inter-scanner model deviations did not exceed 5.21%, with average values of about 2.2–3.8% for PD, 2.5–3.0% for T2*, 1.6–3.1% for T1 and 3.3–5.2% for T2.
Conclusions: Provided that identical acquisition sequences are used, discrepancies between qMRI data acquired with different scanner models are low. The level of systematic differences reported in this work may help to interpret multi-centre data.
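The Methods describe reproducibility as being assessed from average values in regions of interest. A minimal sketch of how such a scan-rescan variation percentage can be computed from ROI means is shown below; the ROI names and T1 values are hypothetical and not taken from the paper.

```python
# Illustrative sketch (values are hypothetical, not from the study):
# scan-rescan variation of a qMRI parameter, computed from ROI mean
# values as the percentage difference between two repeated scans
# relative to their average.

def scan_rescan_variation(scan1_mean, scan2_mean):
    """Percent deviation between two ROI means, relative to their average."""
    return 100.0 * abs(scan1_mean - scan2_mean) / ((scan1_mean + scan2_mean) / 2.0)

# Hypothetical T1 ROI means (ms) from two repeated scans on one scanner
roi_t1 = {"frontal WM": (830.0, 842.0), "thalamus": (1180.0, 1171.0)}

variations = {roi: scan_rescan_variation(a, b) for roi, (a, b) in roi_t1.items()}
mean_variation = sum(variations.values()) / len(variations)
print(f"mean scan-rescan variation: {mean_variation:.2f}%")
```

Averaging such per-ROI percentages over subjects and regions yields summary figures comparable to the mean variations quoted in the Results.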
Abstract
Background: High reproducibility of LV mass and volume measurement from cine cardiovascular magnetic resonance (CMR) has been shown within single centers. However, the extent to which contours may vary from center to center, due to different training protocols, is unknown. We aimed to quantify sources of variation between many centers, and provide a multi-center consensus ground truth dataset for benchmarking automated processing tools and facilitating training for new readers in CMR analysis.
Methods: Seven independent expert readers, representing seven experienced CMR core laboratories, analyzed fifteen cine CMR data sets in accordance with their standard operating protocols and SCMR guidelines. Consensus contours were generated for each image according to a statistical optimization scheme that maximized contour placement agreement between readers.
Results: Reader-consensus agreement was better than inter-reader agreement (end-diastolic volume 14.7 ml vs 15.2–28.4 ml; end-systolic volume 13.2 ml vs 14.0–21.5 ml; LV mass 17.5 g vs 20.2–34.5 g; ejection fraction 4.2 % vs 4.6–7.5 %). Compared with consensus contours, readers were very consistent (small variability across cases within each reader), but bias varied between readers due to differences in contouring protocols at each center. Although larger contour differences were found at the apex and base, the main effect on volume was due to small but consistent differences in the position of the contours in all regions of the LV.
Conclusions: A multi-center consensus dataset was established for the purposes of benchmarking and training. Achieving consensus on contour drawing protocol between centers before analysis, or bias correction after analysis, is required when collating multi-center results.
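The Results distinguish within-reader consistency from between-reader bias relative to the consensus. A toy sketch of that decomposition is given below; the reader labels, case values, and magnitudes are invented for illustration and do not reproduce the study's data or its consensus-optimization scheme.

```python
# Illustrative sketch (hypothetical data, not the paper's method):
# estimating per-reader bias in end-diastolic volume (EDV) relative
# to a per-case consensus value, and the within-reader spread.

consensus_edv = [120.0, 95.0, 150.0]          # consensus EDV (ml) per case
reader_edv = {
    "reader A": [125.0, 101.0, 156.0],        # consistently over-contours
    "reader B": [118.0, 93.0, 147.0],         # consistently under-contours
}

for reader, values in reader_edv.items():
    diffs = [v - c for v, c in zip(values, consensus_edv)]
    bias = sum(diffs) / len(diffs)            # mean signed difference (bias)
    spread = max(diffs) - min(diffs)          # within-reader variability
    print(f"{reader}: bias {bias:+.1f} ml, spread {spread:.1f} ml")
```

A small spread with a non-zero bias, as in this toy example, matches the abstract's finding that readers were internally consistent while systematic offsets between readers remained, motivating the proposed protocol consensus or post-hoc bias correction.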