Reproducibility across runs -- MEGA-PRESS -- GABA measurements

Hi,

I ran the MEGA-PRESS sequence in vivo four times and wanted to see the variability of the GABA measurement. I used Gannet on the .RDA files obtained from our Siemens Prisma. I would have preferred not to use the .RDA files, since they are already combined and prone to artifacts, but for a first pass I tried analyzing them in Gannet.
Across the four runs, the GABA/Cr values were:
run 1 - 0.101
run 2 - 0.084
run 3 - 0.116
run 4 - 0.098
The voxel position and protocol were kept the same across the four runs, and the subject's movement was minimized as much as possible. The FWHM was below 12 Hz in all scans.

Is this much variation normal? (The fits from two of the runs are also attached.) Please let me know if I have done something wrong or could improve anything.

off1_GABAGlx_vox1_fit.pdf (321.6 KB)
off3_GABAGlx_vox1_fit.pdf (322.7 KB)

Regards,
Swagata

Hi Swagata,

Test-retest CoV for GABA+ is usually on the order of 10%, so I think what you're seeing here isn't too far out of the ordinary. I would caution against drawing any conclusions from a single set of acquisitions, especially since you used the RDA files, which, as you correctly note, we strongly advise against using.
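Just to put a number on it, here is a quick sketch (plain Python/NumPy, independent of Gannet) of the coefficient of variation for the four values you posted; with CoV = SD/mean it comes out to roughly 13%, which is in line with the typical test-retest range:

```python
import numpy as np

# GABA+/Cr values from the four runs posted above
gaba_cr = np.array([0.101, 0.084, 0.116, 0.098])

mean = gaba_cr.mean()
sd = gaba_cr.std(ddof=1)    # sample standard deviation (n - 1)
cov = sd / mean * 100       # coefficient of variation in percent

print(f"mean = {mean:.4f}, SD = {sd:.4f}, CoV = {cov:.1f}%")
# -> mean = 0.0998, SD = 0.0131, CoV = 13.2%
```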

Your data quality (SNR, linewidth, fit quality) looks reasonable, though not excellent, in the two plots you shared.

Best,
Georg

Hi Georg,

Thank you for the prompt response. I am currently trying to analyze the uncombined .dat data in Osprey.

You mentioned that the data quality is reasonable but not excellent based on SNR, linewidth, and fit quality, which I believe correspond to the Area, FWHM, and FitError values in the plots. How do you judge this? Is there an ideal range of values for these metrics?
For example, should the fit error be below 4% or so, the FWHM below 8 Hz, and the area above some value (x)?

I am sorry if this sounds naive, but I am trying to improve my skills in collecting MRS data, hence these questions.
Should I also do anything differently during acquisition that might help (like shimming extra carefully)?

All four runs are attached:
off1_GABAGlx_vox1_fit.pdf (321.6 KB)
off2_GABAGlx_vox1_fit.pdf (321.5 KB)
off3_GABAGlx_vox1_fit.pdf (322.7 KB)
off4_GABAGlx_vox1_fit.pdf (322.7 KB)

Thanks,
Swagata

I don’t have strict cut-offs, but I do have the advantage of having looked at probably 5,000 good and bad spectra over the years. :wink: MRS in general still has some catching up to do in terms of automated QA, and as a result, the mysterious ‘visual inspection’ remains something people write in papers as a valid way of doing QA (I’m not innocent of this, in the absence of better methods, but I’m painfully aware that observer-dependent analysis is a recipe for unreproducible science). It doesn’t help that a lot of different software packages define QA metrics in different ways. The terminology consensus that was recently published should help with this, as should an emerging set of open-source projects that are in constant development, but it will take a while to catch on, I guess.

In GannetLoad, I’m usually looking for a well-behaved (flat) baseline, a symmetric, Gaussian-shaped GABA+ signal without a lot of subtraction artefacts (humps on either side), and a decent model fit to it. A FitError below 5% is usually something I’d consider good, indeed. You’ll get a feel for the SNR, too: generally, you want the noise to be ‘small’ compared to the amplitude of the signal that you’re trying to fit. (That seems an obvious statement, but I’ve seen data where the noise was on the order of the GABA peak itself, and people thought they could get away with it.)
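If you want a rough, software-independent sanity check of that last point, here is a minimal sketch that estimates a pseudo-SNR as the GABA+ peak height over the noise in a signal-free part of the difference spectrum. The ppm ranges and the peak/noise definitions are illustrative assumptions, not Gannet's or Osprey's exact metric:

```python
import numpy as np

def pseudo_snr(ppm, diff_spec, peak_range=(2.85, 3.15), noise_range=(9.0, 10.0)):
    """Rough SNR estimate for an edited difference spectrum:
    GABA+ peak height divided by the standard deviation of a
    signal-free region. The ppm ranges are illustrative placeholders."""
    spec = np.real(diff_spec)
    peak = spec[(ppm >= peak_range[0]) & (ppm <= peak_range[1])]
    noise = spec[(ppm >= noise_range[0]) & (ppm <= noise_range[1])]
    # Remove a slow baseline trend from the noise region so that
    # baseline roll does not inflate the noise estimate.
    x = np.arange(noise.size)
    noise = noise - np.polyval(np.polyfit(x, noise, 2), x)
    return peak.max() / noise.std()
```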

You can consider manual shimming, but it’s tedious and adds a lot of time that you likely won’t have during a clinical research protocol. If you’re doing a cortical voxel, you’re likely fine. Then again, nothing beats just collecting a bunch of test data and getting a feel for it…

Does any of that make sense to you?
Best,
Georg

Hi Georg,

Again, thanks for this detailed explanation.

Yes, it does make sense, but I guess I will understand more as I spend more time with MRS data and the literature. I also agree that there is no general consensus regarding the quality of MRS data (based on the few papers I have read) and that the ‘mysterious’ visual inspection remains the standard way to assess it. This makes the journey incredibly difficult for beginners, or for those learning the MRS technique independently from the literature. So I really appreciate the purpose of this forum and the efforts to develop open-source MRS projects to standardize this field. I will keep all your points in mind during data collection and analysis.

Also, as you mentioned, manual shimming could improve things, but it is tedious and time-consuming. Can you suggest some literature or guidance on this? That would be really helpful, because my project is not for clinical purposes but rather addresses a neuroscience research question.

Thanks,
Swagata

My starting point would be Christoph Juchem’s consensus paper on B0 shimming.

Thank you for your kind feedback. Come back to this forum whenever you need help.