HERMES data quality

Hello everyone. We acquired some HERMES data in both the ACC and the PCC (anterior and posterior cingulate cortex). I preprocessed them with fsl_mrs. I would like to get some feedback on the quality of the data, particularly about the GABA and GSH peaks.

Most of the PCC data look OK. Here are the summed/subtracted spectra of all our participants (apodized for visualization). Red spectra are 1.9 standard deviations from the mean; blue is the mean spectrum.
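For concreteness, by apodization I mean the usual line broadening applied to the FID before the FFT; a rough sketch of the exponential version is below, with an illustrative 5 Hz value rather than the exact settings used for these figures.

```python
# Rough sketch of exponential apodization (line broadening) before the
# FFT, for visualization only. The 5 Hz value is illustrative.
import numpy as np

def apodize(fid, dwell_time, lb_hz=5.0):
    """Multiply the FID by a decaying exponential; adds roughly
    lb_hz to the FWHM of each (Lorentzian) peak."""
    t = np.arange(fid.shape[-1]) * dwell_time  # time axis in seconds
    return fid * np.exp(-np.pi * lb_hz * t)

# spectrum = np.fft.fftshift(np.fft.fft(apodize(fid, dwell_time)))
```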



And these are the ACC data, which look more problematic. Particularly concerning is the lipid contamination seen in the ‘A+B+C+D’ spectra. Should I exclude those if my goal is to measure GABA and GSH?



I want to show you some examples of individual GABA spectra and see if you agree with my judgment. These are not apodized.

Here are some examples that I think look fine:


Here are some examples I think I should exclude, judging by the Glx region on the left and the low SNR:


Here is an example I think I should exclude based on the linewidth:
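To make these two cut-offs explicit, here is a rough sketch of how SNR and linewidth could be quantified on a difference spectrum; the 3.0-ppm GABA peak position and the ppm windows are assumptions, not the exact criteria used for the figures above.

```python
# Rough sketch of SNR and linewidth estimates for QC. The peak position
# and the ppm windows are assumptions; the FWHM estimate is deliberately
# crude (half-height width on the real spectrum).
import numpy as np

def snr_and_fwhm(spec, ppm, peak_ppm=3.0, noise_window=(-2.0, 0.0)):
    """Return (SNR, FWHM in ppm) of the real spectrum near peak_ppm,
    with noise taken as the standard deviation in noise_window."""
    real = spec.real
    peak_sel = (ppm > peak_ppm - 0.15) & (ppm < peak_ppm + 0.15)
    noise_sel = (ppm > noise_window[0]) & (ppm < noise_window[1])
    height = real[peak_sel].max()
    snr = height / real[noise_sel].std()
    above = peak_sel & (real > height / 2)   # points above half height
    fwhm = float(np.ptp(ppm[above])) if above.any() else float('nan')
    return snr, fwhm
```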

And here are some examples I think I should exclude due to subtraction artifacts:




There are also some examples where the GABA peak is barely visible without apodization, and I am not sure what to make of these:


Sorry if these are too many examples! I would appreciate any feedback.

Diego

Hi Diego,

Good to hear from you. I agree that a lot of the ACC data have very problematic lipid contamination, and you should discard those. You’ve also correctly identified the datasets with subtraction artefacts, which should also be excluded in their current state, although they might be salvageable.

I’m not sure how you’re doing the sub-spectrum alignment in FSL. HERMES is a four-step experiment in which each of the sub-spectra has different characteristics. The editing pulses saturate very strong reporter signals (the 1.9-ppm GABA-on pulses remove the NAA methyl singlet, and the 4.56-ppm GSH-on pulses remove the residual water peak), so aligning the four sub-spectra to each other becomes much more challenging than aligning a two-step MEGA experiment. Richard’s group has published a whole series of alignment procedures over the years that try to address this (“Frequency and phase correction for multiplexed edited MRS of GABA and glutathione” and “Correcting frequency and phase offsets in MRS data using robust spectral registration”, both on PubMed), but I don’t know if this is done in FSL; you might need to build some custom functions.
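Not a published recipe, but a minimal sketch of one such custom function: restrict the frequency registration of each sub-spectrum to a window that survives all four editing conditions (the Cr/Cho region used below is an assumption), rather than registering full spectra that differ in their saturated NAA or water signals.

```python
# Minimal sketch: estimate a sub-spectrum's frequency offset from a
# reference using only a ppm window unaffected by the editing pulses.
# The 2.8-3.4 ppm (Cr/Cho) window is an assumption, not a published value.
import numpy as np

def freq_offset_ppm(spec, ref, ppm, window=(2.8, 3.4)):
    """Offset of `spec` relative to `ref`, in ppm, by cross-correlating
    magnitude spectra inside `window`. Assumes a uniform, ascending ppm axis."""
    sel = (ppm >= window[0]) & (ppm <= window[1])
    a = np.abs(spec[sel]) - np.abs(spec[sel]).mean()
    b = np.abs(ref[sel]) - np.abs(ref[sel]).mean()
    lag = np.correlate(a, b, mode='full').argmax() - (len(b) - 1)
    return lag * (ppm[1] - ppm[0])  # convert points to ppm

# Shift B, C, and D onto A with this estimate before forming the
# Hadamard sums/differences.
```

Cross-correlation is cruder than time-domain spectral registration, but it is insensitive to the saturated peaks simply because they are masked out.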

Hi Georg. Thank you for your feedback.

I started using FSL-MRS mainly because Osprey’s pipeline produces notable subtraction artifacts in our data, and because it was easier for me to manipulate the data with its Python API.
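For anyone curious, a minimal sketch of that workflow is below, assuming the read_FID entry point in fsl_mrs.utils.mrs_io and numpy-style indexing on the object it returns; the file name is hypothetical.

```python
# Minimal sketch of pulling HERMES data into numpy via the FSL-MRS
# Python API. Assumes fsl_mrs.utils.mrs_io.read_FID and numpy-style
# indexing on the returned NIfTI-MRS object; the file name is made up.
import numpy as np
from fsl_mrs.utils import mrs_io

data = mrs_io.read_FID('sub-01_hermes.nii.gz')  # NIfTI-MRS input
fids = np.asarray(data[:]).squeeze()
# For SVS HERMES this typically leaves (points, transients, 4 sub-spectra),
# depending on how the dimensions were laid out at conversion.
print(fids.shape)
```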

This is what our PCC spectra end up looking like with Osprey’s preprocessing:


This is probably because our GSH-ON spectra’s baselines are shifted down compared to A and B.

The in-house pipeline we built with FSL-MRS tools uses a standard spectral registration method both for aligning the transients within the edit dimension and for aligning the four sub-spectra to each other; the other two methods you referenced (probabilistic and robust spectral registration) aren’t implemented in FSL-MRS as of now. We shift the baseline of B, C, and D so that their > 0 ppm region matches A; this is done after residual water removal and spectral alignment within each experiment, and then we align across them. This worked better when we performed all the previous preprocessing steps on each transient rather than on the average; averaging early led to subtraction artifacts.
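Roughly, that baseline shift amounts to something like the sketch below; the window used to estimate the offset is an illustrative assumption, not the exact region we match over.

```python
# Rough sketch of the baseline-matching step: add a constant so a
# (here assumed) signal-free window of B/C/D matches A's level.
import numpy as np

def match_baseline(spec, ref, ppm, window=(-2.0, 0.0)):
    """Shift `spec` by a real constant so that its median real part
    inside `window` matches that of `ref`. The window is an assumption;
    use whatever signal-free region fits your data."""
    sel = (ppm >= window[0]) & (ppm <= window[1])
    return spec + (np.median(ref[sel].real) - np.median(spec[sel].real))

# e.g. B, C, D = (match_baseline(s, A, ppm) for s in (B, C, D))
```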

This is what the pipeline looks like:

I tried incorporating the other two spectral registration methods (using the MATLAB scripts included in the Osprey repo), but the end result doesn’t seem to be much better.

This is how the GABA and GSH spectra look if I perform robust spectral registration at the start of the pipeline (C’s and D’s baselines are still shifted down) and then perform the rest of the preprocessing with FSL-MRS tools (right column). The left column shows the spectra from our original pipeline.

If instead I perform probabilistic spectral registration at the end of the original pipeline (replacing the step of averaging transients), I (maybe?) get some slight improvement in our PCC data.


And the subtraction artifacts in some of our ACC data improve.


But as a whole, the ACC data look worse this way (compare to the figure in the first post).


Right now I think it might be OK to work with the PCC data as they are. The ACC data might require some more work, or I might just have to remove a lot of subjects, including the ones with subtraction artifacts and potentially salvageable data. I would like to know your thoughts on whether I could improve the preprocessing in some way. Sorry for the long post!