FSL-MRS spec2nii philips dicom dimension issue

First of all, thank you for your quick answers @wclarke and @admin, that is really great.
Regarding the processing: I looked it up in the exam card, and Spectral Correction is set to yes. So I just applied ECC to the water reference, plus residual water removal and phase correction. The data looks very nice now.
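(For reference, a minimal sketch of those preprocessing steps with fsl_mrs_proc; the file and output names are just placeholders, and the exact flag names should be double-checked against fsl_mrs_proc --help:)

# ECC of the metabolite data using the (uncorrected) water reference
fsl_mrs_proc ecc --file metab.nii.gz --reference wref.nii.gz --output ecc_out
# residual water removal (HLSVD) on the ECC'd data
fsl_mrs_proc remove --file ecc_out/metab.nii.gz --output hlsvd_out
# zero-order phase correction
fsl_mrs_proc phase --file hlsvd_out/metab.nii.gz --output phased_out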
Regarding the basis sets: I updated conda and converted the .raw files to an FSL basis set. After that I tried to fit the data, but the fit didn't look good.

I just noticed that our bandwidth is set to 2000 Hz, while the basis sets from Christoph Juchem's lab use either 2500 or 4000 Hz. So perhaps this is the problem?
Do you have any idea?

Many thanks,
Verena

Hi Verena,

Could you try running the following on the converted basis spectra:

basis_tools conj converted_basis/ converted_basis_conj/

And then rerun the fit using --basis converted_basis_conj/

You also need to have a think about which metabolites to include. There are a huge number in those basis sets, far more than is sensible to include. Can I suggest you restrict the list to the most visible metabolite peaks for now? E.g. Tau, sI, PCr, PCho, NAAG, NAA, mI, Lac, GPC, Glu, Gln, GABA, Cr, Asp, Asc, Ala. This can be achieved by just deleting the irrelevant json files from the basis directory, e.g. as sketched below.
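A rough bash sketch of that pruning, assuming each metabolite in the converted basis is a separate <name>.json file (the exact file names depend on the basis set, so adjust the keep list accordingly):

# run inside the converted basis directory; delete every metabolite not in the keep list
cd converted_basis_conj/
keep="Tau sI PCr PCho NAAG NAA mI Lac GPC Glu Gln GABA Cr Asp Asc Ala"
for f in *.json; do
    m=${f%.json}
    [[ " $keep " == *" $m "* ]] || rm "$f"
done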

You will also need to add some default MM to that basis set as per the documentation here and here.

Hello,

thank you again for your quick answer @wclarke.
I restricted the list of metabolites, applied:
basis_tools conj converted_basis/ converted_basis_conj/
and added some default MM to that basis with
basis_tools add_set --add_MM basis_without_mm/ basis_with_default_mm/

Then I reran the fitting with
fsl_mrs --data SPEC_E/fsl_mrs_proc/acc_metab.nii.gz --basis basis_with_default_mm --output fsl_mrs_fit_mm2 --metab_groups MM09 MM12 MM14 MM17 MM21 --report
But the result was not better.

I think the problem is that the basis set I am using has the wrong bandwidth (4000 Hz instead of 2000 Hz): when I visualize the basis spectra, they are shifted compared to the peaks of my spectrum.

So I think I either have to resample the basis spectra to my bandwidth or simulate my own basis set? Unfortunately, I am not very familiar with either approach and would appreciate advice on which is better.

Many thanks,
Verena

Hi Verena,

Now I'm not too sure what is going on. FSL-MRS should handle the bandwidth resampling. Is the basis set appropriate for the field strength? Could you perhaps send me an example of your data and the basis set?

Will

Hi Will,
yes, sure!
I thought the folders were generated for 3T, but I couldn't find the information again, so I will have a closer look.
Here is a folder for one participant, which includes all raw and preprocessed data for the ACC and the right putamen, the last fit I did, and the basis folder (conjugated and with MM) I used.
SPEC.zip (1.3 MB)

I downloaded the basis sets from MR Spectroscopy Basis Sets | MR SCIENCE Laboratory and chose the two folders for Philips PRESS TE 35:
one with BW 2500 and NPts 1024 RawBasis_for_PRESSPhilips_TE_35_BW_2500_NPts_1024.zip (202.4 KB)
and the other one with BW 4000 and NPts 2048 RawBasis_for_PRESSPhilips_TE_35_BW_4000_NPts_2048.zip (644.7 KB).
I could only use the second one for my fitting, because with BW 2500 I got this error:

Thank you for your help! If you need more data, please just ask.

Verena

Hi Verena,

Just to be sure: when you ran basis_tools convert, did you specify the bandwidth of your data (2000 Hz) or of the basis set (4000 Hz)? I believe the latter is correct in this context, i.e.:

basis_tools convert RawBasis_for_PRESSPhilips_TE_35_BW_4000_NPts_2048 --bandwidth 4000 --fieldstrength 3.0 fsl_basis_dir

Alex.

The error you see for the other basis set (1024 pts, 2500 Hz) is due to the reduced time coverage of that basis set: 1024/2500 = 0.41 s. It should have at least the same temporal coverage as your data (1024/2000 = 0.512 s); the 2048 pts / 4000 Hz basis set meets this criterion (2048/4000 = 0.512 s).
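(A quick check of that arithmetic from the command line, coverage = points / bandwidth:)

awk 'BEGIN { printf "basis, 2500 Hz: %.3f s\n", 1024/2500;
             printf "basis, 4000 Hz: %.3f s\n", 2048/4000;
             printf "data,  2000 Hz: %.3f s\n", 1024/2000 }'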

Hi Alex,

thank you for your help.
No, I ran basis_tools with BW 2000 because I thought I should put in my true bandwidth.
I just reran the fitting and it looks much better this time.

Thank you!!

So, here’s what I have so far…

# Convert the basis set (obtained from http://juchem.bme.columbia.edu/mr-spectroscopy-basis-sets), specifying the bandwidth of that basis set
basis_tools convert RawBasis_for_PRESSPhilips_TE_35_BW_4000_NPts_2048 --bandwidth 4000 --fieldstrength 3.0 fsl_basis_dir

# Take a meaningful subset of the many available basis components
mkdir fsl_basis_reduced;
cd fsl_basis_dir   # the per-metabolite .json files are in the converted basis directory, not the raw one
cp Tau.json sI.json PCr.json PCho.json NAAG.json NAA.json mI.json Lac.json GPC.json Glu.json Gln.json GABA.json Cr.json Asp.json Asc.json Ala.json  ../fsl_basis_reduced
cd ..

mrs_tools vis fsl_basis_reduced

Figure_1

… which gives a reasonable fit without the default MM components

fsl_mrs --data acc_press.nii.gz --h2o acc_press_ref.nii.gz  --basis ../../from_juchem/fsl_basis_reduced

For some reason basis_tools add_set --add_MM breaks though…

Figure_2

Turns out basis_tools add_set --add_MM uses the opposite conjugate sense from the other basis components :roll_eyes:

Hopefully @wclarke can suggest a cleaner solution, but my crude workaround is to just take the conjugate of the MM parts:

# add default MM components
basis_tools add_set --add_MM fsl_basis_reduced fsl_basis_with_MM

# take the conjugate of the generated mm components only...
cd fsl_basis_with_MM
mkdir mmbits
mv MM* mmbits/
basis_tools conj mmbits .

The resulting basis set, complete with default MM components:

Figure_3

Then finally:

fsl_mrs --data acc_press.nii.gz --h2o acc_press_ref.nii.gz  --basis ../../from_juchem/fsl_basis_with_MM --output this_had_better_work

Behold! It fits :slight_smile:

Amazing work! What you did is exactly right.

Someone was asking about the MM on the FSL JISC mailing list. Sorry about that; at some point I've tied myself in knots with the basis phase and frequency conventions, which differ between the FSL simulator, the LCModel .basis format, and the LCModel .raw format. I'll get that patched up in the next release.

One thing you need to do with that final fit is add
--metab_groups MM09 MM12 MM14 MM17 MM21
to ensure the MM components can broaden separately from the metabolites. Details are in the documentation.
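For example, the final call above would then become something like:

fsl_mrs --data acc_press.nii.gz --h2o acc_press_ref.nii.gz --basis ../../from_juchem/fsl_basis_with_MM --metab_groups MM09 MM12 MM14 MM17 MM21 --output this_had_better_work --report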

Hi,

thanks to both of you for your amazing help.
I was able to reproduce the results and add a T1 image to the fitting. The output looks good. :smile:

Thank you very much!
Verena

Great, please do ask about anything else!

Hello again,
two more questions have actually come to mind.
First, I noticed that in the reports some individual metabolites cannot be determined (999 in the table).
For example, in the left DLPFC, Ala, Gln, Gly, MM14, NAAG and PCho cannot be determined.

Since it varies between regions and subjects, I was wondering what that could be due to. Is it simply due to the location of the individual voxel or can I influence this by preprocessing after all?

Furthermore, I wanted to ask if the baseline looks good like this, as it is slightly shifted down or skewed in some subjects. Is this ok or do I still need to change something in the analysis?
For example here or in the plot above:

Thank you for your help and feedback.

Many greetings,
Verena

Hi Verena,

Regarding your first question – I don’t think that’s too surprising. You might want to look into the linewidth/FWHM for these subjects (I believe it’s reported in the QC parameters tab of the output) – it’s likely that these vary between subjects and between regions, and likely that in some subjects (perhaps those with broader linewidths) the algorithm will have a harder time isolating some of the more difficult components. For this reason, a number of the major metabolites which you’re most likely interested in will often be reported as an aggregate sum of related but difficult-to-separate signals, e.g. tNAA (NAA+NAAG), tCho (PCho+GPC), tCr (Cr+PCr), Glx (Glu+Gln+…).

Regarding your second question: hard to say, but it does look a little low to me, and (maybe correspondingly) some of your concentration estimates perhaps a little higher than expected – but not outrageously so. You could try experimenting with the baseline flexibility a bit, or see what happens if you take it away completely (--baseline_order -1), but I’m not sure you’ll get much benefit in doing so (if anything, I’d speculate that slightly over-estimated broad MM components may encourage the baseline to sag a bit… but I don’t have any good advice on how to check or tune this)
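For example, a minimal sketch of such a no-baseline run (the output name is just a placeholder; the other options are as in your existing command):

# rerun the same fit but with the baseline switched off entirely
fsl_mrs --data acc_press.nii.gz --h2o acc_press_ref.nii.gz --basis fsl_basis_with_MM --metab_groups MM09 MM12 MM14 MM17 MM21 --baseline_order -1 --output fit_no_baseline --report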

Hi @wclarke : one thing I’ve noticed in several of Verena’s plots is what looks like a single incongruous datapoint at the far left of the ppm range – I had a quick look through the code and couldn’t find an obvious cause, nor could I determine whether this was purely visual or could actually impact the fit… but this could potentially be a factor here.

Alex.

Hi @alex and @VerenaD ,

Sorry for the delay, I’ve just returned from two weeks holiday.

Alex is correct on the first point: a number of peaks are highly correlated (with correlations < -0.5), so the program can use one to fit both. I would therefore suggest using the --combine option to sum these peaks (as has been done automatically with Cr and PCr), and then to look at these combined results. Another way is to use the MCMC solver (--algo MH), which takes longer but will do a much better job of estimating the peaks together (but not of actually recovering the true underlying contributions!).
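A sketch of what that could look like (do double-check the exact --combine syntax against fsl_mrs --help; file and output names are placeholders to adjust to your own data):

fsl_mrs --data acc_press.nii.gz --h2o acc_press_ref.nii.gz --basis fsl_basis_with_MM --metab_groups MM09 MM12 MM14 MM17 MM21 --combine NAA NAAG --combine PCho GPC --combine Glu Gln --algo MH --output fit_mh --report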

@VerenaD Are you adding the MM to a separate metabolite group, as described in the documentation? Otherwise, maybe try reducing the baseline order as Alex suggested.

The small graphical artefact is just the edge of the optimisation zone (at 4.2 ppm); the baseline is only estimated across this zone and can therefore create a step at its edge. I should make the plot show only the region within the bounds to avoid this small tick.

Hi,

thank you for your answers.
@alex I experimented a bit with the baseline, and we decided to leave it at the default setting. So if we are interested in Glu, we should combine it with Gln into Glx?

@wclarke I think I will try to go for the MCMC solver - thank you for that advice.
And yes I add the MM to the fsl_mrs command with --metab_groups MM09 MM12 MM14 MM17 MM21.

Thank you!
Verena

Hi All,
Sorry to be late to the conversation!
A few minor points that might be helpful:

  • The _act .SPAR/.SDAT files that contain the water-suppressed data will be eddy-current corrected when the flag that @admin mentioned (PostProc -> Spectral Correction) is set to YES
  • Even if that flag is set to YES, the corresponding _ref .SPAR/.SDAT files for the water-unsuppressed data will not be eddy-current corrected
  • To take full advantage of the processing pipeline available with FSL-MRS: if you have a research agreement with Philips, you can export the .data/.list files quite easily. These files store each transient and each coil separately, and during the export you can choose whether or not eddy current correction is applied to the exported data.

Hope this helps!
Erin


Hi Erin,

thank you for your answer!

My images were saved as .dcm files, but I think the ECC handling is the same. I have now applied ECC only to the ref data in my processing steps.
When exporting the data, we only have the option of saving it as enhanced or classic DICOM and of anonymising the patient data. At least, I don't remember seeing a non-DICOM export option - unfortunately.

Verena

Hi Verena,
When you open the Data Export window, do you see a second tab for non-DICOM export?
Also, which site are you at?
Thanks!
Erin

Hi Erin,

I can have a look at the scanner tomorrow.
I am at the TU Munich - Klinikum rechts der Isar.

Thank you!
Verena