Organizing fMRS .IMA files

Hi Georg (@admin) and Helge (@helge),

I have fMRS data in individual IMA files. Each subject has 90 shots/files, and I will be combining them into blocks of 8 shots each (some shots correspond to instruction slides and are unused). Because single-average IMAs need to be in separate folders, should I split these files into folders that correspond to the blocks I want to average (for example, folder1 contains block1 data with shots 1-8, folder2 contains block2 data with shots 9-16, etc.)? Or is there a way to subgroup them within Osprey where I can have all 90 files in one folder?

I can write a script to subgroup the data by block if needed, but I wanted to make sure I’m not overlooking anything.
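
For reference, the script I have in mind is something like this minimal MATLAB sketch (the input folder, the block size of 8, and the assumption that filenames sort in acquisition order are all placeholders on my end):

% Sort single-shot IMA files into block folders of blockSize shots each.
% Assumes the unused instruction-slide shots have already been removed.
rawDir    = '/data/sub-01/ima';   % hypothetical input folder
blockSize = 8;
files     = dir(fullfile(rawDir, '*.IMA'));
[~, idx]  = sort({files.name});   % sort into acquisition order by name
files     = files(idx);
for kk = 1:numel(files)
    blockNo  = ceil(kk / blockSize);
    blockDir = fullfile(rawDir, sprintf('block-%02d', blockNo));
    if ~exist(blockDir, 'dir'); mkdir(blockDir); end
    movefile(fullfile(rawDir, files(kk).name), blockDir);
end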

Thanks!
Meredith

Hi @Meredith,

For now, I’d create one test subject where you separate the blocks into folders as you have proposed. However, I’m not sure whether this could cause hiccups in processing, because the total number of averages, etc. is parsed from the DICOM header. In that case, you may have to change the average numbers in the struct.
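
To make that concrete, an (untested) fix-up after loading might look like the sketch below. The field names follow the FID-A-style structs Osprey uses internally, so please verify them against your own MRSCont before relying on this:

% Hypothetical fix-up after OspreyLoad, in case the DICOM header
% reports the full number of shots rather than the 8 per block.
for kk = 1:length(MRSCont.raw)
    MRSCont.raw{kk}.averages    = 8;   % verify these field names for your data
    MRSCont.raw{kk}.rawAverages = 8;
end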

Best,
Helge

Thanks! I’ll give that a try and see what happens.

Hi @Helge,

Happy to report that separating the fMRS blocks into folders worked for the subject I tested! To scale up for multiple subjects, my plan is to write a script to generate all the folder paths and paste that into the OspreyJob json file. Does this seem reasonable? Or is there another approach you would recommend?
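
For example, a throwaway MATLAB sketch along these lines (subject IDs, block count, and paths are placeholders) would print entries I can paste into the files array of the json:

% Print the per-block folder paths in a JSON-ready form.
subs = {'sub-01', 'sub-02'};
for ss = 1:numel(subs)
    for bb = 1:10
        fprintf('"/data/%s/block-%02d/",\n', subs{ss}, bb);
    end
end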

My pre-planned analysis was to use LCModel for fitting, so for now I’m using Osprey for pre-processing and saving the LCModel .RAW files for analysis. However, I’m unsure how to generate the .control files described at https://schorschinho.github.io/osprey/the-osprey-job-file.html#data-handling-and-modeling-options. Is there an option I should flag in the OspreyJob json?

Thanks!
Meredith

Hi @Meredith,

Great to hear that this is working out fine! Sounds like a reasonable approach to me.

Quick question: Are you using conventional MRS or GABA-edited MRS? If it is the former and you have LCModel installed on the same machine as Osprey, you can run the whole analysis from within Osprey using LCModel. You just have to add "method": "LCModel" and set saveLCM to 1 in the json file.
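
In the json, that amounts to a two-line excerpt like the following (depending on your Osprey version the flag may be written as a number or a string; compare with the example job files shipped with Osprey):

"method": "LCModel",
"saveLCM": "1",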

For GABA-edited MRS you would have to change these control files manually. I can send you an example file.

Best,
Helge

I’m using conventional MRS. Currently, LCModel is installed on a different machine: I’m using Osprey on a Mac, and LCModel is on Linux. Do I understand correctly from this thread that I would need to build LCModel for the Mac? Or should I look into doing everything from the Linux machine?

Edit to add: If I fit with LCModel through Osprey, can I specify control parameters like ATTH2O and WCONC?

Hi @Meredith,

Osprey already ships with compiled LCModel binaries for macOS; if you aren’t using a very old OS, it should work out of the box. You can test it with the jobSDAT_LCModel.m example file in the exampledata/sdat/Unedited folder and see if it runs on your machine. The advantage of calling LCModel this way is that the downstream modules in Osprey (Coreg, Seg, Quantify, Overview) will still work and calculate the right concentrations. You will also be able to browse through the fits in the GUI.

If you want to make changes to WCONC etc., or write the control files for your own Linux machine, you can take a look at osp_fitInitialise.m, lines 305 to 339, to see how the control files are written and to change any parameters.
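
For orientation, those parameters end up in the $LCMODL namelist of each control file. An illustrative excerpt (the values here are placeholders, not recommendations) looks like this:

$LCMODL
filraw = '/path/to/sub-01_block-01.RAW'
filbas = '/path/to/your.basis'
wconc  = 35880
atth2o = 0.7
dows   = T
$END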

Helge

:exploding_head:
jobSDAT_LCModel.m worked great! I much prefer to work from my Mac, so this is probably the route I’ll go with.

If I have a json job file, where do I specify the basis set?

Thank you for your help and patience as I wrap my head around all this! :slightly_smiling_face:

Kind regards,
Meredith

Great to hear that.

Good catch. The current .json implementation doesn’t allow you to specify a .BASIS file. So what you will have to do is run OspreyJob and afterwards add a line of code that specifies the .BASIS file:

MRSCont.opts.fit.basisSetFile = 'pathToYourFile/basisset.basis';

Alternatively, you can let Osprey generate the .BASIS file for you.
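
In context, the full command-line sequence might look like this (assuming the standard module order; the path is a placeholder):

MRSCont = OspreyJob('myJob.json');
MRSCont.opts.fit.basisSetFile = 'pathToYourFile/basisset.basis';
MRSCont = OspreyLoad(MRSCont);
MRSCont = OspreyProcess(MRSCont);
MRSCont = OspreyFit(MRSCont);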


Hi @Helge,

Another fMRS job file question: Since I have 10 spectra per subject, do I need to replicate the path to the water file and T1w file 10 times (1 per fMRS spectrum) even though they are the same files?

Thanks!
Meredith

Hi @Meredith,

Yes, you’ll definitely have to do this for both the water and the T1w files.

For the T1w files, I’d recommend running the segmentation for one fMRS acquisition first and then duplicating the scans and SPM outputs with matching filenames. This way you don’t have to re-run the same segmentation for each block. For example, you run:

.../sub-01/block-01/anat/sub-01/sub-01_block-01_T1w.nii.gz

and get the outputs

.../derivatives/FilesSPM/c1_sub-01_block-01_T1w_space-scanner_spm12_pseg.nii.gz
...
.../derivatives/SegMaps/sub-01_block-01_PRESS_35_act_space-scanner_Voxel-1_label-CSF.nii.gz
...

And then you duplicate these files for each block (a small copy loop, sketched after this listing, could automate it):

.../derivatives/FilesSPM/c1_sub-01_block-02_T1w_space-scanner_spm12_pseg.nii.gz
...
.../derivatives/SegMaps/sub-01_block-02_PRESS_35_act_space-scanner_Voxel-1_label-CSF.nii.gz
...
.../derivatives/FilesSPM/c1_sub-01_block-n_T1w_space-scanner_spm12_pseg.nii.gz
...
.../derivatives/SegMaps/sub-01_block-n_PRESS_35_act_space-scanner_Voxel-1_label-CSF.nii.gz
...
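
A MATLAB loop along these lines could handle the duplication (the derivatives root, the block naming, and the block count are illustrative, following the pattern above):

% Copy the block-01 SPM/segmentation outputs for the remaining blocks,
% renaming 'block-01' to 'block-02', 'block-03', and so on.
derivDir = '/path/to/derivatives';   % placeholder root
srcFiles = [dir(fullfile(derivDir, 'FilesSPM', '*block-01*')); ...
            dir(fullfile(derivDir, 'SegMaps',  '*block-01*'))];
for bb = 2:10
    for ff = 1:numel(srcFiles)
        src = fullfile(srcFiles(ff).folder, srcFiles(ff).name);
        dst = fullfile(srcFiles(ff).folder, ...
                       strrep(srcFiles(ff).name, 'block-01', sprintf('block-%02d', bb)));
        copyfile(src, dst);
    end
end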

Hope this makes sense.

Best,
Helge

Makes sense. Thank you!

Hi @Helge,

I realized that I will already have segmentation data for these fMRS subjects after analyzing their resting-state MRS (same session and voxel). Can I supply paths to those data via files_seg in the job json, or would you still recommend duplicating the files as you previously suggested?

Thanks!
Meredith

Hi @Meredith,

Sorry for the delay.

Yes, you should be able to use files_seg to give Osprey external segmentation files. You can look at the jobSDAT.m file for the explicit definitions/order: the first file should be gray matter, the second white matter, and the last CSF.
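
In jobSDAT.m the definition looks roughly like this (paths are placeholders; c1/c2/c3 are SPM’s GM/WM/CSF outputs):

% One cell per dataset, holding the three tissue maps in GM, WM, CSF order:
files_seg = { {'/path/c1_sub-01_T1w.nii', ...
               '/path/c2_sub-01_T1w.nii', ...
               '/path/c3_sub-01_T1w.nii'} };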

I have tested this for a few subjects, but not for different sessions or blocks. I would try it for two blocks first and see how it goes.

Best,
Helge


Hi @Helge,

Thanks for the pointers. In jobSDAT.m, it looks like files_seg is a cell array, where the first cell contains c1/c2/c3 for the first subject, the second cell contains c1/c2/c3 for the second subject, and so on. How would I specify these in a json? OspreyJob doesn’t like curly braces { } to group c1/c2/c3, and if I use square brackets [ ], OspreyJob gives an error that there is only one entry even though I’ve specified the files for more than one block.

Thanks!
Meredith

Hi @Meredith,

This was indeed not possible. I have updated the OspreyJob function to accept more than one segmentation file per subject; the change is now online on the develop branch. You’ll have to add the files with separate square brackets. See also the attached file for reference (converted to .txt for uploading purposes).
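
In the json, that nesting would look something like this (the paths are placeholders; one inner bracket group per dataset):

"files_seg": [
    ["/path/c1_sub-01_block-01_T1w.nii", "/path/c2_sub-01_block-01_T1w.nii", "/path/c3_sub-01_block-01_T1w.nii"],
    ["/path/c1_sub-01_block-02_T1w.nii", "/path/c2_sub-01_block-02_T1w.nii", "/path/c3_sub-01_block-02_T1w.nii"]
],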

Helge

jobSDAT.txt (2.3 KB)

Great! I’ll give it a try. Thanks @Helge!

Hi @Helge,

I finally had a chance to test files_seg in a json job for multiple subjects x multiple blocks.

I had to ensure that each subject’s individual block folders had the subject ID in the folder name so that the coreg/segment steps didn’t overwrite files. This may be specific to how I have the data organized.

This did not work (it runs, but it overwrites the SegMaps and VoxelMasks because the file names are not subject-specific, which gave the wrong tissue fractions in some cases):

sub-001
    block1
    block2
sub-002
    block1
    block2

and this works:

sub-001
   sub-001_block1
   sub-001_block2
sub-002
   sub-002_block1
   sub-002_block2

A simple fix; once I got that sorted out, all the Osprey steps ran okay.

Thanks!
Meredith
