Using stat.csv files with RunOspreyJob

Dear Osprey Gurus,

I have got Osprey running through the command line (using RunOspreyJob('jobfile.json')). It works fine up until the MRSCont = OspreyOverview(MRSCont); step, most likely because I do not have a stat.csv file for the script to read and sort my data. How should this stat.csv file be organised? Should it be like this?:


or should it be more like the job file?:
```
"group1": ["/Path/To/sub-01_ses-01.sdat", "/Path/To/sub-02_ses-01.sdat", …, "/Path/To/sub-14_ses-01.sdat"]
"group2": ["/Path/To/sub-01_ses-02.sdat", "/Path/To/sub-02_ses-02.sdat", …, "/Path/To/sub-14_ses-02.sdat"]
```

Also, in either setup, I assume I could use any naming structure for the group name - so for the first example I could just use:

Any help appreciated. I have posted this in the "Command line only Osprey" thread as well, in case someone is looking there.

Hey @PGMM,

the stat.csv file should be organized into columns, with a functional minimum of two columns: group (a numerical variable to separate the scans) and subject (a string to identify each line of the csv). Each row corresponds to one subject, in the order the scans appear in the files definition of your job file. The group variable is subsequently used for the visualization; it can separate groups, experimental conditions, etc. If you have only one group or condition, set it to 1 in all rows. I introduced the subject variable with the idea that you could share fully de-identified results from the quantify folder and the subject identifiers separately.
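As a sketch of that layout, here is one way such a file could be generated programmatically. This is just an illustration (Osprey itself is MATLAB-based, and the file names and group assignments below are hypothetical, not from this thread):

```python
# Sketch: write a minimal stat.csv with the two required columns,
# one row per scan, in the SAME order as the "files" definition
# in the Osprey job file. All names here are made-up examples.
import csv

scans = [
    ("sub-01_ses-01", 1),  # (subject identifier, numeric group)
    ("sub-02_ses-01", 1),
    ("sub-01_ses-02", 2),
    ("sub-02_ses-02", 2),
]

with open("stat.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["group", "subject"])  # required column names
    for subject, group in scans:
        writer.writerow([group, subject])
```

Any spreadsheet program works just as well, of course; the only things that matter are the column names and the row order.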

Here’s a minimal example with 5 groups:


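A two-column file of that minimal shape, with five groups and hypothetical subject labels, could look like:

```csv
group,subject
1,sub-01
2,sub-02
3,sub-03
4,sub-04
5,sub-05
```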
And a more elaborate version with other variables:

age (years),group name,group,subject,sex
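Filled in with hypothetical values, rows under that header might read:

```csv
age (years),group name,group,subject,sex
34,control,1,sub-01,f
29,patient,2,sub-02,m
```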

Let me know if this works for you.


Thanks Helge, that makes sense, and I got it to work. I kept it simple for the first one, but I may add variables for correlations to see what Osprey can do.

