Error with GE PFile - possibly multiple software versions?

Hi Osprey team,

I’m trying to process ~50 HERMES scans that have been collected over the last few years. I can run a few at a time, but when I try to put them all together, I get this error:

    Index exceeds the number of array elements (1).

    Error in osp_plotModule (line 331)
        StatText = ['SNR(' MRSCont.processed.(which){1,kk}.QC_names{SubSpec} '): ' num2str(MRSCont.QM.SNR.(which)(Exp,kk)) '; FWHM (' MRSCont.processed.(which){1,kk}.QC_names{SubSpec} '): '...

    Error in osp_plotAllPDF (line 54)
        osp_plotModule(MRSCont, 'OspreyProcess', kk, [1 ss], Names{mm});

    Error in OspreyProcess (line 757)
        osp_plotAllPDF(MRSCont, 'OspreyProcess')

My theory is that a scanner software upgrade moved around some of the flags that get carried between Load and Process, so the data aren’t being read consistently across versions. Could that be it? Any suggestions? It would be nice to process everyone together, but I could split the batches by GE software version if needed.

Also my theory might be totally wrong!

Thanks in advance!

Hi Marilena,

Just to be sure: does it work separately for both the pre- and post-upgrade data, with reasonable-looking fits in both cases? How about if you take a simple case, running one pre-upgrade spectrum and one post-upgrade spectrum together (preferably ones which you’ve already tested separately)?

I’m not aware of anything major in the HERMES implementation which might give rise to this behavior across upgrades, so I wonder if there may just be one bad dataset which is breaking things? Could you check the output in Figures/OspreyProcess and see whether you have output for at least some of the subjects (and perhaps, try removing the dataset immediately after the last one you see output for)?
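
If it helps, a quick way to list what’s already there from the MATLAB prompt (the output-folder root and the .pdf extension are assumptions on my part, so adjust as needed):

    % List the per-subject figures Osprey has written so far
    % ('myOutputFolder' is a placeholder for your job's output folder)
    figDir = fullfile('myOutputFolder', 'Figures', 'OspreyProcess');
    figs   = dir(fullfile(figDir, '*.pdf'));
    fprintf('Found %d figure files:\n', numel(figs));
    fprintf('  %s\n', figs.name);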

A bug was introduced in the plotting function in an earlier version. @Helge fixed this just the other day, so you might want to try updating to the latest development version.

Thank you both!

This seems to have been a combination of issues.

Before I first posted, I tried running some early scans and some late scans separately and then independently, which is why I thought the issue was likely a difference in software versions, but I now think it was an issue of me taking things out of the middle of the job file.

It will run everyone in small batches, but freaks out at a big batch, which I think is a memory issue on my computer rather than an Osprey issue.

Thank you both for your help!

Thanks @alex and @mmikkel for stepping in.

I’ve heard similar issues from another user with around 100 MEGA-PRESS Siemens datasets. Apparently, the amount of raw data in the struct can lead to a memory issue on the machine. We will potentially come up with a solution for this.

Thanks @Helge - that definitely seems to be the issue for me. Only 45 scans, but they’re HERMES and I’m working on a laptop. It also seems to be a processing issue rather than a storage one.

Is the best workaround to process in sub-batches for now?

Yes - that’s the only workaround for now. I’ll try to reproduce this locally by analyzing all Big GABA Siemens or GE datasets at once.
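
In case it’s useful, something along these lines is what I mean by sub-batches: one job file per batch, run in a loop, clearing the container in between. The file names are placeholders, and I’m assuming the usual OspreyJob/OspreyLoad/OspreyProcess command-line calls (adjust if your setup differs):

    % Sketch of the sub-batch workaround (job-file names are placeholders)
    jobFiles = {'jobHERMES_batch1.m', 'jobHERMES_batch2.m', 'jobHERMES_batch3.m'};

    for b = 1:numel(jobFiles)
        MRSCont = OspreyJob(jobFiles{b});   % set up the container for this batch
        MRSCont = OspreyLoad(MRSCont);      % load the raw data
        MRSCont = OspreyProcess(MRSCont);   % process and write the PDFs
        clear MRSCont                       % free memory before the next batch
    end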

Ah, interesting. For what it’s worth, I find that MATLAB itself tends to leak memory in some cases (Linux platform, on several versions over the past few years). It usually only becomes a problem for me in much larger batches (thousands, rather than hundreds of fits), but even despite being particularly diligent about closing everything, deleting unused objects and encouraging the underlying Java system to do more rigorous garbage collection, the usage creeps up. The only solution I’ve found is to break up the larger jobs and completely restart MATLAB in between (this even on a system with 120 GB of RAM… so we’re not talking about trivial amounts of memory leaking here). I haven’t managed to isolate this to any particular component or function.
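
For the record, this is the sort of between-run housekeeping I mean (it slows the creep down but doesn’t stop it):

    % Cleanup between large batches - mitigates the leak, doesn't cure it
    close all force            % close any figure windows still hanging around
    clear variables            % drop everything in the workspace
    java.lang.System.gc();     % nudge the JVM under MATLAB to collect garbage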

I don’t think this is the same problem you’re seeing here, but I guess my point is that you should consider the possibility that this isn’t actually just an Osprey bug per se, but rather a lower-level issue which Osprey happens to be triggering…

Just found a bug in the OspreyLoad code that resulted in the raw uncombined spectra being kept in the MRS container. This led to an unreasonably large file size of about 1 GB per subject, and to the memory issue described here when more than 50 subjects are added to the analysis.

I’ve fixed this in the code, which reduces the file size to about 50 MB per subject, and I’m confident that this should solve the issue.
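
If you want to double-check on your end after pulling the update, a rough way to see how big the loaded container actually is (the 1 GB / 50 MB figures above refer to the saved file on disk, so this is only a proxy):

    % Rough check of the container's in-memory footprint after OspreyLoad
    s = whos('MRSCont');
    fprintf('MRSCont currently uses %.1f MB\n', s.bytes / 1e6);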
