Osprey, Gannet, and LCModel - orders of magnitude different

When I processed MEGA-PRESS data using Osprey, Gannet, and LCModel separately, I found that the results from Osprey and Gannet were orders of magnitude different from those calculated by LCModel. Why is this the case?

Which results are you exactly comparing?

Very briefly, Osprey and Gannet return estimates from a couple of different quantification methods (tCr ratios and various levels of water-scaling sophistication). LCModel also returns tCr ratios, and those should always be comparable. The LCModel water scaling is very different, though - basically, LCModel can't do tissue correction on its own and, by default, assumes pure white matter and short echo times unless you override those settings. It shouldn't be off by orders of magnitude, though.

Thank you. Maybe I didn’t make myself clear. I’m referring to smaller differences, like 1 vs. 4, not orders of magnitude. What I mean is the water-scaling results.
Is this the reason why most papers use tCr ratios instead of water scaling? Because all apps can calculate tCr ratios, and they are comparable across different machines. Regarding LCModel not having tissue correction, I think it’s because if we want a tCr ratio, it is calculated by dividing GABA by tCr, so the tissue correction cancels out at the same time.
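To illustrate the cancellation idea in the simplest possible terms - a purely toy numeric sketch (all amplitudes and the scaling factor are made up): any multiplicative factor that is shared by both metabolite signals drops out of the ratio.

```python
# Toy illustration: a multiplicative factor applied to every metabolite
# signal in a voxel (e.g., a common scaling or correction term) cancels
# when you take a ratio. All numbers here are hypothetical.

gaba_raw = 1.2   # hypothetical raw GABA signal amplitude
tcr_raw = 8.0    # hypothetical raw total-creatine signal amplitude
scale = 0.37     # hypothetical shared scaling factor

ratio_raw = gaba_raw / tcr_raw
ratio_scaled = (gaba_raw * scale) / (tcr_raw * scale)

# the shared factor has no effect on the ratio
assert abs(ratio_raw - ratio_scaled) < 1e-12
print(ratio_raw)
```

Note this only holds for factors that really are common to both signals; anything metabolite-specific (relaxation, editing efficiency) does not cancel.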

Additionally, I have a quick question: Why does LCModel always have the highest CV compared to other apps like Osprey and Gannet? It seems like I can’t adjust parameters in LCModel.

I would say tCr referencing is popular because it is easy. Water scaling requires another (short) scan, co-registration, segmentation, and then a whole host of assumptions and calculations. Less than 10 years ago there wasn’t really software out there to do that for you - everyone wrote it themselves (which is another reason why you see water-scaled literature values differ).
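Just to give a flavour of the "assumptions and calculations" part - here is a deliberately oversimplified sketch of the kind of water-scaling arithmetic involved. This is not any particular package's implementation; real pipelines (e.g. Gasparovic-style corrections) add per-compartment T1/T2 relaxation terms, and the water-density values and all inputs below are illustrative assumptions.

```python
# Highly simplified water-scaling sketch (illustrative only).
# Tissue fractions f_gm/f_wm/f_csf would come from segmentation of a
# co-registered structural image; relaxation corrections are omitted.

PURE_WATER_CONC = 55510.0  # mmol/kg, approximate pure-water concentration


def water_scaled_conc(s_met, s_water, f_gm, f_wm, f_csf,
                      water_density=(0.78, 0.65, 0.97)):
    """Toy water scaling: metabolite-to-water signal ratio times an
    effective voxel water concentration, with a CSF correction
    (assumes no metabolite signal arises from CSF)."""
    d_gm, d_wm, d_csf = water_density
    # MR-visible water concentration in the voxel
    water_conc = PURE_WATER_CONC * (f_gm * d_gm + f_wm * d_wm + f_csf * d_csf)
    return (s_met / s_water) * water_conc / (1.0 - f_csf)


# hypothetical inputs: signal ratio and segmentation fractions
print(round(water_scaled_conc(0.002, 1.0, 0.45, 0.45, 0.10), 2))
```

Every term here (water density per tissue class, the CSF correction, the omitted relaxation factors) is an assumption someone has to make, which is exactly where the between-implementation spread comes from.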

I’m not sure why LCModel has higher CVs compared to the other two in your data - it’s hard to tell without seeing exactly what you’re doing. Parameters in LCModel are adjusted through the control file (see the manual) - there is a learning curve; let me know what you want to know.


If the estimates from LCModel are really orders of magnitude different, I would suspect that either water scaling wasn’t actually done, or there was an issue with the reference fit – you’d need to have a look at the diagnostic output to be sure though.

If you’re looking at GABA scaled to Creatine (from the edit OFF subspectrum, presumably), one possibility is that your basis set and preprocessing make different assumptions about how the DIFFerence spectrum is calculated (possible factor-of-two scaling difference).

The rest of the difference could be accounted for by more complete scaling models in Osprey/Gannet, and will also depend on your baseline model, whether you explicitly include 3-ppm macromolecules in your basis set, and of course whether you’re including these in your final GABA+ estimate or just looking at GABA alone. See, amongst others, https://doi.org/10.1002/nbm.4618 and https://doi.org/10.1002/nbm.4702

This depends on what exactly you’re referring to with the CV. If you’re using the term loosely for the different estimates of fit reliability (e.g., the CRLB-based %SD for LCModel, and a FitError based on the standard deviation of the residuals for the other algorithms), these are fundamentally different metrics and aren’t really comparable between algorithms.

If you’re referring to test-retest CV, that’s a bit surprising – most studies I’m aware of tend to find the opposite.
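To make the distinction concrete, here is a minimal sketch of the two very different quantities that both get called "CV" - a test-retest coefficient of variation across repeated estimates, versus a residual-based fit error within a single fit. The function names and all numbers are illustrative, not any package's actual API.

```python
import statistics


def coefficient_of_variation(values):
    """Test-retest CV: standard deviation over mean, in percent,
    across repeated measurements of the same quantity."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)


def fit_error(residuals, model_amplitude):
    """Residual-based fit quality: SD of the fit residuals relative to
    the model amplitude, in percent (loosely the FitError idea)."""
    return 100.0 * statistics.stdev(residuals) / model_amplitude


# hypothetical repeated GABA+ estimates from one subject
estimates = [2.1, 2.3, 2.0, 2.2]
print(round(coefficient_of_variation(estimates), 1))

# hypothetical fit residuals for a single spectrum
print(round(fit_error([0.01, -0.02, 0.015, -0.005], 1.0), 2))
```

One is a statement about reproducibility across scans, the other about how well one model fits one spectrum - and neither is a CRLB, which comes from the Fisher information of the fit model. Comparing them across algorithms as if they were the same number is where confusion tends to creep in.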
