I don’t have strict cut-offs, but I do have the advantage of having looked at probably 5,000 good and bad spectra over the years. MRS in general still has some catching up to do in terms of automated QA, and as a result, the mysterious ‘visual inspection’ remains something people will write in papers as a valid way of doing QA (I’m not innocent of this, in the absence of better methods, but painfully aware that observer-dependent analysis is a recipe for irreproducible science). It doesn’t help that a lot of different software packages define QA metrics in different ways. The terminology consensus that was recently published should help with this, as should an emerging set of open-source projects that are in constant development, but it’ll take a while to catch on, I guess.
In GannetLoad, I’m usually looking out for a well-behaved (flat) baseline, a symmetric Gaussian-shaped GABA+ signal without a lot of subtraction artefacts (humps on either side), and a decent model fit to it. A 5% FitError is usually something I’d consider good, indeed. You’ll get a feel for the SNR, too - generally, you want the noise to be ‘small’ compared to the amplitude of the signal you’re trying to fit. (That seems an obvious statement, but I’ve seen data where the noise was on the same order of magnitude as the GABA peak itself, and people thought they could get away with that.)
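If it helps to make the ‘noise small compared to the signal’ idea concrete: here’s a toy sketch (in Python/NumPy, not how Gannet itself computes it) of the usual style of SNR estimate - peak amplitude divided by the standard deviation of a signal-free region of the spectrum. The function name and the simulated ‘GABA-like’ peak are purely illustrative.

```python
import numpy as np

def estimate_snr(spectrum, peak_idx, noise_slice):
    """Crude SNR estimate: peak amplitude over the standard deviation
    of a signal-free noise region. Illustrative only - real pipelines
    differ in how they define both the peak and the noise region."""
    noise = spectrum[noise_slice]
    # Detrend the noise region first, so a sloping baseline doesn't
    # inflate the apparent noise standard deviation.
    x = np.arange(noise.size)
    coeffs = np.polyfit(x, noise, 1)
    detrended = noise - np.polyval(coeffs, x)
    return spectrum[peak_idx] / np.std(detrended)

# Toy example: a Gaussian 'GABA-like' peak on top of Gaussian noise.
rng = np.random.default_rng(0)
points = np.arange(2048)
peak = 10.0 * np.exp(-((points - 1024) ** 2) / (2 * 30.0**2))
spectrum = peak + rng.normal(0.0, 0.5, points.size)

snr = estimate_snr(spectrum, peak_idx=1024, noise_slice=slice(0, 256))
```

With an amplitude of 10 and noise standard deviation of 0.5, this lands around an SNR of 20 - comfortably in the ‘noise is small’ regime. If that ratio drops toward 1, you’re in the territory I described above, where the noise is as big as the peak you’re trying to fit.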
You can consider manual shimming, but it’s tedious and adds a lot of time that you likely won’t have during a clinical research protocol. If you’re doing a cortical voxel, you’re likely fine. Then again, nothing beats just collecting a bunch of test data and getting a feel for it…
Does any of that make sense to you?