Gannet Fit Error

Hello MRSHub Community,

I am having some trouble understanding and interpreting the Fit Error reported by Gannet. I believe that the formula for the Fit Error computation found on the website (Data quality metrics) differs from the one in the paper (https://onlinelibrary.wiley.com/doi/full/10.1002/jmri.24478). I have also tried to follow the Gannet fit code on GitHub and believe that it uses the formula from the paper, i.e., the standard deviation of the fitting residual divided by the amplitude of the fitted peak.

I also have some questions about the reliability of this metric for evaluating spectral quality. For instance, if two signals have the same residual standard deviation but one of them has a wrongly overestimated fitted peak amplitude, the overestimated fit would receive a better (lower) fit error. Does it make sense to consider this scenario?

Also, is it possible to calculate the Cramér-Rao lower bound (CRLB) through Gannet? If not, how could I implement it?

Thank you for the help,
Gabriel

Hi @Gabriel_Dias,

Thank you for pointing out the error in the fit error description on the Gannet website! I’ve been meaning to complete that page, and I’ve now corrected it. It matches the formula shown in the original Gannet paper (except for the factor of 100 that is applied in the code but not reported in the paper).
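
For concreteness, here is a minimal sketch of that calculation as described above (illustrative Python, not the actual Gannet MATLAB code; the function and variable names are mine):

```python
import numpy as np

# Fit error as the standard deviation of the fit residual, normalized by the
# amplitude of the fitted peak and expressed as a percentage (the factor of 100).
def fit_error(data, model, peak_amplitude):
    residual = data - model  # difference between spectrum and fitted model
    return 100 * np.std(residual) / peak_amplitude
```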

Regarding your second point about the two-signals scenario: I’m not sure I follow, as this situation is not plausible. A wrongly overestimated fitted peak in Gannet would by necessity produce large residuals, so its fit error would be higher than that of a correctly estimated peak. Could you please clarify what you mean?

Gannet does not calculate CRLBs as this error estimation approach does not suit the modeling we use. If we were using basis sets, then it would perhaps make sense.

I hope this helps.

Mark

Hi @mmikkel,

Regarding my second point: after reading the Gannet documentation more carefully, I realize that the scenario I described is not plausible.

Thank you very much for all the information; it was really helpful.

Gabriel

I’m curious about why we use the amplitude of the fitted peak to calculate the Fit Error. In my view, using the GABA area might better represent the concentration of GABA, and the fit error could be more reliable. Could you please explain the rationale behind this approach?

Dividing by the model amplitude normalizes the standard deviation of the residuals so that the number is more easily interpretable. Since this is the error of the model rather than the error in the quantification of the concentration, using the model amplitude makes more sense, in my opinion.
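
As a toy illustration of what that normalization buys you (made-up numbers, not Gannet output): the same residual noise reads as a smaller relative error against a taller peak, i.e., the metric behaves like "residual noise as a percentage of peak height".

```python
import numpy as np

rng = np.random.default_rng(0)
residual = rng.normal(0, 0.05, 1024)  # identical residual noise in both cases

for amplitude in (1.0, 2.0):          # two hypothetical peak amplitudes
    err = 100 * np.std(residual) / amplitude
    print(f"amplitude = {amplitude:.1f} -> fit error ~ {err:.1f}%")
```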

But note that it’s not a perfect estimate, because you could fit some signal that is actually artifactual and still get a low fit error (@Gabriel_Dias, I think this may be what you were getting at?).

The takeaway is that fit error is just one metric for assessing data and modeling quality; it shouldn’t be the only thing you look at.

Mark

Thank you for your reply! I still want to know how to measure data quality. While I understand that SNR and FWHM reflect data quality, I find it difficult to judge it based on them alone. Initially, during pre-analysis, I assessed quality with a test-retest (back-to-back) acquisition and used the coefficient of variation (CV) across the repeats. However, in the formal experiments I did not acquire repeated measurements, so now I am unsure how to assess data quality.
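
For context, this is roughly how I computed the CV in the pre-analysis (hypothetical values, not my actual data):

```python
import numpy as np

# Hypothetical back-to-back GABA estimates from a test-retest pre-analysis
gaba_estimates = np.array([2.1, 2.3, 2.0, 2.2])
cv = 100 * np.std(gaba_estimates, ddof=1) / np.mean(gaba_estimates)
print(f"test-retest CV ~ {cv:.1f}%")
```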

Assessing data quality is inherently holistic. SNR and FWHM are important to examine, but what constitutes “acceptable” levels will depend on several factors, such as where the data were acquired in the brain or body.

Visual inspection of your data, in my opinion, is the best way to assess how good your data are.

I recommend reading the following papers for more on assessing the data quality of MRS datasets: