The 2 s TR for 1H-MRS is pretty much a canonical trade-off. Yes, in an ideal world you'd want to measure at as long a TR as possible, so that the different metabolite signals don't get weighted by their individual T1s.
The enemy you're up against in vivo is SNR, since you're constrained by the maximum duration of the experiment. SNR grows with the square root of the number of averages, as you know. It also grows with TR (as more longitudinal magnetization recovers between shots), but with diminishing returns once TR >> T1.
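To put numbers on those two scaling laws, here's a minimal sketch assuming the simple saturation-recovery signal model (90° pulses, single T1); the T1 value is illustrative, not from any particular measurement:

```python
import math

def signal_per_average(tr, t1):
    # Fraction of longitudinal magnetization recovered after TR
    # (saturation recovery following a 90° pulse)
    return 1.0 - math.exp(-tr / t1)

def snr(n_averages, tr, t1):
    # Averaging n shots grows SNR with sqrt(n)
    return math.sqrt(n_averages) * signal_per_average(tr, t1)

t1 = 1.4  # s, an illustrative metabolite T1 (assumption)

# Diminishing returns once TR >> T1:
for tr in (0.5, 1.0, 2.0, 5.0, 10.0):
    frac = signal_per_average(tr, t1)
    print(f"TR = {tr:4.1f} s -> recovered fraction: {frac:.3f}")
```

Going from TR = 2 s to TR = 5 s buys relatively little extra signal per shot, while each shot costs 2.5x the time.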
So you're looking for a 'sweet spot' TR that affords you enough 'SNR per unit time'. The TR that maximizes that metric is super-short, combined with a low flip angle, but at the expense of introducing a hell of a lot of T1 weighting. TR = 2 s is a trade-off between all of these considerations. One might argue that the resulting T1 weighting is still large enough to make it difficult to compare metabolite estimates, and they'd be right; it's one of the main reasons no one really likes to call their estimates "concentrations". But as long as we don't have a surefire way to measure individual T1s reliably and quickly enough, we have to pick a can of worms to open.

That's the reason I'm personally really excited about multi-parametric MRS: essentially a fingerprinting technique where, instead of repeating the same sequence 64, 128, or 300 times, you vary TE, TR, and the flip angle between averages according to a pre-optimized schedule, and then model the results to extract estimates for T1 and T2. It might not solve everything, but it at least makes the worm dish a bit more palatable (read: if we have to do relaxation correction, it's probably better to use individual estimates than literature values).
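The 'SNR per unit time' argument can be sketched numerically. In a fixed total scan time you fit a number of averages proportional to 1/TR, and SNR grows with the square root of that number, so the figure of merit is signal(TR) / sqrt(TR). Under simple steady-state assumptions (single T1, ideal spoiling; the T1 value is again illustrative):

```python
import math

def efficiency_90(tr, t1):
    # 90° excitation, saturation recovery: S is proportional to 1 - exp(-TR/T1)
    return (1.0 - math.exp(-tr / t1)) / math.sqrt(tr)

def efficiency_ernst(tr, t1):
    # Ernst-angle excitation (cos(alpha) = exp(-TR/T1)); steady-state
    # signal is proportional to sqrt((1 - E1) / (1 + E1)), E1 = exp(-TR/T1)
    e1 = math.exp(-tr / t1)
    return math.sqrt((1.0 - e1) / (1.0 + e1)) / math.sqrt(tr)

t1 = 1.4  # s, illustrative metabolite T1 (assumption)
trs = [i * 0.01 for i in range(5, 1001)]  # grid: 0.05 s .. 10 s

best_90 = max(trs, key=lambda tr: efficiency_90(tr, t1))
print(f"90 deg pulses: best TR ~ {best_90:.2f} s ({best_90 / t1:.2f} * T1)")

# Ernst-angle efficiency keeps improving as TR shrinks -- the
# 'super-short TR, low flip angle' regime mentioned above:
best_ernst = max(trs, key=lambda tr: efficiency_ernst(tr, t1))
print(f"Ernst angle: best TR ~ {best_ernst:.2f} s (shortest on the grid)")
```

With 90° pulses the optimum lands near 1.26 x T1, i.e. close to 2 s for typical metabolite T1s, which is part of why TR = 2 s is the conventional compromise; the Ernst-angle optimum at very short TR buys efficiency precisely at the cost of heavy T1 weighting.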
This got a bit lengthy, and you might have guessed much of it already, but hopefully it was helpful.