Hi @PGMM, I haven’t solved this yet, no. As @admin confirms, it’s not a Big Sur issue but an M1 chip issue. The Apple M1 chip is built on the ARM64 architecture rather than the x86-64 architecture that Intel chips use. I have a compatible gfortran compiler, but it still doesn’t work. My only guess is that the LCModel source code won’t compile on ARM64? But I’ll keep trying.
Hi @mmikkel and all. I see the same issue with gfortran 11.1 natively on M1 (arm64) – however I note that it is possible to build and run lcmodel on an M1 mac using an x86_64 gcc/gfortran release (I used 10.2.0.4) and rosetta2 translation – transparently, but presumably with a bit of a performance hit.
In my case, the apple Migration Assistant kindly brought over a working x86_64 version of gcc/gfortran from my previous machine, so this was surprisingly painless. If you’re not so fortunate, it should still be possible to coerce homebrew to get all the necessary parts (slow and awkward, but it worked eventually):
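The Homebrew route might look roughly like this (a sketch only; the installer URL and behaviour may have changed since, so check the current Homebrew documentation first):

```shell
# Install a second, x86_64 Homebrew under Rosetta 2
# (this uses the default x86_64 prefix, /usr/local)
arch -x86_64 /bin/bash -c \
  "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Use that Homebrew to install an x86_64 build of gcc/gfortran
arch -x86_64 /usr/local/bin/brew install gcc
```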
This assumes that any existing (eg, native arm64) homebrew is installed somewhere else (perhaps /opt/homebrew), and adopts the default /usr/local/… prefix for x86_64. It’s probably cleaner to use a different prefix, but also more complicated. See Installation — Homebrew Documentation
Check that the target is as expected…
/usr/local/bin/gfortran -v
There should be a line like Target: x86_64-apple-darwin20; then just specify the full path to that x86_64 gfortran binary for compilation of LCModel, like:
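For example, using the compile flags suggested elsewhere in this thread (adjust to taste):

```shell
# Compile LCModel with the x86_64 gfortran, referenced by its full path
/usr/local/bin/gfortran -c -fno-backslash -fno-f2c -O3 -fall-intrinsics \
  -std=legacy -Wuninitialized -ffpe-summary=none LCModel.f
/usr/local/bin/gfortran LCModel.o -o lcmodel
```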
This might not be the ideal workaround, but I have a new Mac with an M1 chip and I just installed gfortran 11.2 for Big Sur (Intel) on it. It gives me some warnings but appears to run LCModel OK…
Off-topic, but thought I’d ask: What do all the new M1 users here think about it? Does it play well with most of the stuff we typically deal with (MATLAB, Python, R, gcc, Adobe stuff etc.) or are there any big cons? Does anyone run virtual machines with VMware Fusion?
I’ve had a superb experience with my M1 MacBook Pro and Mac mini. Definitely feels speedier than the former Intel chip Macs. Nearly every app I use now has an M1-native version, and those that don’t still run fine thanks to the Rosetta 2 translation layer built into macOS.
The only downside is virtual machines. VMware hasn’t released a (stable) version of their app for M1 Macs as far as I’m aware, but I’m sure that will happen in the coming months. VirtualBox still hasn’t announced when they will make their software M1-compatible.
That said, Parallels is now M1-compatible, but I’ve yet to try it out. I believe that’s only for running Windows on your Mac, though, not Linux. UTM is another option, but I’m not sure how broad its guest OS support is.
I’ve also been pleasantly surprised by the performance of the M1 (mac mini); native (arm64) apps are incredibly fast, and the architecture is well-suited to the sort of processing many of us do. For most “normal” things the Rosetta 2 emulation is perfectly fine (and completely transparent); I don’t really notice a performance drop, but I haven’t tried any heavier processing in emulated mode – maybe there’s a more noticeable difference then.
arm64 virtual machines (eg, arm64 Ubuntu) work nicely under UTM, with quite good performance – but if you’re thinking about Intel (x86, amd64) virtual machines, just don’t. Intolerably slow; flashbacks to 2001-standard virtualization. Note that (at least) Debian/Ubuntu and Windows have arm64 versions available, but there’s not so much third-party software support for those operating systems on that architecture. This is certainly the biggest downside.
Finally got my hands on a new MacBook Pro. Martin’s loop runs faster than on my old Intel machine, 13.5 s (12.6 s user) vs 17.5 s (16.0 s user). (Compiled with the x86_64 compiler as discussed above.)
Using the information on this post I was able to build LCModel on Ubuntu 20.04, and get a successful test with some of the test data supplied here. However, when I tried to utilize some data from our lab (single voxel Siemens 3T rda files) I kept getting the diagnostic message “zerovx 5.” Despite troubleshooting on my end I have yet to find a way to get an output from LCModel using my data. I was wondering if this has to do with the data preprocessing? Are there some conversions we have to do to make it readable for the program? The files work great in TARQUIN.
Are these single-average RDAs (i.e. you have a whole folder of them per measurement) or are they averaged on the scanner into a single file? The error message indicates that LCModel can’t find the time-domain signal inside the files.
You could try loading, processing and exporting them (in LCModel .raw format) with FID-A/Osprey, FSL-MRS, or spant - I believe all of these have import/export functions for RDA and .raw.
Sorry to resurrect an old thread but I had a question about compiling LCModel on M1. I’ve been able to successfully run LCModel on my M1 Macbook-pro, Monterey 12.1, using two methods:
gfortran 12.1 for Monterey (ARM), using a locally-compiled LCModel version
gfortran 10.2 for Catalina (Intel), using the Catalina-compiled LCModel binary in the Osprey repo
For the compilation I followed @admin’s suggestion:
gfortran -c -fno-backslash -fno-f2c -O3 -fall-intrinsics -std=legacy -Wuninitialized -ffpe-summary=none LCModel.f
gfortran LCModel.o -o lcmodel
I ran some of our 7T SV-MRS data through both compilations of LCModel (via the Osprey wrapper with a common basis set, control file, and input data – no pre-processing) and found some numerical differences in the results. About 80% of the amplitude estimates agreed perfectly, and a further 17% of the remaining deviations were less than 1%, so this was relatively minor for most fits. However, there are some larger differences for the more temperamental basis functions (NAAG, Ala, Glc, Gly, GPC, sI, Lac, particularly for Lipids & MMs).
Is anyone aware of a fortran/architecture issue that could cause numerical discrepancies? Or perhaps an issue with my attempt at installation? As I said, the differences are relatively minor, but I’d feel more comfortable if I had an explanation, and I’m definitely no expert in Fortran or chip architecture…
Is it possible to compile natively with GFortran 12.1 now? Great! I’ll try this for myself a bit later.
It’s best to start with the same gfortran version and identical compiler flags on each architecture before digging too deeply into any possible lower-level issues, but regarding architectural differences: basically, yes. The “floating point” representation of numbers can often lead to very slight rounding errors. Even in cases where the actual floating point representation of the numbers is similar across architectures, the optimized order of low-level arithmetic operations on a particular compiled architecture may differ – consider for example (x*x)*(x*x) vs ((x*x)*x)*x; equivalent, but not identical. This will affect how the rounding errors accumulate over successive operations, so could in principle lead to rather more substantial divergence than one would anticipate for a seemingly insignificant rounding error.
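As a toy illustration of how evaluation order alone can change a result (nothing LCModel-specific; awk arithmetic is IEEE double precision):

```shell
awk 'BEGIN {
  a = 1e16; b = -1e16; c = 0.5
  printf "(a+b)+c = %.1f\n", (a + b) + c   # 0.5
  printf "a+(b+c) = %.1f\n", a + (b + c)   # 0.0 -- c is absorbed, because doubles
                                           # near 1e16 are spaced 2.0 apart
}'
```

Mathematically identical sums; the rounding happens at different steps, so the answers differ by the full value of c.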
That said, the riskiest optimizations in that regard (eg, those associated with the -ffast-math option) are disabled by default – and should stay that way if cross-platform equivalence is desirable. To improve comparability, you could try reducing or removing the optimization level flag (-O3) or disabling some of the other floating-point optimizations (mostly listed halfway down this page; -ffp-contract=off would be the next thing to try).
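Concretely, a comparability test might look like this (the earlier flags from this thread, with the optimization level dropped and contraction disabled; a suggestion to try, not a recommended production build):

```shell
# Recompile identically on both machines with optimizations dialed down
gfortran -c -fno-backslash -fno-f2c -O0 -ffp-contract=off -fall-intrinsics \
  -std=legacy -Wuninitialized -ffpe-summary=none LCModel.f
gfortran LCModel.o -o lcmodel
```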
I also note that the -ffpe-summary=none option is explicitly disabling the reporting of overflow/underflow conditions, which might be a factor of concern here. The fact that you’re seeing larger differences does raise some doubts about the stability of the model in the context of subtle numerical differences…
Thanks so much for the in-depth response, that’s really helpful. I may look into disabling the floating point optimizations, and see if that makes a difference.
I found the installer following Mark’s link. Seems they added the arm architecture for 12.1 earlier this year!
Do you think I should omit this option for future installations?
This is what worried me. I wouldn’t have expected to see much difference, if any.
This won’t change the actual behavior, just the reporting of it. In several of the earlier test-cases in this thread (even on a single architecture with generally consistent output) there would be quite a number of warnings issued when this option was omitted…
For most users, this won’t be useful information – but for trying to understand where things may be going awry in this case, it may (or may not) be key.
Hi,
I did try this on the Intel Mac and the resulting LCModel works, but poorly. Most spectra that do get a proper fit with the pre-compiled version on Ubuntu 22 fail in the version compiled on the Intel Mac.
Now, with a new MacBook M2, I had another go at it. The result was functionally the same disappointing executable I had on the Intel-based MacBook. There are differences: static library linking does not work, but if you leave that out it appears to compile OK (though the executable still does not function properly). Here is what I tried:
Or not using the patched version:
gfortran -c -fno-backslash -fno-f2c -ffast-math -O2 -ffpe-summary=none -Wuninitialized -std=legacy -fall-intrinsics LCModel.f
# The next step will fail because a linked library is not available as a static build;
# the -s option is also ignored
gfortran -s -static LCModel.o -o lcmodel
# New attempt
gfortran LCModel.o -o lcmodel
# This produced an executable
mkdir -p ~/.lcmodel/bin
cp lcmodel ~/.lcmodel/bin
# The resulting lcmodel does not give a good result with a control file that does
# work with the version compiled for Linux (Ubuntu 22)
~/.lcmodel/bin/lcmodel < [some_good_spectrum].control
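One generic sanity check when the same source behaves differently across machines (hypothetical paths; adjust to wherever your binary actually lives): confirm which architecture each binary was actually built for.

```shell
# Report the binary's target architecture
file ~/.lcmodel/bin/lcmodel
# On an Apple-silicon Mac, expect "Mach-O 64-bit executable arm64" for a native
# build, or "... x86_64" for one that runs under Rosetta 2 translation
```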
Maybe some day we can get this AI code converter to work for this rather large chunk of Fortran:
That would confirm that AI can mean something for MR spectroscopy.
BTW, there are a lot of code warnings in the compilation. I looked at this in Visual Studio Code with a Fortran extension. That helps a bit in reading the code, but trying to resolve all the parameter-type compilation warnings is a daunting task.
I hope someone likes the challenge and brings LCModel to a state compatible with the Mac M2 processor.
That’s interesting; most of the numeric differences I’m aware of have been rather subtle. If you’re seeing completely different behaviour, I’d suspect some difference in the control file or basis set.
Could you elaborate on how it’s failing – poor fit, or failing with an error message? If you’re not getting the expected output files, you could also look for generated text files called, eg, “fort.7”, “fort.9” which might contain some more diagnostic information (in most cases the LCModel manual elaborates a little on the somewhat cryptic status codes therein)
You might also want to check for “unusual” (non alpha-numeric) characters in your file paths (eg, within the control file), which might be handled differently across platforms.