Building LCModel

I had an older version (8.2) of gfortran lying around and managed to get it to compile on my 9-year-old MacBook Pro with a combination of Martin’s and Georg’s instructions:

gfortran -c -fno-backslash -fno-f2c -O2 -ffpe-summary=none -std=legacy -Wuninitialized -fall-intrinsics LCModel_patched.f

gfortran LCModel_patched.o -o lcmodel

since there aren’t statically linked versions of the libraries. The -fallow-argument-mismatch flag isn’t an option for the older compiler (it only appeared in gfortran 10).
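For anyone scripting their build, here’s a small sketch of how the flag choice splits by compiler version (the helper function is made up for illustration; the threshold is gfortran 10, where argument mismatches became hard errors):

```python
# Sketch: pick gfortran flags based on the compiler's major version.
# -fallow-argument-mismatch only exists from gfortran 10 onwards, so
# older compilers (like the 8.2 used above) must omit it.

BASE_FLAGS = ["-fno-backslash", "-fno-f2c", "-O2", "-ffpe-summary=none",
              "-std=legacy", "-Wuninitialized", "-fall-intrinsics"]

def flags_for(version):
    """Return a flag list for a gfortran version string like '8.2.0'."""
    major = int(version.split(".")[0])
    flags = list(BASE_FLAGS)
    if major >= 10:
        # Newer gfortran treats argument mismatches in legacy code as
        # errors; this flag downgrades them to warnings.
        flags.append("-fallow-argument-mismatch")
    return flags
```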

@martin, your for loop takes just over 22 seconds on my 2.3 GHz i7 MacBook Pro. I tried compiling with and without the fast-math option. With it I get the same concs as your Ubuntu build; without it I get the same concs as your reference build.

Cheers,
Dave

out2.pdf (24.7 KB)

3 Likes

Thanks @dave - I had a suspicion changing one of the compiler flags might fix the failing internal test and removing -ffast-math seems to do the trick on my system.

So hopefully no more need for the patch, just:

gfortran -c -fno-backslash -fno-f2c -O3 -ffpe-summary=none -Wuninitialized -std=legacy -fall-intrinsics LCModel.f
gfortran LCModel.o -o lcmodel

works for me. Removing the fast-math flag doesn’t seem to hurt performance and O3 optimisation got the loop benchmark down to 17 seconds :grinning:

Martin

2 Likes

Thanks both, this is awesome. Indeed this removes the need to patch the .f file, and the exact same list of commands can be run for Ubuntu and Catalina. I’m updating my post above accordingly.

1 Like

Thanks MRS nerds!! :smile: I will see if I can wrangle some time this week to give it a try myself.

3 Likes

LCMullins! Good luck!

1 Like

So I got a bit suspicious of all the compiler flags provided in the example Makefile after the problems with -ffast-math. A more minimal build command is simply:

gfortran -ffpe-summary=none -std=legacy -O3 LCModel.f -o lcmodel

My Windows box had gfortran (8.3.0) installed for R development and:

gfortran.exe -ffpe-summary=none -std=legacy -O3 LCModel.f -o LCModel.exe

works fine. Actually better than fine: the benchmark for the Windows build from the LCModel website took 22 seconds, whereas the locally compiled version took 12.5 s on an AMD Ryzen 5 3600. It’s possible the “official” build works better on Intel.
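For anyone wanting to reproduce this kind of comparison, a minimal timing harness might look like the sketch below (the command and run count are placeholders to swap for your own lcmodel invocation):

```python
import subprocess
import time

def bench(cmd, runs=10):
    """Time `runs` sequential invocations of `cmd`; return total seconds."""
    t0 = time.perf_counter()
    for _ in range(runs):
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - t0

# Example (placeholder command - point this at your lcmodel build):
# total = bench(["./lcmodel"], runs=10)
# print(f"{total:.2f} s total, {total / 10:.2f} s per run")
```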

So close to a sub-second analysis!

Martin

2 Likes

Thanks everyone for sharing these instructions. BTW… I registered lcmodel · GitLab in case it’s useful for hosting the lcmodel code going forward.

2 Likes

Hi Everyone,

Just another data point. I downloaded the source and compiled it on Win10 with gfortran 7.3 (which I had already installed for another project), using Martin’s shorter compile line:

gfortran.exe -ffpe-summary=none -std=legacy -O3 LCModel.f -o LCModel.exe

Compile time was about 29 sec on an older Intel i7-6900K 3.2 GHz. Used Martin’s test data.

Quick aside … this was the first time I ever ran LCModel in my life! Really.

Did a quick loop x10 and got 19.01 sec processing time. Results are similar to the reference and Ubuntu builds, though I haven’t looked extremely closely at all the table entries.

Best,

Brian.

PS. Does anyone know what brought about this change in events with Steven?

2 Likes

This one’s for Martin … I took the afternoon to geek out a bit. I knocked out this bit of Python code and managed to process 20 data files in 12.39 seconds, for an average processing time of 0.62 sec! Admittedly, it was just your test data copied to 20 different file names … but still nice to see.

Enjoy,

Brian.

import time
import multiprocessing
import subprocess

def calculate(idx):

    # Build a minimal LCModel control file in memory for this dataset
    lines = []
    lines.append("$LCMODL")
    lines.append("key=210387309")
    lines.append("nunfil=1024")
    lines.append("deltat=5e-04")
    lines.append("hzpppm=127.786142")
    lines.append("filbas='3t.basis'")
    lines.append("filraw='data"+idx+".raw'")
    lines.append("filps='out"+idx+".ps'")
    lines.append("$END")
    msg = '\n'.join(lines)
    msg = msg.encode('utf-8')

    # Pipe the control file to LCModel via stdin
    proc = subprocess.Popen(
        ['LCModel.exe'],
        shell=True,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout_value, stderr_value = proc.communicate(msg)
    return idx+'_done'


def pipe_lcm_multi():

    t0 = time.time()

    # None = one worker process per CPU core
    pool = multiprocessing.Pool(None)
    tasks = [str(i) for i in range(20)]
    results = []
    r = pool.map_async(calculate, tasks, callback=results.append)
    r.wait() # Wait on the results
    print(results)

    t1 = time.time()
    print('time = ', str(t1 - t0))


#--------------------------------------------------------------------
# Test Code

if __name__ == '__main__':
    """
    Copied data.raw to data1.raw, data2.raw ... data19.raw
    and added control.file values into the string list above

    Results on i7-6900k, 8 cores, 16 threads was 12.39 sec
    for 20 files, or 0.62 sec each
    """
    pipe_lcm_multi()

2 Likes

Thanks @bsoher! Could you bump the runs up to 64 and run again on an AMD 64-core Threadripper 3990X for me? :wink:

Compared to the historic cost of the license, a £3.5k desktop CPU is not as crazy as it sounds for LCModel users.

Martin, thanks very much for the build instructions.

Two further comments:

On line 2500 of lcmodel.f, add a line reading go to 200 to bypass the license check altogether and obviate the need to add the master key to your .control files.
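If you’d rather apply that one-line patch with a script than an editor, something along these lines works (a sketch; it assumes the insertion point quoted above and fixed-form Fortran’s statement-starts-at-column-7 layout):

```python
def insert_line(path, lineno, text):
    """Insert `text` so that it becomes line `lineno` of the file."""
    with open(path) as f:
        lines = f.readlines()
    lines.insert(lineno - 1, text + "\n")
    with open(path, "w") as f:
        f.writelines(lines)

# Fixed-form Fortran statements begin at column 7, hence the leading spaces.
# insert_line("lcmodel.f", 2500, "      go to 200")
```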

Regarding numeric differences, I just ran the full public Big GABA dataset through a local build (Debian 9.1, gfortran 6.3.0-18+deb9u1), I’d say it agrees pretty well in this instance :slight_smile: Of course this may well differ according to library versions…

(N=222, total processing time 8.65 seconds with a bit of parallelisation: it’s fast)

[image: lcm]

3 Likes

The other part of that comparison… [image: lcm_ba]

2 Likes

Hi @alex,

My first rough check for consistency with the “official” build on the LCModel website was with the -ffast-math flag and I saw some minor differences. @dave later spotted that removing this flag gave better agreement with the official build.

Thanks for doing a more rigorous check. Did you use the -ffast-math flag?

Martin

In this instance, the outcome was identical with and without -ffast-math. That said: these are simple mega-press datasets quantified with the mega-press-3 sptype (implies nobase: no baseline model) and nsimul=0, so it’s also possible that the minor differences arise in one of those areas…

1 Like

So I finally got around to building LCModel on my Mac. It seems to have worked fine (once I modified Martin’s test control file appropriately).

I will have to add the LCModel directory to my PATH and do a few more things to optimise its usage - but underway at least.
Next question, though:
Does anyone know where I can find a tool to convert Philips .SDAT/.SPAR data to LCModel’s RAW format?
I have some old data previously run using LCModel, and want to see how this build compares. (I saved the RAW files for the WS data, but not the USW files.)
Cheers
Paul.

Hey Paul,

In R you could do:

library(spant)
read_mrs("in_file.SPAR") %>% write_mrs("out_file.RAW", format = "lcm_raw")

2 Likes

I managed to compile the code in Visual Studio 2017 with the Intel Fortran compiler for the Windows 10 platform. I compiled for both x86 and x64, but saw no noticeable difference in performance (I too got almost instantaneous processing times with the 32-bit build).

I could share the VS project folder if anyone’s interested.

1 Like

Hi Paul,

FID-A works as well using the io_loadspec_sdat and io_lcmwrite functions. Osprey uses them to (optionally) save the final pre-processed data.

Good luck!
Best,
Georg

1 Like

@admin Thanks so much for these useful instructions! I ran Osprey-generated files on my macOS Big Sur and got results identical to those from the official build.

3 Likes

Has anyone had any luck getting the makebasis command to run? I’m on a Mac and can run data using a control file, but haven’t been able to do anything else.