Installation guide
******************


Introduction to building GROMACS
================================

These instructions pertain to building GROMACS 2018. You might also
want to check the up-to-date installation instructions.


Quick and dirty installation
----------------------------

1. Get the latest version of your C and C++ compilers.

2. Check that you have CMake version 3.4.3 or later.

3. Get and unpack the latest version of the GROMACS tarball.

4. Make a separate build directory and change to it.

5. Run "cmake" with the path to the source as an argument

6. Run "make", "make check", and "make install"

7. Source "GMXRC" to get access to GROMACS

Or, as a sequence of commands to execute:

   tar xfz gromacs-2018.tar.gz
   cd gromacs-2018
   mkdir build
   cd build
   cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
   make
   make check
   sudo make install
   source /usr/local/gromacs/bin/GMXRC

This will download and build first the prerequisite FFT library
followed by GROMACS. If you already have FFTW installed, you can
remove that argument to "cmake". Overall, this build of GROMACS will
be correct and reasonably fast on the machine upon which "cmake" ran.
On another machine, it may not run, or may not run fast. If you want
to get the maximum value for your hardware with GROMACS, you will have
to read further. Sadly, the interactions of hardware, libraries, and
compilers are only going to continue to get more complex.


Quick and dirty cluster installation
------------------------------------

On a cluster where users are expected to be running across multiple
nodes using MPI, make one installation similar to the above, and
another using an MPI wrapper compiler that builds only mdrun, because
that is the only component of GROMACS that uses MPI.
The latter will install a single simulation engine binary, i.e.
"mdrun_mpi" when the default suffix is used. Hence it is safe and
common practice to install this into the same location where the non-
MPI build is installed.
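
For example, the two builds might look like the following, each run
from its own build directory (the installation prefix and the MPI
wrapper compiler names are illustrative and will differ between
clusters):

   # Full build with thread-MPI, tools included
   cmake .. -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2018
   make -j 4 && make install

   # MPI-enabled mdrun-only build, installed to the same prefix
   CC=mpicc CXX=mpicxx cmake .. -DGMX_MPI=ON -DGMX_BUILD_MDRUN_ONLY=ON \
         -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2018
   make -j 4 && make install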


Typical installation
--------------------

As above, and with further details below, but you should consider
using the following CMake options with the appropriate value instead
of "xxx" :

* "-DCMAKE_C_COMPILER=xxx" equal to the name of the C99 Compiler you
  wish to use (or the environment variable "CC")

* "-DCMAKE_CXX_COMPILER=xxx" equal to the name of the C++98 compiler
  you wish to use (or the environment variable "CXX")

* "-DGMX_MPI=on" to build using MPI support (generally good to
  combine with building only mdrun)

* "-DGMX_GPU=on" to build using nvcc to run using NVIDIA CUDA GPU
  acceleration or an OpenCL GPU

* "-DGMX_USE_OPENCL=on" to build with OpenCL support enabled.
  "GMX_GPU" must also be set.

* "-DGMX_SIMD=xxx" to specify the level of SIMD support of the node
  on which GROMACS will run

* "-DGMX_BUILD_MDRUN_ONLY=on" for building only mdrun, e.g. for
  compute cluster back-end nodes

* "-DGMX_DOUBLE=on" to build GROMACS in double precision (slower,
  and not normally useful)

* "-DCMAKE_PREFIX_PATH=xxx" to add a non-standard location for CMake
  to search for libraries, headers or programs

* "-DCMAKE_INSTALL_PREFIX=xxx" to install GROMACS to a non-standard
  location (default "/usr/local/gromacs")

* "-DBUILD_SHARED_LIBS=off" to turn off the building of shared
  libraries to help with static linking

* "-DGMX_FFT_LIBRARY=xxx" to select whether to use "fftw", "mkl" or
  "fftpack" libraries for FFT support

* "-DCMAKE_BUILD_TYPE=Debug" to build GROMACS in debug mode


Building older versions
-----------------------

Installation instructions for old GROMACS versions can be found at the
GROMACS documentation page.


Prerequisites
=============


Platform
--------

GROMACS can be compiled for many operating systems and architectures.
These include any distribution of Linux, Mac OS X or Windows, and
architectures including x86, AMD64/x86-64, several PowerPC including
POWER8, ARM v7, ARM v8, and SPARC VIII.


Compiler
--------

GROMACS can be compiled on any platform with ANSI C99 and C++11
compilers, and their respective standard C/C++ libraries. Good
performance on an OS and architecture requires choosing a good
compiler. We recommend gcc, because it is free, widely available and
frequently provides the best performance.

You should strive to use the most recent version of your compiler.
Since we require full C++11 support, the minimum supported compiler
versions are

* GNU (gcc) 4.8.1

* Intel (icc) 15.0

* LLVM (clang) 3.3

* Microsoft (MSVC) 2015

Other compilers may work (Cray, Pathscale, older clang) but do not
offer competitive performance. We recommend against PGI because the
performance with C++ is very bad.

The xlc compiler is not supported and has not been tested on POWER
architectures for GROMACS-2018. We recommend using the gcc compiler
instead, as it is being extensively tested.

You may also need the most recent version of other compiler toolchain
components beside the compiler itself (e.g. assembler or linker);
these are often shipped by your OS distribution’s binutils package.

C++11 support requires adequate support in both the compiler and the
C++ library. The gcc and MSVC compilers include their own standard
libraries and require no further configuration. For configuration of
other compilers, read on.

On Linux, both the Intel and clang compilers use the libstdc++ which
comes with gcc as the default C++ library. For GROMACS, we require the
compiler to support libstdc++ version 4.8.1 or higher. To select a
particular libstdc++ library, use:

* For Intel: "-DGMX_STDLIB_CXX_FLAGS=-gcc-name=/path/to/gcc/binary"
  or make sure that the correct gcc version is first in path (e.g. by
  loading the gcc module). It can also be useful to add
  "-DCMAKE_CXX_LINK_FLAGS="-Wl,-rpath,/path/to/gcc/lib64
  -L/path/to/gcc/lib64"" to ensure linking works correctly.

* For clang: "-DCMAKE_CXX_FLAGS=--gcc-
  toolchain=/path/to/gcc/folder". This folder should contain
  "include/c++".

On Windows with the Intel compiler, the MSVC standard library is used,
and at least MSVC 2015 is required. Load the environment variables with
vcvarsall.bat.

To build with any compiler and clang’s libcxx standard library, use
"-DGMX_STDLIB_CXX_FLAGS=-stdlib=libc++
-DGMX_STDLIB_LIBRARIES='-lc++abi -lc++'".

If you are running on Mac OS X, the best option is the Intel compiler.
Both clang and gcc will work, but they produce lower performance and
each have some shortcomings. clang 3.8 now offers support for OpenMP,
and so may provide decent performance.

For all non-x86 platforms, your best option is typically to use gcc or
the vendor’s default or recommended compiler, and check for
specialized information below.

For updated versions of gcc to add to your Linux OS, see

* Ubuntu: Ubuntu toolchain ppa page

* RHEL/CentOS: EPEL page or the RedHat Developer Toolset


Compiling with parallelization options
--------------------------------------

For maximum performance you will need to examine how you will use
GROMACS and what hardware you plan to run on. Often OpenMP parallelism
is an advantage for GROMACS, but support for this is generally built
into your compiler and detected automatically.


GPU support
~~~~~~~~~~~

GROMACS has excellent support for NVIDIA GPUs supported via CUDA. On
Linux, the NVIDIA CUDA toolkit with minimum version 6.5 is required, and
the latest version is strongly encouraged. Using the Intel or Microsoft
MSVC compilers requires CUDA version 7.0 and 8.0, respectively. NVIDIA GPUs
with at least NVIDIA compute capability 2.0 are required. You are
strongly recommended to get the latest CUDA version and driver that
supports your hardware, but beware of possible performance regressions
in newer CUDA versions on older hardware. Note that compute capability
2.0 (Fermi) devices are no longer supported from CUDA 9.0 and later.
While some CUDA compilers (nvcc) might not officially support recent
versions of gcc as the back-end compiler, we still recommend that you
at least use a gcc version recent enough to get the best SIMD support
for your CPU, since GROMACS always runs some code on the CPU. It is
most reliable to use the same C++ compiler version for GROMACS code as
used as the host compiler for nvcc.

To make it possible to use other accelerators, GROMACS also includes
OpenCL support. The minimum OpenCL version required is 1.1. The
current OpenCL implementation is recommended for use with GCN-based
AMD GPUs; on Linux, we recommend the ROCm runtime. It is also supported
with NVIDIA GPUs, but using the latest NVIDIA driver (which includes
the NVIDIA OpenCL runtime) is recommended. Also note that there are
performance limitations (inherent to the NVIDIA OpenCL runtime). It is
not possible to configure both CUDA and OpenCL support in the same
version of GROMACS.


MPI support
~~~~~~~~~~~

GROMACS can run in parallel on multiple cores of a single workstation
using its built-in thread-MPI. No user action is required in order to
enable this.

If you wish to run in parallel on multiple machines across a network,
you will need to have

* an MPI library installed that supports the MPI 1.3 standard, and

* wrapper compilers that will compile code using that library.

The GROMACS team recommends OpenMPI version 1.6 (or higher), MPICH
version 1.4.1 (or higher), or your hardware vendor’s MPI installation.
The most recent version of either of these is likely to be the best.
More specialized networks might depend on accelerations only available
in the vendor’s library. LAM-MPI might work, but since it has been
deprecated for years, it is not supported.
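
For example, an MPI-enabled mdrun-only build is typically configured
through the wrapper compilers like this (the wrapper names vary
between MPI implementations):

   CC=mpicc CXX=mpicxx cmake .. -DGMX_MPI=ON -DGMX_BUILD_MDRUN_ONLY=ON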


CMake
-----

GROMACS builds with the CMake build system, requiring at least version
3.4.3. You can check whether CMake is installed, and what version it
is, with "cmake --version". If you need to install CMake, then first
check whether your platform’s package management system provides a
suitable version, or visit the CMake installation page for pre-
compiled binaries, source code and installation instructions. The
GROMACS team recommends you install the most recent version of CMake
you can.


Fast Fourier Transform library
------------------------------

Many simulations in GROMACS make extensive use of fast Fourier
transforms, and a software library to perform these is always
required. We recommend FFTW (version 3 or higher only) or Intel MKL.
The choice of library can be set with "cmake
-DGMX_FFT_LIBRARY=<name>", where "<name>" is one of "fftw", "mkl", or
"fftpack". FFTPACK is bundled with GROMACS as a fallback, and is
acceptable if simulation performance is not a priority. When choosing
MKL, GROMACS will also use MKL for BLAS and LAPACK (see linear algebra
libraries). Generally, there is no advantage in using MKL with
GROMACS, and FFTW is often faster. With PME GPU offload support using
CUDA, a GPU-based FFT library is required. The CUDA-based GPU FFT
library cuFFT is part of the CUDA toolkit (required for all CUDA
builds) and therefore no additional software component is needed when
building with CUDA GPU acceleration.


Using FFTW
~~~~~~~~~~

FFTW is likely to be available for your platform via its package
management system, but there can be compatibility and significant
performance issues associated with these packages. In particular,
GROMACS simulations are normally run in “mixed” floating-point
precision, which is suited for the use of single precision in FFTW.
The default FFTW package is normally in double precision, and good
compiler options to use for FFTW when linked to GROMACS may not have
been used. Accordingly, the GROMACS team recommends either

* that you permit the GROMACS installation to download and build
  FFTW from source automatically for you (use "cmake
  -DGMX_BUILD_OWN_FFTW=ON"), or

* that you build FFTW from the source code.

If you build FFTW from source yourself, get the most recent version
and follow the FFTW installation guide. Choose the precision for FFTW
(i.e. single/float vs. double) to match whether you will later use
mixed or double precision for GROMACS. There is no need to compile
FFTW with threading or MPI support, but it does no harm. On x86
hardware, compile with *both* "--enable-sse2" and "--enable-avx" for
FFTW-3.3.4 and earlier. From FFTW-3.3.5, you should also add
"--enable-avx2". On Intel processors supporting 512-wide AVX,
including KNL, also add "--enable-avx512". FFTW will create a fat
library with codelets for all different instruction sets, and pick the
fastest supported one at runtime. On ARM architectures with NEON SIMD
support and IBM Power8 and later, you definitely want version 3.3.5 or
later, and to compile it with "--enable-neon" and "--enable-vsx",
respectively, for SIMD support. If you are using a Cray, there is a
special modified (commercial) version of FFTs using the FFTW interface
which can be slightly faster.
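
As an illustration, a mixed-precision FFTW build on x86 following the
advice above might be configured as follows (the installation prefix
is hypothetical):

   ./configure --prefix=/opt/fftw-3.3.7 --enable-float \
               --enable-sse2 --enable-avx --enable-avx2
   make -j 4 && make install

GROMACS can then be pointed at this installation with, for example,
"-DCMAKE_PREFIX_PATH=/opt/fftw-3.3.7".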


Using MKL
~~~~~~~~~

Use MKL bundled with Intel compilers by setting up the compiler
environment, e.g., through "source /path/to/compilervars.sh intel64"
or similar before running CMake including setting
"-DGMX_FFT_LIBRARY=mkl".

If you need to customize this further, use

   cmake -DGMX_FFT_LIBRARY=mkl \
         -DMKL_LIBRARIES="/full/path/to/libone.so;/full/path/to/libtwo.so" \
         -DMKL_INCLUDE_DIR="/full/path/to/mkl/include"

The full list and order(!) of libraries you require are found in
Intel’s MKL documentation for your system.


Other optional build components
-------------------------------

* Run-time detection of hardware capabilities can be improved by
  linking with hwloc, which is automatically enabled if detected.

* Hardware-optimized BLAS and LAPACK libraries are useful for a few
  of the GROMACS utilities focused on normal modes and matrix
  manipulation, but they do not provide any benefits for normal
  simulations. Configuring these is discussed at linear algebra
  libraries.

* The built-in GROMACS trajectory viewer "gmx view" requires X11 and
  Motif/Lesstif libraries and header files. You may prefer to use
  third-party software for visualization, such as VMD or PyMol.

* An external TNG library for trajectory-file handling can be used
  by setting "-DGMX_EXTERNAL_TNG=yes", but TNG 1.7.10 is bundled in
  the GROMACS source already.

* An external lmfit library for Levenberg-Marquardt curve fitting
  can be used by setting "-DGMX_EXTERNAL_LMFIT=yes", but lmfit 6.1 is
  bundled in the GROMACS source already.

* zlib is used by TNG for compressing some kinds of trajectory data

* Building the GROMACS documentation is optional, and requires
  ImageMagick, pdflatex, bibtex, doxygen, python 2.7, sphinx 1.4.1,
  and pygments.

* The GROMACS utility programs often write data files in formats
  suitable for the Grace plotting tool, but it is straightforward to
  use these files in other plotting programs, too.


Doing a build of GROMACS
========================

This section will cover a general build of GROMACS with CMake, but it
is not an exhaustive discussion of how to use CMake. There are many
resources available on the web, which we suggest you search for when
you encounter problems not covered here. The material below applies
specifically to builds on Unix-like systems, including Linux, and Mac
OS X. For other platforms, see the specialist instructions below.


Configuring with CMake
----------------------

CMake will run many tests on your system and do its best to work out
how to build GROMACS for you. If your build machine is the same as
your target machine, then you can be sure that the defaults and
detection will be pretty good. However, if you want to control aspects
of the build, or you are compiling on a cluster head node for back-end
nodes with a different architecture, there are a few things you should
consider specifying.

The best way to use CMake to configure GROMACS is to do an “out-of-
source” build, by making another directory from which you will run
CMake. This can be outside the source directory, or a subdirectory of
it. It also means you can never corrupt your source code by trying to
build it! So, the only required argument on the CMake command line is
the name of the directory containing the "CMakeLists.txt" file of the
code you want to build. For example, download the source tarball and
use

   tar xfz gromacs-2018.tar.gz
   cd gromacs-2018
   mkdir build-gromacs
   cd build-gromacs
   cmake ..

You will see "cmake" report a sequence of results of tests and
detections done by the GROMACS build system. These are written to the
"cmake" cache, kept in "CMakeCache.txt". You can edit this file by
hand, but this is not recommended because you could make a mistake.
You should not attempt to move or copy this file to do another build,
because file paths are hard-coded within it. If you mess things up,
just delete this file and start again with "cmake".

If there is a serious problem detected at this stage, then you will
see a fatal error and some suggestions for how to overcome it. If you
are not sure how to deal with that, please start by searching on the
web (most computer problems already have known solutions!) and then
consult the gmx-users mailing list. There are also informational
warnings that you might like to take on board or not. Piping the
output of "cmake" through "less" or "tee" can be useful, too.

Once "cmake" returns, you can see all the settings that were chosen
and information about them by using e.g. the curses interface

   ccmake ..

You can actually use "ccmake" (available on most Unix platforms)
directly in the first step, but then most of the status messages will
merely blink in the lower part of the terminal rather than be written
to standard output. Most platforms including Linux, Windows, and Mac
OS X even have native graphical user interfaces for "cmake", and it
can create project files for almost any build environment you want
(including Visual Studio or Xcode). Check out running CMake for
general advice on what you are seeing and how to navigate and change
things. The settings you might normally want to change are already
presented. You may make changes, then re-configure (using "c"), so
that it gets a chance to make changes that depend on yours and perform
more checking. It may take several configuration passes to reach the
desired configuration, in particular if you need to resolve errors.

When you have reached the desired configuration with "ccmake", the
build system can be generated by pressing "g".  This requires that the
previous configuration pass did not reveal any additional settings (if
it did, you need to configure once more with "c").  With "cmake", the
build system is generated after each pass that does not produce
errors.

You cannot attempt to change compilers after the initial run of
"cmake". If you need to change, clean up, and start again.


Where to install GROMACS
~~~~~~~~~~~~~~~~~~~~~~~~

GROMACS is installed in the directory to which "CMAKE_INSTALL_PREFIX"
points. It may not be the source directory or the build directory.
You require write permissions to this directory. Thus, without super-
user privileges, "CMAKE_INSTALL_PREFIX" will have to be within your
home directory. Even if you do have super-user privileges, you should
use them only for the installation phase, and never for configuring,
building, or running GROMACS!


Using CMake command-line options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once you become comfortable with setting and changing options, you may
know in advance how you will configure GROMACS. If so, you can speed
things up by invoking "cmake" and passing the various options at once
on the command line. This can be done by setting cache variables at the
cmake invocation using "-DOPTION=VALUE". Note that some environment
variables are also taken into account, in particular variables like
"CC" and "CXX".

For example, the following command line

   cmake .. -DGMX_GPU=ON -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs

can be used to build with CUDA GPUs, MPI and install in a custom
location. You can even save that in a shell script to make it even
easier next time. You can also do this kind of thing with "ccmake",
but you should avoid this, because the options set with "-D" will not
be able to be changed interactively in that run of "ccmake".


SIMD support
~~~~~~~~~~~~

GROMACS has extensive support for detecting and using the SIMD
capabilities of many modern HPC CPU architectures. If you are building
GROMACS on the same hardware you will run it on, then you don’t need
to read more about this, unless you are getting configuration warnings
you do not understand. By default, the GROMACS build system will
detect the SIMD instruction set supported by the CPU architecture (on
which the configuring is done), and thus pick the best available SIMD
parallelization supported by GROMACS. The build system will also check
that the compiler and linker used also support the selected SIMD
instruction set and issue a fatal error if they do not.

Valid values are listed below, and the applicable value with the
largest number in the list is generally the one you should choose. In
most cases, choosing an inappropriate higher number will lead to
compiling a binary that will not run. However, on a number of
processor architectures choosing the highest supported value can lead
to performance loss, e.g. on Intel Skylake-X/SP and AMD Zen.

1. "None" For use only on an architecture either lacking SIMD, or
   to which GROMACS has not yet been ported and none of the options
   below are applicable.

2. "SSE2" This SIMD instruction set was introduced in Intel
   processors in 2001, and AMD in 2003. Essentially all x86 machines
   in existence have this, so it might be a good choice if you need to
   support dinosaur x86 computers too.

3. "SSE4.1" Present in all Intel core processors since 2007, but
   notably not in AMD Magny-Cours. Still, almost all recent processors
   support this, so this can also be considered a good baseline if you
   are content with slow simulations and prefer portability between
   reasonably modern processors.

4. "AVX_128_FMA" AMD Bulldozer, Piledriver (and later Family 15h)
   processors have this.

5. "AVX_256" Intel processors since Sandy Bridge (2011). While this
   code will work on the AMD Bulldozer and Piledriver processors, it
   is significantly less efficient than the "AVX_128_FMA" choice above
   - do not be fooled to assume that 256 is better than 128 in this
   case.

6. "AVX2_128" AMD Zen microarchitecture processors (2017); it will
   enable AVX2 with 3-way fused multiply-add instructions. While the
   Zen microarchitecture does support 256-bit AVX2 instructions, hence
   "AVX2_256" is also supported, 128-bit will generally be faster, in
   particular when the non-bonded tasks run on the CPU – hence the
   default "AVX2_128". With GPU offload however "AVX2_256" can be
   faster on Zen processors.

7. "AVX2_256" Present on Intel Haswell (and later) processors
   (2013), and it will also enable Intel 3-way fused multiply-add
   instructions.

8. "AVX_512" Skylake-X desktop and Skylake-SP Xeon processors
   (2017); it will generally be fastest on the higher-end desktop and
   server processors with two 512-bit fused multiply-add units (e.g.
   Core i9 and Xeon Gold). However, certain desktop and server models
   (e.g. Xeon Bronze and Silver) come with only one AVX512 FMA unit
   and therefore on these processors "AVX2_256" is faster (compile-
   and runtime checks try to inform about such cases). Additionally,
   with GPU accelerated runs "AVX2_256" can also be faster on high-end
   Skylake CPUs with both 512-bit FMA units enabled.

9. "AVX_512_KNL" Knights Landing Xeon Phi processors

10. "IBM_QPX" BlueGene/Q A2 cores have this.

11. "Sparc64_HPC_ACE" Fujitsu machines like the K computer have
    this.

12. "IBM_VMX" Power6 and similar Altivec processors have this.

13. "IBM_VSX" Power7, Power8 and later have this.

14. "ARM_NEON" 32-bit ARMv7 with NEON support.

15. "ARM_NEON_ASIMD" 64-bit ARMv8 and later.

The CMake configure system will check that the compiler you have
chosen can target the architecture you have chosen. mdrun will check
further at runtime, so if in doubt, choose the lowest number you think
might work, and see what mdrun says. The configure system also works
around many known issues in many versions of common HPC compilers.
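
For example, when configuring on a build host whose own SIMD level
differs from that of the target nodes, the choice can be made explicit
(the value shown is only an illustration):

   cmake .. -DGMX_SIMD=AVX2_256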

A further "GMX_SIMD=Reference" option exists, which is a special SIMD-
like implementation written in plain C that developers can use when
developing support in GROMACS for new SIMD architectures. It is not
designed for use in production simulations, but if you are using an
architecture with SIMD support to which GROMACS has not yet been
ported, you may wish to try this option instead of the default
"GMX_SIMD=None", as it can often out-perform this when the auto-
vectorization in your compiler does a good job. And post on the
GROMACS mailing lists, because GROMACS can probably be ported for new
SIMD architectures in a few days.


CMake advanced options
~~~~~~~~~~~~~~~~~~~~~~

The options that are displayed in the default view of "ccmake" are
ones that we think a reasonable number of users might want to consider
changing. There are a lot more options available, which you can see by
toggling the advanced mode in "ccmake" on and off with "t". Even
there, most of the variables that you might want to change have a
"CMAKE_" or "GMX_" prefix. There are also some options that will be
visible or not according to whether their preconditions are satisfied.


Helping CMake find the right libraries, headers, or programs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If libraries are installed in non-default locations their location can
be specified using the following variables:

* "CMAKE_INCLUDE_PATH" for header files

* "CMAKE_LIBRARY_PATH" for libraries

* "CMAKE_PREFIX_PATH" for header, libraries and binaries (e.g.
  "/usr/local").

The respective "include", "lib", or "bin" is appended to the path. For
each of these variables, a list of paths can be specified (on Unix,
separated with “:”). These can be set as environment variables like:

   CMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda cmake ..

(assuming "bash" shell). Alternatively, these variables are also
"cmake" options, so they can be set like
"-DCMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda".

The "CC" and "CXX" environment variables are also useful for
indicating to "cmake" which compilers to use. Similarly,
"CFLAGS"/"CXXFLAGS" can be used to pass compiler options, but note
that these will be appended to those set by GROMACS for your build
platform and build type. You can customize some of this with advanced
CMake options such as "CMAKE_C_FLAGS" and its relatives.
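
For example, a particular compiler pair can be selected at the first
"cmake" invocation like this (the compiler names are illustrative):

   CC=gcc-7 CXX=g++-7 cmake ..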

See also the page on CMake environment variables.


CUDA GPU acceleration
~~~~~~~~~~~~~~~~~~~~~

If you have the CUDA Toolkit installed, you can use "cmake" with:

   cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda

(or whichever path has your installation). In some cases, you might
need to specify manually which of your C++ compilers should be used,
e.g. with the advanced option "CUDA_HOST_COMPILER".

To make it possible to get best performance from NVIDIA Tesla and
Quadro GPUs, you should install the GPU Deployment Kit and configure
GROMACS to use it by setting the CMake variable
"-DGPU_DEPLOYMENT_KIT_ROOT_DIR=/path/to/your/kit". The NVML support is
most useful if "nvidia-smi --applications-clocks-
permission=UNRESTRICTED" is run (as root). When application clocks
permissions are unrestricted, the GPU clock speed can be increased
automatically, which increases the GPU kernel performance roughly
proportional to the clock increase. When using GROMACS on suitable
GPUs under restricted permissions, clocks cannot be changed, and in
that case informative log file messages will be produced. Background
details can be found at this NVIDIA blog post. NVML support is only
available if detected, and may be disabled by turning off the
"GMX_USE_NVML" CMake advanced option.

By default, code will be generated for the most common CUDA
architectures. However, to reduce build time and binary size we do not
generate code for every single possible architecture, which in rare
cases (say, Tegra systems) can result in the default build not being
able to use some GPUs. If this happens, or if you want to remove some
architectures to reduce binary size and build time, you can alter the
target CUDA architectures. This can be done either with the
"GMX_CUDA_TARGET_SM" or "GMX_CUDA_TARGET_COMPUTE" CMake variables,
which take a semicolon delimited string with the two digit suffixes of
CUDA (virtual) architectures names, for instance “35;50;51;52;53;60”.
For details, see the “Options for steering GPU code generation”
section of the nvcc man / help or Chapter 6. of the nvcc manual.
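
As a sketch, restricting code generation to a couple of architectures
might look like this (the values are only an example and should match
the GPUs you intend to use):

   cmake .. -DGMX_GPU=ON -DGMX_CUDA_TARGET_SM="60;70"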

The GPU acceleration has been tested on AMD64/x86-64 platforms with
Linux, Mac OS X and Windows operating systems, but Linux is the best-
tested and supported of these. Linux running on POWER 8, ARM v7 and v8
CPUs also works well.

Experimental support is available for compiling CUDA code, both for
host and device, using clang (version 3.9 or later). A CUDA toolkit
(>= v7.0) is still required but it is used only for GPU device code
generation and to link against the CUDA runtime library. The clang
CUDA support simplifies compilation and provides benefits for
development (e.g. allows the use of code sanitizers in CUDA host-code).
Additionally, using clang for both CPU and GPU compilation can be
beneficial to avoid compatibility issues between the GNU toolchain and
the CUDA toolkit. clang for CUDA can be triggered using the
"GMX_CLANG_CUDA=ON" CMake option. Target architectures can be selected
with  "GMX_CUDA_TARGET_SM", virtual architecture code is always
embedded for all requested architectures (hence
GMX_CUDA_TARGET_COMPUTE is ignored). Note that this is mainly a
developer-oriented feature and it is not recommended for production
use as the performance can be significantly lower than that of code
compiled with nvcc (and it has also received less testing). However,
note that with clang 5.0 the performance gap is significantly narrowed
(at the time of writing, about 20% slower GPU kernels), so this
version could be considered in non performance-critical use-cases.
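
A minimal sketch of such a build might look like the following (the
compiler names and target architecture are illustrative):

   CC=clang CXX=clang++ cmake .. -DGMX_GPU=ON -DGMX_CLANG_CUDA=ON \
         -DGMX_CUDA_TARGET_SM=60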


OpenCL GPU acceleration
~~~~~~~~~~~~~~~~~~~~~~~

The primary target of the GROMACS OpenCL support is accelerating
simulations on AMD hardware, both discrete GPUs and APUs (integrated
CPU+GPU chips). The GROMACS OpenCL on NVIDIA GPUs works, but
performance and other limitations make it less practical (for details
see the user guide).

To build GROMACS with OpenCL support enabled, two components are
required: the OpenCL headers and the wrapper library that acts as a
client driver loader (so-called ICD loader). The additional, runtime-
only dependency is the vendor-specific GPU driver for the device
targeted. This also contains the OpenCL compiler. As the GPU compute
kernels are compiled  on-demand at run time, this vendor-specific
compiler and driver is not needed for building GROMACS. The former,
compile-time dependencies are standard components, hence stock
versions can be obtained from most Linux distribution repositories
(e.g. "opencl-headers" and "ocl-icd-libopencl1" on Debian/Ubuntu).
Only the compatibility with the required OpenCL version 1.1 needs to
be ensured. Alternatively, the headers and library can also be
obtained from vendor SDKs (e.g. from AMD), which must be installed in
a path found in "CMAKE_PREFIX_PATH" (or via the environment variables
"AMDAPPSDKROOT" or "CUDA_PATH").

To trigger an OpenCL build the following CMake flags must be set

   cmake .. -DGMX_GPU=ON -DGMX_USE_OPENCL=ON

On Mac OS, an AMD GPU can be used only with OS version 10.10.4 and
higher; earlier OS versions are known to run incorrectly.


Static linking
~~~~~~~~~~~~~~

Dynamic linking of the GROMACS executables will lead to a smaller disk
footprint when installed, and so is the default on platforms where we
believe it has been tested repeatedly and found to work. In general,
this includes Linux, Windows, Mac OS X and BSD systems. Static
binaries take more space, but on some hardware and/or under some
conditions they are necessary, most commonly when you are running a
parallel simulation using MPI libraries (e.g. BlueGene, Cray).

* To link GROMACS binaries statically against the internal GROMACS
  libraries, set "-DBUILD_SHARED_LIBS=OFF".

* To link statically against external (non-system) libraries as
  well, set "-DGMX_PREFER_STATIC_LIBS=ON". Note, that in general
  "cmake" picks up whatever is available, so this option only
  instructs "cmake" to prefer static libraries when both static and
  shared are available. If no static version of an external library is
  available, even when the aforementioned option is "ON", the shared
  library will be used. Also note that the resulting binaries will
  still be dynamically linked against system libraries on platforms
  where that is the default. To use static system libraries,
  additional compiler/linker flags are necessary, e.g. "-static-libgcc
  -static-libstdc++".

* To attempt to link a fully static binary set
  "-DGMX_BUILD_SHARED_EXE=OFF". This will prevent CMake from
  explicitly setting any dynamic linking flags. This option also sets
  "-DBUILD_SHARED_LIBS=OFF" and "-DGMX_PREFER_STATIC_LIBS=ON" by
  default, but the above caveats apply. For compilers which don’t
  default to static linking, the required flags have to be specified.
  On Linux, this is usually "CFLAGS=-static CXXFLAGS=-static".
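
Putting these together, an attempt at a fully static Linux build might
be configured as in this sketch (whether the link succeeds depends on
the static libraries available on your system):

   CFLAGS=-static CXXFLAGS=-static cmake .. -DGMX_BUILD_SHARED_EXE=OFF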


Portability aspects
~~~~~~~~~~~~~~~~~~~

A GROMACS build will normally not be portable, not even across
hardware with the same base instruction set, like x86. Non-portable
hardware-specific optimizations are selected at configure-time, such
as the SIMD instruction set used in the compute kernels. This
selection will be done by the build system based on the capabilities
of the build host machine or otherwise specified to "cmake" during
configuration.

Often it is possible to ensure portability by choosing the least
common denominator of SIMD support, e.g. SSE2 for x86, and ensuring
that you use "cmake -DGMX_USE_RDTSCP=off" if any of the target CPU
architectures does not support the "RDTSCP" instruction.  However, we
discourage attempts to use a single GROMACS installation when the
execution environment is heterogeneous, such as a mix of AVX and
earlier hardware, because this will lead to programs (especially
mdrun) that run slowly on the new hardware. Building two full
installations and locally managing how to call the correct one (e.g.
using a module system) is the recommended approach. Alternatively, as
at the moment the GROMACS tools do not make strong use of SIMD
acceleration, it can be convenient to create an installation with
tools portable across different x86 machines, but with separate mdrun
binaries for each architecture. To achieve this, one can first build a
full installation with the least-common-denominator SIMD instruction
set, e.g. "-DGMX_SIMD=SSE2", then build separate mdrun binaries for
each architecture present in the heterogeneous environment. By using
custom binary and library suffixes for the mdrun-only builds, these
can be installed to the same location as the “generic” tools
installation. Building just the mdrun binary is possible by setting
the "-DGMX_BUILD_MDRUN_ONLY=ON" option.


Linear algebra libraries
~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned above, sometimes vendor BLAS and LAPACK libraries can
provide performance enhancements for GROMACS when doing normal-mode
analysis or covariance analysis. For simplicity, the text below will
refer only to BLAS, but the same options are available for LAPACK. By
default, CMake will search for BLAS, use it if it is found, and
otherwise fall back on a version of BLAS internal to GROMACS. The
"cmake" option "-DGMX_EXTERNAL_BLAS=on" will be set accordingly. The
internal versions are fine for normal use. If you need to specify a
non-standard path to search, use
"-DCMAKE_PREFIX_PATH=/path/to/search". If you need to specify a
library with a non-standard name (e.g. ESSL on AIX or BlueGene), then
set "-DGMX_BLAS_USER=/path/to/reach/lib/libwhatever.a".

If you are using Intel MKL for FFT, then the BLAS and LAPACK it
provides are used automatically. This could be over-ridden with
"GMX_BLAS_USER", etc.

On Apple platforms where the Accelerate Framework is available, these
will be automatically used for BLAS and LAPACK. This could be over-
ridden with "GMX_BLAS_USER", etc.


Changing the names of GROMACS binaries and libraries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is sometimes convenient to have different versions of the same
GROMACS programs installed. The most common use cases have been single
and double precision, and with and without MPI. This mechanism can
also be used to install side-by-side multiple versions of mdrun
optimized for different CPU architectures, as mentioned previously.

By default, GROMACS will suffix programs and libraries for such builds
with "_d" for double precision and/or "_mpi" for MPI (and nothing
otherwise). This can be controlled manually with "GMX_DEFAULT_SUFFIX
(ON/OFF)", "GMX_BINARY_SUFFIX" (takes a string) and "GMX_LIBS_SUFFIX"
(also takes a string). For instance, to set a custom suffix for
programs and libraries, one might specify:

   cmake .. -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod

Thus the names of all programs and libraries will be appended with
"_mod".


Changing installation tree structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, a few different directories under "CMAKE_INSTALL_PREFIX"
are used when GROMACS is installed. Some of these can be changed,
which is mainly useful for packaging GROMACS for various
distributions. The directories are listed below, with additional notes
about some of them. Unless otherwise noted, the directories can be
renamed by editing the installation paths in the main CMakeLists.txt.

"bin/"
   The standard location for executables and some scripts. Some of the
   scripts hardcode the absolute installation prefix, which needs to
   be changed if the scripts are relocated.

"include/gromacs/"
   The standard location for installed headers.

"lib/"
   The standard location for libraries. The default depends on the
   system, and is determined by CMake. The name of the directory can
   be changed using "GMX_LIB_INSTALL_DIR" CMake variable.

"lib/pkgconfig/"
   Information about the installed "libgromacs" library for "pkg-
   config" is installed here.  The "lib/" part adapts to the
   installation location of the libraries.  The installed files
   contain the installation prefix as absolute paths.

"share/cmake/"
   CMake package configuration files are installed here.

"share/gromacs/"
   Various data files and some documentation go here. The "gromacs"
   part can be changed using "GMX_DATA_INSTALL_DIR". Using this CMake
   variable is the preferred way of changing the installation path for
   "share/gromacs/top/", since the path to this directory is built
   into "libgromacs" as well as some scripts, both as a relative and
   as an absolute path (the latter as a fallback if everything else
   fails).

"share/man/"
   Installed man pages go here.


Compiling and linking
---------------------

Once you have configured with "cmake", you can build GROMACS with
"make". It is expected that this will always complete successfully,
and give few or no warnings. The CMake-time tests GROMACS makes on the
settings you choose are pretty extensive, but there are probably a few
cases we have not thought of yet. Search the web first for solutions
to problems, but if you need help, ask on gmx-users, being sure to
provide as much information as possible about what you did, the system
you are building on, and what went wrong. This may mean scrolling back
a long way through the output of "make" to find the first error
message!

If you have a multi-core or multi-CPU machine with "N" processors,
then using

   make -j N

will generally speed things up by quite a bit. Other build generator
systems supported by "cmake" (e.g. "ninja") also work well.


Building only mdrun
~~~~~~~~~~~~~~~~~~~

This is now supported with the "cmake" option
"-DGMX_BUILD_MDRUN_ONLY=ON", which will build a different version of
"libgromacs" and the "mdrun" program. Naturally, now "make install"
installs only those products. By default, mdrun-only builds will
default to static linking against GROMACS libraries, because this is
generally a good idea for the targets for which an mdrun-only build is
desirable.


Installing GROMACS
------------------

Finally, "make install" will install GROMACS in the directory given in
"CMAKE_INSTALL_PREFIX". If this is a system directory, then you will
need permission to write there, and you should use super-user
privileges only for "make install" and not the whole procedure.


Getting access to GROMACS after installation
--------------------------------------------

GROMACS installs the script "GMXRC" in the "bin" subdirectory of the
installation directory (e.g. "/usr/local/gromacs/bin/GMXRC"), which
you should source from your shell:

   source /your/installation/prefix/here/bin/GMXRC

It will detect what kind of shell you are running and set up your
environment for using GROMACS. You may wish to arrange for your login
scripts to do this automatically; please search the web for
instructions on how to do this for your shell.

Many of the GROMACS programs rely on data installed in the
"share/gromacs" subdirectory of the installation directory. By
default, the programs will use the environment variables set in the
"GMXRC" script, and if this is not available they will try to guess
the path based on their own location.  This usually works well unless
you change the names of directories inside the install tree. If you
still need to do that, you might want to recompile with the new
install location properly set, or edit the "GMXRC" script.


Testing GROMACS for correctness
-------------------------------

Since 2011, the GROMACS development uses an automated system where
every new code change is subject to regression testing on a number of
platforms and software combinations. While this improves reliability
quite a lot, not everything is tested, and since we increasingly rely
on cutting edge compiler features there is non-negligible risk that
the default compiler on your system could have bugs. We have tried our
best to test and refuse to use known bad versions in "cmake", but we
strongly recommend that you run through the tests yourself. It only
takes a few minutes, after which you can trust your build.

The simplest way to run the checks is to build GROMACS with
"-DREGRESSIONTEST_DOWNLOAD", and run "make check". GROMACS will
automatically download and run the tests for you. Alternatively, you
can download and unpack the GROMACS regression test suite
http://gerrit.gromacs.org/download/regressiontests-2018.tar.gz tarball
yourself and use the advanced "cmake" option "REGRESSIONTEST_PATH" to
specify the path to the unpacked tarball, which will then be used for
testing. If the above does not work, then please read on.
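
For example, assuming the tests were unpacked under your home
directory (a hypothetical path):

   cmake .. -DREGRESSIONTEST_PATH=$HOME/regressiontests-2018
   make check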

The regression tests are also available from the download section.
Once you have downloaded them, unpack the tarball, source "GMXRC" as
described above, and run "./gmxtest.pl all" inside the regression
tests folder. You can find more options (e.g. adding "double" when
using double precision, or "-only expanded" to run just the tests
whose names match “expanded”) if you just execute the script without
options.

Hopefully, you will get a report that all tests have passed. If there
are individual failed tests it could be a sign of a compiler bug, or
that a tolerance is just a tiny bit too tight. Check the output files
the script directs you to, and try a different or newer compiler if
the errors appear to be real. If you cannot get it to pass the
regression tests, you might try dropping a line to the gmx-users
mailing list, but then you should include a detailed description of
your hardware, and the output of "gmx mdrun -version" (which contains
valuable diagnostic information in the header).

A build with "-DGMX_BUILD_MDRUN_ONLY" cannot be tested with "make
check" from the build tree, because most of the tests require a full
build to run things like "grompp". To test such an mdrun fully
requires installing it to the same location as a normal build of
GROMACS, downloading the regression tests tarball manually as
described above, sourcing the correct "GMXRC" and running the perl
script manually. For example, from your GROMACS source directory:

   mkdir build-normal
   cd build-normal
   cmake .. -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
   make -j 4
   make install
   cd ..
   mkdir build-mdrun-only
   cd build-mdrun-only
   cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
   make -j 4
   make install
   cd /to/your/unpacked/regressiontests
   source /your/installation/prefix/here/bin/GMXRC
   ./gmxtest.pl all -np 2

If your mdrun program has been suffixed in a non-standard way, then
the "./gmxtest.pl -mdrun" option will let you specify that name to the
test machinery. You can use "./gmxtest.pl -double" to test the double-
precision version. You can use "./gmxtest.pl -crosscompiling" to stop
the test harness attempting to check that the programs can be run. You
can use "./gmxtest.pl -mpirun srun" if your command to run an MPI
program is called "srun".

The "make check" target also runs integration-style tests that may run
with MPI if "GMX_MPI=ON" was set. To make these work with various
possible MPI libraries, you may need to set the CMake variables
"MPIEXEC", "MPIEXEC_NUMPROC_FLAG", "MPIEXEC_PREFLAGS" and
"MPIEXEC_POSTFLAGS" so that "mdrun-mpi-test_mpi" would run on multiple
ranks via the shell command

   ${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} ${NUMPROC} ${MPIEXEC_PREFLAGS} \
         mdrun-mpi-test_mpi ${MPIEXEC_POSTFLAGS} -otherflags

A typical example for SLURM is

   cmake .. -DGMX_MPI=on -DMPIEXEC=srun -DMPIEXEC_NUMPROC_FLAG=-n -DMPIEXEC_PREFLAGS= -DMPIEXEC_POSTFLAGS=


Testing GROMACS for performance
-------------------------------

We are still working on a set of benchmark systems for testing the
performance of GROMACS. Until that is ready, we recommend that you try
a few different parallelization options, and experiment with tools
such as "gmx tune_pme".


Having difficulty?
------------------

You are not alone - this can be a complex task! If you encounter a
problem with installing GROMACS, then there are a number of locations
where you can find assistance. It is recommended that you follow these
steps to find the solution:

1. Read the installation instructions again, taking note that you
   have followed each and every step correctly.

2. Search the GROMACS webpage and users emailing list for
   information on the error. Adding
   "site:https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users"
   to a Google search may help filter better results.

3. Search the internet using a search engine such as Google.

4. Post to the GROMACS users emailing list gmx-users for
   assistance. Be sure to give a full description of what you have
   done and why you think it did not work. Give details about the
   system on which you are installing.  Copy and paste your command
   line and as much of the output as you think might be relevant -
   certainly from the first indication of a problem. In particular,
   please try to include at least the header from the mdrun logfile,
   and preferably the entire file.  People who might volunteer to help
   you do not have time to ask you interactive detailed follow-up
   questions, so you will get an answer faster if you provide as much
   information as you think could possibly help. High quality bug
   reports tend to receive rapid high quality answers.


Special instructions for some platforms
=======================================


Building on Windows
-------------------

Building on Windows using native compilers is rather similar to
building on Unix, so please start by reading the above. Then, download
and unpack the GROMACS source archive. Make a folder in which to do
the out-of-source build of GROMACS. For example, make it within the
folder unpacked from the source archive, and call it "build-gromacs".

For CMake, you can either use the graphical user interface provided on
Windows, or you can use a command line shell with instructions similar
to the UNIX ones above. If you open a shell from within your IDE (e.g.
Microsoft Visual Studio), it will configure the environment for you,
but you might need to tweak this in order to get either a 32-bit or
64-bit build environment. The latter provides the fastest executable.
If you use a normal Windows command shell, then you will need to
either set up the environment to find your compilers and libraries
yourself, or run the "vcvarsall.bat" batch script provided by MSVC
(just like sourcing a bash script under Unix).

With the graphical user interface, you will be asked about what
compilers to use at the initial configuration stage, and if you use
the command line they can be set in a similar way as under UNIX.

Unfortunately "-DGMX_BUILD_OWN_FFTW=ON" (see Using FFTW) does not work
on Windows, because there is no supported way to build FFTW on
Windows. You can either build FFTW some other way (e.g. with MinGW), use
the built-in fftpack (which may be slow), or use MKL.

For the build, you can either load the generated solutions file into
e.g. Visual Studio, or use the command line with "cmake --build" so
the right tools get used.
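
For example, a Release build could be driven from the command line
like this:

   cmake --build . --config Release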


Building on Cray
----------------

GROMACS builds mostly out of the box on modern Cray machines, but you
may need to specify the use of static binaries with
"-DGMX_BUILD_SHARED_EXE=off", and you may need to set the F77
environmental variable to "ftn" when compiling FFTW. The ARM ThunderX2
Cray XC50 machines differ only in that the recommended compiler is the
ARM HPC Compiler ("armclang").


Building on Solaris
-------------------

The built-in GROMACS processor detection does not work on Solaris, so
it is strongly recommended that you build GROMACS with
"-DGMX_HWLOC=on" and ensure that the "CMAKE_PREFIX_PATH" includes the
path where the hwloc headers and libraries can be found. At least
version 1.11.8 of hwloc is recommended.

Oracle Developer Studio is not a currently supported compiler (and
does not currently compile GROMACS correctly, perhaps because the
thread-MPI atomics are incorrectly implemented in GROMACS).


Building on BlueGene
--------------------


BlueGene/Q
~~~~~~~~~~

There is currently native acceleration on this platform for the Verlet
cut-off scheme. There are no plans to provide accelerated kernels for
the group cut-off scheme, but the default plain C kernels will work
(slowly).

Only the bgclang compiler is supported, because it is the only
available C++11 compiler. Only static linking is supported.

Computation on BlueGene floating-point units is always done in double-
precision. However, mixed-precision builds of GROMACS are still normal
and encouraged since they use cache more efficiently.

You need to arrange for FFTW to be installed correctly, following the
above instructions. You may prefer to configure FFTW with "--disable-
fortran" to avoid complications.

MPI wrapper compilers should be used for compiling and linking. The
MPI wrapper compilers can make it awkward to attempt to use IBM’s
optimized BLAS/LAPACK called ESSL (see the section on linear algebra
libraries). Since mdrun is the only part of GROMACS that should
normally run on the compute nodes, and there is nearly no need for
linear algebra support for mdrun, it is recommended to use the GROMACS
built-in linear algebra routines - this is never a problem for normal
simulations.

The recommended configuration is to use

   cmake .. -DCMAKE_C_COMPILER=mpicc \
            -DCMAKE_CXX_COMPILER=mpicxx \
            -DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ-static-bgclang-CXX.cmake \
            -DCMAKE_PREFIX_PATH=/your/fftw/installation/prefix \
            -DGMX_MPI=ON \
            -DGMX_BUILD_MDRUN_ONLY=ON
   make
   make install

which will build a statically-linked MPI-enabled mdrun for the compute
nodes. Otherwise, GROMACS default configuration behaviour applies.

It is possible to configure and make the remaining GROMACS tools with
the compute-node toolchain, but as none of those tools are MPI-aware,
this would not normally be useful. Instead, users should plan to run
these on the login node, and perform a separate GROMACS installation
for that, using the login node’s toolchain - not the above platform
file, or any other compute-node toolchain. This may require requesting
an up-to-date gcc or clang toolchain for the front end.

Note that only the MPI build is available for the compute-node
toolchains. The GROMACS thread-MPI or no-MPI builds are not useful at
all on BlueGene/Q.


Fujitsu PRIMEHPC
~~~~~~~~~~~~~~~~

This is the architecture of the K computer, which uses Fujitsu
Sparc64VIIIfx chips. On this platform, GROMACS has accelerated group
kernels using the HPC-ACE instructions, no accelerated Verlet kernels,
and a custom build toolchain. Since this particular chip only does
double precision SIMD, the default setup is to build GROMACS in
double. Since most users only need single, we have added an option
GMX_RELAXED_DOUBLE_PRECISION to accept single precision square root
accuracy in the group kernels; unless you know that you really need 15
digits of accuracy in each individual force, we strongly recommend you
use this. Note that all summation and other operations are still done
in double.

The recommended configuration is to use

   cmake .. -DCMAKE_TOOLCHAIN_FILE=Toolchain-Fujitsu-Sparc64-mpi.cmake \
            -DCMAKE_PREFIX_PATH=/your/fftw/installation/prefix \
            -DCMAKE_INSTALL_PREFIX=/where/gromacs/should/be/installed \
            -DGMX_MPI=ON \
            -DGMX_BUILD_MDRUN_ONLY=ON \
            -DGMX_RELAXED_DOUBLE_PRECISION=ON
   make
   make install


Intel Xeon Phi
~~~~~~~~~~~~~~

Xeon Phi processors, hosted or self-hosted, are supported. Only
symmetric (aka native) mode is supported on Knights Corner. The
performance depends among other factors on the system size, and for
now the performance might not be faster than CPUs. When building for
it, the recommended configuration is

   cmake .. -DCMAKE_TOOLCHAIN_FILE=Platform/XeonPhi
   make
   make install

The Knights Landing-based Xeon Phi processors behave like standard x86
nodes, but support a special SIMD instruction set. When cross-
compiling for such nodes, use the "AVX_512_KNL" SIMD flavor. Knights
Landing processors support so-called “clustering modes” which allow
reconfiguring the memory subsystem for lower latency. GROMACS can
benefit from the quadrant or SNC clustering modes. Care needs to be
taken to correctly pin threads. In particular, threads of an MPI rank
should not cross cluster and NUMA boundaries. In addition to the main
DRAM memory, Knights Landing has a high-bandwidth stacked memory
called MCDRAM. Using it offers performance benefits if it is ensured
that "mdrun" runs entirely from this memory; to do so it is
recommended that MCDRAM is configured in “Flat mode” and "mdrun" is
bound to the appropriate NUMA node (use e.g. "numactl --membind 1"
with quadrant clustering mode).
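
For example, a cross-compiled configuration and a pinned launch on a
Flat-mode node might look like this sketch:

   cmake .. -DGMX_SIMD=AVX_512_KNL
   numactl --membind 1 gmx mdrun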


Tested platforms
================

While it is our best belief that GROMACS will build and run pretty
much everywhere, it is important that we tell you where we really know
it works because we have tested it. We do test on Linux, Windows, and
Mac with a range of compilers and libraries for a range of our
configuration options. Every commit in our git source code repository
is currently tested on x86 with a number of gcc versions ranging from
4.8.1 through 7, versions 16 and 18 of the Intel compiler, and Clang
versions 3.4 through 5. For this, we use a variety of GNU/Linux
flavors and versions as well as recent versions of Windows. Under
Windows, we test both MSVC 2015 and version 16 of the Intel compiler.
For details, you can have a look at the continuous integration server
used by GROMACS, which runs Jenkins.

We test irregularly on ARM v7, ARM v8, BlueGene/Q, Cray, Fujitsu
PRIMEHPC, Power8, Google Native Client and other environments, and
with other compilers and compiler versions, too.