HOW TO INSTALL MPI ON CLUSTER
Two types of MPI, the message passing library, are installed on the Discovery cluster. MPICH (MPICH2 and MPICH3) is used for message passing between compute nodes with an ethernet connection. MVAPICH (MVAPICH2) is used for message passing between compute nodes with an infiniband connection.

Issue the command "features -a" on discovery to see which nodes have an infiniband connection and to see the feature names (ex. ...). If your MPI program has a lot of inter-node communication, it will run faster on the nodes with an infiniband connection.

The following module commands will list the different versions that are available:

module avail mpich – list the versions of MPICH using an ethernet connection.
module avail mvapich – list the versions of MVAPICH using an infiniband connection.

Select the version that you prefer and issue the module load modulefile command to load it. For example, the command module load mpich3 will load the default version of MPICH3.

MPI code built for an ethernet connection will only use the 1 Gb network on the IB nodes instead of the faster Infiniband network device. MPI code built for the infiniband network connection will not work on the nodes with just the 1 Gb network connection. See the Multi Core Job Examples page on submitting parallel jobs to the batch queue for information on how to specify the ethernet or infiniband connections.

How to Select a Compiler To Compile Your MPI Program

The name of the compiler used to build the MPI library is included in the name of the module. For example, mpich3/3.0.4-intel13.0 was built with the Intel v13.0 compilers. Use the same compiler to compile your MPI program as was used to build the MPI library.

MPI has predefined scripts that can be used for compiling and linking programs. They automatically call the correct include files and libraries, and use the same compiler that was used to build the MPI library. They are mpicc for C programs, mpiCC for C++ programs, mpif77 for Fortran 77 and mpif90 for Fortran 90 programs.

Here is an example of how to compile a single C source code file using mpicc with level 3 optimization:

mpicc -o mpi_example -O3 mpi_example.c
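For reference, here is a minimal sketch of what a source file such as mpi_example.c might contain; the file name and printed message are illustrative and not part of the cluster documentation. It is an ordinary MPI "hello world" program that compiles with the mpicc command shown above.

    /* mpi_example.c - minimal MPI hello-world sketch (illustrative only) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

After it is compiled, the resulting executable would typically be launched with mpirun (for example, mpirun -np 4 ./mpi_example) inside a batch job; see the Multi Core Job Examples page for the exact submission syntax used on Discovery.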