Using MPI on Clipper

Tags: HPC, Clipper, MPI

What is MPI?

From Wikipedia:

The Message Passing Interface (MPI) is a portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines that are useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several open-source MPI implementations, which fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.

MPI can be used to communicate between processes located on a single system or on a collection of distributed systems.
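
For example, two processes can exchange a value with the point-to-point calls MPI_Send and MPI_Recv. The short sketch below (not part of the tutorial program that follows; it assumes at least two ranks, e.g. launched with mpirun -n 2) illustrates the idea:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(NULL, NULL);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int number;
  if (rank == 0) {
    // Rank 0 sends a single integer to rank 1 with message tag 0.
    number = 42;
    MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    // Rank 1 receives the integer from rank 0.
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Rank 1 received %d from rank 0\n", number);
  }

  MPI_Finalize();
}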

Clipper provides four MPI implementations: OpenMPI, MVAPICH2, MPICH, and Intel MPI. Each has been compiled with GCC and with support for the cluster's high-speed, low-latency InfiniBand network.
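
To see exactly which versions are installed, query the module system. (The module names below are assumptions based on the list above; check the actual names on Clipper.)

module avail              # list all installed modules
module spider openmpi     # show available OpenMPI versions (on Lmod-based systems)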

MPI Hello World

The examples below use the following MPI Hello World C application, written by Wes Kendall:

// Author: Wes Kendall
// Copyright 2011 www.mpitutorial.com
// This code is provided freely with the tutorials on mpitutorial.com. Feel
// free to modify it for your own use. Any distribution of the code must
// either provide a link to www.mpitutorial.com or keep this header intact.
//
// An intro MPI hello world program that uses MPI_Init, MPI_Comm_size,
// MPI_Comm_rank, MPI_Finalize, and MPI_Get_processor_name.
//
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  // Initialize the MPI environment. The two arguments to MPI_Init are not
  // currently used by MPI implementations, but are there in case future
  // implementations might need the arguments.
  MPI_Init(NULL, NULL);

  // Get the number of processes
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Get the rank of the process
  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  // Get the name of the processor
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &name_len);

  // Print off a hello world message
  printf("Hello world from processor %s, rank %d out of %d processors\n",
         processor_name, world_rank, world_size);

  // Finalize the MPI environment. No more MPI calls can be made after this
  MPI_Finalize();
}
OpenMPI

After saving the example program as mpi_hello_world.c, load the openmpi module and compile it with the mpicc compiler wrapper:

module load openmpi

mpicc mpi_hello_world.c -o mpi_hello_world
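
As an optional sanity check, OpenMPI's mpicc can report the underlying compiler invocation, and you can try a small local run (only if short test runs are permitted on the login node):

mpicc --showme                   # OpenMPI: print the underlying gcc command and MPI flags
mpirun -n 2 ./mpi_hello_world    # quick two-process smoke test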

Create a Slurm script, mpi_hello_world.sbatch, that requests multiple tasks (cores) across multiple nodes. Note that OpenMPI jobs should be launched with mpirun rather than srun:

#!/bin/bash
#SBATCH --job-name=mpi-hello-world
#SBATCH --output=mpi-hello-world.out
#SBATCH --time=00:30:00
#SBATCH --partition=cpu
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=10
#SBATCH --mem-per-cpu=4G
#SBATCH --mail-user=username@gvsu.edu
#SBATCH --mail-type=BEGIN,END,FAIL

module purge
module load openmpi

mpirun ~/mpi-hello-world-openmpi/mpi_hello_world

Submit the job to the cluster:

sbatch mpi_hello_world.sbatch
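
While the job is queued or running, you can monitor it with standard Slurm commands:

squeue -u $USER           # list your pending and running jobs
cat mpi-hello-world.out   # view the output file once the job completes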

The output shows all 20 processes running across the two allocated nodes, ten per node as requested (ranks 0-9 on c001, ranks 10-19 on c002):

Hello world from processor c001.clipper.gvsu.edu, rank 1 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 14 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 15 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 6 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 18 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 16 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 12 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 13 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 19 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 17 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 11 out of 20 processors
Hello world from processor c002.clipper.gvsu.edu, rank 10 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 4 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 8 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 2 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 3 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 7 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 0 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 9 out of 20 processors
Hello world from processor c001.clipper.gvsu.edu, rank 5 out of 20 processors