Message Passing Interface (MPI)

Mindset

Multi-universe model: all processors run exactly the same program without sharing memory. Without communication, they would all end up returning the same result. We therefore introduce an oracle object, MPI.COMM_WORLD. When a processor queries that object with the Get_rank() function, it returns a number that identifies that processor.

Watch YouTube Video

List of packages

  1. Python: mpi4py
  2. Julia: MPI.jl
  3. Backends: MPICH, Open MPI, Intel MPI, etc.

Coding example: Distributed hello-world with MPI.jl

We are working in the Julia project folder, using the project's local environment.

  1. Add MPI.jl to your project dependencies.
julia> using Pkg; Pkg.activate("."); Pkg.add("MPI")
  2. Configure the MPI backend (doc).
julia> using Pkg; Pkg.add("MPIPreferences");

julia> using MPIPreferences; MPIPreferences.use_system_binary()

You will see a LocalPreferences.toml file in your working folder.

  3. Rebuild the MPI package against the new MPI backend.
julia --project -e 'using Pkg; Pkg.build("MPI")'
  4. Test the program with
mpiexec -n 3 julia --project mpi.jl
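The mpi.jl script itself is not shown in this README; a minimal hello-world sketch of what it might contain (the file name and printed message are assumptions) is:

```julia
# mpi.jl -- minimal distributed hello-world with MPI.jl (a sketch, not the repo's actual script)
using MPI

MPI.Init()                      # must be called before any other MPI operation

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)      # this process's ID (0-based)
nprocs = MPI.Comm_size(comm)    # total number of processes launched by mpiexec

println("Hello from rank $rank of $nprocs")

MPI.Finalize()
```

Launched with `mpiexec -n 3`, each of the three processes runs this same file and prints a line with its own rank.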

Another example

Go through this example: https://juliaparallel.org/MPI.jl/dev/examples/06-scatterv/
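As a rough idea of what that example does, here is a hedged sketch of Scatterv: the root rank splits one array into unevenly sized chunks and sends one chunk to each rank (the chunk sizes and data below are illustrative assumptions, not a copy of the linked example):

```julia
# Sketch: root scatters uneven chunks of a vector with MPI.Scatterv!
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

root = 0
counts = [i + 1 for i in 0:nprocs-1]          # rank i receives i+1 elements (an assumption)

recvbuf = Vector{Float64}(undef, counts[rank+1])

if rank == root
    data = collect(1.0:sum(counts))           # the full array exists only on the root
    MPI.Scatterv!(MPI.VBuffer(data, counts), recvbuf, comm; root=root)
else
    MPI.Scatterv!(nothing, recvbuf, comm; root=root)
end

println("rank $rank received $recvbuf")
MPI.Finalize()
```

The MPI.VBuffer wrapper is what carries the per-rank counts; non-root ranks pass nothing as the send buffer.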

Using the school cluster

  1. Check the tested LSF script. It can be submitted on a cluster with
bsub < julia-helloworld-lsf.job
  2. The Slurm script is not tested. It can be submitted on a cluster with
sbatch < julia-helloworld-slurm.slurm
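Since the Slurm script is untested, a minimal sketch of what such a script could look like may help (the partition name, resource numbers, and module name are placeholders; the actual julia-helloworld-slurm.slurm in the repository may differ):

```shell
#!/bin/bash
#SBATCH --job-name=julia-mpi-hello   # job name shown in the queue
#SBATCH --ntasks=3                   # number of MPI ranks (an assumption)
#SBATCH --time=00:05:00              # wall-time limit (an assumption)

# Load the cluster's MPI environment; the module name is site-specific.
# module load mpi

# srun launches one Julia process per task, replacing mpiexec.
srun julia --project mpi.jl
```

On Slurm systems, srun plays the role that mpiexec plays in the local test above.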