Three MPI exercises moving from distributed matrix-vector multiplication into multiple cuts at parallel Conway's Game of Life. Each exercise ships working code, lab-machine collection scripts, runtime data, and (where collected) graphs.
Third homework of the Parallel Programming course at the University of Athens (Department of Informatics & Telecommunications). Solo project. Runs were collected on the UoA DI lab machines via `mpiexec -f machines -n <n>`.
Part of a three-piece arc:
- parallel-systems-hw1 — pthread
- parallel-systems-hw2 — OpenMP
- parallel-systems-hw3 (you are here) — MPI
Note on numbering. Exercises in this homework start at `excersize2`. The first exercise of HW3 (`excersize1`) was scoped out and never completed: not preserved here, not pretending it exists.
Compute Ax = b for a randomly-generated n × n integer matrix A and vector x, distributing columns of A (not rows) across MPI processes.
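One consequence of the column split worth spelling out: each rank sees only part of every dot product, so results are combined with a sum, not a gather. With $P$ ranks and $k = n/P$ columns each,

$$
b = Ax = \sum_{p=0}^{P-1} A_p\, x_p
$$

where $A_p$ is rank $p$'s $n \times k$ column block and $x_p$ the matching $k$ entries of $x$: every rank computes a full-length partial $b$ that the root then sums (a natural fit for `MPI_Reduce` with `MPI_SUM`).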
The interesting bit isn't the multiplication; it's the column scatter. In a row-major C array the columns are not contiguous in memory: column 0 lives at indices 0, n, 2n, 3n, …. To scatter columns directly with `MPI_Scatterv` you have to:
- Define a custom datatype with `MPI_Type_vector(n, k, n, MPI_INT, &col_type)`: count `n` blocks of `k` ints, each separated by stride `n` (where `k` = columns per process).
- Then `MPI_Type_create_resized(col_type, 0, k * sizeof(int), &resized_type)` so that `MPI_Scatterv` interprets displacements in units of `k` ints instead of the natural type extent.
Without the `MPI_Type_create_resized` step, displacements computed against the natural extent of `col_type` jump over data and you get garbage in non-root ranks. The `readme.txt` shipped with the original submission has the full ASCII-diagram walkthrough of this in Greek.
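For the impatient, a minimal sketch of the whole scatter, assuming `n` divides evenly by the process count; the identifiers (`A`, `local_cols`, …) are illustrative, not necessarily the submission's:

```c
#include <mpi.h>
#include <stdlib.h>

/* Scatter k = n/nprocs columns of a row-major n x n matrix A (valid on
 * the root only) so that each rank ends up with its columns packed as a
 * contiguous n x k block. */
void scatter_columns(int *A, int n, int rank, int nprocs, int **out) {
    int k = n / nprocs;                  /* columns per process */
    MPI_Datatype col_type, resized_type;

    /* n blocks (one per matrix row) of k ints, stride n between blocks */
    MPI_Type_vector(n, k, n, MPI_INT, &col_type);
    /* shrink the extent to k ints so displacement p means "start at column p*k" */
    MPI_Type_create_resized(col_type, 0, (MPI_Aint)(k * sizeof(int)), &resized_type);
    MPI_Type_commit(&resized_type);
    MPI_Type_free(&col_type);

    int *counts = NULL, *displs = NULL;
    if (rank == 0) {                     /* significant at the root only */
        counts = malloc(nprocs * sizeof(int));
        displs = malloc(nprocs * sizeof(int));
        for (int p = 0; p < nprocs; p++) { counts[p] = 1; displs[p] = p; }
    }

    int *local_cols = malloc((size_t)n * k * sizeof(int));
    MPI_Scatterv(A, counts, displs, resized_type,
                 local_cols, n * k, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Type_free(&resized_type);
    if (rank == 0) { free(counts); free(displs); }
    *out = local_cols;
    /* each rank then multiplies local_cols by its k entries of x and the
     * root sums the length-n partial results */
}
```

Note the send side counts one `resized_type` per rank while the receive side counts `n * k` plain `MPI_INT`s; the type signatures match, and the strided column data arrives packed.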
```
cd excersize2
make
mpiexec -f ./machines -n <num_of_processes> ./bin/main

# correctness check vs serial baseline:
make test
./bin/main_test
# "no news is good news": if it prints nothing, parallel output matches serial.

./scripts/collect_data        # full sweep across lab machines
python scripts/make_graphs.py
```

`scripts/askdate` is the helper that checks which lab machines are available before the run.
Two ports of Conway's Game of Life to MPI, side-by-side:
- `game_of_life.c` — implementation A.
- `game_of_life2.c` — implementation B (different decomposition / communication strategy).
The two binaries (`bin/game_of_life`, `bin/game_of_life2`) make the same trade-off visible from opposite directions: how much locality you give up to the halo exchange, and how much parallel work you can extract from a stencil computation under MPI's send/receive cost model.
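Since the briefs aren't preserved, here is only a hedged sketch of the pattern both implementations revolve around: a per-generation ghost-row halo exchange under a 1D row decomposition. The names (`local`, `rows`, `cols`) and the non-periodic boundary choice are assumptions for illustration, not a transcription of either file:

```c
#include <mpi.h>

/* local holds rows+2 rows of cols cells each; rows 0 and rows+1 are
 * ghost rows filled from the neighbouring ranks every generation. */
void exchange_halos(unsigned char *local, int rows, int cols,
                    int rank, int nprocs, MPI_Comm comm) {
    int up   = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int down = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* first real row goes up; lower neighbour's first row lands in the
     * bottom ghost row */
    MPI_Sendrecv(&local[1 * cols],          cols, MPI_UNSIGNED_CHAR, up,   0,
                 &local[(rows + 1) * cols], cols, MPI_UNSIGNED_CHAR, down, 0,
                 comm, MPI_STATUS_IGNORE);
    /* last real row goes down; upper neighbour's last row lands in the
     * top ghost row */
    MPI_Sendrecv(&local[rows * cols],       cols, MPI_UNSIGNED_CHAR, down, 1,
                 &local[0],                 cols, MPI_UNSIGNED_CHAR, up,   1,
                 comm, MPI_STATUS_IGNORE);
}
```

With `MPI_PROC_NULL` at the edges the boundary ranks fall through cleanly; a periodic (toroidal) world would instead wrap the neighbour arithmetic with a modulo.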
```
cd excersize3
make
mpiexec -f ./machines -n <n> ./bin/game_of_life <generations> <grid_size>
mpiexec -f ./machines -n <n> ./bin/game_of_life2 <generations> <grid_size>
./scripts/collect_data.sh
```

Assignment brief not preserved.
A further variant of the same MPI Game of Life. Two binaries, `game_of_life` and `game_of_life3`, with `game_of_life3.c` exploring a different partitioning or exchange pattern than ex3's `game_of_life2.c`.
```
cd excersize4
make
mpiexec -f ./machines -n <n> ./bin/game_of_life <generations> <grid_size>
mpiexec -f ./machines -n <n> ./bin/game_of_life3 <generations> <grid_size>
./scripts/collect_data.sh
```

Assignment brief not preserved.
Each exercise expects a `machines` file in its directory listing the hostnames `mpiexec` should spawn processes on. The `scripts/askdate` helper checks which machines are alive before a run; update `machines` accordingly.
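For reference, the MPICH hostfile format is one hostname per line, optionally `host:n` to cap processes per host. The hostnames below are placeholders, not the actual DI lab machines:

```
linux01.di.uoa.gr
linux02.di.uoa.gr
linux03.di.uoa.gr:2
```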
- Built with `mpicc` (MPICH or OpenMPI).
- Assignment briefs (PDFs) preserved for exercise 2 only; briefs for 3 and 4 were not preserved.
- Original-submission Greek `readme.txt` files preserved alongside each exercise where they exist; the one in `excersize2/` is particularly worth reading for the MPI custom-type explanation.
MIT — applies to my own code in this repo. Assignment-distributed materials retain their original course copyright.