parallel-systems-hw3

Three MPI exercises, moving from distributed matrix-vector multiplication to multiple cuts at a parallel Conway's Game of Life. Each exercise ships working code, lab-machine collection scripts, runtime data, and (where collected) graphs.

Third homework of the Parallel Programming course at the University of Athens (Department of Informatics & Telecommunications). Solo project. Runs were collected on the UoA DI lab machines via mpiexec -f machines -n <n>.

Part of a three-piece arc:

  1. parallel-systems-hw1 — pthread
  2. parallel-systems-hw2 — OpenMP
  3. parallel-systems-hw3 (you are here) — MPI

Note on numbering. Exercises in this homework start at excersize2. The first exercise of HW3 (excersize1) was scoped out and never completed — not preserved here, not pretending it exists.

Exercise 2 — MPI matrix-vector multiplication, column-distributed

Compute Ax = b for a randomly generated n × n integer matrix A and vector x, distributing the columns of A (not the rows) across MPI processes.

The interesting bit isn't the multiplication — it's the column scatter. In a row-major C array, the columns are not contiguous in memory: column 0 lives at indices 0, n, 2n, 3n, …. To scatter columns directly with MPI_Scatterv you have to:

  1. Define a custom datatype with MPI_Type_vector(n, k, n, MPI_INT, &col_type): n blocks of k ints each, with block starts n ints apart (where k = columns per process).
  2. Then MPI_Type_create_resized(col_type, 0, k * sizeof(int), &resized_type) so that MPI_Scatterv interprets displacements in units of k ints instead of the natural type extent.

Without the Type_create_resized step, MPI_Scatterv computes displacements against the natural extent of col_type, which spans nearly the whole matrix, so every rank after the first lands far past its columns and receives garbage. The readme.txt shipped with the original submission has the full ASCII-diagram walkthrough of this in Greek.
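For concreteness, here is a minimal sketch of that datatype construction, assuming n is divisible by the process count; the names are illustrative, not lifted from the repo's source:

/* Sketch: scatter k = n/nprocs columns of a row-major n x n int
 * matrix to each rank. Assumes n % nprocs == 0; names are
 * illustrative, not the repo's. */
#include <mpi.h>
#include <stdlib.h>

void scatter_columns(int *A, int n, int nprocs, MPI_Comm comm,
                     int *local_cols /* n * k ints */)
{
    int k = n / nprocs;
    MPI_Datatype col_type, col_resized;

    /* n blocks of k ints, block starts n ints apart: one group of
     * k adjacent columns. */
    MPI_Type_vector(n, k, n, MPI_INT, &col_type);

    /* Shrink the extent to k ints so displacement p means
     * "start at column p * k", not "one whole matrix later". */
    MPI_Type_create_resized(col_type, 0, (MPI_Aint)(k * sizeof(int)),
                            &col_resized);
    MPI_Type_commit(&col_resized);

    int *counts = malloc(nprocs * sizeof *counts);
    int *displs = malloc(nprocs * sizeof *displs);
    for (int p = 0; p < nprocs; p++) {
        counts[p] = 1;   /* one column group per rank            */
        displs[p] = p;   /* measured in resized extents (k ints) */
    }

    /* Receivers see plain contiguous ints: n rows of k values. */
    MPI_Scatterv(A, counts, displs, col_resized,
                 local_cols, n * k, MPI_INT, 0, comm);

    free(displs);
    free(counts);
    MPI_Type_free(&col_resized);
    MPI_Type_free(&col_type);
}

Since column distribution leaves each rank with a length-n vector of partial sums after its local multiply, assembling b presumably ends in a reduction (e.g. MPI_Reduce with MPI_SUM onto the root).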

cd excersize2
make
mpiexec -f ./machines -n <num_of_processes> ./bin/main

# correctness check vs serial baseline:
make test
./bin/main_test
# "no news is good news" — if it prints nothing, parallel output matches serial.

./scripts/collect_data           # full sweep across lab machines
python scripts/make_graphs.py

scripts/askdate is the helper that checks which lab machines are available before the run.

Exercise 3 — MPI Game of Life (two implementations)

Two ports of Conway's Game of Life to MPI, side-by-side:

  • game_of_life.c — implementation A.
  • game_of_life2.c — implementation B (different decomposition / communication strategy).

The two binaries (bin/game_of_life, bin/game_of_life2) make the same trade-off visible from opposite directions: how much locality you give up to halo exchange, and how much parallel work you can extract from a stencil computation under MPI's send/receive cost model.
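Whatever the two decompositions are, the per-generation communication skeleton is the same. Below is a generic sketch of a 1-D row-block halo exchange, assuming non-periodic boundaries and one ghost row above and below the local block; it illustrates the pattern, not either implementation's actual code:

/* Generic halo exchange for one Game of Life step. The local array
 * holds `rows` real rows plus ghost rows at index 0 and rows+1,
 * each `width` cells wide. Illustrative only. */
#include <mpi.h>

void exchange_halos(unsigned char *grid, int rows, int width,
                    int rank, int nprocs, MPI_Comm comm)
{
    int up   = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int down = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* Send my top real row up; receive the lower neighbour's top
     * row into my bottom ghost row. */
    MPI_Sendrecv(&grid[1 * width],          width, MPI_UNSIGNED_CHAR, up,   0,
                 &grid[(rows + 1) * width], width, MPI_UNSIGNED_CHAR, down, 0,
                 comm, MPI_STATUS_IGNORE);

    /* Send my bottom real row down; receive the upper neighbour's
     * bottom row into my top ghost row. */
    MPI_Sendrecv(&grid[rows * width],       width, MPI_UNSIGNED_CHAR, down, 1,
                 &grid[0 * width],          width, MPI_UNSIGNED_CHAR, up,   1,
                 comm, MPI_STATUS_IGNORE);
}

MPI_PROC_NULL turns the edge ranks' sends and receives into no-ops, so on a non-wrapping grid the outermost ghost rows simply keep whatever they were initialized to (dead cells).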

cd excersize3
make
mpiexec -f ./machines -n <n> ./bin/game_of_life  <generations> <grid_size>
mpiexec -f ./machines -n <n> ./bin/game_of_life2 <generations> <grid_size>
./scripts/collect_data.sh

Assignment brief not preserved.

Exercise 4 — MPI Game of Life (third variant)

A further variant of the same MPI Game of Life. Two binaries, game_of_life and game_of_life3, with game_of_life3.c exploring a different partitioning or exchange pattern than ex3's game_of_life2.c.

cd excersize4
make
mpiexec -f ./machines -n <n> ./bin/game_of_life  <generations> <grid_size>
mpiexec -f ./machines -n <n> ./bin/game_of_life3 <generations> <grid_size>
./scripts/collect_data.sh

Assignment brief not preserved.

Running on the UoA lab machines

Each exercise expects a machines file in its directory listing the hostnames mpiexec should spawn processes on. The scripts/askdate helper checks which machines are alive before a run; update machines accordingly.
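For reference, the file is plain text with one hostname per line; the names below are placeholders, not the actual DI lab hostnames:

machine01
machine02
machine03

MPICH's Hydra launcher additionally accepts a hostname:N form to place N processes on one host.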

Notes

  • Built with mpicc (MPICH or OpenMPI).
  • Assignment briefs (PDFs) preserved for exercise 2 only; briefs for 3 and 4 were not preserved.
  • Original-submission Greek readme.txt files preserved alongside each exercise where they exist — the one in excersize2/ is particularly worth reading for the MPI custom-type explanation.

License

MIT — applies to my own code in this repo. Assignment-distributed materials retain their original course copyright.
