Computer Science · Grade 11 · 20 min

7. Message Passing Interface (MPI): Communication Between Processes

Explore MPI for communication between processes in a distributed memory environment.

Tutorial Preview

Step 1: Introduction & Learning Objectives

Learning Objectives
- Explain the purpose of MPI as a standard for parallel computing.
- Identify and describe the core MPI functions: MPI_Init, MPI_Finalize, MPI_Comm_size, and MPI_Comm_rank.
- Differentiate between a process's rank and the total communicator size.
- Write a basic MPI program that follows the Single Program, Multiple Data (SPMD) model.
- Implement point-to-point communication between two processes using MPI_Send and MPI_Recv.
- Compile and run a simple MPI program using standard commands like `mpicc` and `mpirun`.
- Analyze and predict the output of a simple MPI program involving basic communication.

How do thousands of individual computers in a supercomputer work together to predict a hurricane's path or render an animated movie? 🤖💬🤖 This lesson introduc...
Step 2: Key Concepts & Vocabulary

Term: Process
Definition: An independent instance of a running program. In an MPI application, you typically launch multiple processes that all run the same code simultaneously.
Example: If you run an MPI program with `mpirun -n 4 my_program`, you are creating four separate processes, each executing the `my_program` code.

Term: Communicator (MPI_Comm)
Definition: A group of processes that are allowed to communicate with each other. The most common communicator is `MPI_COMM_WORLD`, which includes all processes launched for the application.
Example: Think of `MPI_COMM_WORLD` as a private chat room that is automatically created for all your processes when the program starts.

Term: Rank
Definition: A unique, non-negative integer ID assigned to each process within a communicator. Ranks start at 0 and go up to (size - 1).
Example: In a group of 4 processes,...
Step 3: Core Syntax & Patterns

Basic MPI Program Structure
1. MPI_Init(...)
2. Get Rank & Size
3. Perform Parallel Work
4. MPI_Finalize()

Every MPI program must start by initializing the MPI environment with `MPI_Init` and end by cleaning it up with `MPI_Finalize`. Between these calls, you typically get the process's rank and the total size to control its behavior.

Point-to-Point Syntax: MPI_Send
MPI_Send(&data, count, datatype, destination_rank, tag, communicator);
Used by a process to send a message. You must specify: what data to send (`&data`), how many elements (`count`), the type of data (`datatype`), which process to send it to (`destination_rank`), a message identifier (`tag`), and the communication group (`communicator`).

Point-to-Point Syntax: MPI_Recv
MPI_Recv(&buffer...

4 more steps in this tutorial


Sample Practice Questions

Easy
Which MPI function must be called in every MPI program before any other MPI-specific functions are used?
A. MPI_Finalize
B. MPI_Comm_size
C. MPI_Init
D. MPI_Start
Easy
What does the `MPI_Comm_rank` function retrieve for a process?
A. The total number of processes in the communicator.
B. The unique integer ID of the calling process within its communicator.
C. The processing speed or priority of the core.
D. The status of the last communication operation.
Easy
In an MPI program launched with the command `mpirun -n 8 ./my_app`, what value will be returned by `MPI_Comm_size`?
A. 7
B. 8
C. 0
D. A random number, depending on the system.


