Computer Science · Grade 11 · 20 min

5. Parallel Programming Models: Shared Memory vs. Distributed Memory

Explore different parallel programming models, including shared memory and distributed memory, and their trade-offs.

Tutorial Preview

1. Introduction & Learning Objectives

Learning Objectives:
- Differentiate between the shared memory and distributed memory parallel programming models.
- Identify the hardware architecture associated with each model (e.g., multi-core CPU vs. computer cluster).
- Explain the role of synchronization in shared memory systems to prevent race conditions.
- Describe the process of explicit communication (message passing) in distributed memory systems.
- Analyze a given computational problem and recommend the more suitable programming model.
- Compare the primary advantages and disadvantages of each model, including scalability and programming complexity.

How does a supercomputer with thousands of processors work together on one massive problem, like simulating a galaxy? 🌌 They have to talk, but how? In this lesson, we'll e...
2. Key Concepts & Vocabulary

Shared Memory Model
  Definition: A parallel programming model where multiple processors or cores share access to a single, global memory space. Communication between tasks is implicit, done by reading from and writing to this common memory.
  Example: A multi-core CPU in your laptop. All cores can access the same RAM. A program can create multiple threads, and each thread can read or modify a shared variable, like a counter.

Distributed Memory Model
  Definition: A parallel programming model where each processor has its own private memory. Processors cannot directly access another's memory and must communicate explicitly by sending and receiving messages over a network.
  Example: A Beowulf cluster, which is a group of standard computers connected by a network. To calculate a large sum, one computer (the 'mas...
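The shared-counter example above can be sketched in Python. This is a minimal illustration (the `worker` function and thread counts are ours, not the tutorial's): several threads communicate implicitly by updating the same global variable, and a lock keeps the increments from being lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Each thread reads and modifies the same shared variable: implicit communication."""
    global counter
    for _ in range(increments):
        with lock:          # acquire the lock, increment, release the lock
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: with the lock, no increments are lost
```

Without the `with lock:` line, two threads can read the same old value of `counter` and each write back old value + 1, silently losing an increment; that is the race condition synchronization prevents.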
3. Core Syntax & Patterns

Shared Memory Pattern: The Lock
1. Identify the shared resource.
2. Create a lock for that resource.
3. Before accessing the resource: `acquire_lock()`.
4. After accessing the resource: `release_lock()`.

Use this pattern whenever multiple threads need to read and modify the same variable or data structure. The lock ensures that the block of code between `acquire` and `release` is 'atomic': it cannot be interrupted by other threads trying to access the same resource.

Distributed Memory Pattern: Send/Receive Pair
Process A: `send(data, destination=Process_B)`
Process B: `data = receive(source=Process_A)`

This is the fundamental communication pattern in distributed memory. For every `send` operation, there must be a corresponding `receive`. The processes must agree on the...

4 more steps in this tutorial


Sample Practice Questions

Challenging
A multithreaded program requires access to two shared resources, Resource A and Resource B, each protected by its own lock, Lock A and Lock B. A deadlock occurs. Based on the tutorial's advice, what is the most effective strategy to prevent this?
A. Combine both resources into a single larger resource protected by one lock.
B. Use `send()` and `receive()` instead of locks.
C. Establish a global order, forcing all threads to acquire Lock A before acquiring Lock B.
D. Increase the number of threads so that one is more likely to acquire both locks successfully.
Challenging
You are designing a parallel application for a large cluster. A large, read-only configuration file (100 MB) must be available to all 1,000 processes. To minimize startup time and network congestion, what is the most efficient communication strategy?
A. The master process reads the file and sends it, one integer at a time, to all 999 other processes.
B. The master process reads the file once and performs a single 'broadcast' operation to send the entire file to all other processes simultaneously.
C. Each of the 1,000 processes independently reads the file from a shared network drive.
D. The master process sends the entire file to Process 1, which sends it to Process 2, and so on, in a chain.
Challenging
Why is debugging a race condition in a shared memory program often considered more difficult than debugging a message passing error (e.g., a mismatched send/receive) in a distributed memory program?
A. Race conditions are non-deterministic and may only appear under specific, hard-to-reproduce thread timings.
B. Message passing errors produce compiler warnings, whereas race conditions do not.
C. Distributed memory programs can be run on a single machine for debugging, but shared memory programs cannot.
D. The tools for debugging shared memory programs are less advanced than those for distributed memory.


