Computer Science · Grade 11 · 20 min

10. Case Study: Building a Parallel Web Server

Apply the concepts learned in this chapter to build a parallel web server that can handle multiple requests concurrently.

Tutorial Preview

1. Introduction & Learning Objectives

Learning Objectives

- Explain the limitations of a sequential web server.
- Differentiate between the 'thread-per-request' and 'thread pool' models for parallel servers.
- Diagram the flow of a client request through a parallel web server.
- Identify potential concurrency issues, like race conditions, in a server context.
- Analyze the trade-offs between server performance and resource consumption.
- Design a high-level pseudocode implementation of a basic parallel web server using a thread pool.

Ever wonder how a website like YouTube can serve videos to millions of people at the exact same time without crashing? 🤔 Let's pull back the curtain and see how it's done! This case study will walk you through the architectural design of a web server, starting from...
2. Key Concepts & Vocabulary

| Term | Definition | Example |
| --- | --- | --- |
| Web Server | A program that listens for incoming network requests (typically over HTTP) from clients like web browsers, processes those requests, and sends back responses, such as HTML files, images, or data. | When you type `www.example.com` into your browser, you are sending an HTTP request to a web server. The server finds the `index.html` file and sends it back to your browser to display. |
| Socket | A software endpoint that establishes a two-way communication link between two programs on a network. A server uses a 'listening socket' to wait for new connections. | Think of it like a phone number for a program. The server has a well-known number (IP address + port, e.g., `192.168.1.1:80`) that clients can 'call' to connect. |
| Sequential Server | A server that hand... | |
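To make the Web Server and Socket definitions concrete, here is a minimal sequential-server sketch using Python's standard `socket` module. The loopback address, OS-assigned port, and hard-coded HTML body are illustrative choices, not part of the tutorial:

```python
import socket
import threading

def serve_one_request(server_socket):
    """Accept a single connection and answer with a fixed HTTP response."""
    conn, _addr = server_socket.accept()      # block until a client 'calls'
    conn.recv(1024)                           # read (and ignore) the request
    body = b"<h1>Hello</h1>"
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/html\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"\r\n" + body
    )
    conn.close()

# The server's 'phone number': loopback IP plus an OS-assigned free port.
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("127.0.0.1", 0))
server_socket.listen()
port = server_socket.getsockname()[1]

# Run the server in the background so this same script can act as the client.
threading.Thread(target=serve_one_request, args=(server_socket,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = b""
while True:
    chunk = client.recv(4096)                 # read until the server hangs up
    if not chunk:
        break
    response += chunk
client.close()
print(response.split(b"\r\n")[0].decode())    # prints "HTTP/1.1 200 OK"
```

Because the server handles one connection at a time, a second client would have to wait until the first is fully served, which is exactly the limitation the parallel designs below address.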
3. Core Syntax & Patterns

The Listener Loop Pattern

```
server_socket.bind(address)
server_socket.listen()
while (true):
    client_connection = server_socket.accept()
    handle_request(client_connection)
```

This is the fundamental structure of any server. It binds to a specific IP address and port, listens for incoming connections, and then enters an infinite loop to 'accept' new connections one by one and pass them off for processing.

Thread-per-Request Pattern

```
while (true):
    client_connection = server_socket.accept()
    new_thread = create_thread(target=handle_request, args=(client_connection,))
    new_thread.start()
```

A simple approach to parallelism. Inside the listener loop, for every new connection accepted, the server immediately creates a brand-new thread to handle that specific client. This is...
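The thread-per-request pattern above can be sketched as runnable Python using the standard `socket` and `threading` modules. The fixed `ok` response and the cap of three requests are our assumptions so the demo terminates; a real server would loop forever:

```python
import socket
import threading

def handle_request(conn):
    """Worker: read the request, send a tiny fixed response, hang up."""
    conn.recv(1024)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

def listener(server_socket, n_requests):
    # The listener loop: accept, then immediately hand the connection
    # to a brand-new thread so the loop can accept the next client.
    for _ in range(n_requests):               # 'while (true)' in a real server
        conn, _addr = server_socket.accept()
        threading.Thread(target=handle_request, args=(conn,)).start()

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("127.0.0.1", 0))          # OS-assigned free port
server_socket.listen()
port = server_socket.getsockname()[1]
threading.Thread(target=listener, args=(server_socket, 3), daemon=True).start()

# Three clients in a row; each is served by its own worker thread.
replies = []
for _ in range(3):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"GET / HTTP/1.1\r\n\r\n")
    data = b""
    while True:
        chunk = c.recv(4096)
        if not chunk:
            break
        data += chunk
    c.close()
    replies.append(data)
print(len(replies))  # 3
```

Creating one thread per connection is simple but unbounded: under heavy load the thread count grows with the number of clients, which is what motivates the thread pool model discussed later in the tutorial.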

4 more steps in this tutorial


Sample Practice Questions

Challenging
A parallel web server with a thread pool of 16 is running on a 16-core machine. Under heavy load, the server's response time becomes very slow, but monitoring tools show that CPU utilization is consistently low (~10%). What is the most likely bottleneck causing this issue, based on the tutorial's common pitfalls?
A. A blocking I/O operation (e.g., `read_data_from_slow_disk()`) is being performed in the main listener loop before tasks are submitted to the pool.
B. The thread pool size is too small for the 16-core machine, causing excessive context switching.
C. A race condition is causing threads to constantly overwrite each other's work, leading to wasted CPU cycles.
D. The server has run out of memory for the thread pool's task queue.
Challenging
You are designing a parallel server on an 8-core machine. The server's primary task is to act as a proxy, receiving a request and then fetching data from a very slow, external third-party API. Which thread pool sizing strategy is most justifiable and why?
A. Exactly 8 threads, to match one thread per CPU core for maximum computational efficiency.
B. A pool size significantly larger than 8 (e.g., 50-100), because most threads will be in a non-CPU-intensive 'waiting' state, allowing other threads to use the CPU.
C. Exactly 1 thread, to serialize all requests to the slow API and prevent it from being overloaded.
D. A 'thread-per-request' model, as the overhead of thread creation is negligible compared to the API wait time.
Challenging
Which pseudocode snippet best represents the logic for a worker thread's target function that handles a request and safely updates a shared visitor counter?

A.
```
def handle(conn, lock, counter):
    counter += 1
    lock.acquire()
    process_request(conn)
    lock.release()
    conn.close()
```

B.
```
def handle(conn, lock, counter):
    process_request(conn)
    counter.increment_atomically()
    conn.close()
```

C.
```
def handle(conn, lock, counter):
    process_request(conn)
    lock.acquire()
    counter += 1
    lock.release()
    conn.close()
```

D.
```
def handle(conn, lock, counter):
    lock.acquire()
    process_request(conn)
    counter += 1
    conn.close()
    lock.release()
```
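The lock-around-a-shared-counter idea this question probes can be demonstrated directly with Python's `threading.Lock`. This sketch is ours, not the tutorial's: the thread and iteration counts are arbitrary, and a no-op stands in for `process_request`:

```python
import threading

visitor_count = 0                 # shared state, touched by every worker
lock = threading.Lock()

def handle(n_requests):
    global visitor_count
    for _ in range(n_requests):
        # process_request(conn) would run here, OUTSIDE the lock,
        # so slow request work never serializes the other workers.
        with lock:                # guard only the shared counter update
            visitor_count += 1

threads = [threading.Thread(target=handle, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(visitor_count)  # 8 threads x 10,000 increments = 80000
```

The key design point is keeping the critical section as small as possible: the lock protects only the increment, never the request processing itself.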


