Clarkson University

Parallel Programming CS 443/543

Course Syllabus

Course Website:
Lectures: TBD

Professor: William Hesse
Office: Science Center 383
Phone: 268-2387

Office hours: TBD

Official Course Description
The performance of single microprocessors is no longer increasing rapidly, and most of the increase in computing power in the future is anticipated to come from multiprocessor and parallel systems. But parallel programming is much more difficult than writing single-threaded sequential programs, and this course will introduce students to the techniques, design strategies, and programming interfaces for creating reliable and efficient parallel programs. Students will program for clusters of workstations using the MPI parallel message passing library, and will write multi-threaded programs for shared-memory multiprocessors. Students will learn methods and tools for predicting and measuring the performance of parallel algorithms. Students taking CS 543 will read and discuss research papers on parallel architectures and algorithms.

Course Objectives:

  1. Students will design, implement, and measure the performance of parallel programs using the MPI library.
  2. Students will design and program multithreaded programs using the Pthreads library.
  3. Students will debug parallel programs using debuggers, logging, and code analysis.
  4. Students will learn fundamental parallel algorithms and parallel design principles that are used to construct robust and efficient parallel programs.
  5. Students will read current papers in parallel programming research, including modern hardware implementations, software support libraries, and algorithms.


Textbooks:

"Parallel Programming with MPI", by Peter Pacheco, published by Morgan Kaufmann.
"Pthreads Programming: A POSIX Standard for Better Multiprocessing", by Bradford Nichols et al., published by O'Reilly.

Grading:

The grading in this class is a numerical score based on all components of the course. Assignments and tests will be curved at the time they are graded; no additional curve will be applied to the final class averages. The components are weighted according to this table:

There will be three exams during the semester, and a final exam. You are responsible for all material in the lecture, as well as the reading assignments. Important topics from the reading assignments will be reviewed in lecture.

Assignments must be completed individually unless noted otherwise. This means all of the obvious things: no copying of code, etc. Within these guidelines, I strongly encourage students to study and work together and to discuss assignments; this is a major way to learn more and to get better grades. If you get a significant amount of help from someone else, note this on your submission. This won't affect your grade; it just verifies that you aren't getting excessive help from others while trying to hide it. If you tell me about the help you are getting, then you are not cheating. I may tell you to get less help in the future, but you will not be subject to any penalties.

Class Participation includes presenting research papers you have been assigned to present (for graduate students), and participating in all discussions of research papers.

Late work is subject to a penalty of 20% per week.


Weekly Topics:

  1. Message Passing and the MPI Library
  2. Collective Communication: Scatter-Gather and Prefix Sum
  3. Deadlock and Safe MPI Programs
  4. Logging, Debugging, and Trace Visualization
  5. The Cost of Communication: Bandwidth and Latency
  6. Design Patterns: FSAs, Centralized Servers, and Distributed Control
  7. Load Sharing
  8. Shared Memory and Multithreading
  9. Pthread Basics
  10. Preemptive Multithreading and Memory Races
  11. Critical Sections, Locks, Mutexes, and Condition Variables
  12. Amdahl's Law and Performance Measurement
  13. Multiprocessor Caching and Memory Consistency
  14. Analysis of Parallel Algorithms

Author: William Hesse
Last Modified: Mar 15, 2007