Clarkson University

Parallel Computation CS 643

Course Syllabus

Course Website:
Lectures: T,Th 1:00-2:15 SC 342

Professor: William Hesse
Office: Science Center 383
Phone: 268-2387

Office hours:
Monday 1-2, 4-5
Tuesday 11-12
Wednesday 2-3
Thursday 12-1
Friday 10-11, 1:30-2:30
Or by appointment

Course Objectives:

  1. Students will learn to design, implement, and measure the performance of parallel programs using the MPI library.
  2. Students will learn the important theoretical models of parallel computation, and be able to analyze the performance of algorithms using these models.
  3. Students will learn the fundamental parallel algorithms that are used to construct efficient parallel programs.
  4. Students will read current papers in parallel programming research, including modern hardware implementations, software support libraries, and algorithms.


"Parallel Programming with MPI", by Peter Pacheco, published by Morgan Kaufmann.

Grading in this class is based on a numerical score computed from all components of the course. Assignments and tests will be curved at the time they are graded, and no curve will be applied to the final class averages. The components are weighted according to this table:

There will be three exams during the semester. You are responsible for all material in the lecture, as well as the reading assignments. Important topics from the reading assignments will be reviewed in lecture.

Assignments must be completed individually, unless noted otherwise. This means all of the obvious things, like no copying of code. Within these guidelines, I strongly encourage students to study and work together, and to discuss assignments; this is a major way to learn more and to get better grades. If you get a significant amount of help from someone else, note this on your submission. This won't affect your grade; it simply verifies that you aren't getting excessive help from others while trying to hide it. If you tell me about the help you are getting, then you are not cheating. I may tell you to get less help in the future, but you will not be subject to any penalties.

Class Participation includes presenting the research papers you are assigned and taking part in all discussions of research papers.

Late work is subject to a penalty of 20% per week.


Week: Topics

  1. Message Passing & the MPI Library
  2. Amdahl's Law and Performance Measurement
  3. Collective Communication: Scatter-Gather & Prefix Sum
  4. Message-Passing Models: Modeling Bandwidth & Latency
  5. Leader Selection and Symmetry Breaking
  6. Topologies & Locality
  7. The Bulk-Synchronous Parallel (BSP) model
  8. Shared-Memory Parallel Models
  9. Serialization, Atomicity, and Memory Consistency Protocols
  10. The OpenMP shared-memory specification
  11. Analysis of Parallel Algorithms
  12. Routing & Congestion
  13. Wormhole Routing & the 2D Mesh
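As a preview of week 2's performance-measurement material, Amdahl's Law predicts the speedup of a program whose fraction p of work is parallelizable when run on n processors. A minimal sketch (the function name and sample values below are illustrative, not taken from the course materials):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: predicted speedup when a fraction p of the
    work is parallelizable and is spread across n processors.
    The serial fraction (1 - p) limits the achievable speedup."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 90% parallelizable never speeds up beyond 10x,
# no matter how many processors are used.
print(amdahl_speedup(0.9, 10))    # about 5.26
print(amdahl_speedup(0.9, 1000))  # approaches, but never reaches, 10
```

Note how the speedup on 1000 processors is barely better than on 10: the 10% serial portion dominates the running time.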
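Week 3's prefix sum is a good example of a fundamental parallel primitive. A sketch of the doubling (Hillis-Steele) scan, written here as sequential Python for illustration (function and variable names are mine, not from the course materials):

```python
def parallel_prefix_sums(xs):
    """Inclusive prefix sums via the doubling (Hillis-Steele) scan:
    out[i] = xs[0] + ... + xs[i]. Within each round, every addition
    is independent of the others, so with n processors the scan
    finishes in O(log n) parallel steps instead of the O(n) steps
    of the obvious sequential loop."""
    a = list(xs)
    n = len(a)
    d = 1
    while d < n:
        # In a true parallel implementation, this comprehension
        # would be one concurrent step across all n positions.
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(n)]
        d *= 2
    return a

print(parallel_prefix_sums([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

In MPI, the same pattern underlies the collective scan operations covered in the course.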

Author: William Hesse
Last Modified: Aug 29, 2006