Computing Reviews
Parallel programming : concepts and practice
Schmidt B., González-Domínguez J., Hundt C., Schlarb M., Morgan Kaufmann Publishers Inc., Cambridge, MA, 2018. 416 pp. Type: Book (978-0-12-849890-3)
Date Reviewed: Jun 29 2018

Parallel programming is not optional. In the past, algorithm designers could focus on the design of sequential algorithms and rely on Moore’s law: hardware speed improvements did the rest. Today, computer scientists and software engineers must write highly parallelizable code to exploit the capabilities of current hardware architectures, which include multicore processors, powerful general-purpose graphics processing units (GPGPUs), and the computer clusters used in cloud computing and big data applications. Of course, the careful design of the sequential parts of parallel algorithms cannot be overlooked (remember Amdahl’s law), yet traditional algorithm design techniques must be complemented by the use of parallel programming technologies.

Parallel programming: concepts and practice provides a good introduction to some of the most popular parallel programming tools available today. As a textbook, it covers some of the fundamentals, from Flynn’s taxonomy of parallel architectures (that is, SIMD or MIMD) to high-performance computing trends (that is, the aforementioned GPGPUs); from the theoretical parallel random access machine (PRAM) model to the von Neumann bottleneck and memory hierarchies; and from Amdahl’s and Gustafson’s laws to Foster’s parallel algorithm design methodology [1].
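For reference, the two scaling laws at the heart of those fundamentals can be stated compactly. With f the serial fraction of a computation and N the number of processors, Amdahl’s law bounds the speedup of a fixed-size problem, while Gustafson’s law gives the scaled speedup when the problem grows with the machine:

    S_{\mathrm{Amdahl}}(N) = \frac{1}{f + (1-f)/N} \le \frac{1}{f},
    \qquad
    S_{\mathrm{Gustafson}}(N) = f + N(1-f) = N - f(N-1)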

Up to this point, the book surveys topics you can find in any other parallel computing textbook. However, this quick introduction to the field of parallel computing covers only one-fourth of the book. The other three quarters are more tutorial-like, almost like some of the many online blogs you can find to learn about programming techniques and tools. In this case, brief fragments of text introduce code snippets to acquaint students with C++11 multithreading, OpenMP, CUDA, the message passing interface (MPI), and UPC++.

C++ multithreading is covered in detail; two chapters describe the multithreading support included as part of the C++11 standard (and, occasionally, C++14).
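To give a flavor of what those chapters dissect, here is a minimal sketch (my own illustration, not one of the book’s examples) of a parallel sum using std::thread, with the input split into one contiguous chunk per hardware thread:

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);
        const unsigned num_threads =
            std::max(1u, std::thread::hardware_concurrency());
        std::vector<long long> partial(num_threads, 0);
        std::vector<std::thread> workers;

        // Each worker accumulates its own chunk into a private slot,
        // so no synchronization is needed beyond the final join().
        const std::size_t chunk = data.size() / num_threads;
        for (unsigned t = 0; t < num_threads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t begin = t * chunk;
                const std::size_t end =
                    (t == num_threads - 1) ? data.size() : begin + chunk;
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();  // wait for all workers

        std::cout << std::accumulate(partial.begin(), partial.end(), 0LL)
                  << '\n';
    }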

The OpenMP pragma-based standard for shared-memory parallel programming in C++ is introduced as a more programmer-friendly alternative to C++ threads.
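The contrast is easy to show: the same parallel sum, again as a generic sketch rather than one of the book’s listings, collapses into a single pragma with a reduction clause:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);
        long long sum = 0;

        // OpenMP forks a team of threads, splits the iterations among
        // them, and combines the private partial sums declared by the
        // reduction clause.
        #pragma omp parallel for reduction(+:sum)
        for (long long i = 0; i < (long long)data.size(); ++i)
            sum += data[i];

        std::cout << sum << '\n';
    }

Compiled with -fopenmp (GCC/Clang), the pragma takes over the chunking, thread management, and final reduction that had to be written by hand above.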

NVIDIA’s CUDA is the de facto standard for programming heterogeneous parallel systems, that is, systems that combine CPUs with GPUs. This book covers its fundamentals, including how to effectively use the GPU memory hierarchy and how to overlap communication and computation to exploit the supercomputing capabilities of modern GPUs.
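A minimal SAXPY kernel conveys the programming model (an illustrative sketch, not one of the book’s case studies; unified memory via cudaMallocManaged keeps it short, whereas explicit transfers expose the communication one learns to overlap with computation):

    #include <cstdio>

    // Each GPU thread handles one element of the vectors.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch a grid of 256-thread blocks covering all n elements.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expected: 4.0
        cudaFree(x);
        cudaFree(y);
    }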

Whereas threads, OpenMP, and CUDA are used in the shared-memory architectures of multicore and many-core systems, MPI is the traditional standard for programming clusters and supercomputers composed of nodes that communicate through an interconnection network. A chapter on MPI covers blocking and nonblocking communication through message passing, both point-to-point and through the use of “collectives” (MPI routines that implement common communication patterns that involve multiple processes).
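A collective is easiest to grasp in miniature; in this sketch (mine, not the book’s), every process contributes its rank and MPI_Reduce combines the values at process 0:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // A collective: all processes participate, and the sum of
        // their contributions lands at root process 0.
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);
        MPI_Finalize();
    }

Run with, for example, mpirun -np 4 ./reduce.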

Finally, UPC++ (Unified Parallel C++) is a library-based extension of C++ that follows the partitioned global address space (PGAS) approach, whereby the programmer has access to a global shared address space that is logically divided among processes with affinities to parts of the shared memory. UPC++ and other PGAS languages (such as Chapel, X10, and Fortress) provide a common programming model that can be used across a whole parallel system, blurring the lines between programming shared-memory systems (threads, CUDA, OpenMP) and distributed-memory systems (MPI).
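The PGAS flavor shows even in a toy example. This sketch assumes the UPC++ v1.0 API (and is, again, my illustration rather than the book’s): rank 0 allocates an integer in the shared space, broadcasts the global pointer, and every rank reads it with a one-sided rget:

    #include <upcxx/upcxx.hpp>
    #include <iostream>

    int main() {
        upcxx::init();

        // Rank 0 allocates in its shared segment; the global pointer
        // is then broadcast to (and valid on) every rank.
        upcxx::global_ptr<int> gptr;
        if (upcxx::rank_me() == 0) gptr = upcxx::new_<int>(42);
        gptr = upcxx::broadcast(gptr, 0).wait();

        // One-sided read: no matching call on rank 0 is required.
        int value = upcxx::rget(gptr).wait();
        std::cout << "rank " << upcxx::rank_me()
                  << " read " << value << '\n';

        upcxx::barrier();
        if (upcxx::rank_me() == 0) upcxx::delete_(gptr);
        upcxx::finalize();
    }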

The book briefly introduces the five parallel programming technologies mentioned above. Key features are analyzed in detailed code examples, which can be downloaded from the book’s web page (https://parallelprogrammingbook.org/). The case studies follow an almost literate programming style (à la Knuth) and provide a hands-on approach to learning parallel programming by example.

Each chapter includes some programming exercises as well as a handful of relevant bibliographic references, mainly to the published standards, well-known textbooks, and notable research papers that introduced the ideas discussed in this friendly guide to parallel programming. The book’s tutorial-like style will certainly benefit students who prefer learning in front of a computer screen and DIYers/self-learners who would refrain from reading more thorough academic textbooks on parallel computing.

Reviewer: Fernando Berzal. Review #: CR146120 (1809-0476)
[1] Foster, I. Designing and building parallel programs. Addison-Wesley, Boston, MA, 1995.
Category: Parallel Programming (D.1.3 ...)
 
Other reviews under "Parallel Programming":

How to write parallel programs: a first course. Carriero N. (ed), Gelernter D. (ed), MIT Press, Cambridge, MA, 1990. Type: Book (9780262031714). Reviewed: Jul 1 1992
Parallel computer systems. Koskela R., Simmons M., ACM Press, New York, NY, 1990. Type: Book (9780201509373). Reviewed: May 1 1992
Parallel functional languages and compilers. Szymanski B. (ed), ACM Press, New York, NY, 1991. Type: Book (9780201522433). Reviewed: Sep 1 1993
