Computing Reviews
Fundamentals of parallel multicore architecture
Solihin Y., Chapman & Hall/CRC, Boca Raton, FL, 2016. 494 pp. Type: Book (978-1-4822-1118-4)
Date Reviewed: Nov 18 2016

Multicore is the default configuration in computing systems today, from smartphones to desktops and servers. Usually, these cores run independent jobs in parallel, which sidesteps the harder challenges of parallel programming. To achieve meaningful returns from multicore technology, however, programs must be enabled for parallel execution, particularly under a shared memory model. Although multicore logically matches the shared memory parallel programming paradigm of multiple threads, this book shows that there is much more to multicore than a symmetric multiprocessor (SMP) on a chip. Another aspect of the associated hardware and software technology is that the traditional separation between programming and architecture is no longer so clean.

The book starts with a sketch of the situation at the arrival of multicore architecture and puts it in perspective. It then switches to the concerns of parallel programming, covering the commonly used shared memory and message passing approaches. Much of the book works on the shared memory theme. Chapter 3 outlines shared memory programming and discusses common strategies to identify and expose parallelism that is amenable to exploitation in a shared memory environment; loop parallelism and techniques to reveal it are covered well. The chapter also introduces the thread-to-processor mapping problem. While much of the work on parallelism has targeted scientific computations, often based on matrices, parallelism is becoming critical in many other areas, including data science. These domains are often characterized by linked data structures and a significant amount of loop-carried dependence, so exposing their parallelism requires different techniques. From this perspective, working with linked data structures is covered in chapter 4, mostly focusing on linked-list models.

Memory hierarchy is a major factor in effective shared memory programming, particularly the handling of cache coherence. Chapter 5 elaborates on the memory hierarchy: starting with basic cache memory principles, it goes on to discuss the design of cache memory in multicore machines. Chapter 6 is a bridging chapter that introduces the three key problems of cache coherence, synchronization, and memory consistency, which together determine the ability to achieve effective performance from a shared memory system. The next three chapters discuss these topics in detail: chapter 7 covers cache coherence, chapter 8 covers hardware support for synchronization, and chapter 9 discusses memory consistency models, starting with traditional sequential consistency and moving on to the popular relaxed consistency models. Advanced issues of cache coherence are discussed in chapter 10. Chapter 11 discusses interconnection networks, a major concern when a large number of processing units is involved in the system. The final chapter introduces the emerging single instruction, multiple thread (SIMT) model, which is generally seen as a good fit for exploiting parallelism on multicore machines.

The book is quite readable and uses comfortable language for a textbook. One of its nice features is that, beyond the purely architectural perspective on multicore processing, a number of key topics that significantly affect the utility and usability of a given system architecture, such as cache coherence and synchronization, are discussed in detail. A set of exercises is given at the end of each chapter, and a bibliography, though a bit short, points to useful further reading. An interesting feature is the set of interviews with experts in various subdomains of this space, who share their perspectives on trends and the future; of course, in a field that moves this fast, these may quickly become outdated.

Overall, this is a nice book if you are interested in parallel programming, a term that is, these days, almost synonymous with programming itself!

Reviewer: M. Sasikumar. Review #: CR144934 (1702-0080)
Parallel Architectures (C.1.4)
Parallel Programming (D.1.3 ...)
Parallelism And Concurrency (F.1.2 ...)
Concurrent Programming (D.1.3)
 
Other reviews under "Parallel Architectures":
A chaotic asynchronous algorithm for computing the fixed point of a nonnegative matrix of unit spectral radius. Lubachevsky B., Mitra D. Journal of the ACM 33(1): 130-150, 1986. Type: Article. (Jun 1 1986)
iWarp. Gross T., O'Hallaron D., MIT Press, Cambridge, MA, 1998. Type: Book (9780262071833). (Nov 1 1998)
Industrial strength parallel computing. Koniges A. Morgan Kaufmann Publishers Inc., San Francisco, CA, 2000. Type: Divisible Book. (Mar 1 2000)

Copyright 1999-2024 ThinkLoud®