Computing Reviews
Programming for hybrid multi/manycore MPP systems
Levesque J., Vose A., Chapman & Hall/CRC, Boca Raton, FL, 2018. 341 pp. Type: Book (978-1-439873-71-7)
Date Reviewed: May 16 2019

To quote John Levesque, coauthor and director of Cray's Supercomputing Center of Excellence: "Ask not what your compiler can do for you, ask what you can do for your compiler." As the book argues, computer scientists must help the compiler meet the next generation of computationally intensive challenges. With tens to hundreds of cores per node, multi/manycore systems will be the norm for big data's parallel processing demands, and the compiler will continue to lead the way in translating the programmer's intent into code that runs and interacts across processors. By describing how compilers currently optimize programs, the authors derive techniques for writing more efficient parallel processing code, showing how memory allocation, memory alignment, and interprocedural analysis affect program design.
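As a minimal, hypothetical illustration of "helping the compiler" (in C rather than the book's Fortran, and not an example from the book itself): pointer aliasing is one common obstacle a programmer can remove so that a loop vectorizes.

```c
#include <stddef.h>

/* Hypothetical sketch, not from the book. Without the C99 restrict
 * qualifier, the compiler must assume x and y might overlap and may
 * refuse to vectorize the loop. Promising no aliasing lets it emit
 * SIMD code without runtime overlap checks. */
void saxpy(size_t n, float alpha,
           const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = alpha * x[i] + y[i];
}
```

The same idea carries over to Fortran, whose array arguments are non-aliasing by rule, which is one reason the language has long been friendly to vectorizing compilers.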

The book takes the reader through a 40-plus-year history of high-performance computing and compiler design. Each section opens with Levesque recalling a vivid episode from that history. For example, in the 1970s, computers like the ILLIAC IV excelled at vectorization and Cray released vectorizing compilers, while many small software firms developed vectorization preprocessors for Fortran. At Pacific Sierra Research, they quite literally "struck oil": oil companies relied on such compiler-focused systems for their big data vectorization needs.

The book mostly uses the Fortran language. Many computer science (CS) students are surprised to learn of Fortran's continued wide use, especially in high-performance computing, where the newest revision, Fortran 2018, caps more than 60 years of evolution. For language diversity, there are quotes from Pascal designer and ACM Turing Award winner Niklaus Wirth, as well as from C++ designer and ACM Fellow Bjarne Stroustrup. This thread of CS history should engage readers less interested in the algorithmic syntax featured throughout the book. Even LaTeX, the typesetting system used to produce the book, was developed by Levesque's former coworker Leslie Lamport, who received an ACM Turing Award for "fundamental contributions to the theory and practice of distributed and concurrent systems." Finally, to really excite compiler students, the translation lookaside buffer (TLB) gets its own descriptive appendix.

One example of how the programmer can help the compiler is a message from the Cray compiler stating that it could not optimize or vectorize a loop because doing so would require rank expansion, which is slower. Here the authors examine a Lawrence Livermore National Laboratory proxy application (LULESH) and its attempts at more efficient vectorization. The programmer is not just writing optimized code, but must also deal with getting the problem in (the I/O bottleneck). Other chapters address the complexities of parallel processing portability (especially across language choices) and of I/O decomposition, which challenges both performance and portability.

With performance paramount, the book also examines various schemes for gathering runtime statistics, which can help both the programmer and the compiler optimize performance. One suite, the NAS Parallel Benchmarks (NPB), is examined because it is also used when porting applications. While much of the book is about writing more efficient code for highly parallel processing, compiler writers will be interested in the intricate details needed to support that efficiency, and hardware designers can see the types of applications their machines must serve. And while Fortran is a central theme, the outlined approaches should help others in the big data application field.

The final chapter on future hardware advancements looks at central processing units (CPUs) from the x86 family and others in the ARM family, with names such as Broadcom Vulcan, Cavium ThunderX, Fujitsu Post-K, and Qualcomm Centriq. In the final section, "Future Hardware Conclusions," readers are introduced to future memory technologies such as die stacking, as well as increased thread counts, wider vectors, and more complex memory hierarchies.

This book should interest a cross section of readers, from high-performance computing programmers to historians of programming languages and compilers. With Fortran currently making interesting historical appearances, for example, in Hidden Figures, a film about the African-American women who learned and applied Fortran for the space program starting in 1961, this book shows how Fortran remains one of the most powerful languages for high-performance computing. And who better to bring out that interest than coauthor Levesque, who in 1989 co-wrote A guidebook to Fortran on supercomputers [1].

Reviewer:  Scott Moody Review #: CR146572 (1908-0296)
1) Levesque, J. M.; Williamson, J. W. A guidebook to Fortran on supercomputers. Academic Press, San Diego, CA, 1989.
Compilers (D.3.4 ... )
Distributed Systems (D.4.7 ... )
Other reviews under "Compilers":
An architecture for combinator graph reduction. Philip John J., Academic Press Prof., Inc., San Diego, CA, 1990. Type: Book (9780124192409). Reviewed: Feb 1 1992
Crafting a compiler with C. Fischer C., Richard J. J., Benjamin-Cummings Publ. Co., Inc., Redwood City, CA, 1991. Type: Book (9780805321661). Reviewed: Feb 1 1992
A methodology and notation for compiler front end design. Brown C., Paul W. J. Software--Practice & Experience 14(4): 335-346, 1984. Type: Article. Reviewed: Jun 1 1985

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®