There is an imbalance between the speed of central processing units (CPUs) and the access time of memory subsystems. This so-called “memory wall” significantly limits achievable performance in current and future computer architectures. One way to tackle the issue is to compress data in main memory or on disk, thereby reducing the volume of data exchanged. Lossless compression preserves computational accuracy, but its payoff is much lower than that of more efficient lossy compression schemes.
The authors of this paper discuss the practical impact of such losses on actual computations, focusing on APAX and fpzip, two coders specialized for floating-point data. They evaluate how these two approaches affect the end results of three simulation benchmarks (LULESH, Miranda, and pF3D), which represent different domains of physics, such as hydrodynamics and laser-plasma interactions. The authors emphasize that physically meaningful differences between compressed and uncompressed runs should be evaluated in addition to traditional measures such as mean square error. The comparisons are based on, for instance, the symmetry of the computed fields, the structure of intensity histograms, the height of turbulent mixing layers, and the spectrum of perturbations as a function of spatial frequency. Detailed analyses show that compression ratios of up to 4:1 can be used most of the time without jeopardizing the practical validity of these simulations.
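To make the distinction between traditional and physically motivated metrics concrete, here is a minimal sketch, not taken from the paper, that emulates lossy floating-point compression by truncating mantissa bits (a stand-in for APAX or fpzip, whose actual algorithms differ) and then compares a pointwise error measure against a physical property, the symmetry of a field. The helper `truncate_mantissa` and the test field are hypothetical illustrations:

```python
import numpy as np

def truncate_mantissa(x, keep_bits):
    """Emulate lossy compression by zeroing low-order mantissa bits of
    float64 values. This is an illustration only, not APAX or fpzip."""
    bits = x.view(np.uint64)
    # Keep the top `keep_bits` of the 52-bit mantissa; zero the rest.
    mask = np.uint64(~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF)
    return (bits & mask).view(np.float64)

# A field that is symmetric about x = 0: pointwise errors may be tiny,
# but preserving the symmetry is what physically matters.
x = np.linspace(-1.0, 1.0, 1001)
field = np.exp(-x**2)

lossy = truncate_mantissa(field, keep_bits=16)

mse = np.mean((field - lossy) ** 2)         # traditional metric
asym = np.max(np.abs(lossy - lossy[::-1]))  # physically motivated metric

print(f"MSE after truncation: {mse:.3e}")
print(f"Symmetry violation:   {asym:.3e}")
```

Both numbers are small here, but in a real simulation the two metrics can diverge: a scheme with a low mean square error may still break a symmetry or distort a spectrum, which is precisely why the authors advocate domain-specific comparisons.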
This easy-to-read paper should be of value to scientific computing specialists and computer architects interested in achieving maximal performance on high-performance computing systems.