Computing Reviews
Compressed matrix multiplication
Pagh R. ACM Transactions on Computation Theory 5(3): 1-17, 2013. Type: Article
Date Reviewed: Jul 18 2014

This paper is concerned with approximating a matrix product, that is, finding the large entries (AB)_{i,j} [and the corresponding indices (i,j)] from A and B without computing AB first. Two main applications are cited explicitly, and a third implicitly:

(1) computing a covariance matrix from samples, where most of the variables are independent, and hence the sampled covariances are mostly small (and uninteresting);
(2) JPEG-style transformation of column vectors to a basis in which they are (approximately) sparse; and
(3) when AB is genuinely sparse, that is, most of the entries are not merely small but actually zero.
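Application (1) is easy to see numerically: when the variables are independent, the off-diagonal entries of the sample covariance matrix are small, so most of the matrix is uninteresting. A minimal illustration in NumPy (the sample size, dimension, and thresholds are mine, chosen only to make the point visible; they are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 samples of 50 independent standard normal variables
X = rng.standard_normal((10000, 50))
C = np.cov(X, rowvar=False)  # 50 x 50 sample covariance matrix

# Off-diagonal sampled covariances are all small (true value is 0);
# the diagonal variances are close to 1.
off_diag = C - np.diag(np.diag(C))
print(np.abs(off_diag).max())  # small, on the order of 1/sqrt(10000)
```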

Let A and B be n × n matrices (the paper does consider nonsquare matrices as well), and suppose AB has b large (or nonzero) entries. In the case of a sparse product, let b̄ be the number of entries in AB that are not trivially zero, that is, entries that are either nonzero, or zero only because two or more nonzero summands cancel. Let ||A||F denote the Frobenius norm of the matrix A, that is, the square root of the sum of the squares of the entries. The aim, then, is to compute a matrix C such that ||C-AB||F is small.
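For concreteness, the error measure is the ordinary Frobenius norm, which agrees with NumPy's built-in (a trivial sanity check, not taken from the paper):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Frobenius norm: square root of the sum of squared entries
frob = np.sqrt((A ** 2).sum())
print(frob)  # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```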

Without actually computing AB, the author regards AB as a sparse signal to be approximated. The methods are probabilistic, “compressing” A and B, via hash functions, into polynomials, using fast Fourier transform methods to multiply the polynomials, and then using the author’s algorithm FindSignificantEntries to reconstruct the locations of the large elements of AB. This algorithm is not easy to follow: the reader should be warned at the outset that S is a multiset, not a set, but iteration over S is over distinct members of S. I understand the logic in terms of b̄, but confess to not following the reduction to b.
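A much-simplified sketch of the compression step may help. It views AB as a sum of outer products of columns of A with rows of B, count-sketches each outer product into a single length-d array by hashing rows and columns, and multiplies the two compressed "polynomials" with an FFT. This is not the paper's algorithm: it uses fully random hash and sign functions where Pagh specifies limited-independence families, and it reads single entries straight from the sketch instead of running FindSignificantEntries.

```python
import numpy as np

rng = np.random.default_rng(0)

def compressed_product(A, B, d):
    """Count-sketch AB = sum_k A[:,k] B[k,:] into a length-d array.
    Fully random hashes/signs are a simplification of the paper's
    limited-independence hash families."""
    n = A.shape[0]
    h1 = rng.integers(0, d, size=n)   # row hash
    h2 = rng.integers(0, d, size=B.shape[1])   # column hash
    s1 = rng.choice([-1.0, 1.0], size=n)        # row signs
    s2 = rng.choice([-1.0, 1.0], size=B.shape[1])  # column signs
    sketch = np.zeros(d)
    for k in range(A.shape[1]):
        # compress column k of A and row k of B into degree-(d-1) polynomials
        pa = np.zeros(d)
        pb = np.zeros(d)
        np.add.at(pa, h1, s1 * A[:, k])
        np.add.at(pb, h2, s2 * B[k, :])
        # polynomial multiplication (circular convolution mod d) via FFT
        sketch += np.real(np.fft.ifft(np.fft.fft(pa) * np.fft.fft(pb)))
    return sketch, (h1, h2, s1, s2)

def estimate(sketch, hashes, i, j):
    """Unbiased estimate of (AB)_{i,j} read from one sketch."""
    h1, h2, s1, s2 = hashes
    d = len(sketch)
    return s1[i] * s2[j] * sketch[(h1[i] + h2[j]) % d]

# When AB has a single nonzero entry, the sketch recovers it exactly.
A = np.zeros((8, 8)); B = np.zeros((8, 8))
A[0, 0] = 5.0; B[0, 0] = 3.0
sk, hs = compressed_product(A, B, 64)
print(estimate(sk, hs, 0, 0))  # (AB)_{0,0} = 15
```

In the real algorithm several independent sketches are combined to control the error probability, and the large entries are located by FindSignificantEntries rather than by querying known positions.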

The results are somewhat technical, since they are all probabilistic in terms of the hash functions, but a typical result would be an algorithm to compute a sparse product, with high probability, in time ~O(N+nb), where N bounds the number of nonzeros in A and B.

There is no experimental analysis, which is rather a pity, as

(1) the paper is full of ~O- and o-expressions, which makes practical appreciation hard;

(2) the key reduction is to fast Fourier transforms, which are generally implemented efficiently on modern hardware; and

(3) it is claimed (page 9:8) that the method only makes a single pass over A and B (assumed stored in column-major and row-major order, respectively), which is a significant improvement on straightforward multiplication.
Reviewer:  J. H. Davenport Review #: CR142523 (1410-0876)
Categories: Hash-Table Representations (E.2); Numerical Algorithms And Problems (F.2.1); Mathematical Software (G.4); Probability And Statistics (G.3)
Other reviews under "Hash-Table Representations":
On the use of extendible hashing without hashing
Bechtold U., Kuspert K. Information Processing Letters 19(1): 21-26, 1984. Type: Article
Mar 1 1985
Analysis of new variants of coalesced hashing
Chen W., Vitter J. (ed) ACM Transactions on Database Systems 9(4): 616-645, 1984. Type: Article
Jun 1 1985
A polynomial time generator for minimal perfect hash functions
Sager T. Communications of the ACM 28(5): 523-532, 1985. Type: Article
Jun 1 1986
