A simulated machine, the Threaded Interpretive Graph Reduction Engine (TIGRE), has been created and used to investigate cache techniques on VAX and DEC computing systems. Alternative algorithms are compared, and data are presented that show both the improvements to be expected from algorithmic variants and the effect of cache size on a variety of problems. The benchmark problems include Fibonacci number computation, prime number generation, and the eight queens puzzle.
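For context on what such an engine reduces: TIGRE evaluates lazy functional programs by rewriting combinator graphs. The sketch below is a minimal illustration of plain S/K/I combinator reduction in Python; it is not TIGRE's threaded-code implementation (which updates shared graph nodes in place for speed), and all names here are the reviewer's own illustrative choices.

```python
# Minimal S/K/I combinator reduction, for illustration only.
# TIGRE executes combinator graphs as threaded code with in-place
# updates; this sketch uses simple tree rewriting for clarity.

class App:
    """Application node: fun applied to arg."""
    def __init__(self, fun, arg):
        self.fun = fun
        self.arg = arg

def rebuild(head, args):
    """Re-apply leftover arguments to a rewritten head."""
    for a in args:
        head = App(head, a)
    return head

def whnf(node):
    """Reduce to weak head normal form by repeated S/K/I rewriting."""
    while True:
        # Unwind the spine of applications to find the head combinator.
        spine, head = [], node
        while isinstance(head, App):
            spine.append(head)
            head = head.fun
        # Arguments listed nearest-the-head first.
        args = [a.arg for a in reversed(spine)]
        if head == 'I' and len(args) >= 1:        # I x      -> x
            node = rebuild(args[0], args[1:])
        elif head == 'K' and len(args) >= 2:      # K x y    -> x
            node = rebuild(args[0], args[2:])
        elif head == 'S' and len(args) >= 3:      # S f g x  -> (f x)(g x)
            f, g, x = args[0], args[1], args[2]
            node = rebuild(App(App(f, x), App(g, x)), args[3:])
        else:
            return node  # no redex at the head: done

# Example: S K K behaves as the identity, so ((S K) K) v reduces to v.
skk_v = App(App(App('S', 'K'), 'K'), 'v')
print(whnf(skk_v))  # -> v
```

The recursive Fibonacci and queens benchmarks mentioned in the review are compiled into graphs of exactly this kind before reduction, which is why cache behavior during graph traversal dominates performance.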
The two most significant conclusions are that the new method can be expected to yield a speed-up of roughly a factor of two, and that a cache of about 64 Kbytes is generally adequate. Proposals are made for a "super TIGRE," but the authors appear to have no plans to extend the work to the more common IBM-style machines.