The unbounded search problem considered here is defined as follows. Let F : N → {X, Y} be a function such that if F(n) = Y, then F(m) = Y for all m ≥ n. The task is to determine the location of the first Y, or equivalently of the last X, value of F. The cost c(n) of a search algorithm is the number of tests "F(i) = X?" it performs when n is the location of the first Y value.
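To make the cost measure concrete, here is an illustrative Python sketch of the classical doubling-plus-binary strategy (in the spirit of Bentley and Yao's second algorithm, not the paper's own construction); the oracle `F`, the helper `probe`, and the test counter are ours, and we assume the first Y occurs at some n ≥ 1:

```python
def unbounded_search(F):
    """Locate the first n >= 1 with F(n) == 'Y', counting oracle tests.

    Doubling phase: test F(1), F(2), F(4), ... until a 'Y' is seen.
    Binary phase: binary-search the bracketed interval.
    The cost is roughly 2*log2(n) tests.
    """
    tests = 0

    def probe(i):
        nonlocal tests
        tests += 1
        return F(i)

    # Doubling phase: find the first power of two at which F is 'Y'.
    hi = 1
    while probe(hi) == 'X':
        hi *= 2
    lo = hi // 2 + 1  # the first Y lies in [lo, hi]

    # Binary phase on [lo, hi].
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid) == 'Y':
            hi = mid
        else:
            lo = mid + 1
    return lo, tests
```

For example, with the first Y at n = 37, the doubling phase probes 1, 2, 4, 8, 16, 32, 64 and the binary phase then needs five more tests.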
This problem was introduced by Bentley and Yao [1], and it is equivalent to table lookup in an infinite ordered table. Moreover, from any search algorithm having cost c(n), one can immediately construct a prefix-free binary encoding scheme for the integers in which the codeword for n has c(n) bits [1,2].
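The correspondence can be illustrated with the Elias gamma code, whose codeword length 2⌊log₂ n⌋ + 1 mirrors the test count of a doubling-plus-binary search, one bit per test outcome (this is a standard example, not necessarily the encoding scheme constructed in [1,2]):

```python
def elias_gamma(n):
    """Elias gamma code: a prefix-free binary encoding of n >= 1.

    The codeword is floor(log2 n) zeros (a unary length prefix) followed
    by the binary representation of n, giving 2*floor(log2 n) + 1 bits.
    """
    assert n >= 1
    b = bin(n)[2:]                  # binary representation of n
    return '0' * (len(b) - 1) + b  # unary length prefix, then the digits
```

For instance, `elias_gamma(1)` is `'1'` and `elias_gamma(5)` is `'00101'`; no codeword is a prefix of another.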
Generalizing Bentley and Yao's construction, Reingold and Shen [2] presented a sequence of unbounded search algorithms, with associated lower bounds, that starts with linear search. The second algorithm is a form of binary search, the third is almost identical to the ultimate algorithm of Bentley and Yao [1], and for each i ≥ 3 the improvement from the ith algorithm to the (i+1)st is equally dramatic. Finally, in their previous paper [2] the authors diagonalized over this sequence of algorithms and obtained an "almost ultimate" algorithm that is optimal to within an additive term of α(n) (α is the canonical inverse function satisfying α(A(n)) = n, where A is Ackermann's function).
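To fix ideas about the functions involved, here is a sketch of one common two-argument Ackermann–Péter variant, diagonalized to a one-argument A, together with its inverse α; the precise variant used in [2] may differ, and the lookup table in `alpha` is our shortcut, justified because A(4) already exceeds any physically representable input:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Two-argument Ackermann-Peter function (one common variant)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def A(n):
    """Diagonal A(n) = ackermann(n, n): A(0)=1, A(1)=3, A(2)=7, A(3)=61,
    and A(4) is already astronomically large."""
    return ackermann(n, n)

def alpha(x):
    """alpha(x) = the largest n with A(n) <= x, so alpha(A(n)) = n.
    Since A(4) dwarfs any feasible input, a table of A(0..3) suffices."""
    table = [1, 3, 7, 61]  # A(0), A(1), A(2), A(3)
    n = 0
    for i, v in enumerate(table):
        if v <= x:
            n = i
    return n
```

The point of the speed-up results is that α grows more slowly than the inverse of any primitive recursive function: α(x) ≤ 3 for every x a computer could ever store.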
This paper shows that this process can be iterated ad infinitum, each time yielding an algorithm and a corresponding lower bound that are much closer together. The iteration is carried out via generalized Ackermann functions indexed by ordinals ι ≤ ε₀; if ι is a limit ordinal, then the corresponding algorithm is based on a diagonalization over the fundamental sequence for ι. Nevertheless, the analysis of the algorithms requires a surprising effort and rests mainly on the Cantor normal form. These techniques seem to be of independent interest.
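For readers unfamiliar with this machinery: every ordinal 0 < ι < ε₀ has a unique Cantor normal form, and limit ordinals carry a standard assignment of fundamental sequences (the paper's exact assignment may differ from the conventional one sketched here):

```latex
\iota = \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k,
\qquad \beta_1 > \beta_2 > \cdots > \beta_k, \quad c_i \in \mathbb{N}_{>0},
```

with, for example, the fundamental sequences (ω^{β+1})[n] = ω^β · n and (ω^ι)[n] = ω^{ι[n]} for limit ι, and ε₀ = sup{ω, ω^ω, ω^{ω^ω}, …}. Diagonalizing over ι[0], ι[1], ι[2], … is what produces the algorithm at a limit stage.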
This paper narrows the gap between the cost of the best algorithm and the best lower bound to a razor-thin margin. The authors also extend their result to the asymmetric case, in which a Y test and an X test have different costs.
The paper is of great theoretical interest in that it exhibits a problem with a highly nontrivial speed-up. However, the smallest arguments n₀ beyond which each new algorithm dramatically outperforms its predecessor for all n ≥ n₀ grow extremely fast. Thus the paper is of fundamental epistemological importance but of almost no practical relevance.