This paper deals with the notion of invariant functions: functions whose value remains unchanged by each iteration of a loop. For example, in:
WHILE s < a DO s := s + c (s, a, c ∈ ℤ)
the function f(s) = s mod c yields the same value before and after each iteration; hence f is an invariant function. Clearly, invariant functions are related to loop invariants, which are called “increasing” predicates in this paper. Three theorems explore this relation. The proofs are trivial, relying on the fact that predicates are related to their characteristic functions. A notion of strength is defined for invariant functions, stronger functions being “more injective than” weaker ones. Strongest invariant functions are related to the function computed by a while statement. The terminology is unfortunate, since a loop may have several “strongest” invariant functions. A whole section is devoted to mathematical exercises on this notion, using unnecessarily unfamiliar notation. The oddest is the use of set CRT a, b: integer; c: real SUB a < b end in place of the more familiar {(a, b, c) ∈ ℤ × ℤ × ℝ | a < b}.
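The invariance claim is easy to check concretely. The sketch below (this reviewer's illustration, not code from the paper under review) runs the loop WHILE s < a DO s := s + c and records f(s) = s mod c before each iteration and after the last, confirming that the value never changes:

```python
def run_loop(s, a, c):
    """Execute WHILE s < a DO s := s + c, recording f(s) = s mod c
    before the first iteration and after each subsequent one."""
    values = [s % c]
    while s < a:
        s = s + c            # loop body: s := s + c
        values.append(s % c)
    return s, values

final_s, observed = run_loop(s=1, a=10, c=3)
# f(s) = s mod 3 takes the same value (here 1) at every step,
# so f is invariant under iterations of this loop.
assert all(v == observed[0] for v in observed)
```

The check works because s := s + c changes s by a multiple of c, which leaves s mod c fixed; this is exactly the sense in which f is invariant even though s itself changes.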
The motivation for all this seems to be the generation of loop invariants from strongest invariant functions, which are to be synthesized from the while statement. However, the only cases considered are loop bodies consisting of assignments of the form s := s op c, with op drawn from the usual arithmetic operators. These hardly constitute a representative sample of while statements, much less a “systematic analysis.”
“Paradoxically (and ironically),” the authors state, the study of strongest invariant functions has “shown us how little we really understand” while statements. The little we really understand was formulated by Floyd, Hoare, and Dijkstra more than a decade ago.