The theory, avoiding any notion of possible worlds, takes beliefs to be sentences of first-order logic stored in the agent’s head. The language is a standard first-order language that contains its own truth predicate (applicable to sentences) and is extended with quotation, so that other agents’ beliefs, and even proofs within the system, can be described. A neat trick allows quantification into quoted expressions.
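To indicate the flavor of this machinery (the notation below is schematic, not necessarily the author’s own; the substitution function subst and the naming function name are illustrative stand-ins for whatever device the paper actually employs), belief becomes an ordinary first-order predicate over quoted sentences, and quantifying into a quoted context goes through substitution:

\[
\mathrm{Bel}(\mathrm{John},\, \ulcorner \mathrm{Tall}(\mathrm{Mary}) \urcorner)
\]
\[
\exists x\; \mathrm{Bel}\bigl(\mathrm{John},\, \mathrm{subst}(\ulcorner \mathrm{Tall}(z) \urcorner,\, \ulcorner z \urcorner,\, \mathrm{name}(x))\bigr)
\]

The second formula says that some individual x exists such that John believes the sentence obtained by putting a name of x into the quoted matrix Tall(z); the quotation itself never contains a free variable, which is what keeps the construction well formed.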
The theory is applied to a robot, formalizing its perception, introspection, planning, and inferences about other agents’ beliefs and about the inferences those agents make. A novel formalization is given of the notion of “knowing what,” which is made relative to the purpose for which the knowledge will be used. An Appendix shows that the theory remains consistent even though its language is self-describing.
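Schematically, and with the caveat that the predicate Suitable below is an illustrative placeholder rather than the paper’s actual definition, purpose-relative “knowing what” can be pictured as believing an identity whose right-hand term is usable for the purpose at hand:

\[
\mathrm{KnowWhat}(a, t, p) \;\equiv\; \exists s\, \bigl(\mathrm{Suitable}(s, p) \wedge \mathrm{Bel}(a,\, \mathrm{subst}(\ulcorner t = z \urcorner,\, \ulcorner z \urcorner,\, s))\bigr)
\]

On such a reading, an agent who believes a sentence giving Mary’s phone number knows what her number is for the purpose of calling her, whereas believing only the trivial identity “Mary’s number is Mary’s number” supplies no Suitable term and so does not count.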
This is a well-written research paper for the AI community, requiring very little background in logic. It improves on previous work by Moore [1] and Konolige [2]: it does not predict that agents immediately believe everything deducible from their beliefs (see the schema below), and it gives a more realistic account of “knowing what,” “knowing how,” and “knowing that,” allowing quantification over beliefs. The theory is directly amenable to implementation. However, as the author himself stresses, a weak point is that his proof techniques do not use resolution.
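Concretely, the prediction being avoided is the closure schema of logical omniscience, which in the syntactic setting would read

\[
\mathrm{Bel}(a, \ulcorner \varphi \urcorner) \wedge \mathrm{Bel}(a, \ulcorner \varphi \rightarrow \psi \urcorner) \;\Rightarrow\; \mathrm{Bel}(a, \ulcorner \psi \urcorner).
\]

Here the schema fails: the agent comes to believe ψ only by actually performing the inference, a step of computation, rather than by fiat.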