The paper discusses the effectiveness of inspections and peer reviews as software validation tools. It describes the use of these techniques, in their classical form, at the IBM Santa Teresa Laboratory. The author goes on to provide a formula for the "ideal" number of errors projected to remain in the released product, and a "desirable" ratio between the errors found through inspections, reviews, and unit testing on the one hand and integration testing on the other.
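The review does not reproduce the paper's actual formula, so the following sketch is purely illustrative: it computes one plausible version of such a gauge, the ratio of defects removed before integration testing to those found during it, using hypothetical counts and function names.

```python
# Illustrative sketch only. The paper's formula is not given in this review,
# so the quantities and the shape of the ratio here are assumptions.

def pre_integration_ratio(inspection, reviews, unit_testing, integration_testing):
    """Ratio of defects found before integration testing to defects found
    during integration testing. A higher value suggests that defect removal
    is happening earlier, which is the goal of the inspection process."""
    early = inspection + reviews + unit_testing
    return early / integration_testing

# Hypothetical defect counts for a single release:
ratio = pre_integration_ratio(inspection=120, reviews=45,
                              unit_testing=60, integration_testing=50)
print(ratio)  # 4.5
```

A project could track this ratio release over release and compare it against the paper's "desirable" target to judge whether its inspection process is pulling defect discovery earlier in the life cycle.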
One can see the value of having such a gauge of the effectiveness of group inspection processes. Without consulting reference [1], on which this part of the paper is largely based, however, the reader will not be convinced that the "ideals" given here are truly ideal.
The paper offers some interesting data on defects dislodged at the various stages of reviews and inspections. The author also discusses what I take to be the ratio of the costs of correcting (rather than detecting) errors during inspection, machine testing, and production. The work would have benefited from careful editing.