The authors have developed the idea of generalized timed Petri nets, and they show how this concept can be used to model the performance of a multiprocessor system. The idea is to build a Markov chain over the possible system states, compute the steady-state probability of each state, and from these probabilities derive the desired performance measures. A major benefit of their formulation is that it permits memory requests from the processors to be exponentially distributed while the actual memory access times remain constant (rather than exponentially distributed). The paper makes careful comparisons with other literature on the subject.
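The core numerical step the review describes, solving a Markov chain for its steady-state probabilities, can be sketched as follows. This is a generic illustration, not the authors' model: the 3-state transition matrix is hypothetical, and power iteration stands in for whatever solution method the paper actually uses.

```python
# Sketch: steady-state probabilities of a small discrete-time Markov
# chain via power iteration. The 3x3 transition matrix P is a made-up
# example, not the authors' multiprocessor model.
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]

def steady_state(P, iters=10_000):
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # one step of the chain: pi <- pi * P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = steady_state(P)
print([round(p, 4) for p in pi])
```

Once the steady-state vector is in hand, a performance measure is typically a weighted sum of these probabilities (e.g., expected number of busy processors), which is the step the authors use to obtain speedup figures.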
The multiprocessor configuration assumed consists of several processors accessing several banks of shared memory over multiple shared buses. A memory request from a processor is satisfied when the processor obtains both a bus and the memory bank it needs. The main result is that, especially when buses are few, there is a critical memory request probability below which the speedup is almost equal to the number of processors and above which bus contention causes the speedup to drop rapidly.
For a 6-processor, 6-memory-bank, 3-bus case, the authors show that the traffic to one of the memory banks could become as high as one-third of all memory references without causing a significant degradation of performance. Unfortunately, this observation is not generalized, for example, to different numbers of buses.
The authors also take pains to point out the difference between speedup, a performance measure in which only the time a processor spends waiting for a contended bus or memory bank is considered wasted, and processing power, a measure in which even the time actively spent satisfying a memory request is considered wasted. While this meticulous differentiation is important from a performance measurement point of view, it is hardly surprising that the speedup invariably turns out to be higher than the processing power.
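The relationship between the two measures can be seen with a toy calculation. This is one plausible reading of the definitions above, not the paper's exact formulas: assume each processor's time splits into compute, memory-access, and contention-wait fractions, and that each measure scales the processor count by the fraction of time it counts as useful. All numbers are hypothetical.

```python
# Toy illustration of why speedup >= processing power, under the
# assumed time-fraction model (not the paper's formulas).
N = 6            # number of processors (hypothetical)
compute = 0.6    # fraction of time spent computing
access  = 0.25   # fraction actually spent accessing memory
wait    = 0.15   # fraction spent waiting on bus/bank conflicts

# Speedup: only conflict-wait time is wasted, so compute + access count.
speedup = N * (compute + access)

# Processing power: memory-access time is also counted as wasted.
processing_power = N * compute

print(f"speedup = {speedup:.2f}, processing power = {processing_power:.2f}")
```

Since processing power discards the access fraction that speedup retains, speedup can only exceed (or equal) processing power, which is why the reviewer finds the observed ordering unsurprising.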