The authors observe that a simulation distributed over several processors will perform differently depending on how its parts are mapped onto the processors. They state further that efficiency is gained when the load on each processor is evenly balanced and when message passing between processors is kept to a minimum. One way of balancing the loads is vectored simulation: multiple independent runs of the same model are performed simultaneously, with the mapping of the system onto the network permuted from run to run, so that across the vector of runs every processor carries a comparable share of the work. The authors attempt to compare existing systems with vectored distributed simulation using the Wolf rollback algorithm, and they claim a significant reduction in error propagation compared with Time Warp.
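The load-balancing idea is easiest to see in a toy example. The following Python sketch is mine, not the authors'; the names and cost figures (lp_cost, NUM_PROCS) are hypothetical. It shows how cyclically permuting the mapping of logical processes (LPs) to processors across a vector of independent runs evens out a load that any single mapping leaves unbalanced:

```python
# Minimal sketch of the load-balancing idea behind vectored simulation.
# All names and costs are illustrative assumptions, not from the paper.

NUM_PROCS = 4

# Hypothetical per-step cost of each logical process: LP 0 is heavy.
lp_cost = [8.0, 2.0, 1.0, 1.0]

def mapping_for_run(run):
    """Cyclically permute the LP-to-processor mapping for each run."""
    return [(lp + run) % NUM_PROCS for lp in range(len(lp_cost))]

def per_proc_load(mapping):
    """Total cost assigned to each processor under a given mapping."""
    load = [0.0] * NUM_PROCS
    for lp, proc in enumerate(mapping):
        load[proc] += lp_cost[lp]
    return load

# A single run concentrates the heavy LP on one processor ...
print(per_proc_load(mapping_for_run(0)))   # [8.0, 2.0, 1.0, 1.0]

# ... but across NUM_PROCS independent runs, each processor hosts
# every LP exactly once, so the aggregate load is perfectly even.
total = [0.0] * NUM_PROCS
for run in range(NUM_PROCS):
    for proc, load in enumerate(per_proc_load(mapping_for_run(run))):
        total[proc] += load
print(total)  # [12.0, 12.0, 12.0, 12.0]
```

Of course, this evens the load only in aggregate over the vector of runs, and only because the toy costs are static and known; it says nothing about message traffic between processors.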
However, the examples used to illustrate both the effect of different mappings on efficiency and the purported advantages of vectored simulation are much too trivial. I wonder how vectoring would be applied to more complicated physical systems. Is there, or could there be, a methodology or an automatic procedure that would apply to systems in general? Until these questions have been addressed, I question the practical value of this approach.