In May 2016, the driver of a Tesla Model S equipped with the Autopilot feature was killed when his vehicle crashed into a tractor-trailer. Tesla confirmed that Autopilot was active and that the brakes were not applied, either by the driver or by the automatic braking system.
Users of such systems should be protected from similar failures, whether caused by malfunction or by user error. One approach to testing relies on data collected from early adopters. The more traditional approach is to develop specifications for the safety-critical system, implement it, and test the implementation against those specifications. The latter can be prohibitively expensive and may delay the introduction of beneficial features.
The authors of this paper propose to reduce cost and improve testing by increasing the degree of automation through synergistic execution, in which a system is validated against its model by running the implementation and the task model side by side and comparing their behavior. A significant obstacle is covering scenarios that involve user error: normative models describe only the expected behavior of the system, and admitting possible user errors greatly expands the number of scenarios. The authors therefore introduce scenario mutation, restricted to mutations that correspond to user errors, such as unintended actions or unsuitable strategies.
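The two ideas above can be illustrated with a minimal sketch: a toy system whose model and implementation are stepped in lockstep over a scenario, plus a mutation operator that mimics common user slips. All names here (step_model, step_impl, run_lockstep, mutate_user_error) and the counter-style system are hypothetical illustrations, not the paper's actual artifacts.

```python
# Hedged sketch of synergistic execution on a toy system.
# The "system" is an illustrative counter; none of this comes from the paper.
import random

def step_model(state, action):
    # Normative task model: increments, resets, ignores unknown actions.
    if action == "inc":
        return state + 1
    if action == "reset":
        return 0
    return state

def step_impl(state, action):
    # Implementation under test; here it happens to agree with the model.
    if action == "inc":
        return state + 1
    if action == "reset":
        return 0
    return state

def run_lockstep(scenario):
    """Run model and implementation side by side; report the first divergence."""
    m = i = 0
    for t, action in enumerate(scenario):
        m = step_model(m, action)
        i = step_impl(i, action)
        if m != i:
            return ("diverged", t, action)
    return ("ok", m, i)

def mutate_user_error(scenario, rng):
    """Mutate a scenario to mimic a user slip: omit or repeat one action."""
    s = list(scenario)
    k = rng.randrange(len(s))
    if rng.random() < 0.5:
        del s[k]           # unintended omission of an action
    else:
        s.insert(k, s[k])  # unintended repetition of an action
    return s

rng = random.Random(0)
base = ["inc", "inc", "reset", "inc"]
print(run_lockstep(base))
print(run_lockstep(mutate_user_error(base, rng)))
```

The point of the mutation step is that only scenarios derived from plausible user errors are added, keeping the scenario space tractable rather than enumerating all deviations from normative behavior.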
They validate their approach by comparing the behavior of a model of an airplane flight control unit against a simulation. While they demonstrate feasibility, the currently available methods and tools remain quite limited. Combined with the difficulty of creating fully specified models of systems such as self-driving vehicles, users will have to live with the risk inherent in using products that are still under active development.