“First, do no harm” meets the Internet of Things (IoT)? This paper proposes that we need to be sure that medical systems are functional, safe, and secure, which is of course uncontroversial. Further, the authors try to show how these three concerns can be written separately and then automatically combined and checked to see whether the system meets its requirements. They describe the need for stakeholders to be involved in an iterative requirements process. So far, so good. The authors advocate the Event-B method and the Rodin toolset, and they explain the method well. Given correct specifications, the B method guarantees correct code. The method uses set theory and formal logic to express specifications, which is difficult for humans to do correctly.
One specification (from the “Security Model” section of the paper) has a common error. The authors wished to state that only a privileged user can change the settings on a dialysis machine. Their formula (inv5) is written as “Privileged ⇒ SettingEnabled,” which is the converse of the correct translation: “SettingEnabled ⇒ Privileged.” Errors like this convinced previous methodologists [1,2] to recommend graphical and/or tabular notations.
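The inverted implication is easy to demonstrate with an exhaustive truth-table check. The sketch below is plain Python rather than Event-B, and the predicate names merely echo the paper's inv5; it enumerates every state and finds exactly the one that the invariant as written admits but the intended invariant forbids: settings enabled for an unprivileged user.

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

# Enumerate every (Privileged, SettingEnabled) state and collect those that
# satisfy inv5 as written but violate the intended invariant.
counterexamples = [
    (priv, enabled)
    for priv, enabled in product([False, True], repeat=2)
    if implies(priv, enabled) and not implies(enabled, priv)
]
print(counterexamples)  # -> [(False, True)]: settings enabled, user not privileged
```

A single admitted bad state is all it takes: a verifier would happily discharge proofs against the written invariant while the machine still lets unprivileged users change settings.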
Moreover, I am not convinced that Event-B can handle nonfunctional requirements like “the patient’s data must be strongly encrypted even while it is transmitted between devices.” I would like to see more on how the authors elicit, refine, and state such requirements. Several methods exist: for example, Stoneburner [3] offers a useful framework, and Haley et al. [4] a complex logic-based method.
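Part of the difficulty is that such a requirement must first be recast as something checkable at all. The sketch below is a hypothetical illustration in plain Python, with all names (the `inter_device` and `encrypted` fields, the traces) my own assumptions rather than anything from the paper: it reduces the requirement to a boolean property over a trace of message events. Note what is lost in the reduction, since a boolean flag says nothing about how *strongly* the data is encrypted, which is exactly the kind of gap that makes nonfunctional requirements hard to formalize.

```python
# Hypothetical sketch: one way to recast "patient data must be encrypted
# in transit between devices" as a checkable property over message events.
# Field names and traces are illustrative, not taken from the paper.

def encrypted_in_transit(trace):
    """Every message that crosses a device boundary must be encrypted."""
    return all(msg["encrypted"] for msg in trace if msg["inter_device"])

# A trace satisfying the property: the only inter-device message is
# encrypted; the purely internal one need not be.
ok_trace = [
    {"inter_device": False, "encrypted": False},
    {"inter_device": True, "encrypted": True},
]

# A trace violating it: plaintext crosses the device boundary.
bad_trace = [{"inter_device": True, "encrypted": False}]

print(encrypted_in_transit(ok_trace))   # -> True
print(encrypted_in_transit(bad_trace))  # -> False
```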