Do moral issues arise in how we treat artificial intelligence (AI) systems such as robots? The word “patiency” in the title refers to whether an AI system can be treated in a way that is morally “wrong.” The author concludes that, while it is possible to build a machine capable of suffering harm that would be morally wrong, there is no reason to build one. People should not abdicate their role as the principal agents of their own technology. Furthermore, mistreating a machine that has moral standing could lower standards for the treatment of humans and animals.
Robots, like plush toys, should not deceive people into thinking they must be treated as creatures deserving of empathy; transparency is important. It may turn out that the development of advanced AI systems will have to follow biologically inspired templates, producing systems that can feel pain. If so, the issue should be reconsidered, but we have not yet reached that point.
The paper explains descriptive and normative ethics, focusing on the latter. An ethical system has two objectives: 1) coherence (“ought implies can”), and 2) conservatism (new norms should minimally restructure existing ones). AI influences ethical norms, and the question is whether new norms would make it moral to delegate responsibility to entities designed by humans.
Treating AI systems as moral agents may, in the future, become a way to control these opaque systems or to advance their development; but, again, we have not reached that stage. These provocative ideas may be difficult to follow for readers unfamiliar with the literature.