This is a timely paper in light of recent stories about bias in artificial intelligence (AI) systems, such as the COMPAS system used in Florida to predict recidivism. The tutorial’s aim is to describe what the authors call a “fairness-first approach” to machine learning. This is similar to a security-first view of system
construction: fairness should be built in from the start rather than bolted on later. The tutorial covers industry best practices, sources of bias, algorithmic techniques for achieving fairness, and fairness methods in practice. The notion of fairness is discussed along with various definitions, for example, individual and group fairness, including fairness in ranking users for purposes such as credit offers.
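As a rough illustration of the difference between these two fairness notions (a sketch of my own, not drawn from the tutorial itself), the following hypothetical example checks demographic parity, one common group fairness criterion, on made-up model outputs:

```python
# Illustrative sketch only: demographic parity, a common group fairness
# criterion, asks that the rate of positive decisions be similar across groups.
# All data below is hypothetical.

def positive_rate(decisions):
    """Fraction of individuals receiving the positive outcome (e.g., a credit offer)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = credit offered, 0 = not offered) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

disparity = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity gap: {disparity:.2f}")

# Individual fairness, by contrast, requires that similar individuals receive
# similar decisions, which presupposes a task-specific similarity metric rather
# than a simple per-group rate comparison.
```

The tutorial itself surveys such definitions in more depth; this is only meant to convey why group and individual fairness can pull in different directions.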
A bibliography provides further references on this important topic. Because this paper is only a brief invitation to the tutorial, readers who missed the conference presentation should consult the papers listed in the bibliography for a more comprehensive introduction to fairness in machine learning.