The potentially high costs of noise reduction often come up in the context of algorithms, where there are growing objections to “algorithmic bias.” As we have seen, algorithms eliminate noise and often seem appealing for that reason. Indeed, much of this book might be taken as an argument for greater reliance on algorithms, simply because they are noiseless. But as we have also seen, noise reduction can come at an intolerable cost if greater reliance on algorithms increases discrimination on the basis of race and gender, or against members of disadvantaged groups. […]
Undoubtedly, we need to draw attention to the costs of noiseless but biased algorithms, just as we need to consider the costs of noiseless but biased rules. The key question is whether we can design algorithms that do better than real-world human judges on a combination of criteria that matter: accuracy and noise reduction, and nondiscrimination and fairness. A great deal of evidence suggests that algorithms can outperform human beings on whatever combination of criteria we select. (Note that we said can and not will.) For instance, as described in chapter 10, an algorithm can be more accurate than human judges with respect to bail decisions while producing less racial discrimination than human beings do. Similarly, a résumé-selection algorithm can select a better and more diverse pool of talent than human résumé screeners do.
These examples and many others lead to an inescapable conclusion: although a predictive algorithm in an uncertain world is unlikely to be perfect, it can be far less imperfect than noisy and often-biased human judgment. This superiority holds in terms of both validity (good algorithms almost always predict better) and discrimination (good algorithms can be less biased than human judges). If algorithms make fewer mistakes than human experts do and yet we have an intuitive preference for people, then our intuitive preferences should be carefully examined.