This book is a Marxist critique of the impact that artificial intelligence (AI), specifically machine learning, will have on labor. It was originally a dissertation, later modified for release as a book. This is problematic for a reviewer, as dissertations are written primarily for a dissertation committee, to demonstrate the candidate's knowledge of the subject area and ability to conduct research within it. They are not written to be informative, interesting, or readable. Books derived from dissertations do not fare much better, although there are rare exceptions.
A few quotes from the author succinctly summarize the book. Steinhoff states: “I argue that the AI industry demonstrates the increasing autonomy of capital from labour and not the other way around.” This means, as I understand it, that while it is possible for AI to increase the autonomy of labor, the current trajectory we are on will do the opposite.
He goes on to say: “Central to my argument is an empirical study of work in the AI industry based on interviews I conducted in 2017-2018 with workers (labour) and management (representatives of capital) in the AI industry.” I am a little concerned that the conclusions rest on a sample of just 15 interviewees, analyzed by the author himself.
Then, a few lines later, he says: “The empirical study is interpreted through a reading of Marx which draws on labour process theory and value-form analysis of the machine learning labour process, or the concrete series of actions that go into producing AI commodities.” While I applaud the author’s honesty, I am deeply concerned that the study is so blatantly ideological. Ideological research tends to start with the principles of the ideology and look for evidence to support them. When a researcher begins with a set of beliefs, it is far more likely that he or she will find evidence to support those beliefs than to refute them. I suppose a critic might point out that the importance of objectivity is a tenet of my own ideology, and there would be merit to that assertion. While I realize that there is no such thing as perfect objectivity, I do think that one must attempt to be as objective as one possibly can. Having said that, we can move on.
The founder of labor process theory was an early 20th century American Marxist named Harry Braverman, who criticized Taylorism (scientific management) because it resulted in deskilling the work done by blue-collar workers. For those who are not well versed in the tenets of Marxism, this suggests that something of value (skills and/or knowledge) was extracted from labor in the process of automation, benefiting management (or capital) in the form of greater profits. This assertion is not without merit, although it is one of many ways to interpret the impact of scientific management. When a process is idealized and automated, the skills of the workers are transferred to the process. But Braverman’s view is a little naive, in that idealizing and automating the process itself adds value, and doing so requires skills and knowledge far beyond those possessed by the worker. In fact, in The Principles of Scientific Management, Taylor specifically states that workers cannot idealize their own processes, suggesting that additional skills and knowledge are necessary.
Nonetheless, the more efficient processes benefit capital (or management, if you prefer), which enjoys greater power and greater profits at the expense of labor, which experiences less autonomy and somewhat dehumanized working conditions. If you accept this premise, it can be applied to enterprise information systems, expert systems, and now machine learning, although the last case is not quite as obvious, since machine learning extracts knowledge from data rather than from workers. I do not dispute that this is a legitimate interpretation. But it is only one of many. I would argue that all this automation led to a dramatic improvement in quality of life for most of the people on earth, although some would dispute that as well.
The concluding chapter, “Harry Braverman Overdrive,” is a pun on Mona Lisa Overdrive, the third volume in William Gibson’s “Sprawl” trilogy. For those who are not familiar with the “Sprawl” trilogy, the first book is Neuromancer, which presents a darkly dystopian future in which technology has run amok, drug use is rampant, and people are dehumanized. The allusion suggests that our pursuit of machine intelligence will create a world that we would not want to live in.
Do I think that AI will eliminate a lot of jobs? Absolutely! I am not sure what the timeframe is, but it is soon enough for us to start thinking seriously about it. Will this result in a lot of workers being displaced and having to find new marketable skills, if that is even possible? Again, absolutely! Will the world be a better place? Well, there’s the rub. Whether the world is a better place depends on how you measure “better.” And how you measure “better” depends on your ideological framework. So, the best we can say is that we don’t know, but we can speculate. To that end, this book offers some speculation, albeit dystopian speculation. It could have done so in a way that is much easier to understand. But it is a book based on a dissertation. And, for that, it did about as good a job as one might expect.
Who might be interested in this book? Sadly, its audience is somewhat limited to academic Marxists and researchers in this area who agree ideologically with the author and will chuckle contentedly at the pun in the concluding chapter. I realize this scares off most potential readers, and this book is not for everybody. But its message, that AI poses some serious threats to labor, is something that we need to start thinking seriously about.