Kate Crawford’s recent book Atlas of AI (2021) debunks the many myths that surround the field of Artificial Intelligence (AI) today, myths that the AI industry passionately feeds and indulges, pushed primarily by great economic interests. The Atlas of AI is a very timely and much-needed book, as experts and policymakers are currently engaged in many initiatives and discussions to regulate AI. Proper and effective governance of this technology will only be possible if it is grounded in the reality of AI: its materiality, its socio-political context, and its actual impacts on individuals, society, and the environment. This is precisely what Crawford brings to the field with her new book.

Nietzsche saw the debunking of myths as one of the main tasks of philosophy [2]. He called it philosophising with a hammer. The hammer taps on claims to sound them out; if they ring hollow or empty, it destroys them. This is what Crawford does in her book: she thinks of AI with a hammer, evaluates the many claims of AI promoters, and exposes the empty ones.

The main and overarching myth that Crawford hits is contained in the very name given to this technology: Artificial Intelligence. “AI is neither artificial nor intelligent” (p. 8). On the contrary, Crawford demonstrates that “artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructure, logistics, histories, and classifications” (p. 8). She shows how “AI is born from salt lakes in Bolivia and mines in Congo, constructed from crowdworker-labeled datasets that seek to classify human actions, emotions, and identities. It is used to navigate drones over Yemen, direct immigration police in the United States, and modulate credit scores of human value and risk across the world” (p. 218).

Each chapter is dedicated to a particular aspect of the deeply embodied, material, and political nature of AI. Chapter 1, entitled “Earth”, starts by taking us to Silver Peak in Nevada’s Clayton Valley, which is rich in lithium, a mineral essential to the batteries that sustain contemporary digital technologies. This first chapter shows that AI cannot be apprehended only as complex and abstract computational processes. This technology fundamentally relies upon “the atmosphere, the oceans, the earth’s crust, the deep time of the planet, and the brutal impacts on disadvantaged populations around the world” (p. 28). Chapter 2, “Labor”, continues the exploration of the extractive nature of AI, moving from geological extraction to the extraction of human labour. Here, the myth that Crawford deconstructs is that AI would make human labour pain-free, clean, “frictionless” (see quote p. 64). Instead, the hardship is pushed away, rendered invisible, hidden behind systems that are claimed to work automatically. “Thousands of people are needed to support the illusion of automation: tagging, correcting, evaluating, and editing AI systems to make them appear seamless” (p. 219).

Chapter 3, “Data”, further explores the many forms of extraction that make AI, this time looking at data extraction. It highlights the process by which datasets are constructed and what it means for individuals and communities to become data points, often without their consent. Chapter 3 also explores the implications of reducing “an infinitely complex world” into simplified categories (p. 98). Classificatory logics are the main topic of Chapter 4, in which Crawford continues to investigate the conditions of production of AI and renders these visible to the reader. Here as well, a broader approach to AI is needed, one that contends with the “underlying social, political, and economic structures” (p. 128), as these serve to impose particular categories, identities, and models on individuals and communities. Chapter 5, dedicated to “Affect”, further pursues the extractive logic at stake in AI, this time looking at the extraction of interior states in order to bring these “under the umbrella of a rational, knowable, and measurable rubric suitable to laboratories, corporations and governments” (p. 167). Finally, the sixth chapter sheds light on the military logics that have been “infused” into AI, “from explicitly battlefield-oriented notions, such as target, asset, and anomaly detection, to subtler categories of high, medium, and low risk […] creating epistemological frameworks that would inform both industry and academia” (p. 185). The Atlas of AI concludes with a powerful “coda” dedicated to Space. After all the various forms of extraction explored in the book, the planet has reached a limit. Yet the leaders of big technology companies refuse to put an end to the extractive logic of digital capitalism and find in space “the new frontier for extraction” (p. 230).

One should not expect from the Atlas of AI a “balanced” view of AI that weighs benefits against harms. The book is a radical critique of AI and its many myths, a much-needed and welcome one. But Crawford’s hammer is not only one that destroys what is empty; it is also one that creates: the hammer of the craftsperson. The Atlas of AI is a creative effort that takes the reader beyond the “conventional maps of AI to locate it in a wider landscape” (p. 218). Crawford takes us beyond Silicon Valley, into unfamiliar places, from Amazon’s fulfilment centre in Robbinsville, New Jersey (US), to the data archives of the US National Institute of Standards and Technology (NIST), by way of the Chinese rare earth mine of Bayan Obo and Papua New Guinea, where Paul Ekman studied the emotional expressions of indigenous people. As Crawford notes, “an atlas is as much an act of creativity — a subjective, political, and aesthetic intervention — as it is a scientific collection” (p. 10). The choice of the atlas is a powerful response to the narrow, abstract, and disembodied imaginary that surrounds AI. She embraces her own situatedness and that of her discourse, taking us along, like an anthropologist, on her field studies to the many places around the world that make AI.

In doing so, she manages to offer a discourse on the conditions of production of AI that is at once broader than the currently dominant discourse and situated. In that sense, Crawford re-embodies AI. She places this technology within the complex web of relations within which we exist: technical, environmental, social, and political relations at both the local and the global level. The Atlas of AI thus offers a groundwork for the ethics of AI, not a decontextualised form of ethics made of abstract principles, but one that is radically anchored within practices and relations, as the ethics of care has defined it [1, 3]. It is only from within these practices and relations that AI can be better understood, and eventually regulated. As she writes, “if data is seen as abstract and immaterial, then it more easily falls outside of traditional understandings and responsibilities of care, consent, or risk” (p. 113).

The Atlas of AI is a seminal work that brings AI within our circle of care. This is where we encounter the third and last image of the hammer with which Nietzsche philosophises: the doctor’s hammer used to test the reflexes of the knee, the hammer of the one who examines and heals. Crawford’s book is a great contribution to the field at a moment when efforts are being made at various levels, national and international, in companies and educational institutions, to mitigate the harms of this technology. Crawford underlines that this can only happen if we “challenge the structures of power that AI currently reinforces and create the foundations for a different society” (p. 227).