Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
by Kate Crawford
Yale University Press
336 pp., $28.00
Electronics have completely reshaped the cultural landscape. Recent decades have seen a global shift in social interaction from the visceral to the virtual. Intellectual life has moved from the printed page to the glowing screen. If young people want a career that will remain relevant, they’re well advised to study information technology. Meanwhile, traditional modes of life are being discarded in an accelerating race toward an imagined future.
Throughout this rapid transition, no prospect has inspired as much mystique and wild speculation as artificial intelligence (AI). Theoretically, sensation and complex thought—the reasoning faculty that defines the human mind—can be enacted in silicon-based processors just as well as in neurons. Moreover, pure electronic cognition is far faster, allegedly unbiased, and potentially richer and more accurate than any human brain. Having captured our perpetual gaze, the luminous robot is now looking back at us.
Thankfully, Kate Crawford lassos this ethereal idea of synthetic intelligence and pulls it back to earth. Her central argument is that “AI is neither artificial nor intelligent.” She traces the origins of machine learning systems to the mining and industrial operations that formed them. Most importantly, she accurately describes how the innovators’ idealism leads ultimately to dehumanization for regular people.
Unfortunately, her gratuitous use of the language of social justice to indict AI will alienate those who prefer that she stick to reasoned argument over academic virtue-signaling. Indeed, as Crawford laments all the ways AI enables mass surveillance, labor exploitation, and digital manipulation of behavior, she’s mainly concerned with its effect on women and “people of color.” One gets the sense that white males are magically shielded from technocratic predation. Nevertheless, if the reader extracts her rigorous analysis from its politically correct matrix, one finds a detailed map of the technological systems that control our lives.
Crawford opens the book with one of her strongest arguments: AI’s effect on the environment. Cataloging digital culture’s environmental impacts, from worldwide lithium mining to the electricity hogged by sprawling data centers, she gives the lie to Big Tech’s promises of “carbon neutrality” and “sustainability.” Tons of waste materials produced by ravenous mineral extraction—particularly acidic and radioactive byproducts of mining—are dumped into water supplies. China is a major culprit in this toxic practice, as the source of some 95 percent of the world’s rare-earth minerals.
Tech companies’ electricity consumption is just as conspicuous. Energy-intensive operations like natural language processing, cloud computing, and mass data analysis burn ungodly amounts of coal and natural gas. For the most part, Crawford scolds Google, Amazon, and Microsoft for their elephantine carbon footprints. Regardless of one’s stance on anthropogenic climate change, she shines a withering light on “green tech” hypocrisy.
Crawford also scrutinizes Big Tech’s predatory labor practices, both in-house and outsourced. Unsurprisingly, Amazon catches most of the flak. If the reader can get past her repeated accusations of systemic racism and sexism, Crawford’s portrayal of hapless Amazon “associates” slaving away in micromanaged fulfillment centers is the stuff of nightmares. Amazon’s relentless robot overlords and saccharine sloganeering—“Bias for action,” “Earn trust of others”—could drive even the most industrious worker to stay home and collect a universal basic income.
The section on “Prehistories of Workplace AI” traces the origin of Amazon’s digital work efficiency panopticon back to the 18th-century inspection house, pioneered by the British naval engineer Samuel Bentham. By situating managers on elevated platforms in the center of a factory, Bentham ensured workers could be constantly surveilled for laziness and bad habits. Over time, the inspection house model would be applied to prisons, warehouses, and beyond. Today, with the advent of ubiquitous cameras, listening devices, and online tracking, the inspection house is expanding into every facet of our professional and private lives.
Invoking a poetic analogy, Crawford portrays mass surveillance and data-mining as ideological extensions of unchecked mineral extraction. In order for artificial intelligence systems to attain a nuanced model of the world—particularly human life—machine learning algorithms must consume massive amounts of data. Most of it comes from the images and text that people freely share across the Internet: endless social media posts, news articles, private messages, and so on. In the age of AI, data is the new oil.
Drawing another parallel, Crawford points out that mug shot portfolios released online by law enforcement agencies were among the first data sets used to hone facial recognition software. Today, our faces are routinely posted online, recorded via closed-circuit television, and scraped up to “teach” algorithms the nuances of facial recognition. These AI systems also “learn” to characterize our identities. More advanced systems are “trained” to evaluate emotional states—happy or sad; benign or threatening.
This rampant data extraction has culminated in both overt and de facto social credit scores. In our newly technocratic society, even normal citizens are treated like prisoners; they are scanned, labeled, and herded by machines according to the whims of those in power.
This is where Crawford succumbs to two serious weaknesses. First, she’s hyperfocused on how technocratic social systems dehumanize and misidentify minorities. She ignores the fact that the digital security state is mainly interested in cracking down on those labeled “racists” and “right-wing extremists.” Second, as she describes the application of facial recognition to correctly identify terrorists, illegal immigrants, and criminals en masse, she tempts the reader to conclude that maybe technocracy isn’t so bad after all.
In a sense, this ambiguity pervades Atlas of AI. On the one hand, the ingenious innovations and elegant mechanisms of Silicon Valley are awe-inspiring, whatever their potential dangers. It’s incredible that computer scientists have created neural networks that can learn on their own, analyze massive data sets that no human could comprehend, perform precision tasks such as aerial dogfighting maneuvers or heart surgery, and, to some extent, develop what appear to be virtual personalities of their own.
On the other hand, these technologies naturally lead toward a digital dystopia, at least for those subject to the machines. Already, the mass deployment of artificial intelligence systems—unencumbered by serious government regulations or even a basic respect for personal autonomy—threatens our human dignity. Our private lives are regularly invaded without a second thought. Our choices are increasingly guided—or determined—by alien algorithms.
In a world where machines are the highest power, traditional cultures have little hope of survival. So what are organic human beings supposed to do? Do we fight or submit? Is there even a possibility for compromise?
Crawford is rightly skeptical of the concept of “ethical AI.” She concludes that
the infrastructures and forms of power that enable and are enabled by AI skew strongly toward the centralization of control.… [T]he master’s tools will never dismantle the master’s house.
But she also refuses to accept the “dogma of inevitability.” She thinks humankind can prevail by “bringing together issues of climate justice, labor rights, racial justice, data protection, and the overreach of police and military power.” Were it not for her tired buzzwords and lefty bias, I’d like to agree with some of that, too.
But this brings us to a third weakness in Crawford’s book. The rapid advance of artificial intelligence, robotics, and digital saturation has so much momentum, it may as well be a force of nature. Any real resistance would require mass rejection across society, and probably state intervention. Ongoing antitrust efforts are encouraging, but they won’t halt the world-shaking projects underway in Silicon Valley, and certainly not in Tel Aviv, Taiwan, or Shenzhen. Sufficient political consensus seems unlikely in the fragmenting West, just as state intervention is laughable in the overtly technocratic East.
For those who abhor a world ruled by algorithms and automation, the most realistic approach may be to jump out of this freight train’s path—if only to regroup—rather than to waste energy on feeble attempts to derail it. With that caveat, Kate Crawford’s book succeeds in its primary goal. She draws a detailed map of the technologies disrupting our established social orders—from private life to vocation to sacred values. Unfortunately, as we trace a line to the edge of her atlas, no clear escape route comes into view.