The Glass Cage: Automation and Us
Nicholas Carr
W. W. Norton & Company, 2014
288 pp., $26.95


Alan Jacobs


The View from the Glass Cage

Automation and human responsibility.

Some years ago, Nicholas Carr published a long essay in The Atlantic called "Is Google Making Us Stupid?" He wrote of Google's ambitious leaders that "their easy assumption that we'd all 'be better off' if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. … In Google's world, the world we enter when we go online, there's little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive." The book that emerged from that article was called The Shallows: What the Internet Is Doing to Our Brains, and the titular metaphor drew on the two-dimensionality of the "web": everything networked, everything accessible from everywhere, all laid out on a flat mental surface, ready for analysis and recombination. No depth, no shadows, no mystery.

In his new book, The Glass Cage: Automation and Us, Carr continues to pursue this line of thought, but complicates his earlier critique in a twofold way. First, he sees the Googlers' desire to "outsource" intelligence to nonhuman brains as just one among many examples of automation; and second, he situates the automating impulse in a long historical context. The result is a book that impressively extends and deepens the argument of The Shallows. Carr has proven to be among the shrewdest and most thoughtful critics of our current technological regime; his primary goal is to exhort us to develop strategies of resistance.

It cannot be stressed too strongly that resistance does not entail rejection. Carr makes this point repeatedly. "Computer automation makes our lives easier, our chores less burdensome. We're often able to accomplish more in less time—or to do things we simply couldn't do before." And: "Automation and its precursor, mechanization, have been marching forward for centuries, and by and large our circumstances have improved greatly as a result. Deployed wisely, automation can relieve us of drudge work and spur us on to more challenging and fulfilling endeavors." However, Carr believes, our enthusiasm for what automation can do tends to blind us to what it cannot—and, perhaps still more important, to what it does quietly when we're not looking. Automation has "hidden effects." Carr's job here is to bring them out of hiding and into the light.

Perhaps the most dramatic chapter in The Glass Cage concerns the increasingly sophisticated automation of the complicated act of flying an airplane. Carr is quick to note that flying is now safer than it ever has been, and in no way contests the universally held view that automation of many of the tasks of flying has dramatically reduced "human error." But automation of flying has also had other consequences. Drawing on an impressively large body of research, Carr singles out three—all of which apply equally well to many other automated systems.

The first is "automation complacency." Precisely because automated flying typically works so well, pilots come to expect that it will always work that well, and their attention flags. What in an un-automated environment would be a clear sign of something going wrong is not so perceived by pilots who are overly trusting of their equipment—and perhaps is not perceived at all.

The second is "automation bias." Pilots tend to trust their automated systems more than they trust their own training, experience, and sensory evidence. They may be able to see quite plainly that the aircraft is lower than it's supposed to be, or feel very strongly that it's coming in for a landing at too sharp an angle or too great a speed, but if the instrument readings and autopilot behavior indicate that everything is all right, then they will typically defer to the automated systems. And perhaps this is, most of the time, the right thing to do. But sometimes that deference proves to be misplaced, and people die.

Moreover, when pilots do choose to take control—or are forced to do so by the failure of their systems—they often make bad decisions because they are out of practice. This is the third consequence of reliance on automation: highly trained experts whose skills atrophy because they have few or no opportunities to keep them sharp—until a crisis hits, which is of course the worst possible situation in which to try to recapture one's old abilities.

These three tendencies—complacency, bias, and skill-atrophy—have beyond any question cost lives. But it seems almost certain that more lives would be lost if we eliminated automated systems and returned to full-time human piloting. Moreover, even if we did decide to go back, we would have to ask: How far back? After all, in the early days of flight, pilots had no instruments at all: no altimeters (to measure altitude), no attitude indicators (to show the angle of the plane in relation to the earth), nothing but the pilots' own senses. Do we want our 747 captains to fly by the seat of their pants as they did in the good old days—the days when they could only kill themselves, not three hundred people sitting in rows behind them? Surely not. But if not that far, then how far back might we profitably go?

Carr does not provide detailed answers to such questions, though he does suggest that "striking a balance" between automation and human responsibility might not be as hard as many assume. For instance, a flight automation system—or almost any other kind of automation system that might in some circumstances require human action—"can be programmed to shift control over critical functions from the computer back to the operator at frequent but irregular intervals. Knowing that they may need to take command at any moment keeps people attentive and engaged, promoting situational awareness and learning." Carr also thinks that video games are unfairly maligned, especially in this context: "In addition to their considerable ingenuity and occasional beauty, the best games … show how applications can encourage the development of skills rather than their atrophy."
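Carr describes no implementation, but the scheduling idea behind such a system is simple enough to sketch. What follows is a minimal illustration in Python, with a hypothetical Autopilot and Operator interface invented purely for the example (no real avionics API is implied): control is handed back to the human at moments drawn at random from a configured window, so the handoffs are frequent but can never be anticipated.

    import random
    import time

    # Purely illustrative: Autopilot and Operator are hypothetical stand-ins
    # for this sketch, not a real avionics API. The point is the pattern Carr
    # describes -- manual spells at frequent but irregular intervals.

    class Autopilot:
        def release_control(self):
            print("Autopilot: control released to the pilot")

        def resume_control(self):
            print("Autopilot: control resumed")

    class Operator:
        def fly_manually(self, seconds):
            print(f"Pilot: hand-flying for {seconds} seconds")
            time.sleep(seconds)

    def run_handoff_schedule(autopilot, operator,
                             min_gap_s, max_gap_s, manual_spell_s, spells):
        """Alternate automated flight with short, unpredictable manual spells."""
        for _ in range(spells):
            # Draw each gap at random so the operator cannot anticipate the
            # handoff; the unpredictability is what keeps attention engaged.
            time.sleep(random.uniform(min_gap_s, max_gap_s))
            autopilot.release_control()
            operator.fly_manually(manual_spell_s)
            autopilot.resume_control()

    # Toy values so the demonstration runs in seconds rather than hours.
    run_handoff_schedule(Autopilot(), Operator(),
                         min_gap_s=1.0, max_gap_s=3.0,
                         manual_spell_s=0.5, spells=3)

The design choice worth noticing is the randomness itself: a fixed rotation would soon be anticipated, and anticipation is exactly what breeds the complacency described above.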

But answering these questions is not Carr's chief task in The Glass Cage. He wants to think about why we are reluctant even to ask them—even to consider the possibility that automation might not fix, or even improve, every task that can plausibly be automated. Can't we muster at least a little skepticism in the face of automation's long history—and no doubt long future—of extravagant promises?

Carr thinks that such skepticism is called for not primarily in order to save lives; again, I don't think he seriously doubts that automated flying is significantly safer than manual flying was. As The Glass Cage moves along, it comes to explore some deeper questions—questions that we can't even begin to answer without some conception of what counts as human flourishing. Indeed, it is only in light of some such conception that we can even think seriously about what automation is—and does.

In one chapter, Carr considers the relationship between the Inuit people and their often-challenging environment. "The Inuit's extraordinary wayfinding skills," he writes, "are born not of technological prowess—they've eschewed maps, compasses, and other instruments—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, tides, and currents. The Inuit are masters of perception." Or rather, they were, because in recent years they have come to rely more and more on GPS systems, and as a result have ceased to practice their traditional wayfinding methods as consistently, and have less reason to pass them along to the next generation. "A singular talent that has defined and distinguished a people for thousands of years may well evaporate over the course of a generation or two."

Are GPS-equipped Inuit safer than those who practiced the old ways? Let's agree that they are—though there are already stories of Inuit venturing onto thin ice because their GPS systems didn't warn them and they weren't paying close attention. But even if we assume that they are safer, considerably safer, we might not conclude that as a people they are better off. Because there have been losses. For one thing, the transmission of wisdom from generation to generation has a powerful binding effect for communities. Moreover, the painstaking acquisition of difficult knowledge, knowledge that involves the senses as well as the mind, is deeply and lastingly satisfying to any person who achieves it.

Think about those old pilots who flew without instruments: they were attuned to their machine and to the air through which it flew with astonishing precision and nuance—indeed they had to be, if they were going to survive. With their arms and legs they continually adjusted the speed and attitude of the craft; their eyes moved constantly to measure the distance to the ground and scan the horizon for signs of weather; they even discerned changes in humidity by smell; and their brains continually sorted through this outrageously complex and ever-changing mass of data and made confident decisions in light of it. Once they had acquired the experience to make much of this habitual, they were able to enter into that blissful state that the psychologist Mihály Csíkszentmihályi has famously called "flow": a total absorption in the task at hand, in which body, mind, and environment seem to cohere into a single gestalt.

As Carr points out, this flow can be achieved in many ways, and one of the most common, people report, is in driving an automobile. Like Carr, who begins his book with an anecdote about learning to drive, I don't feel that flow nearly as much as I did when I drove vehicles with manual transmissions; but I feel it much more than I will when, in my dotage, I am ferried from place to place by a self-driving Googlemobile. I'll get in, tell the machine the address of my destination, and then probably read a book—a worthwhile activity, to be sure—until we arrive. I may never notice what route the Googlemobile takes, whether we go north or south or east or west, or how heavy the traffic is. I might not even notice the weather. Perhaps the book will be sufficiently good to compensate for this loss of connection with my environment.

I don't know whether master Inuit wayfinders experience flow as they make their way through a difficult landscape, but navigation in any environment works the brain pretty hard, and we sidestep that work, even if we take over the driving from Google, when GPS gives us turn-by-turn directions. As Carr points out, "Map reading … strengthens our sense of place and hones our navigational skills," and the brainwork we do with map in hand helps us orient ourselves even when we don't have maps: "Paper maps don't just shepherd us from one place to the next; they teach us how to think about space." This is marked in the brain by increased activity in the hippocampus, whereas when we merely follow GPS instructions the hippocampus remains inert. One neuroscientist Carr quotes fears that, "should the hippocampus begin to atrophy from a lack of use in navigation, the result could be a general loss of memory and a growing risk of dementia. 'Society is geared in many ways toward shrinking the hippocampus,' she told an interviewer. 'In the next twenty years, I think we're going to see dementia occurring earlier and earlier.'"

Perhaps here we will have a measurable—and terrifying—cost to automation. But more generally, Carr wants us to ask what value we place on the loss of opportunities to experience flow—the loss even of opportunities to develop and exercise skills that challenge and reward us. Carr readily admits that these are extraordinarily difficult questions. "How do you measure the expense of an erosion of effort and engagement, or a waning of agency and autonomy, or a subtle deterioration of skill? You can't. Those are the kinds of shadowy, intangible things that we rarely appreciate until after they're gone, and even then we may have trouble expressing the losses in concrete terms. But the costs are real." They are real for the Inuit, they are real for pilots, and they are real for us.

Nicholas Carr is asking us to count those costs, as a prelude to figuring out whether we can minimize them. Scanning through the early reviews of The Glass Cage, I can't help noticing how deeply reluctant people are even to begin addressing the questions he raises. I have seen Carr called a Luddite (of course), a paranoiac, and even a "scaredy-cat." And among the leading apostles of automation, Carr has discerned an Orwellian tendency to portray costs as benefits. He notes that "Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley's most prominent thinkers, grants that 'a robotics revolution would basically have the effect of people losing their jobs.' But, he hastens to add, 'it would have the benefit of freeing people up to do many other things.'" Ah, that's better. As Carr wryly notes, "Being freed up sounds a lot more pleasant than being fired."

Thiel's comment, and the bizarrely out-of-kilter early responses to Carr's book, provide sufficient evidence for Carr's claim that "the belief in technology as a benevolent, self-healing, autonomous force is seductive"—so seductive that "we're not very good at thinking rationally about automation or understanding its implications." The result is that "the deck is stacked, economically and emotionally, in automation's favor." The great value of The Glass Cage is that it does a little to unstack that deck.

Alan Jacobs teaches in the Honors College of Baylor University. He is the author most recently of The Book of Common Prayer: A Biography (Princeton Univ. Press).
