The Glass Cage: Automation and Us
Nicholas Carr
W. W. Norton & Company, 2014
288 pp., $26.95

Alan Jacobs


The View from the Glass Cage

Automation and human responsibility.

Some years ago, Nicholas Carr published a long essay in The Atlantic called "Is Google Making Us Stupid?" He wrote of Google's ambitious leaders that "their easy assumption that we'd all 'be better off' if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling … . In Google's world, the world we enter when we go online, there's little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive." The book that emerged from that article was called The Shallows: What the Internet Is Doing to Our Brains, and the titular metaphor drew on the two-dimensionality of the "web": everything networked, everything accessible from everywhere, all laid out on a flat mental surface, ready for analysis and recombination. No depth, no shadows, no mystery.

In his new book, The Glass Cage: Automation and Us, Carr continues to pursue this line of thought, but complicates his earlier critique in two ways. First, he sees the Googlers' desire to "outsource" intelligence to nonhuman brains as just one among many examples of automation; and second, he situates the automating impulse in a long historical context. The result is a book that impressively extends and deepens the argument of The Shallows. Carr has proven to be among the shrewdest and most thoughtful critics of our current technological regime; his primary goal is to exhort us to develop strategies of resistance.

It cannot be stressed too strongly that resistance does not entail rejection. Carr makes this point repeatedly. "Computer automation makes our lives easier, our chores less burdensome. We're often able to accomplish more in less time—or to do things we simply couldn't do before." And: "Automation and its precursor, mechanization, have been marching forward for centuries, and by and large our circumstances have improved greatly as a result. Deployed wisely, automation can relieve us of drudge work and spur us on to more challenging and fulfilling endeavors." However, Carr believes, our enthusiasm for what automation can do tends to blind us to what it cannot—and, perhaps still more important, to what it does quietly when we're not looking. Automation has "hidden effects." Carr's job here is to bring them out of hiding and into the light.

Perhaps the most dramatic chapter in The Glass Cage concerns the increasingly sophisticated automation of the complicated act of flying an airplane. Carr is quick to note that flying is now safer than it ever has been, and in no way contests the universally held view that automation of many of the tasks of flying has dramatically reduced "human error." But automation of flying has also had other consequences. Drawing on an impressively large body of research, Carr singles out three—all of which apply equally well to many other automated systems.

The first is "automation complacency." Precisely because automated flying typically works so well, pilots come to expect that it will always work that well, and their attention flags. What in an un-automated environment would be a clear sign of something going wrong is not so perceived by pilots who are overly trusting of their equipment—and perhaps is not perceived at all.

The second is "automation bias." Pilots tend to trust their automated systems more than they trust their own training, experience, and sensory evidence. They may be able to see quite plainly that the aircraft is lower than it's supposed to be, or feel very strongly that it's coming in for a landing at too sharp an angle or too great a speed, but if the instrument readings and autopilot behavior indicate that everything is all right, then they will typically defer to the automated systems. And perhaps this is, most of the time, the right thing to do. But sometimes that deference proves to be misplaced, and people die.

Moreover, when pilots do choose to take control—or are forced to do so by the failure of their systems—they often make bad decisions because they are out of practice. This is the third consequence of reliance on automation: highly trained experts whose skills atrophy because they have few or no opportunities to keep them sharp—until a crisis hits, which is of course the worst possible situation in which to try to recapture one's old abilities.

These three tendencies—complacency, bias, and skill atrophy—have beyond any question cost lives. But it seems almost certain that more lives would be lost if we eliminated automated systems and returned to full-time human piloting. Moreover, even if we did decide to go back, we would have to ask: How far back? After all, in the early days of flight, pilots had no instruments at all: no altimeters (to measure altitude), no attitude indicators (to show the angle of the plane in relation to the earth), nothing but their own senses. Do we want our 747 captains to fly by the seat of their pants as they did in the good old days—the days when they could only kill themselves, not three hundred people sitting in rows behind them? Surely not. But if not that far, then how far back might we profitably go?
