
Alan Jacobs


The Virtues of Resistance

Computer Control, part 3

This is the concluding article in a three-part series.

Article 1: Computer Control

Article 2: Life Among the Cyber-Amish

Dr. Gelernter:

People with advanced degrees aren't as smart as they think they are. If you'd had any brains you would have realized that there are a lot of people out there who resent bitterly the way techno-nerds like you are changing the world and you wouldn't have been dumb enough to open an unexpected package from an unknown source.

In the epilog of your book, "Mirror Worlds," you tried to justify your research by claiming that the developments you describe are inevitable, and that any college person can learn enough about computers to compete in a computer-dominated world. Apparently, people without a college degree don't count. In any case, being informed about computers won't enable anyone to prevent invasion of privacy (through computers), genetic engineering (to which computers make an important contribution), environmental degradation through excessive economic growth (computers make an important contribution to economic growth) and so forth.

As for the inevitability argument, if the developments you describe are inevitable, they are not inevitable in the way that old age and bad weather are inevitable. They are inevitable only because techno-nerds like you make them inevitable. If there were no computer scientists there would be no progress in computer science. If you claim you are justified in pursuing your research because the developments involved are inevitable, then you may as well say that theft is inevitable, therefore we shouldn't blame thieves.

But we do not believe that progress and growth are inevitable.

We'll have more to say about that later.

FC



1


"Dr. Gelernter" is David Gelernter, a computer scientist at Yale University, who received this letter on April 23, 1995. "FC," other documents from the same author explained, stands for Freedom Club—but despite the use of plural pronouns in this letter and many others, one person wrote the message: Theodore Kaczynski, otherwise known as the Unabomber. On June 23, 1993, Gelernter had opened "an unexpected package" that immediately exploded, wounding him severely. In 1998, Theodore Kaczynski pled guilty to the charge of being the "unknown source" of the package that injured Gelernter.

It seemed strange to many that Kaczynski should single out Gelernter, who is distinctive among computer scientists for his aesthetic sensibilities and his lack of enthusiasm for technology as such. Indeed, some of Gelernter's warnings about over-reliance on computers can sound oddly like statements in the Unabomber's notorious Manifesto. (A more likely antagonist would be someone like Ted Nelson, inventor and promoter of "hypertext," who in his 1974 book Computer Lib/Dream Machines exhorted, "You can and must understand computers NOW.") But perhaps it was Gelernter's very humaneness that, to Kaczynski, made him so dangerous: by striving, in several books, to demystify computer technology and usage; by designing hardware and software that would be comfortable, functional, and unintimidating to ordinary users; by insisting that people with no formal training in computer programming could nevertheless come to understand at least the basics of how computers work, Gelernter might actually do more to solidify the place of computers in our everyday lives than the real "techno-nerds" ever could.

Kaczynski's arguments stand in direct contradiction to the thoughts and concerns that have motivated this series of essays. Like Gelernter, I have assumed that the continuing, indeed the increasing, centrality of computers to our culture is "inevitable." I suspect that Kaczynski secretly thought so too: he was certainly smart enough to know that the use of computers is not curtailed by the bombing of a computer scientist. If he had real hopes of lessening our dependence on computers, he would have attacked the machines themselves—or the factories that made them—just as the 19th-century Luddites destroyed the knitting machines that were putting them out of work. Kaczynski's resort to mail bombs is really an admission of futility.

But I do not believe that the inevitability of computers equals the inevitability of theft. Theft is a crime, the computer a technological product; and the problem with technology is always to find a way to put it to proper uses while avoiding putting it to dangerous, destructive, or immoral uses. True, any knowledge I gain about computers will do nothing to halt experiments in genetic engineering or slow "excessive economic growth," though I can imagine ways in which computer-literate others might contribute to those causes; I also think it safe to say that my refraining from computer literacy, or even computer usage, won't be of any help. But within my own daily sphere of action, I believe that increasing my ability to use computers can be helpful to me. (And it can surely help me to preserve my privacy, though that goal is not high on my list.)

I was encouraged, as I began this project in self-education, to discover the very comment from Gelernter's Mirror Worlds that angered Kaczynski. I was likewise emboldened by this statement from the engineer Henry Petroski: "I believe that anyone today is capable of comprehending the essence of, if not of contributing to, even the latest high technology"—though I think I would have felt considerably more emboldened if this sentence had come at the beginning of a 400-page book about computers, instead of a 400-page book called The Pencil. (It's a wonderful book, though.) I have tried to record, especially in the second essay in this series, some of the rewards (as well as some of the frustrations) that I have received in my plunge into the world of computer technology, especially my encounter with Linux and the world of open-source software. But I am faced now with certain important questions that I have not even begun to address.

2


Looking back over the reading I have done in preparing to write this essay, I notice a widespread tendency to speak of the concerns raised by the increasing prevalence of computers as technological concerns; the assumption shared by almost all parties is that any "problem" following from the cultural dominance of computers is but a special case of what the philosopher Martin Heidegger famously called "the question concerning technology." Henry Petroski emphasizes the links between pencils and computers: both are technological products. Kaczynski sneers at "techno-nerds"; some years later, as I noted in the first essay in this series, Gelernter would tacitly respond by writing that "to hate technology is in the end to hate humanity, to hate yourself, because technology is what human beings do." (Thus the title of another of Petroski's books: To Engineer Is Human.)

But I have become convinced that technology as such is not the issue at all.

We come closer to the heart of the matter when we think of computers in terms of information technology. Here the work of the philosopher Albert Borgmann is important. In his seminal book Holding on to Reality: The Nature of Information at the Turn of the Millennium, Borgmann identifies three types of information:

  1. Information about reality. In this category Borgmann includes many forms of "reports and records," from "medicine rings" constructed by the Blackfoot Indians of Montana, and the altar Abram built to the Lord at Hebron, to many forms of the written word.

  2. Information for reality, or "cultural information." This includes recipes and instructions of all types: "there are plans, scores, and constitutions, information for erecting buildings, making music, and ordering society."

  3. Information as reality. This is the peculiar province of certain, especially digital, technologies: in it, "the paradigms of report and recipe are succeeded by the paradigm of the recording. The technological information on a compact disc is so detailed and controlled that it addresses us virtually as reality."

The power that we have achieved to produce so much of this third type of information, and produce it so skillfully, concerns Borgmann deeply. He believes that throughout most of human history we have managed a degree of balance between "signs and things," but in these last days have achieved a technology of signs so masterful that it "steps forward as a rival of reality."

Borgmann's book is excellent in many ways, but in his complaints about the dangers of a world dominated by technologically produced signs he often descends into a metaphorical vagueness—the sort of vagueness that tends to get a writer called a Luddite. For instance, he is somewhat unhappy about the creation of enormous and sophisticated databases of ancient Greek and Roman texts because he believes that, in the use of such databases, "texts get flattened out, and scholars get detached from their work." But what Borgmann means by "flattened" and "detached" never becomes clear, at least to me.

In more anecdotal passages, though, his argument takes on meaningful flesh, and does so in ways that illuminate the issues I am concerned with. Considering Microsoft's virtual version of London's National Gallery (a CD-ROM from the mid-'90s), Borgmann comments,

No amount of commentary can substitute for the grandly bourgeois and British setting of Trafalgar Square whose center is marked by the monumental column that supports Lord Nelson, one of the protagonists in Britain's rise to world power. But it is not simply a matter of perfecting technological information to the point where users of the Microsoft Art Gallery can have an interactive full motion video that furnishes them with the experience of strolling through the museum and ambling out on Trafalgar Square to admire the Nelson column. The highly impoverished quality of such walking aside, virtual reality, however expansive, is finally bounded and connects of itself to nothing while the actual Gallery borders on Trafalgar Square adjoining in turn St. Martin's Church and neighboring Charing Cross and so on in the inexhaustible texture of streets and focal points that is London.

The virtual gallery necessarily lacks the surround of the real (historical and physical) world: it cannot provide the contexts, contrasts, and surprises that that world offers.

To be sure, the virtual world offers contexts, contrasts, and surprises of its own—after all, a virtual Louvre is available for purchase also, which makes it possible to compare the holdings of two great museums without having to take a train through the Chunnel. On the other hand, as long as I am sitting in front of my computer I can't take the trip from London to Paris. I can't experience the important feeling of disorientation, so striking to almost every American (even before trains connected the cities), that derives from experiencing the geographical proximity of these two dramatically different capitals. I can't know the neighborhoods in which the great museums are situated. It would not even be possible for any Londoner or Parisian, no matter how eager they might be, to be rude to me.

These experiences would be unavailable because I would be sitting at my desk, looking at my computer, and scanning the images produced by software that I purchased—images that can inform me about, but not allow me to experience, the different sizes of the paintings, or their full dimensionality, since the textures produced by different brush techniques are often invisible even on the highest-resolution monitor. (Such problems, of course, also place limitations on books and indeed all forms of mechanical reproduction of the visual arts.)

I can easily imagine the responses advocates of this technology would make to the points Borgmann and I are raising. In fact, I do not need to imagine them: I can simply consult a book like Multimedia: From Wagner to Virtual Reality, and find on almost every page celebrations of the immense aesthetic and informational capabilities of computer technology. Scott Fisher enthuses: "The possibilities of virtual realities, it appears, are as limitless as the possibilities of reality." Lynn Hershman claims that digital works of art allow people to replace "longing, nostalgia and emptiness with a sense of identity, purpose and hope." Marcos Novak imagines "liquid architectures in cyberspace," in which "the next room is always where I need it to be and what I need it to be."

I do not wish to dispute any of these claims; they are often interesting and sometimes compelling. Rather, I merely wish to note that the conflict between Borgmann and the celebrants of multimedia centers on two issues: first, the relative value of different kinds of information, and second, the importance of wide accessibility of information. Borgmann makes a strong case for the depth of the losses incurred when we forsake information about and for reality in favor of information as reality; and he shows how the ready accessibility of an overwhelming range of technological information creates the temptation always to settle for the instantly available. After all, it takes a lot more trouble and money to buy tickets and drive to see the Angeles Quartet than to sample my collection of their CDs—and the inertia can be hard to resist even if I know that the "context" and "surround" of the live performance offer me a quality and quantity of experiential information not available on compact disc.

What Borgmann does not adequately address is the compensatory value of technological information for those who do not, and cannot reasonably hope to, have access to the "real thing"; nor does he give judicious assessment of the claim that the marshaling of diverse kinds of information on a single computer enables the user to produce and control context in a way that has its own distinctive value. And so the argument goes on—indeed, I believe that it is in its early stages, because I believe that as yet we have no conceptual vocabulary adequate to assessing these various and often competing goods.

Therefore, I don't claim that I can even begin to answer the questions raised by the technophiles and their critics. But I do believe that in raising and considering them, I am led back to the unique role of the computer as an information machine—to my claim that the lexicon of "technology" doesn't help us very much as we try to think well about these things. Borgmann has clarified the situation considerably, but to get to the heart of things, we need to consider the intellectual origin of the modern computer, in a paper written by the English mathematician Alan Turing in 1936.

3


The paper is called "On Computable Numbers," and its chief purpose was to work through a question (called the Entscheidungsproblem, or "decision problem") that had been raised a few years earlier by the German mathematician David Hilbert, and had been complicated by the work of the mathematical logician Kurt Gödel. I cannot explain this problem, because I do not understand it; but for our purposes here what matters is a thought experiment Turing undertook in his pursuit of the problem: he imagined a simple kind of computing machine—now known as a "Turing machine"—and, among such machines, a single "universal machine" able to imitate all the rest. He wrote: "It is possible to invent a single machine which can be used to compute any computable sequence," and one could say—indeed, many have said—that in imagining it Turing did invent it. He did not build a computer at that time, but he showed that such a machine could be built, and that the key to it would be the simplicity of its basic functions: "Let us imagine the operations performed … to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided." In fact, today's digital computer chips, based as they are on a binary system where the only possibilities are zero or one, on or off, work in a manner so simple that it cannot possibly be "further divided."

How operations so basic can be multiplied and combined until they produce the extravagantly complex results that we see on our computers today is explained, with wonderful clarity, by W. Daniel Hillis in his book The Pattern on the Stone; but what is so extraordinary about Turing's little paper is his ability to intuit, long before our current sciences of chaos and complexity, that the simpler his imagined machine was, the more powerful and indeed universal it could become.1 It is the very simplicity of the Turing machine's organizing structure that enables it, as Turing demonstrated, to perfectly imitate any other machine organized on similar principles.
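
To make Turing's "simple operations" concrete, here is a minimal sketch—mine, not Turing's, and in Python rather than his notation—of such a machine. Its only primitives are reading a symbol, writing a symbol, moving the head one cell, and changing state; the sample program, which merely flips the bits on its tape, is purely illustrative.

```python
# A minimal Turing-machine simulator (an illustrative sketch, not Turing's
# own formalism). The only primitive operations are: read the symbol under
# the head, write a symbol, move one cell left or right, change state.
from collections import defaultdict

def run(program, tape, state="start", head=0, max_steps=10_000):
    """program maps (state, symbol) -> (new_state, symbol_to_write, move)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # unwritten cells read as blank "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = program[(state, cells[head])]
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip("_")

# An illustrative program: scan right, flipping 0 <-> 1, halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(flip_bits, "0110"))  # prints "1001"
```

Everything a modern computer does is, in principle, an enormously elaborated composition of state transitions of just this kind.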

Today's computers come remarkably close to being universal machines in practice as well as in theory. My laptop fulfills the functions that, when I was in high school, were fulfilled by the following "machines": typewriter, radio, stereo, television, film projector, calculator, ledger, address book, mailbox, tape recorder, chessboard, games arcade, clock, newspaper, magazine, encyclopedia, dictionary, thesaurus, slide projector—even library and museum. And that is of course a very incomplete list. This comprehensive ability to imitate—what I will call the computer's "mimeticism"—is what makes the computer so different from any other form of technology; it is also what makes the challenge of responding wisely to the machine's enormous promise so formidable.

In daily practice, it seems to me, the most important consequences of the potent mimeticism of the computer are two: the constriction of spatial experience, and the reduction of the play of the human body. When my computer becomes the sole, or at least prime, source for a great deal of information that once I would have sought from many different machines, located in many different places in my house, my community, or beyond, the meaningful space of my daily life is more and more often reduced to the size of my screen. As a direct result, sometimes the only parts of my body that receive meaningful employment in my daily labors are my eyes and my fingers—I don't even have to turn my head to find out what time it is, nor, if I wish to listen to music (for example), do I have to get up, cross the room, find a CD, insert it in my CD player, and turn it on. I need do no more than shift my eyeballs and tap a few keys.

Interestingly, fictional dreams of "virtual reality"—starting, perhaps, with Vernor Vinge's 1981 story "True Names" and proceeding through William Gibson's Neuromancer (1984) and Neal Stephenson's Snow Crash (1992)—imagine realms of purely mental experience: one lives in a digitally generated world, possessing an equally digital "body." One's real, material corpus lies motionless at some insignificant point in "meatspace" while one's mind explores the Metaverse (Stephenson) or the Other Plane (Vinge).

Such fantasies enact, as many commentators have noted, a classically Gnostic longing for liberation from the body. And even for those of us who have no interest in experiential games of that particular kind, if we feel that our most important work is done at our computers, then our bodies' needs—food, sleep, exercise, urination, defecation—can seem irritatingly distracting or even embarrassing. As though bodily functions were signs of weakness; as though thought alone dignified us.

Hence one of Vinge's characters, an elderly woman, wishes to record her whole being in the bits and bytes of the Other Plane so that, as she puts it, "when this body dies, I will still be"—transformed, Vinge suggests, into a more exciting, elegant, and powerful self than her embodied self ever was or could have been.

4


Perhaps what I am saying here is little more than a rephrasing of Borgmann's distinction between information about and for reality (which I get by moving physically about in "meatspace") and information as reality (which the computer, by miming so many machines and therefore encouraging me to stay in front of it, wants me to be content with). But I believe I am pointing to something that Borgmann does not address except, perhaps, by implication: the relation between thinking and embodied experience.

In order to elucidate this point, let's revisit that fruitful period of 60 or so years ago during which our computerized world was launched. If the work of Turing and Claude Shannon laid the theoretical groundwork for the rise to dominance of the computer, some of the key imaginative groundwork was laid by a man named Vannevar Bush, who during World War II (while Turing was devising codebreaking machines to defeat the German Enigma ciphers) was the chief scientific adviser to President Roosevelt. As the war drew to a close, and as the technological achievements of the war years filtered into civilian life to find new uses, Bush understood that one of the great problems of the coming decades would be the organization of information. He believed that what was needed, and what indeed could be built, was a "memory extender," or a "Memex" for short.

Bush's Memex, which he conceived in the form of a large desk with multiple hidden components, would be able to store information of many types, visual and aural—words, plans, drawings, diagrams, voice recordings, music—and would possess an elaborate mechanism to file, classify, and cross-reference all that it contained. In short, Bush imagined a personal computer with an Internet connection (though in his prospectus the Memex was mechanical in a Rube Goldbergish sort of way, rather than digital and electronic).
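
Purely by way of illustration—Bush imagined a desk of microfilm, levers, and photocells, not software—here is a hypothetical sketch, in Python, of the Memex's organizing idea: items of various kinds, filed under classifications and joined by two-way cross-references, so that any item leads along an associative "trail" to its neighbors. The example entries loosely echo the bow-and-arrow trail Bush himself describes.

```python
# A hypothetical sketch of the Memex's core idea: mixed-media items,
# classified by tags and joined by two-way cross-references ("trails").
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    kind: str                                # e.g. "text", "drawing", "recording"
    tags: set = field(default_factory=set)
    links: set = field(default_factory=set)  # titles of cross-referenced items

class Memex:
    def __init__(self):
        self.items = {}

    def file(self, title, kind, *tags):
        self.items[title] = Item(title, kind, set(tags))

    def cross_reference(self, a, b):
        # The link runs both ways, so either item leads back to the other.
        self.items[a].links.add(b)
        self.items[b].links.add(a)

    def trail(self, title):
        return sorted(self.items[title].links)

desk = Memex()
desk.file("Sketch of the Turkish short bow", "drawing", "archery")
desk.file("Notes on the elasticity of bow materials", "text", "physics", "archery")
desk.cross_reference("Sketch of the Turkish short bow",
                     "Notes on the elasticity of bow materials")
print(desk.trail("Notes on the elasticity of bow materials"))
# prints ['Sketch of the Turkish short bow']
```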

What I find especially telling is the title Bush gave to the essay—it appeared in The Atlantic Monthly in July 1945—in which he described the Memex: "As We May Think." Bush's argument is that the technologies of warfare can be converted into technologies of knowledge, and that the conversion will enable us to think differently and better.

It strikes me that the hidden, and vital, connection between these two technologies is the principle of action at a distance. After the horrific trench warfare of World War I, military minds spent much of the next 20 years engineering combat machines that would enable armies to inflict damage on enemies too far away to be seen, much less fought hand to hand. From the expanded use of hand grenades, to the increase in the range of artillery, to the development of plans for extensive strategic bombing, the methods of warfare during the Second World War sought to act against the enemy from long range. (Of course, all parties to the war developed similar methods and machines, so none got its wish of being able to fight from a position of safety.)

Vannevar Bush seems to have translated this principle to the struggle to acquire and organize information: he imagines people of the future conquering their enemies—Ignorance and Disorder—without ever leaving their Memexes. Military technology and information technology, in Bush's vision, turn out to have the same goals: the maximizing of efficiency and the minimizing of both risk and the expense of energy. It is a vision prompted by a belief in the scarcity of resources and the imminence of danger; and it has become the vision of the Information Age.

Because we believe in this vision, because we think (against all the evidence) that we need to conserve our intellectual resources—or, perhaps, simply because we are lazy—we listen eagerly to those who offer us machines that are more and more truly universal; and we become increasingly deaf to the call of other voices from other rooms. In such a climate, one is tempted to believe that what the Universal Machine doesn't offer can't be of such value that it would be worthwhile to get up from one's desk and seek it out. I recall a forecast Jean-François Lyotard made in The Postmodern Condition, almost 20 years ago: "We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translated into computer language."

In 19th-century Oxford, a little poem circulated featuring as its purported speaker Benjamin Jowett, translator of Plato and Master of Balliol:

First come I, my name is Jowett;
There's no knowledge but I know it.
I am the master of this College;
What I don't know isn't knowledge.

The personal computer is the Jowett of our time: what it doesn't know isn't knowledge.

5


It was, I now see, an intuited sense of the dangers posed by the Jowettization of the computer that led me to conduct the experiment with Linux that I described in my previous essay: I was seeking (with apologies to the prophet Isaiah) to make the straight paths crooked and the plain places rough. If David Gelernter—as I noted in the first essay of this series—wants software that will make the computer "transparent" to our desires, I craved opacity. I had become so used to my computer, so disposed to exploit its resources and explore its capabilities, that I had begun to wonder, like one of the travelers in Bunyan's Pilgrim's Progress, if perhaps this smooth, broad road were a little too inviting, a little too easy to traverse; a feeling that intensified at those points when the tiniest of difficulties presented itself and, lo, Bill Gates appeared at my elbow, saying, "Here, let me help you with that."

Some years ago the novelist John Updike wrote this telling reflection on much art, especially visual art, of the 20th century: "we feel in each act not only a plenitude (ambition, intuition, expertise, delight, etc.) but an absence—a void that belongs to these creative acts. Nothing is preventing them." In contrast, "works like Madame Bovary and Ulysses glow with the heat of resistance that the will to manipulate meets in banal, heavily actual subjects."2

Precisely: resistance. The mind needs resistance in order to function properly; it understands itself and its surroundings through encountering boundaries, borders, limits—all that pushes back or refuses to yield. Now, Updike believes that artistic greatness is often achieved by those who overcome such resistance; but the resistance must be felt, and forcefully felt, for that overcoming to be artistically productive. I am no artist, and I doubt that Updike would feel plenitude in anything I do; but his notion seems immensely relevant to my condition nonetheless.

A curious feature of this resistance is that it can only happen when each party is exerting pressure on the other; and as my computing life became smoother and more featureless, I became increasingly unable to tell whether this was because my computer was yielding to my desires or I to its. The more confused and uncomfortable a computer user is, the more enthralled he or she becomes to the computer's preferences; such a user offers little resistance to the "defaults." The issue of resistance is significant for every computer user, though in different ways.

So I plunged into the world of open-source software precisely because, in the words of the aficionado I quoted in my previous essay, "nothing in Linux works the first time." I wanted to be puzzled; I wanted to be at a loss sometimes. I wanted to have to get up and go to the library or bookstore; I wanted to need to call a friend for help. Linux user groups—there are hundreds of them across the country and thousands around the world—periodically stage "Installfests," where users bring their computers and software and either help or get help from others.

In short, running Linux often involves moving one's body, expanding one's spatial environment, and interacting with other people in a kind of ad hoc community. The resistance offered by the collaborative and decentered development of Linux, and its consequent lack of immediate "user-friendliness," may create frustrations, but it also encourages the cultivation of certain virtues—patience, humility, teachableness—and opens the user to a range of benefits. I have described this project of mine as a quest for control, but in some ways it would be more accurate to describe it as a quest for a situation in which control is always being negotiated; where the boundaries shift because the forces of resistance wax and wane, on both sides.

However, the Linux experiment, I must admit, is one that I now find hard to sustain. Like most people, I have daily responsibilities that do not allow me to spend an indefinite amount of time fiddling with configuration files, or solving whatever the Linux conundrum of the moment happens to be. Sometimes I have to go back to what I know, whether I want to or not. And in this context the new Unix-based Macintosh OS X begins to feel like a rather insidious temptation: whenever I start to feel a longing for "resistance," I can always fire up the Terminal and use old-fashioned text-based applications—the Lynx web browser, Pine for email, Emacs for text editing—though whenever these pleasures ebb I can immediately switch back to the inimitable Mac eye candy. If using Linux is like moving into a log cabin, using OS X is like visiting a dude ranch: you know that whenever "roughing it" grows tiresome or uncomfortable, all the comforts of capitalist modernity are ready and waiting to meet your needs.

But still, I think, my experiment has reminded me that the ways we use our computers could be other—there are alternative models of organizing and deploying information than those which our computers supply by default, out of the box. Even when I set aside my Linux box and return to my Macintosh, I find myself using that computer in a more self-conscious way, with a clearer sense of its capabilities and its limitations. And I find it easier to spend time away from the computer, reacquainting myself with some of the nondigital ways of thinking and learning and writing with which I grew up. I value my computer as much as, or more than, I ever have; but I feel that in some sense I have put it in its place.

And what is its place? As a tool: an unprecedentedly resourceful and adaptable tool, to be sure, but no more. It lacks purposes of its own. Those it appears to have are the purposes of its makers, and these may or may not be our purposes; we should ask whether they are. Many years ago Jacques Ellul called us to "the measuring of technique by other criteria than those of technique itself," and this obligation is all the more vital when the "technique" involved is a universal machine that shapes, or seeks to shape, how we may think. Ellul even goes so far as to call that task of measurement "the search for justice before God."

Now, it is very difficult to think, as one sits down before one's computer keyboard, that what the prophets of Israel call shalom could be at stake. The incongruity is striking: "the search for justice before God" seems so noble, even heroic an endeavor; mouse, keyboard, and screen seem thoroughly, insignificantly everyday by comparison. Yet we are accustomed, in other (generally more poetic and "humanistic") contexts, to hearing and affirming that God makes his will and character known through the ordinary. It's just hard to believe that we can hear the still small voice in the midst of the technological ordinary: can God make himself manifest through the binary logic of silicon chips?

The effort to think spiritually about computers meets a great deal of resistance, we might say—something is preventing it. (Maybe many things are.) But, as Updike teaches us, resistance can be enormously productive, if we neither ignore it nor are daunted by it. If I had to say what was the most important lesson I learned from my plunge into the strange world of computer technology and open-source software, it was that I need to start thinking in the way Ellul counsels: to pursue "computer control" not in order to repudiate those machines but in order to harness them and employ them in the search for justice before God.

Alan Jacobs is professor of English at Wheaton College. He is the author most recently of A Theology of Reading: The Hermeneutics of Love (Westview Press).


