
Science in Focus: Cary Gray


9 Algorithms that Changed the Future, Part 1

A well-guided tour.

Many of us who study or teach computer science have experienced difficulty in trying to explain just what our field is, whether to family or friends. To make matters worse, departments called "computer science" may teach very different fields, and even within a single department the faculty may have very different ideas of what it is that they do. One of the toughest audiences is students, especially those who are sure that they already know computing—and more so if they've learned to do a bit of programming. I've even had occasion to deal with a college president who stated that his experience using a personal computer convinced him that there was nothing to justify a college-level major in computer science.

John MacCormick's 9 Algorithms that Changed the Future joins a small set of books that have tried to communicate the nature of the field to a general audience. MacCormick provides something like a quick package tour, with stops at a few highlights—the "great algorithms" of the title. It is a stretch to call some of the topics "algorithms," but a tour billed as "Classical Greece" likely uses that label differently than a historian or classicist would; such may be the price of making the material accessible to a wider audience. As with package tours in the physical world, the structure offers brevity, but the tourist's recollection may end up fragmented.

MacCormick is a fine tour guide. He has chosen his topics well to demonstrate deep insights, and the fact that almost all the methods he explains lie just out of sight of everyday Internet users keeps the reader connected to familiar ground. The descriptions are very accessible, and the "tricks"—the key insights that make each method work—are clearly identified. But this brings out a hazard of truly profound insights: they can change not only the future but also our understanding of the past. A breakthrough can make what was a very difficult problem seem obvious, obscuring its significance. (Don Knuth, one of the towering figures in computing, offered just this caution to my class of new PhD students. He noted that sometimes his challenge as a thesis adviser was convincing students that they had done something significant!)

When giving a tour, I know that I am tempted to point out every connection—which may not be helpful to the first-time visitor. As one revisiting familiar territory, I can't help but notice that MacCormick's list of nine algorithms includes two pairs of twins: data compression and error-correcting codes push the same idea in different directions; and key exchange and signatures appear together in the earliest descriptions of public-key cryptography. So the reader should be grateful for MacCormick's restraint.
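To make the twin relationship concrete, the sketch below contrasts the two directions of the redundancy idea. It is my own toy illustration, not the book's: run-length encoding stands in for compression, and a three-fold repetition code stands in for error correction.

```python
# Two directions of the same idea: redundancy.
# Compression removes redundancy; error-correcting codes add it on purpose.
# Run-length encoding and a 3x repetition code are toy stand-ins here,
# not the schemes MacCormick actually describes.

def rle_compress(text):
    """Remove redundancy: collapse runs of repeated characters."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def repetition_encode(bits, k=3):
    """Add redundancy: repeat every bit k times."""
    return "".join(b * k for b in bits)

def repetition_decode(coded, k=3):
    """Majority vote within each group of k corrects isolated bit flips."""
    groups = [coded[i:i + k] for i in range(0, len(coded), k)]
    return "".join(max(set(g), key=g.count) for g in groups)

print(rle_compress("aaaabbbcca"))   # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
sent = repetition_encode("101")     # '111000111'
garbled = "110000111"               # one bit flipped in transit
print(repetition_decode(garbled))   # prints 101: the flip is corrected
```

The point of the pairing: the compressor succeeds by stripping out exactly the kind of repetition that the error-correcting code deliberately inserts.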

Alan Turing's 1936 paper on computability, the subject of chapter 10, is a fundamental result for computation. But there is little evidence of a direct line from this work by Turing to the appearance of computing as a field of its own fifteen or twenty years later. It is more helpful to think of Turing's proof (along with his theoretical "computing machines") as a mathematical artifact that was repurposed by early computer scientists for use as a foundation stone in their new effort. Turing's proof was originally written in the context of the question of whether mathematics could be fully systematized. The Church-Turing thesis has to be understood in that context: it is the assertion that the equivalent formalisms of Church and Turing provide a definition inside mathematics for the intuitive notion of an "effective procedure"—informally, what someone hired to compute might be instructed to do. (See the excellent article on the Church-Turing thesis in the Stanford Encyclopedia of Philosophy.)
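For readers who want to see the shape of that proof, here is a minimal sketch of the diagonal argument in Python. This is my own rendering, not MacCormick's presentation; the names halts and troublemaker are invented for illustration.

```python
# A minimal sketch of Turing's diagonal argument (my rendering, not
# MacCormick's). Suppose, for contradiction, that halts(program, data)
# always answers correctly.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("Turing proved no such function can exist.")

def troublemaker(program):
    """Do the opposite of whatever halts() predicts about self-application."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Does troublemaker(troublemaker) halt? If halts() answers True, the
# function loops; if it answers False, it halts. Either answer is wrong,
# so no correct halts() can be written.
```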

The history of computing is full of false starts and dead ends; sometimes dead or dormant ideas are resurrected or rediscovered when the technologies can at last support them. In such cases there may or may not be a direct line from the early description to the ultimate realization. Sometimes an idea is ignored for no sound reason: one of the three independent inventors of public-key cryptography, Ralph Merkle, had his paper on the subject rejected because reviewers thought it couldn't work—and that no one would be interested even if it did. There are also ideas that require long gestation: the relational database model, for example, required about a decade of hard implementation work and technological advance to move from theory to practice.

MacCormick has provided a nice introductory tour, suitable for those who are willing to commit to only a brief visit. Perhaps the taste that he provides will inspire some of those tourists to a more extensive exploration.

Cary Gray is associate professor of Computer Science at Wheaton College.
