Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers
John MacCormick
Princeton University Press, 2012
232 pp., $27.95


Science in Focus: Andrew M. C. Dawes


9 Algorithms that Changed the Future, Part 2

Tricks of the trade.


Behind every interaction with a computer there are countless tricks written into the software, hardware, and protocols responsible for the user experience—tricks that we typically take for granted. Surprisingly, most of these algorithms predate the modern internet by several decades. The study of algorithms came of age in the 1930s with the work of Alan Turing and was followed by a string of revolutionary works in the '40s and '50s. Despite a rich history, and a crucial role in modern technology, computer science is often misunderstood to be some combination of programming, gaming, and hardware design. Though each of these uses the results of computer science, the field of computer science is much more general (and more broadly applicable). The algorithms we use today will work on any type of computer far into the future—even those we haven't imagined building yet. In contrast, any given game, program, or laptop is likely to work well for a few years at best.

Computer science studies the full range of what are called computable problems. It may seem a circular definition, but let's say that computable problems are problems that could be solved by a computer. It is easy to believe that everything you use a computer for (email, video, chat, shopping, etc.) can be described in terms of computable problems. In Nine Algorithms That Changed the Future, John MacCormick illustrates the magical mix of tricks, genius, and raw number-crunching power that computers use to solve the everyday problems behind activities like web searches and secure online banking. This book stands out for presenting complicated algorithms in a way that is accessible to a wide variety of readers.

The major challenge in writing such a book is to provide enough detail and depth to satisfy those readers who are familiar with computer science while providing entry points for readers who are new to the field. As a computer science hobbyist, I was pleasantly surprised that the topics and discussions, despite being presented in a very approachable way, included many of the finer details of these remarkable tricks. What makes the book successful is the careful use of layered descriptions, where the first pass at explaining a new concept is based on a straightforward analogy (i.e., no math). As an example, shared secret cryptography is first presented in terms of mixing paint colors.

How do two people create a shared secret without sharing their own secrets? The paint analogy assumes the two parties have a hidden stash of secret colors. Each person mixes one secret color with a public paint sample. Then the two parties swap these mixtures, and each adds his or her own secret color to the mixture. The mixing all takes place in a hidden space so no individual secrets are revealed (mixed paint colors cannot be unmixed). After this process, the two parties have created the same "shared secret" color—all without revealing their own secret colors.

Of course computers don't have paint chips inside, so once the concept of the algorithm is clear, the author steps in and replaces paint mixing with basic multiplication. Multiplication, however, can be reversed (by factoring), so the final conceptual step replaces multiplication with discrete exponentiation—a computational version of paint mixing, because reversing it (computing a discrete logarithm) is practically impossible. The brilliance of this layered approach is that even if readers skip the more technical sections, they'll still walk away with an understanding of the fundamental concept behind the algorithms, and an appreciation for the subtle tricks that make them work.
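The exchange described above can be sketched in a few lines of Python. This is a toy illustration of Diffie-Hellman key exchange with deliberately tiny numbers (the values of p, g, and the private exponents are illustrative choices, not secure parameters); real implementations use primes hundreds of digits long.

```python
# Toy Diffie-Hellman key exchange: discrete exponentiation as "paint mixing".
# p and g are the public "starting paint" both parties agree on.
p, g = 23, 5       # tiny illustrative prime modulus and base -- NOT secure

a = 6              # Alice's secret "color" (private exponent, never shared)
b = 15             # Bob's secret "color"

A = pow(g, a, p)   # Alice's public mixture: g^a mod p
B = pow(g, b, p)   # Bob's public mixture:   g^b mod p

# The parties swap A and B, then each adds their own secret to the
# other's mixture.  Exponentiation "mixes" irreversibly, like paint:
# recovering a from g^a mod p is the hard discrete-logarithm problem.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p

assert shared_alice == shared_bob
print(shared_alice)           # the same shared secret on both sides
```

An eavesdropper sees p, g, A, and B, but without one of the private exponents cannot feasibly compute the shared value.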

MacCormick's final chapter takes a philosophical turn, considering the kinds of problems that can or cannot be computed. Although the pioneering work on such problems came in the 1930s, the implications of these early explorations are still not completely clear. Of particular interest is the work of Church and Turing, which demonstrates the existence of problems that cannot be computed—the undecidable problems. As a physicist, I recognize that the class of undecidable problems bears resemblance to the notion of quantum mechanical indeterminacy (or Heisenberg uncertainty). There are some problems that computers, even those with unlimited resources, cannot solve. Similarly, scientific instruments, even those with incredible precision, cannot perform a perfectly accurate measurement of some properties. In this sense, both physics and computer science find that fundamental principles prevent perfection.

Some find such limits frustrating, while others find them liberating. In either case, we have room for new algorithms and plenty of ways to improve computing far into the future. One hope I have for this book is that it will reveal the richness of this field to young men and women as they prepare for their own futures. The rest of us, perhaps already on our career paths, gain much insight into our connection to computers and how computers manage to reliably and securely mediate our connections to one another.

Andrew M. C. Dawes is assistant professor of physics at Pacific University in Forest Grove, Oregon.

See also Part 1 of this round, by Cary Gray.
