The Techno-Human Condition (The MIT Press)
Braden R. Allenby and Daniel Sarewitz
The MIT Press, 2011
240 pp., $24.95
Christina Bieber Lake
Most people are blissfully unaware of the World Transhumanist Association, even now that it goes by its catchy new name, Humanity+. Guided by its dedication to "elevating the human condition," the organization's website provides a mind-blowing look at our possible posthuman future. In H+ magazine, you can read about new developments in cybersex, current plans to keep the human brain from degenerating, and the advent of NEMS, nanoelectromechanical systems, through which scientists hope to build machines smaller than red blood cells in order to improve sensory and motor capabilities. You can also learn the top ten transhumanist pickup lines, including "Wanna get our bodies frozen together so we can be immortal like ice ice baby?" The pickup lines notwithstanding, transhumanists are serious folks who take a no-holds-barred approach to biotechnology. Max More provides a clear statement of their goals: the transhumanist seeks "the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values."
The guiding principles and values have always been the rub. To study transhumanist literature is to quickly recognize that the highest value held by transhumanism is the promise of technology itself. But lest we think these outliers are foreign to us, it is sobering to recognize that transhumanism is really only a pumped-up (enhanced!) version of what Albert Borgmann astutely calls the "persistent glamour" of technology: the tendency of advanced technological societies to turn to technology first to solve problems. The dreamer says, "I want to lasso the moon!" To which the glamour of technology replies, "There's an app for that!" Writ large, the appeal is nearly irresistible.
In The Techno-Human Condition, Arizona State University professors Braden Allenby and Daniel Sarewitz argue that the only certain thing about our posthuman future is that nothing is certain. Promised the moon, we may get a second sun. Or no sun. Or no moon. Or, more likely, altered tides, tsunamis, destruction of coastal cities, and subsequent re-development of inland cities (viva Detroit!). Promised an enhanced brain, we may get that, but it will not work exactly the way we anticipated. It is more than just the possibility of unintended consequences; it is that all consequences will be unintended. No matter what, complexity will prevail. Like Columbus, we will set out to find India, but what we will find instead will be "new, curious, and unexpected." And so today, rather than haggling over the future, we should "question the grand frameworks of our time" that lead us to think we control it.
Humans have always used technology, so arguments that cast technology as the ultimate savior or ultimate destroyer of humanity are worse than non-starters; they misdirect the conversation, preventing us from asking the most important questions. And so Allenby and Sarewitz propose a new taxonomy for discussing how technology functions. Level I concerns the immediate effectiveness of a technology, how well it does what it is designed to do: an airplane gets us more quickly to our destination. Level II describes the systemic complexity of which a given technology is a part, encompassing emergent behaviors that cannot be predicted. Here, the authors explain how airline technology is enmeshed in a system of schedules that affect other transportation systems, and how mass intercontinental transit contributes to the unforeseen spread of diseases like SARS. Level III describes how behaviors "co-evolve" with technologies—for instance, the way that airline travel has helped to engender mass-market consumer capitalism, consumer credit, and so on.
Though they admit that there are no clear boundaries between these levels, Allenby and Sarewitz rightly insist that anyone writing about technology should be aware of the differences among them. Writers who attend to only one of these levels will either ignore important contributions of technology or destructively extrapolate from its successes. One example of a successful (bounded) Level I technology is the development of vaccines, especially those for polio and smallpox. Vaccines have solved the problem of the spread of these diseases far better than any competing method. But that does not give us leave to extend the reasoning to other technologies, which is what transhumanists tend to do. Transhumanist rhetoric is incoherent because Level I solutions "cannot be plausibly extended to imply that those technologies represent solutions to more complex social and cultural phenomena. It's a category mistake." An example of this kind of mistaken extrapolation would be the argument that reprogenetic technologies (Gattaca-type selection of genetic traits for children) will eventually solve the problem of discrimination by eliminating racial differences. This "solution" would have other, unpredictable outcomes, not to mention that it would do nothing to address the underlying issues. Thus, the authors insist, we must "muddle through" these problems by way of "integrated inquiry," with full awareness of our tendency to reason in this faulty way.