My post yesterday about real singularities got a nice response, here and at the link post on my own blog. I agree with The Oberamtmann’s suggestion here that the 19th and early 20th centuries saw a sort of singularity, a qualitative improvement: that was when the world industrialized, when the demographic transition hit with its consequent fall in fertility and rapid improvement in life expectancy, and when a global economy began to form.
What’s the current singularity? It’s not so much vertical, driven by the development of new technologies and techniques, as it is horizontal, driven by the application of those technologies and techniques across more of the world.
Damien Sullivan at my blog raised a very important point when he mentioned that augmented intelligence played a key role in the concept of the singularity. What of that here?
Intelligence is a very difficult thing to measure, and obviously controversial and prone to abuse. Isn’t it amazing how traditionally dominated and stigmatized groups are always found to be inherently less intelligent than the groups which dominated and stigmatized them, suggesting that any inequalities in a society are “natural”? Insofar as it can be measured, however, there is abundant evidence to suggest that the environments where humans live–particularly, the environments where young children grow up, absorbing the nutrients necessary for the brain to develop properly–play a critical role. It is a well-established fact that malnutrition in childhood–still common in many parts of the world, affecting perhaps a third of children worldwide–produces serious long-term deficits in intelligence, with children with histories of malnutrition scoring 10 to 15 points lower on IQ tests than peers lucky enough not to suffer it. Disease burdens also seem to play a similar role, prolonged illness and parasitic infection depriving the developing young child of the nutrients it needs, admittedly not as much in Southeast Asia as elsewhere.
Our world, I’ve noted, is a world where medicine and public health have vastly improved worldwide, with even the worst-off regions of the world comparing quite favourably to the traditional norms for human society. These improvements are continuing worldwide, occurring with a certain independence from income: relatively poor countries like Vietnam and Brazil and China and Egypt enjoy life expectancies nearly as long as those in First World countries with much higher levels of income. On a purely biological level, things are improving to the point where the traditional environmental conditions which reduced adult levels of intelligence just don’t apply to the same degree as before. We’re in the middle of a global intelligence revolution.
Innate biological capacity matters little if the social conditions allowing that capacity to be expressed–mass education, say–aren’t available. Again, these increasingly exist largely independent of income, with poor countries enjoying levels of literacy and years of schooling that have risen steadily over recent decades. The global education revolution is making it possible for ever more people to contribute to the global economy.
These health and education revolutions, in turn, are being driven by radical social changes, as the idea of the equality of human beings spreads and becomes normative. Slavery, for instance, is legally extinct worldwide, its actual practice confined to the most marginal regions of the world. The idea that people have a right to expect the chance to enjoy better lives, that people have a right to expect the consistent and fair application of the law, that people have a right to be able to participate in the development of their culture and the unfolding of their politics, is taken for granted. Even if the implementation of these ideas is lacking, they are taken as normal.
So. We’ve got the global intelligence revolution, producing a crop of billions of people who are plugged into a global culture where it’s increasingly taken as normal that they have a right to enjoy happy, safe, and productive lives. This brings us to the question of technology.
The low-hanging fruit may have been plucked. Continued progress is going to come substantially from refinements of existing technologies and techniques. In my New Year’s Eve Demography Matters post, for instance, I expected two medical developments of note–the improved success of reproductive medicine, and the development of strategies to slow age-related deterioration, if not retard aging and death outright–to be big factors in our near future. Slow changes, even if more quantitative than qualitative, can change things hugely.
Meanwhile, the technologies and techniques that have transformed the richest areas of the world are being diffused throughout the world. It doesn’t matter if we have gigaflop computers with tens of terabytes of memory if there are only a few of them, after all. Paul David’s 1989 paper “The Dynamo and the Computer” made the point that in the United States alone, it took more than forty years for electricity to progress from the prototype incandescent bulb in 1879 to the point where a majority of households were electrified.
At the turn of the century, farsighted engineers already had envisaged profound transformations that electrification would bring to factories, stores, and homes. But the materialization of such visions hardly was imminent. In 1899 in the United States, electric lighting was being used in a mere 3 percent of all residences (and in only 8 percent of urban dwelling units); the horsepower capacity of all (primary and secondary) electric motors installed in manufacturing establishments in the country represented less than 5 percent of factory mechanical drive. It would take another two decades, roughly speaking, for these aggregate measures of the extent of electrification to attain the 50 percent diffusion level. It may be remarked that, in 1900, an observer of the progress of the “Electrical Age” stood as far distant in time from the introduction of the carbon filament incandescent lamp by Edison and Swan (1879), and of the Edison central generating station in New York and London (1881), as today we stand from comparable “breakthrough” events in the computer revolution: the introduction of the 1043 byte memory chip (1969) and the silicon microprocessor (1970) by Intel.
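The diffusion pattern David describes can be sketched with a logistic S-curve. As a purely illustrative exercise–the logistic form and the choice of 1919 as the 50 percent midpoint are my assumptions, not figures from the paper–we can fit a curve through the two data points quoted above (roughly 3 percent adoption of residential electric lighting in 1899 and 50 percent about two decades later) and see what it implies for the years in between:

```python
import math

def logistic_adoption(year, midpoint, k):
    """Fraction of households adopting by `year` on a logistic S-curve.

    midpoint: assumed year of 50 percent adoption.
    k: steepness of the curve.
    """
    return 1.0 / (1.0 + math.exp(-k * (year - midpoint)))

MIDPOINT = 1919.0  # assumption: 50 percent adoption two decades after 1899

# Solve for k so that adoption in 1899 comes out at 3 percent:
# 0.03 = 1 / (1 + exp(20k))  =>  k = ln(1/0.03 - 1) / 20
K = math.log(1.0 / 0.03 - 1.0) / 20.0

for year in (1899, 1909, 1919, 1929):
    print(year, round(logistic_adoption(year, MIDPOINT, K), 2))
```

On these assumptions the curve passes through roughly 15 percent adoption in 1909 and 85 percent in 1929–the familiar slow-start, fast-middle shape that makes a technology feel marginal for decades and then suddenly ubiquitous.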
Our world is a world where Internet access and cell phone ownership are becoming normal.
With all these improved life chances, access to new technologies worldwide, and refinements of existing technologies, what will become of us? Barring the sorts of unlikely transformations to altogether new states of being, ours is a world where this lateral expansion will create more minor miracles, minor miracles that will be available to more people than ever before. We may be on track for another singularity; we may be finishing up the current one. At the end of this process, however you want to classify it, we’ll almost certainly be in a recognizable future where minor miracles are quotidian. I like what sensible futurist Jamais Cascio wrote in a recent io9 essay, “Your Posthumanism is Boring Me”.
Those of us of a certain age remember the birth of Louise Brown, in 1978, quite vividly. Ms. Brown was the first baby born through in-vitro fertilization, or IVF. Many of you reading this likely know someone who has used (or conceived via) IVF; some of you may be children of IVF yourselves, and are asking now “yeah, so what?” But in 1978, you wouldn’t have been an IVF conception, you would have been a TEST TUBE BABY, and clearly the first in a line of “superbabies” just waiting to take over. I’m not exaggerating. Even James Watson, co-discoverer of DNA, was quoted as saying “All hell will break loose, politically and morally, all over the world.” But hysteria quickly turned into boredom, and the disruptive became the commonplace.
What happened with Louise Brown and IVF will be replicated across the spectrum of technologies that we now celebrate or decry as leading to our posthuman future (the title, by the way, of conservative social critic Frank Fukuyama’s book on how the technologies of human augmentation will lead to the collapse of society). Fear is replaced by familiarity. And unlike IVF, the spread of the Internet and easy communication will mean that most of us will have heard about these technologies as they develop. By the time they arrive, they’ll already be boring.
And when these artifacts hit the real world, they will come complete with the myriad insufficiencies and difficulties of real technology. Again, look at IVF: as anyone who has used it can attest, it’s not a perfect system. It’s troublesome, and frustrating, and clumsy. If nobody still thinks of IVF as being the harbinger of transformation, it’s not because it failed, but because it worked just like any other technology.
Posthumanity, from this perspective, will always be just over the horizon. Always in The Future. When the systems and augmentations we now consider to be posthuman hit the real world, they will have become simply human in scale.
That’s because augmentation – the development of systems and technologies to allow us to do and to be more than what our natural biology would allow – is intrinsic to what it means to be human. Thrown weapons expanded the range of our strength; control of fire allowed us to see in the dark; written words expanded the duration of our memories. If these all sound utterly primitive and unworthy of comment, try to imagine what it would have been like to be without them – and to find yourself competing against others equipped with them. The last hundred thousand years has been the slow history of the process of augmentation.
Our world has troubles, sure. The overall direction of the world, though, is fundamentally good; as Dan Savage said, speaking of one element of this direction, things get better. The 21st century is going to be a good one.