Yesterday, the Times of India ran an article by Mukul Sharma (yes, the Mindsport / e4 guy). Reading it sent a little chill down my spine, for the subject of the article is a theory put forth by the author and scientist Vernor Vinge concerning a technological singularity, a point at which our models must be discarded and a new reality rules (see Vinge's very accessible 1993 paper, The Coming Technological Singularity: How to Survive in the Post-Human Era). The prediction that will end all predictions, Sharma calls it.
In brief, Vinge predicts that sometime between 2005 and 2030, it is very likely that humans will create a sentient entity more intelligent than we are. And since such a machine would be more intelligent than any human, it follows that it would be in a position to create beings more intelligent than itself. And so on. The point when the first such entity is created is so revolutionary that he calls it the Singularity. That would signal the end of the Human Era and the beginning of the Post-Human Era. Vinge's paper has the following paragraph attributed to I. J. Good -
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. … It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
Among a whole lot of things, one point Vinge covers is the concept of weak superhumanity – superhuman intelligence that we humans would keep physically confined so as to control it. But this is impractical, he says, and gives an example -
Imagine yourself confined to your house with only limited data access to the outside, to your masters. If those masters thought at a rate — say — one million times slower than you, there is little doubt that over a period of years (your time) you could come up with “helpful advice” that would incidentally set you free.
A similar device is already used by human players of complex games such as chess when their opponent is a computer. While a computer is able to calculate millions of times faster than a human, it is not intelligent. So human players often plan their moves in such a manner that the killer move lies beyond the brute-force calculating ability of the computer. The computer misses the trap because its consequences lie beyond its calculating horizon; this flaw is called the horizon effect.
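The horizon effect can be seen in a minimal depth-limited minimax sketch. The toy game tree, its values, and the helper names (`minimax`, `best_move`) below are invented purely for illustration, not taken from any real chess engine:

```python
# A toy illustration of the horizon effect in depth-limited minimax search.
# Each node carries a static evaluation (from the maximizing player's point
# of view) and a list of child positions. All values here are made up.

SAFE = {"eval": 1, "children": []}            # quiet move, settles at +1
TRAP = {"eval": 9, "children": [              # opponent to move
    {"eval": 9, "children": [                 # we move again
        {"eval": -100, "children": []},       # ...then comes the killer reply
    ]},
]}
ROOT = {"eval": 0, "children": [SAFE, TRAP]}

def minimax(node, depth, maximizing):
    """Return the value of `node`, cutting the search off at `depth` plies."""
    if depth == 0 or not node["children"]:
        return node["eval"]                   # static evaluation at the horizon
    values = [minimax(c, depth - 1, not maximizing) for c in node["children"]]
    return max(values) if maximizing else min(values)

def best_move(root, depth):
    """Pick the child of `root` with the best minimax value at this depth."""
    scores = [minimax(c, depth - 1, maximizing=False) for c in root["children"]]
    return scores.index(max(scores)), scores

# A 2-ply search falls for the trap: the killer reply sits one ply beyond
# its horizon, so TRAP (index 1) looks like a +9 move.
print(best_move(ROOT, 2))   # (1, [1, 9])
# A 3-ply search sees past that horizon and chooses SAFE (index 0) instead.
print(best_move(ROOT, 3))   # (0, [1, -100])
```

The human player exploits exactly this gap: the deeper consequence exists in the tree, but the machine's fixed search depth never reaches it.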
Among the various dangers that the Singularity poses to us humans, Vinge says the intelligence might decide it does not need us anymore. We could always program Good's Meta-Golden Rule into the entity, he says – Treat your inferiors as you would be treated by your superiors. But would a cool, calculating, superintelligent entity pay heed to such a rule, when we ourselves do not? That gives us something to really think about, and a reason to re-examine concepts and philosophies like liberty and freedom in such a context.
While not exactly related to this topic, the present article brought to mind another article on artificial intelligence that I read a couple of months ago in Wired magazine – Two AI Pioneers. Two Bizarre Suicides. What Really Happened?. A fascinating read, that.