By definition, the Technological Singularity is a blind spot in our predictive thinking. Futurists have a hard time imagining what life will be like after we create greater-than-human artificial intelligences. Here are seven outcomes of the Singularity that nobody thinks about — and which could leave us completely blindsided.
For the purpose of this list, I decided to maintain a very loose definition of the Technological Singularity. My own personal preference is that of an intelligence explosion and the onset of multiple (and potentially competing) streams of both artificial superintelligence (SAI) and weak AI. But the Singularity could also result in a kind of Kurzweilian future in which humanity has merged with machines. Or a Moravecian world in which our “mind children” have left the cradle to explore the cosmos, or a Hansonian society of competing uploads, featuring rapid economic and technological growth.
In addition to some of these scenarios, a Singularity could result in a complete existential shift for human civilization, like our conversion to digital life, or the rise of a world free from scarcity and suffering. Or it could result in a total disaster and a global apocalypse. Hugo de Garis has talked about a global struggle for power involving massively intelligent machines set against humanity — the so-called artilect war.
But there are some lesser-known scenarios that are also worth keeping in mind, lest we be caught unawares. Here are seven of the most unexpected outcomes of the Singularity.
1. AI Wireheads
It’s generally assumed that a self-improving artificial superintelligence (SAI) will strive to become progressively smarter. But what if cognitive enhancement is not the goal? What if an AI just wants to have fun? Some futurists and scifi writers have speculated that future humans will engage in the practice of wireheading — the artificial stimulation of the brain to experience pleasure (check out Larry Niven’s Known Space stories for some good examples). An AI might conclude, for example, that optimizing its capacity to experience pleasure is the most purposeful and worthwhile thing it could do. And indeed, evolution guides the behavior of animals in a similar fashion. Perhaps a transcending, self-modifying AI will not be immune to similar tendencies.
At the same time, an SAI could also interpret its utility function in such a way that it decides to wirehead the entire human population. It might do this, for example, if it was pre-programmed to be “safe” and to consider the best interests of humans, thus taking its injunction to an extreme. Indeed, an AI could end up with a badly botched value system, concluding that maximizing pleasure is the highest possible utility for itself and for humans.
As an aside, futurist Stephen Omohundro disagrees with the AI wirehead prediction, arguing that AIs will work hard to avoid becoming wireheads because wireheading would be harmful to their goals.
2. “So long and thanks for all the virtual fish”
Imagine this scenario: The Technological Singularity happens — and the emerging SAI simply packs up and leaves. It could just launch itself into space and disappear forever.
But in order for this scenario to make any sense, an SAI would have to conclude, for whatever reason, that interacting with human civilization is simply not worth the trouble; it’s just time to leave Earth — Douglas Adams’ dolphin-style.
3. The Rise of an Invisible Singleton
It’s conceivable that a sufficiently advanced AI (or a transcending mind upload) could set itself up as a singleton — a hypothetical world order in which there is a single decision-making agency (or entity) at the highest level of control. But rather than make itself and its global monopoly obvious, this god-like AI could covertly exert control over the human population.
To do so, an SAI singleton would use surveillance (including reliable lie detection) and mind-control technologies, communication technologies, and other forms of artificial intelligence. Ultimately, it would work to prevent any threats to its own existence and supremacy, while exerting control over the most important parts of its territory, or domain — all the while remaining invisible in the background.
4. Our Very Own Butlerian Jihad
Another possibility is that humanity might actually defeat an artificial superintelligence — an outcome unexpected simply because of its sheer improbability. No doubt, once a malign or misguided SAI (or even a weak AI) gets out of control, it will be very difficult, if not impossible, to stop. But humanity, perhaps in conjunction with a friendly AI, or by some other means, could fight back and find a way to beat it down before it can impose its will on the planet and human affairs. Alternately, future humans could work to prevent it from coming about in the first place.
Frank Herbert addressed these possibilities in the Dune series by virtue of the “Butlerian Jihad” — a cataclysmic event in which the “god of machine logic” was overthrown by humanity and a new fundamental tenet invoked: “Thou shalt not make a machine in the likeness of a human mind.” The Jihad resulted in the destruction of all intelligent machines and the rise of a new feudal society. It also resulted in the rise of the mentat order — humans with extraordinary cognitive abilities who functioned as virtual computers.