For those concerned about the future, there is a lot to worry about. Nuclear war, bioterrorism, asteroids, artificial intelligence, runaway climate change – the list goes on. All of these have the potential to devastate humanity. How, then, to pick which one is most important to work on? I want to point out a reason to work on machine intelligence even if one thinks there is a low probability of the technology working.
Preventing catastrophes like nuclear war would avoid human extinction and keep us on the path of growth and eventual space colonisation. However, it is unclear how pleasant that world would be for its inhabitants. If a singleton (that is, “a single decision-making agency … exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy”) does not develop, the logic of survival means that we will eventually regress to a competitive Malthusian world. That is a world in which vast numbers of beings compete for survival on subsistence incomes, as has been the case for most creatures on Earth since life first appeared billions of years ago. The creatures working to survive could be mind uploads or something else entirely. In this scenario it is competitive pressure and evolution that determine the long-run outcome, and there will be little if any path dependence. Just as a group of people placed on Earth millions of years ago could not have determined the welfare of the beings that exist today, once evolution had had its way, so too it will be impossible for anyone today to determine which kinds of creatures win the battle for survival millions of years from now. The only impact we could have now would be to reduce the risk of life disappearing altogether during this brief bottleneck on Earth, where extinction is a real possibility. The difference between the best and worst possible futures is the difference between the desirability of life disappearing altogether and the desirability of a Malthusian world.
As competitive pressures do not necessarily drive creatures towards states of high wellbeing, it is hard to say which of these is the better outcome. I hope that technology which allows us to consciously design our minds and therefore our experience of life will lead to a nicer outcome even in the presence of competitive pressures, but that is hard to predict. Whatever the merits of the competitive future, it falls short of what a benevolent, all-powerful being trying to maximise welfare would choose.
On the other hand, if a singleton is possible or inevitable, the difference between the best and worst futures is much greater. The desires of the singleton which comes to dominate Earth will be the final word on what Earth-originating life goes on to do. It will be free to create whatever utopia or dystopia it chooses, without competitors or restrictions other than those posed by the laws of physics. In this world it is possible to influence what happens millions or billions of years from now, by influencing the kind of singleton which takes over and spreads across the universe. The difference in desirability between the best and worst cases is that between an evil singleton which unrelentingly spreads misery across the universe, and an ideal benevolent singleton which goes about turning the entire universe into the things you most value.
If you think there is much uncertainty about whether a singleton is possible, and you want to maximise your expected impact on the future, you should act as though you live in a world where it is possible: because the stakes in that world are so much higher, it dominates the expected value of your actions. You should only ignore those scenarios if they are very improbable.
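To make the arithmetic behind that claim concrete, here is a minimal sketch of the expected-value comparison. Every number in it is an assumption invented purely for illustration, not an estimate argued for in this post.

```python
# Toy expected-value comparison. All numbers are illustrative assumptions,
# not estimates defended in the post.

p_singleton_possible = 0.10              # assumed chance we live in a world where a singleton is possible
value_of_shaping_singleton = 1000.0      # assumed long-run value of influencing which singleton arises
value_of_competitive_world_work = 1.0    # assumed value of marginal work if outcomes are evolution-driven

# Working on singleton scenarios only pays off if a singleton is in fact possible.
ev_singleton_work = p_singleton_possible * value_of_shaping_singleton

# Work aimed only at the competitive, Malthusian branch pays off in the other case.
ev_competitive_work = (1 - p_singleton_possible) * value_of_competitive_world_work

print(f"EV of singleton-focused work:   {ev_singleton_work}")    # 100.0
print(f"EV of competitive-focused work: {ev_competitive_work}")  # 0.9
```

Under these made-up numbers the singleton branch dominates even at a 10% probability; the ordering only flips once the probability becomes very small relative to the ratio of the stakes.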
What technology is most likely to deliver us a singleton in the next century or two, and so give you a chance to have a big impact on the future? I think the answer is a generalised artificial intelligence, though one might also suggest a non-AI group achieving total dominance through mind uploads, ubiquitous surveillance, nanotechnology, or some other emerging technology.
So if any of you are tempted to dismiss the Singularity Institute because the runaway AI scenario seems so improbable: you shouldn’t. It makes sense to work on the problem even if the scenario is unlikely. The same goes for those who focus on the possibility of an irreversible global government.
Update: I have tried to clarify my view in a reply to Carl Shulman below. My claim is not that the probability is irrelevant, just that it is only part of the story, and that working on low-probability scenarios can be justified if you can have a much larger impact, which I believe is the case here. Nor do I, or many people working on AI, believe that an intelligence explosion scenario is particularly unlikely.
