“Superintelligence” by Nick Bostrom has become one of the most influential books on the subject of runaway superintelligence in AI, and on how AI might be contained so that it doesn’t lead to cataclysmic outcomes for humanity.
It is certainly a thought-provoking book, and one question I find myself asking after reading it, a question that ties to the book’s central premise, is whether there is a limit to superintelligence, or to intelligence in general.
The book seems to suggest that once a computer achieves above-human intelligence, it will be able to upgrade itself indefinitely in a singularity moment and achieve some sort of godhood. I wonder, though, whether there is an upper limit to intelligence, or a point beyond which additional “intelligence” no longer leads to better outcomes than lesser intelligence.
The book suggests that a superintelligent mind may not even be comprehensible to those of us with lesser intelligence (i.e. humans), but I don’t think that’s entirely true. Let’s start to tease out what the boundaries of intelligence could be.
Talking about intelligence at all is tricky because it is often a poorly defined word. One commonly used rubric of intelligence is the IQ test, which measures things such as general knowledge, reasoning, and problem-solving ability. It is an imperfect definition, but it is useful for describing at least some of the qualities that people consider “intelligence”.
So a superior, godlike intelligence must possess superior knowledge of the world and superior reasoning abilities. The act of thinking involves both of these qualities: we create a model (or models) of the world based on our experience and knowledge, we try to predict what the world will look like in the future if we take a specific action, and we then choose the most likely or desirable action.
Seen in these terms, limits to intelligence start to become visible.
First of all, in terms of knowledge, there are definite limits to what is knowable about the world (at least according to our current understanding of physics).
At the macro scale, we know very little about the universe that we live in. Some knowledge can be gained through further exploration, but there are vast reaches of the universe which are unreachable to us due to the expanding nature of the universe and the limitations of the speed of light.
Even if a superintelligence were to emerge, it would not suddenly have knowledge of the whole cosmos, and even if it dedicated the entirety of its existence to exploring and cataloguing the universe, the vast majority of it would remain unknowable.
Humanity and all of its future descendants, whether biological or synthetic, are limited to a small radius of the universe by the laws of physics.
Of course, we can theorize about what lies beyond our physical reach using theories and simulations, but that leads to the second limitation of knowledge: the subatomic world of quantum mechanics.
Again, this is only our current understanding of the universe, but in the quantum realm there are physical limits to what can be known. The quantum world behaves in chaotic, seemingly random ways, with particles appearing and disappearing from existence unpredictably.
There is also a law known as the uncertainty principle, which prevents us from knowing both the position and the momentum of a particle with absolute accuracy.
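For reference, the uncertainty principle is usually written as a simple inequality: the uncertainty in a particle’s position (Δx) multiplied by the uncertainty in its momentum (Δp) can never be smaller than a fixed constant of nature:

```latex
\Delta x \, \Delta p \geq \frac{\hbar}{2}
```

where ħ is the reduced Planck constant. No amount of intelligence or measurement precision gets around this bound; it is a property of the universe itself.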
Why is what happens at the quantum scale important? Because in order to build an accurate model of the universe and simulate it, one needs to first understand how it works. But it appears (again according to our current understanding of the universe) that the fundamental building blocks of the universe, to a certain extent, are unknowable.
Recall that in order to think and strategize, we need to gather knowledge about the world, create a model, and simulate possible future outcomes.
But because there are limits to what can be known, limits imposed not by human shortcomings but by the physics of the universe itself, the accuracy of the models we can create is necessarily limited. There are absolute limits on what can be known at both the macro and the micro scale.
This places a limit on the thinking ability of any superintelligent system. Its models will be imperfect, which means any simulations it runs (however quickly) will be flawed. There are things about the world that can only be known in terms of probabilities, and those probabilities can be wrong.
This becomes especially problematic for understanding and reasoning about complex systems because of chaos theory: future outcomes are highly sensitive to initial conditions thanks to interconnections, feedback loops, and other interrelated mechanisms.
This is best encapsulated in the Butterfly Effect, which is the idea that a single butterfly flapping its wings in one part of the world can lead to a storm occurring in another.
Although likely not literally true, it points to the chaotic nature of systems such as the weather, which makes them impossible to predict with full accuracy: the weather is so sensitive to initial conditions that we cannot build models precise enough to represent it.
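As a toy illustration (this is not a weather model, it just makes the sensitivity concrete), here is the classic logistic map iterated from two starting points that differ by one part in a billion; within a few dozen steps the two trajectories bear no resemblance to each other:

```python
# Toy demonstration of sensitive dependence on initial conditions.
# We iterate the logistic map x -> r * x * (1 - x), a textbook chaotic
# system, from two starting values that differ by only one part in a billion.

r = 4.0              # parameter value where the logistic map is fully chaotic
x_a = 0.400000000    # first initial condition
x_b = 0.400000001    # second initial condition, off by 1e-9

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  diff={abs(x_a - x_b):.6f}")
```

The tiny initial difference roughly doubles at every step, so after about thirty iterations it has grown to the same size as the values themselves, and no finite measurement precision can postpone that divergence forever.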
The map is not the territory as they say, and no approximation or simulation of reality can accurately model or predict future states of the world with 100% accuracy.
This means that no matter how superintelligent an AI system is, it will never be able to predict with 100% accuracy:
- The weather
- The stock market
- The fate of the universe
- Human behavior
Seeing these limits to superintelligence brings some comfort: no matter how intelligent a system is, it won’t always be right, even if it does better than any single human at these tasks most of the time.
Of course, humans don’t just make decisions alone; they form organizations like businesses and governments, build models of the world collaboratively, and reach decisions through debate and discourse. These structures create inefficiencies, politics, and other pitfalls, but collective human intelligence can achieve higher levels of thought than a single human brain.
There can even be benefits to having competing agents (as in capitalism) and different perspectives within a single system, which could be superior to a single superintelligent system. Of course, an AI could simulate multiple agents and create different perspectives within itself, but then it could also fall into the same pitfalls as human organizations (imagine an election campaign or a war happening within the mind of a machine).
So we start to see there are some limits to superintelligence, but beyond that there may be limitations to how useful intelligence is in itself.
Going back to the IQ test: the test correlates with academic success to a certain extent, but it correlates far more weakly with broader life outcomes.
There are people with below-average IQs who nonetheless become successful in life and masterful at what they do, and there are people with above-average IQs who don’t achieve much with their lives.
Intelligence can also lead to success in some areas of life but not others. An intelligent student might do well academically but be socially awkward, have poor health habits, or perform poorly at athletic or artistic tasks. The typical nerd stereotype is a testament to this.
So clearly intelligence alone is not a guarantor of fabulous fame, fortune and power.
A superintelligent AI system (or a human using one) likewise may not automatically achieve world dominance just because it is drastically more intelligent than any human.
Intelligence can help people achieve goals, but it’s not the only prerequisite. Motivation, relationship building, brand awareness, creating things that other people want, luck: these are some of the necessary ingredients for actually having an impact on the world, not just smarts.
There could even be pitfalls to being too intelligent. Thought patterns and strategies that are productive in one area of life may be less productive, or even counterproductive, in other areas. Of course a superintelligent system can switch between different strategies (just as humans do), but there could be costs to juggling and switching between them.
There is also paralysis by analysis. Overthinking is a common problem among highly intelligent humans; a superintelligence could likewise get stuck in a loop of calculating all the probabilities and weighing the possible outcomes of an action, leading to inaction.
Of course, AI systems can run at speeds that greatly exceed human thought. But there are limits here as well, because the physical world and many of the systems we have built are constrained by physical laws. An AI system may be able to think in one minute what would take a human a year, but there are only so many actions per minute (to borrow a gaming term) that it can take in the real world.
So those are my thoughts on some of the limitations of superintelligence. They don’t completely negate the danger that a superintelligent system could pose to humanity (breaking the Internet or launching nukes, for example), but they do suggest that even advanced AI might not achieve god-level powers, and that there are limits to superintelligence.