Artificial Superintelligence: can we keep a super-intelligent genie locked up in its bottle?

Last update: February 20, 2024
Reading time: 4 minutes
By Brain Matters

The human race is without any doubt a very successful species. We have managed to populate the entire world, from arid deserts to icy tundras. We have tamed wild animals, and with our weapons we can overpower even the biggest and strongest creatures. We have no natural enemy to fear, and we have learned to survive in extreme circumstances. All in all, you could say that we are in some sense the superior species on planet Earth. But our superior position could be in danger…

Before I can explain what is threatening our position, we should take a look at the main strength that brought us to where we are today: our intelligence. The human race, unlike other species, is able to think in a very sophisticated way. For example, we are able to remember and to look ahead, so we can learn from the past and prepare for the future. Another important aspect of intelligence is language. By using language we can share information, so others can build on knowledge that may have been acquired generations ago. These skills helped us survive and thrive. All in all, we should thank our brains for our top position in the animal kingdom. Thanks to our intelligence we managed to develop tools that became increasingly complex, from a primitive axe to a smartphone. At the moment we are on the edge of a huge technological revolution. Since the first computer was developed, many more innovations have followed, and the pace of development keeps accelerating. Technological progress is growing exponentially, which means that soon technological advances will skyrocket. Yet even though we know this, we tend to underestimate the rate at which progress is made. I’ll illustrate the reason for this underestimation with a metaphor:

“If you were to place grains on the squares of a chessboard such that the first square gets 1 grain, the second 2 grains, the third 4 grains, and so on, doubling each time: how many grains will be on the chessboard when you finish?”

The correct answer is 18 446 744 073 709 551 615 grains (that is, 2^64 − 1; the 64th square alone holds 9 223 372 036 854 775 808 grains). This is far more than you would expect, right? We see the same effect in our view of technological progress: we don’t expect it to grow at the rate that it does. This means that science-fiction themes such as super-intelligent robots that can think independently might be closer than we think. There are already robots that can ‘reproduce’ themselves without any humans involved, by using genetic algorithms. But what if these technological developments get out of hand? What if progress goes too fast and we lose our grip? What if artificial intelligence (AI) becomes more intelligent than we are? Will robots then take over our superior spot in the animal kingdom?
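The doubling above is easy to check for yourself; a quick sketch in Python, using only the numbers from the puzzle itself:

```python
# Wheat-and-chessboard arithmetic: square i (counting from 0) holds 2**i grains.
last_square = 2 ** 63                    # grains on the 64th square alone
total = sum(2 ** i for i in range(64))   # geometric series, equal to 2**64 - 1

print(f"64th square: {last_square:,} grains")
print(f"whole board: {total:,} grains")
```

The total is almost exactly twice the last square, which is the hallmark of doubling: each new square holds one grain more than everything placed before it combined.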

According to many influential scientists, such as Nick Bostrom, a philosopher of artificial intelligence, this scenario is very realistic. Computers will likely someday be more intelligent than we are. One advantage of computers is that they have, in theory, unlimited storage capacity: whole buildings can be built to store computer data, while our brains have limited storage space because they need to fit inside our skulls. One reason computers haven’t caught up with us yet is that a computer needs much more energy to function than our brain does. The brain runs on only about 20 watts (less energy than required to keep a dim light bulb lit). The fastest supercomputer, on the other hand (the USA’s Frontier supercomputer), needs about 21 million watts. Nonetheless, it’s only a matter of time before science finds a solution to the energy problem. Nick Bostrom therefore encourages fellow scientists to develop a way to keep artificial intelligence under control before it’s too late: he recommends building a safety mechanism before we create more advanced AI.

You might think: “Well, why don’t we just put an on-and-off switch on super-intelligent AI?” It’s just not that simple, and I will explain why. Technically, humans have multiple off switches: hindering the supply of oxygen by closing the trachea, stopping the blood flow by destroying the heart, or damaging the control center of the body (our brain). Although we have those off switches, it’s not easy to deactivate us, because we find ways to avoid being deactivated. Super-intelligent AI could do the same thing, and in some sense this has already started happening. For example, do you know where the off switch of the internet is? The point I am trying to make is that we might not be able to control something superior to us.
As Nick Bostrom says: “We should not be overestimating our ability to keep a super-intelligent genie locked up in his bottle forever.”
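To make the energy gap mentioned above concrete, here is a back-of-the-envelope comparison using the figures quoted in this article (both are rough approximations, not precise measurements):

```python
# Rough power comparison, using the approximate figures quoted in the text.
brain_watts = 20               # human brain, approximate
frontier_watts = 21_000_000    # Frontier supercomputer, approximate

ratio = frontier_watts / brain_watts
print(f"Frontier draws roughly {ratio:,.0f} times the power of a human brain")
```

In other words, the supercomputer needs about a million times more power than the brain it is trying to rival.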

Although many technological advancements are very valuable, it’s also important to be aware of their possible dangers. Soon there will be an explosion of technological developments; no one knows for sure when exactly it will happen or what its consequences will be. The least we can do is think through the possible consequences of super-intelligent AI and be prepared before it’s too late. We should make sure that we keep controlling AI, and prevent AI from starting to control us. Before we make a super-intelligent genie, we should build a lamp strong enough to contain it.

Author: Pauline van Gils 

Copyright © 2022 Brainmatters