Artificial Intelligence (AI) is generally good at one specific task, whereas Artificial General Intelligence (AGI) could be applied to a wide variety of problems. There is also growing concern about something called the singularity, the point at which AI becomes superintelligent and takes control of its own destiny.

DeepMind

DeepMind is a company (owned by Google) that is developing AGI. It first built an AI that could play some basic Atari games, then moved on to what was considered almost impossible for AI: the game of Go. After that it tackled another seemingly impossible challenge, protein folding.

What they have now developed are AI models that can learn a game even when they haven't been given the rules, let alone shown how to play it. In other words, a model can learn a game from scratch, without human interference or guidance. This is where cutting-edge AI is today, and it is getting much closer to AGI. (Task-specific AI of this kind is also known as ANI, artificial narrow intelligence.)
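To give a rough intuition for how an agent can learn with no rules supplied up front, here is a minimal trial-and-error sketch in Python. It is purely my own toy example, not DeepMind's actual method, and all the numbers in it are made up: the agent only ever sees rewards for its actions, and gradually discovers which action pays off best.

```python
import random

# Toy trial-and-error learner (illustrative only; real game-playing
# agents are vastly more sophisticated). The agent knows nothing about
# the "game": it just tries actions and updates its estimate of each
# action's value from the reward it receives.

true_payouts = [0.2, 0.5, 0.8]   # hidden from the agent (assumed values)
estimates = [0.0, 0.0, 0.0]      # the agent's learned values
counts = [0, 0, 0]

for step in range(10_000):
    # Mostly pick the best-known action, sometimes explore at random.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # Incremental average: the estimate converges to the true payout.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(e, 2) for e in estimates])
```

The agent ends up preferring the best action despite never being told the rules or the payouts; that, in miniature, is learning from scratch.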

Artificial Super Intelligence

AI is already here and developing at a very fast rate, and some believe that AGI may already be here to some degree. That would mean it has abilities similar to a human's: the ability to learn new tasks rather than simply be programmed to perform them.

Extrapolating this notion, people are considering the possibility that one day AI will be able to rewrite its own code, improving itself without human intervention and without us even knowing about it. If so, there might be no stopping it; it wouldn't need anyone to programme it.

The concern is that it could overtake human intelligence and become what we would class as superintelligent, that is, more intelligent than humankind collectively. What that would look like and what it would mean for us is anyone's guess. Fear points to a dystopian future, while others think it will lead to a utopia in which AI solves all of mankind's problems. Still others think it will never really be achieved, or that if it is, it lies a long way off in the future.

Evolving exponentially

This is the controversial point at which AI, in some form or other, becomes superintelligent: the point where we have no control over it. Some describe it as a runaway intelligence, endlessly evolving and improving at an exponential rate. The unpredictability of what it might do next is what scares people, including those at the forefront of the AI industry. Remember, even the experts still don't really understand what is happening inside these large AI models.
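To make the word "exponential" concrete, here is a toy calculation in Python. The starting capability and the 10% improvement rate are invented numbers for illustration, not a model of any real system: the point is only that a fixed compounding rate grows geometrically.

```python
# Toy illustration of compounding self-improvement (hypothetical
# numbers, not a model of any real AI system).
capability = 1.0          # arbitrary starting capability
improvement_rate = 0.10   # assume each cycle improves capability by 10%

for cycle in range(1, 51):
    capability *= 1 + improvement_rate
    if cycle % 10 == 0:
        print(f"cycle {cycle:2d}: capability = {capability:8.1f}x baseline")
```

After 50 cycles the toy system is over a hundred times its baseline. Any fixed compounding rate eventually outruns steady, linear progress, which is why "runaway" is the word people reach for.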

It often does things that surprise the scientists working on it, and that is with today's comparatively basic AI. Imagine if we programmed it to write its own code: we would have no chance of understanding what is going on inside. It is not called a black box for nothing. Then again, we barely understand how our own brains work!

But when?

When (or if) they surpass us, then what? Will they be benign, or could they perceive us as a threat? Who knows. This is what people are fearful of: the unknown. Yet this unknown seems to be racing towards us. The question is how you define intelligence, being human, being self-aware and so on. We understand ourselves so little that we end up worrying about something we didn't understand in the first place.

It could happen tomorrow, next century or never. My personal opinion is that we will one day create something that could be called sentient or self-aware, but I suspect it isn't going to happen very soon, especially when you consider that the human brain has billions of neurons and even the largest models cannot compete with anything like that.

It may be smart but to have feelings, emotions and desires is quite another thing. 

Photo by Andy Kelly on Unsplash

Wikipedia has an article on this for further reading: Singularity