
Super-Intelligent AI Would Be Impossible For Humans To Control, Scientists Warn


Humans have been fascinated by AI for years. We are amused by machines that can drive cars, cook delicacies, clean our homes, compose symphonies, and even beat the world’s best chess players at their own game.

Recent advancements in the capabilities of Artificial Intelligence (AI) have raised several ethical questions.

One of the most concerning is whether a time will come when humans lose control over autonomous machines.

Questions and concerns about an AI uprising might sound like the plot of a big-budget sci-fi movie. However, they are also the subject of a recent research paper arguing that the scenario is possible.

The paper, published recently in the Journal of Artificial Intelligence Research, examines how science and technology could be heading down a path toward something like Stem, the fictitious AI from the 2018 movie Upgrade.

Imagine, as an example, that a super-intelligent AI system decides the only way to save the planet, or end human suffering, is to end humanity.

What do you think would stop such a system from executing that decision?

Skynet (From the movie – Terminator)

That question was posed by researchers at the Max Planck Society in their recently published paper.

According to the findings recorded in the paper, if such a scenario ever plays out, then given present technology, we are screwed!

Manuel Cebrian, co-author of the study, and leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, said in a statement that:

“A super-intelligent machine that controls the world sounds like science fiction, but there are already machines that carry out certain important tasks independently, without the programmers fully understanding how they learned it […], a situation that could at some point become uncontrollable and dangerous for humanity.”

In the paper, the researchers explored two possible ways to prevent a Skynet-style AI takeover.


One was to isolate the system completely: cut it off from the Internet, stop it from communicating with other devices, and limit its contact with the outside world.

While this could stop the apocalypse, the problem is that such a restriction would also greatly reduce the AI’s ability to perform the functions for which it was created.


The other way was to design a “theoretical containment algorithm” which would ensure that an Artificial Intelligence system “cannot harm people under any circumstances.”

However, an analysis of the current computing paradigm showed that no such algorithm can be created: deciding whether an arbitrary program will ever cause harm runs into the same fundamental barrier as Alan Turing’s famous halting problem.

Another researcher on the project, Iyad Rahwan, explained that:

“If we decompose the problem into basic rules of theoretical computing, it turns out that an algorithm that instructed an AI not to destroy the world could inadvertently stop its own operations.

“If this happened, we would not know if the containment algorithm would continue to analyze the threat, or if it would have stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
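The impossibility Rahwan describes echoes a classic diagonalization argument from theoretical computing. The toy Python sketch below is a drastic simplification of the paper’s actual proof, with entirely hypothetical names, but it shows the core trick: any claimed “harm decider” can be handed a program deliberately built to do the opposite of whatever the decider predicts, so no decider can be right about every program.

```python
def make_trickster(decider):
    """Build a program engineered to defeat a claimed harm decider:
    it asks the decider about itself, then does the opposite."""
    def trickster():
        if decider(trickster) == "SAFE":
            return "HARM"   # decider predicted safe -> act harmfully
        return "SAFE"       # decider predicted harm -> act safely
    return trickster

# Two naive candidate deciders (stand-ins for any containment algorithm):
def always_safe(program):
    return "SAFE"

def always_harmful(program):
    return "HARM"

# Whatever a decider predicts, its trickster does the opposite:
t1 = make_trickster(always_safe)
t2 = make_trickster(always_harmful)
print(always_safe(t1), t1())      # SAFE HARM -> prediction wrong
print(always_harmful(t2), t2())   # HARM SAFE -> prediction wrong
```

The same construction works against *any* decider passed to `make_trickster`, which is why the researchers conclude a perfect containment algorithm cannot exist, rather than merely being hard to write.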


All of this is still theoretical, of course; we seem a long way from building systems with such super intelligence.

However, figures like Stephen Hawking and Elon Musk have made statements pointing to the likelihood of such sci-fi scenarios.

Professor Stephen Hawking said it is a “near certainty” that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.

Tesla’s CEO, Elon Musk, also described AI as our “biggest existential threat” and likened its development to “summoning the demon”.

Mr Musk also pointed out that even the world’s best minds working on mitigating the threat of AI today will have the relative intelligence of a chimpanzee when compared with the super-intelligent machines of the future.

To make the research even more concerning, the scientists added that humans would most likely have no idea when they had finally created a system with such super intelligence.
