Humans have been fascinated by AI for years. We are amazed by machines that can drive cars, cook delicacies, clean our homes, compose symphonies, and even beat the world’s best chess players at their own game.
Recent advances in the capabilities of Artificial Intelligence (AI) have raised several ethical questions.
One of the most pressing is whether there will come a time when humans lose control over autonomous machines.
Concerns about an AI uprising might sound like the plot of a big-budget sci-fi movie, but the scenario is also the subject of serious research arguing that it is possible.
The paper, recently published in the Journal of Artificial Intelligence Research, examines how current science and technology could follow a path leading to something like Stem, the fictitious AI from the 2018 movie Upgrade.
Imagine, for example, a super-intelligent AI system that decides the only way to save the planet, or end human suffering, is to end humanity.
What would stop such a system from acting on that decision?
That question was posed by researchers at the Max Planck Society in their recently published study.
According to the findings recorded in the paper, if that were to happen with present technology, we would, to put it bluntly, be out of luck.
Manuel Cebrian, co-author of the study and leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, said in a statement:
“A super-intelligent machine that controls the world sounds like science fiction, but there are already machines that carry out certain important tasks independently, without the programmers fully understanding how they learned it […], a situation that could at some point become uncontrollable and dangerous for humanity.”
In the paper, the researchers examined two possible ways to prevent a Skynet-style AI takeover.
One was to completely isolate the system from the Internet, stop it from communicating with other devices, and limit its contact with the outside world.
While this might avert catastrophe, the problem is that such restrictions would greatly reduce the AI’s ability to perform the functions for which it was created.
The other way was to design a “theoretical containment algorithm” which would ensure that an Artificial Intelligence system “cannot harm people under any circumstances.”
However, an analysis of the current computing paradigm showed that no such algorithm can be created.
Another researcher on the project, Iyad Rahwan, explained that:
“If we decompose the problem into basic rules of theoretical computing, it turns out that an algorithm that instructed an AI not to destroy the world could inadvertently stop its own operations.”
“If this happened, we would not know if the containment algorithm would continue to analyze the threat, or if it would have stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
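The reasoning Rahwan describes traces back to Turing’s halting problem: a containment algorithm would have to decide, for an arbitrary program, whether running it leads to harm, and no general decision procedure of that kind can exist. A minimal Python sketch of the classic diagonalization argument (the function names `halts` and `paradox` are illustrative, not from the paper) looks like this:

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) eventually halts.
    The proof assumes such a total decision procedure exists, then derives
    a contradiction; so no real implementation is possible."""
    raise NotImplementedError("no general halting decider can exist")

def paradox(program):
    """Does the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    return            # halt immediately if the oracle says "loops"

# Feeding `paradox` its own source forces the contradiction:
# if halts(paradox, paradox) returns True, paradox(paradox) loops forever;
# if it returns False, paradox(paradox) halts. Either answer is wrong,
# so `halts` cannot be implemented -- and neither can a containment
# algorithm that must predict whether an arbitrary AI will cause harm.
```

The same diagonal trick applies if `halts` is replaced by a hypothetical `causes_harm` checker: the AI could embed the checker’s verdict in its own behavior and invert it.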
For now, this remains theoretical, as we still seem a long way from building systems with such super-intelligence.
Even so, figures like Stephen Hawking and Elon Musk have made statements pointing to the likelihood of such sci-fi scenarios.
Professor Stephen Hawking said it is a “near certainty” that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
Tesla’s CEO, Elon Musk, has described AI as our “biggest existential threat” and likened its development to “summoning the demon”.
Mr Musk has also suggested that even the world’s best minds working on mitigating the threat of AI today will have the relative intelligence of a chimpanzee compared with the super-intelligent machines of the future.
To make the research even more concerning, the scientists added that humans would most likely have no idea when they had finally created a system with such super-intelligence.