
Calculations show that it will be impossible to control superintelligent AI



The idea of artificial intelligence overthrowing humanity has been debated for decades, and scientists have just delivered their verdict on whether we could control a high-level computer superintelligence. The answer? Almost definitely not.

The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as “cause no harm to humans” cannot be set if we do not understand the kind of scenarios an AI might come up with, the authors of the new paper suggest. Once a computer system is working on a level beyond the scope of our programmers, we can no longer set limits.

“Superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.

“This is because a superintelligence is multifaceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some clever mathematics, while we can know whether certain specific programs will halt, it is logically impossible to find a method that can tell us this for every potential program that could ever be written. That brings us back to artificial intelligence, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop an artificial intelligence from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it is mathematically impossible for us to be absolutely sure either way, which means it is not containable.
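To make that logic concrete, here is a minimal Python sketch (illustrative only, not code from the study) of the classic diagonalization argument behind Turing’s result. The `halts` function is a hypothetical oracle; the contradiction produced by `paradox` shows that no such general checker – and hence no perfect containment check built on one – can exist.

```python
# Illustrative sketch of the halting-problem argument (hypothetical names).
# `halts` is an assumed oracle; the point is that it cannot actually be written.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical oracle: True if running `program_source` on `input_data`
    would eventually halt, False if it would loop forever.
    Turing's 1936 result shows no such always-correct function exists."""
    raise NotImplementedError("No algorithm can decide this for all programs.")


def paradox(program_source: str) -> None:
    """Feed a program its own source and do the opposite of what the oracle predicts."""
    if halts(program_source, program_source):
        while True:   # oracle says we halt, so loop forever instead
            pass
    return            # oracle says we loop forever, so halt immediately

# Running `paradox` on its own source code contradicts the oracle either way,
# so the assumed `halts` checker cannot exist for arbitrary programs.
```

The researchers’ point is that a perfect containment algorithm would have to answer exactly this kind of question – will this program, whatever it is, ever take a harmful action? – for arbitrary programs, which is what Turing showed is impossible in general.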

“In effect, this makes the containment algorithm unusable,” said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching artificial intelligence some ethics and telling it not to destroy the world – something no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence – the argument being that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the directions we are heading in.

“A super-intelligent machine that controls the world sounds like science fiction,” said computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently, without programmers fully understanding how they learned it.”

“This raises the question of whether this can at some point become uncontrollable and dangerous to humanity.”

The study was published in the Journal of Artificial Intelligence Research.

