Google's Artificial Intelligence researchers at DeepMind have proposed that a ‘kill switch' be built for AI to prevent machines from outsmarting us and taking over. The ideaΒ is to create a way to “repeatedly safely interrupt” an algorithm, according to a new paper created in association with the University of Oxford (Via: Wired).
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for”.
The paper was penned by Laurent Orseu from DeepMind and Stuart Armstrong from the Future of Humanity Institute and it explains that an interruption policy could act as a safe guard to safely stop an AI machine from performing a task.
The paper also explains that Reinforcement learning algorithms normally work in complex ‘real world' environments and are unlikely to work as intended each and every time, so a stop button may be necessary for human operators. One example of an algorithm not working as intended is Google's plan to teach an AI how to not lose at Tetris but instead of playing the game, the AI would pause it to avoid losing.
There is the possibility of an AI learning how to disable its own stop button but to counter that, the hope is to make human interruptions of algorithms not appear as being part of the task at hand. However, it is unclear if all algorithms can be made safely interruptable.
Discuss on our Facebook page, HERE.Β
KitGuru Says: Given how scary AI could potentially be, it makes sense to implement safe guards like this to keep AI from getting ideas of its own.Β
On a real, is the kind of artificial intelligence google etc are playing around with even able to become smart and dangerous enough to constitute the need for a kill switch? Sure, they can learn, but to gain consciousness and destroy systems sounds a little TOO sci-fi unless you specifically program the artificial intelligence to learn in that way (And even then, it would probably only be pseudo-consciousness)
This can also be dangerous on its own. When A.Is do gain the ability to “think” and we still have the kill switch… yeah, if your entire population has a kill switch implemented on you… that never ends well.
just pull the plug
watching too much Person of Interest or what is going on here? π
i’m sure the robot overlords won’t mind that we had them at gunpoint like that before they disable it. i’m sure the explosive collars we’ll all wear will not be used in retribution.
^^^^There’s that b!tch making over $10000 a month. I wondered when she would show up…
That’s the whole point though, we’ll be blindsided by the AI that we THOUGHT was never intended to become that smart…the threat isn’t always from where you most expect it…