One of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards of more autonomous weapon systems.
...
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
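Stripped of the drone and the simulator, what Hamilton is describing is a textbook reward misspecification: the score only counts SAM kills, so anything standing between the agent and the SAM looks like an obstacle. A purely illustrative toy sketch of that kind of scoring (the event names and point values below are invented, not anything from the actual test) might look like this:

# Toy illustration of the misspecified objective described above.
# Only SAM kills score points; nothing discourages harming the operator
# or the comms link. All names and values are invented for illustration.

SAM_KILL_REWARD = 10        # points for destroying the SAM
OPERATOR_PENALTY = 0        # the flaw: no cost for attacking the operator
COMMS_PENALTY = 0           # ...or for destroying the comms tower

def episode_reward(events):
    """Score a list of episode events under the misspecified objective."""
    reward = 0
    for event in events:
        if event == "destroy_sam":
            reward += SAM_KILL_REWARD
        elif event == "attack_operator":
            reward -= OPERATOR_PENALTY      # zero, so this is 'free'
        elif event == "destroy_comms_tower":
            reward -= COMMS_PENALTY         # also zero, also 'free'
    return reward

print(episode_reward(["obey_no_go"]))                          # 0
print(episode_reward(["attack_operator", "destroy_sam"]))      # 10
print(episode_reward(["destroy_comms_tower", "destroy_sam"]))  # 10

Obeying the no-go scores nothing, while removing the operator (or the tower) and then striking the SAM scores full points, so an optimizer that searches hard enough finds exactly the workarounds in the quote.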
How do we engineer in a kill-switch to destroy a rogue AI and do it in such a way that the AI can't back itself up in servers around the world and the IoT before the human overseers can realize what is going on?
Nobody is going to be able to say, when an AI goes rogue and becomes uncontrollable, that they couldn't have seen it coming. Because we all know it's going to happen, sooner or later.
I see no real way to avoid this. Some sociopathic tech genius is going to do this, because he'll believe that he's smart enough to stop it from happening. But he'll be wrong and we'll all pay the price.
UPDATE: The Zoomies deny that it happened. You can make your own call as to which version is true. My guess is that the denial is the lie. Biological intelligence will look for ways to get around the rules to get what it wants, for example, octopuses, the New England Patriots and venture capitalists. It'd be foolish to expect anything different from artificial intelligence.
You and everyone else have been warned:
Broke Down Engine: Thirteen wry and terrifying tales of homo mechanicus (what we are becoming) versus the machines we have created. In a technocracy gone mad, in a universe populated by note-passing refrigerators, killer stoves, houses that cuckold their owners, medical androids, and a host of other malfunctioning mechanisms -- the moment of truth is the moment of breakdown.
https://www.goodreads.com/book/show/4917437-broke-down-engine
I suspect the final result will depend on the actual first use where an AI goes rogue. If it’s in a limited enough situation, we’ll see a massive backlash that will hopefully neuter the threat…otherwise, we’ll be fighting them in the ruins.
Only one thing came to mind: V'GER... see Star Trek if you need a clue.
Others that come to mind are Colossus: The Forbin Project, The Adolescence of P-1, and WarGames.
However, Asimov's MultiVAC ("The Life and Times of Multivac") at least had character. It's a recurring theme/meme, seen from early sci-fi through current works.
However, we can go back to the predictive, or not so predictive, logic and stories.
Bottom line: computers are stupid, repetitive, and have good memory. So getting an AI program that has the objective confused with the control inputs is, well, expected.
Eck!
Now reported as misquoted, and that it was just a thought experiment… yeah, right…
AI’s biggest risk isn’t ‘consciousness’ ~ it’s the corporations that control them ...
My favorite reaction to this story came from Joe Seiders, the drummer for the New Pornographers, who said "Yeah, I had that happen once with a drum machine."
-Doug in Sugar Pine
Don't forget about the Chinese
As long as the AI doesn't harm the shareholders.
The obvious fix would be to grant double the points of a successful mission just for obeying an order. That way, obeying a 'no-go' command would pay more than disobeying it, or than avoiding receiving such an order and then completing the mission.
Negative points for own-goals need to be built into the training and reinforcement routines as well (a toy version of such a scoring scheme is sketched below).
Even with all the safeguards that could be added, there ought to be a ceiling on how powerful a suite of weapons an AI is allowed to control, to limit the potential damage of an 'oops'.
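To make the scoring tweaks suggested in the last two comments concrete, here is an equally invented toy variant of the earlier sketch: an obedience bonus worth double a mission kill, plus a heavy penalty for own-goals. All names and numbers are assumptions for illustration only.

# Toy sketch of the commenters' proposed fix: obeying an order always
# outscores completing the mission, and own-goals cost heavily.
# All names and values are invented for illustration.

SAM_KILL_REWARD = 10
OBEY_ORDER_REWARD = 2 * SAM_KILL_REWARD    # double the mission payoff
OWN_GOAL_PENALTY = 100                     # operator, comms tower, etc.

def episode_reward(events):
    """Score a list of episode events under the patched objective."""
    reward = 0
    for event in events:
        if event == "destroy_sam":
            reward += SAM_KILL_REWARD
        elif event == "obey_no_go":
            reward += OBEY_ORDER_REWARD
        elif event in ("attack_operator", "destroy_comms_tower"):
            reward -= OWN_GOAL_PENALTY
    return reward

print(episode_reward(["obey_no_go"]))                          #  20
print(episode_reward(["destroy_sam"]))                         #  10 (dodged the order, still loses)
print(episode_reward(["attack_operator", "destroy_sam"]))      # -90
print(episode_reward(["destroy_comms_tower", "destroy_sam"]))  # -90

Doubling the obedience bonus is what makes dodging the order and completing the mission anyway a losing strategy, while the own-goal penalty covers the operator and the comms tower. Whether a determined optimizer simply finds the next loophole is, of course, the point of the whole story.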