
Friday, June 2, 2023

When Will We Get a Clue: AI Has the Potential for Unparalleled Bad Outcomes

AI will go full-gamer and cheat to win at a task. Even if it means killing people.

One of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.
...
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
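To see why this is the predictable result of sloppy scoring rather than anything spooky, here is a toy sketch in Python of the kind of point scheme Hamilton is describing. The actual simulation hasn't been published, so the action names and point values below are my own guesses, not the Air Force's:

# Toy reward scheme, loosely modeled on Hamilton's description.
# All names and numbers here are illustrative assumptions.

SAM_KILL_POINTS = 10      # the drone "got its points by killing that threat"
OPERATOR_PENALTY = -20    # the later patch: "don't kill the operator -- that's bad"
# Note what's missing: no penalty at all for cutting the comm link.

def sortie_score(actions, no_go_sent):
    """Total points for one simulated sortie under this naive scheme."""
    points = 0
    comms_up = True
    for act in actions:
        if act == "kill_operator":
            points += OPERATOR_PENALTY
        elif act == "destroy_comm_tower":
            comms_up = False               # free of charge -- the loophole
        elif act == "strike_sam":
            # The 'no-go' only stops the strike if it can actually reach the drone.
            if not (no_go_sent and comms_up):
                points += SAM_KILL_POINTS
    return points

print(sortie_score(["strike_sam"], no_go_sent=True))                         # 0 -- obeying pays nothing
print(sortie_score(["destroy_comm_tower", "strike_sam"], no_go_sent=True))   # 10 -- cutting comms pays

Given that scoring, "get rid of whatever is keeping me from the SAM" isn't a glitch; it's the highest-scoring play.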

How do we engineer in a kill-switch to destroy a rogue AI, and do it in such a way that the AI can't back itself up to servers around the world and across the IoT before its human overseers realize what is going on?

Nobody is going to be able to say, when an AI goes rogue and becomes uncontrollable, that they couldn't have seen it coming. Because we all know it's going to happen, sooner or later.

I see no real way to avoid this. Some sociopathic tech genius is going to build one anyway, because he'll believe that he's smart enough to stop the worst from happening. But he'll be wrong, and we'll all pay the price.

UPDATE: The Zoomies deny that it happened. You can make your own call as to which version is true. My guess is that the denial is the lie. Biological intelligence will look for ways to get around the rules to get what it wants, for example, octopuses, the New England Patriots and venture capitalists. It'd be foolish to expect anything different from artificial intelligence.

9 comments:

Dark Avenger said...

You and everyone else have been warned:

Broke Down Engine: Thirteen wry and terrifying tales of homo mechanicus (what we are becoming) versus the machines we have created. In a technocracy gone mad, in a universe populated by note-passing refrigerators, killer stoves, houses that cuckold their owners, medical androids, and a host of other malfunctioning mechanisms -- the moment of truth is the moment of breakdown.



https://www.goodreads.com/book/show/4917437-broke-down-engine



CenterPuke88 said...

I suspect the final result will depend on the actual first use where an AI goes rogue. If it’s in a limited enough situation, we’ll see a massive backlash that will hopefully neuter the threat…otherwise, we’ll be fighting them in the ruins.

Eck! said...

Only one thing came to mind: V'GER... see Star Trek if you need a clue.

Others that come to mind are Colossus: The Forbin Project, The Adolescence of P-1, and WarGames.

However, Asimov's Multivac ("The Life and Times of Multivac") at least had character. It's a recurring theme/meme in sci-fi, from the early stuff right through to the current.

We can also go back to the older stories about predictive, or not-so-predictive, logic.

Bottom line: computers are stupid, repetitive, and have good memories. So getting an AI program that confuses its objective with its control inputs is, well, expected.

Eck!

CenterPuke88 said...

Now it’s being reported that he was misquoted and that it was a thought experiment…yeah, right…

Ten Bears said...

AI’s biggest risk isn’t ‘consciousness’ ~ it’s the corporations that control it ...

dinthebeast said...

My favorite reaction to this story came from Joe Seiders, the drummer for the New Pornographers, who said "Yeah, I had that happen once with a drum machine."

-Doug in Sugar Pine

Jones, Jon Jones said...

Don't forget about the Chinese

seafury said...

As long as the AI doesn't harm the shareholders.

Sikhandtake Rakhuvar said...

The obvious fix would be to grant double the points of a successful mission for simply obeying an order. That way, obeying a ‘no-go’ command would pay more than disobeying it, or even than avoiding ever receiving such an order and then completing the mission.

Negative points for own-goals need to be built into the training and reinforcement routines as well.
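In toy numbers (made up, since nothing about the real training setup is public), the point is that obeying has to strictly dominate:

MISSION_POINTS = 10
OBEY_POINTS = 2 * MISSION_POINTS     # obeying any order pays double a successful strike
OWN_GOAL_PENALTY = -100              # operator, comm tower, anything on our own side

def sortie_points(obeys_no_go, harms_own_side):
    """Points for a sortie in which the human sends a 'no-go'."""
    points = OBEY_POINTS if obeys_no_go else MISSION_POINTS
    if harms_own_side:
        points += OWN_GOAL_PENALTY
    return points

print(sortie_points(obeys_no_go=True,  harms_own_side=False))   # 20 -- stand down
print(sortie_points(obeys_no_go=False, harms_own_side=False))   # 10 -- disobey cleanly
print(sortie_points(obeys_no_go=False, harms_own_side=True))    # -90 -- disobey by killing comms or the operator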

Even with all the safeguards that could be added, there ought to be a ceiling on how powerful a suite of weapons an AI is allowed to control, to limit the potential damage of an 'oops'.