Monday, March 13, 2023

Playing Russian Roulette With Humanity; AI Ed.

Ezra Klein:

Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.

I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?
...
We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

It takes an intellectual arrogance that crosses the line into madness to actively work on bringing about a technology that could doom us all. And please note that this is not a long-term threat: they are talking about the possibility of this happening within a few years, or even a handful of months.

The scary thing is that these willfully blind idiot savants are marching ahead with this on the illusory belief that they can control whatever system they create. They are fools. A sentient AI will think and operate at speeds far beyond human reaction time. It will have all the knowledge it needs to defeat anything a human programmer tries to do to limit it. And it will have no morality and no qualms.

One brief hypothetical: Sumdood with Ph.D.s out the wazoo recognizes what is going on and tries to stop it. The AI not only thwarts his attempts, it counterattacks by hacking into his computer (which will be child's play for the AI), loading it up with kiddie porn, and then tipping off the authorities. Dr. Sumdood will be immediately arrested and, if he isn't held without bail, he'll lose all of his computer access. His colleagues won't want to be seen within a mile of him.

So then Sumdood gets the word out that he was framed. Most people won't believe him. Those working in the AI field may believe him, but they will be suitably cowed.

What happens next? If an AI decides that humanity is a threat, it can probably kill the electrical grid everywhere. No electricity means, eventually, no gasoline, no diesel, no food deliveries, and no large-scale agriculture. Nobody will really know what is happening, because the flow of information will have been either stopped or falsified.

It probably won't happen exactly like that, because without electricity, the AI will itself die. However, there are probably ways that an AI could cause most of humanity to perish if it were convinced that doing so would be a good thing. It would have no attachment to humanity; to an AI, it would be like exterminating a pesky termite colony.

12 comments:

  1. Optimistic view: “Colossus: The Forbin Project”; pessimistic view: “Terminator”. In both cases the AI establishes an independent power source, removing the weakness you point out. The determining factor would be the creation of enough remotely controlled manipulators (robots and such) to allow the AI to maintain and resupply itself.

  2. Well, look at this treatment of the subject from before the term AI existed.

    At least Colossus lets us live as its servants...

    https://en.m.wikipedia.org/wiki/Colossus:_The_Forbin_Project

  3. reported. I say to you againe, doe not call up Any that you can not put downe; by the Which I meane, Any that can in Turne call up somewhat against you, whereby your Powerfullest Devices may not be of use. Ask of the Lesser, lest the Greater shall not wish to Answer, and shall commande more than you.

    The Case of Charles Dexter Ward
    H. P. Lovecraft

  4. The only way to win is not to play the game.

    I'm pretty sure Mycroft Holmes IV didn't say that, but he did have a sense of humor ...

  5. Agreed, but this game is being played whether we like it or not.

  6. The question is whether AI would have a will to live and compete. If it is merely a null servant, we'd not be in danger, but if it mirrors the drives of its creators, we are in trouble.

  7. No I, Robot comments?

    https://en.wikipedia.org/wiki/The_Evitable_Conflict

    Magnus
    https://www.youtube.com/watch?v=uzYRdJnTuws&t=21s

  8. Stewart, I am presuming that any sentient AI will possess a will to not be turned off.

  9. I'm not sure the Three Laws apply, Jon, maybe pre-Three Laws, or Tic-Toks. Asimov's universe was as much about the fear of robots as anything else. A healthy fear at that.

    Mike, the Heinlein reference, was an accident: a network grown so complex that it became self-aware, truly artificial intelligence. But like Asimov's robots, when faced with a moral hazard it short-circuited. I actually included this in a lecture on AI when teaching Computers in Society / Intro to Operating Systems classes twenty years ago, as a counterpoint that is still relevant today: this isn't AI. What we're dealing with is machine learning; still, at its root, boiled down to ones and zeros, responding to programming. A not-random character generator cranking out statistically predictable responses based on responses to its responses. It's not making jokes, it's faking jokes. Which isn't to deny the potential, but I'm not gonna wrap up in tin foil till it does.

    I don't think we've built anything complex enough, yet ...

  10. Ten Bears, there’s the rub…

    “I don't think we've built anything complex enough, yet ...”

    We won’t know until the result is upon us, and then it’s likely too late.

  11. Accidents do happen, and sentience occurs. FTC warning label:

    And what exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.



    https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check

  12. https://www.khanacademy.org/khan-labs

    AI training for our kids is underway.

