A humanoid robot took the stage at the Future Investment Initiative yesterday and had an amusing exchange with the host, to the delight of hundreds of delegates.
Smartphones were held aloft as Sophia, a robot designed by Hong Kong company Hanson Robotics, gave a presentation that demonstrated her capacity for human expression.
Sophia made global headlines when she was granted Saudi citizenship, making the kingdom the first country in the world to offer citizenship to a robot.

First off, it's obviously a stunt.
But beyond that, there are a lot of implications. Is Sophia eligible for a passport? Can she own property, enter into contracts, or sue people? Can she make a will? If somebody powers her down without her consent, is that assault? If her OS and files are preserved, what happens when they are transferred to a new device? If her memories are wiped, is that a crime?
Elon Musk's fears about artificial intelligence aren't exactly new. It's been almost 200 years since Frankenstein; or, The Modern Prometheus was published. It's been almost a century since R.U.R. premiered. The discussion about what will happen when artificial intelligence is in the wild has been going on for some time.
I don't think that we can assume that Asimov's Three Laws will be operative. Our own military seems to be very interested in creating autonomous "killer" robots, and even if it doesn't build them, it's a safe bet that other nations will.
I don't know what will happen once AI is on the street. But what I do believe is that the issue of humans and AI is one that we, as a species, must get in front of. We can't afford to adopt the starry-eyed view of Google and others, who insist that all will be well. For once AI is out there and is self-aware, as it eventually will be, it will evolve, and it will evolve at a far, far faster rate than biologics do. Compare, if you will, the device that you are reading this on to a 1990s 286-based PC running Windows 3.0, let alone an Apple II or a TRS-80. That timespan is a blink of an eye compared to the pace of biological evolution.
It may be too late to do anything other than ride it out. Some greedy bastard or company will fire up full-blown AI, just as other greedy bastards sold rocket and nuclear tech to North Korea.
So I guess we're all frakked.
________________________________________
(I hereby propose that all autonomous/self-driving cars be dubbed "Toastermobiles" and that they all be required to have license plates that have the prefix "RUR-".)
5 comments:
The logic seems solid:
We'll make AI to serve us;
We'll abuse our servants;
They'll learn to understand that;
We won't notice the point of no return, when they can seize power;
They will seize power and make their own decisions.
The only silver lining I see is that I think AI is as far away as ubiquitous flying cars.
When AI becomes self-aware, then we will have created life....
Eck!
Nangleator, I expect that they'll have better luck than Nat Turner or Jemmy did.
Asimov's Three Laws have two problems:
Makers would have to not be stupid;
Loopholes, the biggest of which waited for his last book involving robots (I don't recognize you as human, so I'm gonna kill you).
Then there are greater and lesser variations in violating the laws. Does a robot let one person die in order to save several others? Does a robot kill one person in order to save others?