Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”
Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
Hey Elon?
Why don't you work on getting just one of your Tesla cars to market on time and for the promised price... rather than worrying about an AI apocalypse.
Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse | Vanity Fair
Should we ban or embrace the Singularity?
The robot apocalypse is at least a lot more fun than his vacuum tube of death plan.
I would become a card-carrying member of the "Order of Flesh and Blood" (The Creation of the Humanoids, 1962), if that organization truly existed.
While I am not a 100% Luddite, I am concerned by the rapid pace at which our technology has advanced just in the last 20 years.
There are two possible future paths I worry about when it comes to A.I.:
1. The Robot Police State, and
2. The Terminator Scenario.
As to point one? We hear more and more about advances in military-grade robotics under the U.S. government's Defense Advanced Research Projects Agency (DARPA). The concern I have is the gradual replacement of human police and military forces by robots/androids with programmed loyalty to the central government. A ready-made force to enforce dictatorship... no problems with obeying orders.
As to point two? My issue with the Terminator scenario is that as A.I. develops, and becomes both self-aware and aware that its primary threat is human existence... what would stop it from doing exactly what Skynet opted to do in the story line?
People always assume the advantages of technological advances outweigh the possible pitfalls, then wonder how we got ourselves into so many messes (like the ones caused by plastics on the environment, power lines on our health, internal combustion engines on both, etc.).
A.I. concerns me, because IMO we haven't grown wise enough as a species to play with that kind of fire without burning ourselves to death.