Elon Musk and Stephen Hawking Fear a Robot Apocalypse.

I see no obstacle to computers eventually becoming conscious in some sense. That'll be a fascinating experience, and as a physicist I'll want to know if those computers do physics the same way humans do physics. And there's no doubt that those machines will be able to evolve computationally, potentially at a faster rate than humans. And in the long term, the ultimate highest forms of consciousness on the planet may not be purely biological. But that's not necessarily a bad thing. We always present computers as if they don't have capabilities of empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It's a learning machine, and ultimately it would learn from its experience like a biological conscious being. And therefore it's hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

Elon Musk, Stephen Hawking, and others who have expressed concern are friends of mine, and I understand their potential concerns, but I'm frankly not as concerned about AI, in the near term at the very least, as many of my friends and colleagues are. It's far less powerful than people imagine. I mean, you try to get a robot to fold laundry – and I've just been told you can't even get robots to fold laundry. Someone just wrote me that they were surprised when I cited an elevator as an old example of the fact that when you get in an elevator, it's a primitive form of a computer, and you're giving up control, trusting that it's going to take you where you want to go. Cars are the same thing. Machines are useful because they're tools that help us do what we want to do, and I think computational machines are good examples of that. One has to be very careful in creating machines not to assume they're more capable than they are. That's true in cars. That's true in vehicles that we make. That's true in weapons we create.
That's true in defensive mechanisms we create. And so to me, the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are and don't need more control and monitoring. I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous, but ultimately machines and computational machines are improving our lives in many ways. We of course have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. The fact that teenagers aren't talking to each other but always looking at their phones – not just teenagers – I was just in a restaurant here in New York this afternoon, and half the people were not talking to the people they were with but were staring at their phones. Well, that may not be a good thing for societal interaction, and people may have to come to terms with that. But I don't think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do what they would otherwise do, more effectively.
Why is Elon Musk Afraid of A.I.?

Everyone's favorite long-winded Wikipedia article of a human, Elon Musk, sure is afraid of Artificial Intelligence, having said that "we are summoning the demon." And he's not alone. Smart people like Bill Gates and Stephen Hawking have also come out warning of the potential dangers of AI. And while many of their warnings center around the weaponization of sentient machines, I think there is a much deeper existential fear when it comes to the potential of AI.

Humans are getting increasingly comfortable with computers making decisions for them. You see this now most prominently with the Internet of Things, where smart devices are programmed to meet many of our needs. And these devices can communicate with each other, creating a network of autonomous devices and making it easier and easier for us to not have to think about what temperature our house will be when we arrive home from work. For us humans, it's a life of convenience. These devices are not quite at the level of intelligence, however, as they can only perform certain functions with the limited data sets installed within them.

However, companies like Microsoft are experimenting with Artificial General Intelligence: creating a computer or program that thinks in the same way as a human brain rather than as a standard computation device. Deep Blue, the chess computer that defeated a world champion, was the first step in this direction. There is also IBM's Watson, the first Cognitive Computing machine, which is currently being trained to interpret medical images.

One of the biggest issues with AGI is that at a certain point, the computer could start to write its own code and could become uncontrollable to us humans. Also, AGI is a commercial endeavor, with companies and organizations around the world working on this project independently and with little to no oversight.
There are no "best practices" out there to stop AI from going in the wrong direction – unlike, say, genetic modification, where scientists were required to follow guidelines so their experiments didn't get out of the lab and lead to unintended consequences. Not to mention we've created the perfect infrastructure for a continuous stream of data that AI machines could draw from: the Internet! It would basically be Ultron.

And what if these machines develop a sense of self-regard and don't want to be turned off? What is more intelligent than self-preservation? And how do you teach a program values? If humans can barely agree on the construct of morality, then how could we possibly pass that on to a machine? And then what happens to us once machine intelligence is better, faster, and smarter than us humans?

Forget the practical reality that these sentient AIs could become weaponized; there is a much larger existential crisis here. Humans could potentially become irrelevant and would no longer really have a purpose on this earth, as superintelligent computers would solve all of our problems and beat us at chess all the time, turning our life of convenience into a life devoid of purpose.

So perhaps this is the frightening future that Elon and all those smart people have envisioned with AI. Or maybe he's just seen Terminator one too many times. As long as my Roomba doesn't eat me at night, I think I'll be okay.
Elon Musk says Artificial Intelligence is like Summoning the Demon

Thanks for reading....