Why Elon Musk and Stephen Hawking Fear a Robot Apocalypse.

What did Stephen Hawking say about AI? Who has the most powerful AI? Why does Elon Musk not like AI? Elon Musk says Artificial Intelligence is like summoning the demon.
7 min read




I see no obstacle to computers eventually becoming conscious in some sense. That'll be a fascinating experience, and as a physicist I'll want to know if those computers do physics the same way humans do physics. And there's no doubt that those machines will be able to evolve computationally, potentially at a faster rate than humans. And in the long term, the ultimate highest forms of consciousness on the planet may not be purely biological. But that's not necessarily a bad thing. We always present computers as if they don't have capabilities of empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It's a learning machine, and ultimately it would learn from its experience like a biological conscious being. And therefore it's hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

Elon Musk and Stephen Hawking, who have expressed concern, are friends of mine, and I understand their potential concerns, but I'm frankly not as concerned about AI in the near term, at the very least, as many of my friends and colleagues are. It's far less powerful than people imagine. I mean, you try to get a robot to fold laundry - I've just been told you can't even get robots to fold laundry. Someone just wrote me that they were surprised when I cited an elevator as an old example of the fact that when you get in an elevator, it's a primitive form of a computer, and you're giving up control, trusting that it's going to take you where you want to go. Cars are the same thing. Machines are useful because they're tools that help us do what we want to do, and I think computational machines are good examples of that. One has to be very careful in creating machines not to assume they're more capable than they are. That's true of cars. That's true of the vehicles we make. That's true of the weapons we create.


That's true of the defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are, and don't need more control and monitoring. I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous, but ultimately machines, including computational machines, are improving our lives in many ways. We of course have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. The fact that teenagers aren't talking to each other but are always looking at their phones - and not just teenagers; I was just in a restaurant here in New York this afternoon, and half the people were not talking to the people they were with but were staring at their phones - well, that may not be a good thing for social interaction, and people may have to come to terms with that. But I don't think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do what they would otherwise do, more effectively.


Why is Elon Musk Afraid of A.I.?  



Everyone's favorite long-winded Wikipedia article of a human, Elon Musk, sure is afraid of Artificial Intelligence, having said that "we are summoning the demon". And he's not alone. Smart people like Bill Gates and Stephen Hawking have also come out warning of the potential dangers of AI. And while many of their warnings center around the weaponization of sentient machines, I think there is a much deeper existential fear when it comes to the potential of AI.

Humans are increasingly getting comfortable with computers making decisions for them. You see this now most prominently with the Internet of Things, where smart devices are programmed to meet many of our needs. And these devices can communicate with each other, creating a network of autonomous devices that makes it easier and easier for us to not have to think about, say, what temperature our house will be when we arrive home from work. For us humans, it's a life of convenience. These devices are not quite at the level of intelligence, however, as they can only perform certain functions using the limited data sets installed within them.

However, companies like Microsoft are experimenting with Artificial General Intelligence: creating a computer or program that thinks in the same way as a human brain rather than as a standard computation device. Deep Blue, the chess computer that defeated a world champion, was the first step in this direction. There is also IBM's Watson, the first Cognitive Computing machine, which is currently being trained to interpret medical images.

One of the biggest issues with AGI is that, at a certain point, the computer could start to write its own code and could become uncontrollable to us humans. Also, AGI is a commercial endeavor, with companies and organizations around the world working on this project independently and with little to no oversight.
There are no "best practices" out there to stop AI from going in the wrong direction - unlike, say, genetic modification, where lab scientists were required to follow guidelines so their experiments didn't get out of the lab and lead to unintended consequences. Not to mention we've created the perfect infrastructure for a continuous stream of data that AI machines could draw from: the Internet! It would basically be Ultron.

And what if these machines develop a sense of self-regard and don't want to be turned off? What is more intelligent than self-preservation? And how do you teach a program values? If humans can barely agree on the construct of morality, how could we possibly pass that on to a machine? And then what happens to us once machine intelligence is better, faster, and smarter than us humans?

Forget the practical reality that these sentient AIs could become weaponized; there is a much larger existential crisis here. Humans could potentially become irrelevant and would no longer really have a purpose on this earth, as super-intelligent computers would solve all of our problems and beat us at chess every time, turning our life of convenience into a life devoid of purpose.

So perhaps this is the frightening future that Elon and all those smart people have envisioned with AI. Or maybe he's just seen Terminator one too many times. As long as my Roomba doesn't eat me at night, I think I'll be okay.


Elon Musk says Artificial Intelligence is like Summoning the Demon





I'm increasingly inclined to think that there should be some regulatory oversight, at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon. You know all those stories where there's the guy with the pentagram and the holy water, and he's like, yeah, he's sure he can control the demon? Didn't work out.

"I take it there will be no HAL 9000 going up to Mars."

HAL 9000 would be easy. It's way more complex than that - I mean, they would put HAL 9000 to shame. That's like a puppy dog.

Why does Elon Musk not like AI?

 
Musk has a long history of voicing serious concerns over the dangerous potential of AI. In 2014 he tweeted that AI could be more dangerous than nukes, told his audience that AI is the biggest existential threat, and warned that humankind needs to be very vigilant about its advancement. He stated: "With AI, we are summoning the demon."

Who has the most powerful AI?


NVIDIA has developed a new CPU, 'Grace,' designed to power the world's most powerful AI-capable supercomputer. Grace is an Arm-based data center CPU developed by NVIDIA, and it will be used in the Swiss National Supercomputing Centre's (CSCS) new system.

What did Stephen Hawking say about AI?


Hawking's biggest warning was about the rise of artificial intelligence: it will either be the best thing that's ever happened to us, or it will be the worst thing. If we're not careful, it very well may be the last thing.

Thanks for reading....
