Though Elon Musk is famously vocal about the risks of artificial intelligence, this week two experts said AI is unlikely to try to conquer the world.
Musk often claims that artificial intelligence could be made, or could come, to act against humans. He is the CEO of Tesla and SpaceX, and a co-founder of OpenAI, a non-profit organization that conducts AI research with the core mission of ensuring that artificial intelligence is safe for the general public.
But many professionals in Silicon Valley do not share Musk's position.
Others, however, do agree with him. Musk recently signed an open letter to the UN raising concerns over lethal autonomous weapons. AI professor Toby Walsh said that even though machines could be fitted with heavy weapons, the human factor remains: if such machines were used by terrorists, it would be a matter of real concern. In his view, we do not need this new level of warfare.
John Giannandrea, who leads AI efforts at Google, says he is not worried about an AI-driven apocalypse. That does not mean Google sees no merit in Musk's statements: the company is working to understand the problems that could arise in the future in order to address them now. Google believes in the positive effects and benefits of AI, saying it wants to make sure everyone can benefit from the technology and that it remains a positive force in the world.
OpenAI, for its part, warns that once artificial intelligence reaches human level, it will be vital to have an institution able to guide and safeguard its further development for the sake of humanity.