
Originally published August 14 2014

Artificial Intelligence 'more dangerous than nukes,' warns technology pioneer Elon Musk

by J. D. Heyes

(NaturalNews) Elon Musk, founder of Tesla Motors, the maker of electric automobiles, is one of the driving forces behind super-intelligent computing that could eventually improve just about everything -- from alternatively powered autos to space travel.

But Musk says that the technology now emerging could also be extremely dangerous -- even more so than nuclear weapons.

As reported by Britain's Daily Mail, the billionaire Tesla founder tweeted over the first weekend in August a recommendation for a book that examines the possibility of a robot uprising, then added, "We need to be super careful with AI. Potentially more dangerous than nukes."

"AI" is the acronym for artificial intelligence.

The book Musk was referring to is titled Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom. The non-fiction tome asks substantial questions about how humanity might be forced to cope with super-intelligent computers of mankind's own making.

Then again, Bostrom has also argued that the world that we are currently living in isn't real and that we are actually existing in a computer simulation, like the one in The Matrix film series.

In a later tweet, Musk added, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."

'A computer that thinks like a person'

As noted by the Daily Mail:

Musk's tweets follow a similar comment in June, in which the Tesla founder said he believes that a horrific 'Terminator-like' scenario could be created from research into artificial intelligence.

The 42-year-old is so worried that he is investing in AI companies -- not to make money, but to keep an eye on the technology in case it gets out of hand.

In March of this year, Musk invested money in AI group Vicarious, which is based in San Francisco, along with actor Ashton Kutcher and Facebook founder Mark Zuckerberg. The aim of the Vicarious group is to develop and build a "computer that thinks like a person...except it doesn't have to eat or sleep," according to Scott Phoenix, the company's co-founder.

In a recent interview with CNBC, Musk said, "I think there is potentially a dangerous outcome there."

"There have been movies about this, you know, like Terminator," Musk continued, as quoted by the Daily Mail. "There are some scary outcomes. And we should try to make sure the outcomes are good, not bad."

Currently, Vicarious is attempting to build a program that mimics the brain's neocortex -- the top layer of the cerebral hemispheres in the brains of mammals. It is about 3 mm thick and has six layers, each of which takes on a different function, including sensory perception, conscious thought, language (in humans) and spatial reasoning.

"Vicarious is developing machine learning software based on the computational principles of the human brain," says the company's website, adding that it has raised $56 million and is "not constrained by publication, grant applications, or product development cycles."

"Our first technology is a visual perception system that interprets the contents of photographs and videos in a manner similar to humans," the website states.

"Powering this technology is a new computational paradigm we call the Recursive Cortical Network."

In October 2013, Vicarious announced that it had developed an algorithm that "reliably" solves modern Captchas -- the most widely used technology to test a machine's ability to act human.

'Worst mistake in history'

You may recall that Captchas are used when you fill out forms online, for instance, to ensure that a computerized bot is not completing them. The technology prevents programmed computers from filling out forms and reaping rewards, such as buying up bulk concert tickets.

Musk was also an early investor in another AI firm, DeepMind, which was acquired earlier this year by Google for about $678 million.

The electric car pioneer is not the only one sounding the alarm about AI. Prof. Stephen Hawking has also warned that humanity faces a potentially dangerous future as technology increasingly learns to think for itself and adapt to its environment.

"Earlier this year, the renowned physicist discusses Jonny Depp's latest film Transcendence, which delves into a world where computers can surpass the abilities of humans," the Daily Mail reported. Hawking said that simply dismissing the film as nothing more than science fiction could be the "worst mistake in history."

