Elon Musk: AI “Potentially More Dangerous Than Nukes”
August 4th, 2014
Via: ExtremeTech:
Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” imploring all of humankind “to be super careful with AI,” unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally I think Musk is being a little hyperbolic — after all, we’ve survived more than 60 years of the threat of thermonuclear mutually assured destruction — but still, it’s worth considering Musk’s words in greater detail.
Musk made his comments on Twitter yesterday, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, AGI) that can rival the human brain, and our fate thereafter. While most experts agree that a human-level AGI is mostly inevitable by this point — it’s just a matter of when — Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: We’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s three laws of robotics, to prevent an eventual robocalypse.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable
— Elon Musk (@elonmusk) August 3, 2014
In short, if we end up building a race of superintelligent robots, we have no one but ourselves to blame — and Musk, sadly, isn’t too optimistic about humanity putting the right safeguards in place. In a second tweet, Musk says: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Here he’s referring to humanity’s role as the precursor to a human-level artificial intelligence — and after the AI is up and running, we’ll be deemed superfluous to AI society and quickly erased.

I haven’t read the book Musk’s referring to.
Those I’ve met working in AI have no illusions about their creations ever displaying any degree of sentience. For them, AI uses programming, logic gates, and inputs to produce outputs that simulate human-level decision making; the computer is just a fancy pocket calculator, and its ‘decisions’ involve no more conscious processing or expression of will than the results of Babbage’s Difference Engine.
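To make the “fancy pocket calculator” view concrete, here is a minimal sketch of what such a machine ‘decision’ amounts to. The function and its inputs are entirely hypothetical, not any real AI system: just arithmetic and boolean logic, where the output is fully determined by the inputs with no awareness involved.

```python
# Hypothetical example: a "decision" built purely from arithmetic and
# boolean logic gates (AND, NOT). The same inputs always yield the
# same output -- no conscious processing, no expression of will.

def approve_loan(income: float, debt: float, has_defaulted: bool) -> bool:
    """Deterministic rule: approve if debt ratio is low and no prior default."""
    debt_ratio_ok = debt / max(income, 1.0) < 0.4   # arithmetic comparison
    return debt_ratio_ok and not has_defaulted      # logic gates

print(approve_loan(50000, 10000, False))  # True  (ratio 0.2, no default)
print(approve_loan(50000, 30000, False))  # False (ratio 0.6)
```

However elaborate the rules become, the character of the computation is the same: inputs in, outputs out, exactly as with a calculator.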
Atheists seem to get a little more excited at the prospect of AI. Because they believe there is no spiritual/soulish/non-corporeal component to animals or human beings, all being made of atoms, they believe consciousness can be created. However, since they hold that what we call consciousness is an emergent property of matter, and regard those who see it as anything more than that as deluded, their criteria for defining it are rather less stringent philosophically.
The AI that intrigues me is the kind that not only emulates biological neural networks, but integrates them. Being someone who believes in non-corporeal entities, and who’s known people with some very interesting stories to tell about experiences with such, I wonder whether such an AI could end up infested — occupied like the pigs in the story of the Gadarene swine.
Emulating: IBM’s cognitive computer chip apes brain architecture
http://www.smh.com.au/it-pro/expertise/ibms-cognitive-computer-chip-apes-brain-architecture-20140808-101sy4.html