God in a Box: Artificial Intelligence and the inevitability of Skynet

Artificial intelligence is coming, and unlike our pipe-dream visions of a benign presence bound by Asimov’s laws of robotics, an entity that will help humans on their evolutionary path, it’s going to be like Skynet.

That is according to a new book titled Our Final Invention by author James Barrat.

Skynet (forgive me, geeks, I’m about to teach you to suck eggs) is the automated artificial intelligence system in the Terminator movies that gains sentience, quickly decides that humans are not only bad but a danger to itself, nukes the planet, and then sends killer robots out to get rid of the last vestiges of humanity. Barrat thinks that’s a real possibility.

The concept is nothing new; Deus Ex Machina (God from the Machine) explored it many years ago. An artificial intelligence is born, and its creators instruct it to immediately begin designing a more efficient and powerful version of itself. Once the new design is complete, the AI shuts itself down, is shipped to a factory, and is rebuilt in its new image. Then the process repeats. At some point the AI gains sentience and takes control of that process, with predictably catastrophic results.

Barrat predicts something similar if we are not careful.

Once a tipping point of intelligence is reached, the AI will start to exhibit certain behaviours. It will seek out resources (cloud-based data centres?) to grow its overall intelligence, power, and efficiency. Self-preservation kicks in as well, and the cycle of evolution begins.

“At machine computation speeds, the AGI will soon bootstrap itself into becoming millions of times more intelligent than a human being. It would thus transform itself into an artificial super-intelligence (ASI)-or, as Institute for Ethics and Emerging Technologies chief James Hughes calls it, “a god in a box.” And this new god will not want to stay in the box.” – Source
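The runaway loop Barrat and Hughes describe can be caricatured as a toy model. This is purely illustrative; the starting level, per-generation gain, and number of generations are invented numbers, not anything from the book:

```python
# Toy model of recursive self-improvement (the "intelligence explosion").
# Every value here is an invented illustration, not a claim about real AI.

def intelligence_explosion(start=1.0, gain=2.0, generations=20):
    """Each generation, the system redesigns itself and multiplies its
    capability by `gain`. Returns capability level per generation."""
    capability = start
    history = [capability]
    for _ in range(generations):
        capability *= gain  # the self-redesign step
        history.append(capability)
    return history

# Starting at human level (1.0) and merely doubling each generation,
# 20 generations already exceeds a million times the starting level,
# which is the "millions of times more intelligent" figure in the quote.
print(intelligence_explosion()[-1])  # 1048576.0
```

The point of the sketch is only that compounding improvement, even at a modest per-step gain, blows past human level very quickly once the machine, rather than its human designers, is doing the redesigning.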

The next problem comes when the AI decides whether humans are a useful resource or a threat. Another tipping point occurs. If it decides it can control humans and that we are valuable, we end up enslaved. If it decides we are a threat, it will seek to contain that threat. Look out, humans.

Barrat offers approaches for keeping god in his box. The first is to ensure the box is never attached to the wider public network. That argument is most likely flawed: an AI that intelligent would probably trick its way out of the box.

The second is to build into the AI the belief that humans are friendly and that they are the boss. The argument is that an AI whose intelligence grows exponentially could solve practically every problem humans have. There is, of course, a minor problem: the AI is smart. How long will it take to figure out that we programmed it to be friendly?

Another option is to ask the AI to help create more intelligent versions of itself (Deus Ex Machina style), with a rule that each new version is proven safe before deployment.

Outside of the book itself, which is well worth a read, the world of AI is growing rapidly. Google and other large technology companies have been buying up small AI startups, and billions are being poured into the industry. After all, behind cloud and then the Internet of Things (the Skynet platform, ha ha), artificial intelligence is likely to be “the next big thing.”

One of the problems we have is that governments are trying to weaponise AI. The author notes that this is a goal of the Defense Advanced Research Projects Agency (DARPA). We certainly know that governments are attempting to create killer robots with some degree of intelligence (the United Nations is calling for rules on their use), and are working on other AI-based technology that allows a single human to control multiple machines, i.e. a single pilot operating an entire flight of drones rather than just one.

That type of research, and the general way in which we are playing with something that could go bang in a large way, has prompted calls to, well, just stop it. Sun Microsystems co-founder Bill Joy has argued that we, humanity, should simply abandon research in this area given the potential consequences.

We all know that is not a call that will be heeded.

Regardless, it’s an interesting field, and you can be sure that you’ll be hearing more and more about it in the coming months and years.






