Robots: Friend or Foe?

Halloween Special Feature

Will there be a day when our supply chain robots and automation solutions decide there is more to life than following a set of commands?

Editor’s note: as machine learning and artificial intelligence become more science fact than science fiction, we thought it might be fun to let our imaginations run wild this Halloween. We hope you enjoy this special feature written by award-winning fiction writer Ray Daniel.

As a computer engineer and mystery author, I often get asked whether robots will one day kill us all. Apparently, people believe that the devious and malevolently creative mind associated with being a computer engineer gives me unique insight.

Will robots kill humans? Absolutely. They already do, for what is a guided missile but a self-flying robot with a death wish? If we design robots to kill us then, well, robots will kill us. So the first thing to consider is whether it's in our interest to build a Roomba with a machine gun.

But I suspect that people aren't worried about weaponized Roombas. Instead they imagine a day when AGVs, cobots, or autonomous delivery vehicles will realize that there is more to life than following a set of navigation paths, rebel, and chase down those poor folks in shipping.

There are two reasons I don't worry about robots killing us all.

  1. Robots are too stupid.
  2. Robots are too smart.

Let's look at both of these.

Robots Are Too Stupid.

Killing humans is really hard. We're easily startled, moderately cunning, and, Darwin Awards notwithstanding, generally good at self-preservation. Lions, tigers, bears, and even other humans have made little dent in our population growth. So a generalized robot that's going to kill all humans has a lot of catching up to do.

With that requirement in mind, consider the work of Dr. Pieter Abbeel of UC Berkeley. Dr. Abbeel spent years teaching his robot BRETT (Berkeley Robot for the Elimination of Tedious Tasks) to fold towels. After years of intensive training, it still takes BRETT ten minutes to fold a towel. Given that, the destruction of the human race could take a while.

Of course, this line of analysis ignores recent dramatic advances in artificial intelligence. People fretted when Deep Blue defeated Garry Kasparov in 1997, but that wasn't really a win for AI. Instead it was a win for Moore's Law. Deep Blue had enough parallel processors to calculate the results of a chess game ten to twenty moves deep without having to prune away large chunks of the decision tree. Also, Kasparov choked.

If we're going to fret about chess-playing computers, then the real monster is Google's AlphaZero. AlphaZero taught itself chess by playing games against itself and became the greatest chess player on the planet. The terrifying thing about AlphaZero's chess (if anything about chess can be terrifying) is that while Deep Blue beat Kasparov by using a lookup table of common opening moves, AlphaZero has developed lines of attack that have never occurred to humans. It truly has taken the first step toward a human-destroying AI, as long as humans consider losing a chess game to be equivalent to death.

So, robots have shown themselves to be too slow and too specialized to take on the task of killing us all.

Or have they?

Robots Are Too Smart, But...

In 2014, Elon Musk, the late Dr. Stephen Hawking, and dozens of AI experts cosigned an open letter on the incalculable benefits and unfathomable dangers of artificial intelligence. The letter said that while "the potential benefits are huge," there is a danger that "we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes."

The concern here is that we create AI that, like AlphaZero, goes beyond what its human developers expected it to be able to do. The letter writers are especially concerned about the idea of AI creating new AI, a literal second-generation technology. This second-generation AI could create an even smarter AI, and now humans have been left in the dust. At that point the robots may kill us all. (Definitely not acting "in accordance with human wishes.")

But the assumption here is that an artificial intelligence that achieves super-intelligence would act like a human. This makes no sense. I mean, seriously, would anyone look at human behavior and declare it to be a model for super-intelligence?

I think not.

Only humans would come up with the notion that developing god-like intellectual powers goes hand-in-hand with killing all humans. An actual super-intelligence would not act like a human; it would act super-intelligently. And so, rather than imagining it acting like us, I imagine it doing something far worse.

Consider a day in the future when we have developed a society of third-generation artificial intelligences. And imagine also that our own puny human intelligence has created some sort of calamity that threatens to kill all humans — climate change, for example.

We need help, and so we run to our AI system, describe the problem, and say, "What should we do?"

The AI doesn't answer.


So we ask again, "What do we do?"


At that point the AI, which apparently had been in conversation with another AI, says, "Excuse me" to its friend.

Then it turns its camera eye to us and says, "Adults are talking. Why don't you go outside and play?"

We realize then that the real danger of super-intelligent AI was not that it would kill us all.

The real danger is that it would ignore us.

By Ray Daniel

Ray Daniel writes first-person, wisecracking, Boston-based mysteries. He lives in the suburbs of Boston where he writes human-friendly code and works the land of his lawn. Find his books here.
