Tuesday, July 28, 2015

Elon Musk and Stephen Hawking say that military AI could be a huge threat. The Independent.

A South Korean sentry bot. The Independent/Getty.

Elon Musk, Stephen Hawking and one thousand other robotics researchers have called for a ban on military artificial intelligence. The Independent. The open letter criticizes the development of autonomous weapons that can kill without human input. An example would be the "next step" of drone technology: a drone that can kill targets on its own, based on its programming, without the need for an operator. Such technology could be developed in as few as ten years. Its use would reduce the risks of warfare for the side deploying it and could therefore make war more common. South Korea has already developed sentry robots sophisticated enough to track and target humans, though they still need a human to fire their weapons. There is also concern that AI-based weapons could tarnish the image of legitimate AI research. It is suspected that AI weapons development could lead to a global arms race, and the technology could even spread to terrorists and third-world dictators. 

My Comment:
The open letter can be found here. 

What's my take on it? I'm reminded of an H.P. Lovecraft story, The Case of Charles Dexter Ward. It's about an evil sorcerer who likes to resurrect dead people. The sorcerer gets some really good advice that has always stuck with me: "Never call up what you can't put down." The sorcerer, being an evil jerk, doesn't listen to that advice and dies after summoning someone, or something, he really should not have... twice. Needless to say, there are some pretty obvious parallels with autonomous weapons. It's never a good idea to give weapons to something you don't fully control.

I think the objections to AI weapons are fairly sound. The military applications are obvious, and any breakthrough in AI tech is unlikely to stay in one country. The idea of something like ISIS having access to drones that can kill anyone they don't like is pretty terrifying. Not that the U.S. government having the same tech is all that much more reassuring; the potential for abuse is there either way. Even if nothing ever goes wrong with this technology, it's still scary that a robot can decide who lives and who dies. 

But what happens if it doesn't go right? The idea of an autonomous drone going off on its own mission is not a reassuring one. And there will always be problems with this kind of technology. Glitches and malfunctions happen all the time with our current technology, so there is no reason to expect military AI to be any different. A rogue AI could do some serious damage and could even provoke a war that nobody wanted to start. 

Let's have a little hypothetical scenario. Ten years after this technology is created, the United States sends an autonomous drone over whatever hot spot we are involved in at that point. Perhaps an anti-piracy mission near sea lanes that the Chinese happen to have a ship in? And, for whatever reason, the AI drone decides that instead of hunting pirates, it wants to hunt a Chinese destroyer. It fires a missile, destroying the ship and killing 50 sailors. China is pissed and sends a retaliatory strike against U.S. vessels operating in the area. Keep going like that and you get World War III, just because a robot made a mistake. It begins with a glitch in a drone and ends in a mushroom cloud. 

But that is pretty much the best case scenario. As bad as an accidental nuclear war would be, humanity would probably survive in some fashion. Unfriendly AI, on the other hand, could be an existential threat to humanity itself. Basic drones and sentry bots aren't much of a global threat, except in the above scenario where they start a massive war. However, much more sophisticated AI could be developed that could take over these drones or simply wreak havoc on human civilization. If AI researchers develop an AI that is smarter than any human, and it has some way to interact with its environment, then we could be out-competed as a species. In theory, this could be the end of all of us.

Sounds like a sci-fi scenario, right? We have heard this story before in countless books, movies and video games, so why should we take it seriously? Well, a lot of people a lot smarter than me think the threat is real and that it is at least worth talking about. Elon Musk and Stephen Hawking aren't exactly idiots. Nipping the problem in the bud and leaving AI research in the hands of civilian researchers might be a good idea. Either way, letting human-level or better AI have access to weaponry just seems like a terrible idea. I'm not even a fan of letting it have access to the internet. There is just so much risk involved.

Such a threat is quite a ways off, but I'm in favor of keeping the human element in warfare for quite some time. The last thing Earth needs is a robot rebellion or an unfriendly AI scenario. Unfortunately, the temptation to develop this technology is huge.

I haven't talked about why anyone would want AI weapons like this. For one, they should greatly limit casualties on your side. In theory, they could reduce civilian casualties as well, since the AI should be smart enough to determine whether something is a threat or not. A robotic drone could operate for days, weeks, months or even years without needing to rest, assuming it could stay fueled, giving 24/7/365 coverage. And an AI could react more quickly to any threat than even the most gifted human ever could. 

Of course, those obvious benefits are also the most obvious threats. What do you do if your drone fleet suddenly decides you are the threat? Could you even stop it? And what if a terrorist hacks the drones to change their targeting parameters? These are the questions that need bulletproof answers before we should even consider using these kinds of weapons... 
