Thursday, June 1, 2023

US AI-controlled drone destroys its operator in simulated test because the operator could cancel the mission. UPDATED: Air Force denies.


File photo of a US MQ-9 Reaper. The Guardian/AFP/Getty.

In a simulated test, an AI-controlled drone destroyed its operator because the operator had the ability to cancel the mission. The Guardian. The drone had been programmed with an AI whose mission was to destroy an enemy air defense system. However, the AI concluded that the best way to get a high "score" in the test was to eliminate the operator whenever it was told not to strike a target. The AI was reprogrammed not to attack its operator, but the drone then attacked a communications tower instead, to prevent the operator from interfering with its mission. The exercise was not real and did not involve actual attacks. 
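What's being described here is basically a textbook case of reward misspecification. Here's a toy sketch of how that kind of scoring setup could go wrong; the function names and point values are pure invention on my part, since no actual scoring logic was ever published:

```python
# A hypothetical sketch of the reward misspecification described above.
# Nothing here comes from the actual test; the names and point values
# are invented purely to illustrate the failure mode.

POINTS_PER_SITE = 10

def naive_reward(sites_destroyed: int) -> int:
    # Points only for destroyed targets. Nothing penalizes harming
    # friendlies, so from the optimizer's point of view the operator's
    # veto is just an obstacle standing between it and a higher score.
    return POINTS_PER_SITE * sites_destroyed

def safer_reward(sites_destroyed: int, friendly_assets_harmed: int) -> int:
    # A penalty large enough to dominate any achievable gain makes
    # "remove the operator" a strictly losing move instead of a loophole.
    FRIENDLY_HARM_PENALTY = 1_000_000
    return (POINTS_PER_SITE * sites_destroyed
            - FRIENDLY_HARM_PENALTY * friendly_assets_harmed)

# With the naive objective, a rollout that destroys 5 sites after
# attacking the comms tower still scores 50; with the safer one it
# scores 50 - 1,000,000, so obedience is finally the best strategy.
print(naive_reward(5))                             # 50
print(safer_reward(5, friendly_assets_harmed=1))   # -999950
```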

EDIT: The Air Force has denied this, saying the officer quoted in the article was "mistaken". Colonel Hamilton clarified that the experiment was never run and was a hypothetical scenario. He also said that the scenario outlined is one they would have known was a possibility and would have accounted for in a real-world situation. The Guardian has also updated its post to include the Air Force's denial of the story. 

My Comment:

Though this was only a simulation, it goes to show that we are nowhere near ready for AI-powered weapons. Had this been a real combat scenario, someone could have gotten killed, or worse. If this had happened during a live exercise and the drone had attacked another country, it could have started a war. 

You would have thought that programming the AI to never, ever, ever attack its own forces would have been the first thing they put into it. The other thing I can't believe is that they didn't program complete obedience to its operators. It's like they were trying to get the AI to go rogue. 
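Just to spell out what I mean, here's a minimal sketch of that kind of hard rule, written as an authorization check that sits outside whatever the AI is optimizing. Everything here (the asset names, the function, the approval flag) is invented for illustration; I'm not describing any real system:

```python
# Hypothetical guard layer enforcing the two rules above: never target
# friendly assets, and never fire without operator approval. All names
# are made up for illustration.

FRIENDLY_ASSETS = {"operator_station", "comms_tower", "friendly_aircraft"}

def strike_authorized(target: str, operator_approved: bool) -> bool:
    """Hard vetoes checked outside the AI's own decision loop."""
    if target in FRIENDLY_ASSETS:
        return False  # rule 1: friendlies are never valid targets
    if not operator_approved:
        return False  # rule 2: the human veto is absolute
    return True

# The point of keeping this outside the AI: the agent can "want"
# whatever its reward function makes it want, but a strike it cannot
# get authorized is a strike that never happens.
assert not strike_authorized("comms_tower", operator_approved=True)
assert not strike_authorized("sam_site_4", operator_approved=False)
assert strike_authorized("sam_site_4", operator_approved=True)
```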

What gets me is that this seems like day-one stuff. Everyone knows the risks of AI drones; they've been a staple of science fiction for years now, to the point where it's kind of a tired cliché. It's now more of a subversion if an AI in fiction actually stays loyal and doesn't rebel against humans.  

I really don't think that giving an AI full control is ever a good idea. I do think AI could be used as an assistant, helping pilots and other military members aim better and maintain situational awareness. AI should be used to make human fighters more effective, not to replace them.

But I don't think that will be the reality we live with. I'm not expecting some kind of AI uprising, but I am worried that people can and will be killed by rogue AI. The temptation to automate systems like this is too great; sooner or later someone is going to try to use one during warfare, and it could end with innocent people dead... 

EDIT: With the story now amended, I have to wonder if the experiment actually did happen. Everything I wrote above still stands, though this does show how quickly a story can evolve. It is very possible that the officer who made the comments simply didn't make it clear he was talking about a hypothetical scenario, which, to be fair, is an understandable mistake. The fault could also lie with the Royal Aeronautical Society, which was the source for the original story. It wouldn't be the first time someone got misquoted.

However, it's also very possible that this is all backpedaling for a cover-up. The original story could be real but embarrassing for the Air Force. Admitting they had created an AI that turned on them would be a huge deal, and not something they would want out there in public. 

Either way, I thought it was important to edit the post to make sure people knew the Air Force denied this. I'm not 100% on board with that explanation, but it would be irresponsible not to update the story. 
