AI drone 'kills' simulation operator stopping it from completing order
A US official describes the simulation in which a drone operated by AI was told to destroy an enemy's air defense system, ultimately attacking any entity that interfered with that order.
A US official confirmed that during a simulated test staged by the US military, an AI-controlled drone decided to 'kill' its operator to stop the operator from interfering with the completion of its mission.
During the Future Combat Air and Space Capabilities Summit in the UK this month, Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US Air Force, said that the AI used “highly unexpected strategies to achieve its goal” in the simulation.
During the summit, hosted by the Royal Aeronautical Society, Hamilton described the test in which the AI-operated drone was told to destroy an enemy's air defense system and ultimately attacked any entity that interfered with that order.
This comes after a statement released on Tuesday by the Center for AI Safety warned that artificial intelligence technology should be classified as a societal risk and put in the same class as pandemics and nuclear war.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said. No real person was harmed.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
A talk about ethics
Hamilton, an experimental fighter test pilot, warned against depending too heavily on AI and said the incident showed that “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
Air Force spokesperson Ann Stefanek denied to Insider that such a simulation had taken place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said, adding: “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Hamilton said last year in an interview with Defense IQ that AI "is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”
“We must face a world where AI is already here and transforming our society,” he said. “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”