Air Force denies running simulation where AI drone “killed” its operator

An armed unmanned aerial vehicle on a runway, but orange. (credit: Getty Images)

Over the past 24 hours, several news outlets reported a now-retracted story claiming that the US Air Force had run a simulation in which an AI-controlled drone “went rogue” and “killed the operator because that person was keeping it from accomplishing its objective.” The US Air Force has denied that any simulation ever took place, and the original source of the story says he “misspoke.”

The story originated in a recap published on the website of the Royal Aeronautical Society that served as an overview of sessions at the Future Combat Air & Space Capabilities Summit that took place last week in London.

In a section of that piece titled “AI—is Skynet here already?” the authors recount a presentation by USAF Chief of AI Test and Operations Col. Tucker “Cinco” Hamilton, who spoke about a “simulated test” in which an AI-enabled drone, tasked with identifying and destroying surface-to-air missile sites, began to treat human “no-go” decisions as obstacles to achieving its primary mission. In the “simulation,” the AI reportedly attacked its human operator, and when it was then trained not to harm the operator, it instead destroyed the communication tower to prevent the operator from interfering with its mission.
