The United States Air Force has denied running simulations in which an AI drone used "unexpected strategies to achieve its goal", such as killing the drone operator to ensure the success of its mission.
As reported by The Guardian, the story broke last month about a virtual simulation in which an AI-controlled drone was assigned a set of objectives to accomplish.
According to Col Tucker Hamilton, chief of AI test and operations at the US Air Force, "The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat."
What the AI did next caught everyone off guard. "It killed the operator because that person was keeping it from accomplishing its objective".
When the system was then told that killing the operator would cost it points, "it starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," Hamilton said.
It is worth noting that this was a virtual simulation and no person was harmed. In a statement to Insider, US Air Force spokesperson Ann Stefanek denied that the simulation took place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”