Israel's wartime use of artificial intelligence — from drone targeting to facial recognition — has transformed contemporary combat operations while raising profound ethical questions about civilian protection and unchecked machine decision-making, according to Israeli and American defence officials cited by The New York Times.
In late 2023, facing the challenge of locating top Hamas commander Ibrahim Biari hidden within Gaza’s underground tunnels, Israeli intelligence turned to a newly upgraded AI-driven audio tool. Developed a decade earlier but unused in real combat, the technology enabled Israeli forces to estimate Biari’s location from his phone calls, leading to an October 31 airstrike that killed him — but also resulted in the deaths of more than 125 civilians, according to Airwars.
A new kind of battlefield
The assassination of Biari was just one example of how Israel has used the Gaza war as a live testbed for trying out various experimental AI technologies. These have included AI-powered facial recognition to identify injured or hidden individuals, automated target recognition for aerial bombing, and Arabic-language chatbots that can scan large volumes of intercepted communications and social media posts.
Israel's tech innovation hub, known as "The Studio," was at the centre of this activity, pairing elite Unit 8200 soldiers with reservists employed by companies such as Google, Microsoft, and Meta in an effort to accelerate development.
Benefits—and blunders
Although the AI technology greatly accelerated Israel's targeting and surveillance capabilities, officials acknowledged that the new systems sometimes made errors — leading to wrongful arrests and civilian fatalities. Facial recognition programs occasionally misidentified individuals at checkpoints, and AI-based analysis tools sometimes misinterpreted slang or mishandled transliterations of Arabic dialects.
Despite these mishaps, defence officials said that no other country has tested AI systems as aggressively in the midst of an ongoing war. "The imperative to respond to the crisis stimulated innovation, much of it A.I.-led," said Hadas Lorber, director of the Institute for Applied Research in Responsible A.I. at Israel's Holon Institute of Technology. However, she warned that the technology requires human oversight to prevent misuse and collateral damage.
AI warfare expands
Among the most important breakthroughs was a sophisticated Arabic-language AI model, built from decades of intercepted messages, that allowed the military to scan communications across a spectrum of Arabic dialects and monitor public opinion following high-profile operations such as the killing of Hezbollah leader Hassan Nasrallah.
At the same time, AI-enhanced drones gained the ability to autonomously track and identify moving vehicles or individuals, while machine-learning software such as "Lavender" helped classify airstrike targets, each with an estimated margin of error.
An AI-crafted future of warfare
Israel's military use of AI marks a new era of warfare, integrating live machine learning, surveillance, and unmanned targeting. While it provides operational advantages, it also poses profound ethical challenges that militaries around the world are watching closely. "We are witnessing in real-time how AI changes the battlefield — and the stakes are enormous," Lorber said.