Israel's A.I. Experiments in Gaza War Raise Ethical Concerns

NYT > Middle East 

In the past 18 months, Israel has also combined A.I. with facial recognition software to match partly obscured or injured faces to real identities, turned to A.I. to compile potential airstrike targets, and created an Arabic-language A.I. model to power a chatbot that could scan and analyze text messages, social media posts and other Arabic-language data, two people with knowledge of the programs said.

Many of these efforts were a partnership between enlisted soldiers in Unit 8200 and reserve soldiers who work at tech companies such as Google, Microsoft and Meta, three people with knowledge of the technologies said. Unit 8200 set up what became known as "The Studio," an innovation hub and a place to match experts with A.I. projects, the people said.

Yet even as Israel raced to build its A.I. arsenal, deploying the technologies sometimes led to mistaken identifications and arrests, as well as civilian deaths, the Israeli and American officials said. Some officials have struggled with the ethical implications of the tools, warning that they could lead to expanded surveillance and further civilian deaths.