An overreliance on technology by Israel's intelligence agencies and military has continued to shape the current conflict in Gaza, analysts say, while also being partially responsible for the failure to detect the Hamas attack on October 7. Hamas's surprise attack on army outposts and surrounding villages in southern Israel, which resulted in the deaths of 1,200 Israeli and foreign nationals, mostly civilians, caught the Israeli intelligence agencies off guard. Hamas fighters also took about 240 people captive. Israel, in its brutal military response, has killed more than 17,000 Palestinians in Gaza since then. Within both Israel and the wider Arab region, many have asked how Shin Bet, one of the world's most respected and feared intelligence agencies, which is responsible for Israel's domestic security, could have been outmatched by Hamas using bulldozers and paragliders. The world's disbelief has given rise to a raft of conspiracy theories in some quarters.
More than 60 days into the Israel-Gaza war, two Israeli news outlets, +972 Magazine and Local Call, published a report on The Gospel, a new artificial intelligence system deployed in Gaza. The AI helps generate new targets at an unprecedented rate, allowing the Israeli military to loosen its already permissive constraints on the killing of civilians. The exchange of hostages between Israel and Hamas late last month created some challenges for the Netanyahu government and its messaging. Producer Meenakshi Ravi looks at how Israeli media has been reporting on the story. As the world is focused on the events unfolding in Gaza, Israel has also escalated its attacks on Palestinians in the occupied West Bank, where Hamas has no authority or military presence.
Hamas' attack on Israel was the "largest hijacking of social media platforms by a terrorist organization" and companies still aren't prepared, a tech expert warned. Artificial intelligence could help flag antisemitic and terrorist content online, one tech expert said, but only if social media companies prioritize fighting Jew hatred. "Social media platforms are capable of investing in technologies when it affects their bottom line," CyberWell founder and CEO Tal-Or Cohen Montemayor said. "It's high time that we started demanding that they do it when it comes to violent content and to antisemitism online." CyberWell uses open-source intelligence techniques and tools to identify antisemitic content across the internet.
FOX News White House correspondent Peter Doocy has the latest on the Biden administration's response to the Middle East conflict on 'Special Report.' As Israeli Defense Forces resumed military operations to eradicate the Hamas terrorist threat last Friday, the Biden administration is inserting itself into Israel's war planning process, teaching the Israelis – who've been fighting for their survival for decades – how to properly prosecute the conflict. Washington warfare "experts" – who arguably haven't secured a single clear military victory since 1945 – insist that Israeli military strategists alter their war plans to make their combat operations more targeted and their strikes more accurate, in order to minimize casualties, especially among civilians. The Biden administration's demands, while noble-sounding, are misguided and unreasonable. Implementing these requirements, at the expense of achieving the main mission of eliminating Hamas and its entire supporting infrastructure, will likely prolong the conflict, ultimately resulting in many more Israeli and Palestinian deaths.
Elon Musk, the billionaire owner of X, says the advertisers that have stopped spending on the platform over his endorsement of an antisemitic post can "f----" themselves. "What it's going to do is it's going to kill the company, and the whole world will know the advertisers killed the company," Musk said at the New York Times DealBook conference on Wednesday. The post was the "worst and dumbest I've ever done," said Musk, the chief executive officer of Tesla Inc. Still, he argued that if advertisers leave and the company fails, the failure will be their fault, not his, accusing them of trying to "blackmail me with money." "I won't tap dance" to prove he is trustworthy, he said.
We've already discussed how the Israel-Hamas war is the latest conflict where people are poring over social media and news channels looking for updates on what, exactly, is happening. After all, whether it's news about our neighborhoods or communities on the other side of the world, the web is where we go to find updates. And it's another reminder that misinformation is often big business, and it's everywhere: fake news and fabrications, half-truths and obfuscations, and flat-out lies and propaganda. The rise in AI-powered deep fakes has only made the problem worse and increased the amount of untrustworthy content out there. So is it actually still possible to filter truth from lies online?
A suspected drone attack has hit a container ship owned by an Israeli businessman in the Indian Ocean, according to a United States defence official. The attack was likely carried out using an Iranian-made Shahed-136 drone on Friday, an unnamed US defence official told The Associated Press news agency on Saturday. Pan-Arab satellite channel Al Mayadeen also reported that an Israeli ship had been targeted in the Indian Ocean. The drone targeted the Malta-flagged, French-operated CMA CGM Symi vessel while in international waters. The ship reportedly suffered damage after the drone exploded, but no crew members were injured.
The Israel Defense Forces (IDF) have used artificial intelligence (AI) to improve targeting of Hamas operatives and facilities as the military faces criticism over collateral damage and civilian casualties. "I can't predict how long the Gaza operation will take, but the IDF's use of AI and Machine Learning (ML) tools can certainly assist in the administratively burdensome targeting identification, evaluation and assessment process," Mark Montgomery, a senior fellow at the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation, told Fox News Digital. "Similar to U.S. forces, the IDF takes great effort to reduce collateral damage and civilian casualties, and tools like AI and ML can make the targeting process more agile and executable," Montgomery added. "AI tools should help in target identification efforts, expediting target review and approval," he said. "There will inevitably still be humans in the targeting process but in a much accelerated timeline."
This week, Emily Bazelon, John Dickerson, and David Plotz discuss the problems with issue polling and with political journalism; the chaos and conflict of Sam Altman and OpenAI; and the failure of the Oslo Accords and the perpetual struggle between Israel and Palestine. Send us your Conundrums: submit them at slate.com/conundrum. And join us in person or online with our special guest, The Late Show's Stephen Colbert, for Gabfest Live: The Conundrums Edition! December 7 at The 92nd Street Y, New York City.

Here are some notes and references from this week's show:
Nate Cohn for The New York Times: The Crisis in Issue Polling, and What We're Doing About It, and We Did an Experiment to See How Much Democracy and Abortion Matter to Voters
Eli Saslow for The New York Times: A Jan. 6 Defendant Pleads His Case to the Son Who Turned Him In
John Dickerson and Jo Ling Kent for CBS News Prime Time: What Sam Altman's ouster from OpenAI could mean for the tech world
Emily Bazelon for The New York Times Magazine: Was Peace Ever Possible?
Ezra Klein for The New York Times's The Ezra Klein Show podcast: The Best Primer I've Heard on Israeli-Palestinian Peace Efforts
John Dickerson for CBS Mornings: Former President Jimmy Carter: "America will learn from its mistakes"

Here are this week's chatters:
John: Julia Simon for NPR: 'It feels like I'm not crazy.'
As the ongoing conflict between Israel and Hamas and its devastating effects play out in real time on social media, users are continuing to criticise tech firms for what they say is unfair content censorship – bringing into sharp focus longstanding concerns about the opaque algorithms that shape our online worlds. From the early days of the conflict, social media users have expressed outrage at allegedly uneven censorship of pro-Palestinian content on platforms like Instagram and Facebook. Meta has denied intentionally suppressing the content, saying that with more posts going up about the conflict, "content that doesn't violate our policies may be removed in error". But a third-party investigation (commissioned by Meta last year and conducted by the independent consultancy Business for Social Responsibility) had previously determined that Meta violated Palestinian human rights by censoring content related to Israel's attacks on Gaza in 2021, and incidents in recent weeks have revealed further issues with Meta's algorithmic moderation. Instagram's automated translation feature mistakenly added the word "terrorist" to Palestinian profiles, and WhatsApp, also owned by Meta, created auto-generated illustrations of gun-wielding children when prompted with the word "Palestine".