An overreliance on technology by Israel's intelligence agencies and military has continued to shape the current conflict in Gaza, analysts say, and was partially responsible for the failure to detect the Hamas attack on October 7. The surprise assault on army outposts and surrounding villages in southern Israel, which resulted in the deaths of 1,200 Israeli and foreign nationals, mostly civilians, caught the Israeli intelligence agencies off guard. Hamas fighters also took about 240 people captive. Israel, in its brutal military response, has killed more than 17,000 Palestinians in Gaza since then. Within both Israel and the wider Arab region, many have asked how Shin Bet, one of the world's most respected and feared intelligence agencies, responsible for Israel's domestic security, could have been outmatched by Hamas fighters using bulldozers and paragliders. The world's disbelief has sparked a raft of conspiracy theories in some quarters.
More than 60 days into the Israel-Gaza war, two Israeli news outlets – +972 Magazine and Local Call – published a report on The Gospel, a new artificial intelligence system deployed in Gaza. The AI helps generate new targets at an unprecedented rate, allowing the Israeli military to loosen its already permissive constraints on the killing of civilians. The exchange of hostages between Israel and Hamas late last month created some challenges for the Netanyahu government – and its messaging. Producer Meenakshi Ravi looks at how Israeli media has been reporting on the story. As the world is focused on the events unfolding in Gaza, Israel has also escalated its attacks on Palestinians in the occupied West Bank, where Hamas has no authority or military presence.
European Union policymakers have agreed on landmark legislation to regulate artificial intelligence (AI), paving the way for the most ambitious set of standards yet to control the use of the game-changing technology. The agreement to support the "AI Act" on Friday came after nearly 38 hours of negotiations between lawmakers and policymakers. "The AI Act is a global first. A unique legal framework for the development of AI you can trust," European Commission President Ursula von der Leyen said. "A commitment we took in our political guidelines – and we delivered."
The world's first comprehensive laws to regulate artificial intelligence have been agreed in a landmark deal after a marathon 37-hour negotiation between the European Parliament and EU member states. The agreement was described as "historic" by Thierry Breton, the European Commissioner responsible for a suite of laws in Europe that will also govern social media and search engines, covering giants such as X, TikTok and Google. Breton said 100 people had been in a room for almost three days to seal the deal. He said it was "worth the few hours of sleep" to make the "historic" deal. Carme Artigas, Spain's secretary of state for AI, who facilitated the negotiations, said France and Germany supported the text, amid reports that tech companies in those countries were fighting for a lighter touch approach to foster innovation among small companies.
After years of inaction in the U.S. Congress, E.U. tech laws have had wide-ranging implications for Silicon Valley companies. Europe's digital privacy law, the General Data Protection Regulation, has prompted some companies, such as Microsoft, to overhaul how they handle users' data even beyond Europe's borders. Meta, Google and other companies have faced fines under the law, and Google had to delay the launch of its generative AI chatbot Bard in the region due to a review under the law. However, there are concerns that the law created costly compliance measures that have hampered small businesses, and that lengthy investigations and relatively small fines have blunted its efficacy among the world's largest companies.
The proliferation of artificial intelligence (AI) tools, and the large language models (LLMs) that underpin them, has been alarming anti-AI advocates, particularly as ChatGPT -- and its ilk -- skyrocketed in popularity this year. A new study conducted by the UK Department for Education implies that the fearful cries of "Ah, AI is coming after our jobs!" may be warranted. According to the lead investigators, up to 30% of jobs can be automated with AI. As AI usage becomes more prevalent, the study discovered that the most affected jobs are in finance, law, business, management roles, and clerical work.
The Washington Post reports that after a marathon 72-hour debate, European Union legislators on Friday reached a historic deal on a broad-ranging AI safety development bill, the most expansive and far-reaching of its kind to date. Details of the deal itself were not immediately available. The proposed regulations would dictate the ways in which future machine learning models can be developed and distributed within the trade bloc, impacting their use in applications ranging from education to employment to healthcare. AI development would be split among four categories, depending on how much societal risk each potentially poses -- minimal, limited, high, and banned. Banned uses would include anything that circumvents the user's will, targets protected groups or provides real-time biometric tracking (like facial recognition).
The European Union today agreed on the details of the AI Act, a far-reaching set of rules for the people building and using artificial intelligence. It's a milestone law that, lawmakers hope, will create a blueprint for the rest of the world. After months of debate about how to regulate companies like OpenAI, lawmakers from the EU's three branches of government -- the Parliament, Council and Commission -- spent more than 36 hours in total thrashing out the new legislation between Wednesday afternoon and Friday evening. Lawmakers were under pressure to strike a deal before the EU election campaign starts in the new year. "The EU AI Act is a global first," said European Commission President Ursula von der Leyen on X. "[It is] a unique legal framework for the development of AI you can trust."
AI-enabled military systems have been effective in battle, but some reliability issues still concern troops and their commanders, says a former Air Force test pilot. Cheap drones equipped with AI can destroy expensive military equipment, and the Pentagon will need to incorporate autonomous technology into its strategy to advance into the next generation of warfare, a former test pilot and military tech company executive told Fox News. "What we've seen in Europe and other theaters is that they've democratized warfare," said EpiSci Vice President of Tactical Autonomous Systems Chris Gentile. "A $1,000 drone can take out a multimillion-dollar asset." The Pentagon has a portfolio of over 800 contracts for AI-enabled projects.
OpenAI's recent drama hasn't only caught UK regulators' attention. Bloomberg reported Friday that the Federal Trade Commission (FTC) is looking into Microsoft's investment in the Sam Altman-led company and whether it violates US antitrust laws. FTC Chair Lina Khan wrote in a New York Times op-ed earlier this year that "the expanding adoption of AI risks further locking in the market dominance of large incumbent technology firms." Bloomberg's report stresses that the FTC inquiry is preliminary, and the agency hasn't opened a formal investigation. But Khan and company are reportedly "analyzing the situation and assessing what its options are." One complicating factor for regulation is that OpenAI is a non-profit, and transactions involving non-corporate entities aren't required by law to be reported.