wooldridge
Race for AI is making Hindenburg-style disaster 'a real risk', says leading expert
The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned. Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products' capabilities and potential flaws are fully understood. The surge in AI chatbots with guardrails that are easily bypassed showed how commercial incentives were prioritised over more cautious development and safety testing, he said. "It's the classic technology scenario," he said. "You've got a technology that's very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable."
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Europe > Ukraine (0.07)
- Oceania > Australia (0.05)
- North America > United States > New Jersey (0.05)
- Leisure & Entertainment > Sports (0.74)
- Information Technology (0.52)
- Information Technology > Communications > Social Media (0.75)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.59)
Meta to announce $15bn investment in bid to achieve computerised 'superintelligence'
Meta is to announce a $15bn (£11bn) bid to achieve computerised "superintelligence", according to multiple reports. The Silicon Valley race to dominate artificial intelligence is speeding up despite the patchy performance of many existing AI systems. Mark Zuckerberg, Meta's chief executive, is expected to announce the company will buy a 49% stake in Scale AI, a startup led by Alexandr Wang and co-founded by Lucy Guo, in a move described by one Silicon Valley analyst as the action of "a wartime CEO". Superintelligence is described as a type of AI that can perform better than humans at all tasks. Currently no AI system matches human performance across all tasks; reaching that level is known as artificial general intelligence (AGI).
- North America > United States > California (0.48)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.06)
- Europe > Switzerland (0.06)
- Information Technology (0.79)
- Government > Military (0.33)
Experts urge caution over use of Chinese AI DeepSeek
Experts have urged caution over rapidly embracing the Chinese artificial intelligence platform DeepSeek, citing concerns about it spreading misinformation and how the Chinese state might exploit users' data. The new low-cost AI wiped $1tn off the leading US tech stock index this week and it rapidly became the most downloaded free app in the UK and the US. Donald Trump called it a "wake-up call" for tech firms. Its emergence has shocked the tech world by apparently showing it can achieve a similar performance to widely used platforms such as ChatGPT at a fraction of the cost. Michael Wooldridge, a professor of the foundations of AI at Oxford University, said it was not unreasonable to assume data inputted into the chatbot could be shared with the Chinese state. He said: "I think it's fine to download it and ask it about the performance of Liverpool football club or chat about the history of the Roman empire, but would I recommend putting anything sensitive or personal or private on them?"
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Asia > Taiwan (0.05)
- Asia > China > Zhejiang Province > Hangzhou (0.05)
- Asia > China > Beijing > Beijing (0.05)
Double Machine Learning meets Panel Data -- Promises, Pitfalls, and Potential Solutions
Fuhr, Jonathan, Papies, Dominik
Estimating causal effects using machine learning (ML) algorithms can help relax functional form assumptions if used within appropriate frameworks. However, most of these frameworks assume settings with cross-sectional data, whereas researchers often have access to panel data, which in traditional methods helps to deal with unobserved heterogeneity between units. In this paper, we explore how we can adapt double/debiased machine learning (DML) (Chernozhukov et al., 2018) for panel data in the presence of unobserved heterogeneity. This adaptation is challenging because DML's cross-fitting procedure assumes independent data and the unobserved heterogeneity is not necessarily additively separable in settings with nonlinear observed confounding. We assess the performance of several intuitively appealing estimators in a variety of simulations. While we find violations of the cross-fitting assumptions to be largely inconsequential for the accuracy of the effect estimates, many of the considered methods fail to adequately account for the presence of unobserved heterogeneity. However, we find that using predictive models based on the correlated random effects approach (Mundlak, 1978) within DML leads to accurate coefficient estimates across settings, given a sample size that is large relative to the number of observed confounders. We also show that the influence of the unobserved heterogeneity on the observed confounders plays a significant role for the performance of most alternative methods.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- North America > United States > Ohio > Warren County > Mason (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Research Report > Promising Solution (0.64)
- Research Report > New Finding (0.46)
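The correlated random effects idea the abstract describes can be sketched in a few lines: augment the learners' feature set with each unit's time-average of the observed confounders (the Mundlak device), then run standard DML partialling-out with cross-fitting. This is only an illustrative sketch, not the authors' implementation; the data-generating process, the choice of random forests as nuisance learners, and the fold count are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_units, n_periods = 200, 10
n = n_units * n_periods
unit = np.repeat(np.arange(n_units), n_periods)

# Unobserved unit heterogeneity, correlated with the observed confounder
alpha = rng.normal(size=n_units)
x = alpha[unit] + rng.normal(size=n)                    # observed confounder
d = 0.5 * x + alpha[unit] + rng.normal(size=n)          # treatment
y = 1.0 * d + x**2 + alpha[unit] + rng.normal(size=n)   # true effect = 1.0

# Mundlak device: add each unit's time-average of x as an extra control,
# so the learners can proxy for the unit heterogeneity
x_bar = np.bincount(unit, weights=x) / n_periods
X = np.column_stack([x, x_bar[unit]])

# DML partialling-out with 5-fold cross-fitting
res_y, res_d = np.empty(n), np.empty(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(X):
    for target, res in ((y, res_y), (d, res_d)):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        res[test] = target[test] - model.fit(X[train], target[train]).predict(X[test])

# Final-stage regression of outcome residuals on treatment residuals
theta = (res_d @ res_y) / (res_d @ res_d)
```

Without the `x_bar` column the forests can only control for `x` itself, and the unit effect `alpha` contaminates both residuals, biasing `theta`; with the time-average included, the estimate lands near the true effect of 1.0.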
Fused Extended Two-Way Fixed Effects for Difference-in-Differences with Staggered Adoptions
To address the bias of the canonical two-way fixed effects estimator for difference-in-differences under staggered adoptions, Wooldridge (2021) proposed the extended two-way fixed effects estimator, which adds many parameters. However, this reduces efficiency. Restricting some of these parameters to be equal helps, but ad hoc restrictions may reintroduce bias. We propose a machine learning estimator with a single tuning parameter, fused extended two-way fixed effects (FETWFE), that enables automatic data-driven selection of these restrictions. We prove that under an appropriate sparsity assumption FETWFE identifies the correct restrictions with probability tending to one. We also prove the consistency, asymptotic normality, and oracle efficiency of FETWFE for two classes of heterogeneous marginal treatment effect estimators under either conditional or marginal parallel trends, and we prove consistency for two classes of conditional average treatment effects under conditional parallel trends. We demonstrate FETWFE in simulation studies and an empirical application.
- North America > United States > California (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Iowa (0.04)
- (4 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Health & Medicine > Therapeutic Area (1.00)
- Education (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.45)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.45)
Tech expert says 'existential' fears from AI are overblown, but sees 'very disturbing' workplace threats
A bipartisan panel of voters weighed in on the future of artificial intelligence and growing concerns surrounding the potential dangers of the emerging technology. A U.K.-based tech expert said he is not losing sleep at night over the recent growth of artificial intelligence but argued he does have concerns over AI potentially becoming a hellish boss that oversees an employee's every move. Michael Wooldridge is a professor of computer science at the University of Oxford who has been a leading expert on AI for at least 30 years. He spoke with The Guardian this month regarding upcoming lectures he will lead this winter to demystify artificial intelligence, while noting what concerns he does have with the tech. He told the outlet that he does not share the same worries as some AI experts who warn the powerful systems could one day lead to the downfall of humanity.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.35)
- Europe > Ukraine (0.06)
The professor's great fear about AI? That it becomes the boss from hell
It has been touted as an existential risk on a par with pandemics. But when it comes to artificial intelligence, at least one pioneer is not losing sleep over such worries. Prof Michael Wooldridge, who will be delivering this year's Royal Institution Christmas lectures, said he was more concerned AI could become the boss from hell, monitoring employees' every email, offering continual feedback and even – potentially – deciding who gets fired. "There are some prototypical examples of those tools that are available today. And I find that very, very disturbing," he said.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > Ukraine (0.05)
Hacked UK voter data could be used to target disinformation, warn experts
Data accessed in the Electoral Commission hack could help state-backed actors target voters with AI-generated disinformation, experts have warned. The UK elections watchdog revealed on Tuesday that a hostile cyber-attack had been able to access the names and addresses of all voters registered between 2014 and 2022. It said the integrity of the UK's largely paper-based electoral system was not at risk, but experts said the data could still be used by rogue actors if deployed alongside powerful new artificial intelligence tools. Michael Veale, an associate professor in digital rights at University College London, said the electoral register data could be combined with other leaked datasets to help target disinformation. Veale cited the example of a vote suppression scandal in Canada in 2011, when automated phone calls impersonating election officials were made to voters, telling them falsely that their polling stations had been moved.
- North America > Canada (0.27)
- Asia > Russia (0.18)
- Europe > Russia (0.07)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.06)
- Government > Voting & Elections (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.40)
Elections in UK and US at risk from AI-driven disinformation, say experts
Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots. Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users. "The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern," he said. "Regulation would be quite wise: people need to know if they're talking to an AI, or if content that they're looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education."
- Europe > United Kingdom (0.51)
- Europe > Ukraine (0.06)
- North America > United States > New York (0.05)
- (4 more...)
- Media > News (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
'I didn't give permission': Do AI's backers care about data law breaches?
Cutting-edge artificial intelligence systems can help you escape a parking fine, write an academic essay, or fool you into believing Pope Francis is a fashionista. The enormous datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks, the entire proceedings of 16 years of the European parliament and the whole of English-language Wikipedia. But the industry's voracious appetite for big data is starting to cause problems, as regulators and courts around the world crack down on researchers hoovering up content without consent or notice. In response, AI labs are fighting to keep their datasets secret, or even daring regulators to push the issue. In Italy, ChatGPT has been banned from operating after the country's data protection regulator said there was no legal basis to justify the collection and "massive storage" of personal data in order to train the GPT AI.
- Europe > Italy (0.25)
- North America > United States > California (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)