5 Predictions for AI in 2025
If 2023 was the year of AI fervor, following the late-2022 release of ChatGPT, 2024 was marked by a steady drumbeat of advances as systems got smarter, faster, and cheaper to run. AI also began to reason more deeply and interact via voice and video, trends that AI experts and leaders say will accelerate. Here's what to expect from AI in 2025. In 2025, we'll begin to see a shift from chatbots and image generators toward "agentic" systems that can act autonomously to complete tasks rather than simply answer questions, says AI futurist Ray Kurzweil. In October, Anthropic gave its AI model Claude the ability to use computers (clicking, scrolling, and typing), but this may be just the start.
- North America > United States (0.74)
- Asia > China (0.07)
- Asia > India (0.05)
Trust in EU approach to artificial intelligence risks being undermined by new AI rules
The EU is winning the battle for trust among artificial intelligence (AI) researchers, academics on both sides of the Atlantic say, bolstering the Commission's ambitions to set global standards for the technology. But some fear the EU risks squandering this confidence by imposing ill-thought-out rules in its recently proposed Artificial Intelligence Act, rules that some academics say are at odds with the realities of AI research. "We do see a push for trustworthy and transparent AI also in the US, but, in terms of governance, we are not as far [ahead] as the EU in this regard," said Bart Selman, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a professor at Cornell University. AI researchers, a highly international group, are "aware that AI developments in the US are dominated by business interests, and in China by the government interest," said Holger Hoos, professor of machine learning at Leiden University and a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). EU policymaking, though slower, incorporated "more voices, and more perspectives" than the more centralised processes in the US and China, he argued, with the EU having taken strong action on privacy through the General Data Protection Regulation, which came into effect in 2018.
- Asia > China (0.58)
- Europe > Netherlands > South Holland > Leiden (0.25)
- South America > Brazil (0.05)
- (3 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.49)
Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers
Zhang, Baobao | Anderljung, Markus (Centre for the Governance of AI, Future of Humanity Institute, University of Oxford) | Kahn, Lauren (Perry World House, University of Pennsylvania) | Dreksler, Noemi (Centre for the Governance of AI, Future of Humanity Institute, University of Oxford) | Horowitz, Michael C. (Perry World House, University of Pennsylvania) | Dafoe, Allan (Centre for the Governance of AI, Future of Humanity Institute, University of Oxford)
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group's attitudes are not well understood, undermining our ability to discern consensuses or disagreements among AI/ML researchers. To examine these researchers' views, we conducted a survey of those who published in two top AI/ML conferences (N = 524). We compare these results with those from a 2016 survey of AI/ML researchers (Grace et al., 2018) and a 2018 survey of the US public (Zhang & Dafoe, 2020). We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook. While respondents were overwhelmingly opposed to AI/ML researchers working on lethal autonomous weapons, they were less opposed to researchers working on other military applications of AI, particularly logistics algorithms. A strong majority of respondents think that AI safety research should be prioritized and that ML institutions should conduct pre-publication review to assess potential harms. Being closer to the technology itself, AI/ML researchers are well placed to highlight new risks and develop technical solutions, so this novel attempt to measure their attitudes has broad relevance. The findings should help to improve how researchers, private sector executives, and policymakers think about regulations, governance frameworks, guiding principles, and national and international governance strategies for AI. This article appears in the special track on AI & Society.
- Asia > China (0.35)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- (5 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study (0.93)