michael wooldridge
Race for AI is making Hindenburg-style disaster 'a real risk', says leading expert
The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned. Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures technology firms face to release new AI tools, with companies desperate to win customers before the products' capabilities and potential flaws are fully understood. The surge in AI chatbots with easily bypassed guardrails showed how commercial incentives were prioritised over more cautious development and safety testing, he said. "It's the classic technology scenario," he said. "You've got a technology that's very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable."
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Europe > Ukraine (0.07)
- Oceania > Australia (0.05)
- North America > United States > New Jersey (0.05)
- Leisure & Entertainment > Sports (0.74)
- Information Technology (0.52)
- Information Technology > Communications > Social Media (0.75)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.59)
Experts warn of threat to democracy from 'AI bot swarms' infesting social media
Predictions that AI bot swarms were a threat to democracy weren't 'fanciful', said Michael Wooldridge, professor of the foundations of AI at Oxford University. Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned. The Nobel peace prize-winning free-speech activist Maria Ressa, and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale, are among a global consortium flagging the new "disruptive threat" posed by hard-to-detect, malicious "AI swarms" infesting social media and messaging channels. A would-be autocrat could use such swarms to persuade populations to accept cancelled elections or overturn results, they said, amid predictions the technology could be deployed at scale by the time of the US presidential election in 2028.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.47)
- Asia > Taiwan (0.08)
- Europe > Ukraine (0.06)
- (8 more...)
- Media > News (0.92)
- Government > Voting & Elections (0.89)
- Leisure & Entertainment > Sports (0.71)
- Government > Regional Government > North America Government > United States Government (0.51)
Review for NeurIPS paper: Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions
Weaknesses: I believe the results proposed in this paper are closely related to existing work. The techniques used are close to existing methods; at the very least, a detailed comparison is in order. The paper fails to acknowledge a substantial body of literature on restricted representations of coalitional games. In fact, many techniques have been proposed for concisely representing coalitional games and approximately solving them. This issue is covered in depth in, e.g., Chalkiadakis, Georgios, Edith Elkind, and Michael Wooldridge.
Summary of the #IJCAI2024 doctoral consortium
We were also inspired by an invited talk from Professor Michael Wooldridge (University of Oxford) on "Writing for Research." He emphasized the importance of understanding what to say, creating a narrative flow, and the drafting process, all delivered with an engaging Q&A session. Michael Wooldridge giving his invited talk on "Writing for Research." Following this, we had a dynamic career panel, a cherished tradition of the doctoral consortium. The panel featured esteemed scholars such as Professor Ken Forbus (Northwestern University), Professor Kate Larson (University of Waterloo), Professor Peter Stone (University of Texas at Austin), and Professor Caren Han (The University of Melbourne). The discussion covered a range of topics, including common mistakes in early career presentations, transitioning between different AI research areas, successful grant writing, managing interdisciplinary research in AI, and time management. Both the invited talk and career panel provided students with an excellent opportunity to ask questions about their future careers and other aspects of their graduate and post-graduate journeys. You can see the program in more detail here.
- North America > United States > Texas > Travis County > Austin (0.27)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.27)
- Asia > South Korea (0.20)
BCS Lovelace Lecture 2021
In this talk I will review the development of commercially successful Knowledge Representation and Reasoning (KRR) systems and their genesis in foundational research. I will trace the evolution of KRR systems from logical and algorithmic foundations, through academic prototypes and standardisation to robust and scalable systems that power applications in areas as diverse as search, healthcare, financial services and manufacturing. I will discuss the barriers and milestones encountered along the journey, and lessons learned about the exploitation of research. Multi-agent systems first emerged as a research topic in the late 1980s. A key driver behind the emergence of the field was the idea of building systems that actively worked on behalf of human users in the pursuit of those users' goals.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.19)
- Europe > United Kingdom > England > Leicestershire > Loughborough (0.05)
- Europe > Norway > Eastern Norway > Oslo (0.05)
Quality education focus series round-up: teaching AI and using AI to improve teaching
In the series, we considered both the teaching of AI and machine learning itself, and the use of AI techniques to improve education in general. You can also find out more about conferences and events, and other interesting research at the intersection of AI and education. There are a number of conferences and workshops that focus on the education side of AI. In our focus series we heard from the co-chairs of the Symposium on Educational Advances in Artificial Intelligence (EAAI), which was held in February this year. This event is held as an independent symposium within the AAAI conference, and provides the opportunity for researchers, educators, and students to share educational experiences involving AI.
- Education > Educational Setting > Online (0.31)
- Education > Educational Technology > Educational Software (0.30)
Michael Wooldridge: Talking to the public about AI – #EAAI2021 invited talk
Michael Wooldridge is the winner of the 2021 Educational Advances in Artificial Intelligence (EAAI) Outstanding Educator Award. He gave a plenary talk at AAAI/EAAI in February this year, focussing on lessons he has learnt in communicating AI to the public. Michael's public science journey began in 2014 when the press and social media became awash with stories of AI. He wondered who was going to respond to these, often exaggerated, narratives and to add some nuance to the discussion. It turned out that nobody did, and there was a noticeable absence of expert opinion reported.
It's Not Whom You Know, It's What You (or Your Friends) Can Do: Succinct Coalitional Frameworks for Network Centralities
Istrate, Gabriel, Bonchis, Cosmin, Gatina, Claudiu
September 25, 2019. Abstract: We investigate the representation of game-theoretic measures of network centrality using a framework that blends a social network representation with the succinct formalism of cooperative skill games. We discuss the expressiveness of the new framework and highlight some of its advantages, including a fixed-parameter tractability result for computing centrality measures under such representations. As an application we introduce new network centrality measures that capture the extent to which neighbors of a certain node can help it complete relevant tasks. 1 Introduction: Measures of network centrality have a long and rich history in the social sciences [1] and Artificial Intelligence. Such measures have proved useful for a variety of tasks, such as identifying spreading nodes [2] and gatekeepers for information dissemination [3], advertising ...
- Leisure & Entertainment (0.68)
- Information Technology > Services (0.34)
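The abstract above describes game-theoretic network centrality: a node's importance is its average marginal contribution (Shapley value) to coalitions of nodes under some characteristic function. The sketch below illustrates that general idea on a toy graph with a hypothetical "coverage" value function (members plus their neighbours); it is a minimal exact-enumeration illustration of Shapley-value centrality, not the paper's succinct skill-game framework, and the graph and function are invented for the example.

```python
from itertools import permutations

# Toy graph as adjacency sets (hypothetical example, not from the paper).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def coverage(coalition):
    """Characteristic function: number of nodes the coalition 'reaches'
    (its members plus their neighbours). One common choice in
    game-theoretic centrality; the paper's skill games differ."""
    reached = set(coalition)
    for node in coalition:
        reached |= graph[node]
    return len(reached)

def shapley_centrality(graph, value):
    """Exact Shapley value of each node: average marginal contribution
    over all orderings. Exponential in graph size, so toy-sized only --
    succinct representations like the paper's exist to avoid this."""
    nodes = list(graph)
    shap = {n: 0.0 for n in nodes}
    orderings = list(permutations(nodes))
    for order in orderings:
        coalition = []
        for node in order:
            before = value(coalition)
            coalition.append(node)
            shap[node] += value(coalition) - before
    return {n: s / len(orderings) for n, s in shap.items()}

centrality = shapley_centrality(graph, coverage)
```

By efficiency of the Shapley value, the scores sum to the grand coalition's value (here 4, since the whole graph is covered), and node "b", whose neighbourhood alone covers the graph, ranks highest.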
The Twenty-Second AAAI Conference: Continuing the Content-Rich Tradition in Beautiful Vancouver, British Columbia
The Twenty-Second AAAI-07 and the Nineteenth IAAI-07 Conferences will be held in Vancouver, British Columbia, Canada, from July 22-26, 2007. The conferences will be held at the Hyatt Regency Vancouver. The outstanding multidimensional program slated for the AAAI conference, and the content-rich applications-oriented IAAI conference will showcase the latest in research and applications in AI. When not ensconced in conference sessions, attendees can also explore Vancouver, which is nestled in the mountains, waterways and rainforests of British Columbia. Careful thought was put into inviting world-class speakers to this year's conference.
How Inappropriately Heavyweight AI Solutions Dragged Down a Startup
We came up with a heavyweight agent architecture, using ideas from AI planning and robotics. These sorts of architectures were very much in vogue at the time, and the company wanted its own, proprietary technology. We started thinking about programming languages for the agents and the kinds of knowledge representation and reasoning that would be required. We spent a lot of time and money flying from London to the U.S. West Coast, talking to patent lawyers. It transpired that the architecture, its decision-making and action models, were completely inappropriate for the problem at hand.