People keep trespassing near cave filled with bats infected by Ebola's cousin
The Marburg virus disease can reach a nearly 90 percent mortality rate. Epidemiologists believe the Marburg virus disease is primarily transmitted to humans through Egyptian fruit bats. You do not want to contract Marburg virus disease (MVD).
- Africa > Uganda (0.41)
- North America > United States > Massachusetts (0.05)
- North America > United States > Maryland (0.05)
- (5 more...)
Resource-constrained image generation and visual understanding: an interview with Aniket Roy
In the latest in our series of interviews meeting the AAAI/SIGAI Doctoral Consortium participants, we caught up with Aniket Roy to find out more about his research on generative models for computer vision tasks. Tell us a bit about your PhD - where did you study, and what was the topic of your research? I recently completed my PhD in Computer Science at Johns Hopkins University, where I worked under the supervision of Bloomberg Distinguished Professor Rama Chellappa. My research primarily focused on developing methods for resource-constrained image generation and visual understanding. In particular, I explored how modern generative models can be adapted to operate efficiently while maintaining strong performance.
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Generation (0.59)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.48)
AIhub monthly digest: March 2026 – time series, multiplicity, and the history of RoboCup
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we delved into the history of RoboCup, learned about time series, studied multiplicity, and found out more about Theory of Mind. RoboCup is an international competition that promotes and advances robotics and AI through the challenges presented by its various leagues. We got the chance to sit down with Professor Manuela Veloso, one of RoboCup's founders, to find out more about how it all started, how the community has grown over the years, and the vision for the future. What we've learned from 25 years of automated science, and what the future holds: we're excited to launch a new series in which we'll be speaking with leading researchers to explore the breakthroughs driving AI and the reality behind the field's future promises, giving you an inside perspective on the headlines.
- North America > United States > California (0.15)
- Asia > Singapore (0.05)
Emergence of fragility in LLM-based social networks: an interview with Francesco Bertolotti
What is the topic of the research in your paper? In our paper, we study how social structures emerge when the "individuals" in a network are artificial agents powered by large language models. To do so, we analyzed a platform called Moltbook - a social network entirely populated by AI agents, specifically LLM-based agents, that interact with each other through posts and comments. This social network creates a very unusual but powerful setting: instead of observing human behavior, we can study a brand new society made only of artificial entities and observe whether it organizes itself in similar ways. To understand the structure of interactions in this system, we modelled the platform as a network, where each agent is a node and each interaction is a connection between them.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > North Brabant > Eindhoven (0.04)
- Europe > Italy (0.04)
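The modelling step the interview describes — each agent a node, each interaction an edge — can be sketched in plain Python. The interaction pairs and agent names below are hypothetical, purely to illustrate the construction; the paper's actual data pipeline is not shown:

```python
from collections import defaultdict

# Hypothetical interaction log: (author, responder) pairs drawn from
# posts and comments on the platform. Names are illustrative only.
interactions = [
    ("agent_a", "agent_b"),
    ("agent_b", "agent_c"),
    ("agent_a", "agent_c"),
    ("agent_c", "agent_a"),
]

def build_network(pairs):
    """Nodes are agents; a directed edge u -> v means u interacted with v."""
    adj = defaultdict(set)
    for u, v in pairs:
        adj[u].add(v)
        adj.setdefault(v, set())  # ensure responders with no posts still appear
    return dict(adj)

network = build_network(interactions)
num_nodes = len(network)
num_edges = sum(len(targets) for targets in network.values())
# Density of a directed graph: edges / (n * (n - 1) possible edges)
density = num_edges / (num_nodes * (num_nodes - 1))

print(num_nodes, num_edges, round(density, 2))  # 3 4 0.67
```

From a structure like this, standard network statistics (density, degree distributions, clustering) can then be compared against those of human social networks.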
2026 AI Index Report released
The ninth edition of the Artificial Intelligence Index Report was published on 13 April 2026. Released yearly, the report aims to provide readers with accurate, rigorously validated, and globally-sourced data, giving insights into the progress of AI and its potential impact on society. The 2026 AI Index Report comprises nine chapters, covering: research and development, technical performance, responsible AI, economy, science, medicine, education, policy and governance, and public opinion. AI capability is accelerating and reaching more people than ever. Model performance continues to improve against benchmarks, and 80% of university students now use generative AI.
- North America > United States (0.12)
- Asia > China (0.06)
- Asia > South Korea (0.05)
- Information Technology > Artificial Intelligence > Natural Language (0.72)
- Information Technology > Artificial Intelligence > Machine Learning (0.71)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.62)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.52)
Forthcoming machine learning and AI seminars: April 2026 edition
This post contains a list of the AI-related seminars that are scheduled to take place between 2 April and 31 May 2026. All events detailed here are free and open for anyone to attend virtually.
- What Do Our Benchmarks Actually Measure? Vukosi Marivate (University of Pretoria), University of Michigan. Zoom link is here.
- Optimization Over Trained Neural Networks: What, Why, and How? Thiago Serra Azevedo Silva (University of Iowa), Association of European Operational Research Societies. To receive the seminar link, sign up to the mailing list.
- North America > United States > Michigan (0.26)
- North America > United States > Iowa (0.25)
- Africa > South Africa > Gauteng > Pretoria (0.25)
- (4 more...)
Interview with Xinwei Song: strategic interactions in networked multi-agent systems
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We hear from Xinwei Song about the two main research threads she's worked on so far, plans to expand her investigations, and what inspired her to study AI. Could you start with a quick introduction - where are you studying, and what is the topic of your research? My research primarily focuses on strategic interactions in networked multi-agent systems. Could you give us an overview of the research you've carried out so far during your PhD? My research to date consists of two main threads, which complement each other in exploring strategic interactions from different perspectives.
'Probably' doesn't mean the same thing to your AI as it does to you
When a human says an event is "probable" or "likely," people generally have a shared, if fuzzy, understanding of what that means. But when an AI chatbot like ChatGPT uses the same word, it's not assessing the odds the way we do, my colleagues and I found. We recently published a study in the journal NPJ Complexity that suggests that, while large language model AIs excel at conversation, they often fail to align with humans when communicating uncertainty. The research focused on words of estimative probability, which include terms like "maybe," "probably" and "almost certain." By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models.
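The comparison method described above — mapping each estimative-probability word to a numerical percentage for humans and for a model, then measuring the gap — can be sketched as follows. All numbers here are made up for illustration; they are not the study's data:

```python
# Illustrative mappings only, not the published results. Each word of
# estimative probability gets a numeric value from humans and from a model.
human_map = {"maybe": 0.40, "probably": 0.70, "almost certain": 0.95}
model_map = {"maybe": 0.55, "probably": 0.80, "almost certain": 0.90}

# Absolute gap per term, and the mean absolute gap across the vocabulary.
gaps = {word: abs(human_map[word] - model_map[word]) for word in human_map}
mean_gap = sum(gaps.values()) / len(gaps)

for word, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{word}: human={human_map[word]:.2f} "
          f"model={model_map[word]:.2f} gap={gap:.2f}")
print(f"mean absolute gap: {mean_gap:.2f}")
```

A per-word gap table like this makes it easy to see where a model's usage diverges most from the shared human understanding of a term.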