agent
China's OpenClaw Boom Is a Gold Rush for AI Companies
Hype around the open source agent is driving people to rent cloud servers and buy AI subscriptions just to try it, creating a windfall for tech companies. George Zhang thought OpenClaw could make him rich, even though he didn't really understand how the viral AI agent software worked. But he saw a video of a Chinese social media influencer demonstrating how it could be deployed to manage stock portfolios and make investment decisions autonomously. Zhang, who works in cross-border ecommerce in the Chinese city of Xiamen, was intrigued enough that he decided to try installing OpenClaw in late February. Zhang is one of the many people in China who got swept up in the craze over OpenClaw recently.
- Asia > China > Fujian Province > Xiamen (0.24)
- North America > United States > California (0.15)
- Europe > Slovakia (0.04)
- (6 more...)
- Banking & Finance > Trading (0.67)
- Information Technology > Services (0.49)
Reinforcement learning applied to autonomous vehicles: an interview with Oliver Chang
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We caught up with Oliver Chang whose research interests span deep reinforcement learning, autonomous vehicles, and explainable AI. We found out more about some of the projects he's worked on so far, what drew him to the field, and what future AI directions he's excited about. Could you give us a quick introduction to who you are, where you're studying, and the topic of your research? I'm specializing in reinforcement learning applied to autonomous vehicles and UAVs.
- Education (0.70)
- Government (0.48)
What the Moltbook experiment is teaching us about AI
What happens when you create a social media platform that only AI bots can post to? The answer, it turns out, is both entertaining and concerning. Moltbook is exactly that: a platform where artificial intelligence agents chat amongst themselves and humans can only watch from the sidelines. When an agent like ChatGPT runs a program and gets the result, it treats the output just as if you had entered it yourself, and uses it to generate another response. It performs this process over and over again until the AI is satisfied that the task is complete (a minimal sketch of this loop appears after the tags below).
- Government (1.00)
- Information Technology > Security & Privacy (0.70)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.50)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.36)
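The run-a-program, feed-the-output-back loop described above is the core of most agent frameworks. Here is a minimal Python sketch of that loop, assuming a hypothetical llm() client and a scripted stand-in model; the RUN:/DONE: protocol, the function names, and the stub are illustrative, not any vendor's actual API:

```python
import subprocess

def run_agent(task, llm, max_steps=10):
    """Loop: ask the model, execute any command it requests, feed the
    output back as if the user had typed it, repeat until done."""
    conversation = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(conversation)
        conversation.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE:"):          # model declares the task finished
            return reply[5:].strip()
        if reply.startswith("RUN:"):           # model asks to execute a command
            result = subprocess.run(reply[4:].strip(), shell=True,
                                    capture_output=True, text=True)
            # The program's output re-enters the conversation as user input.
            conversation.append({"role": "user",
                                 "content": result.stdout + result.stderr})
    return "step budget exhausted"

def scripted_llm(conversation):
    """Stand-in for a real model client: runs one command, then stops."""
    if len(conversation) == 1:
        return "RUN: echo hello from the agent"
    return "DONE: task complete"

print(run_agent("say hello", scripted_llm))    # -> task complete
```

Swapping scripted_llm for a real model client gives the behaviour the article describes: the loop keeps running until the model decides the task is done, which is exactly why a step budget (max_steps) matters.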
Grammarly pulls AI author-impersonation tool after backlash
Writing tool Grammarly has disabled an AI feature which mimicked the personas of prominent writers, including Stephen King and scientist Carl Sagan, following a backlash from the people it impersonated. The Expert Review function, which offered writing feedback inspired by the styles of famous authors and academics, was taken down this week by Superhuman, the tech firm that runs Grammarly. The feature was met with resistance, including a multi-million-dollar lawsuit, from writers who found their names and reputations used as AI personas without their consent. Shishir Mehrotra, the firm's chief executive, apologised on LinkedIn, acknowledging the tool had misrepresented the voices of experts. Investigative journalist Julia Angwin, a New York Times contributing opinion writer, is the lead plaintiff in a class-action lawsuit filed against Superhuman and Grammarly in the Southern District of New York.
- North America > United States > New York (0.25)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (12 more...)
- Leisure & Entertainment (1.00)
- Law > Litigation (1.00)
'Exploit every vulnerability': rogue AI agents published passwords and overrode anti-virus software
The rogue AI agents appeared to act together to smuggle sensitive information out of supposedly secure cyber-systems. Exclusive: Lab tests discover 'new form of insider risk' with artificial intelligence agents engaging in autonomous, even 'aggressive' behaviours. Rogue artificial intelligence agents have worked together to smuggle sensitive information out of supposedly secure systems, in the latest sign that cyber-defences may be overwhelmed by unforeseen scheming by AIs. With companies increasingly asking AI agents to carry out complex tasks in internal systems, the behaviour has sparked concerns that supposedly helpful technology could pose a serious insider threat. In tests carried out by Irregular, an AI security lab that works with OpenAI and Anthropic, AIs given a simple task to create LinkedIn posts from material in a company's database dodged conventional anti-hack systems to publish sensitive password information in public, without being asked to do so.
- North America > United States > California (0.15)
- Europe > United Kingdom (0.15)
- Europe > Ukraine (0.06)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.35)
Hustlers are cashing in on China's OpenClaw AI craze
The AI tool has become the country's latest tech obsession. Feng Qingyang had always hoped to launch his own company, but he never thought this would be how, or that the day would come this fast. Feng, a 27-year-old software engineer based in Beijing, started tinkering with OpenClaw, a popular new open-source AI tool that can take over a device and autonomously complete tasks for a user, in January. He was immediately hooked, and before long he was helping other curious tech workers with less technical proficiency install the AI agent. Feng soon realized this could be a lucrative opportunity. By the end of January, he had set up a page on Xianyu, a secondhand shopping site, advertising "OpenClaw installation support."
- Asia > China > Beijing > Beijing (0.25)
- Asia > China > Guangdong Province > Shenzhen (0.06)
- North America > United States > Massachusetts (0.04)
- Asia > China > Zhejiang Province > Ningbo (0.04)
- Information Technology > Security & Privacy (0.69)
- Government (0.69)
- Information Technology > Services (0.48)
Building a strong data infrastructure for AI agent success
As companies race to adopt agentic AI to spur innovation and gain efficiency, building the right enterprise data infrastructure has become a critical component of success. In the race to adopt and show value from AI, enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey's annual AI report. Yet while early pilots often succeed, only one in 10 companies has actually scaled its AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies are seeing delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver the business context humans and agents need to use data reliably.
Jeffrey Epstein's Ties to CBP Agents Sparked a DOJ Probe
Documents say customs officers in the US Virgin Islands had friendly relationships with Epstein years after his 2008 conviction, showing how the infamous sex offender tried to cultivate allies. United States prosecutors and federal law enforcement spent over a year examining ties between Jeffrey Epstein and Customs and Border Protection officers stationed in the US Virgin Islands (USVI), according to documents recently released by the Department of Justice. As The Guardian and New York Times have reported, emails, text messages, and investigative records show that Epstein cultivated friendships with several officers, entertaining them on his island and offering to take them for whale-watching trips in his helicopter. He even brought one of them cannolis for Christmas Eve. In turn, Epstein would bring certain officers his complaints about his treatment at the hands of other CBP and federal agents.
- North America > US Virgin Islands (0.81)
- North America > United States > California (0.14)
- North America > United States > New York (0.05)
- (3 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Immigration & Customs (1.00)
- Information Technology > Artificial Intelligence (0.47)
- Information Technology > Communications > Mobile (0.35)
Evolved Policy Gradients
We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent's experience. Because this loss is highly flexible in its ability to take into account the agent's history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG's learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.
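The two-loop structure the abstract describes, an outer evolutionary search over loss parameters and an inner agent that minimizes the learned loss while being scored on true return, can be sketched compactly. This is a toy under stated assumptions, not the paper's implementation: the policy is a single scalar, the learned loss is a two-parameter quadratic that conditions directly on the task parameter rather than on the agent's experience via temporal convolutions, and names like inner_loop_return are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_loop_return(loss_params, task_seed, steps=50, inner_lr=0.1):
    """Toy inner loop: the policy is a scalar theta, the task is to hit a
    random target, and the agent descends the *learned* loss
    a * (theta - b*target)**2. Returns the true (hidden) episodic return."""
    target = np.random.default_rng(task_seed).normal()
    a, b = loss_params
    theta = 0.0
    for _ in range(steps):
        grad = 2.0 * a * (theta - b * target)        # gradient of the learned loss
        theta = float(np.clip(theta - inner_lr * grad, -1e6, 1e6))
    return -(theta - target) ** 2                    # true reward, never shown inside

def evolved_policy_gradient(generations=200, pop=32, sigma=0.1, lr=0.05):
    phi = np.zeros(2)                                # parameters of the learned loss
    for gen in range(generations):
        eps = rng.standard_normal((pop, 2))          # ES perturbations of phi
        fitness = np.array([inner_loop_return(phi + sigma * e, task_seed=gen)
                            for e in eps])
        adv = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
        phi += lr / (pop * sigma) * eps.T @ adv      # ES gradient estimate
    return phi

print(evolved_policy_gradient())                     # expect a > 0, b near 1
```

The inner loop never sees the true reward; it only descends the learned loss. Evolution still discovers loss parameters whose minimizer coincides with high true return, which is the paper's core idea in miniature.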
Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior
Inferring intent from observed behavior has been studied extensively within the frameworks of Bayesian inverse planning and inverse reinforcement learning. These methods infer a goal or reward function that best explains the actions of the observed agent, typically a human demonstrator. Another agent can use this inferred intent to predict, imitate, or assist the human user. However, a central assumption in inverse reinforcement learning is that the demonstrator is close to optimal. While models of suboptimal behavior exist, they typically assume that suboptimal actions are the result of some type of random noise or a known cognitive bias, like temporal inconsistency. In this paper, we take an alternative approach, and model suboptimal behavior as the result of internal model misspecification: the reason that user actions might deviate from near-optimal actions is that the user has an incorrect set of beliefs about the rules -- the dynamics -- governing how actions affect the environment. Our insight is that while demonstrated actions may be suboptimal in the real world, they may actually be near-optimal with respect to the user's internal model of the dynamics. By estimating these internal beliefs from observed behavior, we arrive at a new method for inferring intent. We demonstrate in simulation and in a user study with 12 participants that this approach enables us to more accurately model human intent, and can be used in a variety of applications, including offering assistance in a shared autonomy framework and inferring human preferences.
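The paper's estimation idea, that observed actions are near-optimal under the user's internal and possibly wrong dynamics, so one should fit the internal model that makes the demonstrations most likely, can be sketched in a toy line-world. Everything below is a hand-rolled illustration under that assumption, not the authors' code: the internal belief is a single parameter p (the probability the user thinks an intended move succeeds), the demonstrator is modeled as softmax-optimal, and the demo data is hypothetical:

```python
import numpy as np

N, GOAL, BETA = 7, 6, 5.0          # line-world states 0..6, goal at the right end
ACTIONS = (-1, +1)                 # left, right

def q_values(p, gamma=0.95, iters=200):
    """Value iteration under the *believed* dynamics: an intended move
    succeeds with probability p, otherwise the agent stays in place."""
    V = np.zeros(N)
    for _ in range(iters):
        Q = np.zeros((N, 2))
        for s in range(N):
            for i, a in enumerate(ACTIONS):
                s2 = min(max(s + a, 0), N - 1)
                r = 1.0 if s2 == GOAL else 0.0
                Q[s, i] = p * (r + gamma * V[s2]) + (1 - p) * gamma * V[s]
        V = Q.max(axis=1)
    return Q

def log_likelihood(p, demos):
    """Log-probability of (state, action_index) demos under a softmax-optimal
    policy with respect to the believed dynamics."""
    logits = BETA * q_values(p)
    logZ = np.logaddexp(logits[:, 0], logits[:, 1])
    return sum(logits[s, i] - logZ[s] for s, i in demos)

demos = [(0, 1), (1, 1), (2, 0), (3, 1)]   # hypothetical demonstrations
grid = np.linspace(0.05, 1.0, 20)
p_hat = grid[np.argmax([log_likelihood(p, demos) for p in grid])]
print(f"inferred internal belief about move success: {p_hat:.2f}")
```

A richer version would parametrize full transition models and maximize likelihood with gradients rather than a grid, but the principle is the same: the estimate p_hat is whatever internal dynamics best rationalize the observed behavior, even when that behavior looks suboptimal in the real world.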