How and Why to Manipulate Your Own Agent: On the Incentives of Users of Learning Agents
The use of automated learning agents is becoming increasingly prevalent in many online economic applications such as online auctions and automated trading. Motivated by such applications, this paper is dedicated to fundamental modeling and analysis of the strategic situations that the users of automated learning agents face. We consider strategic settings where several users engage in a repeated online interaction, assisted by regret-minimizing learning agents that repeatedly play a game on their behalf. We propose to view the outcomes of the agents' dynamics as inducing a meta-game between the users. Our main focus is on whether users can benefit in this meta-game from manipulating their own agents by misreporting their parameters to them. We define a general framework to model and analyze these strategic interactions between users of learning agents in general games, and study the equilibria induced between the users in three classes of games. We show that, generally, users have incentives to misreport their parameters to their own agents, and that such strategic user behavior can lead to very different outcomes than those anticipated by standard analysis.
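The regret-minimizing agents this abstract refers to can be illustrated with a minimal sketch: two learners running the classic multiplicative-weights (Hedge) update repeatedly play matching pennies, and each one's average external regret shrinks toward zero. Everything below (the function `hedge_play`, the payoff tables, the learning rate `eta`) is an illustrative assumption of ours, not the paper's actual model.

```python
import math
import random

def hedge_play(payoffs, rounds=2000, eta=0.1, seed=0):
    """Two Hedge (multiplicative-weights) learners repeatedly play a
    two-action game; returns each player's average external regret."""
    rng = random.Random(seed)
    w = [[0.5, 0.5], [0.5, 0.5]]      # each player's mixed strategy (normalized weights)
    cum = [[0.0, 0.0], [0.0, 0.0]]    # cumulative payoff of each fixed action
    realized = [0.0, 0.0]             # cumulative payoff actually earned
    for _ in range(rounds):
        acts = [0 if rng.random() < w[p][0] else 1 for p in range(2)]
        for p in range(2):
            opp = acts[1 - p]
            realized[p] += payoffs[p][acts[p]][opp]
            # full-information Hedge update: reweight both actions
            for a in range(2):
                cum[p][a] += payoffs[p][a][opp]
                w[p][a] *= math.exp(eta * payoffs[p][a][opp])
            total = w[p][0] + w[p][1]
            w[p][0], w[p][1] = w[p][0] / total, w[p][1] / total
    # external regret: best fixed action in hindsight minus realized payoff
    return [(max(cum[p]) - realized[p]) / rounds for p in range(2)]

# Matching pennies: player 0 wins on a match, player 1 on a mismatch.
mp = [
    [[1, -1], [-1, 1]],   # payoffs[0][my_action][opp_action]
    [[-1, 1], [1, -1]],   # payoffs[1][my_action][opp_action]
]
regrets = hedge_play(mp)
```

With payoffs in [-1, 1] the standard Hedge bound gives average regret at most ln(2)/(eta * rounds) + eta, roughly 0.1 for these settings; the paper's "manipulation" question is then what a user gains by misreporting the payoff table such an agent optimizes.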
Google's new Gemini 3 "vibe-codes" responses and comes with its own agent
Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text, or images), and will work like an agent. The previous model, Gemini 2.5, supports multimodal input: users can feed it images, handwriting, or voice, but it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. Gemini 3, by contrast, introduces what Google calls "generative interfaces," which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text. Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as "How many days are you traveling?" or "What kinds of activities do you enjoy?" It also presents clickable options based on what you might want next. When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- North America > United States > New York (0.04)
- (2 more...)
- Leisure & Entertainment > Games (1.00)
- Banking & Finance (0.68)
Realtor rules just changed dramatically. Here's what buyers and sellers can expect
For decades, real estate commissions have been somewhat standardized, with most home sellers paying 5% to 6% commission to cover both the listing agent and the buyer's agent. A landmark agreement from the National Assn. of Realtors paved the way for a new set of rules that will likely shake up the entire industry, affecting sellers, buyers and the agents tasked with pushing deals across the finish line. The most pivotal rule change pertains to how buyers' agents are paid. Traditionally, home sellers have paid for the commission of both their agent and the buyer's agent, which critics argue stifled competition and drove up home prices. The new rule prohibits most listings from saying how much buyers' agents are paid, removing the assumption that sellers are on the hook for paying both agents.
- North America > United States > California (0.17)
- Europe > San Marino (0.05)
Meet Chaos-GPT: An AI Tool That Seeks to Destroy Humanity - Decrypt
Sooner than even the most pessimistic among us expected, a new, evil artificial intelligence bent on destroying humankind has arrived. Known as Chaos-GPT, the autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity." It hasn't gotten very far. But it's definitely a weird idea, as well as the latest peculiar use of Auto-GPT, an open-source program that allows ChatGPT to be used autonomously to carry out tasks set by the user. Auto-GPT searches the internet, accesses an internal memory bank to analyze tasks and information, connects with other APIs, and much more, all without needing a human to intervene.
- Information Technology > Communications > Social Media (0.81)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.64)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.64)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.64)
Real-World Human-Robot Collaborative Reinforcement Learning
Shafti, Ali, Tjomsland, Jonas, Dudley, William, Faisal, A. Aldo
The intuitive collaboration of humans and intelligent robots (embodied AI) in the real world is an essential objective for many desirable applications of robotics. While there is much research regarding explicit communication, we focus on how humans and robots interact implicitly, at the level of motor adaptation. We present a real-world setup of a human-robot collaborative maze game, designed to be non-trivial and solvable only through collaboration: the actions are limited to rotations about two orthogonal axes, with each axis assigned to one player. As a result, neither the human nor the agent can solve the game alone. We use a state-of-the-art reinforcement learning algorithm for the robotic agent and achieve results within 30 minutes of real-world play, without any pre-training. We then use this system to perform systematic experiments on human/agent behaviour and adaptation when co-learning a policy for the collaborative game. We present results on how co-policy learning occurs over time between the human and the robotic agent, with each participant's agent coming to serve as a representation of how they would play the game. This allows us to relate a person's success when playing with agents other than their own by comparing the policy of that agent with the policy of their own agent.
- North America > United States > Oregon > Benton County > Corvallis (0.04)
- Europe > Denmark (0.04)
- Asia > Japan > Honshū > Kansai > Hyogo Prefecture > Kobe (0.04)