VerificAgent: Domain-Specific Memory Verification for Scalable Oversight of Aligned Computer-Use Agents
Nguyen, Thong Q., Desai, Shubhang, Anwar, Raja Hasnain, Shaik, Firoz, Suryanarayanan, Vishwas, Chowdhary, Vishal
Continual memory augmentation lets computer-using agents (CUAs) learn from prior interactions, but unvetted memories can encode domain-inappropriate or unsafe heuristics--spurious rules that drift from user intent and safety constraints. We introduce VerificAgent, a scalable oversight framework that treats persistent memory as an explicit alignment surface. VerificAgent combines (1) an expert-curated seed of domain knowledge, (2) iterative, trajectory-based memory growth during training, and (3) a post-hoc human fact-checking pass to sanitize accumulated memories before deployment. Evaluated on OSWorld productivity tasks and additional adversarial stress tests, VerificAgent improves task reliability, reduces hallucination-induced failures, and preserves interpretable, auditable guidance--without additional model fine-tuning. By letting humans correct high-impact errors once, the verified memory acts as a frozen safety contract that future agent actions must satisfy. Our results suggest that domain-scoped, human-verified memory offers a scalable oversight mechanism for CUAs, complementing broader alignment strategies by limiting silent policy drift and anchoring agent behavior to the norms and safety constraints of the target domain.
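The three components of the framework (expert-curated seed, trajectory-based growth, post-hoc human verification) can be sketched as a toy memory store. This is an illustrative assumption, not the paper's implementation; all class and method names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    text: str
    source: str          # "seed" (expert-curated) or "trajectory" (learned)
    verified: bool = False

class VerifiedMemory:
    """Toy memory store in the spirit of the abstract: seed entries are
    trusted by construction, trajectory-derived heuristics stay unverified
    until a human fact-checking pass sanitizes and freezes the store."""

    def __init__(self, seed_entries):
        # (1) Expert-curated seed of domain knowledge.
        self.entries = [MemoryEntry(t, "seed", verified=True) for t in seed_entries]
        self.frozen = False

    def add_from_trajectory(self, heuristic):
        # (2) Iterative memory growth during training; unverified until checked.
        if self.frozen:
            raise RuntimeError("memory is frozen; no silent policy drift allowed")
        self.entries.append(MemoryEntry(heuristic, "trajectory"))

    def human_review(self, verdicts):
        # (3) Post-hoc human pass: keep only approved entries, then freeze
        # the store so it acts as a fixed safety contract at deployment.
        self.entries = [e for e in self.entries
                        if e.verified or verdicts.get(e.text, False)]
        for e in self.entries:
            e.verified = True
        self.frozen = True

    def retrieve(self):
        # Only verified guidance reaches the deployed agent.
        return [e.text for e in self.entries if e.verified]
```

The freeze step mirrors the abstract's "frozen safety contract": once humans have corrected high-impact errors, no further unvetted heuristics can enter the memory that conditions agent behavior.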
'We have to move fast': US looks to establish rules for artificial intelligence
The US government is taking its first tentative steps toward establishing rules for artificial intelligence tools, as the frenzy over generative AI and chatbots reaches a fever pitch. The US commerce department on Tuesday announced it is officially requesting public comment on how to create accountability measures for AI, seeking help on how to advise US policymakers to approach the technology. "In the same way that financial audits created trust in the accuracy of financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy," said Alan Davidson, the head of the National Telecommunications and Information Administration (NTIA), at a press conference at the University of Pittsburgh. Davidson said that the NTIA is seeking feedback from the public, including from researchers, industry groups, and privacy and digital rights organizations, on the development of audits and assessments of AI tools created by private industry. He also said that the NTIA is looking to establish guardrails that would allow the government to determine whether AI systems perform the way companies claim they do, whether they are safe and effective, whether they have discriminatory outcomes or "reflect unacceptable levels of bias", whether they spread or perpetuate misinformation, and whether they respect individuals' privacy.