- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
Amazon Is Using Specialized AI Agents for Deep Bug Hunting
Born out of an internal hackathon, Amazon's Autonomous Threat Analysis system uses a variety of specialized AI agents to detect weaknesses in the company's platforms and propose fixes. As generative AI accelerates software development, it is also enhancing the ability of digital attackers to carry out financially motivated or state-backed hacks. This means that security teams at tech companies have more code than ever to review while facing even more pressure from bad actors. On Monday, Amazon will publish, for the first time, details of an internal system known as Autonomous Threat Analysis (ATA), which the company has been using to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly search for other, similar flaws, and then develop remediations and detection capabilities to plug holes before attackers find them. ATA was born out of an internal Amazon hackathon in August 2024, and security team members say it has grown into a crucial tool since then.
- North America > United States > Texas (0.05)
- North America > United States > New York (0.05)
- North America > United States > California (0.05)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.71)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.49)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.35)
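Amazon has not published ATA's internals, but the workflow the article describes (detect a weakness, run variant analysis for similar flaws, then propose remediations and detections) can be made concrete with a purely hypothetical sketch; the agent functions, the pattern check, and the toy codebase below are all illustrative assumptions, not ATA's design:

```python
# Hypothetical sketch of the described workflow: one agent flags a weakness,
# a second searches the codebase for variants, a third proposes a fix and a
# detection. None of this reflects ATA's actual (unpublished) architecture.

CODEBASE = {
    "auth.py":   "query = 'SELECT * FROM users WHERE name=' + name",
    "orders.py": "query = 'SELECT * FROM orders WHERE id=' + oid",
    "util.py":   "total = sum(values)",
}

def find_weakness(codebase):
    """Agent 1: flag the first occurrence of a suspicious pattern
    (string-concatenated SQL, in this toy example)."""
    for path, code in codebase.items():
        if "SELECT" in code and "+" in code:
            return {"file": path, "pattern": "string-built SQL"}
    return None

def variant_analysis(codebase, finding):
    """Agent 2: search the rest of the codebase for similar flaws."""
    return [p for p, c in codebase.items()
            if p != finding["file"] and "SELECT" in c and "+" in c]

def propose_remediation(finding):
    """Agent 3: suggest a fix and a detection rule for the finding."""
    return {"fix": "use parameterized queries",
            "detection": f"alert on {finding['pattern']} in new commits"}

finding = find_weakness(CODEBASE)
variants = variant_analysis(CODEBASE, finding)
plan = propose_remediation(finding)
```

The point of the sketch is the pipeline shape: each stage is a specialized agent, and the variant-analysis stage is what lets one finding fan out into a codebase-wide sweep.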
ATA: A Neuro-Symbolic Approach to Implement Autonomous and Trustworthy Agents
Peer, David, Stabinger, Sebastian
Large Language Models (LLMs) have demonstrated impressive capabilities, yet their deployment in high-stakes domains is hindered by inherent limitations in trustworthiness, including hallucinations, instability, and a lack of transparency. To address these challenges, we introduce a generic neuro-symbolic approach, which we call Autonomous Trustworthy Agents (ATA). The core of our approach lies in decoupling tasks into two distinct phases: offline knowledge ingestion and online task processing. During knowledge ingestion, an LLM translates an informal problem specification into a formal, symbolic knowledge base. This formal representation is crucial as it can be verified and refined by human experts, ensuring its correctness and alignment with domain requirements. In the subsequent task processing phase, each incoming input is encoded into the same formal language. A symbolic decision engine then utilizes this encoded input in conjunction with the formal knowledge base to derive a reliable result. Through an extensive evaluation on a complex reasoning task, we demonstrate that a concrete implementation of ATA is competitive with state-of-the-art end-to-end reasoning models in a fully automated setup while maintaining trustworthiness. Crucially, with a human-verified and corrected knowledge base, our approach significantly outperforms even larger models, while exhibiting perfect determinism, enhanced stability against input perturbations, and inherent immunity to prompt injection attacks. By generating decisions grounded in symbolic reasoning, ATA offers a practical and controllable architecture for building the next generation of transparent, auditable, and reliable autonomous agents.
- Overview (0.68)
- Research Report (0.50)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.71)
- Asia (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
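The two-phase flow the abstract above describes (offline ingestion into a formal knowledge base, then online encoding plus symbolic decision-making) can be sketched with a toy rule engine. The rules, facts, and domain below are invented for illustration; in the paper the knowledge base is produced by an LLM and verified by experts, not hand-written:

```python
# Toy sketch of the ATA split: a formal knowledge base fixed offline, and a
# deterministic symbolic engine applied to each encoded input online.

# Offline phase: rules as (premises, conclusion) pairs. Hand-written here,
# standing in for LLM-translated, human-verified rules.
KNOWLEDGE_BASE = [
    ({"has_invoice", "amount_under_limit"}, "auto_approve"),
    ({"has_invoice", "amount_over_limit"}, "needs_review"),
]

def encode(raw):
    """Online step 1: encode an incoming input into formal facts."""
    facts = set()
    if raw.get("invoice"):
        facts.add("has_invoice")
    facts.add("amount_under_limit" if raw.get("amount", 0) <= 1000
              else "amount_over_limit")
    return facts

def decide(facts):
    """Online step 2: deterministic symbolic decision engine. Fires the
    first rule whose premises are all satisfied by the encoded facts."""
    for premises, conclusion in KNOWLEDGE_BASE:
        if premises <= facts:
            return conclusion
    return None  # no rule applies: abstain rather than guess

decision = decide(encode({"invoice": True, "amount": 250}))
```

Because the engine only ever sees formal facts, the same input always yields the same decision, and free-text instructions smuggled into the input have no channel to influence it, which is the determinism and prompt-injection immunity the abstract claims.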
Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning
Selmonaj, Ardian, Szehr, Oleg, Del Rio, Giacomo, Antonucci, Alessandro, Schneider, Adrian, Rüegsegger, Michael
This is motivated by the strong performance of RL agents in finding effective Courses of Action (CoA) across a wide range of environments, including combinatorial settings such as Chess or Go [1], real-time continuous control tasks found in arcade video games [2], and scenarios that combine control with strategic decision-making, as seen in modern wargames [3]. The application of RL in the context of air combat comes with a number of specific challenges. These include structural properties of the simulation scenario, such as the complexity of the individual units and their flight dynamics, the exponential size of the combined state and action spaces, the depth of the planning horizon, and the presence of stochasticity and imperfect information. Overall, the size of the game tree (i.e., the set of possible CoAs) in strategic games and defense scenarios is vast and beyond the reach of straightforward search. Furthermore, real-world operations involve not only the simultaneous maneuvering of individual units but also awareness of strategic positions and global mission planning. Training policies that integrate real-time control at the troop level with high-level mission planning at the commander level is challenging, as these tasks inherently demand distinct system requirements, algorithmic approaches, and training configurations.
- Europe > Switzerland (0.04)
- North America > United States > California > Monterey County > Monterey (0.04)
- Leisure & Entertainment > Games > Computer Games (0.87)
- Government > Military > Air Force (0.82)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.92)
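The commander/troop split described in the excerpt above can be sketched as a two-level policy: a high-level policy issues a mission option from the global picture, and unit-level policies condition their low-level actions on that option plus local observations. Everything below (the option names, observations, and decision rules) is an illustrative stand-in, not the paper's trained architecture:

```python
# Minimal two-level hierarchy: commander-level mission planning above,
# unit-level real-time control below, linked by the chosen option.

def commander_policy(global_state):
    """High-level mission planning from the global picture."""
    return "engage" if global_state["advantage"] > 0 else "retreat"

def unit_policy(option, local_obs):
    """Low-level control, conditioned on the commander's option and the
    unit's own local observation."""
    if option == "retreat":
        return "turn_away"
    return "fire" if local_obs["target_in_range"] else "close_distance"

option = commander_policy({"advantage": 1})
actions = [unit_policy(option, obs) for obs in
           [{"target_in_range": True}, {"target_in_range": False}]]
```

The hierarchy narrows each level's problem: the commander reasons over a small option space on a long horizon, while units handle fast, local control, which is one way to tame the joint state-action explosion the excerpt describes.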
Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models
Wang, Fei, Mehrabi, Ninareh, Goyal, Palash, Gupta, Rahul, Chang, Kai-Wei, Galstyan, Aram
Data is a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. Starting from a set of pre-defined principles, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility.
- North America > United States > Virginia (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
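The monitor-identify-advise loop in the Data Advisor abstract above can be sketched as follows. The principles, the coverage heuristic, and the stand-in `generate` function are illustrative assumptions; in the paper both generation and advising are done by LLMs:

```python
# Sketch of an advisor-guided generation loop: track coverage of the
# generated dataset against pre-defined principles, find the weakest
# aspect, and steer the next generation round toward it.

PRINCIPLES = ["self_harm", "violence", "privacy"]  # aspects to cover

def identify_weakness(dataset, principles, min_count=2):
    """Advisor step: return the least-covered principle, or None if
    every principle already meets the coverage target."""
    counts = {p: sum(1 for d in dataset if d["aspect"] == p)
              for p in principles}
    weakest = min(counts, key=counts.get)
    return weakest if counts[weakest] < min_count else None

def generate(aspect):
    """Stand-in for LLM-based generation, steered by the advisor."""
    return {"aspect": aspect, "text": f"example targeting {aspect}"}

dataset = [{"aspect": "violence", "text": "..."}] * 2  # initial skew
for _ in range(10):  # advisor-guided iterations
    gap = identify_weakness(dataset, PRINCIPLES)
    if gap is None:
        break
    dataset.append(generate(gap))
```

Starting from a dataset skewed entirely toward one aspect, the loop fills in the underrepresented principles until coverage is balanced, which is the dynamic-curation behavior the abstract describes.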
Agent-Time Attention for Sparse Rewards Multi-Agent Reinforcement Learning
She, Jennifer, Gupta, Jayesh K., Kochenderfer, Mykel J.
Cooperative multi-agent reinforcement learning (MARL), where a team of agents learns coordinated policies optimizing a global team reward, has been extensively studied in recent years [25, 13] and finds potential applications in a wide variety of domains such as robot swarm control [15, 2], coordinating autonomous drivers [26, 41], and network routing [38, 4]. Although a cooperative MARL problem can be framed as a centralized single-agent problem, with the team as one actor over the joint action space, such an approach does not scale well: the joint action space grows exponentially with the number of agents. Moreover, due to real-world constraints on communication and observability, this framing is often not useful for many real-world applications. Unfortunately, simply learning decentralized policies independently from local observations results in unstable learning and convergence issues due to the non-stationarity arising from simultaneous exploration [12, 33]. This has led MARL methods to focus on the centralized training decentralized execution (CTDE) paradigm, where decentralized policies can access extra state information during training but not during execution.
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.69)
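The CTDE paradigm described in the excerpt above can be sketched in a few lines: actors only ever condition on local observations, while a training-only critic may additionally see the global state and joint action. The linear scoring and threshold policy are toy stand-ins for learned networks:

```python
# Sketch of centralized training, decentralized execution (CTDE): the same
# actors that run decentralized at execution time are trained with help
# from a critic that sees global information.

def actor(local_obs):
    """Decentralized policy: local observation in, local action out."""
    return 1 if local_obs > 0.5 else 0

def centralized_critic(global_state, joint_action):
    """Training-only value estimate: may use the full state and the
    joint action, neither of which actors see at execution time."""
    return sum(global_state) + 0.1 * sum(joint_action)

# Execution: each agent acts on local information alone.
observations = [0.9, 0.2, 0.7]
joint_action = [actor(o) for o in observations]

# Training: the critic's gradient (in a real method) would flow back
# into the actors, stabilizing learning despite decentralized execution.
value = centralized_critic(observations, joint_action)
```

The key constraint is visible in the signatures: `actor` never takes the global state, so nothing learned during centralized training breaks the decentralized execution assumption.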
Adversarially Trained Model Compression: When Robustness Meets Efficiency
Gui, Shupeng, Wang, Haotao, Yu, Chen, Yang, Haichuan, Wang, Zhangyang, Liu, Ji
The robustness of deep models to adversarial attacks has gained significant attention in recent years, as have model compactness and efficiency; yet the two have mostly been studied separately, with few relationships drawn between them. This paper is concerned with the question: how can we combine the best of both worlds and obtain a robust and compact network? The answer is not as straightforward as it may seem, since the goals of model robustness and compactness can at times conflict. We formally study this new question by proposing a novel Adversarially Trained Model Compression (ATMC) framework. A unified constrained optimization formulation is designed, and an efficient algorithm is developed for it. An extensive group of experiments is then carefully designed and presented, demonstrating that ATMC obtains a remarkably more favorable trade-off among model size, accuracy, and robustness than currently available alternatives in various settings.
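The tension the ATMC abstract points at (compression versus robustness) can be illustrated on a toy linear scorer: magnitude pruning enforces a size budget, while an L-infinity-bounded adversary shrinks the decision margin in proportion to the remaining weight mass. This is only an illustration of the two interacting objectives, not the paper's unified constrained optimization:

```python
# Toy illustration of compression and robustness side by side: prune a
# weight vector to a budget, then compute the worst-case score margin
# under an L-inf bounded input perturbation.

def prune(weights, keep):
    """Compression step: keep the `keep` largest-magnitude weights,
    zeroing the rest (magnitude pruning)."""
    threshold = sorted((abs(w) for w in weights), reverse=True)[keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def worst_case_margin(weights, x, eps):
    """Robustness check for a linear scorer w.x: an adversary who can
    shift each input coordinate by at most eps reduces the score by
    at most eps * sum(|w|)."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return score - eps * sum(abs(w) for w in weights)

w = [0.9, -0.05, 0.6, 0.01]
w_pruned = prune(w, keep=2)                          # size constraint
margin = worst_case_margin(w_pruned, [1.0] * 4, eps=0.1)
```

Note the interaction: pruning removes small weights, which also shrinks the adversary's budget `eps * sum(|w|)`, but aggressive pruning can destroy the clean score itself; balancing the two jointly is what a unified formulation like ATMC's is for.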
Active Learning for Efficient Testing of Student Programs
Rastogi, Ishan, Kanade, Aditya, Shevade, Shirish
In this work, we propose an automated method to identify semantic bugs in student programs, called ATAS, which builds upon recent advances in both symbolic execution and active learning. Symbolic execution is a program analysis technique that can generate test cases through symbolic constraint solving. Our method uses a reference implementation of the task as its sole input. We compare our method with a symbolic execution-based baseline on 6 programming tasks retrieved from CodeForces, comprising a total of 23K student submissions. We show an average runtime improvement of over 2.5x over the baseline (thus making the method more suitable for online evaluation), without a significant degradation in evaluation accuracy.
- Education > Educational Setting > Online (0.93)
- Education > Educational Technology > Educational Software > Computer Based Training (0.46)
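The core check in the ATAS abstract above, comparing a student submission against a reference implementation on generated inputs, reduces to differential testing. The sketch below generates inputs randomly for simplicity, whereas ATAS derives them via symbolic execution and active learning; the example programs and input range are invented:

```python
# Differential-testing core: run reference and submission on generated
# inputs and report the first diverging input as a semantic bug witness.
import random

def reference(x):
    """Instructor's reference solution (sole input to the method)."""
    return abs(x)

def student(x):
    """Buggy submission: forgets to negate negative inputs."""
    return x

def find_bug(ref, sub, trials=100, seed=0):
    """Seeded random input generation; ATAS would instead pick inputs
    via symbolic constraint solving guided by active learning."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-50, 50)
        if ref(x) != sub(x):
            return x  # counterexample found
    return None  # no divergence observed on the generated inputs

counterexample = find_bug(reference, student)
```

Random generation can miss narrow bug-triggering regions, which is exactly why ATAS brings in symbolic execution (to solve for inputs reaching specific paths) and active learning (to spend the test budget where the programs are most likely to disagree).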