Goto

Collaborating Authors

Pearce


Towards LLM-based Root Cause Analysis of Hardware Design Failures

Qiu, Siyu, Wang, Muzhi, Afsharmazayejani, Raheel, Shahmiri, Mohammad Moradi, Tan, Benjamin, Pearce, Hammond

arXiv.org Artificial Intelligence

With advances in large language models (LLMs), new opportunities have emerged to develop tools that support the digital hardware design process. In this work, we explore how LLMs can assist with explaining the root cause of design issues and bugs that are revealed during synthesis and simulation, a necessary milestone on the pathway towards widespread use of LLMs in the hardware design process and for hardware security analysis. We find promising results: for our corpus of 34 different buggy scenarios, OpenAI's o3-mini reasoning model reached a correct determination 100% of the time under pass@5 scoring, with other state-of-the-art models and configurations usually achieving more than 80% performance, and more than 90% when assisted with retrieval-augmented generation. Encountering bugs, glitches, and faults is a normal part of the digital hardware design lifecycle. Ensuring they are completely removed and repaired is a time-consuming process requiring a deep understanding of both the technical cause of the issue and any impacts on the broader hardware system, particularly as any missed repair may have severe downstream functional and/or security consequences [1] (if the bug is of an exploitable nature). However, as digital hardware grows in complexity, so do the frequency and nature of the bugs themselves.
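
To make the pass@5 figure concrete, the sketch below implements the standard unbiased pass@k estimator from Chen et al. (2021); the authors' exact scoring procedure may differ, and the sample counts in the usage example are illustrative assumptions.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples, drawn without replacement from n
    generations of which c are correct, is itself correct."""
    if n - c < k:
        return 1.0  # every size-k draw must include a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative usage: 3 correct root-cause determinations out of
# 10 generations for one buggy scenario.
print(pass_at_k(n=10, c=3, k=5))  # ~0.917
```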


Pearce's Characterisation in an Epistemic Domain

Su, Ezgi Iraz

arXiv.org Artificial Intelligence

Answer-set programming (ASP) is a successful problem-solving approach in logic-based AI. In ASP, problems are represented as declarative logic programs, and solutions are identified through their answer sets. Equilibrium logic (EL) is a general-purpose nonmonotonic reasoning formalism, based on a monotonic logic called here-and-there logic. EL was originally proposed by Pearce as a foundational framework for ASP. Epistemic specifications (ES) are extensions of ASP-programs with subjective literals. These new modal constructs in the ASP-language make it possible to check whether a regular literal of ASP is true in every (or some) answer set of a program. ES-programs are interpreted by world-views, which are essentially collections of answer sets. (Reflexive) autoepistemic logic is a nonmonotonic formalism modeling the self-belief (knowledge) of ideally rational agents. A relatively new semantics for ES is based on a combination of EL and (reflexive) autoepistemic logic. In this paper, we first propose an overarching framework in the epistemic ASP domain. We then establish a correspondence between existing (reflexive) (auto)epistemic equilibrium logics and our easily adaptable, comprehensive framework, building on Pearce's characterisation of answer sets as equilibrium models. We achieve this by extending Ferraris' work on answer sets for propositional theories to the epistemic case, and we reveal the relationship between some ES-semantic proposals.
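
For reference, the characterisation the abstract builds on can be stated compactly. This is the standard formulation of equilibrium models over here-and-there (HT) logic; the notation is ours and may differ from the paper's.

```latex
% Pearce's characterisation of answer sets as equilibrium models.
% An HT-interpretation is a pair <H, T> of atom sets with H \subseteq T.
\begin{align*}
\langle T,T\rangle \text{ is an equilibrium model of } \Gamma \;\iff\;
  & \langle T,T\rangle \models_{\mathrm{HT}} \Gamma \ \text{and} \\
  & \text{no } H \subsetneq T \text{ satisfies }
    \langle H,T\rangle \models_{\mathrm{HT}} \Gamma, \\
T \text{ is an answer set of } \Gamma \;\iff\;
  & \langle T,T\rangle \text{ is an equilibrium model of } \Gamma.
\end{align*}
```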


Explaining EDA synthesis errors with LLMs

Qiu, Siyu, Tan, Benjamin, Pearce, Hammond

arXiv.org Artificial Intelligence

Training new engineers in digital design is a challenge, particularly when it comes to teaching the complex electronic design automation (EDA) tooling used in this domain. Learners will typically deploy designs in the Verilog and VHDL hardware description languages to Field Programmable Gate Arrays (FPGAs) from Altera (Intel) and Xilinx (AMD) via proprietary closed-source toolchains (Quartus Prime and Vivado, respectively). These tools are complex and difficult to use -- yet, as they are the tools used in industry, they are an essential first step in this space. In this work, we examine how recent advances in artificial intelligence may be leveraged to address aspects of this challenge. Specifically, we investigate if Large Language Models (LLMs), which have demonstrated text comprehension and question-answering capabilities, can be used to generate novice-friendly explanations of compile-time synthesis error messages from Quartus Prime and Vivado. To perform this study we generate 936 error message explanations using three OpenAI LLMs over 21 different buggy code samples. These are then graded for relevance and correctness, and we find that in approximately 71% of cases the LLMs give correct & complete explanations suitable for novice learners.
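
As a rough illustration of the study's setup (not the authors' actual harness), the sketch below asks an OpenAI chat model for a novice-friendly explanation of a synthesis error. The prompt wording, model name, and error message are illustrative assumptions.

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1) and an
# OPENAI_API_KEY in the environment. The error message and prompt
# are hypothetical, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()

error_message = (
    "ERROR: [Synth 8-2715] syntax error near 'endmodule' [counter.v:14]"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; the study used three OpenAI LLMs
    messages=[
        {"role": "system",
         "content": "You explain FPGA toolchain error messages to novice "
                    "learners in plain, jargon-free language."},
        {"role": "user",
         "content": f"Explain this synthesis error:\n{error_message}"},
    ],
)
print(response.choices[0].message.content)
```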


Chip-Chat: Challenges and Opportunities in Conversational Hardware Design

Blocklove, Jason, Garg, Siddharth, Karri, Ramesh, Pearce, Hammond

arXiv.org Artificial Intelligence

Modern hardware design starts with specifications provided in natural language. These are then translated by hardware engineers into appropriate Hardware Description Languages (HDLs) such as Verilog before synthesizing circuit elements. Automating this translation could reduce sources of human error from the engineering process. But, it is only recently that artificial intelligence (AI) has demonstrated capabilities for machine-based end-to-end design translations. Commercially-available instruction-tuned Large Language Models (LLMs) such as OpenAI's ChatGPT and Google's Bard claim to be able to produce code in a variety of programming languages; but studies examining them for hardware are still lacking. In this work, we thus explore the challenges faced and opportunities presented when leveraging these recent advances in LLMs for hardware design. Given that these `conversational' LLMs perform best when used interactively, we perform a case study where a hardware engineer co-architects a novel 8-bit accumulator-based microprocessor architecture with the LLM according to real-world hardware constraints. We then sent the processor to tapeout in a Skywater 130nm shuttle, meaning that this `Chip-Chat' resulted in what we believe to be the world's first wholly-AI-written HDL for tapeout.


Scalable Gaussian Process Variational Autoencoders

Jazbec, Metod, Fortuin, Vincent, Pearce, Michael, Mandt, Stephan, Rätsch, Gunnar

arXiv.org Machine Learning

Variational autoencoders (VAEs) are among the most widely used models in representation learning and generative modeling (Kingma and Welling, 2013, 2019; Rezende et al., 2014). As VAEs typically make use of factorized priors, they fall short when modeling correlations between different data points. However, more expressive priors that capture correlations enable useful applications. Casale et al. (2018), for instance, showed that by modeling prior correlations between the data, one could generate a digit's rotated image based on rotations of the same digit at different angles. Gaussian process VAEs (GP-VAEs) have been designed to overcome this shortcoming (Casale et al., 2018). These models introduce a Gaussian process (GP) prior over the latent variables that correlates pairs of latent variables through a kernel function. While GP-VAEs have outperformed standard VAEs on many tasks (Casale et al., 2018; Fortuin et al., 2020; Pearce, 2020), combining the GPs and VAEs brings along fundamental computational challenges. On the one hand, neural networks reveal their full power in conjunction with large datasets, making mini-batching a practical necessity. GPs, on the other hand, are traditionally restricted to medium-scale datasets due to their unfavorable scaling.
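
The scaling tension the abstract describes is visible directly in the priors. Below is a standard way to contrast the factorized VAE prior with the GP-VAE prior; the notation is ours and may differ from the paper's.

```latex
% Factorized VAE prior vs. GP-VAE prior (standard formulation).
\begin{align*}
\text{VAE:}\quad   & p(Z) = \prod_{n=1}^{N} \mathcal{N}(z_n;\, 0,\, I)
                     && \text{(independent across data points)} \\
\text{GP-VAE:}\quad& p\bigl(z^{(l)}_{1:N} \mid x_{1:N}\bigr)
                     = \mathcal{N}(0,\, K_X),
                     && [K_X]_{ij} = k(x_i, x_j),
\end{align*}
% per latent channel l, with auxiliary inputs x (e.g. rotation angles).
% Evaluating the GP prior requires K_X^{-1} and det(K_X): O(N^3) time
% and O(N^2) memory, which is why naive GP-VAEs clash with mini-batching.
```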


Gamers Forge Their Own Paths When It Comes to Accessibility

WIRED

When Mark Barlet realized there weren't many gaming resources available for a friend with multiple sclerosis, he and Stephen Spohn helmed a solution that would change countless lives. They created AbleGamers and turned a personal mission into a global vision of video game accessibility for all. "AbleGamers hasn't followed any path. We've created our own," Spohn said. He's AbleGamers' COO and has spinal muscular atrophy, which attacks his muscles and limits movement from the neck down.


Pick a number: big data, artificial intelligence and aviation

#artificialintelligence

Airline transport faces an enviable problem: how does it improve an already impressive safety record? Doing so may be beyond human capability, but well within the potential of two computing concepts--big data and artificial intelligence. Big data is an almost self-defining term. More specifically, as defined by Gartner Group in 2001, it is data that has the three Vs: 'greater variety, arriving in increasing volumes and with ever-higher velocity.' The Airbus A350 is a good example of the three Vs.


Why Apex Legends has kept me playing for 500 hours

The Guardian

I have now played Apex Legends for over 500 hours. The online multiplayer shooter, developed by Californian studio Respawn Entertainment and released in February 2019, has been my obsession all year, seeing off a variety of pretenders from Doom Eternal to Animal Crossing: New Horizons. Set in a science-fiction universe tied to Respawn's successful Titanfall series, it is another title in the battle royale genre alongside the Goliath that is Fortnite, as well as PlayerUnknown's Battlegrounds and Call of Duty: Warzone. You land in a hi-tech future landscape with two team-mates and then you scramble about, finding weapons, while 19 other teams try to kill you and everyone else. The last team left alive is the winner.


The Case For 'Smart' Security

#artificialintelligence

Ed. note: This is the first article in a two-part series about AI, its potential impact on how organizations approach security, and the accompanying considerations around implementation, efficacy, and compliance. Is Artificial Intelligence (AI) on track to help the world streamline and solve tasks that are better left to a machine? One might think so, given everything we've seen and heard about the impact of AI on our society -- from our phones telling us the best way to drive home, to chatbots on e-commerce sites answering product questions, to devices as small as a thermostat or as large as an electric vehicle removing friction from everyday life. Now AI is entering the space of cybersecurity, promising to bring greater speed and accuracy in detecting and responding to breaches, analyzing user behavior, and predicting new strains of malware. AI and machine learning technologies can help protect organizations from a continuously evolving threat landscape -- but AI is not just for sophisticated attacks; it can also help protect against classic attack scenarios.

