David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?
Shashwat Shankar, Subhranshu Pandey, Innocent Dengkhw Mochahari, Bhabesh Mali, Animesh Basak Chowdhury, Sukanta Bhattacharjee, Chandan Karfa
arXiv.org Artificial Intelligence
Large Language Model (LLM) inference demands massive compute and energy, making domain-specific tasks expensive and unsustainable. As foundation models keep scaling, we ask: is bigger always better for hardware design? Our work tests this by evaluating Small Language Models coupled with a curated agentic AI framework on NVIDIA's Comprehensive Verilog Design Problems (CVDP) benchmark. Results show that agentic workflows, through task decomposition, iterative feedback, and correction, not only unlock near-LLM performance at a fraction of the cost but also create learning opportunities for agents, paving the way for efficient, adaptive solutions in complex design tasks.
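The decomposition-plus-feedback loop the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual framework: `draft_rtl` stands in for a small-language-model call and `check_rtl` for a lint/simulation step, with checker errors folded back into the next prompt until the design passes.

```python
# Hedged sketch of an agentic generate -> check -> correct loop for RTL.
# All functions are hypothetical stand-ins, not the authors' implementation.

def draft_rtl(prompt: str) -> str:
    """Stand-in for a Small Language Model call that drafts Verilog."""
    # A real agent would query an SLM here; we return canned code so the
    # sketch is runnable: the first draft is buggy, a feedback-augmented
    # prompt yields the fix.
    if "feedback:" in prompt:
        return "module and2(input a, b, output y); assign y = a & b; endmodule"
    return "module and2(input a, b, output y); assign y = a | b; endmodule"

def check_rtl(code: str) -> list[str]:
    """Stand-in for lint/simulation; returns a list of error messages."""
    errors = []
    if "a | b" in code:
        errors.append("functional: expected AND, got OR")
    return errors

def agentic_loop(task: str, max_iters: int = 3) -> tuple[str, int]:
    """Iterate draft -> check, feeding errors back, until clean or budget spent."""
    prompt = task
    code = ""
    for attempt in range(1, max_iters + 1):
        code = draft_rtl(prompt)
        errors = check_rtl(code)
        if not errors:
            return code, attempt  # converged
        # Correction step: fold the checker's feedback into the next prompt.
        prompt = task + " feedback: " + "; ".join(errors)
    return code, max_iters
```

In this toy run the first draft fails the functional check and the second, feedback-informed draft passes, mirroring how iterative correction lets a small model recover quality that a single-shot query would miss.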
Dec-5-2025