ChipNeMo: Domain-Adapted LLMs for Chip Design
Liu, Mingjie, Ene, Teodor-Dumitru, Kirby, Robert, Cheng, Chris, Pinckney, Nathaniel, Liang, Rongjian, Alben, Jonah, Anand, Himyanshu, Banerjee, Sanmitra, Bayraktaroglu, Ismet, Bhaskaran, Bonita, Catanzaro, Bryan, Chaudhuri, Arjun, Clay, Sharon, Dally, Bill, Dang, Laura, Deshpande, Parikshit, Dhodhi, Siddhanth, Halepete, Sameer, Hill, Eric, Hu, Jiashang, Jain, Sumit, Khailany, Brucek, Kokai, George, Kunal, Kishor, Li, Xiaowei, Lind, Charley, Liu, Hao, Oberman, Stuart, Omar, Sujeet, Pratty, Sreedhar, Raiman, Jonathan, Sarkar, Ambar, Shao, Zhengjiang, Sun, Hanfei, Suthar, Pratik P, Tej, Varun, Turner, Walker, Xu, Kaizhe, Ren, Haoxing
ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we adopt the following domain adaptation techniques: custom tokenizers, domain-adaptive continued pretraining, supervised fine-tuning (SFT) with domain-specific instructions, and domain-adapted retrieval models. We evaluate these methods on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to a 5x model size reduction with similar or better performance on a range of design tasks. Our findings also indicate that a gap remains between our current results and ideal outcomes. We believe that further investigation of domain-adapted LLM approaches will help close this gap in the future.
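To make the first two techniques in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of custom tokenizer augmentation followed by domain-adaptive continued pretraining, using the Hugging Face Transformers and Datasets APIs. The model name "gpt2", the added domain tokens, and the corpus file "chip_design_corpus.txt" are placeholders standing in for the paper's foundation model and proprietary chip-design data.

    # Sketch, under the assumptions stated above: extend a general-purpose
    # tokenizer with domain tokens, then continue causal-LM pretraining on
    # in-domain text.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)
    from datasets import load_dataset

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base model
    tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
    # Custom tokenizer step: add domain-specific terms (hypothetical examples).
    tokenizer.add_tokens(["always_ff", "posedge", "tcl_proc"])

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.resize_token_embeddings(len(tokenizer))  # grow embeddings for new tokens

    # Domain-adaptive continued pretraining on in-domain text (placeholder file).
    dataset = load_dataset("text", data_files={"train": "chip_design_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=1024)

    train_ds = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    # mlm=False selects the causal language-modeling objective.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="dapt-out", per_device_train_batch_size=1),
        train_dataset=train_ds,
        data_collator=collator,
    )
    trainer.train()

Extending the vocabulary with frequent domain terms is one common way to improve tokenization efficiency on in-domain text; resize_token_embeddings keeps the embedding matrix consistent with the enlarged vocabulary before continued pretraining.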