Learning Library Cell Representations in Vector Space

Liang, Rongjian, Lu, Yi-Chen, Liu, Wen-Hao, Ren, Haoxing

arXiv.org Artificial Intelligence 

Abstract: We propose Lib2Vec, a novel self-supervised framework to efficiently learn meaningful vector representations of library cells, enabling ML models to capture essential cell semantics. The framework comprises three key components: (1) an automated method for generating regularity tests to quantitatively evaluate how well cell representations reflect inter-cell relationships; (2) a self-supervised learning scheme that systematically extracts training data from Liberty files, removing the need for costly labeling; and (3) an attention-based model architecture that accommodates various pin counts and enables the creation of property-specific cell and arc embeddings. Experimental results demonstrate that Lib2Vec effectively captures functional and electrical similarities. Moreover, linear algebraic operations on cell vectors reveal meaningful relationships, such as vector(BUF) - vector(INV) + vector(NAND) approximating the vector of AND, showcasing the framework's nuanced representation capabilities. Lib2Vec also enhances downstream circuit learning applications, especially when labeled data is scarce.

Library cell representations are vital for effective machine learning (ML)-based circuit analysis and optimization, as library cells are the fundamental building blocks of circuit netlists. Traditional methods often rely on manually defined features [1]-[4], requiring extensive expertise and feature engineering. Alternatively, one-hot encoding [5] demands large amounts of domain-specific training data, which may not always be available.
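The analogy test mentioned in the abstract is evaluated, as in word-embedding work, by nearest-neighbor search under cosine similarity. The following is a minimal sketch of that evaluation; the embeddings here are hypothetical stand-ins (the AND vector is deliberately constructed so the analogy holds), since the actual Lib2Vec model, its dimensionality, and its trained vectors are not given here.

```python
import math
import random

random.seed(0)
DIM = 8  # hypothetical embedding dimension, not taken from the paper

def rand_vec():
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

# Hypothetical cell embeddings standing in for trained Lib2Vec vectors.
cells = {name: rand_vec() for name in ["BUF", "INV", "NAND", "AND", "NOR"]}

# Construct AND so that vector(BUF) - vector(INV) + vector(NAND) ~ vector(AND),
# purely for illustration of the analogy test.
cells["AND"] = [b - i + n + random.gauss(0.0, 0.05)
                for b, i, n in zip(cells["BUF"], cells["INV"], cells["NAND"])]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Analogy query: BUF - INV + NAND, then rank the remaining cells by cosine.
query = [b - i + n for b, i, n in zip(cells["BUF"], cells["INV"], cells["NAND"])]
nearest = max((c for c in cells if c not in {"BUF", "INV", "NAND"}),
              key=lambda c: cosine(query, cells[c]))
print(nearest)
```

Under this constructed example the nearest neighbor of the query is AND; with real Lib2Vec embeddings the same ranking procedure would test whether the learned space exhibits that regularity.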