A Provably Secure Strong PUF based on LWE: Construction and Implementation
Xiaodan Xi, Ge Li, Ye Wang, Yeonsoo Jeon, Michael Orshansky
arXiv.org Artificial Intelligence
We construct a strong PUF with provable security against ML attacks on both classical and quantum computers. The security is guaranteed by the cryptographic hardness of learning decryption functions of public-key cryptosystems, specifically the hardness of the learning-with-errors (LWE) problem defined on integer lattices. We call our construction the lattice PUF. It is built from a physically obfuscated key and an LWE decryption function block. To allow deployment in different scenarios, we demonstrate designs with different latency-area trade-offs: a compact design uses a highly serialized LFSR and LWE decryption function, while a latency-optimized design uses an unrolled LFSR and a parallel datapath. In addition to the theoretical security guarantee, we evaluate empirical resistance to the various leading ML techniques: the prediction error remains above 49.76% after 1 million training CRPs. The resource-efficient design requires only 45 slices for the PUF logic proper, and 351 slices for a fuzzy extractor. The latency-optimized design achieves a 148X reduction in latency at a 10X increase in PUF hardware utilization. The mean uniformity of PUF responses is 49.98%, the mean uniqueness is 50.00%, and the mean reliability is 1.26%.

Silicon physical unclonable functions (PUFs) are security primitives commonly adopted for device identification, authentication, and cryptographic key generation [38]. A PUF exploits the inherent randomness of CMOS technology to generate an output response for a given input challenge. Weak PUFs, also called physically obfuscated keys (POKs) [15], have a limited challenge-response pair (CRP) space; strong PUFs, in contrast, support a large CRP space. The security of a strong PUF requires the associated CRPs to be unpredictable: given a certain set of known CRPs, it should be hard to predict the unobserved CRPs, even with the most powerful machine learning (ML) based modeling attacks. Engineering an ML-resistant strong PUF with low hardware cost has been challenging.
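The LWE decryption step at the heart of the construction can be sketched as follows. This is a minimal illustration only: the modulus, dimension, and noise range below are toy assumptions, not the paper's concrete instantiation, and `lwe_decrypt` is a hypothetical helper name. The idea is that the challenge encodes an LWE ciphertext (a, b) and the secret key s plays the role of the physically obfuscated key; the response bit depends on whether b - ⟨a, s⟩ mod q lies closer to 0 or to q/2.

```python
import numpy as np

def lwe_decrypt(a, b, s, q):
    """Decrypt one bit from an LWE ciphertext (a, b) under secret s.

    Computes d = b - <a, s> mod q and outputs 0 if d is closer to 0
    (equivalently, to q) than to q/2, and 1 otherwise.
    """
    d = (b - int(np.dot(a, s))) % q
    return 0 if min(d, q - d) < abs(d - q // 2) else 1

# Toy round-trip check: encrypt a bit m with small noise e, then decrypt.
rng = np.random.default_rng(0)
q, n = 251, 8                       # toy modulus and dimension (assumptions)
s = rng.integers(0, q, n)           # secret key (the role of the POK)
a = rng.integers(0, q, n)           # public random vector from the challenge
e = int(rng.integers(-2, 3))        # small noise term
for m in (0, 1):
    b = (int(np.dot(a, s)) + e + m * (q // 2)) % q
    assert lwe_decrypt(a, b, s, q) == m
```

Because the noise e is small relative to q/4, the decrypted bit is recovered correctly; the hardness of predicting responses without s is exactly the hardness of learning the LWE decryption function.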
Variants of the original arbiter PUF (APUF) designed for stronger modeling-attack resilience, including the bistable ring PUF and the feed-forward APUF, have also been broken via ML attacks [37], [35]. The recent interpose PUF (IPUF) proposal [33] claims provable ML resistance.
Mar-5-2023