Collaborating Authors

Yang, Haoyu


Large Scale Mask Optimization Via Convolutional Fourier Neural Operator and Litho-Guided Self Training

arXiv.org Artificial Intelligence

Machine learning techniques have been extensively studied for mask optimization problems, aiming at better mask printability, shorter turnaround time, better mask manufacturability, and so on. However, most of this research focuses on generating initial solutions for small design regions. To further realize the potential of machine learning on mask optimization tasks, we present a Convolutional Fourier Neural Operator (CFNO) that can efficiently learn layout tile dependencies and hence promises stitch-less large-scale mask optimization with limited intervention from legacy tools. We discover the possibility of litho-guided self-training (LGST) through a trained machine learning model when solving non-convex optimization problems, which allows iterative model and dataset updates and brings significant improvement in model performance. Experimental results show that, for the first time, our machine learning-based framework outperforms state-of-the-art academic numerical mask optimizers with an order-of-magnitude speedup.
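The spectral-convolution idea at the heart of a Fourier Neural Operator can be sketched in a few lines. The single-channel NumPy function below is an illustrative assumption, not the authors' implementation: the paper's CFNO combines such layers with convolutions across multiple channels, and a full FNO layer also mixes the negative-frequency rows.

```python
import numpy as np

def spectral_conv2d(x, weights, modes):
    """Core of a Fourier Neural Operator layer (single channel):
    mix only the lowest `modes` frequency modes with learned complex weights."""
    x_ft = np.fft.rfft2(x)                                   # to the frequency domain
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes, :modes] = x_ft[:modes, :modes] * weights  # truncate + mix
    return np.fft.irfft2(out_ft, s=x.shape)                  # back to the spatial domain

# Toy usage on a random layout tile; identity weights act as a low-pass filter.
tile = np.random.rand(32, 32)
out = spectral_conv2d(tile, np.ones((8, 8), dtype=complex), modes=8)
```

Because the weights live in the frequency domain, a single layer's receptive field spans the whole tile, which is what makes this kind of operator attractive for capturing long-range tile dependencies cheaply.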


ROS-X-Habitat: Bridging the ROS Ecosystem with Embodied AI

arXiv.org Artificial Intelligence

Since the earliest days of robotics, researchers have sought to build embodied agents to perform a variety of jobs, such as assistive tasks in factories [Oliff et al., 2020] or wildfire surveillance [Julian and Kochenderfer, 2019]. Following tremendous advancements in deep learning and convolutional neural networks in the past decade, researchers have been able to develop reinforcement learning (RL)-based embodied agents that interact with the real world on the basis of sensory observations. Software platforms such as OpenAI Gym [Brockman et al., 2016], the Unity ML-Agents Toolkit [Juliani et al., 2018], and AI Habitat [Savva et al., 2019] have emerged to address the community's need for training and evaluating RL-based embodied agents end-to-end. Our research group was particularly intrigued by the AI Habitat platform, which offers a high-performance, photorealistic simulator, access to a sizeable library of visually rich scanned 3D environments, and a modular software design. However, even though these platforms allow roboticists to reuse existing RL algorithms and train agents in simulators with ease, there is a critical step in using them for embodied agents that is only partially addressed: connecting the trained agent with a real robot. Ideally, after training an RL agent in simulation, one would like to take advantage of the extensive set of tools and knowledge from the robotics community to make it easy to embody that agent. One particularly popular tool from the robotics community is ROS, a robotics-focused middleware platform with extensive support for classical robotic mapping, planning, and control algorithms ([mov, dwa]) as well as drivers for a wide variety of compute, sensing, and actuation hardware.
But ROS support for directly training an RL agent is limited, and Gazebo -- the standard simulation environment used for ROS systems -- cannot match the level of photorealism or simulation speed of tools specifically designed to train large-scale RL agents [Liang et al., 2019].


Machine Learning for Electronic Design Automation: A Survey

arXiv.org Artificial Intelligence

In recent years, with the development of semiconductor technology, the scale of integrated circuits (ICs) has grown exponentially, challenging the scalability and reliability of the circuit design flow. Therefore, EDA algorithms and software are required to be more effective and efficient to deal with an extremely large search space at low latency. Machine learning (ML) now plays an important role in our lives and has been widely applied in many scenarios. ML methods, including traditional and deep learning algorithms, achieve impressive performance in solving classification, detection, and design space exploration problems. Additionally, ML methods show great potential to generate high-quality solutions for many NP-complete (NPC) problems, which are common in the EDA field, whereas traditional methods incur huge time and resource consumption to solve these problems. Traditional methods usually solve every problem from scratch, without accumulating knowledge. Instead, ML algorithms focus on extracting high-level features or patterns that can be reused in other related or similar situations, avoiding repeated complicated analysis. Therefore, applying machine learning methods is a promising direction for accelerating the solution of EDA problems. These authors are ordered alphabetically.


Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection

arXiv.org Machine Learning

There is substantial interest in the use of machine learning (ML) based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning. However, while deep learning methods have surpassed state-of-the-art performance in several applications, they have exhibited an intrinsic susceptibility to adversarial perturbations --- small but deliberate alterations to the input of a neural network that precipitate incorrect predictions. In this paper, we investigate whether adversarial perturbations pose risks to ML-based CAD tools, and if so, how these risks can be mitigated. To this end, we use a motivating case study of lithographic hotspot detection, for which convolutional neural networks (CNNs) have shown great promise. In this context, we show the first adversarial perturbation attacks on state-of-the-art CNN-based hotspot detectors; specifically, we show that small (on average 0.5% modified area), functionality-preserving, and design-constraint-satisfying changes to a layout can nonetheless trick a CNN-based hotspot detector into predicting the modified layout as hotspot-free (with up to 99.7% success). We propose an adversarial retraining strategy to improve the robustness of CNN-based hotspot detection and show that this strategy significantly improves robustness (by a factor of ~3) against adversarial attacks without compromising classification accuracy.
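The flavor of attack described above can be illustrated with a toy stand-in: the sketch below perturbs the input of a simple logistic classifier in the sign of the loss gradient (an FGSM-style step). The model, names, and numbers are illustrative assumptions; the paper attacks real CNN hotspot detectors under layout design-rule constraints.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM-style step on a logistic classifier: move the input in the
    sign of the input-gradient of the log-loss, so the loss increases."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w              # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy usage: perturb an input the model scores confidently as class 1.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x = w.copy()                          # strongly class-1 under this model
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=2.0)
```

Adversarial retraining, as proposed in the paper, then amounts to folding such perturbed-but-correctly-labeled samples back into the training set.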


Multitask Dyadic Prediction and Its Application in Prediction of Adverse Drug-Drug Interaction

AAAI Conferences

Adverse drug-drug interactions (DDIs) remain a leading cause of morbidity and mortality around the world. Identifying potential DDIs during the drug design process is critical in guiding targeted clinical drug safety testing. Although detection of adverse DDIs is conducted during Phase IV clinical trials, a large number of new DDIs are still found by accident after drugs are put on the market. With the arrival of the big-data era, more and more pharmaceutical research and development data are becoming available, providing an invaluable resource for mining insights that can potentially be leveraged in the early prediction of DDIs. Many computational approaches have been proposed in recent years for DDI prediction. However, most of them focus on binary prediction (with or without DDI), despite the fact that each DDI is associated with a different type. Predicting the actual DDI type will help us better understand the DDI mechanism and identify proper ways to prevent it. In this paper, we formulate the DDI type prediction problem as a multitask dyadic regression problem, where the prediction of each specific DDI type is treated as a task. Compared with conventional matrix completion approaches, which can only impute the missing entries of the DDI matrix, our approach can directly regress those dyadic relationships (DDIs) and thus can be extended to new drugs more easily. We develop an effective proximal gradient method to solve the problem. Evaluation on real-world datasets is presented to demonstrate the effectiveness of the proposed approach.
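As a sketch of the kind of solver the abstract mentions, the snippet below runs a generic proximal gradient (ISTA) iteration on a lasso-style least-squares objective; the ℓ1 regularizer and all names here are illustrative assumptions standing in for the paper's actual multitask objective.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(X, y, lam, step, iters=500):
    """ISTA: minimize 0.5*||Xw - y||^2 + lam*||w||_1 by alternating a
    gradient step on the smooth part with the prox of the penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)                         # smooth-part gradient
        w = soft_threshold(w - step * grad, step * lam)  # proximal step
    return w

# Toy usage: with X = I the minimizer is soft_threshold(y, lam) in closed form.
X = np.eye(3)
y = np.array([3.0, 0.5, -2.0])
w_hat = proximal_gradient(X, y, lam=1.0, step=0.5)
```

In a dyadic setting, each row of X would encode features of a drug pair and each task (DDI type) would get its own regression, typically coupled through a shared regularizer.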