The Algorithm Design Manual

#artificialintelligence

The reader-friendly Algorithm Design Manual provides straightforward access to combinatorial algorithms technology, stressing design over analysis. The first part, Techniques, provides accessible instruction on methods for designing and analyzing computer algorithms. The second part, Resources, is intended for browsing and reference, and comprises a catalog of algorithmic resources, implementations, and an extensive bibliography.


Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks

arXiv.org Machine Learning

Neural Architecture Search (NAS) has demonstrated its power on various AI-accelerating platforms such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). However, how to integrate NAS with Application-Specific Integrated Circuits (ASICs) remains an open problem, even though ASICs are the most powerful AI-accelerating platforms. The major bottleneck is the large design freedom associated with ASIC designs. Moreover, considering that multiple DNNs will run in parallel for different workloads with diverse layer operations and sizes, integrating heterogeneous ASIC sub-accelerators for distinct DNNs in one design can significantly boost performance, but it also further complicates the design space. To address these challenges, in this paper we build an ASIC template set based on existing successful designs, each described by its unique dataflow, so that the design space is significantly reduced. Based on these templates, we further propose a framework, NASAIC, which can simultaneously identify multiple DNN architectures and the associated heterogeneous ASIC accelerator design such that the design specifications (specs) are satisfied while accuracy is maximized. Experimental results show that, compared with successive NAS and ASIC design optimizations, which lead to design-spec violations, NASAIC guarantees results that meet the design specs, with 17.77%, 2.49x, and 2.32x reductions in latency, energy, and area, respectively, and only 0.76% accuracy loss. To the best of the authors' knowledge, this is the first work on neural architecture and ASIC accelerator design co-exploration.
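As a rough illustration of the co-exploration idea, consider the Python sketch below. Everything in it is an assumption invented for illustration, not taken from the paper: the template names, cost coefficients, design specs, and accuracy proxy. It jointly samples DNN architectures and dataflow-template assignments for multiple tasks, rejects candidates whose aggregate latency, energy, or area violate the assumed specs, and keeps the spec-satisfying candidate with the best accuracy proxy:

import random

# Hypothetical ASIC dataflow templates; names and per-MAC cost
# coefficients are illustrative assumptions, not from the paper.
TEMPLATES = {
    "weight_stationary": {"latency": 1.0, "energy": 0.8, "area": 1.2},
    "output_stationary": {"latency": 0.9, "energy": 1.1, "area": 1.0},
    "row_stationary":    {"latency": 1.1, "energy": 0.7, "area": 1.3},
}

SPECS = {"latency": 5e6, "energy": 4e6, "area": 3.0}  # assumed design specs

def sample_architecture():
    # A toy DNN architecture: a list of per-layer channel widths.
    return [random.choice([32, 64, 128]) for _ in range(random.randint(4, 8))]

def estimate_costs(arch, template):
    # Crude analytical cost model: latency and energy scale with total
    # MACs (3x3 convolutions assumed); area is fixed per sub-accelerator.
    macs = sum(c * c * 9 for c in arch)
    coeff = TEMPLATES[template]
    return {"latency": macs * coeff["latency"],
            "energy": macs * coeff["energy"],
            "area": coeff["area"]}

def co_explore(num_tasks=2, trials=1000):
    # Jointly search (architecture, template) pairs for each task.
    best, best_acc = None, -1.0
    for _ in range(trials):
        candidate = [(sample_architecture(), random.choice(list(TEMPLATES)))
                     for _ in range(num_tasks)]
        total = {"latency": 0.0, "energy": 0.0, "area": 0.0}
        for arch, template in candidate:
            costs = estimate_costs(arch, template)
            total = {k: total[k] + costs[k] for k in total}
        if any(total[k] > SPECS[k] for k in SPECS):
            continue  # reject designs that violate the assumed specs
        acc = sum(sum(arch) for arch, _ in candidate)  # stand-in accuracy proxy
        if acc > best_acc:
            best, best_acc = candidate, acc
    return best

The paper's actual search strategy and cost models are more sophisticated than this random search; the sketch only conveys the shape of a constrained joint search over networks and heterogeneous sub-accelerators.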


Using Neural Networks to Design Neural Networks: The Definitive Guide to Understand Neural Architecture Search

#artificialintelligence

Designing deep learning systems is hard and highly subjective. Even a midsize neural network can contain millions of nodes and hundreds of hidden layers. For a given deep learning problem, there is a large number of possible neural network architectures that could serve as a solution. Typically, we rely on the expertise or subjective preferences of data scientists to settle on a specific approach, but that is highly impractical. Recently, neural architecture search (NAS) has emerged as an alternative that turns the design of deep learning systems into a machine learning problem in its own right.
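To make "design as a machine learning problem" concrete, here is a minimal random-search sketch in Python; the search space, operations, and evaluation stub are illustrative assumptions, not from the article. Production NAS systems replace the sampler (with, for example, a learned controller or an evolutionary algorithm) and the stubbed evaluation (with actual training and validation), but they share this outer loop:

import random

# Hypothetical cell-level search space (illustrative assumption).
SEARCH_SPACE = {
    "op": ["conv", "depthwise_conv", "max_pool"],
    "kernel": [3, 5, 7],
    "channels": [16, 32, 64],
}

def sample_cell():
    # Pick one option per design decision.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(architecture):
    # Stand-in for training the candidate and measuring validation
    # accuracy; in a real NAS system this is the expensive step.
    rng = random.Random(str(architecture))  # deterministic stub score
    return rng.random()

def random_search(num_cells=4, trials=50):
    best, best_score = None, -1.0
    for _ in range(trials):
        arch = [sample_cell() for _ in range(num_cells)]
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score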


Artificial intelligence now capable of adopting human design strategies

#artificialintelligence

Washington: Researchers have developed trained AI agents capable of adopting human design strategies. Big design problems require creative and exploratory decision making, a skill in which humans excel. When engineers use artificial intelligence (AI), they have traditionally applied it to a problem within a defined set of rules rather than having it generally follow human strategies to create something new. This research considers an AI framework that learns human design strategies through observation of human data to generate new designs without explicit goal information, bias, or guidance. The findings were published in the ASME Journal of Mechanical Design.


Automated Architecture Design for Deep Neural Networks

arXiv.org Machine Learning

Machine learning has made tremendous progress in recent years and received large amounts of public attention. Though we are still far from designing a fully artificially intelligent agent, machine learning has brought us many applications in which computers solve human learning tasks remarkably well. Much of this progress comes from a recent trend within machine learning called deep learning. Deep learning models are responsible for many state-of-the-art applications of machine learning. Despite their success, deep learning models are hard to train, very difficult to understand, and often so complex that training is only possible on very large GPU clusters. Much work has been done on enabling neural networks to learn efficiently. However, the design and architecture of such neural networks is often done manually, through trial and error and expert knowledge. This thesis examines different approaches, existing and novel, to automating the design of deep feedforward neural networks, in an attempt to create less complex models with good performance that take away the burden of deciding on an architecture and make it more efficient to design and train such networks.
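As one concrete instance of such automation, the sketch below hill-climbs over feedforward architectures encoded as lists of hidden-layer widths; the encoding, mutation moves, and fitness stub are illustrative assumptions, not taken from the thesis:

import random

WIDTHS = [32, 64, 128, 256]  # assumed menu of hidden-layer sizes

def random_mlp():
    # A feedforward architecture as a list of hidden-layer widths.
    return [random.choice(WIDTHS) for _ in range(random.randint(1, 5))]

def mutate(arch):
    # Perturb one design decision: resize, add, or drop a layer.
    arch = list(arch)
    move = random.choice(["resize", "add", "drop"])
    if move == "resize":
        arch[random.randrange(len(arch))] = random.choice(WIDTHS)
    elif move == "add":
        arch.insert(random.randrange(len(arch) + 1), random.choice(WIDTHS))
    elif move == "drop" and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))
    return arch

def fitness(arch):
    # Stub for validation accuracy minus a complexity penalty; a real
    # system would train and evaluate the network here.
    rng = random.Random(str(arch))  # deterministic stand-in score
    return rng.random() - 0.001 * sum(arch) / max(WIDTHS)

def hill_climb(steps=200):
    arch = random_mlp()
    best = fitness(arch)
    for _ in range(steps):
        candidate = mutate(arch)
        score = fitness(candidate)
        if score > best:  # accept only improving designs
            arch, best = candidate, score
    return arch, best

Replacing the fitness stub with actual training and validation turns this loop into a slow but functional automated architecture designer; more sophisticated methods aim to make such a search far more sample-efficient.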