Neural Architecture Search Could Tune AI's Algorithmic Heart - InformationWeek

#artificialintelligence

Data science has evolved far beyond science. It now represents the heart and soul of many disruptive business applications. Everywhere you look, enterprise data science practices have become industrialized within 24x7 DevOps workflows. As part of that trend, automation has come to practically every process in the machine-learning DevOps pipeline that surrounds AI. Modeling is the next and perhaps ultimate milestone in the move toward end-to-end data-science pipeline automation.


Using AI to Make Better AI

IEEE Spectrum

Since 2017, AI researchers have been using AI neural networks to help design better and faster AI neural networks. Applying AI in pursuit of better AI has, to date, been a largely academic exercise--mainly because this approach requires tens of thousands of GPU hours. If that's what it takes, it's likely quicker and simpler to design real-world AI applications with the fallible guidance of educated guesswork. Next month, however, a team of MIT researchers will be presenting a so-called "neural architecture search" algorithm that can speed up the AI-optimized AI design process by 240 times or more. That would put faster and more accurate AI within practical reach for a broad class of image recognition algorithms and other related applications.


Design Automation for Efficient Deep Learning Computing

arXiv.org Machine Learning

Efficient deep learning computing requires algorithm and hardware co-design to enable specialization: we usually need to change the algorithm to reduce memory footprint and improve energy efficiency. However, the extra degree of freedom from the algorithm makes the design space much larger: it's not only about designing the hardware but also about how to tweak the algorithm to best fit the hardware. Human engineers can hardly exhaust the design space by heuristics; the process is labor-intensive and the results are sub-optimal. We propose design automation techniques for efficient neural networks. We investigate automatically designing specialized fast models, automatic channel pruning, and automatic mixed-precision quantization. We demonstrate that such learning-based, automated design achieves better performance and efficiency than rule-based human design. Moreover, we shorten the design cycle by 200x compared with previous work, so that we can afford to design specialized neural network models for different hardware platforms.
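To make the channel-pruning idea concrete, here is a minimal sketch in PyTorch. It uses a simple L1-norm magnitude heuristic for illustration rather than the learning-based pruning the abstract proposes, and every name in it (the function, the ratio parameter) is illustrative rather than drawn from the paper's code:

```python
# Sketch: prune the least important output channels of a Conv2d layer,
# ranked by the L1 norm of each channel's filter weights (a common
# heuristic; the paper instead learns the pruning policy automatically).
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Keep the output channels of `conv` with the largest L1 weight norm."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # Importance of each output channel: L1 norm over [in, kh, kw].
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.topk(importance, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep,
                       kernel_size=conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned

conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(prune_conv_channels(conv, keep_ratio=0.5))  # Conv2d(3, 32, ...)
```

In a full pipeline the layer that consumes this output would also need its input channels trimmed to match; the automated methods in the abstract additionally learn *how much* to prune per layer rather than using one fixed ratio.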


An Overview of Model Compression Techniques for Deep Learning in Space

#artificialintelligence

Every day we depend on extraterrestrial devices to send us information about the state of the Earth and surrounding space--currently, there are about 3,000 satellites orbiting the Earth and this number is growing rapidly. Processing and transmitting the wealth of data these devices produce is not a trivial task, given that resources in space such as on-board memory and downlink bandwidth face tight constraints. In the case of satellite images, the data at hand can be extremely large, sometimes as large as 8,000 × 8,000 pixels. For most practical applications, only part of the great amount of detail encoded in these images is of interest--such as the footprints of buildings, for example--but the current standard approach is to transmit the entire images back to Earth for processing. A more efficient solution would be to process the data on board the spacecraft, arriving at a compressed representation that occupies fewer resources--something that could be achieved using a machine learning model. Unfortunately, running machine learning models tends to be a resource-intensive process even here on Earth. State-of-the-art networks typically consist of many millions of parameters, and limited uplink bandwidth makes uploading such large networks to satellites impractical.
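One of the standard compression techniques such an overview covers is weight quantization. The sketch below shows 8-bit affine quantization of a weight matrix; it is a generic illustration of the size/accuracy trade-off, not the specific method the article proposes, and all function names are illustrative:

```python
# Sketch: post-training 8-bit affine quantization of float32 weights.
# Storing uint8 codes plus a scale/zero-point pair cuts weight storage
# roughly 4x, at the cost of a small reconstruction error.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to uint8 codes plus (scale, zero_point)."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / 255.0
    if scale == 0.0:
        scale = 1.0  # all weights equal: avoid division by zero
    zero_point = int(np.round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(1000, 1000).astype(np.float32)  # ~4 MB of weights
q, s, z = quantize_int8(w)                           # ~1 MB as uint8 codes
print(w.nbytes / q.nbytes)                           # 4.0x smaller
print(np.abs(w - dequantize(q, s, z)).max())         # small per-weight error
```

For the satellite scenario in the article, the appeal is that the quantized network is what gets uplinked and stored on board, shrinking exactly the resources (memory, bandwidth) that are scarcest in space.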


Neural Architecture Search

#artificialintelligence

Neural Architecture Search (NAS) automates network architecture engineering. It aims to learn a network topology that can achieve the best performance on a certain task. Although most popular and successful model architectures are designed by human experts, it doesn't mean we have explored the entire network architecture space and settled on the best option. We would have a better chance of finding the optimal solution if we adopted a systematic and automatic way of learning high-performance model architectures. Automatically learning and evolving network topologies is not a new idea (Stanley & Miikkulainen, 2002). In recent years, the pioneering work by Zoph & Le (2017) and Baker et al. (2017) has attracted a lot of attention to the field of Neural Architecture Search (NAS), leading to many interesting ideas for better, faster and more cost-efficient NAS methods. As I started looking into NAS, I found the survey by Elsken et al. (2019) very helpful. They characterize NAS as a system with three major components, a framing that is clean and concise, and also commonly adopted in other NAS papers. The NAS search space defines a set of basic network operations and how operations can be connected to construct valid network architectures.
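A minimal sketch can make the three components concrete: a search space, a search strategy (here plain random search, the simplest baseline, not the RL controller of Zoph & Le), and a performance estimator. The "evaluation" below is a placeholder stand-in, and all names are illustrative:

```python
# Sketch of the survey's three NAS components on a toy MLP search space.
import random
import torch
import torch.nn as nn

# 1) Search space: which operations exist and how they may be combined.
SEARCH_SPACE = {
    "depth": [1, 2, 3],
    "width": [16, 32, 64],
    "activation": [nn.ReLU, nn.Tanh],
}

def sample_architecture():
    # 2) Search strategy: here, uniform random sampling.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build(arch, in_dim=32, out_dim=10):
    layers, dim = [], in_dim
    for _ in range(arch["depth"]):
        layers += [nn.Linear(dim, arch["width"]), arch["activation"]()]
        dim = arch["width"]
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

def estimate_performance(model):
    # 3) Performance estimation: in practice, train (or partially train)
    # and measure validation accuracy. This placeholder score is NOT that.
    x = torch.randn(8, 32)
    return -model(x).var().item()

best = max((sample_architecture() for _ in range(20)),
           key=lambda a: estimate_performance(build(a)))
print("best architecture found:", best)
```

Real NAS methods differ mainly in how they replace steps 2 and 3: smarter strategies (reinforcement learning, evolution, gradient-based relaxation) and cheaper estimators (weight sharing, early stopping, learning-curve extrapolation) are what make the search affordable.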