
#artificialintelligence

"exploring the humanizing of AI by building a digital brain which can be used as a platform for autonomously animating hyper-realistic digital humans" "I think what will be increasingly important in the digital human space is ethics, as they relate both to the digital human and to the real-life people who may be impacted. From a digital human perspective, companies are essentially birthing entities which, in many cases, are expected to form meaningful connections and relationships with people. So how organizations treat these digital humans--including any decision to dispose of them if they are no longer deemed needed--will increasingly become important. On the flipside, entertainment organizations that are using digital humans run the risk of causing concern about replacing real humans […] and it will be important to clarify how and why digital humans are being used in lieu of the 'real' thing." Excerpts from this article: The Virtual Beings Are Arriving


One Network to Fit All Hardware: New MIT AutoML Method Trains 14X Faster Than SOTA NAS

#artificialintelligence

AI is now integrated into countless scenarios, from tiny drones to huge cloud platforms. Every hardware platform is ideally paired with a tailored AI model that meets its requirements for performance, efficiency, size, latency, and so on. However, even a single model architecture needs tweaking when applied to different hardware, which requires researchers to spend time and money training each variant independently. Popular solutions today include designing models specialized for mobile devices, or pruning a large network by removing redundant units, aka model compression. A group of MIT researchers (Han Cai, Chuang Gan and Song Han) have introduced a "Once for All" (OFA) network that achieves the same or better accuracy than state-of-the-art AutoML methods on ImageNet, with a significant speedup in training time. A major innovation of the OFA network is that researchers no longer need to design and train a model for each scenario; instead, they can directly search for an optimal subnetwork within the OFA network.
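The idea of searching for a subnetwork rather than retraining can be sketched as follows. This is a toy illustration, not the paper's method: the search space, the latency and accuracy predictors, and the random-sampling search are all invented for the example (OFA uses trained predictors and more sophisticated search).

```python
import random

# Hypothetical search space: each of 4 stages picks a kernel size,
# depth, and width-expansion ratio (values are illustrative only).
SPACE = {
    "kernel": [3, 5, 7],
    "depth": [2, 3, 4],
    "expand": [3, 4, 6],
}
NUM_STAGES = 4

def sample_subnet():
    """Randomly sample one subnetwork configuration per stage."""
    return [
        {k: random.choice(v) for k, v in SPACE.items()}
        for _ in range(NUM_STAGES)
    ]

def predicted_latency(subnet):
    """Stand-in latency predictor: larger kernels/depth/width cost more."""
    return sum(s["kernel"] * s["depth"] * s["expand"] for s in subnet)

def predicted_accuracy(subnet):
    """Stand-in accuracy score (a real predictor is fit to measurements)."""
    return sum(s["depth"] + s["expand"] for s in subnet)

def search(latency_budget, trials=1000):
    """Return the best sampled subnet that fits the latency budget."""
    best, best_acc = None, float("-inf")
    for _ in range(trials):
        net = sample_subnet()
        if predicted_latency(net) <= latency_budget:
            acc = predicted_accuracy(net)
            if acc > best_acc:
                best, best_acc = net, acc
    return best

# One deployment scenario = one latency budget; no retraining needed,
# the chosen subnet's weights come from the shared OFA network.
subnet = search(latency_budget=200)
```

The key point the sketch captures: once the over-parameterized network is trained, specializing it for a new hardware platform is a cheap search over configurations, not another training run.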


Reducing the carbon footprint of artificial intelligence

#artificialintelligence

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues. Last June, researchers at the University of Massachusetts at Amherst released a startling report estimating that the power required to train and search a certain neural network architecture produces roughly 626,000 pounds of carbon dioxide emissions. The issue gets even more severe in the deployment phase, where deep neural networks must run on diverse hardware platforms, each with different properties and computational resources. MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the system's computational efficiency in some key ways, it can cut the carbon emissions involved -- in some cases, down to the low triple digits of pounds.
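Estimates like the 626,000-pound figure come from simple energy accounting: GPUs × power draw × hours × data-center overhead × grid carbon intensity. The sketch below shows the arithmetic only; the power draw, overhead, and grid-intensity constants are illustrative assumptions, not the numbers used in the UMass or MIT studies.

```python
# Back-of-the-envelope CO2 estimate for a training job.
GPU_POWER_KW = 0.3           # assumed average draw per GPU (300 W)
GRID_LBS_CO2_PER_KWH = 0.95  # assumed grid carbon intensity

def training_emissions_lbs(num_gpus, hours, pue=1.5):
    """Estimate pounds of CO2 for a training run.

    pue: power usage effectiveness, the data-center overhead multiplier.
    """
    energy_kwh = num_gpus * GPU_POWER_KW * hours * pue
    return energy_kwh * GRID_LBS_CO2_PER_KWH

# A long architecture search dwarfs a single training pass:
big_search = training_emissions_lbs(num_gpus=8, hours=40_000)
one_model = training_emissions_lbs(num_gpus=8, hours=100)
```

Because emissions scale linearly with GPU-hours, cutting the search and retraining budget (as the MIT system does) reduces the carbon footprint by the same factor.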



Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours

arXiv.org Machine Learning

Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the runtime constraint of a mobile device? Neural architecture search (NAS) has revolutionized the design of hardware-efficient ConvNets by automating this process. However, the NAS problem remains challenging due to the combinatorially large design space, which causes significant search times (at least 200 GPU-hours). To alleviate this complexity, we propose Single-Path NAS, a novel differentiable NAS method for designing hardware-efficient ConvNets in less than 4 hours. Our contributions are as follows: 1. Single-path search space: compared to previous differentiable NAS methods, Single-Path NAS uses a single over-parameterized ConvNet to encode all architectural decisions with shared convolutional kernel parameters, drastically decreasing the number of trainable parameters and cutting the search cost down to a few epochs. 2. Hardware-efficient ImageNet classification: Single-Path NAS achieves 74.96% top-1 accuracy on ImageNet with 79ms latency on a Pixel 1 phone, state-of-the-art accuracy among NAS methods under similar constraints (<80ms). 3. NAS efficiency: the Single-Path NAS search cost is only 8 epochs (30 TPU-hours), up to 5,000x faster than prior work. 4. Reproducibility: unlike recent mobile-efficient NAS methods that release only pretrained models, we open-source our entire codebase at: https://github.com/dstamoulis/single-path-nas.
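The "single over-parameterized ConvNet with shared kernel parameters" idea can be illustrated with the kernel-size decision: a 3x3 kernel is treated as the inner core of a shared 5x5 weight tensor, and an architecture decision keeps or zeros the outer shell. This is a simplified NumPy sketch with a hard binary decision; the paper relaxes that decision into a differentiable threshold so it can be learned by gradient descent.

```python
import numpy as np

def superkernel(weights_5x5, use_outer_shell):
    """Single-path encoding of a kernel-size choice.

    One shared 5x5 weight tensor encodes both a 3x3 and a 5x5
    convolution kernel: the 3x3 kernel is the inner core of the 5x5
    weights, and a binary decision keeps or zeros the outer shell.
    """
    mask = np.zeros((5, 5))
    mask[1:4, 1:4] = 1.0      # inner 3x3 core is always used
    if use_outer_shell:
        mask[:, :] = 1.0      # keep the full 5x5 kernel
    return weights_5x5 * mask

w = np.arange(25, dtype=float).reshape(5, 5)
k3 = superkernel(w, use_outer_shell=False)  # effectively a 3x3 kernel
k5 = superkernel(w, use_outer_shell=True)   # full 5x5 kernel
```

Because both candidate kernels share one weight tensor instead of each candidate having its own copy, the number of trainable parameters stays close to that of a single network, which is what lets the search finish in a few epochs.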