Deep Learning


Deep learning enables real-time imaging around corners: Detailed, fast imaging of hidden objects could help self-driving cars detect hazards

#artificialintelligence

"Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds," said research team leader Christopher A. Metzler from Stanford University and Rice University. "These attributes enable applications that wouldn't otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner." In Optica, The Optical Society's journal for high-impact research, Metzler and colleagues from Princeton University, Southern Methodist University, and Rice University report that the new system can distinguish submillimeter details of a hidden object from 1 meter away. The system is designed to image small objects at very high resolutions but can be combined with other imaging systems that produce low-resolution room-sized reconstructions. "Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense," said co-author Felix Heide from Princeton University.


Deep learning vs. machine learning: Understand the differences

#artificialintelligence

Machine learning and deep learning are both forms of artificial intelligence. You can also say, correctly, that deep learning is a specific kind of machine learning. Both machine learning and deep learning start with training and test data and a model and go through an optimization process to find the weights that make the model best fit the data. Both can handle numeric (regression) and non-numeric (classification) problems, although there are several application areas, such as object recognition and language translation, where deep learning models tend to produce better fits than machine learning models. Machine learning algorithms are often divided into supervised (the training data are tagged with the answers) and unsupervised (any labels that may exist are not shown to the training algorithm).
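The supervised/unsupervised split described above can be sketched in a few lines of plain Python. The toy data and both toy "models" below are hypothetical illustrations, not anything from the article: a nearest-neighbor classifier stands in for supervised learning (labels shown to the algorithm), and a tiny 1-D k-means stands in for unsupervised learning (structure discovered without labels).

```python
# Supervised: the training data are tagged with the answers (labels).
labeled = [(1.0, "small"), (1.2, "small"), (8.9, "large"), (9.4, "large")]

def nearest_label(x, data):
    """Classify x with the label of the closest labeled training point."""
    return min(data, key=lambda pair: abs(pair[0] - x))[1]

print(nearest_label(1.1, labeled))  # lands in the "small" cluster

# Unsupervised: the same numbers with no labels; the algorithm must
# discover the two clusters on its own.
unlabeled = [1.0, 1.2, 8.9, 9.4]

def two_means(xs, iters=10):
    """Tiny 1-D k-means with k=2: alternate assignment and mean update."""
    c0, c1 = min(xs), max(xs)  # initialize centers at the extremes
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return c0, c1

print(two_means(unlabeled))  # two cluster centers, no labels needed
```

Note that the supervised routine needs the answer attached to every training point, while the unsupervised one never sees a label at all, which is exactly the division drawn in the paragraph above.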


Neural Architecture and AutoML Technology (Analytics Insight)

#artificialintelligence

Deep learning promises to bypass manual feature engineering by learning representations jointly with statistical models in an end-to-end fashion. Neural network architectures themselves, however, are typically designed by specialists in a painstaking, ad hoc fashion. Neural architecture search (NAS) has been touted as the way forward for easing this burden by automatically identifying architectures that outperform hand-designed ones. Machine learning has delivered major breakthroughs across diverse fields in recent years. Areas such as financial services, healthcare, retail, and transportation have been adopting machine learning systems in some form, and the results have been promising.


Efficient Computing for Deep Learning, Robotics, and AI (Vivienne Sze) MIT Deep Learning Series

#artificialintelligence

OUTLINE:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks


PyTorch 1.4 adds experimental Java bindings and more

#artificialintelligence

PyTorch 1.4 has been released, and the PyTorch domain libraries have been updated along with it. The popular open source machine learning framework has some experimental features on board, so let's take a closer look. PyTorch Mobile was first introduced in PyTorch 1.3 as an experimental release. It should provide an "end-to-end workflow from Python to deployment on iOS and Android," as the website states. In the latest release, PyTorch Mobile is still experimental but has received additional features.


Google DeepMind's 'Sideways' takes a page from computer architecture (ZDNet)

#artificialintelligence

Increasingly, machine learning forms of artificial intelligence are contending with the limits of computing hardware, and it's causing scientists to rethink how they design neural networks. That was clear in last week's research offering from Google, called Reformer, which aimed to stuff a natural language program into a single graphics processing chip instead of eight. And this week brought another offering from Google focused on efficiency, something called Sideways. With this invention, scientists have borrowed a page from computer architecture, creating a pipeline that gets more work done at every moment. Most machine learning neural nets during their training phase use a forward pass, a transmission of a signal through layers of the network, followed by backpropagation, a backward pass through the same layers, only in reverse, to gradually modify the weights of a neural network till they're just right.
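The forward-pass/backward-pass loop described above can be illustrated with a toy one-weight "network" in plain Python. This is a generic illustration of standard training, not Google's Sideways pipeline; the data and learning rate are made up for the example.

```python
# Data generated by the true relation y = 3x; training should find w ≈ 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = 0.0    # initial weight
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        y_hat = w * x               # forward pass: signal flows through the layer
        grad = 2 * (y_hat - y) * x  # backward pass: d(loss)/dw for loss = (y_hat - y)^2
        w -= lr * grad              # gradually modify the weight

print(round(w, 3))  # converges to 3.0
```

Because the backward pass must wait for the forward pass to finish (and vice versa), hardware sits idle between them, which is the inefficiency a pipelined scheme like Sideways aims to reduce.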


Deep Learning with Taxonomic Loss for Plant Identification

#artificialintelligence

Plant identification is a fine-grained classification task that aims to identify the family, genus, and species of a plant from its appearance. Inspired by the hierarchical structure of the taxonomic tree, the authors proposed a taxonomic loss that encodes the hierarchical relationships among multilevel labels into the deep learning objective function through simple group and sum operations. Experiments training various neural networks on the PlantCLEF 2015 and PlantCLEF 2017 datasets demonstrated that the proposed loss function is easy to implement and outperforms the commonly adopted cross-entropy loss. Eight neural networks were trained with each of the two loss functions on the PlantCLEF 2015 dataset, and the models trained with the taxonomic loss showed significant performance improvements. On the PlantCLEF 2017 dataset with 10,000 species, the SENet-154 model trained with the taxonomic loss achieved accuracies of 84.07%, 79.97%, and 73.61% at the family, genus, and species levels, improving on the cross-entropy-trained model by 2.23%, 1.34%, and 1.08%, respectively.
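The group-and-sum idea can be sketched as follows. This is an illustrative reconstruction of the mechanism the summary describes, not the authors' code: a hypothetical four-species taxonomy is assumed, species probabilities are summed within each genus and family, and a negative-log-likelihood term is added at every taxonomic level.

```python
import math

# Hypothetical taxonomy: species index -> (genus index, family index).
taxonomy = {0: (0, 0), 1: (0, 0), 2: (1, 0), 3: (2, 1)}
n_genera, n_families = 3, 2

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def taxonomic_loss(species_logits, true_species):
    """Negative log-likelihood summed over species, genus, and family levels."""
    p_species = softmax(species_logits)

    # Group and sum: marginal probability of each genus and family.
    p_genus = [0.0] * n_genera
    p_family = [0.0] * n_families
    for s, p in enumerate(p_species):
        g, f = taxonomy[s]
        p_genus[g] += p
        p_family[f] += p

    g, f = taxonomy[true_species]
    return -(math.log(p_species[true_species])
             + math.log(p_genus[g])
             + math.log(p_family[f]))

# A confident, correct prediction incurs a much lower loss than a wrong one.
confident = taxonomic_loss([5.0, 0.0, 0.0, 0.0], true_species=0)
confused = taxonomic_loss([0.0, 0.0, 0.0, 5.0], true_species=0)
print(confident < confused)  # True
```

Because genus and family probabilities are sums of species probabilities, a mistake that lands in the right genus is penalized less than one in the wrong family, which is how the hierarchy enters the objective.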


Moon Jellyfish and Neural Networks

#artificialintelligence

As efforts to make machine learning easier and more accessible increase, different companies are creating tools to make the creation and optimization of deep learning models simpler. As VentureBeat reports, Amazon has launched a new tool designed to help create and modify machine learning models in just a few lines of code. Carrying out machine learning on a dataset is often a long, complex task: the data must be transformed and preprocessed, and then the proper model must be created and customized. Tweaking a model's hyperparameters and retraining can take a long time, and to help solve issues like this Amazon has launched AutoGluon.


Tensorflow 2.0: Deep Learning and Artificial Intelligence

#artificialintelligence

Created by Lazy Programmer Inc., Lazy Programmer Team.