Deep Learning with TensorFlow 2.0 [2020]

#artificialintelligence

- Gain a Strong Understanding of TensorFlow - Google's Cutting-Edge Deep Learning Framework
- Build Deep Learning Algorithms from Scratch in Python Using NumPy and TensorFlow
- Set Yourself Apart with Hands-on Deep and Machine Learning Experience
- Grasp the Mathematics Behind Deep Learning Algorithms
- Understand Backpropagation, Stochastic Gradient Descent, Batching, Momentum, and Learning Rate Schedules
- Know the Ins and Outs of Underfitting, Overfitting, Training, Validation, Testing, Early Stopping, and Initialization
- Competently Carry Out Pre-Processing, Standardization, Normalization, and One-Hot Encoding

Data scientists, machine learning engineers, and AI researchers all have their own skillsets. But what is the one special thing they have in common? They are all masters of deep learning. We often hear about AI, self-driving cars, or the 'algorithmic magic' at Google, Facebook, and Amazon. But it is not magic - it is deep learning. And more specifically, it is usually deep neural networks - the one algorithm to rule them all.
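Of the pre-processing topics the course lists, one-hot encoding is easy to show concretely. A minimal NumPy sketch (the function name and sample labels are illustrative, not the course's code):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Convert integer class labels to one-hot rows."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

# Three samples, four classes: each row has a single 1 at the label's index.
print(one_hot([0, 2, 3], 4))
```

Each row can then feed a softmax output layer directly, which is why one-hot encoding pairs naturally with the cross-entropy losses used in the course's training examples.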


Udemy Coupon Deep Learning in Java - Artificial Intelligence III

#artificialintelligence

This course covers deep learning fundamentals and convolutional neural networks. Convolutional neural networks are among the most successful deep learning approaches: self-driving cars rely heavily on them. First you will learn about densely connected neural networks and their problems. The next chapters cover convolutional neural networks: theory as well as implementation in Java with the deeplearning4j library. The last chapters are about recurrent neural networks and their applications.


Best Resources to learn AI & Deep Learning

#artificialintelligence

Over the last few years, deep learning has proven itself to be a game-changer. This area of data science is largely responsible for recent advances in machine learning and artificial intelligence. From academic research to self-driving cars, deep learning now appears in nearly every domain. Deep learning is a complex and vast field that consists of several components. It cannot be mastered in a day; digging deeper into the field takes several months.


Editable Neural Networks

arXiv.org Machine Learning

These days deep neural networks are ubiquitously used in a wide range of tasks, from image classification and machine translation to face identification and self-driving cars. In many applications, a single model error can lead to devastating financial, reputational and even life-threatening consequences. Therefore, it is crucially important to correct model mistakes quickly as they appear. In this work, we investigate the problem of neural network editing: how one can efficiently patch a mistake of the model on a particular sample, without influencing the model behavior on other samples. Namely, we propose Editable Training, a model-agnostic training technique that encourages fast editing of the trained model. We empirically demonstrate the effectiveness of this method on large-scale image classification and machine translation tasks.
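The editing objective the abstract describes can be sketched in miniature: take gradient steps on the one mistaken sample while penalizing any drift in the model's outputs on other samples. The tiny logistic-regression "model", the drift penalty, and all data below are illustrative assumptions, not the paper's method or architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def edit(w, x_err, y_err, X_keep, lr=0.5, drift_weight=1.0, steps=100):
    """Patch w so x_err is classified as y_err, penalizing drift in the
    model's predictions on the kept samples X_keep."""
    keep_before = sigmoid(X_keep @ w)                    # behavior to preserve
    for _ in range(steps):
        if (sigmoid(x_err @ w) > 0.5) == bool(y_err):
            break                                        # mistake fixed; stop editing
        grad = (sigmoid(x_err @ w) - y_err) * x_err      # loss gradient on the edit sample
        drift = sigmoid(X_keep @ w) - keep_before
        grad = grad + drift_weight * (X_keep.T @ drift)  # approximate drift-penalty gradient
        w = w - lr * grad
    return w

w = np.array([1.0, -1.0, 1.0])
X_keep = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # samples to leave alone
x_err, y_err = np.array([0.0, 0.0, -2.0]), 1.0          # sample the model gets wrong
w_edited = edit(w, x_err, y_err, X_keep)
print(sigmoid(x_err @ w_edited) > 0.5)                                # True: mistake patched
print(np.allclose(sigmoid(X_keep @ w_edited), sigmoid(X_keep @ w)))   # True: others unaffected
```

Editable Training goes further than this naive patch loop: it trains the model in advance so that such edits converge quickly and interfere little, rather than hoping they do.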


Ranger: Boosting Error Resilience of Deep Neural Networks through Range Restriction

arXiv.org Machine Learning

With the emerging adoption of deep neural networks (DNNs) in the HPC domain, the reliability of DNNs is also growing in importance. As prior studies demonstrate the vulnerability of DNNs to hardware transient faults (i.e., soft errors), there is a compelling need for an efficient technique to protect DNNs from soft errors. While the inherent resilience of DNNs can tolerate some transient faults (which would not affect the system's output), prior work has found there are critical faults that cause safety violations (e.g., misclassification). In this work, we exploit the inherent resilience of DNNs to protect them from critical faults. In particular, we propose Ranger, an automated technique to selectively restrict the ranges of values in particular DNN layers, which can dampen the large deviations typically caused by critical faults to smaller ones. Such reduced deviations can usually be tolerated by the inherent resilience of DNNs. Ranger can be integrated into existing DNNs without retraining, and with minimal effort. Our evaluation on 8 DNNs (including two used in self-driving car applications) demonstrates that Ranger can achieve significant resilience boosting without degrading the accuracy of the model, while incurring negligible overheads.
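The range-restriction idea reduces to a simple mechanism: profile the value range a layer produces on fault-free runs, then clamp that layer's outputs at inference time so a soft-error-induced outlier cannot propagate. A minimal sketch (the toy activations and function names are illustrative, not Ranger's tooling):

```python
import numpy as np

def profile_range(activations):
    """Record the min/max values observed during fault-free profiling runs."""
    return activations.min(), activations.max()

def restrict(activations, value_range):
    """Clamp a layer's outputs to the profiled range, dampening the large
    deviations a transient fault can inject."""
    lo, hi = value_range
    return np.clip(activations, lo, hi)

clean = np.array([0.0, 0.5, 1.2, 0.8])   # fault-free layer activations
value_range = profile_range(clean)        # (0.0, 1.2)

faulty = clean.copy()
faulty[2] = 1e8                           # a bit flip inflates one value
print(restrict(faulty, value_range))      # the spike is reduced to 1.2
```

Because clamping is a stateless transformation of layer outputs, it can be inserted into an existing network without retraining, which is the property the abstract emphasizes.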


Towards Safer Self-Driving Through Great PAIN (Physically Adversarial Intelligent Networks)

arXiv.org Machine Learning

Automated vehicles' neural networks suffer from overfitting, poor generalizability, and untrained edge cases due to limited data availability. Researchers synthesize randomized edge-case scenarios to assist in the training process, though simulation introduces the potential to overfit to latent rules and features. Automating worst-case scenario generation could yield informative data for improving self-driving. To this end, we introduce a "Physically Adversarial Intelligent Network" (PAIN), wherein self-driving vehicles interact aggressively in the CARLA simulation environment. We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay. The coupled networks alternately seek to collide and to avoid collisions, such that the "defensive" avoidance algorithm increases the mean time to failure and distance traveled under non-hostile operating conditions. The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner-case failures resulting in collisions than the agent trained without an adversary.
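The dueling head in a DDDQN splits the network into a state-value stream V(s) and an advantage stream A(s, a), then recombines them as Q = V + (A - mean(A)) so the two streams are identifiable. A minimal sketch of that recombination (the feature vector and weights are illustrative stand-ins for the networks trained in CARLA):

```python
import numpy as np

def dueling_q(features, w_value, w_adv):
    """Combine the value and advantage streams into action values:
    Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))."""
    v = features @ w_value        # scalar state value V(s)
    a = features @ w_adv          # one advantage per action A(s, a)
    return v + (a - a.mean())     # mean-centered advantages make V identifiable

features = np.array([0.5, -1.0, 2.0])                     # encoded driving state
w_value = np.array([0.1, 0.2, 0.3])                       # value-stream weights
w_adv = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, -0.5]])   # two candidate actions
q = dueling_q(features, w_value, w_adv)
print(q, q.argmax())              # the agent picks the highest-Q action
```

Subtracting the mean advantage is the standard identifiability trick: without it, any constant could be shifted between V and A without changing Q, making the decomposition ill-posed.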


PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving

arXiv.org Artificial Intelligence

To navigate safely in an urban environment, an autonomous vehicle (ego vehicle) needs to understand and anticipate its surroundings, in particular the behavior of other road users (neighbors). However, multiple choices are often acceptable (e.g. turn right or left, or different ways of avoiding an obstacle). We focus here on predicting multiple feasible future trajectories, for both the ego vehicle and its neighbors, through a probabilistic framework. We use a conditional imitation learning algorithm, conditioned by a navigation command for the ego vehicle (e.g. "turn right"). It takes as input the ego car's front camera image, a Lidar point cloud in a bird's-eye-view grid, and present and past object detections, and outputs possible trajectories for the ego vehicle and its neighbors, along with semantic segmentation as an auxiliary loss. We evaluate our method on the publicly available dataset nuScenes, showing state-of-the-art performance and investigating the impact of our architecture choices.
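The "polynomial trajectory" representation in the title can be sketched simply: each predicted trajectory is a pair of polynomials x(t), y(t), so an entire future path is just a handful of coefficients. The degree and coefficient values below are illustrative assumptions, not the paper's learned outputs:

```python
import numpy as np

def eval_trajectory(coeffs_x, coeffs_y, times):
    """Evaluate the polynomials x(t), y(t) at the given timestamps,
    returning a (T, 2) array of waypoints."""
    xs = np.polyval(coeffs_x, times)
    ys = np.polyval(coeffs_y, times)
    return np.stack([xs, ys], axis=1)

# A gentle drift to one side over 4 seconds; highest-order coefficient first.
coeffs_x = [0.0, 0.2, 5.0, 0.0]     # x(t) = 0.2 t^2 + 5 t  (forward motion)
coeffs_y = [0.0, -0.3, 0.0, 0.0]    # y(t) = -0.3 t^2       (lateral drift)
times = np.linspace(0.0, 4.0, 5)
print(eval_trajectory(coeffs_x, coeffs_y, times))
```

Predicting a small coefficient vector instead of a dense sequence of waypoints keeps each trajectory smooth by construction, which makes it natural to output several such candidates with associated probabilities.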


Deep Neural Network Perception Models and Robust Autonomous Driving Systems

arXiv.org Machine Learning

This paper analyzes the robustness of deep learning perception models in autonomous driving applications and discusses practical solutions to address the weaknesses it identifies.


The Most Influential Deep Learning Research of 2019

#artificialintelligence

Deep learning has continued its forward movement during 2019 with advances in many exciting research areas like generative adversarial networks (GANs), auto-encoders, and reinforcement learning. In terms of deployments, deep learning is the darling of many contemporary application areas such as computer vision, image recognition, speech recognition, natural language processing, machine translation, autonomous vehicles, and many more. Earlier this year, we saw Google AI Language revolutionize the NLP segment of deep learning with the new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. The already seminal paper was released on arXiv on May 24. This has led to a storm of follow-on research results.


Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks

arXiv.org Machine Learning

Deep learning methods are widely regarded as indispensable when it comes to designing perception pipelines for autonomous agents such as robots, drones or automated vehicles. The main reason, however, that deep learning is not yet used for autonomous agents at large scale is safety concerns. Deep learning approaches typically exhibit black-box behavior, which makes it hard to evaluate them with respect to safety-critical aspects. While there has been some work on safety in deep learning, most papers focus on high-level safety concerns. In this work, we seek to dive into the safety concerns of deep learning methods and present a concise enumeration on a deeply technical level. Additionally, we present extensive discussions on possible mitigation methods and give an outlook on which mitigation methods are still missing in order to support an argument for the safety of a deep learning method.