Deep Learning


[R] NeurIPS-2020 paper: GradAug: A New Regularization Method for Deep Neural Networks

#artificialintelligence

We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is to use randomly transformed training samples to regularize a set of sub-networks, which are generated by sampling the width of the original network during training. In this way, the proposed method introduces self-guided disturbances to the raw gradients of the network and is therefore termed Gradient Augmentation (GradAug). We demonstrate that GradAug helps the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% accuracy on ImageNet classification, a new state of the art.
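A minimal sketch of what one training step in this style could look like, not the authors' exact implementation: the full-width network trains on the original samples, while sub-networks sampled at smaller widths train on randomly transformed samples guided by the full network's predictions. The set_width() switch and random_transform() helper are hypothetical placeholders for a slimmable-network implementation and any standard data augmentation.

    # GradAug-style training step (sketch). Assumes a hypothetical
    # model.set_width(ratio) that activates a slimmed sub-network and a
    # hypothetical random_transform() that applies random data augmentation.
    import torch
    import torch.nn.functional as F

    def gradaug_step(model, images, labels, optimizer, random_transform,
                     width_ratios=(0.9, 0.8, 0.7)):
        optimizer.zero_grad()

        # Full-width network on the original samples with ground-truth labels.
        model.set_width(1.0)
        logits_full = model(images)
        loss = F.cross_entropy(logits_full, labels)

        # Sub-networks on randomly transformed samples, guided by the full
        # network's detached predictions -- the "self-guided" disturbance.
        soft_targets = F.softmax(logits_full.detach(), dim=1)
        for ratio in width_ratios:
            model.set_width(ratio)
            logits_sub = model(random_transform(images))
            loss = loss + F.kl_div(F.log_softmax(logits_sub, dim=1),
                                   soft_targets, reduction="batchmean")

        loss.backward()
        optimizer.step()
        return loss.item()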


Enabling Edge AI Through Future Ready Software Development Kit

#artificialintelligence

Edge AI is here to stay! Artificial intelligence (AI) is powering many real-world applications that we see in our daily lives. AI, once seen as an emerging technology, has now successfully penetrated every industry (B2B and B2C): banking, logistics, healthcare, defence, manufacturing, retail, automotive, and consumer electronics. Smart speakers like the Amazon Echo and Google Nest are one example of Edge AI solutions in the consumer electronics sector. AI technology is powerful, and humankind has set its eye on the path of harnessing its potential to the fullest. Intelligence brought to the device can be very useful and creative.


Using Deep Java Library to do Machine Learning on SpringBoot

#artificialintelligence

Many AWS customers--startups and large enterprises--are on a path to adopt machine learning and deep learning in their existing applications. The reasons for machine learning adoption are dictated by the pace of innovation in the industry, with business use cases ranging from customer service (including object detection from images and video streams, and sentiment analysis) to fraud detection and collaboration. However, until recently, the adoption learning curve was steep and required development of internal technical expertise in new programming languages (e.g., Python) and frameworks, with a cascading effect on the whole software development lifecycle, from coding to building, testing, and deployment. The approach outlined in this blog post enables enterprises to leverage existing talent and resources (frameworks, pipelines, and deployments) to integrate machine learning capabilities. Spring Boot, one of the most popular and widespread open source frameworks for microservices development, has simplified the implementation of distributed systems.


Misinformation or artifact: A new way to think about machine learning: A researcher considers when - and if - we should consider artificial intelligence a failure - IAIDL

#artificialintelligence

Deep neural networks are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless -- misidentifying one animal as another -- to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed. A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks. As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call "adversarial examples," when a deep neural network system misjudges images or other data when confronted with information outside the training inputs used to build the network. These examples are rare and are called "adversarial" because they are often created or discovered by another machine learning network -- a sort of brinksmanship in the machine learning world between more sophisticated methods to create adversarial examples and more sophisticated methods to detect and avoid them. "Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are," Buckner said.
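As a concrete illustration, here is a minimal sketch of one standard way such examples are crafted -- the fast gradient sign method -- not the specific procedure discussed in the paper. It assumes model is any differentiable PyTorch classifier and that images/labels are a correctly classified batch with its class indices.

    # Fast gradient sign method (FGSM) sketch: nudge each pixel slightly in the
    # direction that increases the loss, producing an adversarial example.
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, images, labels, epsilon=0.01):
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        perturbed = images + epsilon * images.grad.sign()
        return perturbed.clamp(0, 1).detach()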


The Dark Secret at the Heart of AI

#artificialintelligence

The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen--or shouldn't happen--unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur--and it's inevitable they will. That's one reason Nvidia's car is still experimental.


The case against investing in machine learning: Seven reasons not to and what to do instead

#artificialintelligence

The word on the street is that if you don't invest in ML as a company or become an ML specialist, the industry will leave you behind. The hype has caught on at all levels, catching everyone from undergrads to VCs. Words like "revolutionary," "innovative," "disruptive," and "lucrative" are frequently used to describe ML. Allow me to share some perspective from my experiences that will hopefully temper this enthusiasm, at least a tiny bit. This essay materialized from having the same conversation several times over with interlocutors who hope ML can unlock a bright future for them. I'm here to convince you that investing in an ML department or ML specialists might not be in your best interest. That is not always true, of course, so read this with a critical eye. The names invoke a sense of extraordinary success, and for a good reason. Yet, these companies dominated their industries before Andrew Ng launched his first ML lectures on Coursera. The difference between "good enough" and "state-of-the-art" machine learning is significant in academic publications but not in the real world. About once or twice a year, something pops into my newsfeed informing me that someone improved the top-1 ImageNet accuracy from 86% to 87% or so. Our community enshrines state-of-the-art results with almost religious significance, so this score's systematic improvement creates an impression that our field is racing towards unlocking the singularity. No one outside of academia cares if you can distinguish between a guitar and a ukulele 1% better. Sit back and think for a minute.


Counterfactual vs Contrastive Explanations in Artificial Intelligence

#artificialintelligence

Counterfactuals are the Rosetta Stone of causal analysis. Introduction: With the proliferation of deep learning [8] and its anticipated use across various applications in society, trust has become a central issue. Given the black-box nature of these deep learning systems, there is a strong desire to understand the reasons behind their decisions. This has led to the sub-field of explainable AI (XAI) gaining prominence [5]. The goal of XAI is to communicate to the consumer, typically a human, why a black-box model made a particular decision.
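To make the idea concrete, here is a minimal sketch, under the assumption of a differentiable model, of a gradient-based counterfactual search: starting from an input x, find a nearby x' that the model assigns to a desired target class. The function name, step count, and distance weight lam are illustrative assumptions, not an API from the article.

    # Counterfactual-explanation sketch: "what minimal change to x would flip
    # the model's decision to target_class?"
    import torch
    import torch.nn.functional as F

    def counterfactual(model, x, target_class, steps=200, lr=0.05, lam=0.1):
        x_cf = x.clone().detach().requires_grad_(True)   # x: one example with a batch dim
        optimizer = torch.optim.Adam([x_cf], lr=lr)
        target = torch.tensor([target_class])
        for _ in range(steps):
            optimizer.zero_grad()
            # Push the prediction toward the target class while staying close to x.
            loss = F.cross_entropy(model(x_cf), target) + lam * (x_cf - x).norm()
            loss.backward()
            optimizer.step()
        return x_cf.detach()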


Automatic Differentiation in PyTorch

#artificialintelligence

Thanks to autograd, PyTorch's automatic differentiation engine, we don't need to worry about partial derivatives, the chain rule, or anything like it. To illustrate how it works, let's say we're trying to fit a simple linear regression with a single feature x, using Mean Squared Error (MSE) as our loss. We need to create two tensors, one for each parameter our model needs to learn: b and w. Without PyTorch, we would have to start with our loss and work the partial derivatives out to compute the gradients manually. Sure, it would be easy enough to do for this toy problem, but we need something that can scale. So, how do we do it?
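A minimal sketch of the workflow the excerpt describes; the synthetic data (true_b, true_w, the noise level) and the learning rate are assumptions for illustration.

    # Linear regression with a single feature x, MSE loss, and autograd
    # computing the gradients of b and w for us.
    import torch

    true_b, true_w = 1.0, 2.0
    x = torch.rand(100, 1)
    y = true_b + true_w * x + 0.1 * torch.randn(100, 1)

    b = torch.randn(1, requires_grad=True)   # the two learnable parameters
    w = torch.randn(1, requires_grad=True)

    lr = 0.1
    for epoch in range(100):
        yhat = b + w * x
        loss = ((yhat - y) ** 2).mean()      # MSE
        loss.backward()                      # autograd fills b.grad and w.grad
        with torch.no_grad():                # plain gradient-descent update
            b -= lr * b.grad
            w -= lr * w.grad
        b.grad.zero_()
        w.grad.zero_()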


Advanced AI: Deep Reinforcement Learning in Python

#artificialintelligence

This course is all about the application of deep learning and neural networks to reinforcement learning. If you've taken my first reinforcement learning class, then you know that reinforcement learning is on the bleeding edge of what we can do with AI. Specifically, the combination of deep learning with reinforcement learning has led to AlphaGo beating a world champion in the strategy game Go, it has led to self-driving cars, and it has led to machines that can play video games at a superhuman level. Reinforcement learning has been around since the 70s but none of this has been possible until now. The world is changing at a very fast pace.
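For readers new to the topic, here is a minimal sketch of the combination the course is about: a neural network standing in for the Q-function and a single temporal-difference update. The layer sizes, learning rate, and helper name are arbitrary assumptions, not material from the course.

    # One deep Q-learning (temporal-difference) update with a small Q-network.
    import torch
    import torch.nn as nn

    q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma = 0.99   # discount factor

    def td_update(state, action, reward, next_state, done):
        # Target: reward plus the discounted value of the best next action.
        with torch.no_grad():
            target = reward + gamma * q_net(next_state).max() * (1.0 - done)
        q_value = q_net(state)[action]
        loss = (q_value - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()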


PyTorch in Python

#artificialintelligence

First, let me start by explaining how PyTorch will become useful to you. PyTorch has many different uses, but it is primarily used as a replacement for NumPy that harnesses the power of GPUs, as well as a deep learning research platform providing flexibility and speed. Artificial intelligence is essentially the building of smart machines that are capable of performing tasks that normally require human intelligence. It encompasses machine learning as well as deep learning. Machine learning provides computer systems with the ability to learn and improve from experience without having to be explicitly programmed, i.e., the development of computer programs that can access data and learn from it on their own.
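A minimal sketch of the two uses just described -- NumPy-style tensor math that can run on a GPU, and tensors that track gradients for research; the shapes and values are arbitrary.

    import torch

    a = torch.rand(3, 3)                   # NumPy-style array, called a tensor
    b = torch.rand(3, 3)
    c = a @ b                              # matrix multiply, just like NumPy

    if torch.cuda.is_available():          # run the same computation on a GPU
        c = a.cuda() @ b.cuda()

    x = torch.ones(3, requires_grad=True)  # research flexibility: gradients
    y = (x ** 2).sum()                     # come for free via autograd
    y.backward()
    print(x.grad)                          # tensor([2., 2., 2.])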