Deep Learning


Deep Learning: Top 4 Python Libraries You Must Learn in 2021

#artificialintelligence

Want to become a top-notch deep learning developer that big corporations will always scout? Learn the secrets that helped hundreds of deep learning developers improve their skills without sacrificing too much time and money. The demand for deep learning developers is rising, and more opportunities will open in just a few years. Soon, more people will start to pay attention to this trend, and many will try to learn and improve as much as they can to become better deep learning developers than others.


Can Neural Networks Show Imagination? DeepMind Thinks they Can

#artificialintelligence

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Creating agents that resemble the cognitive abilities of the human brain has been one of the most elusive goals of the artificial intelligence (AI) space. Recently, I've been spending time on a couple of scenarios that relate to imagination in deep learning systems, which reminded me of a very influential paper on this subject that Alphabet's subsidiary DeepMind published last year.


How we remember could help AI be less forgetful

#artificialintelligence

A brain mechanism referred to as "replay" inspired researchers at Baylor College of Medicine to develop a new method to protect deep neural networks, found in artificial intelligence (AI), from forgetting what they have previously learned. The study, in the current edition of Nature Communications, has implications for both neuroscience and deep learning. Deep neural networks are the main drivers behind the recent fast progress in AI. These networks are extremely good at learning to solve individual tasks. However, when they are trained on a new task, they typically lose the ability to solve previously learned tasks altogether, a problem known as catastrophic forgetting.
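The study proposes a brain-inspired variant of replay; as a rough sketch of the basic replay idea only (not the authors' method), a continual learner can interleave stored examples from earlier tasks with each new-task batch. The ReplayBuffer class and train_step helper below are hypothetical names introduced for illustration, written in PyTorch.

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-sampled store of (input, label) pairs from earlier tasks."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps a uniform sample over all examples seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, opt, loss_fn, x_new, y_new, buffer, replay_size=32):
    """One update that mixes current-task data with replayed old-task data."""
    if buffer.data:
        x_old, y_old = buffer.sample(replay_size)
        x, y = torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
    else:
        x, y = x_new, y_new
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    for xi, yi in zip(x_new, y_new):  # remember some of the new task's data
        buffer.add(xi.detach(), yi.detach())
    return loss.item()
```

The Baylor approach reportedly replays internally generated representations rather than stored raw data, but the loop above shows why replay counters forgetting: every gradient step still sees evidence from old tasks.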


Deep Learning in Mapping for Autonomous Driving

#artificialintelligence

The applications of deep learning have been explored in various components of the autonomous driving stack, for example, in perception, prediction, and planning. Deep learning can also be used in mapping, a critical component for higher-level autonomous driving. Accurate maps are essential to the success of autonomous driving for routing and localization, as well as for easing perception. Maps with varying degrees of information can be obtained by subscribing to commercially available map services. However, in areas where maps are not available, self-driving vehicles need to rely on their own map-building capability to ensure the functionality and safety of autonomous driving.


New DeepMind Approach 'Bootstraps' Self-Supervised Learning of Image Representations

#artificialintelligence

The Cambridge Dictionary defines "bootstrap" as: "to improve your situation or become more successful, without help from others or without advantages that others have." While a machine learning algorithm's strength depends heavily on the quality of the data it is fed, an algorithm that can do the work required to improve itself should become even stronger. A team of researchers from DeepMind and Imperial College recently set out to prove that in the arena of computer vision. In the updated paper Bootstrap Your Own Latent – A New Approach to Self-Supervised Learning, the researchers release the source code and checkpoint for their new "BYOL" approach to self-supervised image representation learning, along with new theoretical and experimental insights. In computer vision, learning good image representations is critical because it allows for efficient training on downstream tasks. Image representation learning trains a neural network to produce representations that transfer well to those downstream tasks.
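At its core, BYOL trains an "online" network to predict a slowly evolving "target" network's representation of a different augmentation of the same image; the target's weights are an exponential moving average (EMA) of the online weights, so no negative pairs are needed. The PyTorch sketch below compresses that idea; the mlp helper, layer sizes, and tau value are illustrative assumptions and do not reproduce the released implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(dim_in, dim_hidden=4096, dim_out=256):
    # Projection/prediction head; sizes here are assumptions for the sketch.
    return nn.Sequential(nn.Linear(dim_in, dim_hidden),
                         nn.BatchNorm1d(dim_hidden),
                         nn.ReLU(inplace=True),
                         nn.Linear(dim_hidden, dim_out))

class BYOL(nn.Module):
    def __init__(self, encoder, feat_dim, tau=0.996):
        super().__init__()
        self.online_encoder = encoder
        self.online_projector = mlp(feat_dim)
        self.predictor = mlp(256)  # only the online branch has a predictor
        # Target network: EMA copy of the online network, never backpropagated.
        self.target_encoder = copy.deepcopy(encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in (list(self.target_encoder.parameters())
                  + list(self.target_projector.parameters())):
            p.requires_grad = False
        self.tau = tau

    @torch.no_grad()
    def update_target(self):
        # Slowly move target weights toward the online weights (EMA).
        for net, tgt in ((self.online_encoder, self.target_encoder),
                         (self.online_projector, self.target_projector)):
            for p, tp in zip(net.parameters(), tgt.parameters()):
                tp.mul_(self.tau).add_((1 - self.tau) * p)

    def loss(self, v1, v2):
        # v1, v2: two augmented views of the same batch of images.
        p = self.predictor(self.online_projector(self.online_encoder(v1)))
        with torch.no_grad():
            z = self.target_projector(self.target_encoder(v2))
        # 2 - 2*cos equals the MSE between L2-normalized vectors.
        return 2 - 2 * F.cosine_similarity(p, z, dim=-1).mean()
```

In practice the loss is symmetrized by also feeding v2 to the online network and v1 to the target, and update_target() is called after every optimizer step.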


Artificial intelligence in COVID-19 drug repurposing

#artificialintelligence

One study estimated that pharmaceutical companies spent US$2.6 billion in 2015, up from $802 million in 2003, on the development of a new chemical entity approved by the US Food and Drug Administration (FDA) (N Engl J Med 2015; 372: 1877-1879). The increasing cost of drug development is due to the large volume of compounds to be tested in preclinical stages and the high proportion of randomised controlled trials (RCTs) that fail to find clinical benefit or run into toxicity issues. Given the high attrition rates, substantial costs, and slow pace of de-novo drug discovery, repurposing known drugs can help improve their efficacy while minimising side-effects in clinical trials. As Nobel Prize-winning pharmacologist Sir James Black said, "The most fruitful basis for the discovery of a new drug is to start with an old drug": new uses for old drugs.


My Experience as a Bertelsmann Tech and Deep Learning Nanodegree Graduate

#artificialintelligence

One of the responsible things to do when a year is ending is to reflect on it: what accomplishments you made, what challenges you faced, what you learned, and how you can make the remainder of the year count. One experience that I can definitely share, and that hopefully will be beneficial to readers, is being awarded the 2019 Bertelsmann Tech Scholarship and receiving the Deep Learning Nanodegree from Udacity, completely free of charge. This year, Bertelsmann Tech is opening another scholarship application, which you should definitely try if you have a passion for data and cloud tech. Many people have asked online what it was like to apply for the Bertelsmann Tech Scholarship, win it, and complete the Nanodegree from Udacity.


Image classification with FASHION MNIST: why convolutional neural networks outperform traditional…

#artificialintelligence

In the last decade, with the advent of deep learning, the field of image classification has experienced a renaissance. Traditional machine learning methods have been replaced by newer and more powerful deep learning algorithms, such as the convolutional neural network. However, to truly understand and appreciate deep learning, we must know why it succeeds where the other methods fail. In this article, we try to answer some of those questions by applying various classification algorithms to the Fashion MNIST dataset. Fashion MNIST was introduced in August 2017 by the research lab at Zalando.
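To make the comparison concrete, here is a minimal PyTorch sketch of the kind of convolutional baseline such an experiment might use; the architecture and hyperparameters are illustrative choices, not the article's exact setup. Small CNNs of this shape typically exceed 90% test accuracy on Fashion MNIST, comfortably above classic baselines such as logistic regression.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Fashion MNIST: 60,000 training images of clothing, 10 classes, 28x28 grayscale.
train_set = datasets.FashionMNIST("data", train=True, download=True,
                                  transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# A small CNN: two conv/pool stages followed by a classifier head.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

Swapping the model for a linear classifier on the flattened pixels gives the traditional baseline for the comparison the article describes.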


New Data Processing Module Makes Deep Neural Networks Smarter

#artificialintelligence

Artificial intelligence researchers at North Carolina State University have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module that they call attentive normalization (AN). The hybrid module improves the accuracy of the system significantly, while using negligible extra computational power. "Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from raw data are most important for accomplishing a given task," says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at NC State. "But they have mostly been treated separately. We found that combining them made them more efficient and effective."
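As a rough sketch of how such a hybrid might look (an illustration of the stated idea, not the authors' released module): normalize the features without a fixed affine transform, then let a small attention branch predict per-instance weights over K learnable scale/shift pairs. The class name, the attention branch design, and the choice of K below are all assumptions.

```python
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    """Illustrative sketch: feature normalization followed by an
    attention-weighted mixture of K learnable affine transforms."""
    def __init__(self, channels, k=5):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)  # normalize only
        self.gamma = nn.Parameter(torch.ones(k, channels))  # K scale vectors
        self.beta = nn.Parameter(torch.zeros(k, channels))  # K shift vectors
        self.attn = nn.Sequential(                          # squeeze-excite style
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, k), nn.Sigmoid(),
        )

    def forward(self, x):
        normed = self.norm(x)                       # (B, C, H, W)
        w = self.attn(x)                            # (B, K) per-instance weights
        gamma = (w @ self.gamma)[..., None, None]   # (B, C, 1, 1) mixed scales
        beta = (w @ self.beta)[..., None, None]     # (B, C, 1, 1) mixed shifts
        return gamma * normed + beta
```

Dropped into a network in place of a standard normalization layer, the only extra cost is the pooled linear attention branch, which is consistent with the article's claim of negligible additional computation.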


The Complete Interesting And Convoluted History of Neural Networks!

#artificialintelligence

In this article, we will be looking at the history of neural networks. After thoroughly going through various sources, I found that the history of neural networks piqued my interest, and I became engrossed; researching this topic was gratifying and a lot of fun. Below is the table of contents. Feel free to skip to the topic that fascinates you most.