Deep Learning


Combining Physics and Deep Learning

#artificialintelligence

With the rise in compute power over the past 10 years, we have seen a sharp increase in the use of simulation. Digital twins are one such example: virtual replicas of a physical object or process that can be simulated under a variety of scenarios. One problem digital twins face is how to combine potentially noisy empirical data with physics-based models. In 2021, researchers at the University of Sheffield developed a very simple digital twin framework called PhysiNet to solve this problem.
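The general recipe is easy to sketch: blend a known (possibly approximate) physics model with a neural network through a learnable weight, so the physics supplies a prior and the network absorbs what the physics misses. Below is a minimal PyTorch illustration of that idea; the class, layer sizes, weighting scheme, and toy data are illustrative assumptions, not PhysiNet's actual implementation:

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Illustrative hybrid of a physics model and a neural network.

    A sketch of the general physics-plus-data idea, not PhysiNet's
    exact design: a learnable weight blends the two predictions.
    """
    def __init__(self, physics_fn, in_dim=1, hidden=32):
        super().__init__()
        self.physics_fn = physics_fn  # known (possibly approximate) physics
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned blend weight

    def forward(self, x):
        w = torch.sigmoid(self.alpha)  # keep the blend weight in (0, 1)
        return w * self.physics_fn(x) + (1.0 - w) * self.net(x)

# Toy setup: approximate physics says y = 2x; the noisy "empirical"
# data actually follows y = 2x + 0.3 sin(5x).
physics = lambda x: 2.0 * x
model = HybridModel(physics)
x = torch.linspace(0, 1, 128).unsqueeze(1)
y = 2.0 * x + 0.3 * torch.sin(5.0 * x) + 0.05 * torch.randn_like(x)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```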


Big Tech & Their Favourite Deep Learning Techniques

#artificialintelligence

Interestingly, the big tech companies all seem to have picked a particular school of thought in deep learning, and with time this pattern is becoming clearer. For instance, Facebook AI Research (FAIR) has been championing self-supervised learning (SSL) for quite some time, releasing relevant papers and tech related to computer vision, image, text, video, and audio understanding. Even though many companies and research institutions seem to have their hands in every possible area within deep learning, each has its favourites. In this article, we will explore some of their recent work in their respective niche or popularised areas.


Tensorflow 2.0: Deep Learning and Artificial Intelligence

#artificialintelligence

It's been nearly 4 years since TensorFlow was released, and the library has evolved to its official second version. TensorFlow is Google's library for deep learning and artificial intelligence, and it is the world's most popular deep learning library. Google's parent, Alphabet, recently became the most cash-rich company in the world (just a few days before I wrote this). TensorFlow is the library of choice for many companies doing AI and machine learning. In other words, if you want to do deep learning, you gotta know TensorFlow.


Mathematical Foundations of Machine Learning

#artificialintelligence

To be a good data scientist, you need to know how to use data science and machine learning libraries and algorithms, such as Scikit-learn, TensorFlow, and PyTorch, to solve whatever problem you have at hand. To be an excellent data scientist, you need to know how those libraries and algorithms work under the hood. This is where our "Machine Learning & Data Science Foundations Masterclass" comes in. Led by deep learning guru Dr. Jon Krohn, this course provides a firm grasp of the underlying mathematics, such as linear algebra, tensors, and eigenvectors, that operates behind the most important Python libraries, machine learning algorithms, and data science models. And while the sections above constitute a standalone introductory course on linear algebra, we're not stopping there!
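To make "under the hood" concrete with one of the topics named above: an eigenvector of a matrix is a direction the matrix only stretches, never rotates. Here is that defining property checked in NumPy (a standalone illustration, not material from the course):

```python
import numpy as np

# A symmetric matrix, so its eigenvalues are real and its
# eigenvectors are mutually orthogonal.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)  # eigh: symmetric matrices

# Verify the defining property A @ v == lambda * v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # approximately [1.44, 5.56]
```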


DEEP LEARNING LIBRARIES

#artificialintelligence

Here I am going to share various deep learning libraries, some of which are very popular today and some of which were very impressive in earlier times. They are popular because of their features and properties. If you want to know something about deep learning libraries, read this post; you won't be disappointed. The libraries to be discussed are THEANO, TENSORFLOW, PYTORCH, and KERAS. THEANO: a library developed by the Montreal Institute for Learning Algorithms (MILA), and the major library for deep learning development even before TensorFlow and PyTorch. Its developers could no longer maintain it, so it lost its popularity.


How does Transfer Learning work?

#artificialintelligence

The simple idea of transfer learning is this: after a neural network has learned one task, apply that knowledge to another, related task. It is a powerful idea in deep learning. As you know, computer vision and natural language processing tasks require high computational cost and time. We can simplify those tasks using transfer learning. For example, after we train a model on images to classify cars, we can reuse that model to recognize other vehicles, such as trucks.
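In code, the standard recipe is to load a network pretrained on a large dataset, freeze its learned features, and attach a fresh head for the new task. A minimal Keras sketch of that recipe; the choice of backbone and the hypothetical 3-class vehicle head are illustrative assumptions:

```python
import tensorflow as tf

# Load a network pretrained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the features learned on the original task

# Attach a new head for the related task (here, a hypothetical
# 3-class vehicle dataset: trucks, buses, vans).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(vehicle_images, vehicle_labels, epochs=5)  # your related-task data
```

Because the base is frozen, only the small new head is trained, which is why transfer learning cuts so much of the computational cost and time.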


Google Introduces Families of Neural Networks To Train Faster, SOTA Performance

#artificialintelligence

Google AI research team recently introduced two families of neural networks for image recognition -- EfficientNetV2 and CoAtNet. EfficientNetV2 consists of CNNs that train faster on small-scale datasets like ImageNet1K (1.28 million images), while CoAtNet combines convolution and self-attention to achieve higher accuracy on large-scale datasets like ImageNet21K (13 million images) and JFT (3 billion images). As per Google, EfficientNetV2 and CoAtNet are four to ten times faster to train while achieving SOTA performance and 90.88 per cent top-1 accuracy on the well-established ImageNet dataset. In addition, the team has released the source code and pretrained models on the Google AutoML GitHub. Training efficiency has become a critical focus for deep learning as neural network models and training data sizes grow. For instance, GPT-3 shows remarkable capabilities in few-shot learning, but it needs weeks of training with thousands of GPUs, making it difficult to retrain or improve.
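For readers who want to try the models, recent TensorFlow releases also ship EfficientNetV2 through tf.keras.applications, alongside the checkpoints on the Google AutoML GitHub. A minimal sketch, assuming TensorFlow >= 2.8 and substituting a random stand-in image:

```python
import numpy as np
import tensorflow as tf

# EfficientNetV2 (small variant) with ImageNet1K weights, loaded via
# tf.keras.applications (available in TensorFlow >= 2.8).
model = tf.keras.applications.EfficientNetV2S(weights="imagenet")

# Classify a single image; a random array stands in for a real photo.
# EfficientNetV2S expects 384x384 RGB input with the top head included.
image = np.random.rand(1, 384, 384, 3).astype("float32") * 255.0
inputs = tf.keras.applications.efficientnet_v2.preprocess_input(image)
preds = model.predict(inputs)
print(tf.keras.applications.efficientnet_v2.decode_predictions(preds, top=3))
```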


GitHub - Nyandwi/machine_learning_complete

#artificialintelligence

Techniques, tools, best practices and everything you need to learn machine learning! This is a comprehensive repository containing 30 notebooks on Python programming, data manipulation, data analysis, data visualization, data cleaning, classical machine learning, Computer Vision and Natural Language Processing (NLP). All notebooks were created with the reader in mind. Every notebook starts with a high-level overview of the specific algorithms or concepts being covered. Wherever possible, visuals are used to make things clear.


How TensorFlow is taking the tension out of Machine Learning!

#artificialintelligence

Machine Learning and Deep Learning are both becoming well-known phrases in the current era -- but details of the specific tools they require are less ubiquitous. I'd like to discuss one of the most popular machine learning tools and how it compares to the others. TensorFlow is probably the most popular machine learning tool among researchers today. The Data Incubator calculated that the rating for TensorFlow is nine standard deviations higher than the rating for the second-highest machine learning tool, Keras. TensorFlow was written by the Google Brain team in 2015; its front end is written in Python, while its back end is written in C++.


RStudio AI Blog: Beyond alchemy: A first look at geometric deep learning

#artificialintelligence

Geometric deep learning is a "program" that aspires to situate deep learning architectures and techniques in a framework of mathematical priors. The priors, such as various types of invariance, first arise in some physical domain. A neural network that matches its domain well will preserve as many of these invariances as possible. In this post, we present a very conceptual, high-level overview and highlight a few applications.
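To make one such prior concrete: permutation invariance, the prior behind models over sets and graphs, says the output must not change when the input elements are reordered. A tiny NumPy sketch of a Deep-Sets-style construction, chosen here for illustration rather than taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy set encoder: a shared per-element map phi, followed by sum
# pooling. Summation ignores element order, so the encoding is
# invariant to permutations of the input set.
W = rng.normal(size=(4, 8))

def encode(points):
    phi = np.tanh(points @ W)   # shared map applied to each element
    return phi.sum(axis=0)      # permutation-invariant aggregation

x = rng.normal(size=(10, 4))                    # a set of 10 elements
perm = rng.permutation(10)
assert np.allclose(encode(x), encode(x[perm]))  # same output, any order
```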