deep learning


So You Want to be a Machine Learning Engineer? - DATAVERSITY

#artificialintelligence

Ideally, a machine learning engineer would have both the skills of a software engineer and the experience of a data scientist and data engineer. However, data scientists and software engineers usually come from very different backgrounds, and data scientists should not be expected to be great programmers, nor should software engineers be expected to produce statistical summaries. Nonetheless, a background in machine learning algorithms and how they can be implemented is critical to the machine learning engineer (MLE). An MLE works with different algorithms and applies them to different codebases and settings. Previous experience with software engineering and working in a codebase would provide a very useful foundation for this career field.


The Power and Limits Of Deep Learning -- Yann LeCun

#artificialintelligence

We need a lot of data, and that is a huge drawback. We are learning the representation directly, and this is why it works so well. Even in RL we need a lot of data, and this can really be a drawback at every turn. And all supervised learning comes down to backpropagation: gradient descent using derivatives of the loss. Even by 2005, some good progress had been made.
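To make the backpropagation-and-gradient-descent point concrete, here is a toy sketch (ours, not from the talk) that fits a single weight to synthetic data by repeatedly stepping against the derivative of a squared-error loss:

```python
import numpy as np

# Toy supervised-learning problem: fit w in y ≈ w * x by gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # synthetic data, true slope 3.0

w, lr = 0.0, 0.1
for step in range(200):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)  # derivative of the mean squared error w.r.t. w
    w -= lr * grad                      # gradient-descent update
print(w)  # converges close to 3.0
```

In a deep network the same loop applies; backpropagation is simply the machinery that computes this derivative for every weight at once.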


Solving a Rubik's Cube with a dexterous hand

#artificialintelligence

In recent years, a growing number of researchers have explored the use of robotic arms or dexterous hands to solve a variety of everyday tasks. While many of them have successfully tackled simple tasks, such as grasping or basic manipulation, complex tasks that involve multiple steps and precise/strategic movements have so far proved harder to address. A team of researchers at the Chinese University of Hong Kong and Tencent AI Lab has recently developed a deep learning-based approach to solve a Rubik's Cube using a multi-fingered dexterous hand. Their approach, presented in a paper pre-published on arXiv, allows a dexterous hand to solve more advanced in-hand manipulation tasks, such as the renowned Rubik's Cube puzzle. A Rubik's Cube is a plastic cube covered in multi-colored squares that can be shifted into different positions.


A hands-on intuitive approach to Deep Learning Methods for Text Data -- Word2Vec, GloVe and FastText

#artificialintelligence

Working with unstructured text data is hard, especially when you are trying to build an intelligent system which interprets and understands free-flowing natural language just like humans do. You need to be able to process and transform noisy, unstructured textual data into some structured, vectorized format which can be understood by any machine learning algorithm. Principles from Natural Language Processing, Machine Learning, and Deep Learning, all of which fall under the broad umbrella of Artificial Intelligence, are effective tools of the trade. Based on my previous posts, an important point to remember here is that any machine learning algorithm is based on principles of statistics, math, and optimization. Hence, such algorithms are not intelligent enough to start processing text in its raw, native form.
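As a minimal illustration of turning text into such a vectorized format, here is a sketch of ours (not code from the article) that trains a tiny Word2Vec model; the toy corpus, parameter values, and the gensim 4.x API are assumptions:

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (in practice, cleaned and normalized text).
corpus = [
    ["deep", "learning", "models", "need", "vectorized", "text"],
    ["word2vec", "learns", "dense", "word", "vectors", "from", "context"],
    ["glove", "and", "fasttext", "are", "alternative", "embedding", "methods"],
]

# Train a small skip-gram Word2Vec model (gensim 4.x parameter names assumed).
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the learned embeddings
    window=3,         # context window size
    min_count=1,      # keep every token in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

vector = model.wv["word2vec"]                     # dense vector any ML algorithm can consume
similar = model.wv.most_similar("word2vec", topn=3)
print(vector.shape, similar)
```

The learned vectors are exactly the structured representation the article refers to: fixed-length numeric features that downstream models can consume directly.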


Never deploy AI without doing these 3 things -- ArthurAI

#artificialintelligence

AI is hard at work delivering huge ROIs and efficiencies for businesses in every sector of commerce, but it's also something that can (quite spectacularly) fail, causing major financial losses and harm to the brand your team has worked so hard to build. If not done carefully, AI deployments can quickly turn into disasters, as we see more and more often in the news. A business interested in building reliable AI, whose decisions can be trusted, is a business that puts appropriate guard rails around its model maintenance long after the model has been trained, tested, and deployed. Yet businesses seldom take concrete steps to ensure that, after those models are trained, they stay relevant, operational, and healthy. We will be updating this blog with deep explainers for all 3 of the above in the weeks to come, but to be quick about it, we'll explain why they're important… right now!
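One common guard rail of the kind the post alludes to is monitoring input drift after deployment. The sketch below is illustrative only (not ArthurAI's product or the article's own checklist): it computes a population stability index between a training-time feature sample and live data and flags drift above a common rule-of-thumb threshold.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10, eps=1e-6):
    """PSI between a training-time feature sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_frac = e_counts / max(e_counts.sum(), 1) + eps
    a_frac = a_counts / max(a_counts.sum(), 1) + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Hypothetical data: training distribution vs. a drifted live distribution.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)
live_feature = rng.normal(0.5, 1.2, 1_000)

psi = population_stability_index(train_feature, live_feature)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # 0.2 is a common rule-of-thumb threshold for significant drift
    print("Significant drift detected - investigate before trusting model outputs.")
```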


From TensorFlow to PyTorch

#artificialintelligence

In this post, you'll learn the main recipe to convert a pretrained TensorFlow model into a pretrained PyTorch model in just a few hours. We'll take the example of a simple architecture like OpenAI GPT-2. Doing such a conversion assumes a good familiarity with both TensorFlow and PyTorch, but it's also one of the best ways to get to know both frameworks better! The first step is to retrieve the TensorFlow code and a pretrained checkpoint; let's get them from the official OpenAI GPT-2 repository. TensorFlow checkpoints are usually composed of three files named XXX.ckpt.data-YYY, XXX.ckpt.index, and XXX.ckpt.meta. A trained NLP model should also be provided with a vocabulary to associate the tokens with the embedding indices (here encoder.json).
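As a rough sketch of that recipe (the checkpoint path and the naive name mapping below are illustrative assumptions, not the post's exact code), the core of the conversion is reading each variable out of the TensorFlow checkpoint and copying it into the corresponding PyTorch parameter:

```python
import tensorflow as tf
import torch

# Illustrative path; point this at the downloaded GPT-2 checkpoint prefix.
ckpt_path = "models/117M/model.ckpt"

reader = tf.train.load_checkpoint(ckpt_path)
tf_vars = reader.get_variable_to_shape_map()

state_dict = {}
for tf_name in tf_vars:
    array = reader.get_tensor(tf_name)  # NumPy array with the variable's weights
    # Naive TF -> PyTorch name mapping; a real conversion needs a model-specific map,
    # and some weights (e.g. dense kernels) must also be transposed.
    pt_name = tf_name.replace("model/", "").replace("/", ".")
    state_dict[pt_name] = torch.from_numpy(array)

# `pytorch_gpt2` stands for a PyTorch re-implementation of the same architecture.
# pytorch_gpt2.load_state_dict(state_dict, strict=False)
```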


CS231n Convolutional Neural Networks for Visual Recognition

#artificialintelligence

In the previous sections we've discussed the static parts of a Neural Network: how we can set up the network connectivity, the data, and the loss function. This section is devoted to the dynamics, or in other words, the process of learning the parameters and finding good hyperparameters. In theory, performing a gradient check is as simple as comparing the analytic gradient to the numerical gradient. In practice, the process is much more involved and error prone. The recommended approach is to use the centered difference formula \(\frac{f(x+h) - f(x-h)}{2h}\) rather than the naive one-sided formula \(\frac{f(x+h) - f(x)}{h}\). This requires you to evaluate the loss function twice to check every single dimension of the gradient (so it is about 2 times as expensive), but the gradient approximation turns out to be much more precise. To see this, you can use a Taylor expansion of \(f(x+h)\) and \(f(x-h)\) and verify that the first formula has an error on the order of \(O(h)\), while the second formula only has error terms on the order of \(O(h^2)\) (i.e. it is a second-order approximation). What are the details of comparing the numerical gradient \(f'_n\) and the analytic gradient \(f'_a\)? That is, how do we know if the two are not compatible? You might be tempted to keep track of the difference \(\mid f'_a - f'_n \mid\) or its square and define the gradient check as failed if that difference is above a threshold.
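The centered-difference check described above takes only a few lines to implement; the function names below are ours, not the course's starter code:

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Centered-difference estimate of the gradient of f at x, one dimension at a time."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)                       # f(x + h)
        x[ix] = old - h
        fxmh = f(x)                       # f(x - h)
        x[ix] = old                       # restore the original value
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

def relative_error(fa, fn, eps=1e-8):
    """Elementwise relative error between analytic (fa) and numerical (fn) gradients."""
    return np.abs(fa - fn) / np.maximum(eps, np.abs(fa) + np.abs(fn))

# Example: f(x) = sum(x**2), whose analytic gradient is 2*x.
x = np.random.randn(3, 4)
fn = numerical_gradient(lambda v: np.sum(v ** 2), x)
fa = 2 * x
print(relative_error(fa, fn).max())  # should be tiny, e.g. below 1e-7
```

The notes go on to recommend the relative error rather than the raw difference, which is why the helper above normalizes by the gradient magnitudes.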


28 Statistical Concepts Explained in Simple English - Part 18

#artificialintelligence

This resource is part of a series on specific topics related to data science: regression, clustering, neural networks, deep learning, decision trees, ensembles, correlation, Python, R, Tensorflow, SVM, data reduction, feature selection, experimental design, cross-validation, model fitting, and many more. To keep receiving these articles, sign up on DSC. Below is the last article in the series Statistical Concepts Explained in Simple English. The full series is accessible here.


Deep learning AI may identify atrial fibrillation from a normal rhythm ECG - Times of India

#artificialintelligence

Although the findings are preliminary and require further research before implementation, they could aid doctors investigating unexplained strokes or heart failure, enabling appropriate treatment. Researchers have trained an artificial intelligence model to detect the signature of atrial fibrillation in 10-second electrocardiograms (ECGs) taken from patients in normal rhythm. The study, involving almost 181,000 patients and published in The Lancet, is the first to use deep learning to identify patients with potentially undetected atrial fibrillation, and it had an overall accuracy of 83%. Atrial fibrillation is estimated to affect 2.7–6.1 million people in the United States and is associated with increased risk of stroke, heart failure, and mortality. It is difficult to detect on a single ECG because patients' hearts can go in and out of this abnormal rhythm, so atrial fibrillation often goes undiagnosed.


Data Science, the Good, the Bad, and the… Future

#artificialintelligence

How often do you think you're touched by data science in some form or another? Finding your way to this article likely involved a whole bunch of data science (whooaa). To simplify things a bit, I'll explain what data science means to me. "Data Science is the art of applying scientific methods of analysis to any kind of data so that we can unlock important information." If we unpack that, all data science really means is to answer questions by using math and science to go through data that's too much for our brains to process.