gradient


Over 150 of the Best Machine Learning, NLP, and Python Tutorials I've Found

#artificialintelligence

I've split this post into four sections: Machine Learning, NLP, Python, and Math. For future posts, I may create a similar list of books, online videos, and code repos, as I'm compiling a growing collection of those resources too. Among the tutorials: "What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?"


The future of deep learning

#artificialintelligence

As we noted in our previous post, a necessary transformational development that we can expect in the field of machine learning is a move away from models that perform purely pattern recognition and can only achieve local generalization, towards models capable of abstraction and reasoning that can achieve extreme generalization. Current AI programs capable of basic forms of reasoning are all hard-coded by human programmers: for instance, software that relies on search algorithms, graph manipulation, and formal logic. Instead, we will have a blend of formal algorithmic modules that provide reasoning and abstraction capabilities, and geometric modules that provide informal intuition and pattern-recognition capabilities.

Figure: A learned program relying on both geometric primitives (pattern recognition, intuition) and algorithmic primitives (reasoning, search, memory).



Learning to Learn

#artificialintelligence

This differs from many standard machine learning techniques, which involve training on a single task and testing on held-out examples from that task. Like the previous approach, meta-learning is performed using gradient descent (or your favorite neural network optimizer), whereas the learner corresponds to a comparison scheme such as nearest neighbors. In particular, when approaching any new vision task, the well-known paradigm is to first collect labeled data for the task, acquire a network pre-trained on ImageNet classification, and then fine-tune the network on the collected data using gradient descent. Despite the simplicity of the approach, we were surprised to find that the method was able to substantially outperform a number of existing approaches on popular few-shot image classification benchmarks, Omniglot and MiniImageNet, including approaches that were much more complex or domain-specific.
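
Since the excerpt describes gradient-based meta-learning only at a high level, here is a minimal sketch of what one MAML-style meta-update can look like in PyTorch. This is not the authors' code; the regression loss, the one-step inner loop, and the (support, query) task format are assumptions for illustration.

```python
import torch
from torch import nn
from torch.func import functional_call

def maml_meta_step(model, tasks, meta_opt, inner_lr=0.01):
    """One meta-update over a batch of tasks, each a
    (support_x, support_y, query_x, query_y) tuple of tensors."""
    loss_fn = nn.functional.mse_loss
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        names, params = zip(*model.named_parameters())
        # Inner loop: one gradient step on the support set, keeping the
        # graph (create_graph=True) so the meta-gradient flows through it.
        loss = loss_fn(functional_call(model, dict(zip(names, params)),
                                       (support_x,)), support_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        adapted = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
        # Outer objective: the adapted parameters evaluated on the query set.
        meta_loss = meta_loss + loss_fn(
            functional_call(model, adapted, (query_x,)), query_y)
    meta_opt.zero_grad()
    meta_loss.backward()   # optimizes for fast adaptation, not raw performance
    meta_opt.step()
    return meta_loss.item()
```

The key design point the post makes is visible here: the outer optimizer never minimizes task loss directly, it minimizes post-adaptation loss, so the learned initialization is one that fine-tunes well from few examples.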


Implementing MaLSTM on Kaggle's Quora Question Pairs competition

#artificialintelligence

A few months ago I came across a very nice article called Siamese Recurrent Architectures for Learning Sentence Similarity, which offers a pretty straightforward approach to the common problem of sentence similarity. Siamese networks seem to perform well on similarity tasks and have been used for jobs like sentence semantic similarity, recognizing forged signatures, and many more. Word embedding is a modern way to represent words in deep learning models; more about it can be found in this nice blog post. Inputs to the network are zero-padded sequences of word indices: vectors of fixed length where the leading zeros are ignored and the non-zero entries are indices that uniquely identify words.
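
As a rough illustration of the architecture described above, here is a minimal Keras sketch of a MaLSTM-style model: a shared embedding and LSTM encode both questions, and similarity is the exponent of the negative Manhattan (L1) distance between the two final hidden states. The hyperparameter values are placeholders, not the article's or the Kaggle kernel's.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Placeholder hyperparameters; the actual values would come from the data.
vocab_size, embed_dim, max_len, hidden = 20000, 300, 30, 50

left = layers.Input(shape=(max_len,), dtype="int32")
right = layers.Input(shape=(max_len,), dtype="int32")

# Shared weights: both questions pass through the same embedding and LSTM.
embed = layers.Embedding(vocab_size, embed_dim, mask_zero=True)  # index 0 = padding
encoder = layers.LSTM(hidden)
h_left, h_right = encoder(embed(left)), encoder(embed(right))

# MaLSTM similarity: exp(-L1 distance) maps the two hidden states
# to a score in (0, 1], where identical encodings give 1.
malstm_dist = layers.Lambda(
    lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
)([h_left, h_right])

model = Model(inputs=[left, right], outputs=malstm_dist)
model.compile(loss="mean_squared_error", optimizer="adam")
```

Because `mask_zero=True` is set on the embedding, the LSTM skips the padded positions, which matches the point about leading zeros being ignored.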


Technology Requirements for Deep Machine Learning

#artificialintelligence

Understanding key technology requirements will help technologists, management, and data scientists tasked with realizing the benefits of machine learning make intelligent decisions in their choice of hardware platforms. Deep learning is a technical term for a particular configuration of artificial neural network (ANN) architecture that has many 'hidden' or computational layers between the input neurons, where data is presented for training or inference, and the output neuron layer, where the numerical results of the network can be read. Each step in the training process simply applies a candidate set of model parameters (as determined by a black-box optimization algorithm) to inference all the examples in the training data. The reason is that numerical optimization requires repeated iterations over candidate parameter sets while the training process converges to a solution.
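
To make the "each training step inferences the whole training set" point concrete, here is a toy sketch (assumed for illustration, not from the article) in which a deliberately simple black-box optimizer, plain random search over a linear model, must run inference over every training example to score each candidate parameter set.

```python
import numpy as np

def objective(params, X, y):
    # "Inference" over every training example with one candidate
    # parameter set; a single linear layer stands in for the network.
    preds = X @ params
    return np.mean((preds - y) ** 2)

def random_search_train(X, y, n_iters=1000, step=0.1, seed=0):
    # Black-box optimization: propose a perturbed candidate, keep it
    # only if it scores better on the full training set.
    rng = np.random.default_rng(seed)
    params = rng.normal(size=X.shape[1])
    best = objective(params, X, y)
    for _ in range(n_iters):
        candidate = params + step * rng.normal(size=params.shape)
        score = objective(candidate, X, y)  # full-data pass: the costly part
        if score < best:
            params, best = candidate, score
    return params, best
```

The hardware implication follows directly: the inner `objective` call dominates the runtime, so the throughput of inference over the training set largely determines training speed.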


Deconstructing Deep Meta Learning – Intuition Machine – Medium

#artificialintelligence

This article explores in more detail the idea of Meta Learning that was previously introduced in the post "The Meta Model and Meta Meta Model of Deep Learning". HPO, and more generally architecture search, differs from "learning to learn" in that HPO explores the space of architectures while meta-learning explores the space of learning algorithms. "Learning to learn by gradient descent by gradient descent" trains an LSTM-based optimizer to learn a variant of the gradient descent method. "Learning to reinforcement learn" trains an LSTM in the context of learning a Reinforcement Learning (RL) algorithm.
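
A minimal toy sketch of the idea behind "Learning to learn by gradient descent by gradient descent" (an assumed illustration, not the paper's code): an LSTM consumes per-parameter gradients and emits per-parameter updates, so the update rule itself is learned rather than hand-designed like `-lr * grad`.

```python
import torch
from torch import nn

class LSTMOptimizer(nn.Module):
    # A learned optimizer: an LSTM reads the current gradient of each
    # parameter (treated as a batch element, one coordinate at a time)
    # and outputs that parameter's update.
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def step(self, grad, state=None):
        # grad: (num_params, 1) column of per-parameter gradients.
        h, c = self.cell(grad, state)
        return self.out(h), (h, c)  # proposed update, carried LSTM state

# One unrolled step on a loss f of parameters theta might look like:
#   grad = torch.autograd.grad(f(theta), theta)[0].view(-1, 1)
#   update, state = learned_opt.step(grad, state)
#   theta = theta + update.view_as(theta)
# The LSTMOptimizer's own weights are then meta-trained by
# backpropagating the optimizee's loss through this unrolled trajectory.
```

The carried hidden state is what lets the learned rule accumulate gradient history, playing a role analogous to momentum in hand-designed optimizers.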


[R] Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting • r/MachineLearning

@machinelearnbot

They don't specify if that test was with CE error or MSE, but even if it was with MSE (as a later experiment is), that just speaks to the incredibly poorly designed network they used (392-50-10 neurons is truly weird). The idea bears some resemblance to momentum, where we gradually speed things up when the error gradients are consistent. Overall, it's an interesting idea that I'm going to give a second read tomorrow. Tl;dr: in my non-professional opinion this is an interesting idea that was sidelined in my head by questions regarding their explanations and experiments.
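
For reference, the momentum the commenter is alluding to looks roughly like this (a generic sketch, not anything from the paper under discussion): the velocity accumulates gradients, so consistent gradient directions compound and speed up progress, while oscillating directions partially cancel.

```python
import numpy as np

def sgd_momentum(grad_fn, theta, lr=0.01, beta=0.9, steps=100):
    # Classical momentum update: v tracks an exponentially weighted
    # running sum of gradients; consistent g grows v, flipping g shrinks it.
    v = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        v = beta * v + g
        theta = theta - lr * v
    return theta
```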


Google's latest venture fund will back AI startups

Engadget

There's no question that Google believes artificial intelligence is the future, but it doesn't feel like it needs to do all the hard work by itself. To that end, Google has launched a venture capital firm, Gradient Ventures, that will offer financial backing and "technical mentorship" to AI startups. Aurima, meanwhile, is producing both an alternative sensing approach and AI modeling. It's not hard to see why Google would pour resources into AI startups it doesn't control directly.


Google Launches Gradient Ventures, Firm Focusing On Artificial Intelligence Startups

International Business Times

Google announced Tuesday the launch of Gradient Ventures, a venture fund designed to target and invest in young startups focusing on artificial intelligence development. "Through Gradient, we'll provide portfolio companies with capital, resources, and dedicated access to experts and bootcamps in AI." The firm touts resources ranging from experts in fields like deep learning and machine learning to having Google engineers potentially sit in and consult with startups on a short-term basis. For Google, which already has other venture capital arms like GV, launching a secondary firm like Gradient that's dedicated solely to artificial intelligence investment speaks to the interest that AI has sparked among tech companies.