Machine Learning


How the public clouds are innovating on AI

#artificialintelligence

The three big cloud providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), want developers and data scientists to develop, test, and deploy machine learning models on their clouds. It's a lucrative business for them because testing models often requires bursts of infrastructure, and models in production often require high availability. These services benefit customers too, but the providers don't want to compete for your business on infrastructure, service levels, and pricing alone. They also focus on versatile on-ramps that make it easier for customers to use their machine learning capabilities. Each public cloud offers multiple data storage options, including serverless databases, data warehouses, data lakes, and NoSQL datastores, making it likely that you will develop models close to where your data resides.


Building a Chess Engine: Part 2

#artificialintelligence

Hi everyone, this is the second instalment in my tutorial series on building a chess engine. This lesson focuses on building an AI agent that we can play against. It is more technical than part 1, so please bear with me; I try to supply both equations and diagrams to make things a little easier. Now that we have finished building our chess game, we can begin designing an AI that plays it.
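The classic starting point for a game-playing agent of this kind is minimax search: recursively score positions by assuming both sides play their best move. The sketch below is not the tutorial's actual engine — to stay self-contained it uses a toy token-taking game (take 1 or 2 tokens; whoever takes the last token wins) instead of chess, but the search logic is the same shape.

```python
# Minimax sketch on a toy game: a pile of tokens, each player removes 1 or 2,
# and taking the last token wins. The chess version swaps in real move
# generation and a board-evaluation function.

def legal_moves(tokens):
    return [m for m in (1, 2) if m <= tokens]

def evaluate(tokens, maximizing):
    # tokens == 0 means the previous player took the last token and won,
    # so the side now to move has lost.
    if tokens == 0:
        return -1 if maximizing else 1
    return 0  # depth cut-off with no winner: call it even

def minimax(tokens, depth, maximizing):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0 or tokens == 0:
        return evaluate(tokens, maximizing)
    scores = [minimax(tokens - m, depth - 1, not maximizing)
              for m in legal_moves(tokens)]
    return max(scores) if maximizing else min(scores)

def best_move(tokens, depth=10):
    """Pick the move whose resulting position scores best for us."""
    return max(legal_moves(tokens),
               key=lambda m: minimax(tokens - m, depth - 1, False))
```

In this toy game, piles that are multiples of 3 are lost for the side to move, so from 4 tokens the agent takes 1 and from 5 it takes 2 — a quick sanity check that the recursion is scoring positions correctly.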


Landing AI: Unlocking The Power Of Data-Centric Artificial Intelligence

#artificialintelligence

Artificial intelligence (AI) has been hugely transformative in industries with access to huge datasets and trained algorithms to analyze and interpret them. Probably the most obvious examples of this success can be found in consumer-facing internet businesses like Google, Amazon, Netflix, or Facebook. Over the last two decades, companies such as these have grown into some of the world's largest and most powerful corporations. In many ways, their growth can be put down to their exposure to the ever-growing volumes of data being churned out by our increasingly digitized society. But if AI is going to unlock the truly world-changing value that many believe it will – rather than simply making some very smart people in Silicon Valley very rich – then businesses in other industries have to consider different approaches.


AI Trends 2021–2025

#artificialintelligence

One of the trends of 2021 that will continue for at least the next few years is the rise in popularity of the PyTorch framework. Usage graphs show that PyTorch adoption has grown steadily over the past few years, and although the popularity of the two major frameworks shows some correlation, their trends are different. At the same time, dynamism and flexibility are on PyTorch's side. PyTorch counters TensorBoard with its own tool, Visdom. It doesn't have as many features, but it is easier to use.


Supervised, Semi-Supervised, Unsupervised, and Self-Supervised Learning

#artificialintelligence

The exponential growth of research and publications has introduced many terms and concepts into the domain of machine learning, yet many have degenerated into mere buzzwords without many people fully understanding their differences. The most common type, and perhaps THE type we refer to when talking about machine learning, is supervised learning. In simple words, supervised learning provides a set of input-output pairs from which we can learn an intermediate system that maps inputs to correct outputs. A naive example of supervised learning is determining the class (e.g., dog or cat) of an image based on a dataset of images and their corresponding classes, which we will refer to as their labels. Given the input-label pairs, the currently popular approach is to directly train a deep neural network (e.g., a convolutional neural network) to output a label prediction for a given image, compute a differentiable loss between the prediction and the actual correct answer, and backpropagate through the network to update the weights and optimise the predictions.
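The predict / loss / gradient-update loop described above can be sketched in a few lines. To stay self-contained, this uses a hypothetical one-parameter linear model and a tiny invented dataset (y = 2x) in place of a deep network and images — the structure of the loop is the same.

```python
# Minimal supervised-learning loop: forward pass, differentiable loss
# against the label, and a gradient-descent weight update.
# Toy data (invented for illustration): label = 2 * input.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs
w = 0.0       # single model weight, zero-initialised
lr = 0.05     # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x                  # forward pass
        grad = 2 * (pred - y) * x     # d/dw of the squared error (pred - y)**2
        w -= lr * grad                # gradient descent step

# w converges toward 2.0, the slope that generated the labels
```

A real network replaces `w * x` with many layers and lets autograd compute `grad`, but the update rule is this same descent step applied to every weight.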


GitHub - cleanlab/cleanlab: The standard package for machine learning with noisy labels, finding mislabeled data, and uncertainty quantification. Works with most datasets and models.

#artificialintelligence

Check out the cleanlab code documentation. Past release notes and planned future features are available here. By default, cleanlab requires no hyper-parameters. Pre-computed out-of-sample predicted probabilities for the CIFAR-10 train set are available here: [[LINK]]. Check out these examples and tests (including how to use PyTorch, FastText, etc.).


Moderation pipeline for user-generated content

#artificialintelligence

Running several experiments led to the ladder solution shown in the image below. Every piece of content in the pipeline is moderated by machine learning algorithms and sent to the general Toloka crowd if the AI system isn't sure about the label. To ensure crowd quality, the Yandex Zen team set up a smaller pool of more trusted workers, called moderators, who label the tasks later used as control tasks for the general crowd. Their labels are also used to create exam pools that performers need to pass before being accepted onto the project. The last rung on the ladder is expert moderators, who monitor the work of regular moderators by creating control tasks for them.
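The first rung of that ladder — send confident model decisions through automatically and escalate uncertain ones to the crowd — amounts to simple threshold routing. The sketch below is an illustrative guess at that logic; the threshold, names, and exam-accuracy cut-off are hypothetical, not Yandex Zen's actual values.

```python
# Hedged sketch of ladder-style routing. All thresholds are invented.

AUTO_THRESHOLD = 0.95  # hypothetical confidence cut-off for auto-labelling

def route(model_confidence: float) -> str:
    """Decide where a piece of content goes next in the pipeline.

    model_confidence is the ML model's confidence in its own label;
    items the model is unsure about fall through to the general crowd.
    """
    if model_confidence >= AUTO_THRESHOLD:
        return "auto-labelled"   # model is confident: accept its label
    return "crowd"               # escalate to the general Toloka crowd

def passes_exam(correct: int, total: int, min_accuracy: float = 0.8) -> bool:
    """Exam pools built from trusted-moderator labels gate crowd access."""
    return total > 0 and correct / total >= min_accuracy
```

Control tasks from the trusted moderators would then be mixed into the crowd pools at a fixed rate to keep measuring worker accuracy after the exam.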


What you should know about developing GPT-3 applications

#artificialintelligence

Last week, OpenAI removed the waitlist for the application programming interface to GPT-3, its flagship language model. Now, any developer who meets the conditions for using the OpenAI API can apply and start integrating GPT-3 into their applications. Since the beta release of GPT-3, developers have built hundreds of applications on top of the language model. But building successful GPT-3 products presents unique challenges. You must find a way to leverage the power of OpenAI's advanced deep learning models to provide the best value to your users while keeping your operations scalable and cost-efficient.


The Essence of Logistic Regression

#artificialintelligence

The aim of logistic regression is to assign a probability to an event occurring, or to a sample belonging to a certain class, given some features. This is analogous to a boolean-valued output. An example problem is determining whether a student passes an exam or not. Let's assign a pass (success) as 1 and a fail as 0. Now, let's assume we know how long they have spent studying for their exam, call this X_1, and whether they passed their previous exam, X_2. The model is then Y = 1 / (1 + e^(-(β_0 + β_1 X_1 + β_2 X_2))), where Y is the target, which should take values between 0 and 1, and the β values are the unknown coefficients that we need to compute to fit the model.
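Fitting those β coefficients can be done with gradient descent on the log-loss. The sketch below uses an invented toy dataset matching the example (X_1 = hours studied, X_2 = passed the previous exam, label = pass/fail); the six data points and learning rate are purely illustrative.

```python
import math

# Toy exam data, invented for illustration: ((hours_studied, passed_prev), passed)
data = [
    ((1.0, 0.0), 0), ((2.0, 0.0), 0), ((3.0, 1.0), 0),
    ((5.0, 0.0), 1), ((6.0, 1.0), 1), ((8.0, 1.0), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(beta, x):
    """P(pass) = sigmoid(b0 + b1*X1 + b2*X2), the model from the text."""
    b0, b1, b2 = beta
    return sigmoid(b0 + b1 * x[0] + b2 * x[1])

beta = [0.0, 0.0, 0.0]  # b0, b1, b2 start at zero
lr = 0.1
for _ in range(2000):
    for x, y in data:
        p = predict(beta, x)
        # gradient of the log-loss w.r.t. each coefficient is (p - y) * feature
        beta[0] -= lr * (p - y)
        beta[1] -= lr * (p - y) * x[0]
        beta[2] -= lr * (p - y) * x[1]
```

After training, probabilities above 0.5 are classified as a pass: a student with many study hours scores near 1, one with few hours near 0.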


AWS rolls out Graviton2-powered EC2 instances for GPU-based workloads

ZDNet

Amazon Web Services on Monday said it's bringing a new set of EC2 instances into general availability, including Graviton2-based instances designed for GPU-based workloads. AWS highlighted a few workloads that G5g instances would serve well. For Android game streaming, the instances provide up to 30% lower cost per stream per hour than x86-based GPU instances, Amazon said. For ML inference, G5g instances are well suited to models that are sensitive to CPU performance or leverage Nvidia's AI libraries. For graphics rendering, G5g instances are the most cost-effective option for AWS customers. The instances are compatible with a number of graphics and machine learning libraries on Linux, including NVENC, NVDEC, nvJPEG, OpenGL, Vulkan, CUDA, cuDNN, cuBLAS, and TensorRT.