Instructional Theory


Deploying machine learning models as serverless APIs - Amazon Web Services

#artificialintelligence

Machine learning (ML) practitioners gather data, design algorithms, run experiments, and evaluate the results. After you create an ML model, you face another problem: serving predictions at scale cost-effectively. Serverless technology empowers you to serve your model predictions without worrying about how to manage the underlying infrastructure. Services like AWS Lambda only charge for the amount of time that you run your code, which allows for significant cost savings. Depending on latency and memory requirements, AWS Lambda can be an excellent choice for easily deploying ML models.
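For a concrete picture, here is a minimal sketch of what such a Lambda handler might look like; the model file name, the request shape, and the use of scikit-learn are illustrative assumptions, not details from the article.

```python
# Minimal sketch of an AWS Lambda handler serving model predictions.
# Assumptions (illustrative only): a scikit-learn model serialized as
# model.joblib is packaged with the function, and requests arrive via
# API Gateway as JSON with a "features" list.
import json

import joblib

# Loaded once per container start and reused across invocations,
# so model-loading time is paid only on cold starts.
model = joblib.load("model.joblib")


def handler(event, context):
    features = json.loads(event["body"])["features"]
    prediction = model.predict([features])[0]
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```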


Deploying a Deep Learning Model using Flask

#artificialintelligence

I am creating the web deployment for a book I am writing for Manning Publications on deep learning with structured data. The audience for this book is interested in how to deploy a simple deep learning model. They need a deployment example that is straightforward and doesn't force them to wade through a bunch of web programming details. For this reason, I wanted a web deployment solution that kept as much of the coding as possible in Python. With this in mind, I looked at two Python-based options for web deployment: Flask and Django.
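As a rough illustration of the Flask route such a deployment boils down to, here is a minimal sketch; the Keras model file and the JSON request format are assumptions, not details from the book.

```python
# A minimal Flask sketch for serving a deep learning model.
# Assumptions (illustrative only): a saved Keras model at model.h5
# and clients posting a JSON list of feature values.
import numpy as np
from flask import Flask, jsonify, request
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("model.h5")  # load once at startup


@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    score = model.predict(np.array([features]))  # batch of one sample
    return jsonify({"score": float(score[0][0])})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```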




How time can ruin your most precious machine learning model

#artificialintelligence

Once potential time-dependent effects have been identified, you should check whether these effects are actually present in the data and whether your data covers the time spans necessary to detect them. If the data depends on time, there are three basic options for handling it. What experience with time effects have you had in your machine learning projects? Would you like to read a story about the various modelling and feature-engineering techniques and validation schemes that account for these time effects?
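One concrete place time effects bite is validation: a random shuffle leaks future information into the training set. Below is a minimal sketch of a time-respecting split using scikit-learn's TimeSeriesSplit; the file and column names are assumptions for illustration.

```python
# Expanding-window validation instead of a random shuffle, so each
# fold trains on the past and tests on the future. The CSV file and
# column names ("timestamp", "target") are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

df = pd.read_csv("observations.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp")  # splits only make sense on time-ordered data

X = df.drop(columns=["timestamp", "target"])
y = df["target"]

tscv = TimeSeriesSplit(n_splits=3)
for train_idx, test_idx in tscv.split(X):
    X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
    y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
    # fit and evaluate a model here; scores across folds show whether
    # performance degrades as the test window moves forward in time
```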


Build Machine Learning Model in Python

#artificialintelligence

In this video, I will show you how to build a simple machine learning model in Python. Particularly, we will be using the scikit-learn package in Python to build a simple classification model (for classifying Iris flowers) using the random forest algorithm.
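The workflow the video describes fits in a few lines; this sketch follows the stated setup (scikit-learn, Iris, random forest), though the exact split and parameters are assumptions.

```python
# A random forest classifier on the Iris dataset with scikit-learn.
# The 80/20 split and random seeds are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```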


6 Python Libraries to Interpret Machine Learning Models and Build Trust - Analytics Vidhya

#artificialintelligence

The 'SHapley Additive exPlanations' Python library, better known as the SHAP library, is one of the most popular libraries for machine learning interpretability. The SHAP library uses Shapley values at its core and is aimed at explaining individual predictions. But wait: what are Shapley values? Simply put, Shapley values are derived from game theory, where each feature in our data is a player and the final reward is the prediction. Shapley values tell us how to distribute this reward fairly among the players. We won't cover this technique in detail here, but you can refer to this excellent article explaining how Shapley values work: A Unique Method for Machine Learning Interpretability: Game Theory & Shapley Values! The best part about SHAP is that it offers a special module for tree-based models. Considering how popular tree-based models are in hackathons and in the industry, this module enables fast computations, even when features are dependent.
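As a small sketch of that tree module in use; the model and data here (a random forest on Iris) are illustrative stand-ins, not from the article.

```python
# Minimal sketch of SHAP's tree-based explainer on a fitted ensemble.
# The random forest and Iris data are illustrative assumptions.
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # fast, exact for tree ensembles
shap_values = explainer.shap_values(X)   # per-feature credit for each prediction
shap.summary_plot(shap_values, X)        # global summary of feature impact
```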


PyTorch Tutorial: How to Develop Deep Learning Models with Python

#artificialintelligence

Predictive modeling with deep learning is a skill that modern developers need to know. PyTorch is the premier open-source deep learning framework developed and maintained by Facebook. At its core, PyTorch is a mathematical library that allows you to perform efficient computation and automatic differentiation on graph-based models. Achieving this directly is challenging, but thankfully the modern PyTorch API provides classes and idioms that allow you to easily develop a suite of deep learning models. In this tutorial, you will discover a step-by-step guide to developing deep learning models in PyTorch. The focus of this tutorial is on using the PyTorch API for common deep learning model development tasks; we will not be diving into the math and theory of deep learning. For that, I recommend starting with this excellent book.
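To give a flavor of the API the tutorial covers, here is a minimal training step: define a model, a loss, and an optimizer, then run one forward and backward pass. The layer sizes and the random data are placeholders, not the tutorial's own examples.

```python
# One training step in PyTorch. The architecture and the dummy batch
# are illustrative assumptions.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 3),  # e.g., logits for a 3-class problem
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(32, 4)          # a dummy batch of 32 samples
y = torch.randint(0, 3, (32,))  # dummy integer class labels

optimizer.zero_grad()
loss = loss_fn(model(X), y)     # forward pass
loss.backward()                 # automatic differentiation
optimizer.step()                # gradient descent update
```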


Explaining machine learning models to the business

#artificialintelligence

Explainable machine learning is a sub-discipline of artificial intelligence (AI) and machine learning that attempts to summarize how machine learning systems make decisions. Summarizing how machine learning systems make decisions can be helpful for a lot of reasons, like finding data-driven insights, uncovering problems in machine learning systems, facilitating regulatory compliance, and enabling users to appeal -- or operators to override -- inevitable wrong decisions. Of course, all that sounds great, but explainable machine learning is not yet a perfect science. Figure 1 shows explanations created by H2O Driverless AI; these explanations are probably better suited for data scientists than for business users.


Introducing GIG: A Practical Method for Explaining Diverse Ensemble Machine Learning Models

#artificialintelligence

Machine learning is proven to yield better underwriting results and mitigate bias in lending. But not all machine learning techniques, including the wide swath at work in unregulated uses, are built to be transparent. Many of the algorithms that get deployed generate results that are difficult to explain. Recently, researchers have proposed novel and powerful methods for explaining machine learning models, notably Shapley Additive Explanations (SHAP) and Integrated Gradients (IG). These methods provide mechanisms for assigning credit to the data variables a model uses to generate a score.
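For intuition about the credit-assignment idea, here is a rough sketch of Integrated Gradients for a differentiable model: accumulate gradients along the straight-line path from a baseline to the input, then scale by the input's distance from that baseline. The model, shapes, and baseline choice are assumptions; this is not the article's GIG method itself.

```python
# A rough sketch of Integrated Gradients for a differentiable model.
# Everything here (model, input shape, zero baseline) is illustrative.
import torch


def integrated_gradients(model, x, baseline, steps=50):
    """Approximate IG attributions for a single input vector x."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)  # (steps, n_features)
    path.requires_grad_(True)
    model(path).sum().backward()               # gradients at every step at once
    avg_grads = path.grad.mean(dim=0)          # Riemann approximation of the path integral
    return (x - baseline) * avg_grads          # per-feature credit for the score


# Example (hypothetical names): attributions for one input against
# an all-zeros baseline.
# attributions = integrated_gradients(net, x, torch.zeros_like(x))
```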