Instructional Theory


How to build an API for a machine learning model in 5 minutes using Flask Codementor

#artificialintelligence

As a data science consultant, I want to make an impact with my machine learning models. However, this is easier said than done. A new project starts with exploring the data in a Jupyter notebook. Once you have a full understanding of the data you're dealing with and have aligned with the client on what steps to take, one possible outcome is a predictive model. You get excited and go back to your notebook to build the best model possible.
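A minimal sketch of what such an API can look like. The route name, JSON field, and placeholder model below are illustrative assumptions; in practice you would load the model you trained in your notebook (e.g. from a pickle file) instead of fitting one at startup.

```python
# Minimal Flask API serving a scikit-learn model (illustrative sketch).
from flask import Flask, request, jsonify
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Placeholder model trained at startup; a real service would load a
# serialized model that was trained and validated beforehand.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON such as {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

# Start the development server with: app.run(port=5000)
```

A client can then POST feature values to `/predict` and receive the predicted class as JSON.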


Researchers introduce ML model where learnability cannot be proved Packt Hub

#artificialintelligence

In a study published in Nature Machine Intelligence, researchers discovered that in some cases of machine learning it cannot be proved whether the system actually 'learned' something or solved the problem. They explore learnability in machine learning. We already know that machine learning systems, and AI systems in general, are black boxes: you feed the system some data and get some output, or a trained system that performs some task, but you don't know how the system arrived at a particular solution. Now a published study from Ben-David et al. shows that learnability in machine learning can be undecidable.


Hands-on Machine Learning Model Interpretation – Towards Data Science

#artificialintelligence

Interpreting machine learning models is no longer a luxury but a necessity, given the rapid adoption of AI in industry. This article is a continuation of my series on 'Explainable Artificial Intelligence (XAI)'. The idea is to cut through the hype and equip you with the tools and techniques needed to start interpreting any black-box machine learning model. The previous articles in the series are listed below in case you want to give them a quick skim (they are not mandatory for this article). In this article we give you hands-on guides that showcase various ways to explain potential black-box machine learning models in a model-agnostic way.
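One widely used model-agnostic technique is permutation importance, which treats the model purely as a black box through its predict API. The dataset and model below are illustrative choices, not ones taken from the article:

```python
# Model-agnostic interpretation via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any fitted estimator works here; we only need its predictions.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
print("Most influential feature indices:", top)
```

Because it only perturbs inputs and observes outputs, the same code works unchanged for any classifier with a `score` method.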


Building our First Machine Learning Model (Pt. 4)

#artificialintelligence

In this post I'll show you a quick and easy example of applying machine learning to a dataset and making predictions with our model. I won't dive too deep into how to use these tools, but what I cover here should be enough to get you through most datasets. First let's get the iris dataset: you can download it here and save it as a CSV.
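The workflow sketched below mirrors the post's steps, using the iris dataset bundled with scikit-learn rather than a downloaded CSV; the classifier choice is an illustrative assumption:

```python
# Quick end-to-end example: load data, train a model, predict.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hold out a quarter of the rows so we can check predictions on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
predictions = model.predict(X_test)
print("Test accuracy:", model.score(X_test, y_test))
```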


r/MachineLearning - [P] AiFiddle: a Web GUI to Build, Visualise and Share Deep Learning Models

#artificialintelligence

For a few months now I've been working on a web GUI to build, visualise, train and share deep neural models. Encouraged by the positive feedback I received from this community when I posted a preview, I've continued developing it, and I now think it's ready for a beta release. The editor can be found here: https://aifiddle.io. Your feedback, ideas and suggestions are greatly appreciated, so please don't hesitate to share them.


How To Fine Tune Your Machine Learning Models To Improve Forecasting Accuracy?

#artificialintelligence

Fine-tuning a machine learning predictive model is a crucial step in improving the accuracy of the forecasted results. In the recent past, I have written a number of articles explaining how machine learning works and how to enrich and decompose the feature set to improve the accuracy of your machine learning models. I am often asked about the techniques that can be used to tune the forecasting models once the features are stable and the feature set is decomposed. When everything else has been tried, we should look to tune our machine learning models. This diagram illustrates how parameters can depend on one another.
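One standard tuning technique is grid search with cross-validation, which handles the parameter-interdependence point above by evaluating every combination rather than tuning each parameter in isolation. The model and grid below are illustrative assumptions:

```python
# Hyperparameter tuning with cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Parameters can interact (e.g. tree depth vs. number of trees), so the
# grid scores every combination with 5-fold cross-validation.
param_grid = {"n_estimators": [50, 100], "max_depth": [2, 4, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```

Grid search scales poorly with the number of parameters; randomized or Bayesian search (as in the next item) is preferable for larger spaces.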


Using Bayesian Optimization to Tune Machine Learning Models - insideBIGDATA

#artificialintelligence

The presentation below, "Using Bayesian Optimization to Tune Machine Learning Models" by Scott Clark of SigOpt, is from MLconf. The talk briefly introduces Bayesian global optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. The talk motivates the problem and gives example applications. Clark also talks about the development of a robust benchmark suite for SigOpt's algorithms, including test selection, metric design, infrastructure architecture, visualization, and comparison to other standard and open-source methods. He discusses how this evaluation framework empowers SigOpt's research engineers to confidently and quickly make changes to their core optimization engine.
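A toy sketch of the core idea behind Bayesian optimization, not SigOpt's actual method: fit a Gaussian process surrogate to the expensive evaluations seen so far, then choose the next point with an acquisition rule (upper confidence bound here). The objective, candidate grid, and constants are all illustrative:

```python
# Hand-rolled Bayesian optimization loop over a 1-D toy objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_objective(x):
    # Stand-in for a slow model-training run; maximum is at x = 2.
    return -(x - 2.0) ** 2

candidates = np.linspace(-5, 5, 200).reshape(-1, 1)
X_obs = np.array([[-4.0], [0.0], [4.0]])              # initial evaluations
y_obs = np.array([expensive_objective(x[0]) for x in X_obs])

for _ in range(10):
    # Surrogate model of the objective from the points evaluated so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.96 * std                           # explore vs. exploit
    x_next = candidates[np.argmax(ucb)]
    X_obs = np.vstack([X_obs, [x_next]])
    y_obs = np.append(y_obs, expensive_objective(x_next[0]))

print("Best x found:", X_obs[np.argmax(y_obs)][0])
```

The budget of 13 total evaluations is far smaller than an exhaustive sweep over the 200 candidates, which is the whole appeal when each evaluation is expensive.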


Configure, monitor, and understand machine learning models with IBM AI OpenScale

#artificialintelligence

The new Monitor WML models with AI OpenScale code pattern shows you how to gain insight into a machine learning model using IBM AI OpenScale. The pattern provides examples of how to configure the AI OpenScale service. You can then enable and explore a model deployed with Watson Machine Learning, and create fairness and accuracy measures for the model. IBM AI OpenScale is an open platform that enables organizations to automate and operate their AI across its full lifecycle. AI OpenScale provides a powerful environment for managing AI and ML models on IBM Cloud, IBM Cloud Private, or other platforms.


How To Develop a Machine Learning Model From Scratch

#artificialintelligence

In this article we are going to study in depth how a machine learning model is developed. Many concepts will be explained here; others, more specific ones, are reserved for future articles. The first, and one of the most critical, steps is to find out what the inputs and the expected outputs are. It is crucial to keep in mind that machine learning can only memorize patterns that are present in the training data, so the model can only recognize what it has seen before. When using machine learning we are assuming that the future will behave like the past, and this isn't always true.
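The closing point can be made concrete with a tiny experiment on synthetic data (the data and model below are illustrative assumptions): a model does fine on inputs like those it was trained on, but the "future behaves like the past" assumption breaks down outside the training range.

```python
# A model only recognizes patterns it has seen: extrapolation fails.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(200, 1))   # inputs seen: x in [0, 10]
y_train = 3.0 * X_train[:, 0]                 # true relationship: y = 3x

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

print("Inside training range,  x=5   ->", model.predict([[5.0]])[0])
print("Outside training range, x=100 ->", model.predict([[100.0]])[0])
# A tree-based model cannot extrapolate beyond its training data, so the
# prediction at x=100 stays near the largest training target (about 30)
# instead of the true value of 300.
```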


Impact of Dataset Size on Deep Learning Model Skill And Performance Estimates

#artificialintelligence

Supervised learning is challenging, although the depth of this challenge is often learned and then forgotten or willfully ignored. This must be the case, because dwelling too long on the challenge may result in a pessimistic outlook. In spite of it, we continue to wield supervised learning algorithms, and they perform well in practice. It is common knowledge that too little training data results in a poor approximation, and that too little test data results in an optimistic, high-variance estimate of model performance. It is critical to make this "common knowledge" concrete with worked examples. In this post, we will work through a detailed case study of developing a Multilayer Perceptron neural network on a simple two-class classification problem. You will discover that, in practice, we often don't have enough data to learn the mapping function or to evaluate models, yet supervised learning algorithms like neural networks remain remarkably effective.
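The shape of that case study can be sketched in a few lines: train the same MLP on increasing amounts of data from a synthetic two-class problem and watch the skill estimate change. The dataset, sizes, and network settings below are illustrative assumptions, not the article's exact setup:

```python
# Effect of training-set size on MLP skill, estimated on a large test set.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A large held-out test set approximates the "true" performance.
X_test, y_test = make_moons(n_samples=2000, noise=0.2, random_state=1)

for n in [20, 100, 500, 2000]:
    X_train, y_train = make_moons(n_samples=n, noise=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000,
                          random_state=0)
    model.fit(X_train, y_train)
    acc = model.score(X_test, y_test)
    print(f"train size {n:5d} -> test accuracy {acc:.3f}")
```

Shrinking the test set instead of the training set would show the other half of the story: the accuracy estimate itself becomes noisy and optimistic.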