A Guide to Selecting Machine Learning Models in Python

#artificialintelligence

Model testing is a key part of model building. When done correctly, testing ensures your model is stable and isn't overfit. The three most well-known methods of model testing are the randomized train-test split, K-fold cross-validation, and leave-one-out cross-validation. Feature selection is another important part of model building, as it directly impacts model performance and interpretability. The simplest method of feature selection is manual selection, ideally guided by domain expertise.
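
As an illustration of these three testing strategies, here is a minimal scikit-learn sketch; the Iris dataset and logistic regression estimator are placeholders, not taken from the article.

```python
# Minimal sketch of the three testing strategies with scikit-learn;
# the dataset and estimator are placeholders, not from the article.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (
    train_test_split, cross_val_score, KFold, LeaveOneOut)

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 1. Randomized train-test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print("hold-out accuracy:", model.fit(X_train, y_train).score(X_test, y_test))

# 2. K-fold cross-validation (k=5)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=kfold).mean())

# 3. Leave-one-out cross-validation
print("LOOCV accuracy:", cross_val_score(model, X, y, cv=LeaveOneOut()).mean())
```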


Deploy Machine Learning Models leveraging CherryPy and Docker - Analytics Vidhya

#artificialintelligence

Deployment of a machine learning model means making your model's predictions available to users through an API or a web application. In this post, we will look at how to leverage CherryPy and Docker to deploy a machine learning model. CherryPy is an object-oriented Python web framework. A web framework is a software framework that assists us in developing web applications.
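
As a rough illustration of what such a deployment can look like, here is a minimal CherryPy sketch that serves predictions from a pickled model over a JSON endpoint; the model file name, payload layout, and port are assumptions, not the article's actual code. A Dockerfile would then only need to install the dependencies and run this script.

```python
# Hedged sketch of serving model predictions with CherryPy.
# "model.pkl" and the "features" payload key are placeholders.
import pickle
import cherrypy

class PredictionService:
    def __init__(self, model_path="model.pkl"):
        with open(model_path, "rb") as f:
            self.model = pickle.load(f)

    @cherrypy.expose
    @cherrypy.tools.json_in()
    @cherrypy.tools.json_out()
    def predict(self):
        payload = cherrypy.request.json          # e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
        prediction = self.model.predict([payload["features"]])
        return {"prediction": prediction.tolist()}

if __name__ == "__main__":
    cherrypy.config.update({"server.socket_host": "0.0.0.0",
                            "server.socket_port": 8080})
    cherrypy.quickstart(PredictionService())
```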


ProtoTree: Addressing the black-box nature of deep learning models

#artificialintelligence

One of the biggest obstacles to the adoption of Artificial Intelligence is that it cannot explain what a prediction is based on. Such machine-learning systems are so-called black boxes: the reasoning behind a decision is not self-evident to the user. Meike Nauta, Ph.D. candidate at the Data Science group within the EEMCS faculty of the University of Twente, created a model to address the black-box nature of deep learning models. Algorithms can already make accurate predictions, such as medical diagnoses, but they cannot explain how they arrived at such a prediction. In recent years, a lot of attention has been paid to the field of explainable AI.


Validate computer vision deep learning models

#artificialintelligence

This code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path. After a deep learning computer vision model is trained and deployed, it is often necessary to periodically (or continuously) evaluate the model with new test data. This developer code pattern provides a Jupyter Notebook that will take test images with known "ground-truth" categories and evaluate the inference results versus the truth. We will use a Jupyter Notebook to evaluate an IBM Maximo Visual Inspection image classification model. You can train a model using the provided example or test your own deployed model.
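
The Maximo Visual Inspection API calls themselves are not reproduced here; the following generic sketch only illustrates the evaluation step, comparing predicted categories against known ground-truth categories with scikit-learn. The label lists are made-up placeholders.

```python
# Generic sketch of comparing inference results against ground-truth labels;
# the lists below stand in for categories returned by a deployed model
# (the actual Maximo Visual Inspection API calls are not shown).
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

y_true = ["defect", "ok", "ok", "defect", "ok"]        # known ground-truth categories
y_pred = ["defect", "ok", "defect", "defect", "ok"]    # categories returned by the model

print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=["defect", "ok"]))
print(classification_report(y_true, y_pred))
```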


Dashboards for Interpreting & Comparing Machine Learning Models

#artificialintelligence

With so many machine learning algorithms now available in the Data Science field, it is genuinely difficult for a user, Data Scientist, or ML Engineer to select the best model for the dataset they are working on. Comparing different models is one way of selecting the best one, but it is a time-consuming process in which we create different machine learning models and then compare their performance. It is also hard to rely on, because most models are black boxes: we don't know what is going on inside the model or how it will behave. In short, we cannot interpret the model because of its complexity and its black-box nature. Without interpreting a model, it is difficult to understand how it is behaving and how it will behave on new data.
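
For context, here is a hedged sketch of the manual, side-by-side comparison the article describes as time-consuming; the candidate models and dataset are placeholders, not from the article.

```python
# Sketch of manually comparing several candidate models with cross-validation;
# the models and dataset are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```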


Deployment of Machine Learning Models in Production

#artificialintelligence

Are you ready to kickstart your Advanced NLP course? Are you ready to deploy your machine learning models in production on AWS? You will learn each and every step of building and deploying your ML model on a robust and secure AWS server. Prior knowledge of Python and Data Science is assumed. If you are an absolute beginner in Data Science, please do not take this course. This course is made for intermediate or advanced-level Data Scientists.


Machine Learning Models

#artificialintelligence

In practice, "applying machine learning" means that you apply an algorithm to data, and that algorithm creates a model that captures the trends in the data. There are many different types of machine learning models to choose from, and each has its own characteristics that may make it more or less appropriate for a given dataset. This page gives an overview of different types of machine learning models available for supervised learning; that is, for problems where we build a model to predict a response. Within supervised learning there are two categories of models: regression (when the response is continuous) and classification (when the response belongs to a set of classes).
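
A brief, hedged sketch of the two supervised categories with scikit-learn follows; the toy datasets are placeholders used only to show the distinction.

```python
# Illustrative sketch: one regression model and one classification model
# (datasets are scikit-learn toy sets used only as placeholders).
from sklearn.datasets import load_diabetes, load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: the response is continuous
X_reg, y_reg = load_diabetes(return_X_y=True)
reg = LinearRegression().fit(X_reg, y_reg)
print("R^2:", reg.score(X_reg, y_reg))

# Classification: the response belongs to a set of classes
X_clf, y_clf = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X_clf, y_clf)
print("accuracy:", clf.score(X_clf, y_clf))
```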


How to Build your First Machine Learning Model in Python

#artificialintelligence

So what machine learning model are we building today? In this article, we are going to be building a regression model using the random forest algorithm on the solubility dataset. After model building, we are going to apply the model to make predictions followed by model performance evaluation and data visualization of its results. So which dataset are we going to use? The default answer may be to use a toy dataset as an example such as the Iris dataset (classification) or the Boston housing dataset (regression).
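
As a sketch of the workflow described (build, predict, evaluate), the following assumes a hypothetical solubility.csv file with a logS target column; the article's actual dataset source is not reproduced here.

```python
# Sketch of the described workflow: random forest regression, prediction,
# and performance evaluation. "solubility.csv" and the "logS" target column
# are placeholders standing in for the article's dataset.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("solubility.csv")             # placeholder path
X = df.drop(columns=["logS"])                  # molecular descriptors
y = df["logS"]                                 # experimental solubility

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, y_pred))
print("R^2:", r2_score(y_test, y_pred))
```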


Beijing AI academy unveils world's largest pre-trained deep learning model

#artificialintelligence

The Beijing Academy of Artificial Intelligence (BAAI) unveiled a newer version of its hyper-scale pre-trained deep learning model, the country's first and the world's largest, at an ongoing AI-themed forum in Beijing, in the latest signal of China's ambition to become a global leader in AI. The latest version of the model, known as Wudao, literally meaning an understanding of natural laws, sports 1.75 trillion parameters, breaking the record of 1.6 trillion previously set by Google's Switch Transformer AI language model, the academy announced Tuesday at the three-day forum that runs through Thursday. Wudao was initially released only in March. Wudao is intended to create cognitive intelligence driven jointly by data and knowledge, making machines think like humans and enabling machine cognitive abilities to pass the Turing test, Tang Jie, BAAI's vice director of academics, said during the forum. The newer version of Wudao is both gigantic and smart, featuring hyper scale, high precision, and efficiency.


Finding Best Hyper Parameters For Deep Learning Model

#artificialintelligence

Creating a deep learning model has become an easy task nowadays thanks to new, efficient, and fast libraries like Keras. One can easily create a model using the different functionalities of Keras, but the difficult part is optimizing the model to get higher accuracy. We can tune the hyperparameters to make the model more efficient, but it can sometimes feel like a never-ending process. Storm Tuner is a hyperparameter tuner used to search for the best hyperparameters for a deep learning neural network. It helps find the most optimized hyperparameters for the model we create in fewer than 25 trials.
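
Storm Tuner's own API is not reproduced here; instead, the following is a generic, hedged random-search sketch over a few hyperparameters of a small Keras network, to illustrate the kind of search such a tuner automates. The data, layer sizes, and trial budget are placeholders.

```python
# Generic random-search sketch (not the Storm Tuner API) over a few
# hyperparameters of a small Keras network; data shapes are placeholders.
import random
import numpy as np
from tensorflow import keras

X_train = np.random.rand(500, 20)
y_train = np.random.randint(0, 2, size=500)

def build_model(units, learning_rate):
    model = keras.Sequential([
        keras.layers.Dense(units, activation="relu", input_shape=(20,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

best_acc, best_params = 0.0, None
for trial in range(25):                                  # the 25-trial budget
    params = {"units": random.choice([16, 32, 64, 128]),
              "learning_rate": random.choice([1e-2, 1e-3, 1e-4])}
    model = build_model(**params)
    history = model.fit(X_train, y_train, epochs=5,
                        validation_split=0.2, verbose=0)
    acc = max(history.history["val_accuracy"])
    if acc > best_acc:
        best_acc, best_params = acc, params
print("best params:", best_params, "val accuracy:", round(best_acc, 3))
```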