
Using SHAP to Explain Machine Learning Models

#artificialintelligence

Do you understand how your machine learning model works? Despite the ever-increasing use of machine learning (ML) and deep learning (DL) techniques, the majority of companies say they can't explain the decisions of their ML algorithms [1]. This is, at least in part, due to the increasing complexity of both the data and the models used. It's not easy to find a nice, stable aggregation over 100 decision trees in a random forest that says which features are most important or how the model came to the conclusion it did. This problem grows even more complex in application domains such as computer vision (CV) or natural language processing (NLP), where we no longer have the same high-level, understandable features to help us understand the model's failures.
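The idea SHAP approximates at scale can be shown exactly on a toy problem. The sketch below is pure Python; the feature names, weights, and zero baseline are all hypothetical, and a real workflow would use the `shap` library rather than this brute-force enumeration. It computes exact Shapley values by averaging each feature's marginal contribution over every coalition of the other features:

```python
from itertools import combinations
from math import factorial

# Toy "model": a score from three features. Names and weights are
# hypothetical, chosen only to illustrate the computation.
def model(x):
    return 3.0 * x["age"] + 2.0 * x["income"] + x["age"] * x["tenure"]

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    Features outside a coalition are set to their baseline value, a
    simple stand-in for the background-data sampling SHAP uses on
    real models."""
    features = list(x)
    n = len(features)
    values = {}
    for f in features:
        others = [g for g in features if g != f]
        phi = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                with_f = {g: (x[g] if g in coalition or g == f else baseline[g])
                          for g in features}
                without_f = {g: (x[g] if g in coalition else baseline[g])
                             for g in features}
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (model(with_f) - model(without_f))
        values[f] = phi
    return values

x = {"age": 2.0, "income": 1.0, "tenure": 3.0}
baseline = {"age": 0.0, "income": 0.0, "tenure": 0.0}
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to the prediction gap.
assert abs(sum(phi.values()) - (model(x) - model(baseline))) < 1e-9
```

Note the efficiency check at the end: the per-feature attributions always sum to the gap between the model's prediction and the baseline prediction, which is what makes Shapley values a coherent way to split credit across features — including the interaction term, which is split evenly between `age` and `tenure` here.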


Mastering Machine Learning Explainability in Python

#artificialintelligence

For data scientists, a key part of interpreting machine learning models is understanding which factors impact predictions. In order to effectively use machine learning in their decision-making processes, companies need to know which factors are most important. For example, if a company wants to predict the likelihood of customer churn, it might also want to know what exactly drives a customer to leave a company. In this example, the model might indicate that customers who purchase products that rarely go on sale are much more likely to stop purchasing. Armed with this knowledge, a company can make smarter pricing decisions in the future.
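One simple way to surface such drivers is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops. The sketch below uses a hypothetical synthetic churn dataset and a stand-in threshold "model" (all names and rules are illustrative, not from any real customer data):

```python
import random

random.seed(0)

# Hypothetical churn data: customers who mostly buy items that
# rarely go on sale churn; tenure is pure noise here.
def make_row():
    rarely_on_sale = random.random()   # share of purchases rarely discounted
    tenure = random.random()           # normalized account age
    churned = 1 if rarely_on_sale > 0.6 else 0
    return [rarely_on_sale, tenure], churned

data = [make_row() for _ in range(500)]
X = [row for row, _ in data]
y = [label for _, label in data]

# Stand-in "model": a threshold on the first feature,
# roughly what a decision tree might learn.
def predict(row):
    return 1 if row[0] > 0.6 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Drop in accuracy after shuffling one feature column."""
    base = accuracy(X, y)
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, v in zip(shuffled, column):
        row[feature] = v
    return base - accuracy(shuffled, y)

imp_sale = permutation_importance(X, y, 0)
imp_tenure = permutation_importance(X, y, 1)
print(f"importance: rarely_on_sale={imp_sale:.2f}, tenure={imp_tenure:.2f}")
```

Because the stand-in model never looks at tenure, shuffling that column costs nothing, while shuffling the discount feature destroys most of the accuracy — exactly the kind of signal that would tell the company which lever to pull on pricing.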


Image-Based Deep Learning Models Can Predict Abdominal Surgery Outcomes: Study

#artificialintelligence

Deep learning is a type of machine learning in which a model learns to perform classification tasks directly from images, text or sound. Image-based deep learning models (DLMs) have been used in other disciplines, but this method has yet to be used to predict surgical outcomes. With this background, researchers carried out a study to examine whether deep learning models (DLMs) using routine preoperative imaging can predict surgical complexity and outcomes in abdominal wall reconstruction. They applied image-based deep learning to predict complexity, defined as need for component separation, and pulmonary and wound complications after abdominal wall reconstruction (AWR). This quality improvement study was performed at an 874-bed hospital and tertiary hernia referral center from September 2019 to January 2020.


Deep-learning model improves radiologist interpretation of X-rays

#artificialintelligence

Consequently, the use of artificial-intelligence deep-learning algorithms to assist with the interpretation of X-rays has the potential to improve diagnostic …


How To Optimise Deep Learning Models

#artificialintelligence

An increasing number of parameters, latency, and the resources required for training have made working with deep learning tricky. Google researchers, in an extensive survey, have identified common challenge areas for deep learning practitioners and suggested key checkpoints to mitigate them. According to Gaurav Menghani of Google Research, if one were to deploy a model on smartphones, where inference is constrained, or on cloud servers, where it is expensive, attention should be paid to inference efficiency. And if a large model has to be trained from scratch with limited training resources, models designed for training efficiency would be the better choice. According to Menghani, practitioners should aim for Pareto optimality, i.e., any model chosen should offer the best available trade-off between these objectives.
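Pareto optimality can be made concrete with a small sketch. Given hypothetical (accuracy, latency) numbers for candidate models — the names and figures below are invented for illustration — a model is Pareto-optimal when no other model beats it on both axes at once:

```python
# Hypothetical (accuracy, latency_ms) results for candidate models.
models = {
    "small":   (0.88, 5.0),
    "medium":  (0.91, 12.0),
    "large":   (0.93, 40.0),
    "bloated": (0.92, 55.0),  # dominated by "large": less accurate AND slower
}

def pareto_front(models):
    """Keep every model that no other model strictly dominates."""
    front = {}
    for name, (acc, lat) in models.items():
        dominated = any(
            a >= acc and l <= lat and (a > acc or l < lat)
            for other, (a, l) in models.items() if other != name
        )
        if not dominated:
            front[name] = (acc, lat)
    return front

print(sorted(pareto_front(models)))  # ['large', 'medium', 'small']
```

Every model on the front represents a defensible accuracy/latency trade-off; anything off the front (like "bloated" here) can be discarded outright, since a strictly better option exists.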


A multi-stage machine learning model on diagnosis of esophageal manometry

#artificialintelligence

High-resolution manometry (HRM) is the primary procedure used to diagnose esophageal motility disorders. Its interpretation and classification include an initial evaluation of swallow-level outcomes and then derivation of a study-level diagnosis based on the Chicago Classification (CC), using a tree-like algorithm. This diagnostic approach to motility disorders using HRM was mirrored in a multi-stage modeling framework developed using a combination of machine learning approaches. Specifically, the framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage. In the swallow-level stage, three models based on convolutional neural networks (CNNs) were developed to predict swallow type, swallow pressurization, and integrated relaxation pressure (IRP).


ProtoTree: Addressing the black-box nature of deep learning models

#artificialintelligence

One of the biggest obstacles to the adoption of artificial intelligence is that it cannot explain what a prediction is based on. Machine-learning systems are called black boxes when the reasoning behind a decision is not self-evident to a user. Meike Nauta, Ph.D. candidate in the Data Science group within the EEMCS faculty of the University of Twente, created a model to address the black-box nature of deep learning models. Algorithms can already make accurate predictions, such as medical diagnoses, but they cannot explain how they arrived at such a prediction. In recent years, a lot of attention has therefore been paid to the field of explainable AI.


Validate computer vision deep learning models

#artificialintelligence

This code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path. After a deep learning computer vision model is trained and deployed, it is often necessary to periodically (or continuously) evaluate the model with new test data. This developer code pattern provides a Jupyter Notebook that will take test images with known "ground-truth" categories and evaluate the inference results versus the truth. We will use a Jupyter Notebook to evaluate an IBM Maximo Visual Inspection image classification model. You can train a model using the provided example or test your own deployed model.
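The evaluation step such a notebook performs boils down to comparing inference results against the known ground truth. A minimal stdlib sketch (the class names and labels below are made up, and Maximo Visual Inspection's actual API is not shown) tallies overall accuracy and per-class confusion counts:

```python
from collections import Counter

# Hypothetical inference results paired with known ground-truth labels,
# as a notebook like this would collect them from the deployed model.
ground_truth = ["defect", "ok", "ok", "defect", "ok", "defect"]
predicted    = ["defect", "ok", "defect", "defect", "ok", "ok"]

# Per-class confusion counts, keyed by (truth, prediction) pairs.
confusion = Counter(zip(ground_truth, predicted))

accuracy = sum(t == p for t, p in zip(ground_truth, predicted)) / len(ground_truth)
print(f"accuracy = {accuracy:.2f}")               # 4 of 6 correct
print("missed defects:", confusion[("defect", "ok")])
```

Tracking these counts over time — rather than a single accuracy number — is what makes periodic re-evaluation useful: a model whose accuracy holds steady but whose missed-defect count creeps up is degrading in exactly the way that matters.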


Beijing AI academy unveils world's largest pre-trained deep learning model

#artificialintelligence

The Beijing Academy of Artificial Intelligence (BAAI) unveiled a newer version of its hyper-scale pre-trained deep learning model, the country's first and the world's largest, at an ongoing AI-themed forum in Beijing, in the latest signal of China's ambition to become a global leader in AI. The latest version of the model, known as Wudao, literally meaning an understanding of natural laws, sports 1.75 trillion parameters, breaking the record of 1.6 trillion previously set by Google's Switch Transformer AI language model, the academy announced Tuesday at the three-day forum that runs through Thursday. Wudao was first released only in March. Wudao is intended to create cognitive intelligence dually driven by data and knowledge, making machines think like humans and enabling machine cognitive abilities to pass the Turing test, Tang Jie, BAAI's vice director of academics, said during the forum. The newer version of Wudao is both gigantic and smart, featuring hyper scale, high precision, and high efficiency.


Finding Best Hyper Parameters For Deep Learning Model

#artificialintelligence

Creating a deep learning model has become an easy task nowadays thanks to the advent of efficient, fast libraries like Keras. One can easily build a model using Keras's different functionalities, but the difficult part is optimizing the model for higher accuracy. We can tune the hyperparameters to make the model more efficient, but this can sometimes become a never-ending process. Storm Tuner is a hyperparameter tuner used to search for the best hyperparameters for a deep learning neural network. It helps find the most optimized hyperparameters for the model we create in fewer than 25 trials.
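Storm Tuner's exact API aside, the underlying idea is a trial-based search: sample a configuration, score it, keep the best. The sketch below is a generic random search with a stand-in scoring function — not Storm Tuner's real interface — and in practice the scoring step would train and validate a Keras model with the sampled settings:

```python
import random

random.seed(1)

# Hypothetical search space; real tuners let you declare ranges like
# this and handle the sampling for you.
search_space = {
    "units":         [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout":       [0.0, 0.2, 0.5],
}

def validation_score(params):
    """Stand-in for 'train a model, return validation accuracy'.
    This made-up score peaks at 128 units, lr 1e-3, dropout 0.2."""
    score = 1.0
    score -= abs(params["units"] - 128) / 256
    score -= abs(params["learning_rate"] - 1e-3) * 10
    score -= abs(params["dropout"] - 0.2)
    return score

best_params, best_score = None, float("-inf")
for trial in range(25):  # a budget of 25 trials, as the article suggests
    params = {k: random.choice(v) for k, v in search_space.items()}
    score = validation_score(params)
    if score > best_score:
        best_params, best_score = params, score

print("best:", best_params, f"score={best_score:.2f}")
```

The never-ending-process problem the article mentions is exactly what the fixed trial budget addresses: you bound the cost up front and accept the best configuration found within it, rather than exhaustively sweeping every combination.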