

A/B testing machine learning models in production


There is (rightfully) quite a bit of emphasis on testing and optimizing models pre-deployment in the machine learning ecosystem, with meta machine learning platforms like Comet becoming a standard part of the data science stack. There has been less of an emphasis, however, on testing and optimizing models post-deployment, at least as far as tooling is concerned. This dearth of tooling has forced many to build extra in-house infrastructure, adding yet another bottleneck to deploying to production. We've spent a lot of time thinking about A/B testing deployed models in Cortex, our open source ML deployment platform. After several iterations, we've built a set of features that make it easy to conduct scalable, automated A/B tests of deployed models.
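A model A/B test begins by splitting live traffic between variants. A minimal sketch of weighted random routing is below; the variant names and weights are hypothetical illustrations, not Cortex's actual configuration or API.

```python
import random

# Hypothetical variants and traffic weights: 90% of requests go to the
# incumbent model, 10% to the challenger.
VARIANTS = {"model-a": 0.9, "model-b": 0.1}

def choose_variant(variants):
    """Pick a variant name according to its traffic weight."""
    names = list(variants)
    weights = [variants[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Route a batch of simulated requests and tally how traffic was split.
counts = {name: 0 for name in VARIANTS}
for _ in range(10_000):
    counts[choose_variant(VARIANTS)] += 1
```

In a real deployment the router would sit in front of the model endpoints and log each request's variant alongside the prediction outcome, so the two models' live performance can be compared.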

Deployment of Machine Learning Models in Production


Deploy ML models in production with BERT, DistilBERT, and FastText NLP models using Flask, uWSGI, and NGINX on AWS EC2. Created by Laxmi Kant, KGP Talkie, on Udemy. This complete end-to-end NLP course covers:

- How to work with BERT in Google Colab
- How to use BERT for text classification
- How to fine-tune and deploy a production-ready ML model with Flask
- How to deploy an ML model in production at AWS, on Ubuntu and Windows servers
- How to develop and deploy a FastText model on AWS
- Multi-label and multi-class classification in NLP
- How to optimize your NLP code

Are you ready to kickstart your advanced NLP course and deploy your machine learning models in production at AWS? You will learn each step of building and deploying your ML model on a robust and secure server at AWS. Prior knowledge of Python and Data Science is assumed. If you are an absolute beginner in Data Science, please do not take this course; it is made for intermediate and advanced Data Scientists.
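The serving stack described here (Flask behind uWSGI and NGINX) reduces to an HTTP endpoint that loads a model and answers prediction requests. A minimal sketch follows, written as a bare WSGI app so it needs no third-party packages; `classify` is a toy stand-in for a fine-tuned classifier, not the course's actual code.

```python
import io
import json
from wsgiref.util import setup_testing_defaults

def classify(text):
    """Toy stand-in for a fine-tuned text classifier."""
    return "positive" if "good" in text.lower() else "negative"

def app(environ, start_response):
    """WSGI prediction endpoint: read a JSON body, return a JSON label."""
    size = int(environ.get("CONTENT_LENGTH") or 0)
    data = json.loads(environ["wsgi.input"].read(size) or b"{}")
    body = json.dumps({"label": classify(data.get("text", ""))}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Exercise the endpoint in-process, the way a WSGI server would call it.
environ = {}
setup_testing_defaults(environ)
payload = json.dumps({"text": "The product is good"}).encode()
environ["wsgi.input"] = io.BytesIO(payload)
environ["CONTENT_LENGTH"] = str(len(payload))
statuses = []
response = b"".join(app(environ, lambda status, headers: statuses.append(status)))
```

In production the same callable would be served by uWSGI with NGINX as a reverse proxy, with the toy `classify` replaced by a loaded model.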

Machine Learning Model Development


If you intend to take the certification, this will be a good starting point. If you don't, this will help you develop the basic know-how needed to succeed in a rapidly evolving Machine Learning ecosystem. This is not a certification study guide. This article's objective is to provide a simple explanation of complex ideas and give a broad view of the subject matter. The outline mimics the GCP Professional Machine Learning Engineer certification guide.

Is Your Machine Learning Model Likely to Fail? - KDnuggets


TL;DR -- Amidst intentions of generating brilliant statistical analyses and breakthroughs in machine learning, don't get tripped up by these five common mistakes in the Data Science planning process. As a Federal consultant, I work with U.S. government agencies that conduct scientific research, support veterans, offer medical services, and maintain healthcare supply chains. Data Science can be a very important tool to help these teams advance their mission-driven work, and I'm deeply invested in making sure we don't waste time and energy on misguided Data Science models. Based on my experience, I'm sharing hard-won lessons about five missteps in the Data Science planning process -- shortfalls that you can avoid if you follow these recommendations. Just like the visible light spectrum, the work we do as Data Scientists constitutes a small portion of a broader range.

A Survey on Recent Advances in Sequence Labeling from Deep Learning Models Artificial Intelligence

Sequence labeling (SL) is a fundamental research problem encompassing a variety of tasks, e.g., part-of-speech (POS) tagging, named entity recognition (NER), and text chunking. Though prevalent and effective in many downstream applications (e.g., information retrieval, question answering, and knowledge graph embedding), conventional sequence labeling approaches heavily rely on hand-crafted or language-specific features. Recently, deep learning has been employed for sequence labeling tasks due to its powerful capability in automatically learning complex features of instances and effectively yielding state-of-the-art performance. In this paper, we aim to present a comprehensive review of existing deep learning-based sequence labeling models, covering three related tasks: part-of-speech tagging, named entity recognition, and text chunking. We then systematically present the existing approaches based on a scientific taxonomy, as well as the widely used experimental datasets and popularly adopted evaluation metrics in the SL domain. Furthermore, we present an in-depth analysis of different SL models, the factors that may affect their performance, and future directions in the SL domain.
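Sequence labeling casts tasks like NER as assigning one tag per token. A small illustration of the widely used BIO scheme that such models predict is below; the sentence and entity spans are invented for the example.

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end, type) token spans into per-token BIO tags:
    B- marks the Beginning of an entity, I- a token Inside it, and O
    a token Outside any entity."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

# Made-up sentence with a person span (tokens 0-1) and a location (token 3).
tokens = ["Barack", "Obama", "visited", "Paris", "."]
spans = [(0, 2, "PER"), (3, 4, "LOC")]
tags = spans_to_bio(tokens, spans)
```

A deep sequence labeling model learns to predict exactly such a tag sequence directly from the tokens, instead of relying on hand-crafted features.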

Recent Trends in the Use of Deep Learning Models for Grammar Error Handling Artificial Intelligence

Grammar error handling (GEH) is an important topic in natural language processing (NLP). GEH includes both grammar error detection and grammar error correction. Recent advances in computation systems have promoted the use of deep learning (DL) models for NLP problems such as GEH. In this survey we focus on two main DL approaches for GEH: neural machine translation models and editor models. We describe the three main stages of the pipeline for these models: data preparation, training, and inference. Additionally, we discuss different techniques to improve the performance of these models at each stage of the pipeline. We compare the performance of different models and conclude with proposed future directions.

Luminoso Introduces Deep Learning Model for Evaluating Sentiment at the Concept Level


Company's state-of-the-art architecture identifies unique concepts within text-based communications and analyzes the sentiment of each concept. Luminoso, the company that automatically turns unstructured text data into business-critical insights, unveiled its new deep learning model for analyzing the sentiment of multiple concepts within the same text-based document. "While sentiment analysis has been prevalent for well over a decade, the most common form of sentiment analysis today involves evaluating whether a document's sentiment is overall more positive than negative," said Adam Carte, CEO of Luminoso. "This type of analysis is overly simplistic, as it fails to address nuanced comments such as customers explaining what they like and dislike about a product, or employee feedback about a company's strengths and weaknesses. With Concept-Level Sentiment in Luminoso Daylight, businesses across industries will be able to upload any text-based document and quickly receive a nuanced analysis of the author's sentiment regarding the topics they wrote about." Luminoso's new deep learning model understands documents using multiple layers of attention, a mechanism that identifies which words are relevant for getting context around a specific concept as expressed by a word or phrase.

Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models


Machine learning and artificial intelligence are helping automate an ever-increasing array of tasks, with ever-increasing accuracy. They are supported by the growing volume of data used to feed them, and the growing sophistication in algorithms. The flip side of more complex algorithms, however, is less interpretability. "Trust models based on responsible authorities are being replaced by algorithmic trust models to ensure privacy and security of data, source of assets and identity of individuals and things. Algorithmic trust helps to ensure that organizations will not be exposed to the risk and costs of losing the trust of their customers, employees and partners. Emerging technologies tied to algorithmic trust include secure access service edge, differential privacy, authenticated provenance, bring your own identity, responsible AI and explainable AI."

DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models Machine Learning

With machine learning models being increasingly applied to various decision-making scenarios, growing effort has gone into making machine learning models more transparent and explainable. Among various explanation techniques, counterfactual explanations have the advantage of being human-friendly and actionable -- a counterfactual explanation tells the user how to gain the desired prediction with minimal changes to the input. Moreover, counterfactual explanations can also serve as efficient probes of a model's decisions. In this work, we exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models. We design DECE, an interactive visualization system that helps users understand and explore a model's decisions on individual instances and data subsets, supporting users ranging from decision-subjects to model developers. DECE supports exploratory analysis of model decisions by combining the strengths of counterfactual explanations at the instance and subgroup levels. We also introduce a set of interactions that enable users to customize the generation of counterfactual explanations to find more actionable ones that suit their needs. Through three use cases and an expert interview, we demonstrate the effectiveness of DECE in supporting decision exploration tasks and instance explanations.
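The core idea behind a counterfactual explanation -- the smallest change to an input that flips the model's decision -- can be sketched in a few lines. The toy "loan" model, feature names, and search procedure below are hypothetical illustrations, not DECE's actual method.

```python
def model(income, debt):
    """Toy classifier: approve when income sufficiently exceeds debt."""
    return "approve" if income - 0.5 * debt > 30 else "reject"

def counterfactual_income(income, debt, step=1):
    """Raise income in small steps until the model's decision flips,
    returning the minimal increase found (or None within the search range)."""
    delta = 0
    while model(income + delta, debt) == model(income, debt):
        delta += step
        if delta > 1000:  # give up: no counterfactual found in range
            return None
    return delta
```

For a rejected applicant, the returned delta answers the actionable question "how much more income would change the outcome?" -- which is what makes counterfactuals human-friendly compared with, say, raw feature importances.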

Explainable AI: A guide for making black box machine learning models explainable


Robots have moved off the assembly line and into warehouses, offices, hospitals, retail shops, and even our homes. ZDNet explores how the explosive growth in robotics is affecting specific industries, like healthcare and logistics, and the enterprise more broadly on issues like hiring and workplace safety. But machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical real-world data to algorithms used to train models. A trained ML model can then be fed new data and produce results of interest, based on the historical data it was trained on.
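The train-then-predict loop described above can be illustrated in miniature: "train" a nearest-neighbor model by memorizing historical labeled points, then feed it new data to get predictions. All data and labels here are made up for the example.

```python
def train(examples):
    """Training here is simply memorizing the historical (value, label) data."""
    return list(examples)

def predict(model, x):
    """Label a new point with the label of its closest historical example
    (a 1-nearest-neighbor rule)."""
    nearest = min(model, key=lambda example: abs(example[0] - x))
    return nearest[1]

# Historical real-world data stands behind every prediction the model makes.
history = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
model = train(history)
```

The opacity problem the article describes starts exactly here: as the memorized rule is replaced by millions of learned parameters, the link from historical data to an individual prediction becomes much harder to inspect.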