Collaborating Authors

Red Hat Accelerates AI/ML Workflows and Delivery of AI-Powered Intelligent Applications with Red Hat OpenShift


Red Hat, Inc., the world's leading provider of open source solutions, today highlighted that more organizations are using Red Hat OpenShift as the foundation for building artificial intelligence (AI) and machine learning (ML) data science workflows and AI-powered intelligent applications. OpenShift helps provide agility, flexibility, portability and scalability across the hybrid cloud, from cloud infrastructure to edge computing deployments, a necessity for developing and deploying ML models and intelligent applications into production more quickly and without vendor lock-in. AI/ML represents a top emerging workload for Red Hat OpenShift across hybrid cloud and multicloud deployments, both for our customers and for the partners supporting these global organizations. By applying DevOps to AI/ML on the industry's most comprehensive enterprise Kubernetes platform, IT organizations can pair the agility and flexibility of industry best practices with the promise and power of intelligent workloads. As a production-proven enterprise container and Kubernetes platform, OpenShift delivers integrated DevOps capabilities for independent software vendors (ISVs) via Kubernetes Operators and NVIDIA GPU-powered infrastructure platforms.

Spell machine learning platform goes on-prem


Spell, an end-to-end platform for machine learning and deep learning (covering data prep, training, deployment, and management), has announced Spell for Private Machines, a new version of its system that can be deployed on your own hardware as well as on cloud resources. Spell was founded by Serkan Piantino, former director of engineering at Facebook and founder of Facebook's AI Research group. Spell allows teams to create reproducible machine learning systems that incorporate familiar tools such as Jupyter notebooks and that leverage cloud-hosted GPU compute instances. Spell emphasizes ease of use. For example, hyperparameter optimization for an experiment is a high-level, one-command function.
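A one-command hyperparameter sweep like the one described above essentially automates a search loop over candidate settings. The following is a minimal, generic sketch of such a grid search; it is not Spell's API, and the toy objective and parameter grid are purely illustrative:

```python
from itertools import product

def toy_objective(lr, batch_size):
    # Stand-in for the validation score of one training run (illustrative only):
    # peaks at lr=0.01, batch_size=64.
    return -(lr - 0.01) ** 2 - 0.0001 * abs(batch_size - 64)

def grid_search(grid, objective):
    """Evaluate every combination in the grid and return the best one."""
    best_score, best_params = float("-inf"), None
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = objective(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [32, 64, 128]}
best_params, best_score = grid_search(grid, toy_objective)
```

A managed platform wraps this loop (plus provisioning of GPU instances and logging of each run) behind a single command, which is what makes the sweep a one-liner for the user.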

Presented in French: Natural Language Processing with H2O Driverless AI


H2O Driverless AI is H2O.ai's flagship platform for automatic machine learning. It fully automates the data science workflow, including some of the most challenging tasks in applied data science such as feature engineering, model tuning, model optimization, and model deployment. Driverless AI turns Kaggle Grandmaster recipes into a fully functioning platform that delivers "an expert data scientist in a box" from training to deployment. We will be discussing the latest in Driverless AI, as follows: Driverless AI with Auto Doc takes the next logical step in the data science workflow by automatically documenting and explaining the processes used by the platform. Auto Doc frees the user from the time-consuming task of documenting and summarizing their workflow while building machine learning models.
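The idea behind automatic documentation, as described above, is turning a run's recorded parameters and metrics into a human-readable report. The sketch below shows that idea generically; the field names and report layout are hypothetical and are not Driverless AI's actual Auto Doc format:

```python
def summarize_run(model_name, params, metrics):
    """Render a run's parameters and metrics as a Markdown summary.

    A generic sketch of automatic experiment documentation; the
    sections and names here are illustrative assumptions.
    """
    lines = [f"# Model report: {model_name}", "", "## Parameters"]
    lines += [f"- {k}: {v}" for k, v in sorted(params.items())]
    lines += ["", "## Metrics"]
    lines += [f"- {k}: {v:.4f}" for k, v in sorted(metrics.items())]
    return "\n".join(lines)

report = summarize_run(
    "xgboost_baseline",                    # hypothetical model name
    {"lr": 0.01, "max_depth": 6},          # hypothetical parameters
    {"auc": 0.912, "logloss": 0.3021},     # hypothetical metrics
)
```

Because the platform already holds this metadata for every experiment, generating such a report costs the user nothing, which is the time saving the passage describes.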

Operationalizing machine learning: 5 challenges, 1 solution


ML can unlock valuable insights from data, but many companies struggle to implement effective, consistent workflows. Machine learning (ML) is enabling enterprises to make data-driven business decisions by using sophisticated models to deliver insights from large datasets. But to fully realize the value of ML, including new revenue streams and improved customer experiences, enterprises need to implement fully operational ML workflows. According to a recent study conducted by Forrester Consulting, 41% of companies say they have struggled to operationalize any ML models and lack the process to do so.[1] Organizations in every industry are looking for ways to leverage ML to harness the power of their data and deliver business innovation through data science. But even when they achieve some measure of success with ML pilot programs, many organizations face challenges when they seek to scale these programs to production, such as security concerns, legacy hardware, siloed data and workflows, inefficient processes, and daunting costs. Enterprises must capitalize on ML's value or risk getting left behind.

ML Infrastructure Tools for Production


The core challenge in production ML is uplifting a model from a research environment to a software engineering environment while still delivering the results of the research. In this blog post, we will highlight the core areas needed to uplift research into production with the consistency, reproducibility, and observability we expect of software engineering. Note: model validation is NOT to be confused with the validation data set. Quick recap on datasets: models are built and evaluated using multiple datasets. The training data set is used to fit the parameters of the model.
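The dataset roles recapped above (a training set to fit parameters, a validation set to guide model selection, and a held-out test set for the final performance estimate) can be sketched with a simple deterministic split. This is an illustrative stand-in, not any specific tool's API; the fractions and seed are arbitrary choices:

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle deterministically, then partition into three disjoint sets."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]                    # final evaluation only
    val = shuffled[n_test:n_test + n_val]       # model selection / tuning
    train = shuffled[n_test + n_val:]           # parameter fitting
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
```

Pinning the seed is one small example of the reproducibility the post calls for: the same data always yields the same partition, so a production run can be traced back to exactly the examples each model was fit and evaluated on.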