Red Hat Accelerates AI/ML Workflows and Delivery of AI-Powered Intelligent Applications with Red Hat OpenShift

#artificialintelligence

Red Hat, Inc., the world's leading provider of open source solutions, today highlighted that more organizations are using Red Hat OpenShift as the foundation for building artificial intelligence (AI) and machine-learning (ML) data science workflows and AI-powered intelligent applications. OpenShift helps provide agility, flexibility, portability and scalability across the hybrid cloud, from cloud infrastructure to edge computing deployments, a necessity for developing and deploying ML models and intelligent applications into production more quickly and without vendor lock-in. AI/ML represents a top emerging workload for Red Hat OpenShift across hybrid cloud and multicloud deployments, for both our customers and the partners supporting these global organizations. By applying DevOps to AI/ML on the industry's most comprehensive enterprise Kubernetes platform, IT organizations can pair the agility and flexibility of industry best practices with the promise and power of intelligent workloads. As a production-proven enterprise container and Kubernetes platform, OpenShift delivers integrated DevOps capabilities for independent software vendors (ISVs) via Kubernetes Operators and NVIDIA GPU-powered infrastructure platforms.
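
As a rough illustration of how such a GPU-backed ML workload is scheduled on a Kubernetes platform like OpenShift, here is a minimal sketch using the official Kubernetes Python client; the image, namespace, and pod names are placeholders, and the exact setup (for example, the NVIDIA GPU Operator providing the device plugin) varies by cluster.

    # Minimal sketch: launching a GPU-backed training pod with the official
    # Kubernetes Python client. Image, namespace, and names are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # reads the local kubeconfig (e.g., after `oc login`)

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-job", namespace="ml-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="example.com/ml/trainer:latest",  # placeholder image
                    command=["python", "train.py"],
                    # Request one NVIDIA GPU; assumes the GPU device plugin
                    # (e.g., via the NVIDIA GPU Operator) is installed.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="ml-demo", body=pod)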


Spell machine learning platform goes on-prem

#artificialintelligence

Spell, an end-to-end platform for machine learning and deep learning--covering data prep, training, deployment, and management--has announced Spell for Private Machines, a new version of its system that can be deployed on your own hardware as well as on cloud resources. Spell was founded by Serkan Piantino, former director of engineering at Facebook and founder of Facebook's AI Research group. Spell allows teams to create reproducible machine learning systems that incorporate familiar tools such as Jupyter notebooks and that leverage cloud-hosted GPU compute instances. Spell emphasizes ease of use. For example, hyperparameter optimization for an experiment is a high-level, one-command function.
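
Spell's exact command syntax is its own, so rather than guess at its flags, the sketch below uses scikit-learn's GridSearchCV to show the general idea the paragraph describes: declare a search space once and let a single call run and score every experiment.

    # Generic one-call hyperparameter search with scikit-learn (not Spell's API).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Each combination in the grid becomes one training run.
    param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

    # A single call fits and cross-validates every combination.
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_, search.best_score_)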


Presented in French: Natural Language Processing with H2O Driverless AI

#artificialintelligence

H2O Driverless AI is H2O.ai's flagship platform for automatic machine learning. It fully automates the data science workflow, including some of the most challenging tasks in applied data science such as feature engineering, model tuning, model optimization, and model deployment. Driverless AI turns Kaggle Grandmaster recipes into a fully functioning platform that delivers "an expert data scientist in a box" from training to deployment. We will be discussing the latest in Driverless AI, as follows: Driverless AI with Auto Doc completes the data science workflow by automatically documenting and explaining the processes used by the platform. Auto Doc frees the user from the time-consuming task of documenting and summarizing their workflow while building machine learning models.
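
Driverless AI is a commercial product with its own client library, but the open-source H2O AutoML API gives a feel for the same automated workflow; in the sketch below the file and column names are placeholders.

    # Sketch using open-source H2O AutoML (a relative of Driverless AI, not the
    # product itself). File and column names are placeholders.
    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()

    train = h2o.import_file("train.csv")  # placeholder dataset
    y = "target"
    x = [c for c in train.columns if c != y]

    # One call automates model training, tuning, and ensembling.
    aml = H2OAutoML(max_models=10, seed=1)
    aml.train(x=x, y=y, training_frame=train)

    print(aml.leaderboard.head())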


ML Infrastructure Tools for Production

#artificialintelligence

The core challenge in Production ML is uplifting a model from a research environment to a software engineering environment while still delivering the results of the research. In this blog post, we will highlight the core areas needed to uplift research into production with the consistency, reproducibility, and observability we expect of software engineering. Note: Model validation is NOT to be confused with the validation data set. Quick Recap on Datasets: Models are built and evaluated using multiple datasets. The training data set is used to fit the parameters of the model.
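
To make the dataset recap concrete, here is a minimal scikit-learn sketch of the conventional three-way split: the training set fits the model's parameters, the validation set guides tuning and model selection, and the held-out test set estimates final performance.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: 1,000 rows, 5 features, binary labels.
    X = np.random.rand(1000, 5)
    y = np.random.randint(0, 2, size=1000)

    # First carve off a held-out test set for the final evaluation.
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # Split the remainder into training (fits parameters) and
    # validation (guides hyperparameter and model selection).
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=0.25, random_state=0
    )

    print(len(X_train), len(X_val), len(X_test))  # 600 200 200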


10 tools and platforms for data preparation

@machinelearnbot

Traditional approaches to enterprise reporting, analysis and Business Intelligence, such as Data Warehousing, upfront modelling and ETL, have given way to new, more agile tools and ideas. Within this landscape, Data Preparation tools have become very popular for good reason. Data preparation has traditionally been a manual task that consumed the bulk of most data projects' time, and profiling, standardising and transforming data by hand is error prone. This has derailed many Data Warehousing and analysis projects, which become bogged down in infrastructure and consistency issues rather than focusing on the true value add: producing good-quality analysis.
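
As a small illustration of the profiling, standardising and transforming work these tools automate, here is a pandas sketch; the file and column names are placeholders.

    import pandas as pd

    df = pd.read_csv("customers.csv")  # placeholder file

    # Profile: summary statistics and missing values per column.
    print(df.describe(include="all"))
    print(df.isna().sum())

    # Standardise: trim whitespace and normalise case in a text column.
    df["country"] = df["country"].str.strip().str.lower()

    # Transform: coerce a text column to dates; unparseable values become NaT.
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")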