
Kubeflow -- Your Toolkit for MLOps

#artificialintelligence

In MLOps, different platforms operate within the data science environment, each offering its own set of services; one of them is Kubeflow. Before looking at what makes Kubeflow special and why it matters, it helps to understand what MLOps is and why we need it. MLOps, short for Machine Learning Operations, combines data science, software engineering, and DevOps practices. Whenever a data scientist builds a model that runs seamlessly and delivers high performance, it needs to be deployed for real-time inference. Handing this task to a DevOps engineer is sometimes helpful, since a DevOps engineer has hands-on experience in software engineering and development operations, but monitoring a model and its dataset in real time can be complicated.


Get Started with Kubeflow on AWS using MiniKF

#artificialintelligence

The Kubeflow project was announced back in December 2017 and has since become a very popular machine learning platform with both data scientists and MLOps engineers. If you are new to the Kubeflow ecosystem and community, here's a quick rundown. Kubeflow is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. In a nutshell, Kubeflow is the machine learning toolkit for Kubernetes. As such, anywhere you are running Kubernetes, you should also be able to run Kubeflow.


KFServing

#artificialintelligence

KFServing enables serverless inferencing on Kubernetes and provides performant, high-abstraction interfaces for common machine learning (ML) frameworks like TensorFlow, XGBoost, scikit-learn, PyTorch, and ONNX to solve production model serving use cases. It provides a Kubernetes Custom Resource Definition for serving ML models on arbitrary frameworks, and it encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting-edge serving features like GPU autoscaling, scale to zero, and canary rollouts to your ML deployments. Strong community contributions help KFServing grow, and the project has a Technical Steering Committee driven by Bloomberg, IBM Cloud, Seldon, Amazon Web Services (AWS), and NVIDIA.
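As a concrete illustration of the Custom Resource Definition, a minimal InferenceService manifest might look like the following. This is a sketch, not an authoritative reference: it assumes the `v1alpha2` API group, and the `storageUri` bucket is a hypothetical placeholder.

```yaml
# Hypothetical example: serve a scikit-learn model stored in object
# storage. KFServing pulls the model from storageUri and stands up a
# predictor with autoscaling, networking, and health checks handled
# by the controller. The bucket path below is a placeholder.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  default:
    predictor:
      sklearn:
        storageUri: "gs://example-bucket/models/sklearn/iris"
```

Applying a manifest like this with `kubectl apply -f` is all a data scientist needs to do; the serving infrastructure details stay behind the abstraction.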


Serverless inferencing on Kubernetes

Clive Cox, Dan Sun, Ellis Tarn, Animesh Singh, Rakesh Kelkar, David Goodwin

arXiv.org Machine Learning

Organisations are increasingly putting machine learning models into production at scale. The increasing popularity of serverless scale-to-zero paradigms presents an opportunity for deploying machine learning models to help mitigate infrastructure costs when many models may not be in continuous use. We will discuss the KFServing project, which builds on the Knative serverless paradigm to provide a serverless machine learning inference solution that allows a consistent and simple interface for data scientists to deploy their models. We will show how it solves the challenges of autoscaling GPU-based inference and discuss some of the lessons learnt from using it in production.
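The consistent interface mentioned above extends to the client side: KFServing exposes models over a simple HTTP protocol whose V1 request body is a JSON object of the form `{"instances": [...]}`, posted to `/v1/models/<model-name>:predict`. A minimal sketch of building such a payload (the model name, endpoint, and feature values here are illustrative assumptions, not from the source):

```python
import json

# Hypothetical example: construct a request body for KFServing's V1
# HTTP data-plane protocol, which expects {"instances": [...]} sent to
# POST /v1/models/<model-name>:predict on the service endpoint.
def build_predict_payload(rows):
    """Wrap feature rows in the V1 inference request envelope."""
    return json.dumps({"instances": rows})

# One iris-style feature row; the values are illustrative only.
payload = build_predict_payload([[6.8, 2.8, 4.8, 1.4]])
print(payload)  # → {"instances": [[6.8, 2.8, 4.8, 1.4]]}
```

Because every framework backend speaks the same envelope, client code like this does not change when the model behind the endpoint is swapped from scikit-learn to, say, XGBoost.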


Kubeflow and IBM: An open source journey to 1.0

#artificialintelligence

Machine learning must address a daunting breadth of functionalities around building, training, serving, and managing models. Doing so in a consistent, composable, portable, and scalable manner is hard. The Kubernetes framework is well suited to address these issues, which is why it's a great foundation for deploying machine learning workloads. The Kubeflow project's development has been a journey to realize this promise, and we are excited that the journey has reached its first major destination: Kubeflow 1.0. Always ready to work with a strong and diverse community, IBM joined this Kubeflow journey early on.


Kubeflow 1.0: Cloud Native ML for Everyone

#artificialintelligence

On behalf of the entire community, we are proud to announce Kubeflow 1.0, our first major release. Kubeflow was open sourced at Kubecon USA in December 2017, and during the last two years the Kubeflow Project has grown beyond our wildest expectations. There are now hundreds of contributors from over 30 participating organizations. Kubeflow's goal is to make it easy for machine learning (ML) engineers and data scientists to leverage cloud assets (public or on-premise) for ML workloads. You can use Kubeflow on any Kubernetes-conformant cluster.