AgroScout Improves Development and DevOps with Oracle Cloud Native Services


AgroScout, a startup in the agritech (agricultural technology) sector dedicated to the early detection of pests and diseases in field crops, is a prime example of a cutting-edge company using Oracle Cloud Native Services to migrate its application to Kubernetes and deliver an automated deployment pipeline. Cloud native technologies are all the rage right now, with a huge range of options a customer can choose from to implement both their application platform and the continuous integration/continuous delivery (CI/CD) technologies used to deliver those applications. Now up and running, the AgroScout development team enjoys much easier management of its application with Kubernetes, a streamlined CI/CD platform, better performance from Oracle's Gen 2 cloud, and much more. AgroScout surveys fields with camera-equipped, auto-piloted drones, then processes, detects, and classifies any issues in the crops before recommending treatment. The company relies on Graphics Processing Unit (GPU)-based machine learning as well as a set of microservices backed by a SQL database.

Machine Learning at Scale with Databricks and Kubernetes


Machine Learning Operationalization (MLOps) is a set of practices that aim to build, deploy, and monitor machine learning applications quickly and reliably. Many organizations standardize on certain tools to develop a platform that supports these goals. One such combination uses Databricks to build and manage machine learning models and Kubernetes to deploy them. This article explores how to design this solution on Microsoft Azure, followed by step-by-step instructions for implementing it as a proof of concept. The approach uses common open source technologies and can easily be adapted to other cloud platforms.

Deploy Machine Learning Pipeline on Google Kubernetes Engine


In our last post on deploying a machine learning pipeline in the cloud, we demonstrated how to develop a machine learning pipeline in PyCaret, containerize it with Docker, and serve it as a web app using Microsoft Azure Web App Services. If you haven't heard about PyCaret before, please read this announcement to learn more. In this tutorial, we will use the same machine learning pipeline and Flask app that we built and deployed previously. This time we will demonstrate how to containerize and deploy a machine learning pipeline on Google Kubernetes Engine. Previously, we demonstrated how to deploy a ML pipeline on Heroku PaaS and how to deploy a ML pipeline on Azure Web Services with a Docker container.
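On Google Kubernetes Engine, a containerized Flask app of this kind is typically described by a Deployment (which runs the container) and a Service (which exposes it). The manifest below is an illustrative sketch only; the resource names, image path, and ports are placeholders, not the tutorial's actual files:

```yaml
# Sketch of a Deployment/Service pair for a containerized Flask ML app.
# PROJECT_ID and the image name are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pycaret-flask-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pycaret-flask-app
  template:
    metadata:
      labels:
        app: pycaret-flask-app
    spec:
      containers:
      - name: web
        image: gcr.io/PROJECT_ID/pycaret-flask-app:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: pycaret-flask-service
spec:
  type: LoadBalancer   # provisions an external IP on GKE
  selector:
    app: pycaret-flask-app
  ports:
  - port: 80
    targetPort: 8080
```

Applied with `kubectl apply -f`, this runs two replicas of the image and exposes them behind a load-balanced external IP.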

Automating IoT Machine Learning: Bridging Cloud and Device Benefits with Cloud ML Engine Solutions (Google Cloud Platform)


This tutorial addresses the following scenario: A camera attached to a connected device visually identifies mechanical parts moving along a conveyor belt or other mechanism. The tutorial focuses on delivery to a camera-enabled, Linux-based IoT device, but you can build similar systems for other types of devices with different sensor inputs.

MLOps in 2021: The pillar for seamless Machine Learning Lifecycle


MLOps is the new terminology for the operational work needed to move machine learning projects from research mode to production. While Software Engineering relies on DevOps to operationalize software applications, MLOps encompasses the processes and tools to manage the end-to-end Machine Learning lifecycle. A machine learning model learns relationships among independent (input) variables in order to predict target (output) variables. Machine Learning projects involve different roles and responsibilities: the Data Engineering team collects, processes, and transforms data; Data Scientists experiment with algorithms and datasets; and the MLOps team focuses on moving the trained models to production. The Machine Learning lifecycle represents the complete end-to-end journey of a machine learning project from research mode to production.