Enterprises know they want to do machine learning, but they also know they can't afford to think too long or too hard about it. They need to act, and they have specific business problems they want to solve. They know, instinctively and anecdotally from the experience of the hyperscalers and the HPC centers of the world, that machine learning techniques can be utterly transformative in augmenting existing applications, replacing hand-coded applications, or creating whole new classes of applications that were not possible before. They also have to decide whether to run their AI workloads on premises or on any one of a number of clouds where much of the software for creating and training models is available as a service. And let's acknowledge that many of those models were created by the public cloud giants for internal workloads long before they were peddled as a service.
The three big cloud providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), want developers and data scientists to develop, test, and deploy machine learning models on their clouds. It's a lucrative business for them: testing models often needs a burst of infrastructure, and models in production often require high availability. But the providers don't want to compete for your business only on infrastructure, service levels, and pricing, so they focus on versatile on-ramps that make it easier for customers to use their machine learning capabilities. Each public cloud offers multiple data storage options, including serverless databases, data warehouses, data lakes, and NoSQL datastores, making it likely that you will develop models close to where your data resides.
The report referenced an "interoperability test [in which] Huawei and Megvii jointly provided a face-recognition solution based on Huawei's video cloud solution. In the solution, Huawei provided servers, storage, network equipment, its FusionSphere cloud platform, cameras and other software and hardware, [while] Megvii provided its dynamic facial-recognition system software".
IBM Corp. said today it's hoping to provide a standardized way for developers to create and deploy machine learning models in production and make them portable to any cloud platform. To do so, it said it's open-sourcing the Kubeflow machine learning platform on Tekton, a continuous integration/continuous delivery (CI/CD) framework developed by Google LLC. Tekton is popular with developers who use Kubernetes to manage containerized applications, which can run unchanged across many computing environments. IBM said it created Kubeflow Pipelines on Tekton in response to the need for a more reliable way to deploy, monitor and govern machine learning models in production on any cloud platform. That's important, IBM says, because hybrid cloud models are rapidly becoming the norm for enterprises that want to run their most critical business applications across distributed computing environments.
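Conceptually, a Kubeflow pipeline is a graph of steps, each running in its own container, which the Tekton backend compiles into Tekton `Task` and `Pipeline` resources. The step-chaining idea can be sketched in plain Python; the step names and toy logic below are illustrative assumptions, not the actual Kubeflow SDK:

```python
# Conceptual sketch of an ML pipeline as a chain of steps, mirroring how
# Kubeflow Pipelines wires components together (in the real platform each
# step runs as a container and the graph compiles to Tekton resources).
# The step names and toy logic are illustrative, not the kfp API.

def preprocess(raw):
    """Normalize raw values into the 0..1 range."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train(features):
    """'Train' a trivial model: learn the mean of the features."""
    return {"mean": sum(features) / len(features)}

def evaluate(model, features):
    """Score the model: mean absolute error against the same data."""
    return sum(abs(x - model["mean"]) for x in features) / len(features)

def run_pipeline(raw):
    """Execute the steps in order, passing each output to the next step."""
    features = preprocess(raw)
    model = train(features)
    error = evaluate(model, features)
    return model, error

model, error = run_pipeline([2.0, 4.0, 6.0, 8.0])
print(model, error)
```

The portability claim rests on exactly this separation: the pipeline definition describes only the step graph, while Tekton and Kubernetes decide where the containers actually run.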
Continual learning to build and automate ML pipelines from research to production, automatically retraining models in production on incoming data, with advanced monitoring to ensure that models stay accurate, healthy and performant. Machine learning management that standardizes the full ML process in a collaborative environment, supporting management of models, experiments, data and research for "100% reproducible data science". An open platform that works with any framework or programming language. Advanced connectivity to any compute resource (cloud or on-premises) lets companies utilize on-premises infrastructure, including Kubernetes, data lakes, Hadoop and more, as well as scale to any cloud service.
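The retraining loop behind "continual learning" can be reduced to a few lines: monitor the model's error on incoming data batches and trigger a retrain when the error drifts past a threshold. The mean-predictor "model", the 0.5 threshold, and the drift rule below are all illustrative assumptions:

```python
# Minimal sketch of continual learning: monitor error on incoming batches
# and retrain when it exceeds a threshold. The mean-predictor model and
# the 0.5 threshold are illustrative assumptions, not any vendor's logic.

def train(data):
    return sum(data) / len(data)              # "model" = the mean

def error(model, batch):
    return sum(abs(x - model) for x in batch) / len(batch)

def monitor_and_retrain(model, batches, threshold=0.5):
    retrains = 0
    for batch in batches:
        if error(model, batch) > threshold:   # drift detected
            model = train(batch)              # retrain on fresh data
            retrains += 1
    return model, retrains

model = train([1.0, 1.0, 1.0])                # initial model
model, retrains = monitor_and_retrain(model, [[1.1, 0.9], [5.0, 5.2]])
print(model, retrains)
```

Production systems layer scheduling, validation, and rollout safeguards on top, but the monitor-then-retrain feedback loop is the core of the feature described above.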
The rise of the industrial Internet of Things (IoT) and artificial intelligence (AI) is making edge computing significant for enterprises. Many industry verticals, such as manufacturing, healthcare, automotive, transportation, and aviation, are considering investments in edge computing. Edge computing is fast becoming the conduit between the devices that generate data and the public cloud that processes it. In the context of machine learning and artificial intelligence, the public cloud is used for training models and the edge is used for inference. To accelerate ML training in the cloud, public cloud vendors such as AWS, Azure, and Google Cloud Platform (GCP) offer GPU-backed virtual machines.
Google's attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company's cloud computing conference, building on his predecessor's strategy of emphasizing Google's strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its own AI projects, and its customers', to ethical reviews. Those reviews have caused Google to turn away some business.
Amazon SageMaker, Microsoft Azure ML Services, Google Cloud ML Engine, and IBM Watson Knowledge Studio are examples of ML PaaS in the cloud. If your business wants agility in machine learning model development and deployment, consider ML PaaS: it combines the proven technique of CI/CD with ML model management.
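The "ML model management" piece these platforms share is essentially a registry: versioned model artifacts plus a promotion step from build to production, mirroring CI/CD stages. A minimal in-memory sketch; the register/promote API below is an illustrative assumption, not any specific vendor's SDK:

```python
# In-memory sketch of a model registry with CI/CD-style promotion.
# The register/promote/serving API is an illustrative assumption,
# not a specific ML PaaS SDK.

class ModelRegistry:
    def __init__(self):
        self._versions = {}     # (name, version) -> artifact
        self._production = {}   # name -> version currently serving

    def register(self, name, version, artifact):
        """Record a new model version (the 'build' stage)."""
        self._versions[(name, version)] = artifact

    def promote(self, name, version):
        """Promote a registered version to production (the 'deploy' stage)."""
        if (name, version) not in self._versions:
            raise KeyError(f"{name} v{version} was never registered")
        self._production[name] = version

    def serving(self, name):
        """Return the artifact currently serving in production."""
        return self._versions[(name, self._production[name])]

registry = ModelRegistry()
registry.register("churn", 1, {"weights": [0.1, 0.9]})
registry.register("churn", 2, {"weights": [0.2, 0.8]})
registry.promote("churn", 2)
print(registry.serving("churn"))
```

Keeping every version registered makes rollback a one-line `promote` back to the previous version, which is exactly the agility argument made above.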
Amazon has been investing in all three key areas: IoT, edge computing, and machine learning. AWS IoT is a mature connected-devices platform that delivers scalable M2M messaging, bulk device onboarding, digital twins and analytics, along with tight integration with AWS Lambda for dynamic rules. AWS Greengrass extends AWS IoT to the edge by delivering local M2M messaging, a rules engine, and routing capabilities. The most recent addition, Amazon SageMaker, brought a scalable machine learning service to AWS. Customers can use it to train and iterate on models based on popular algorithms.