Optimize AI/ML workloads for sustainability: Part 3, deployment and monitoring
We're celebrating Earth Day 2022 from 4/22 through 4/29 with posts that highlight how to build, maintain, and refine your workloads for sustainability. AWS estimates that inference (the process of using a trained machine learning [ML] algorithm to make a prediction) makes up 90 percent of the cost of an ML model. Given that with AWS you pay for what you use, we estimate that inference also generally accounts for most of the resource usage within an ML lifecycle. In Part 3, our final piece in the series, we show you how to reduce the environmental impact of your ML workload once your model is in production. If you missed the first parts of this series, in Part 1 we showed you how to examine your workload to help you 1) evaluate the impact of your workload, 2) identify alternatives to training your own model, and 3) optimize data processing.
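The 90 percent estimate implies that even large savings on training may barely move a workload's overall footprint. A minimal sketch of that split, using a hypothetical lifecycle total (the 10,000 compute-hours figure is an illustrative assumption, not from the article):

```python
# Back-of-envelope split of ML lifecycle resource usage, applying the
# cited AWS estimate that inference accounts for ~90% of an ML model's cost.
total_compute_hours = 10_000          # hypothetical lifecycle total (assumed)
inference_share = 0.90                # AWS estimate quoted above

inference_hours = total_compute_hours * inference_share
training_hours = total_compute_hours - inference_hours
print(f"inference: {inference_hours:.0f} h, training/other: {training_hours:.0f} h")
```

Under these assumed numbers, halving training compute saves 500 hours, while a 10 percent efficiency gain on inference saves 900 — which is why this part of the series focuses on production.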
AI/ML at the Edge: 4 things CIOs should know
And latency almost always matters when it comes to running artificial intelligence/machine learning (AI/ML) workloads. "Great AI requires a lot of data, and it demands it immediately." That's both the blessing and the curse for any sector – industrial and manufacturing are prominent examples, but the principle applies widely across businesses – that generates tons of machine data outside of its centralized clouds or data centers and wants to feed that data to an ML model or other form of automation for any number of purposes. Whether you're working with IoT data on a factory floor, or medical diagnostic data in a healthcare facility – or one of many other scenarios where AI/ML use cases are rolling out – you probably can't do so optimally if you're trying to send everything (or close to it) on a round-trip from the edge to the cloud and back again. In fact, if you're dealing with huge volumes of data, your trip might never get off the ground. "I've seen situations in manufacturing facilities ...
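A quick back-of-envelope calculation shows why the round trip "might never get off the ground" at volume. Both figures here (daily data volume and site uplink bandwidth) are hypothetical assumptions for illustration:

```python
# Rough transfer-time estimate for shipping edge-generated data to the cloud.
# data_gb and uplink_mbps are assumed example values, not from the article.
data_gb = 500                       # machine data generated per day (assumed)
uplink_mbps = 100                   # site uplink bandwidth in Mb/s (assumed)

bits = data_gb * 8 * 1000**3        # GB -> bits (decimal units)
seconds = bits / (uplink_mbps * 1_000_000)
hours = seconds / 3600
print(f"Shipping {data_gb} GB over a {uplink_mbps} Mb/s link takes ~{hours:.1f} h")
```

Roughly 11 hours of transfer per day of data, before any processing or the return trip — which is the arithmetic pushing inference toward the edge in these scenarios.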
AI/ML workloads in containers: 6 things to know
Two of today's big IT trends, AI/ML and containers, have become part of the same conversation at many organizations. They're increasingly paired together as teams look for better ways to manage their artificial intelligence and machine learning workloads – enabled by a growing menu of commercial and open source technologies for doing so. "The best news for IT leaders is that tooling and processes for running machine learning at scale in containers have improved significantly over the past few years," says Blair Hanley Frank, enterprise technology analyst at ISG. "There is no shortage of available open source tooling, commercial products, and tutorials to help data scientists and IT teams get these systems up and running." Before IT leaders and their teams begin to dig into the nitty-gritty technical aspects of containerizing AI/ML workloads, some principles are worth thinking about up front. Here are six essentials to consider.
Learning at the Edge
This article looks at the unique challenges introduced by Edge computing for AI/ML workloads, which can have a negative impact on results. It applies available machine learning models to real-world Edge datasets to show how these challenges can be overcome while preserving accuracy in the dynamic nature of Edge environments. The field of machine learning has experienced an explosion of innovation over the past 10 years. Although its roots date back more than 70 years, to when Alan Turing devised the Turing Test, the field did not mature significantly until recently. Two primary contributing factors are the exponential growth in both compute power and the data that can be used for training. There is now enough data and compute power (some in specialized hardware like GPUs/FPGAs) that new, real-world problems are being solved every day with machine learning.
The Path to Machine Learning & AI
On this livestream from KubeCon CloudNativeCon China, we're sitting down with Alejandro Saucedo, Chief Scientist at the Institute for Ethical AI & Machine Learning, and Dr. Han Xiao, Engineering Lead at Tencent AI Lab, to learn more about how Kubernetes is used in an AI & ML context. When one is running complicated AI/ML workloads at scale, Kubernetes fits naturally as the solution due to its rapid scaling, its portability, and the variety of tools available for AI & ML use cases on Kubernetes, such as Kubeflow. Rather than setting out to reinvent the wheel, Kubeflow offers those working with AI & ML datasets the best-of-the-best options for deploying AI/ML workloads on Kubernetes by bringing together Jupyter notebooks, TensorFlow model training with adjustable CPU & GPU cluster sizes, TensorFlow Serving containers for exporting trained models to Kubernetes, and Kubeflow Pipelines.
Whose Hardware Will Run Analytics, AI and ML Workloads? - The New Stack
Where will analytics and AI/ML workloads be executed, and who should handle them? The industry-wide rising tide toward the public cloud is not a foregone conclusion, as we were reminded by a Micron-commissioned report by Forrester Consulting that surveyed 200 people who manage architecture, systems, or strategy for complex data at large enterprises in the US and China. As of mid-2018, 72 percent are analyzing complex data within on-premises data centers and 51 percent do so in a public cloud. Three years from now, on-premises-only use will drop to 44 percent and public cloud use for analytics will rise to 61 percent. Those using edge environments to analyze complex data sets will rise from 44 to 53 percent.
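The survey figures above are easier to compare side by side. A small sketch tabulating the quoted numbers (mid-2018 vs. the three-year projection; only the percentages stated in the report excerpt are used):

```python
# Forrester/Micron survey figures quoted above: share of respondents
# analyzing complex data in each environment, mid-2018 vs. three years out.
# Note: the projected on-premises figure refers to on-premises-only use.
shares = {
    "on-premises": (72, 44),
    "public cloud": (51, 61),
    "edge": (44, 53),
}
for env, (now, later) in shares.items():
    print(f"{env:>12}: {now}% -> {later}% ({later - now:+d} pts)")
```

The totals exceed 100 percent because respondents could name multiple environments — the shift is toward a mix of cloud and edge rather than a wholesale migration.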