AWS Announces Nine New Amazon SageMaker Capabilities

#artificialintelligence

Distributed Training on Amazon SageMaker delivers new capabilities that can train large models up to two times faster than would otherwise be possible with today's machine learning processors. Amazon Web Services (AWS), an Amazon.com, Inc. company, announced nine new capabilities for its industry-leading machine learning service, Amazon SageMaker, making it even easier for developers to automate and scale all steps of the end-to-end machine learning workflow. Today's announcements bring together powerful new capabilities: faster data preparation, a purpose-built repository for prepared data, workflow automation, greater transparency into training data to mitigate bias and explain predictions, distributed training that can train large models up to two times faster, and model monitoring on edge devices. Machine learning is becoming more mainstream, but it is still evolving at a rapid clip. With all the attention machine learning has received, it seems like it should be simple to create machine learning models, but it isn't. To create a model, developers must start with the highly manual process of preparing the data.
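
For context, here is a minimal sketch of what launching a distributed training job with the SageMaker Python SDK can look like; the script name, IAM role, and S3 path below are placeholders, not details from the announcement:

```python
# Sketch: a SageMaker training job with the SageMaker distributed
# data parallel library enabled. Entry point, role ARN, and S3 path
# are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",  # placeholder training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=2,                # scale out across instances
    instance_type="ml.p3.16xlarge",  # GPU instances suited to large models
    framework_version="1.8.1",
    py_version="py36",
    # Turn on SageMaker's distributed data parallel library
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"training": "s3://my-bucket/training-data"})  # placeholder path
```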


Amazon AWS Machine Learning Summit keynote kicks off with 'few-shot learning'

ZDNet

Amazon's AWS cloud computing service on Wednesday morning kicked off its machine learning summit via virtual transmission. The morning's keynote was led by Swami Sivasubramanian, AWS's vice president of AI and machine learning; Yoelle Maarek, vice president of research and science for Alexa Shopping; and Bratin Saha, vice president of machine learning at AWS; with a guest appearance by Ashok Srivastava, chief data officer at software maker Intuit. Sivasubramanian led off with a talk about machine learning being "one of the most transformative" technologies in a generation, citing a statistic that more than 100 machine learning papers are published each day. "Machine learning is going mainstream," said Sivasubramanian. More than 100,000 customers use AWS for machine learning, he said, citing examples such as pharma giant Roche and The New York Times.


7 last-mile delivery problems in AI and how to solve them

#artificialintelligence

The term last-mile problem comes from the telecom industry, which observed that it costs inordinately more to build and manage the last mile of infrastructure to the home than to bring infrastructure to the hub city or residential perimeter. Businesses are starting to discover a similar last-mile delivery problem in AI: it is much harder to weave AI technologies into the business processes that actually run companies than it is to build or buy the AI and machine learning (ML) models that promise to improve those processes. "The path to deploying ML is still expensive," said Ian Xiao, manager at Deloitte Omnia, Deloitte Canada's AI consulting practice. He estimates that most companies deploy only between 10% and 40% of their machine learning projects, depending on their size and technology readiness. In fact, the last-mile problem is a bit of a misnomer when applied to AI deployment in the enterprise.


Evaluate your MLOps maturity

#artificialintelligence

Operationalizing machine learning models has become a crucial challenge for organizations that have invested in artificial intelligence. Indeed, many organizations have launched PoCs (proofs of concept) without succeeding in operationalizing their machine learning or deep learning models, for various reasons: lack of expertise or experience, reluctance of C-level executives to trust a new technology, the absence of adapted processes, or the unwillingness of business teams to lose part of their expertise or their understanding of the decisions made by a model. To help operationalize ML, a new discipline has emerged: MLOps, for Machine Learning Operations. MLOps is part of the Ops family and is inspired by DevOps concepts, even though it has some specificities related to model management. This is why we chose to evaluate MLOps processes the same way DevOps processes are evaluated.
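
As an illustration of what such an evaluation might look like in practice, here is a small, hypothetical scoring sketch; the axes and maturity levels are assumptions for illustration, not the article's actual framework:

```python
# Illustrative sketch (not from the article): scoring MLOps maturity
# across a few DevOps-inspired axes. Axis names and levels are assumed.
from dataclasses import dataclass

LEVELS = {0: "manual", 1: "scripted", 2: "automated pipeline",
          3: "CI/CD with monitoring"}

@dataclass
class MaturityAssessment:
    versioning: int   # data and model versioning
    testing: int      # automated model validation
    deployment: int   # release automation
    monitoring: int   # drift / performance tracking

    def overall(self) -> float:
        scores = [self.versioning, self.testing,
                  self.deployment, self.monitoring]
        return sum(scores) / len(scores)

assessment = MaturityAssessment(versioning=2, testing=1,
                                deployment=1, monitoring=0)
score = assessment.overall()
print(f"Overall maturity: {score:.1f} ({LEVELS[round(score)]})")
```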


Maximizing the Impact of ML in Production - insideBIGDATA

#artificialintelligence

In this special guest feature, Emily Kruger, Vice President of Product at Kaskada, discusses a topic on the minds of many data scientists and data engineers these days: maximizing the impact of machine learning in production environments. Kaskada is a machine learning company that enables collaboration among data scientists and data engineers. Kaskada develops a machine learning studio for feature engineering using event-based data; its platform allows data scientists to unify the feature engineering process across their organizations, with a single platform for feature creation and feature serving. Machine learning is changing the way the world does business.
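
To make the idea of feature engineering over event-based data concrete, here is a minimal pandas sketch; this is an illustration of the general technique, not Kaskada's actual API, and the column names are assumptions:

```python
# Illustrative sketch (not Kaskada's API): deriving per-user features
# from raw event data with pandas. Column names are assumed.
import pandas as pd

events = pd.DataFrame({
    "user_id": ["a", "a", "b", "a", "b"],
    "event":   ["view", "purchase", "view", "view", "purchase"],
    "amount":  [0.0, 25.0, 0.0, 0.0, 40.0],
    "ts":      pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-02",
                               "2021-01-03", "2021-01-05"]),
})

# Event-based features: activity counts, spend, and recency per user
features = events.groupby("user_id").agg(
    n_events=("event", "size"),
    n_purchases=("event", lambda s: (s == "purchase").sum()),
    total_spend=("amount", "sum"),
    last_seen=("ts", "max"),
)
print(features)
```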