Distributed Inference Using Apache MXNet and Apache Spark on Amazon EMR Amazon Web Services


In this blog post, we demonstrate how to run distributed offline inference on large datasets using Apache MXNet (incubating) and Apache Spark on Amazon EMR. We explain why offline inference is useful, why it is challenging, and how you can leverage MXNet and Spark on Amazon EMR to overcome these challenges. After a deep learning model has been trained, it is put to work by running inference on new data. Inference can be executed in real time for tasks that require immediate feedback, such as fraud detection; this is typically known as online inference. By contrast, offline (batch) inference processes a large dataset in bulk, where latency on any single record is less important than overall throughput.
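The core pattern behind this approach is to partition the dataset across workers, load the trained model once per partition, and run inference over each partition's records in bulk. Below is a minimal pure-Python sketch of that pattern; the names `load_model`, `infer_partition`, and `distributed_inference` are hypothetical stand-ins for the real MXNet model-loading and Spark `mapPartitions` calls, not APIs from the post.

```python
# Sketch of partition-wise offline inference, mirroring what Spark's
# mapPartitions does with an MXNet model on each executor.
# All names here are illustrative, not actual MXNet/Spark APIs.

def load_model():
    # Stand-in for loading a trained MXNet model. Done once per
    # partition so the expensive load is amortized over many records.
    return lambda x: x * 2  # toy "model"

def infer_partition(records):
    # Runs on one partition: load the model, then score every record.
    model = load_model()
    return [model(r) for r in records]

def distributed_inference(dataset, num_partitions):
    # Split the dataset into partitions and score each one, as Spark
    # would do in parallel across executors.
    partitions = [dataset[i::num_partitions] for i in range(num_partitions)]
    results = []
    for part in partitions:
        results.extend(infer_partition(part))
    return results

print(distributed_inference([1, 2, 3, 4], 2))  # → [2, 6, 4, 8]
```

In the real Spark version, `infer_partition` would be the function passed to `mapPartitions`, so each executor loads the MXNet model once and streams its partition through it.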
