Model performance using OpenVINO Deep Learning Workbench


As we have previously described, DL Workbench is a tool that lets you import Deep Learning models, evaluate their performance and accuracy, and perform optimization tasks such as calibration for 8-bit integer inference.

Profiling and model optimization are device-specific. To achieve maximum performance in a deployment environment, you therefore need to perform these steps directly in that environment, and DL Workbench gives you access to these capabilities on remote machines. Note that if you want ready-to-use access to a wide range of hardware configurations and do not have them locally or in your private lab, you can run DL Workbench in the Intel DevCloud for the Edge and easily start experiments on the available hardware.

In this paper, we primarily focus on the case where you are preparing a model for deployment and need to benchmark it on a specific hardware setup available in your private lab or a pre-production sandbox.
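As a starting point for such experiments, DL Workbench is distributed as a Docker image and can be launched locally before connecting remote target machines. The sketch below shows one common way to start it; the image name `openvino/workbench` and port `5665` follow Intel's published Docker distribution, but check the version tag and flags against the documentation for your OpenVINO release.

```shell
# Launch DL Workbench locally via Docker (a sketch; verify the image tag
# and options against the DL Workbench documentation for your release).
docker run \
    -p 127.0.0.1:5665:5665 \
    --name workbench \
    -it openvino/workbench:latest
# The web UI then becomes available at http://127.0.0.1:5665
```

From the running instance you can register a remote machine and run profiling and calibration jobs directly on that target.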