TensorFlow Lite Now Faster with Mobile GPUs (Developer Preview)
Running inference on compute-heavy machine learning models on mobile devices is demanding because of the devices' limited processing power and energy budgets. While converting to a fixed-point model is one route to acceleration, our users have asked for GPU support as an option to speed up inference of the original floating-point models without the extra complexity and potential accuracy loss of quantization.

We listened, and we are excited to announce that with the release of the developer preview of the GPU backend for TensorFlow Lite, you can now leverage mobile GPUs for select models (listed below); inference falls back to the CPU for any parts of a model that are unsupported. In the coming months, we will continue to add ops and improve the overall GPU backend offering.

Today, we are releasing a precompiled binary preview of the new GPU backend, giving developers and machine learning researchers an early chance to try this exciting new technology.
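For a sense of what this looks like in practice, here is a minimal Java sketch of enabling the GPU delegate on an interpreter. The wrapper class, method name, and the model/input/output placeholders are illustrative assumptions; the `GpuDelegate` and `Interpreter.Options` calls follow the preview API.

```java
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Hypothetical wrapper class for illustration.
public class GpuInference {
    // Runs one inference pass with the GPU delegate attached; ops the
    // delegate does not support automatically fall back to the CPU.
    static void runOnGpu(MappedByteBuffer model, float[][] input, float[][] output) {
        GpuDelegate delegate = new GpuDelegate();
        Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
        Interpreter interpreter = new Interpreter(model, options);
        try {
            interpreter.run(input, output);
        } finally {
            // Release native interpreter and GPU resources when done.
            interpreter.close();
            delegate.close();
        }
    }
}
```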
January 18, 2019