Tech Talk: How AI Is Serving the Restaurant Industry
As Chief Revenue Officer at HungerRush, Olivier Thierry is shaping customer expectations around AI as the restaurant industry begins experimenting with it, he tells Spiceworks News & Insights' Technology Editor, Neha Kulkarni. Restaurants have realized that adopting new technology will help them not only survive recent challenges but also achieve results, he notes. From labor shortages to improving the customer experience, in this edition of Tech Talk, Olivier discusses how AI can help restaurants overcome these challenges and reduce human error. He also shares how natural language processing can interpret customer attitudes in phone orders and play a real role in understanding the customer experience. Olivier: The pandemic turned the restaurant industry upside down, and many of its setbacks are still being felt today.
Top Tools To Do Machine Learning Serving In Production
Creating a model is one thing, but using that model in production is quite another. The next step after a data scientist completes a model is to deploy it so that it can serve the application. Model serving falls into two main categories: batch and online. Batch serving feeds a large amount of data into a model and writes the results to a table, usually as a scheduled job. Online serving deploys the model behind an endpoint so that applications can send it a request and receive a low-latency response.
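The contrast between the two categories can be sketched in a few lines of Python. The `predict` function below is a hypothetical stand-in for a trained model; batch serving scores many rows and writes them to a table, while online serving wraps the same model in an HTTP endpoint:

```python
import csv
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a trained model; real code would load an artifact.
def predict(features):
    return sum(features)  # toy scoring function

# Batch serving: score a large set of rows on a schedule, write results to a table.
def batch_score(rows, out_path):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["row_id", "prediction"])
        for i, features in enumerate(rows):
            writer.writerow([i, predict(features)])

# Online serving: expose the model behind an endpoint for low-latency requests.
class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=8080):
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice the batch path would run under a scheduler and the online path behind a production server, but the split is the same: throughput-oriented scoring to storage versus request/response scoring over the network.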
Hosting Models with TF Serving on Docker
Training a Machine Learning (ML) model is only one step in the ML lifecycle. There's no purpose to ML if you cannot get a response from your model: you must be able to host your trained model for inference. There are a variety of hosting/deployment options for ML, one of the most popular being TensorFlow Serving. TensorFlow Serving takes your trained model's artifacts and hosts them for inference.
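Once a TF Serving container is running (the documented pattern is `docker run -p 8501:8501 ... tensorflow/serving` with the SavedModel directory mounted under `/models`), clients call its REST predict API. A minimal sketch of building such a request, where the host, port, and model name `my_model` are placeholders for your own deployment:

```python
import json

def predict_request(host, port, model_name, instances):
    """Build the URL and JSON body for TF Serving's REST predict API.

    TF Serving exposes POST /v1/models/<name>:predict and expects a JSON
    body with an "instances" list; 8501 is its default REST port.
    """
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# "my_model" is a placeholder model name for illustration.
url, body = predict_request("localhost", 8501, "my_model", [[1.0, 2.0]])
# Send with e.g. urllib.request.urlopen(urllib.request.Request(url, body.encode()))
# against a running TF Serving container; the response carries a "predictions" list.
```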
KServe: A Robust and Extensible Cloud Native Model Server
If you are familiar with Kubeflow, you know KFServing as the platform's model server and inference engine. In September last year, the KFServing project went through a transformation to become KServe. Beyond the name change, KServe is now an independent component that has graduated from the Kubeflow project. The separation allows KServe to evolve as a standalone, cloud native inference engine deployed as its own model server. Of course, it will continue to integrate tightly with Kubeflow, but the two will be treated and maintained as independent open source projects.
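On a cluster with KServe installed, models are deployed declaratively through an InferenceService custom resource. A minimal sketch of such a manifest (the service name and storageUri below are illustrative, following the pattern in KServe's examples):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris            # illustrative service name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn           # KServe picks a matching serving runtime
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```

Applying this with `kubectl apply -f` is enough for KServe to pull the model artifacts from the storage URI and stand up an inference endpoint for them.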