Tuesday, December 24, 2019

Publishing Keras Model API with TensorFlow Serving

Building an ML model is a crucial task. Running an ML model in production is no less complex or important. In the past I wrote about serving an ML model through a Flask REST API: Publishing Machine Learning API with Python Flask. While this approach works, it lacks some important capabilities (see the sketch after this list):

  • Model versioning 
  • Request batching 
  • Multithreading 
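
As a preview of the versioning point, here is a minimal client-side sketch. It assumes TensorFlow Serving (introduced below) is already running a model named my_model on port 8501; the model name, version number, and sample input are illustrative placeholders of mine, not values from the original post. Request batching and multithreading happen server-side, so they need no client code at all.

```python
import requests

# Pinning "/versions/2" in the URL uses TensorFlow Serving's built-in
# model versioning; drop that segment to query the latest available
# version instead.
url = "http://localhost:8501/v1/models/my_model/versions/2:predict"

# "instances" is the row-oriented payload format of the TF Serving
# REST API; the feature values here are made-up sample data.
response = requests.post(url, json={"instances": [[5.1, 3.5, 1.4, 0.2]]})
print(response.json())
```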

TensorFlow comes with a set of tools to help you run ML models in production. One of these tools is TensorFlow Serving. There is an excellent tutorial that describes how to configure and run it: TensorFlow Serving with Docker. I follow the same steps in my example.
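
Below is a hedged sketch of those steps: exporting a Keras model in the SavedModel format that TensorFlow Serving consumes, then starting the serving container. The toy architecture and the /tmp/my_model path are placeholders of my own, not values from the tutorial.

```python
import tensorflow as tf

# A toy, untrained Keras model standing in for the post's real model;
# it exists only to demonstrate the export format.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# TensorFlow Serving discovers models through numeric version
# subdirectories, so the export path ends in "/1"; exporting a new
# "/2" directory later is how a version upgrade rolls out.
tf.saved_model.save(model, "/tmp/my_model/1")

# Then, per the TensorFlow Serving with Docker tutorial, the official
# image serves the export over REST on port 8501 (run in a shell):
#
#   docker run -p 8501:8501 \
#     --mount type=bind,source=/tmp/my_model,target=/models/my_model \
#     -e MODEL_NAME=my_model -t tensorflow/serving
```

Once the container is up, the versioned REST endpoint shown earlier becomes available, and batching and multithreading are handled by the server itself.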

Read more in my Towards Data Science post.
