Monday, July 19, 2021

Serving an ML Model with Docker, RabbitMQ, FastAPI and Nginx

In this tutorial I explain how to serve a machine learning model using Docker, RabbitMQ, FastAPI and Nginx. The solution is based on our open-source product, Katana ML Skipper (or simply Skipper), which runs an ML workflow as a group of microservices. Skipper is not limited to ML: you can run any workload with it and plug in your own services. Feel free to reach out if you have any questions.
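To make the moving parts concrete, here is a minimal sketch (not Skipper's actual code) of the core pattern: a FastAPI gateway accepts a request and publishes it to a RabbitMQ queue, and a separate worker consumes the queue and runs the model. The queue name "ml_tasks", the "rabbitmq" hostname and the predict() stub are illustrative assumptions; in the full setup Nginx sits in front of the FastAPI container as a reverse proxy, and each piece runs in its own Docker container.

# gateway.py -- FastAPI service that forwards inference requests to RabbitMQ.
# Illustrative sketch only; queue name "ml_tasks" and host "rabbitmq" are assumptions.
import json

import pika
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list  # e.g. [5.1, 3.5, 1.4, 0.2]

@app.post("/predict")
def enqueue_prediction(req: PredictRequest):
    # One connection per request keeps the example simple; a production
    # service would reuse the connection or use an async client.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="ml_tasks")
    channel.basic_publish(exchange="", routing_key="ml_tasks", body=json.dumps(req.dict()))
    connection.close()
    return {"status": "queued"}

# worker.py -- consumes requests from the queue and runs model inference.
import json

import pika

def predict(features):
    # Placeholder for real inference, e.g. a loaded scikit-learn model.
    return sum(features)

def on_message(channel, method, properties, body):
    payload = json.loads(body)
    print("prediction:", predict(payload["features"]))
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="ml_tasks")
channel.basic_consume(queue="ml_tasks", on_message_callback=on_message)
channel.start_consuming()

Run the gateway with uvicorn gateway:app and the worker with python worker.py; with both containers on the same Docker network, the "rabbitmq" hostname resolves to the broker container.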

