Sunday, October 3, 2021

MLOps: Sharing Model Across Services

Typically you want to scale ML model training and ML inference/prediction services independently. This means the services should not share file storage; at least, this holds in a Kubernetes cluster environment, where the two workloads usually run in separate pods with separate volumes. In this post I explain how you can transfer model files between services using RabbitMQ.
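The core idea can be sketched as chunked file transfer over a message channel: the training service serializes the model, splits it into fixed-size chunks, and publishes each chunk, and the inference service reassembles them. The sketch below is a minimal illustration using an in-process `queue.Queue` as a stand-in for a RabbitMQ channel (the function names, the chunk size, and the sentinel convention are my own assumptions, not from the post); with the `pika` library you would instead `basic_publish` each chunk to a queue and consume it on the other side.

```python
import queue

CHUNK_SIZE = 64 * 1024  # chunk large model files so no single message is huge


def publish_model(channel: queue.Queue, model_bytes: bytes) -> None:
    """Producer side (training service): split the serialized model into
    chunks and publish them, followed by an end-of-file sentinel."""
    for offset in range(0, len(model_bytes), CHUNK_SIZE):
        channel.put(model_bytes[offset:offset + CHUNK_SIZE])
    channel.put(None)  # sentinel: no more chunks


def consume_model(channel: queue.Queue) -> bytes:
    """Consumer side (inference service): collect chunks until the
    sentinel arrives, then reassemble the original bytes."""
    chunks = []
    while (chunk := channel.get()) is not None:
        chunks.append(chunk)
    return b"".join(chunks)


# Demo: an in-process queue standing in for a RabbitMQ channel.
model = b"fake-model-weights" * 10000
q = queue.Queue()
publish_model(q, model)
restored = consume_model(q)
```

With a real broker, the queue gives you durability and backpressure for free, which is exactly what decoupled training and inference pods need.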
