Sunday, February 13, 2022
Development Workflow with Hugging Face Transformer Model
This tutorial explains how I do app development with a Hugging Face Transformer model. Typically the flow involves fine-tuning the model on a Colab GPU. The fine-tuned model is then downloaded to my local development workstation, where I continue development and use the model for inference tasks. To be able to run complex library dependencies locally, my development environment is set up with a remote Python interpreter through PyCharm and Docker.
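For the remote-interpreter setup, a minimal Dockerfile along these lines is enough for PyCharm to use the container's Python as the project interpreter; the exact package versions and base image here are illustrative assumptions, not the ones I necessarily use.

```dockerfile
# Hypothetical sketch of the dev-container image.
# Base image and pinned versions are placeholders.
FROM python:3.9-slim

WORKDIR /app

# Heavy ML dependencies live in the image, not on the host.
RUN pip install --no-cache-dir torch transformers

# Source code is mounted or copied in by PyCharm's Docker integration.
COPY . /app
```

In PyCharm, this image is selected under Settings → Project → Python Interpreter → Add Interpreter → Docker, after which code completion and runs all resolve against the container's libraries.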
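The two halves of this flow can be sketched as below: a minimal outline, assuming the `transformers` library is installed on both sides, and using `my-fine-tuned-model` as a placeholder directory name for the exported model.

```python
# Sketch of the Colab-to-workstation model handoff.
# Assumes the Hugging Face `transformers` library; the directory name
# "my-fine-tuned-model" and the task "text-classification" are placeholders.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

MODEL_DIR = "my-fine-tuned-model"  # hypothetical export directory


def save_after_finetuning(model, tokenizer):
    """Run in Colab after fine-tuning: writes config, weights, and
    tokenizer files into MODEL_DIR, ready to zip and download."""
    model.save_pretrained(MODEL_DIR)
    tokenizer.save_pretrained(MODEL_DIR)


def load_for_inference():
    """Run on the local workstation after copying MODEL_DIR over:
    reloads the fine-tuned model and wraps it in an inference pipeline."""
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
    return pipeline("text-classification", model=model, tokenizer=tokenizer)
```

The key point is that `save_pretrained` / `from_pretrained` work symmetrically on a plain directory, so the transfer between Colab and the workstation is just a file copy.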
Labels:
Hugging Face,
Machine Learning,
Python