Monday, January 29, 2024

LLM Structured Output with Local Haystack RAG and Ollama

Haystack 2.0 provides functionality to validate LLM output and enforce a proper JSON structure based on a predefined Pydantic class. In this post I show how you can run this on your local machine with Ollama. This is possible thanks to the OllamaGenerator class available from the Haystack Ollama integration.
