Monday, November 27, 2023

Easy-to-Follow RAG Pipeline Tutorial: Invoice Processing with ChromaDB & LangChain

I explain how to implement a pipeline that processes invoice data from PDF documents. The data is loaded into the Chroma DB vector store, and through the LangChain API the data from the vector store is ready to be consumed by an LLM as part of the RAG infrastructure.
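Roughly, the ingestion flow looks like this; a minimal sketch, assuming a local invoice.pdf and a sentence-transformers embedding model (the file names and model choice are illustrative, not necessarily the exact ones from the video):

from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# Load the invoice PDF and split it into overlapping chunks
docs = PyPDFLoader("invoice.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks and persist them in a local Chroma collection
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# The retriever exposes the vector store to the LLM through a RAG chain
retriever = store.as_retriever(search_kwargs={"k": 3})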

 

Sunday, November 19, 2023

Vector Database Impact on RAG Efficiency: A Simple Overview

I explain the importance of a vector DB for RAG implementation. I show with a simple example how data retrieval from the vector DB can affect LLM performance. Before data is sent to the LLM, you should verify that quality data is being fetched from the vector DB.
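That verification step can be as simple as inspecting the retrieved chunks and their similarity scores; a minimal sketch, assuming a persisted Chroma collection (the path, embedding model, and query are illustrative):

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# Reopen the persisted Chroma collection
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)

# Inspect what the vector store returns before it reaches the LLM
results = store.similarity_search_with_score("What is the invoice total?", k=3)
for doc, score in results:
    # Lower distance means a closer match; read the text to confirm relevance
    print(f"score={score:.4f}", doc.page_content[:200])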

 

Monday, November 13, 2023

JSON Output from Mistral 7B LLM [LangChain, Ctransformers]

I explain how to compose a prompt for the Mistral 7B LLM, running with LangChain and Ctransformers, so that the output is returned as a JSON string without any additional text.
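The core of the idea is an instruction that forbids extra prose; a minimal sketch, assuming a local quantized Mistral 7B GGUF file (the model path, config values, and prompt wording are illustrative):

from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Run the quantized model on CPU via Ctransformers
llm = CTransformers(model="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
                    model_type="mistral",
                    config={"max_new_tokens": 256, "temperature": 0.0})

# Instruct the model to answer with JSON only, no explanations around it
template = """[INST] Extract the fields from the text below.
Respond with a valid JSON object only, no additional text.
Text: {text} [/INST]"""

chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(template))
answer = chain.run(text="Invoice 123, total 450.00 EUR, due 2023-12-01")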

 

Monday, November 6, 2023

Structured JSON Output from LLM RAG on Local CPU [Weaviate, Llama.cpp, Haystack]

I explain how to get structured JSON output from an LLM RAG pipeline running through the Haystack API on top of Llama.cpp. Vector embeddings are stored in a Weaviate database, the same as in my previous video. When extracting data, a structured JSON response is preferred because we are not interested in additional descriptions.
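A rough sketch of the two halves of such a pipeline; note this calls llama-cpp-python directly for generation rather than wiring Llama.cpp into a Haystack node as the video may do, and the host, index, model path, and field names are all illustrative assumptions:

from haystack.document_stores import WeaviateDocumentStore
from haystack.nodes import EmbeddingRetriever
from llama_cpp import Llama

# Retrieve context from Weaviate with a Haystack retriever
store = WeaviateDocumentStore(host="http://localhost", port=8080, index="Invoice")
retriever = EmbeddingRetriever(document_store=store,
                               embedding_model="sentence-transformers/all-MiniLM-L6-v2")
docs = retriever.retrieve("invoice number and total", top_k=3)
context = "\n".join(d.content for d in docs)

# Ask Llama.cpp for JSON only, so there are no descriptions to strip away
llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048)
prompt = (f"Context:\n{context}\n\n"
          "Return only a JSON object with fields invoice_number and total.")
out = llm(prompt, max_tokens=256, temperature=0)["choices"][0]["text"]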

 

Sunday, October 22, 2023

Invoice Data Processing with Llama2 13B LLM RAG on Local CPU [Weaviate, Llama.cpp, Haystack]

I explain how to set up a local LLM RAG pipeline to process invoice data with Llama2 13B. Based on my experiments, Llama2 13B works better with tabular data than the Mistral 7B model. This example presents a production LLM RAG setup with a Weaviate database for vector embeddings, Haystack for the LLM API, and Llama.cpp to run Llama2 13B on a local CPU.
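A sketch of the generation step for tabular data, assuming a quantized Llama2 13B GGUF file (the model path, sample table, and field names are illustrative):

from llama_cpp import Llama

# Quantized Llama2 13B running entirely on a local CPU
llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048)

# Per the experiments above, Llama2 13B maps invoice table rows to
# structured output more reliably than Mistral 7B
table = "Widget A  2  10.00\nWidget B  1  25.50"
prompt = ("[INST] The context below contains an invoice table.\n"
          "Return a JSON array with one object per line item, "
          f"with keys description, quantity, price.\nContext: {table} [/INST]")

result = llm(prompt, max_tokens=512, temperature=0)["choices"][0]["text"]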

 

Monday, October 16, 2023

Invoice Data Processing with Mistral LLM on Local CPU

I explain a solution for extracting invoice document fields with the open-source Mistral LLM. It runs on a CPU and doesn't require a cloud machine. I'm using the Mistral 7B LLM model, LangChain, Ctransformers, and the FAISS vector store to run it on a local CPU machine. This approach is a great advantage for enterprise systems where running ML models in the cloud is not allowed for privacy reasons.
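A minimal sketch of how these pieces fit together on a CPU (the model file, embedding model, sample text, and query are illustrative):

from langchain.llms import CTransformers
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# FAISS index over invoice text, built and queried on a local CPU
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = FAISS.from_texts(["Invoice 123, total 450.00 EUR"], embeddings)

# Quantized Mistral 7B served locally through Ctransformers
llm = CTransformers(model="mistral-7b-instruct-v0.1.Q4_K_M.gguf", model_type="mistral")

# Retrieval-augmented question answering over the invoice index
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(qa.run("What is the invoice total?"))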

 

Monday, October 9, 2023

Skipper MLOps Debugging and Development on Your Local Machine

I explain how to stop some of the Skipper MLOps services running in Docker and debug/develop the code of these services locally. This improves the development workflow: there is no need to deploy a code change to a Docker container, as it can be tested locally. A service that runs locally connects to the Skipper infrastructure through a RabbitMQ queue.
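The local wiring boils down to a RabbitMQ consumer; a minimal sketch with the pika client (the queue name and host are illustrative assumptions, Skipper's actual configuration may differ):

import pika

# Connect the locally running service to the RabbitMQ broker used by Skipper
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

def on_message(ch, method, properties, body):
    # Handle a task that would normally be processed inside the Docker container
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="skipper_tasks", on_message_callback=on_message)
channel.start_consuming()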