Tuesday, September 16, 2025
Ollama vs MLX Inference Speed on Mac Mini M4 Pro 64GB
MLX is faster on the first inference, but thanks to model caching and other optimizations in Ollama, the second and subsequent inference runs are faster on Ollama.
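The warm-up effect on the Ollama side is easy to observe with a small timing script. Below is a minimal sketch, assuming the ollama Python package, a running local Ollama server, and an already pulled model (the model name here is an assumption, not the one used in the post):

import time
import ollama  # assumes the `ollama` Python package and a local Ollama server

MODEL = "llama3.1:8b"  # assumed model name; replace with the model you benchmark
PROMPT = "Explain structured data extraction in one sentence."

def timed_generate(label):
    # Time one blocking generate call against the local Ollama server.
    start = time.perf_counter()
    ollama.generate(model=MODEL, prompt=PROMPT)
    print(f"{label}: {time.perf_counter() - start:.2f}s")

timed_generate("first inference (cold start, model is loaded into memory)")
timed_generate("second inference (warm, model stays cached by Ollama)")

The first call pays the model load cost; the second call reuses the model already held in memory, which is where Ollama pulls ahead in repeated runs.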
Labels: Sparrow, Structured Data, vLLM
Wednesday, September 10, 2025
Advanced Structured Data Processing in Sparrow
I added instruction and validation functionality to Sparrow. This makes it possible to apply business logic to document data directly through a Sparrow query. For example, it can check whether given fields are present in the document.
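To illustrate the idea of field-presence validation on extracted document data, here is a hypothetical sketch; the function and field names below are made up for this example and are not Sparrow's actual API:

# Hypothetical illustration of validating that required fields are present
# in extracted document data; names are illustrative, not Sparrow's API.
REQUIRED_FIELDS = ["invoice_number", "total_amount", "due_date"]

def missing_required_fields(extracted: dict, required: list[str]) -> list[str]:
    """Return the required fields that are absent or empty in the extracted data."""
    return [field for field in required if not extracted.get(field)]

extracted_data = {"invoice_number": "INV-001", "total_amount": "1250.00"}
missing = missing_required_fields(extracted_data, REQUIRED_FIELDS)
if missing:
    print(f"Validation failed, missing fields: {missing}")
else:
    print("All required fields are present.")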
Labels: Python, Structured Data, vLLM
Monday, September 1, 2025
My Experience with PyCharm AI Assistant
Explaining my experience with the PyCharm AI Assistant and showing an example of how code changes can be reviewed one by one before they are accepted into your codebase.