

How to run LLMs locally ⤵️ with Ollama
Set up Ollama on Docker, run the Llama3.2 and DeepSeek-R1 models locally, chat through Open WebUI, and use the CLI/API, all in 30 minutes with the privacy of your own machine 🚀
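As a quick taste of the API piece, here is a minimal sketch in Python, assuming the Ollama server is already running locally on its default port 11434 (for example via Docker) and that the llama3.2 model has been pulled:

```python
import requests

# Ollama's default local REST endpoint; assumes the server is running
# (e.g. started with Docker) and `llama3.2` has already been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3.2",
        "prompt": "Explain the test pyramid in one sentence.",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```

Because everything runs on localhost, no prompt or response ever leaves your machine.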