How to run LLMs locally ⤵️ with Ollama
Set up Ollama on Docker, run the Llama 3.2 and DeepSeek-R1 models locally, chat through Open WebUI, and use the CLI/API, all in 30 minutes, with the privacy of your own machine 🚀
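As a preview, here is a minimal sketch of the workflow the post walks through, assuming the official `ollama/ollama` and Open WebUI Docker images and their default ports (11434 for the Ollama API, 3000 mapped to Open WebUI):

```bash
# Start the Ollama server in a container (official ollama/ollama image)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with models from the CLI inside the container
docker exec -it ollama ollama run llama3.2
docker exec -it ollama ollama run deepseek-r1

# Query the REST API directly from the host
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'

# Optional: Open WebUI as a browser chat front end (official image)
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui \
  --restart always ghcr.io/open-webui/open-webui:main
```

Exact flags (volume names, port mappings) are assumptions based on the images' standard documentation; the sections below cover each step in detail.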