How to run LLMs locally ⤵️ with Ollama
Set up Ollama in Docker, run the Llama 3.2 and DeepSeek-R1 models locally, chat through Open WebUI, and use the CLI/API, all in 30 minutes and with the privacy of your own machine 🚀
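The stack the subtitle describes (Ollama serving models in Docker, Open WebUI as the chat front end) can be sketched as a Docker Compose file. Image names, ports, and the `OLLAMA_BASE_URL` variable follow the two projects' published defaults, but treat this as an assumed starting point, not the article's exact configuration:

```yaml
# Sketch of an Ollama + Open WebUI stack (assumed defaults; verify against each project's docs)
services:
  ollama:
    image: ollama/ollama                        # official Ollama image
    ports:
      - "11434:11434"                           # Ollama's default API port
    volumes:
      - ollama:/root/.ollama                    # persist downloaded models across restarts

  open-webui:
    image: ghcr.io/open-webui/open-webui:main   # Open WebUI's published image
    ports:
      - "3000:8080"                             # UI reachable at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434     # point the UI at the Ollama container
    depends_on:
      - ollama

volumes:
  ollama:
```

With this saved as `docker-compose.yml`, `docker compose up -d` would bring both containers up, and `docker compose exec ollama ollama run llama3.2` would pull a model and start a CLI chat inside the Ollama container.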