How to Set Up Ollama with OpenClaw
Ollama lets you run AI models locally — completely free, no API keys, no cloud dependency. It's the privacy-first choice for OpenClaw users who want full control over their data.
First, install Ollama from ollama.com. It supports macOS, Linux, and Windows (via WSL2). Then pull a model — Llama 3.1 8B is a great starting point, offering good quality in a manageable size.
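Once the install and pull have finished, a quick smoke test confirms that the CLI is on your PATH and that the model loads. A minimal sketch (assumes the `llama3.1` pull completed and the Ollama service is running):

```shell
# Confirm the CLI is installed and report its version
ollama --version

# List the models available locally (llama3.1 should appear here)
ollama list

# One-off prompt to verify the model loads and responds
ollama run llama3.1 "Reply with the single word: ready"
```

The first run of `ollama run` loads the model into memory, so expect a delay of several seconds before the response appears.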
Configure OpenClaw to use Ollama as its provider. OpenClaw connects to Ollama's local API (default: localhost:11434) using the OpenAI-compatible endpoint.
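You can sanity-check that endpoint directly before wiring up OpenClaw. Ollama exposes an OpenAI-compatible chat completions route at `/v1/chat/completions`; a sketch with `curl` (assumes the server is running on the default port):

```shell
# Query Ollama's OpenAI-compatible endpoint (server must be running)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

A JSON response with a `choices` array means the endpoint is live and OpenClaw should be able to connect to it.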
Hardware matters with local models. A 7-8B parameter model needs about 8 GB RAM and works on most modern computers. A 13B model needs 16 GB. For 70B models, you need 64 GB+ RAM or a GPU with sufficient VRAM.
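Before pulling a larger model, it is worth checking how much RAM the machine actually has. A minimal check on Linux (assumes the standard `free` utility; the macOS equivalent is shown as a comment):

```shell
# Print total RAM in GB on Linux (procps `free`)
free -g | awk '/^Mem:/ {print $2 " GB total RAM"}'

# macOS equivalent: total (unified) memory, reported in bytes by sysctl
# sysctl -n hw.memsize | awk '{print $1 / 1024 / 1024 / 1024 " GB total RAM"}'
```

Compare the result against the model sizes above, and leave a few GB of headroom for the OS and the context window.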
Apple Silicon Macs are excellent for Ollama — the unified memory architecture lets models use both CPU and GPU memory. An M2 Mac Mini with 16 GB RAM runs Llama 3.1 8B at very usable speeds.
The trade-off: local models are slower than cloud APIs and generally lower quality than Claude or GPT-4o. But they're completely free, work offline, and your data never leaves your machine.
Tip: Start with llama3.1:8b for general use. Try qwen2.5:14b for better quality if your hardware supports it.
```shell
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.1

# Configure OpenClaw
openclaw config set provider ollama
openclaw config set model llama3.1
```