OpenClaw on Mac Mini: Complete Setup Guide
- M4 Mac Mini 16GB ($599) runs 14B parameter models at 22 tokens/sec, fast enough for most practical agentic workflows
- Energy cost is ~$2.50/month running 24/7; on hosting costs alone, break-even vs a $10/month VPS takes ~80 months, but avoided per-token API fees shorten that dramatically
- Use launchd plists (not PM2) for native macOS service management with automatic restart on crash
- Cloudflare Tunnel eliminates the need for port forwarding — your instance gets a stable HTTPS URL with zero router configuration
- Local inference via Ollama means zero per-token API costs and complete data privacy — models never leave your hardware
- Disable sleep with `sudo pmset -a sleep 0` and set `sudo pmset -a autorestart 1` so the machine recovers from power failures during unattended headless operation
- For 32B-parameter models (Qwen 2.5 32B, etc.) you need the M4 32GB config; 16GB tops out around 14B models
The Mac Mini M2 or M4 is one of the best platforms for running OpenClaw as a 24/7 home AI server — it draws only 5–10 watts at idle, runs Ollama for free local inference, and costs nothing in monthly hosting fees after the initial hardware purchase. Combined with OpenClaw's Telegram integration, you get a personal AI assistant that never sleeps.
This guide walks through the complete setup: from macOS configuration to OpenClaw installation, Ollama model selection, and Telegram bot connection.
Why Mac Mini Is Ideal for OpenClaw
Apple's M-series chips changed what's possible with local AI. The unified memory architecture means your Mac Mini can run large language models at speeds that would require a $3,000+ GPU workstation on traditional x86 hardware. A few reasons Mac Mini stands out as an OpenClaw host:
Energy efficiency. An M2 Mac Mini idles at roughly 6–8 watts and peaks around 25–30 watts under heavy inference load. Running 24/7 for a month costs less than $3 in electricity; a gaming PC running the same workload might consume 200–400 watts.
Unified memory for models. Apple Silicon's unified memory pool is shared by the CPU, GPU, and Neural Engine, so there is no separate VRAM to run out of. A 16GB M2 Mac Mini can comfortably run 7B parameter models entirely in memory, and a 32GB M4 Mac Mini handles 30B-class models smoothly.
macOS stability. macOS is a Unix-based OS with excellent process management, a mature network stack, and native support for the same tools (Homebrew, Node.js, Python) that OpenClaw depends on. It rarely needs reboots and handles long-running server processes gracefully.
Silent operation. The Mac Mini M4 is essentially silent under moderate load. Running it on a shelf or in a closet as a headless server is completely practical.
Hardware Recommendations
Choosing the right Mac Mini configuration matters. Here's the breakdown:
Minimum viable: M2 Mac Mini, 16GB RAM, 256GB SSD (~$599 refurbished)
- Runs 7B–13B models via Ollama at 15–30 tokens/sec
- Handles OpenClaw core with 3–5 concurrent agent tasks
- 256GB is tight if you plan to store multiple models; upgrade to 512GB if possible
Recommended: M4 Mac Mini, 16GB RAM (~$599)
- Significantly faster inference than the M2 at the same price point
- Runs Llama 3.1 8B at 40–55 tokens/sec
- Good for single-user power use or small team shared instance
Power user: M4 Mac Mini, 32GB RAM
- Runs 32B models (e.g., Qwen 2.5 32B, Mistral 22B) at usable speeds
- Multiple models loaded simultaneously without eviction
- Ideal if privacy and local-only inference is the primary goal
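The RAM tiers above follow from a simple rule of thumb: Ollama's default 4-bit quantizations need roughly 4.5 bits per weight, plus headroom for the KV cache and macOS itself. A back-of-envelope sketch (the helper and the 4.5-bit figure are approximations, not exact Ollama internals):

```python
def q4_weights_gb(params_billion, bits_per_weight=4.5):
    """Approximate memory footprint (GB) of a 4-bit-quantized model.

    params_billion * 1e9 weights * bits / 8 bits-per-byte / 1e9 bytes-per-GB
    cancels to params_billion * bits_per_weight / 8.
    """
    return params_billion * bits_per_weight / 8

# A 14B model needs roughly 8 GB of unified memory, which fits on a
# 16GB machine alongside macOS; a 32B model needs ~18 GB and does not.
```

This is why the benchmark table later in the guide shows Qwen 2.5 32B hitting out-of-memory on the 16GB configurations.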
Step-by-Step Installation
1. Prepare macOS
Start with a fresh macOS Sequoia install or a clean user account. Enable SSH for headless access:
sudo systemsetup -setremotelogin on
Verify with `ssh yourusername@mac-mini.local` from another machine.
2. Install Homebrew
Homebrew is the package manager that makes the rest of the setup simple:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Follow the post-install instructions to add Homebrew to your PATH. On Apple Silicon, Homebrew installs under /opt/homebrew, and the installer prints the exact `eval "$(/opt/homebrew/bin/brew shellenv)"` line to add to your shell profile.
3. Install Node.js
OpenClaw requires Node.js 20 or higher. Install via nvm for easy version management:
brew install nvm
mkdir ~/.nvm
Add to ~/.zshrc:
export NVM_DIR="$HOME/.nvm"
[ -s "/opt/homebrew/opt/nvm/nvm.sh" ] && . "/opt/homebrew/opt/nvm/nvm.sh"
Then reload your shell and install Node:
source ~/.zshrc
nvm install 22
nvm use 22
nvm alias default 22
Verify: node --version should show v22.x.x.
4. Install OpenClaw
npm install -g openclaw
openclaw --version
Run the initial setup wizard:
openclaw init
This creates ~/.openclaw/config.json where you'll configure API keys, enabled skills, and model preferences.
5. Install and Configure Ollama
Ollama is the recommended local model runner for Mac Mini. It has native Apple Silicon optimizations and integrates cleanly with OpenClaw:
brew install ollama
Pull your first model:
ollama pull llama3.2
ollama pull qwen2.5:14b  # Recommended for 16GB+ systems
Start the Ollama server:
ollama serve
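Before pointing OpenClaw at Ollama, it's worth confirming the server is up and the models actually pulled. Ollama's REST API lists installed models at `/api/tags`; a minimal check from Python's standard library might look like this (the helper names are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def parse_tags(payload):
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def list_models(base_url=OLLAMA_URL):
    """Query a running Ollama server for its installed models."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_tags(json.load(resp))
```

If `list_models()` raises a connection error, `ollama serve` isn't running; if it returns an empty list, the `ollama pull` step didn't complete.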
In your OpenClaw config, point to the local Ollama instance:
{
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434",
"defaultModel": "qwen2.5:14b"
}
},
"defaultProvider": "ollama"
}
Configuring launchd for Auto-Start
launchd is macOS's native service manager. It starts services at login, restarts them on crash, and handles logging — perfect for a headless server setup.
Create a plist for OpenClaw at ~/Library/LaunchAgents/com.openclaw.agent.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.openclaw.agent</string>
<key>ProgramArguments</key>
<array>
<string>/Users/YOURUSERNAME/.nvm/versions/node/v22.0.0/bin/openclaw</string>
<string>serve</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/Users/YOURUSERNAME/.openclaw/logs/openclaw.log</string>
<key>StandardErrorPath</key>
<string>/Users/YOURUSERNAME/.openclaw/logs/openclaw-error.log</string>
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin</string>
</dict>
</dict>
</plist>
Load and start:
mkdir -p ~/.openclaw/logs
launchctl load ~/Library/LaunchAgents/com.openclaw.agent.plist
launchctl start com.openclaw.agent
Create a similar plist for Ollama (com.ollama.server.plist) following the same pattern.
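As a starting point, a minimal Ollama plist might look like the sketch below; the label and the Homebrew binary path are assumptions, so adjust them to match your install:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.ollama.server</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/ollama</string>
    <string>serve</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```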
Disable Sleep and Configure Energy Settings
A headless server must never sleep. Open System Settings → Energy → set "Prevent automatic sleeping when the display is off" to on. Or use the command line:
# Prevent sleep
sudo pmset -a sleep 0
sudo pmset -a disksleep 0
sudo pmset -a displaysleep 10
# Wake on network access (useful for Wake-on-LAN)
sudo pmset -a womp 1
# Restart automatically after power failure
sudo pmset -a autorestart 1
Verify settings:
pmset -g
Cloudflare Tunnel for Remote Access
Instead of opening ports on your home router, Cloudflare Tunnel creates a secure outbound connection. Your OpenClaw instance becomes accessible at a stable URL with zero port forwarding.
brew install cloudflared
# Authenticate
cloudflared tunnel login
# Create tunnel
cloudflared tunnel create openclaw-home
Configure ~/.cloudflared/config.yml:
tunnel: YOUR_TUNNEL_ID
credentials-file: /Users/YOURUSERNAME/.cloudflared/YOUR_TUNNEL_ID.json
ingress:
  - hostname: openclaw.yourdomain.com
    service: http://localhost:3000
  - service: http_status:404
# Route DNS
cloudflared tunnel route dns openclaw-home openclaw.yourdomain.com
# Run as service
sudo cloudflared service install
Now https://openclaw.yourdomain.com routes to your Mac Mini from anywhere in the world, with TLS handled automatically.
Performance Benchmarks
Real-world tokens/sec measured on Mac Mini hardware with Ollama:
| Model | M2 16GB | M4 16GB | M4 32GB |
|---|---|---|---|
| Llama 3.2 3B | 65 t/s | 95 t/s | 100 t/s |
| Llama 3.1 8B | 28 t/s | 42 t/s | 48 t/s |
| Qwen 2.5 14B | 14 t/s | 22 t/s | 26 t/s |
| Qwen 2.5 32B | OOM | OOM | 11 t/s |
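Throughput translates directly into perceived latency: a reply's generation time is just its length divided by tokens/sec. A trivial sketch using the table's M4 16GB figure for Qwen 2.5 14B:

```python
def generation_seconds(tokens, tokens_per_sec):
    """Approximate time to generate a reply of the given length,
    ignoring prompt-processing time."""
    return tokens / tokens_per_sec

# At ~22 t/s, a 500-token reply takes roughly 23 seconds on an M4 16GB;
# the same reply at 95 t/s (Llama 3.2 3B on M4) lands in about 5 seconds.
```

This is why smaller models remain attractive for chat-style interactions even when a 14B model fits in memory.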
Cost Analysis: Mac Mini vs VPS
Mac Mini M4 16GB ($599 one-time):
- Electricity: ~$2.50/month (25W average, $0.14/kWh)
- Cloudflare Tunnel: free tier (or $0–5/month for Pro)
- Ollama: free, unlimited local inference
- Total monthly: ~$2.50–7.50
- Break-even vs a $10/mo VPS: ~80 months on hosting costs alone, far sooner once avoided per-token API fees are counted
- Break-even vs a $15/mo VPS: ~48 months on hosting costs alone
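These figures are straightforward to reproduce. Assuming a 25W average draw, 720 hours per month, and $0.14/kWh (the helper names are illustrative):

```python
def monthly_electricity_usd(avg_watts, usd_per_kwh=0.14, hours=720):
    """Monthly energy cost for a device running 24/7 (~720 h/month)."""
    return avg_watts / 1000 * hours * usd_per_kwh

def breakeven_months(hardware_usd, vps_monthly_usd, electricity_monthly_usd):
    """Months until the one-time hardware cost beats recurring VPS fees."""
    return hardware_usd / (vps_monthly_usd - electricity_monthly_usd)

# 25 W average comes to ~$2.52/month; against a $10/mo VPS the $599 box
# needs ~80 months on hosting alone, so the real payback comes from
# avoided per-token API spend on local inference.
```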
VPS ($10–15/month):
- No upfront cost
- GPU-accelerated inference for large models
- Managed infrastructure, easy scaling
- API costs for Claude/GPT add on top
- CPU-only inference: painfully slow for 7B+ models
- Fine for routing to cloud APIs, but then you're paying API costs too
When Mac Mini Beats VPS
Choose Mac Mini if:
- Privacy is non-negotiable (medical, legal, personal data never leaves your hardware)
- You want to run large local models (14B–34B) without per-token costs
- You're already paying $10+/month for VPS and it's running 24/7 anyway
- You want a persistent agent that can interact with local files and applications
- You're in a location with cheap electricity
Choose a VPS if:
- You need instant scalability (burst to 10x load for a day)
- You're running multiple services that already justify a VPS
- You want managed backups, snapshots, and datacenter reliability
- You don't want to manage physical hardware
- You need 99.9%+ uptime SLA (home internet isn't that reliable)