
OpenClaw on Mac Mini: Complete Setup Guide

Key Takeaways:
  • M4 Mac Mini 16GB ($599) runs 14B parameter models at ~22 tokens/sec — fast enough for most agentic workflows
  • Energy cost is ~$2.50/month running 24/7; against a $10/month VPS the hardware pays for itself in roughly 80 months on hosting fees alone — much sooner once local inference replaces per-token API spend
  • Use launchd plists (not PM2) for native macOS service management with automatic restart on crash
  • Cloudflare Tunnel eliminates the need for port forwarding — your instance gets a stable HTTPS URL with zero router configuration
  • Local inference via Ollama means zero per-token API costs and complete data privacy — models never leave your hardware
  • Disable sleep with `sudo pmset -a sleep 0` and enable `autorestart 1` for unattended headless operation
  • For 32B models (Qwen 2.5 32B, etc.) you need the M4 32GB config; 16GB tops out at 14B models comfortably

The Mac Mini M2 or M4 is one of the best platforms for running OpenClaw as a 24/7 home AI server — it draws only 5–10 watts at idle, runs Ollama for free local inference, and costs nothing in monthly hosting fees after the initial hardware purchase. Combined with OpenClaw's Telegram integration, you get a personal AI assistant that never sleeps.

This guide walks through the complete setup: from macOS configuration to OpenClaw installation, Ollama model selection, and Telegram bot connection.

Why Mac Mini Is Ideal for OpenClaw

Apple's M-series chips changed what's possible with local AI. The unified memory architecture means your Mac Mini can run large language models at speeds that would require a $3,000+ GPU workstation on traditional x86 hardware. A few reasons Mac Mini stands out as an OpenClaw host:

Energy efficiency. An M2 Mac Mini idles at roughly 6–8 watts and peaks around 25–30 watts under heavy inference load. Running 24/7 for a month costs less than $3 in electricity. A gaming PC running the same workload might consume 200–400 watts.

Unified memory for models. Apple Silicon's unified memory pool is shared between the CPU, GPU, and Neural Engine. This means a 16GB M2 Mac Mini can comfortably run 7B parameter models entirely in memory with no VRAM bottleneck, and a 32GB M4 Mac Mini handles 30B-class models at usable speeds.

macOS stability. macOS is a Unix-based OS with excellent process management, a mature network stack, and native support for the same tools (Homebrew, Node.js, Python) that OpenClaw depends on. It rarely needs reboots and handles long-running server processes gracefully.

Silent operation. The Mac Mini M4 is essentially silent under moderate load. Running it on a shelf or in a closet as a headless server is completely practical.
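The electricity figure is easy to verify yourself. The numbers below are assumptions (25 W average draw, $0.14/kWh) — substitute your own wattage and utility rate:

```shell
# Monthly electricity cost: watts -> kWh -> dollars (assumed figures)
awk -v watts=25 -v rate=0.14 'BEGIN {
  kwh = watts * 24 * 30 / 1000        # ~18 kWh per month
  printf "~%.0f kWh/month, ~$%.2f/month\n", kwh, kwh * rate
}'
```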

Hardware Recommendations

Choosing the right Mac Mini configuration matters. Here's the breakdown:

Minimum viable: M2 Mac Mini, 16GB RAM, 256GB SSD (~$599 refurbished)
  • Runs 7B–13B models via Ollama at 15–30 tokens/sec
  • Handles OpenClaw core with 3–5 concurrent agent tasks
  • 256GB is tight if you plan to store multiple models; upgrade to 512GB if possible
Recommended: M4 Mac Mini, 16GB RAM, 256GB SSD (~$599 new)
  • Significantly faster inference than M2 at the same price point
  • Runs Llama 3.1 8B at 40–55 tokens/sec
  • Good for single-user power use or small team shared instance
Power user: M4 Mac Mini, 32GB RAM, 512GB SSD (~$799 new)
  • Runs 30B-class models (e.g., Qwen 2.5 32B, Mistral Small 22B) at usable speeds
  • Multiple models loaded simultaneously without eviction
  • Ideal if privacy and local-only inference is the primary goal
Skip: The M4 Pro Mac Mini (starting at $1,399) is excellent but the value proposition diminishes for most OpenClaw use cases. Unless you need to run 70B+ models or serve a team of 10+, the base M4 with 16–32GB hits the sweet spot.

Step-by-Step Installation

1. Prepare macOS

Start with a fresh macOS Sequoia install or a clean user account. Enable SSH for headless access:

bash
sudo systemsetup -setremotelogin on

Verify with `ssh yourusername@mac-mini.local` from another machine.

2. Install Homebrew

Homebrew is the package manager that makes the rest of the setup simple:

bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Follow the post-install instructions to add Homebrew to your PATH. For Apple Silicon, the path is /opt/homebrew/bin.
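On a fresh account, the post-install step usually means appending Homebrew's shellenv hook to your shell profile — a sketch assuming the default zsh shell and the Apple Silicon prefix:

```shell
# Add Homebrew to PATH for future shells (idempotent append to ~/.zprofile)
BREW_LINE='eval "$(/opt/homebrew/bin/brew shellenv)"'
grep -qxF "$BREW_LINE" ~/.zprofile 2>/dev/null || echo "$BREW_LINE" >> ~/.zprofile
```

Open a new terminal afterwards and confirm with `which brew`.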

3. Install Node.js

OpenClaw requires Node.js 20 or higher. Install via nvm for easy version management:

bash
brew install nvm
mkdir ~/.nvm

Add to ~/.zshrc:

export NVM_DIR="$HOME/.nvm"
[ -s "/opt/homebrew/opt/nvm/nvm.sh" ] && . "/opt/homebrew/opt/nvm/nvm.sh"

Then reload your shell and install Node 22:

bash
source ~/.zshrc
nvm install 22
nvm use 22
nvm alias default 22

Verify: `node --version` should show v22.x.x.

4. Install OpenClaw

bash
npm install -g openclaw
openclaw --version

Run the initial setup wizard:

bash
openclaw init

This creates ~/.openclaw/config.json where you'll configure API keys, enabled skills, and model preferences.

5. Install and Configure Ollama

Ollama is the recommended local model runner for Mac Mini. It has native Apple Silicon optimizations and integrates cleanly with OpenClaw:

bash
brew install ollama

Pull your first model:

bash
ollama pull llama3.2
ollama pull qwen2.5:14b  # Recommended for 16GB+ systems

Start the Ollama server:

bash
ollama serve

In your OpenClaw config, point to the local Ollama instance:

json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "defaultModel": "qwen2.5:14b"
    }
  },
  "defaultProvider": "ollama"
}
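Before restarting the agent, it's worth checking that the file parses — a stray comma in config.json is a common cause of a silent startup failure. A self-contained check using Python's stdlib (the sample mirrors the snippet above; swap the /tmp path for ~/.openclaw/config.json to check your real file):

```shell
# Validate OpenClaw-style JSON with python3's built-in json.tool
cat > /tmp/openclaw-config-sample.json <<'EOF'
{
  "providers": {
    "ollama": { "baseUrl": "http://localhost:11434", "defaultModel": "qwen2.5:14b" }
  },
  "defaultProvider": "ollama"
}
EOF
python3 -m json.tool /tmp/openclaw-config-sample.json > /dev/null && echo "valid JSON"
```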

Configuring launchd for Auto-Start

launchd is macOS's native service manager. It starts services at login, restarts them on crash, and handles logging — perfect for a headless server setup.

Create a plist for OpenClaw at ~/Library/LaunchAgents/com.openclaw.agent.plist:

xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.openclaw.agent</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/YOURUSERNAME/.nvm/versions/node/v22.0.0/bin/openclaw</string>
    <string>serve</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/Users/YOURUSERNAME/.openclaw/logs/openclaw.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/YOURUSERNAME/.openclaw/logs/openclaw-error.log</string>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin</string>
  </dict>
</dict>
</plist>

Replace YOURUSERNAME with your account name, and adjust the Node version in ProgramArguments to the path `nvm which 22` reports. Then load and start:

bash
mkdir -p ~/.openclaw/logs
launchctl load ~/Library/LaunchAgents/com.openclaw.agent.plist
launchctl start com.openclaw.agent

Create a similar plist for Ollama (com.ollama.server.plist) following the same pattern, or let Homebrew manage it: `brew services start ollama` registers its own launchd job.

Disable Sleep and Configure Energy Settings

A headless server must never sleep. Open System Settings → Energy → set "Prevent automatic sleeping when the display is off" to on. Or use the command line:

bash
# Prevent system and disk sleep; allow the display to sleep after 10 minutes
sudo pmset -a sleep 0
sudo pmset -a disksleep 0
sudo pmset -a displaysleep 10

# Wake on network access (wake-on-LAN)
sudo pmset -a womp 1

# Restart automatically after power failure
sudo pmset -a autorestart 1

Verify settings:

bash
pmset -g

Cloudflare Tunnel for Remote Access

Instead of opening ports on your home router, Cloudflare Tunnel creates a secure outbound connection. Your OpenClaw instance becomes accessible at a stable URL with zero port forwarding.

bash
brew install cloudflared

# Authenticate
cloudflared tunnel login

# Create tunnel
cloudflared tunnel create openclaw-home

Then configure ~/.cloudflared/config.yml:

yaml
tunnel: YOUR_TUNNEL_ID
credentials-file: /Users/YOURUSERNAME/.cloudflared/YOUR_TUNNEL_ID.json

ingress:
  - hostname: openclaw.yourdomain.com
    service: http://localhost:3000
  - service: http_status:404

bash
# Route DNS
cloudflared tunnel route dns openclaw-home openclaw.yourdomain.com

# Run as a service
sudo cloudflared service install

Now https://openclaw.yourdomain.com routes to your Mac Mini from anywhere in the world, with TLS handled automatically.

Performance Benchmarks

Real-world tokens/sec measured on Mac Mini hardware with Ollama:

Model          M2 16GB   M4 16GB   M4 32GB
Llama 3.2 3B   65 t/s    95 t/s    100 t/s
Llama 3.1 8B   28 t/s    42 t/s    48 t/s
Qwen 2.5 14B   14 t/s    22 t/s    26 t/s
Qwen 2.5 32B   OOM       OOM       11 t/s

For agentic workflows, 14–22 tokens/sec on a 14B model is entirely practical. Most tool calls and reasoning steps don't require real-time streaming speed — the bottleneck is usually the tool execution itself (API calls, file operations) rather than inference.
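To make those numbers concrete, here's the wall-clock time for a single model response at the benchmarked 14B speeds. The 400-token response length is an assumed typical agent step, not a measured figure:

```shell
# Seconds to generate a 400-token response at each benchmarked speed
for tps in 14 22 26; do
  awk -v t="$tps" 'BEGIN { printf "%2d tok/s: ~%.0f s for 400 tokens\n", t, 400 / t }'
done
```

Even the slowest case (M2 16GB) finishes a full reasoning step in about half a minute, which is usually dwarfed by the tool calls around it.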

Cost Analysis: Mac Mini vs VPS

Mac Mini M4 16GB ($599 one-time):
  • Electricity: ~$2.50/month (25W average, $0.14/kWh)
  • Cloudflare Tunnel: free tier (or $0–5/month for Pro)
  • Ollama: free, unlimited local inference
  • Total monthly: ~$2.50–7.50
  • Break-even vs $10/mo VPS: ~80 months on hosting fees alone
  • Break-even vs $15/mo VPS: ~48 months
VPS with GPU inference ($15–50/mo):
  • No upfront cost
  • GPU-accelerated inference for large models
  • Managed infrastructure, easy scaling
  • Per-token API costs for Claude/GPT still apply on top
VPS basic ($5–10/mo):
  • CPU-only inference: painfully slow for 7B+ models
  • Fine for routing to cloud APIs, but then you're paying API costs too
After 12 months, the Mac Mini has cost $599 + $30 electricity = $629. A $15/month VPS runs $180/year, plus per-token API costs for Claude/GPT on top. On hosting fees alone the VPS stays cheaper for the first few years; factor in the API spend that local inference replaces and the Mac Mini pulls ahead much sooner — and it wins immediately if privacy is a requirement.
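The break-even arithmetic, using the assumptions above ($599 hardware, ~$2.50/month electricity), as a quick script:

```shell
# Months until cumulative VPS fees exceed hardware cost plus electricity
awk 'BEGIN {
  upfront = 599; elec = 2.50
  n = split("10 15", vps, " ")
  for (i = 1; i <= n; i++) {
    months = upfront / (vps[i] - elec)
    printf "vs $%d/mo VPS: ~%.0f months to break even\n", vps[i], months
  }
}'
```

On hosting fees alone the payback is measured in years; the calculus shifts quickly once local inference replaces a paid API.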

When Mac Mini Beats VPS

Choose Mac Mini if:
  • Privacy is non-negotiable (medical, legal, personal data never leaves your hardware)
  • You want to run large local models (14B–34B) without per-token costs
  • You're already paying $10+/month for VPS and it's running 24/7 anyway
  • You want a persistent agent that can interact with local files and applications
  • You're in a location with cheap electricity
Stick with VPS if:
  • You need instant scalability (burst to 10x load for a day)
  • You're running multiple services that already justify a VPS
  • You want managed backups, snapshots, and datacenter reliability
  • You don't want to manage physical hardware
  • You need 99.9%+ uptime SLA (home internet isn't that reliable)
The Mac Mini setup shines for developers and power users who want a personal AI infrastructure that's fast, private, and cost-effective over the long term. Once it's configured, it runs silently in the background, handling agent tasks while you work on other things.

About the author

Alex Werner — Founder of OpenClaw Install. 5+ years in DevOps and AI infrastructure. Helped 50+ clients deploy AI agents.