Overview
This template provides a production-ready OpenClaw AI assistant gateway as a Monk runnable. You can:
- Run it directly to get a managed AI gateway for messaging platforms
- Inherit it in your own applications to add AI assistant capabilities
- Combine with Ollama for local LLM inference
What this template manages
- OpenClaw gateway container (`alpine/openclaw` image)
- HTTP API service on port 18789
- Bridge service on port 18790
- Configuration and workspace persistence
- Health checks (readiness and liveness)
- Optional Ollama integration for local LLMs
Quick start (run directly)
- Load templates
- Add required secrets
- Run OpenClaw gateway
The gateway API is then available at `localhost:18789` (see the command sketch below).
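A minimal sketch of those steps with the Monk CLI. The manifest filename and the gateway runnable path are assumptions here, and the secret names depend on which LLM provider you configure:

```bash
# Load the template definition (manifest filename assumed)
monk load ./openclaw.yaml

# Add the secrets your LLM provider requires
# (names depend on the provider -- see the template's secret definitions)

# Run the OpenClaw gateway runnable (path assumed)
monk run openclaw/gateway
```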
With Ollama integration
For local LLM inference without external API dependencies (a command sketch follows this list):
- Load both templates
- Add secrets
- Run the combined stack
- Pull a model
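A sketch of the combined flow; the manifest filenames and the stack runnable path are assumptions, while the pull-model action comes from the Available Actions table below:

```bash
# Load both templates (filenames assumed)
monk load ./openclaw.yaml
monk load ./ollama.yaml

# Run the combined stack (runnable path assumed)
monk run openclaw/stack

# Pull a small CPU-friendly model through the bundled action
monk do openclaw/ollama/pull-model model=qwen2.5:0.5b
```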
Configuration
Key variables you can customize:

Changing the Default Model
The template ships with pre-configured default models (qwen2.5:0.5b for CPU, llama3.3 for GPU). To use a different Ollama model, you need to:
- Pull the new model with the `pull-model` action.
- Update the gateway configuration: in `openclaw.yaml`, find the `gateway-ollama` section and update the models list and the primary model (a sketch follows this list).
- Reload the template and restart the gateway (see the Mistral example below).
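A hypothetical sketch of that section: only the `gateway-ollama` name, the models list, and the primary model are taken from the steps above, so the exact keys in the real openclaw.yaml may differ:

```yaml
# gateway-ollama section (key names are assumptions)
gateway-ollama:
  models:
    - llama3.2:1b            # must match the model name in Ollama exactly
  primary-model: llama3.2:1b
```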
Example: Using Mistral
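A sketch of the full flow for Mistral; the manifest filename and the gateway runnable path are assumptions:

```bash
# 1. Pull the model into Ollama (action from the Available Actions table)
monk do openclaw/ollama/pull-model model=mistral

# 2. Edit openclaw.yaml: set "mistral" in the gateway-ollama models list
#    and as the primary model (see the sketch above)

# 3. Reload the template and restart the gateway (runnable path assumed)
monk load ./openclaw.yaml
monk update openclaw/gateway
```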
Note: The model ID in the configuration must exactly match the model name in Ollama (as shown by monk do openclaw/ollama/list-models).
Use by inheritance
Inherit the OpenClaw runnable in your application, as in the sketch below.
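A minimal sketch, assuming the gateway runnable is published as `openclaw/gateway` (adjust to the actual path in the template) and using standard Monk inheritance and connections:

```yaml
namespace: myapp

# Inherit the gateway to customize it in place (runnable path assumed)
assistant:
  defines: runnable
  inherits: openclaw/gateway

# Or connect to a running gateway from another runnable
backend:
  defines: runnable
  containers:
    app:
      image: alpine:3            # placeholder image for the sketch
  connections:
    openclaw:
      runnable: openclaw/gateway # assumed runnable path
      service: http              # gateway API, TCP 18789
  variables:
    openclaw-host:
      type: string
      value: <- connection-hostname("openclaw")
```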
Ports and connectivity
- Service `http`: TCP port 18789 (Gateway API)
- Service `bridge`: TCP port 18790 (Bridge connections)
- Health endpoint: `GET /health` (example below)
- Use `connection-hostname("<connection-name>")` to resolve the OpenClaw host from a connected runnable.
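For a quick connectivity check against the gateway from its host:

```bash
# Probe the gateway health endpoint (port and path from the list above)
curl -f http://localhost:18789/health
```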
Persistence
- Configuration: `${monk-volume-path}/openclaw/config:/home/node/.openclaw`
- Workspace: `${monk-volume-path}/openclaw/workspace:/home/node/.openclaw/workspace`
- Ollama models: `${monk-volume-path}/ollama:/root/.ollama`
Features
- Multi-platform messaging support (WhatsApp, Telegram, Discord, Slack, Signal)
- Multiple LLM provider support (OpenAI, Anthropic, Ollama, and more)
- Local LLM inference with Ollama integration
- Persistent configuration and workspaces
- Health monitoring with readiness/liveness checks
- CLI tools for channel management
Available Actions
| Action | Description | Command |
|---|---|---|
| onboard | Run onboarding wizard | monk do openclaw/cli/onboard |
| channels-login | Login to WhatsApp | monk do openclaw/cli/channels-login |
| channels-status | Check channels status | monk do openclaw/cli/channels-status |
| health | Gateway health check | monk do openclaw/cli/health |
| pull-model | Pull Ollama model | monk do openclaw/ollama/pull-model model=llama3.3 |
| list-models | List Ollama models | monk do openclaw/ollama/list-models |
Recommended Ollama Models
For CPU (no GPU):

| Model | Size | Speed | Use Case |
|---|---|---|---|
| qwen2.5:0.5b | 398MB | ⚡⚡⚡ | Testing, fast responses |
| tinyllama | 637MB | ⚡⚡⚡ | Basic tasks |
| llama3.2:1b | 1.3GB | ⚡⚡ | General purpose |
| gemma2:2b | 1.6GB | ⚡⚡ | Quality responses |
For GPU:

| Model | Size | Speed | Use Case |
|---|---|---|---|
| llama3.3 | 4.7GB | ⚡⚡⚡ | High quality |
| qwen2.5-coder:32b | 19GB | ⚡⚡ | Code generation |
| deepseek-r1:32b | 19GB | ⚡⚡ | Reasoning tasks |
Related templates
- Combine with `ollama/ollama` for local LLM inference
- Use with database templates for persistent storage
- Integrate with monitoring templates for observability
Troubleshooting
- Check the gateway logs
- Check the Ollama logs
- Verify gateway health
- Enter the container shell
- For Ollama 404 errors, ensure a model is pulled (commands for all of these checks are sketched below)
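A command sketch for the checks above; the gateway and Ollama runnable paths are assumptions (derived from the action paths used earlier), so adjust them to the template you loaded:

```bash
# Gateway and Ollama logs (runnable paths assumed)
monk logs openclaw/gateway
monk logs openclaw/ollama

# Gateway health, via the built-in action or the HTTP endpoint directly
monk do openclaw/cli/health
curl -f http://localhost:18789/health

# Shell into the gateway container (runnable path assumed)
monk shell openclaw/gateway

# For Ollama 404 errors, make sure at least one model is pulled
monk do openclaw/ollama/list-models
monk do openclaw/ollama/pull-model model=qwen2.5:0.5b
```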

