## Overview

This template provides a production-ready Chroma instance as a Monk runnable. You can:

- Run it directly to get a managed Chroma container with sensible defaults
- Inherit it in your own runnable to seamlessly add a vector database for AI/ML embeddings to your stack
## What this template manages

- Chroma container (`ghcr.io/chroma-core/chroma:latest` image)
- Network service on port 8000
- Persistent volumes for vector data and embeddings storage
- Optional authentication and observability configuration
## Quick start (run directly)

- Load templates
- Run Chroma with defaults (see the example session below)
- Customize configuration (optional)

The template ships with sensible default variables. For custom configuration:

- Preferred: inherit and override variables as shown below
- Alternative: fork/clone and edit the `variables` in `chroma/chroma.yaml`, then `monk load MANIFEST` and run
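A minimal session might look like the following; the runnable path `chroma/chroma` is an assumption here and may differ in your template repository:

```sh
# Load the template manifests into Monk
monk load MANIFEST

# Run the Chroma runnable with default variables
monk run chroma/chroma

# Confirm the workload is up
monk ps
```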
Once running, connect at `localhost:8000` (or the runnable hostname inside Monk networks) using the Chroma client library.
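For example, a minimal sketch with the Python client (`pip install chromadb`); the collection name and documents are illustrative:

```python
import chromadb

# Connect to the Chroma instance exposed by this template
client = chromadb.HttpClient(host="localhost", port=8000)

# Create a collection and add a couple of documents
collection = client.get_or_create_collection("docs")
collection.add(
    ids=["1", "2"],
    documents=["Monk manages workloads", "Chroma stores embeddings"],
)

# Query by text; embeddings use the client's default embedding function
results = collection.query(query_texts=["what stores embeddings?"], n_results=1)
print(results["documents"])
```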
## Configuration

Key variables you can customize are declared in `chroma/chroma.yaml`. Vector data lives at `${monk-volume-path}/chroma` on the host, mounted to `/chroma/chroma` in the container.
## Use by inheritance (recommended for AI/ML apps)

Inherit the Chroma runnable in your AI/ML application and declare a connection. Example:
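A sketch of what this could look like; the `myapp` namespace, the application image, and the variable names are placeholders, while the connection targets the `chroma` service on TCP port 8000 described below:

```yaml
namespace: myapp

chroma:
  defines: runnable
  inherits: chroma/chroma
  variables:
    is_persistent: true   # override template variables here

app:
  defines: runnable
  containers:
    app:
      image: docker.io/myorg/my-ai-app:latest   # hypothetical application image
  connections:
    chroma-db:
      runnable: myapp/chroma
      service: chroma
  variables:
    chroma-host:
      type: string
      value: <- connection-hostname("chroma-db")   # resolves the Chroma host at runtime
      env: CHROMA_HOST

stack:
  defines: process-group
  runnable-list:
    - myapp/chroma
    - myapp/app
```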
## Ports and connectivity

- Service: `chroma` on TCP port `8000`
- From other runnables in the same process group, use `connection-hostname("<connection-name>")` to resolve the Chroma host
## Persistence

- Data path: `${monk-volume-path}/chroma:/chroma/chroma`
- All collections, embeddings, and metadata are persisted to this volume when `is_persistent` is `true`
## Use cases

Chroma is ideal for:

- Semantic search over documents
- Question-answering systems with LLMs
- Recommendation engines
- Similarity matching
- RAG (Retrieval-Augmented Generation) applications
## Related templates

- Use with LLM applications (`ollama/`, `openllm/`, `langfuse/`)
- Combine with embedding models for semantic search
- Integrate with RAG pipelines for enhanced AI responses
## Troubleshooting
- Ensure the host volumes are writable by the container user
- Check logs with `monk logs`
- Verify Chroma is responding with a heartbeat request (see the commands below)
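For example (assuming the runnable path `chroma/chroma`; newer Chroma versions expose `/api/v2/heartbeat` instead of `/api/v1/heartbeat`):

```sh
# Tail the container logs through Monk
monk logs chroma/chroma

# Heartbeat check; a healthy server returns a nanosecond timestamp
curl http://localhost:8000/api/v1/heartbeat
```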