Overview
This template provides a production-ready Fluent Bit instance as a Monk runnable. You can:
- Run it directly to get a managed Fluent Bit container with sensible defaults
- Inherit it in your own runnable to seamlessly add log processing and forwarding to your infrastructure
What this template manages
- Fluent Bit container (`fluent/fluent-bit` image, configurable tag)
- Network service on port 24224 (Forward protocol)
- Custom configuration and parser files
- Log processing pipelines and output destinations
Quick start (run directly)
- Load templates (see the commands after this list)
- Run Fluent Bit with defaults
- Customize configuration (recommended via inheritance). The default configuration files live in `app/`. To customize the log processing pipeline:
- Preferred: inherit and replace configuration files as shown below.
- Alternative: fork/clone and edit the configuration files in `app/`, then `monk load MANIFEST` and run.
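For reference, the steps above map to commands like the following. The runnable path `fluent-bit/fluent-bit` is illustrative, not confirmed by this template; check the actual name with `monk ls` after loading.

```bash
# Load the template manifests from the repository checkout
monk load MANIFEST

# Run Fluent Bit with the default configuration
# (path below is an assumption; verify with `monk ls`)
monk run fluent-bit/fluent-bit
```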
Configuration
The default configuration is organized across two files:
- config.conf: Main Fluent Bit configuration (SERVICE, INPUT, FILTER, OUTPUT sections)
- parser.conf: Parser definitions for structured log formats (JSON, Docker)

Customize these files to (a minimal example follows this list):
- Configure input sources (forward, tail, systemd, docker, etc.)
- Define filters to process and enrich logs
- Set up output destinations (Elasticsearch, Kafka, S3, HTTP, etc.)
- Create custom parsers for your log formats
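As a point of reference, a minimal config.conf wired for the forward input on port 24224 could look like the sketch below. It is illustrative only; the defaults shipped in `app/` may differ, and the stdout output is a stand-in for a real destination.

```conf
[SERVICE]
    # How often (in seconds) buffered records are flushed to outputs
    Flush        1
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    # Accept records over the Fluentd forward protocol
    Name    forward
    Listen  0.0.0.0
    Port    24224

[FILTER]
    # The modify filter adds key/value pairs to every matching record
    Name    modify
    Match   *
    Add     source fluent-bit

[OUTPUT]
    # Print processed records to the container log; swap for es, kafka, s3, etc.
    Name    stdout
    Match   *
```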
Use by inheritance (recommended for apps)
Inherit the Fluent Bit runnable in your application to add log forwarding; example manifests follow the connectivity notes below.

Ports and connectivity
- Service: `fluentbit-svc` on TCP port `24224` (Forward/Fluentd protocol)
- From other runnables in the same process group, use `connection-hostname("<connection-name>")` to resolve the Fluent Bit host.
- Most logging libraries support the Fluentd forward protocol for sending structured logs.
Configuration files
The template mounts configuration files into the container:
- /configs/config.conf — Main Fluent Bit configuration
- /configs/parsers.conf — Parser definitions
Replace the `server-conf` and `parser-conf` files when inheriting:
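A sketch of the inheritance pattern, assuming the template names its file entries `server-conf` and `parser-conf` as noted above. The `fluent-bit/fluent-bit` path and the Elasticsearch host are placeholders; verify the loaded name with `monk ls` and the exact file keys in the template source.

```yaml
namespace: my-app

logging:
  defines: runnable
  inherits: fluent-bit/fluent-bit  # assumption; use the path this template loads under
  files:
    server-conf:
      contents: |
        [SERVICE]
            Flush        1
            Parsers_File parsers.conf
        [INPUT]
            Name   forward
            Listen 0.0.0.0
            Port   24224
        [OUTPUT]
            Name   es
            Match  *
            Host   elasticsearch.example.com  # hypothetical destination
            Port   9200
```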
Features
- Lightweight: Minimal memory footprint (~450KB)
- Fast: High throughput log processing (up to 20,000 events/sec per core)
- Plugin-based: Extensible with 80+ input, filter, and output plugins
- Multi-format: Supports JSON, regex, LTSV, logfmt, and custom parsers
- Cloud-native: Designed for Kubernetes and containers
- Multiple outputs: Forward to Elasticsearch, Kafka, S3, Splunk, and more
- Stream processing: Real-time log enrichment and transformation
Common use cases
- Container logging: Collect logs from Docker/Kubernetes containers via Fluentd logging driver
- System logging: Forward system logs to centralized storage (ELK, Loki, CloudWatch)
- Log enrichment: Add metadata, parse JSON, and transform log formats on the fly
- Metrics collection: Gather and forward system metrics alongside logs
- Edge computing: Process logs at the edge with minimal resource overhead
- Multi-destination: Send the same logs to multiple backends (ES, S3, monitoring)
Related templates
- See other observability templates in this repository for complementary services
- Combine with Elasticsearch (`elasticsearch/`) for log storage and search
- Integrate with Prometheus + Grafana (`prometheus-grafana/`) for a full observability stack
- Use with Kafka (`kafka/`) for log streaming pipelines
Troubleshooting
- Verify configuration syntax in `config.conf` and `parser.conf` (see the dry-run command after this list)
- Check that input sources (files, ports) are accessible
- Ensure output destinations are reachable and credentials are valid
- Verify parsers match your log format
- Check the container logs (see the commands after this list)
- Test log forwarding from your application (example after this list)
- For high-volume logging, tune the `Flush` interval and buffer settings in the `[SERVICE]` section
- If logs are being dropped, increase `Mem_Buf_Limit` or reduce the input rate
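The checks above can be scripted roughly as follows. These are sketches: `fluent-bit --dry-run` validates syntax wherever the binary is available, `monk logs` follows container output (assuming your Monk CLI supports it), `fluent-cat` requires the fluentd Ruby gem, and the hostnames and runnable path are placeholders.

```bash
# Validate configuration syntax without starting the pipeline
fluent-bit -c config.conf --dry-run

# Follow the Fluent Bit container logs (runnable path is illustrative)
monk logs -f fluent-bit/fluent-bit

# Confirm the forward port is reachable from the application host
nc -zv <fluent-bit-host> 24224

# Send a test record over the forward protocol (needs the fluentd gem's fluent-cat)
echo '{"message": "hello"}' | fluent-cat -h <fluent-bit-host> -p 24224 debug.test
```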