Overview

This template provides a production‑ready Fluent Bit instance as a Monk runnable. You can:
  • Run it directly to get a managed Fluent Bit container with sensible defaults
  • Inherit it in your own runnable to seamlessly add log processing and forwarding to your infrastructure
Fluent Bit is a fast and lightweight logs and metrics processor and forwarder. It’s designed with performance and low resource consumption in mind, making it ideal for edge computing, IoT, and containerized environments.

What this template manages

  • Fluent Bit container (fluent/fluent-bit image, configurable tag)
  • Network service on port 24224 (Forward protocol)
  • Custom configuration and parser files
  • Log processing pipelines and output destinations

Quick start (run directly)

  1. Load templates
monk load MANIFEST
  2. Run Fluent Bit with defaults
monk run fluentbit/fluentbit
  3. Customize configuration (recommended via inheritance)
Running directly uses the default configuration files in app/. To customize the log processing pipeline:
  • Preferred: inherit and replace configuration files as shown below.
  • Alternative: fork/clone and edit the configuration files in app/, then monk load MANIFEST and run.
Once started, Fluent Bit will begin accepting logs on port 24224 and processing them according to the configured pipeline.
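To confirm the forwarder is up, you can list running workloads and tail its output (standard Monk CLI commands, as used elsewhere in this guide):
monk ps
monk logs -f fluentbit/fluentbit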

Configuration

The default configuration is set up as follows:
  • Input: Forward protocol on port 24224
  • Filters: rename SERVICE_NAME to service_name, then parse JSON logs and lift their keys to the top level
  • Output: stdout (for testing/debugging)
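For reference, a minimal config.conf implementing that pipeline could look like the following (a sketch, not the shipped file; the actual defaults live in app/, and the modify and parser filters shown are the stock Fluent Bit plugins for key renames and JSON lifting):

[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File /configs/parsers.conf

[INPUT]
    Name    forward
    Listen  0.0.0.0
    Port    24224

[FILTER]
    Name    modify
    Match   *
    Rename  SERVICE_NAME service_name

[FILTER]
    Name         parser
    Match        *
    Key_Name     log
    Parser       json
    Reserve_Data On

[OUTPUT]
    Name   stdout
    Match  *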
Configuration files included:
  • config.conf: Main Fluent Bit configuration (SERVICE, INPUT, FILTER, OUTPUT sections)
  • parsers.conf: Parser definitions for structured log formats (JSON, Docker)
You can customize these to:
  • Configure input sources (forward, tail, systemd, docker, etc.)
  • Define filters to process and enrich logs
  • Set up output destinations (Elasticsearch, Kafka, S3, HTTP, etc.)
  • Create custom parsers for your log formats
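For example, a custom parsers.conf entry for JSON plus a regex-based format might look like this (a sketch; the parser name, regex, and time format are placeholders to adapt to your logs):

[PARSER]
    Name        json
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z

[PARSER]
    Name    myapp_access
    Format  regex
    Regex   ^(?<host>[^ ]*) (?<method>\S+) (?<path>[^ ]*) (?<code>\d+)$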
Inherit the Fluent Bit runnable in your application to add log forwarding. Example:
namespace: myapp
log-forwarder:
  defines: runnable
  inherits: fluentbit/fluentbit
  files:
    server-conf:
      container: fluentbit
      path: /configs/config.conf
      contents: |
        [SERVICE]
            Flush        5
            Daemon       Off
            Log_Level    info
            Parsers_File /configs/parsers.conf

        [INPUT]
            Name         forward
            Listen       0.0.0.0
            Port         24224

        [FILTER]
            Name         modify
            Match        *
            Add          environment production
            Add          cluster my-cluster

        [OUTPUT]
            Name         es
            Match        *
            Host         elasticsearch.example.com
            Port         9200
            Logstash_Format On
            Retry_Limit  5

api:
  defines: runnable
  containers:
    app:
      image: myorg/api
  connections:
    logs:
      runnable: log-forwarder
      service: fluentbit-svc
  variables:
    fluentd-host:
      value: <- connection-hostname("logs")
    fluentd-port:
      value: <- connection-port("logs")
Configure your application to send logs to the Fluent Bit forwarder using the connection variables.
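For instance, a Python service might use the fluent-logger package (one of several libraries that speak the forward protocol; the environment variable names below assume the fluentd-host/fluentd-port variables above are exposed to the container as FLUENTD_HOST and FLUENTD_PORT):

import os

from fluent import sender

# Host/port resolved by Monk via the "logs" connection; the env var
# names are an assumption for this sketch.
logger = sender.FluentSender(
    'myapp',
    host=os.environ.get('FLUENTD_HOST', 'localhost'),
    port=int(os.environ.get('FLUENTD_PORT', '24224')),
)

# Arrives tagged as myapp.request on the Fluent Bit side.
logger.emit('request', {'message': 'user signed in', 'level': 'info'})
logger.close()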

Ports and connectivity

  • Service: fluentbit-svc on TCP port 24224 (Forward/Fluentd protocol)
  • From other runnables in the same process group, use connection-hostname("<connection-name>") and connection-port("<connection-name>") to resolve the Fluent Bit host and port.
  • Most logging libraries support the Fluentd forward protocol for sending structured logs.
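As one concrete example, a container running outside Monk can ship its stdout to the forwarder through Docker's fluentd logging driver (the address below is a placeholder for the resolved Fluent Bit host):
docker run --log-driver=fluentd \
  --log-opt fluentd-address=<fluent-bit-host>:24224 \
  --log-opt tag=myapp.web \
  myorg/api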

Configuration files

The template mounts configuration files into the container:
  • /configs/config.conf — Main Fluent Bit configuration
  • /configs/parsers.conf — Parser definitions
To provide custom configuration, override the server-conf and parser-conf files when inheriting:
files:
  server-conf:
    container: fluentbit
    path: /configs/config.conf
    contents: |
      # Your custom Fluent Bit configuration
  parser-conf:
    container: fluentbit
    path: /configs/parsers.conf
    contents: |
      # Your custom parser definitions

Features

  • Lightweight: Minimal memory footprint (~450KB)
  • Fast: High throughput log processing (up to 20,000 events/sec per core)
  • Plugin-based: Extensible with 80+ input, filter, and output plugins
  • Multi-format: Supports JSON, regex, LTSV, logfmt, and custom parsers
  • Cloud-native: Designed for Kubernetes and containers
  • Multiple outputs: Forward to Elasticsearch, Kafka, S3, Splunk, and more
  • Stream processing: Real-time log enrichment and transformation

Common use cases

  • Container logging: Collect logs from Docker/Kubernetes containers via Fluentd logging driver
  • System logging: Forward system logs to centralized storage (ELK, Loki, CloudWatch)
  • Log enrichment: Add metadata, parse JSON, and transform log formats on the fly
  • Metrics collection: Gather and forward system metrics alongside logs
  • Edge computing: Process logs at the edge with minimal resource overhead
  • Multi-destination: Send the same logs to multiple backends (ES, S3, monitoring)
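Fanning out to multiple backends only takes additional [OUTPUT] sections matched against the same tags, e.g. (a sketch with placeholder hosts and bucket names):

[OUTPUT]
    Name   es
    Match  app.*
    Host   elasticsearch.example.com
    Port   9200

[OUTPUT]
    Name   s3
    Match  app.*
    bucket my-log-archive
    region us-east-1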

Related templates

  • See other observability templates in this repository for complementary services
  • Combine with Elasticsearch (elasticsearch/) for log storage and search
  • Integrate with Prometheus + Grafana (prometheus-grafana/) for a full observability stack
  • Use with Kafka (kafka/) for log streaming pipelines

Troubleshooting

  • Verify configuration syntax in config.conf and parsers.conf:
monk shell fluentbit/fluentbit
fluent-bit -c /configs/config.conf --dry-run
  • Check that input sources (files, ports) are accessible
  • Ensure output destinations are reachable and credentials are valid
  • Verify parsers match your log format
  • Check logs:
monk logs -l 500 -f fluentbit/fluentbit
  • Test log forwarding from your application:
# Send a test log via the Forward protocol (fluent-cat ships with Fluentd, not Fluent Bit)
echo '{"message":"test","level":"info"}' | fluent-cat test.logs
  • For high-volume logging, tune the Flush interval in the [SERVICE] section and the buffer settings on your inputs
  • If logs are being dropped, increase Mem_Buf_Limit on the input or reduce the input rate (see the sketch below)
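A minimal sketch of those knobs (values are illustrative, not recommendations; Mem_Buf_Limit is a per-input property):

[SERVICE]
    Flush  5

[INPUT]
    Name          forward
    Listen        0.0.0.0
    Port          24224
    Mem_Buf_Limit 50MB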