This guide walks you through installing Monitoring Docker end-to-end: configuring the Splunk app and HTTP Event Collector, preparing the Docker daemon, and running Collectord to forward metadata-enriched container logs, host logs, and metrics. A typical install takes under 10 minutes. If you don’t have a license yet, you can request a 30-day evaluation.
Splunk configuration
Install the Monitoring Docker application
Install the latest version of Monitoring Docker from Splunkbase on your Search Heads only.
If you’re using a dedicated index that isn’t searchable by default, update the `macro_docker_base` macro to include it:

```
macro_docker_base = (index=docker)
```
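You can make this change in Splunk Web under Settings > Advanced search > Search macros, or in a local macros.conf on the Search Heads. A minimal sketch, assuming the app’s folder name (verify the actual directory under $SPLUNK_HOME/etc/apps):

```
# $SPLUNK_HOME/etc/apps/<monitoring-docker-app>/local/macros.conf
[macro_docker_base]
definition = (index=docker)
```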
Enable HTTP Event Collector in Splunk
Collectord forwards data to Splunk over the HTTP Event Collector (HEC). If HEC isn’t enabled yet, follow Splunk’s guide to Configure the Splunk HTTP Event Collector for use with additional technologies.
Once HEC is enabled, you need two pieces of information for the rest of this guide: the HEC endpoint URL and an HEC token. You can verify both with curl:

```
$ curl -k https://hec.example.com:8088/services/collector/event/1.0 -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" -d '{"event": "hello world"}'
{"text": "Success", "code": 0}
```

Don’t set a source type on the token — Collectord assigns its own source types per datatype, and the Splunk app expects them.
The `-k` flag skips certificate validation; use it only with self-signed certificates.
Splunk Cloud uses a different HEC URL than Splunk Web — see Send data to HTTP Event Collector on Splunk Cloud instances.
If your indexes aren’t searchable by default, see Splunk Indexes for how to configure them on both the Splunk side and inside Collectord.
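You can also check HEC availability without sending an event, using the collector health endpoint (a quick sanity check; the exact response text varies by Splunk version):

```
$ curl -k https://hec.example.com:8088/services/collector/health
{"text":"HEC is healthy","code":17}
```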
Install Collectord for Docker
- For ECS installation, see the blog post Monitoring Amazon Elastic Container Service (ECS) Clusters in Splunk.
- For Docker UCP installation, see the blog post Monitoring Docker Universal Control Plane (UCP) with Splunk Enterprise and Splunk Cloud.
Prerequisites
Collectord reads container logs from the JSON files written by Docker’s json-file logging driver, so the daemon needs to be using that driver. Some Linux distributions — CentOS, for example — default to journald instead. Check what you’re running with:
```
$ docker info | grep "Logging Driver"
Logging Driver: json-file
```

If the daemon configuration lives in /etc/sysconfig/docker (typical on CentOS/RHEL with Docker 1.13), switch the driver and restart Docker:
```
$ sed -i 's/--log-driver=journald/--log-driver=json-file --log-opt max-size=100M --log-opt max-file=3/' /etc/sysconfig/docker
$ systemctl restart docker
```

If you’re configuring the daemon through /etc/docker/daemon.json (typical on Debian/Ubuntu), set it there instead, then restart Docker:
```
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```

```
$ systemctl restart docker
```

For the full reference on configuring the default logging driver, see:
- Docker 1.13 - Configure the default logging driver for the Docker daemon
- Docker latest EE/CE - Configure the default logging driver
JSON logging driver configuration
Out of the box, Docker doesn’t rotate JSON log files — left alone, they’ll grow until they fill the disk. The max-size and max-file options in the examples above set up rotation so that doesn’t happen. These files are also your safety buffer between Collectord and Splunk HEC: if a container writes 100 MiB per hour and rotation keeps three 100 MiB files, you have roughly 3 hours to fix any disruption before the oldest log is overwritten. See Configure and troubleshoot the Docker daemon for the full set of options.
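The daemon settings apply to newly created containers; you can also set rotation per container at run time with the same options (standard Docker flags; the image name here is just a placeholder):

```
$ docker run -d --log-driver json-file --log-opt max-size=100m --log-opt max-file=3 nginx
```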
Installation
Pull the version of Collectord you want to run:
```
docker pull outcoldsolutions/collectorfordocker:26.04.1
```

Pin to a specific version to make upgrades predictable. Follow the blog, Twitter, or the newsletter to keep up with releases.
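To confirm which versions you have locally:

```
$ docker image ls outcoldsolutions/collectorfordocker
```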
Run Collectord with the command below. It uses the same HEC URL and token from the curl test above. Edit it to:
- Set the Splunk HEC URL, token, and (if needed) certificate options.
- Review and accept the license agreement and paste in your license key.
- Optionally, name the cluster — replace the `-` on the cluster line. Useful when you’re monitoring more than one host and want to filter by cluster in the app.
If you’re deploying onto a host that’s been running for a while and has a lot of historical logs on disk, Collectord will start by forwarding all of them — which can spike both your network and your Splunk indexing. Use the `[general]` settings `thruputPerSecond` to cap throughput and `tooOldEvents` to skip events older than a given age, as sketched below.
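Both settings follow the same environment-variable pattern as the run command below; a sketch with illustrative values (the variable names before the first `=` are arbitrary, and the values are not recommendations):

```
--env "COLLECTOR__THRUPUT=general__thruputPerSecond=512Kb" \
--env "COLLECTOR__TOOOLDEVENTS=general__tooOldEvents=168h" \
```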
```
docker run -d \
    --name collectorfordocker \
    --volume /:/rootfs/:ro \
    --volume collector_data:/data/ \
    --cpus=1 \
    --cpu-shares=204 \
    --memory=256M \
    --restart=always \
    --env "COLLECTOR__SPLUNK_URL=output.splunk__url=https://hec.example.com:8088/services/collector/event/1.0" \
    --env "COLLECTOR__SPLUNK_TOKEN=output.splunk__token=B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
    --env "COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true" \
    --env "COLLECTOR__ACCEPTLICENSE=general__acceptLicense=true" \
    --env "COLLECTOR__LICENSE=general__license=..." \
    --env "COLLECTOR__CLUSTER=general__fields.docker_cluster=-" \
    --privileged \
    outcoldsolutions/collectorfordocker:26.04.1
```

Running this from a Windows shell? Replace the trailing `\` on each line (except the last) with a backtick (`) in PowerShell, or with `^` in cmd.exe, so the multi-line continuation works.
On AWS ECS, replace `/sys/fs/cgroup:/rootfs/sys/fs/cgroup:ro` with `/cgroup:/rootfs/sys/fs/cgroup:ro` — ECS-optimized images mount the cgroup filesystem at the root. See Monitoring Amazon Elastic Container Service Clusters in Splunk.
For the full set of configuration options — and how to bake your own configuration into a custom image on top of the official one — see the Configuration page.
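As a rough sketch of the custom-image approach (the drop-in path and file name here are assumptions; verify the locations Collectord actually reads on the Configuration page):

```
FROM outcoldsolutions/collectorfordocker:26.04.1
# Hypothetical drop-in configuration file; check the Configuration page
# for the real path before building.
COPY 004-local.conf /config/004-local.conf
```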
If you use Docker Compose, the same setup looks like this:
```
version: "3"
services:

  collectorfordocker:
    image: outcoldsolutions/collectorfordocker:26.04.1
    volumes:
      - /:/rootfs/:ro
      - collector_data:/data/
    environment:
      - COLLECTOR__SPLUNK_URL=output.splunk__url=https://hec.example.com:8088/services/collector/event/1.0
      - COLLECTOR__SPLUNK_TOKEN=output.splunk__token=B5A79AAD-D822-46CC-80D1-819F80D7BFB0
      - COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true
      - COLLECTOR__ACCEPTLICENSE=general__acceptLicense=true
      - COLLECTOR__LICENSE=general__license=...
      - COLLECTOR__CLUSTER=general__fields.docker_cluster=-
    restart: always
    deploy:
      mode: global
      restart_policy:
        condition: any
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          cpus: '0.1'
          memory: 64M
    privileged: true

volumes:
  collector_data:
```
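To bring it up with the classic Compose CLI (assuming the file is saved as docker-compose.yaml in the current directory):

```
$ docker-compose up -d
```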
For docker-compose 2.x:

```
version: "2.2"
services:

  collectorfordocker:
    image: outcoldsolutions/collectorfordocker:26.04.1
    volumes:
      - /:/rootfs/:ro
      - collector_data:/data/
    environment:
      - COLLECTOR__SPLUNK_URL=output.splunk__url=https://hec.example.com:8088/services/collector/event/1.0
      - COLLECTOR__SPLUNK_TOKEN=output.splunk__token=B5A79AAD-D822-46CC-80D1-819F80D7BFB0
      - COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true
      - COLLECTOR__ACCEPTLICENSE=general__acceptLicense=true
      - COLLECTOR__LICENSE=general__license=...
      - COLLECTOR__CLUSTER=general__fields.docker_cluster=-
    restart: always
    cpus: 1
    cpu_shares: 200
    mem_limit: 256M
    mem_reservation: 64M
    privileged: true

volumes:
  collector_data:
```

If you’re using a Splunk-generated certificate, you’ll need some SSL configuration. The simplest path to a working install is `--env "COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true"` to skip validation, as shown in the examples above.
Collectord doesn’t require a special logging driver — it forwards logs from the standard json-file logging driver as-is.
Once the image is pulled and the container is running, open the Monitoring Docker app in Splunk — dashboards should start populating within a minute or two.
By default, Collectord forwards container logs, host logs (including syslog), and metrics for the host, containers, and processes.
Deploying on Docker Swarm
For Swarm, reuse the docker-compose.yaml above with stack deploy:

```
docker stack deploy --compose-file ./docker-compose.yaml collectorfordocker
```
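To verify the service is scheduled on every node (standard Swarm commands; the stack name matches the deploy command above):

```
$ docker stack services collectorfordocker
```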
Deploying on Podman
Collectord supports Podman as a Docker replacement. One requirement: use the default journald logging driver. The k8s-file driver doesn’t keep rotated files, which makes reliable log collection impossible.
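To check which log driver Podman is using (the Go-template field name is an assumption; inspect the plain `podman info` output if it doesn’t match your version):

```
$ podman info --format '{{.Host.LogDriver}}'
journald
```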
The minimum Podman-specific changes are pointing Collectord at the Podman socket instead of the Docker socket and adjusting the container root folder:
```
podman run -d \
    --name collectorforpodman \
    --volume /:/rootfs:ro \
    --volume collector_data:/data/ \
    --cpus=2 \
    --cpu-shares=1024 \
    --memory=512M \
    --restart=always \
    --env "COLLECTOR__SPLUNK_URL=output.splunk__url=..." \
    --env "COLLECTOR__SPLUNK_TOKEN=output.splunk__token=..." \
    --env "COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true" \
    --env "COLLECTOR__EULA=general__acceptLicense=true" \
    --env "COLLECTOR__LICENSE_KEY=general__license=..." \
    --env "COLLECTOR__GENERALPODMAN_URL=general.docker__url=unix:///rootfs/var/run/podman/podman.sock" \
    --env "COLLECTOR__GENERALPODMAN_STORAGE=general.docker__dockerRootFolder=/rootfs/var/lib/" \
    --ulimit nofile=1048576:1048576 \
    --privileged \
    outcoldsolutions/collectorfordocker:26.04.1
```

Next steps
- Review the predefined alerts and enable the ones relevant to your environment.
- If something looks off, work through the troubleshooting checks.
- If your Docker hosts run in a timezone other than UTC, the Collectord container needs to know about it. The simplest fix is mounting the host’s timezone file with `--volume /etc/localtime:/etc/localtime:ro`. Alternatively, set `timestampLocation` on each `[input.files::*]` input in the configuration — see the sketch after this list.
- For better indexing performance, split logs and metrics into separate indexes. See Splunk Indexes for the recommended layout and how to wire it up on both sides.
- For search-time field extraction on container logs, see Splunk fields extraction for container logs.
- Collectord can pick up application logs from inside containers automatically — see annotations.
- For per-container control over log forwarding — multi-line patterns, field extraction, index/source/sourcetype overrides, dropping noisy lines, and hashing sensitive values — see annotations.
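A sketch of the `timestampLocation` alternative mentioned above (the stanza name is illustrative; use the `[input.files::*]` stanzas present in your configuration, with an IANA timezone name as the value):

```
[input.files::syslog]
timestampLocation = America/New_York
```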