Collectord’s defaults live in 001-general.conf, mounted at /config/001-general.conf inside the image. You don’t edit that file directly — instead, you drop additional .conf files into the same /config directory, and Collectord reads everything matching *.conf in alphabetical order, with later files overriding earlier ones.
Through these configuration files you control how Collectord behaves overall, what data it forwards to Splunk, how it talks to Splunk, and the defaults for each data pipeline (logs, metrics, events). For per-container behavior — multi-line join rules, stripping terminal colors, dropping noisy lines, field extraction, value masking, application-log paths — use annotations on the containers themselves.
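For example, per-container annotations are plain Docker labels under the collectord.io/ prefix; a minimal sketch using the strip-terminal-escape-sequences label mentioned in the reference configuration below (the image name my-app is a placeholder):

# hypothetical container; only the label is significant here
docker run -d \
    --label collectord.io/strip-terminal-escape-sequences=true \
    my-app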
Overriding configuration by embedding configuration files
The cleanest way to customize a deployment is to bake a small override file into your own image. You only put the keys you want to change — everything else inherits from 001-general.conf. For example, save the following as 002-conf.conf to accept the license, point Collectord at your Splunk HEC, and route logs and metrics to dedicated indexes:
# accepting license
[general]
acceptLicense = true

# Configuring Splunk output
[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
insecure = true

# Overriding default indices
[input.system_stats]
index = docker_stats

[input.mount_stats]
index = docker_stats

[input.net_stats]
index = docker_stats

[input.net_socket_table]
index = docker_stats

[input.proc_stats]
index = docker_stats

[input.files]
index = docker_logs

[input.app_logs]
index = docker_logs

[input.files::syslog]
index = docker_logs

[input.files::logs]
index = docker_logs

[input.docker_events]
index = docker_logs

Wrap it in a Dockerfile that extends the base Collectord image:
FROM outcoldsolutions/collectorfordocker:26.04.1

COPY 002-conf.conf /config/002-conf.conf

Build the image:
docker build -t example.com/collectorfordocker:26.04.1 .

Then run it the same way you would the stock image — see the installation guide for the full docker run command.
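For orientation, a minimal sketch of such a docker run command; the mount targets mirror the defaults in 001-general.conf (docker socket, docker root folder, proc, cgroups, host logs), but the exact flags, resource limits, and the collectord_data volume name here are illustrative, not the authoritative command:

# sketch only; see the installation guide for the authoritative command
docker run -d \
    --name collectord \
    --volume /var/run/docker.sock:/rootfs/run/docker.sock:ro \
    --volume /var/lib/docker/:/rootfs/var/lib/docker/:ro \
    --volume /proc:/rootfs/proc:ro \
    --volume /sys/fs/cgroup:/rootfs/sys/fs/cgroup:ro \
    --volume /var/log:/rootfs/var/log:ro \
    --volume collectord_data:/data/ \
    --restart=always \
    example.com/collectorfordocker:26.04.1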
Overriding configuration with environment variables
If you’d rather not build a custom image — for quick experiments, ephemeral hosts, or when you’re driving Collectord from an orchestrator that already manages env vars — you can pass every setting through environment variables in this format:
1--env "COLLECTOR__<ANYUNIQUENAME>=<section>__<key>=<value>"The same overrides as the file above, expressed as env vars:
1--env "COLLECTOR__ACCEPTLICENSE=general__acceptLicense=true" \
2--env "COLLECTOR__SPLUNK_URL=output.splunk__url=https://hec.example.com:8088/services/collector/event/1.0" \
3--env "COLLECTOR__SPLUNK_TOKEN=output.splunk__token=B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
4--env "COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true" \
5--env "COLLECTOR__STATS_INDEX=input.system_stats__index=docker_stats" \
6--env "COLLECTOR__STATS_INDEX=input.mount_stats__index=docker_stats" \
7--env "COLLECTOR__STATS_INDEX=input.net_stats__index=docker_stats" \
8--env "COLLECTOR__STATS_INDEX=input.net_socket_table__index=docker_stats" \
9--env "COLLECTOR__PROCSTATS_INDEX=input.proc_stats__index=docker_stats" \
10--env "COLLECTOR__CONTAINERLOGS_INDEX=input.files__index=docker_logs" \
11--env "COLLECTOR__APPLICATIONLOGS_INDEX=input.app_logs__index=docker_logs" \
12--env "COLLECTOR__SYSLOG_INDEX=input.files::syslog__index=docker_logs" \
13--env "COLLECTOR__HOSTLOGS_INDEX=input.files::logs__index=docker_logs" \
14--env "COLLECTOR__EVENTS_INDEX=input.docker_events__index=docker_logs"Attaching EC2 Metadata
Attaching EC2 Metadata

If you’re running on EC2 and want every event to carry instance-level context — instance ID, instance type, availability zone — point Collectord at the EC2 metadata service. Each entry maps a field name to a path from Instance Metadata and User Data:
[general]

# Include EC2 Metadata (see list of possible fields https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
# Should be in format ec2Metadata.{desired_field_name} = {url path to read the value}
ec2Metadata.ec2_instance_id = /latest/meta-data/instance-id
ec2Metadata.ec2_instance_type = /latest/meta-data/instance-type

If you’re configuring Collectord with environment variables, the same two fields look like:
...
--env "COLLECTOR__EC2_INSTANCE_ID=general__ec2Metadata.ec2_instance_id=/latest/meta-data/instance-id" \
--env "COLLECTOR__EC2_INSTANCE_TYPE=general__ec2Metadata.ec2_instance_type=/latest/meta-data/instance-type" \
...
Placeholders in index and sources

Static index names work fine for a single host, but once you’re forwarding from a fleet you’ll often want logs grouped by cluster, environment, or container. Collectord lets you use any metadata field as a placeholder in index, source, and sourcetype.
For example, to route each cluster’s logs to its own index:
[input.files]

index = docker_{{docker_cluster}}

Or to build a structured source path that combines cluster and container — handy when a single index holds logs from many hosts and you want Splunk’s source-based filtering to do the heavy lifting:
[input.files]

source = /{{docker_cluster}}/{{docker_container_name}}
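Placeholders pass through environment-variable overrides unchanged, so the same routing can be expressed without a config file; a sketch (COLLECTOR__LOGS_INDEX is an arbitrary unique name, as the override format requires):

--env "COLLECTOR__LOGS_INDEX=input.files__index=docker_{{docker_cluster}}"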
FIPS-compliant images

Since version 26.04, we publish FIPS 140-compliant images alongside the regular images, with a -fips suffix:
outcoldsolutions/collectorfordocker:26.04.1-fips

By default, these images use FIPS-certified cryptographic algorithms but still allow fallback to non-approved algorithms when needed. To strictly enforce FIPS 140 mode, set the environment variable GODEBUG=fips140=only — Collectord will crash if any code attempts to use a non-FIPS-140-compliant algorithm.
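For example, with docker run; everything besides the GODEBUG variable and the -fips tag is unchanged from a regular deployment:

docker run -d \
    --env "GODEBUG=fips140=only" \
    ... \
    outcoldsolutions/collectorfordocker:26.04.1-fips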
Collectord logs its FIPS state at startup and includes it in the output of collectord diag and collectord verify.
See FIPS 140-3 Compliance for details on the Go implementation.
Reference 001-general.conf
# collectord configuration file
#
# Run collectord with flag -conf and specify location of the configuration file.
#
# You can override all the values using environment variables with the format like
# COLLECTOR__<ANYNAME>=<section>__<key>=<value>
# As an example you can set dataPath in [general] section as
# COLLECTOR__DATAPATH=general__dataPath=C:\\some\\path\\data.db
# This parameter can be configured using -env-override, set it to empty string to disable this feature

[general]

# Please review license https://www.outcoldsolutions.com/docs/license-agreement/
# and accept license by changing the value to *true*
acceptLicense = false

# location for the database
# is used to store position of the files and internal state
dataPath = ./data/

# log level (trace, debug, info, warn, error, fatal)
logLevel = info

# http server gives access to two endpoints
# /healthz
# /metrics
httpServerBinding =

# telemetry report endpoint, set it to empty string to disable telemetry
telemetryEndpoint = https://license.outcold.solutions/telemetry/

# license check endpoint
licenseEndpoint = https://license.outcold.solutions/license/

# license server through proxy
# This configuration is used only for the Outcold Solutions License Server
# For a license server running on-premises, use configuration under [license.client]
licenseServerProxyUrl =

# authentication with basic authorization (user:password)
# This configuration is used only for the Outcold Solutions License Server
# For a license server running on-premises, use configuration under [license.client]
licenseServerProxyBasicAuth =

# license key
license =

# docker daemon hostname is used by default as hostname
# use this configuration to override
hostname =

# Default output for events, logs and metrics
# valid values: splunk and devnull
# Use devnull by default if you don't want to redirect data
defaultOutput = splunk

# Default buffer size for file input
fileInputBufferSize = 256b

# Maximum size of one line the file reader can read
fileInputLineMaxSize = 1mb

# Include custom fields to attach to every event; in the example below every event sent to Splunk will have
# the indexed field my_environment=dev. Field names should match ^[a-z][_a-z0-9]*$
# A better way to configure that is to specify labels for Docker hosts.
# ; fields.my_environment = dev

# Include EC2 Metadata (see list of possible fields https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
# Should be in format ec2Metadata.{desired_field_name} = {url path to read the value}
# ec2Metadata.ec2_instance_id = /latest/meta-data/instance-id
# ec2Metadata.ec2_instance_type = /latest/meta-data/instance-type

# subdomain for the annotations added to the pods, workloads, namespaces or containers, like splunk.collectord.io/..
annotationsSubdomain =

# configure global thruput per second for forwarded logs (metrics are not included)
# for example if you set `thruputPerSecond = 512Kb`, that will limit the amount of logs forwarded
# from the single Collectord instance to 512Kb per second.
# You can configure thruput individually for the logs (including specific for container logs) below
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# For input.files::X and application logs, when glob or match are configured, Collectord can automatically
# detect gzipped files and skip them (based on the extensions or magic numbers)
autoSkipGzipFiles = true

# Multi-output async publishing. When enabled (default), events routed to
# non-default outputs are published asynchronously so that a slow or down
# output does not block events destined for other outputs.
; multioutput.async = true
# Buffer size for the async proxy (default 100). Absorbs transient bursts.
# When this buffer and the output's own queue are both full, events are
# dropped immediately without blocking the pipeline.
; multioutput.asyncBufferSize = 100


[license.client]
# point to the license located on the HTTP web server, or hosted by the Collectord running as a license server
url =
# basic authentication for the HTTP server
basicAuth =
# if SSL, ignore the certificate verification
insecure = false
# CA Path for the Server certificate
capath =
# CA Name for the Server certificate
caname =
# license server through proxy
proxyUrl =
# authentication with basic authorization (user:password)
proxyBasicAuth =


# forward internal collectord metrics
[input.collectord_metrics]

# disable collectord internal metrics
disabled = false

# override type
type = docker_prometheus

# how often to collect internal metrics
interval = 1m

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# specify Splunk index
index =

# whitelist or blacklist the metrics
whitelist.1 = ^file_input_open$
whitelist.2 = ^file_input_read_bytes$
whitelist.3 = ^docker_handlers$
whitelist.4 = ^pipe$
whitelist.5 = ^pipelines_num$
whitelist.6 = ^splunk_post_bytes_sum.*$
whitelist.7 = ^splunk_post_events_count_sum.*$
whitelist.8 = ^splunk_post_failed_requests$
whitelist.9 = ^splunk_post_message_max_lag_seconds_bucket.*$
whitelist.10 = ^splunk_post_requests_seconds_sum.*$
whitelist.11 = ^splunk_post_retries_required_sum.*$


# connection to docker host
[general.docker]

# url for docker API, only unix socket is supported
url = unix:///rootfs/run/docker.sock

# path to docker root folder (can fall back to using the folder structure to read docker metadata)
# used to discover application logs and collect mount stats (usage)
dockerRootFolder = /rootfs/var/lib/docker/

# Timeout for http responses to docker client. The streaming requests depend on this timeout.
timeout = 1m

# regex to find docker container cgroups (helps excluding other cgroups with matched ID)
containersCgroupFilter = ^/([^/\s]+/)*(((docker-|docker/|ecs/[a-f0-9\-]{36}/)[0-9a-f]{64}(\.scope)?)|(libpod-[0-9a-f]{64}(\.scope)?/container))$

# api version for internal calls
apiVersion = v1.44

# cgroup input
# sends stats for the host and cgroups (containers)
[input.system_stats]

# disable system level stats
disabled.host = false
disabled.cgroup = false

# cgroups fs location
pathCgroups = /rootfs/sys/fs/cgroup

# proc location
pathProc = /rootfs/proc

# how often to collect cgroup stats
statsInterval = 30s

# override type
type.host = docker_stats_v2_host
type.cgroup = docker_stats_v2_cgroup

# specify Splunk index
index.host =
index.cgroup =

# set output (splunk or devnull, default is [general]defaultOutput)
output.host =
output.cgroup =


# mount input (collects mount stats where the docker runtime is stored)
[input.mount_stats]

# disable system level stats
disabled = false

# how often to collect mount stats
statsInterval = 30s

# override type
type = docker_mount_stats

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


# diskstats input (collects /proc/diskstats)
[input.disk_stats]

# disable system level stats
disabled = false

# how often to collect disk stats
statsInterval = 30s

# override type
type = docker_disk_stats

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


# proc input
[input.proc_stats]

# disable proc level stats
disabled = false

# proc location
pathProc = /rootfs/proc

# how often to collect proc stats
statsInterval = 60s

# override type
type = docker_proc_stats_v2

# specify Splunk index
index.host =
index.cgroup =

# the proc filesystem includes system threads by default (there can be over 100 of them)
# these stats do not help with observability
# excluding them can reduce the size of the index, the cost of the searches and the usage of the collector
includeSystemThreads = false

# set output (splunk or devnull, default is [general]defaultOutput)
output.host =
output.cgroup =

# Hide arguments for the processes, replacing with HIDDEN_ARGS(NUMBER)
hideArgs = false


# network stats
[input.net_stats]

# disable net stats
disabled = false

# proc path location
pathProc = /rootfs/proc

# how often to collect net stats
statsInterval = 30s

# override type
type = docker_net_stats_v2

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


# network socket table
[input.net_socket_table]

# disable net stats
disabled = false

# proc path location
pathProc = /rootfs/proc

# how often to collect net stats
statsInterval = 30s

# override type
type = docker_net_socket_table

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# group connections by tcp_state, localAddr, remoteAddr (if localPort is not the port it is listening on)
# that can significantly reduce the amount of events
group = true


# Log files
[input.files]

# disable container logs monitoring
disabled = false

# root location of docker log files, expecting that this folder has a folder for each container,
# folder is the containerID
path = /rootfs/var/lib/docker/containers/

# (obsolete, ignored) glob matching pattern for log files
# glob = */*-json.log*

# files are read using a polling scheme; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for the new files under logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_logs

# specify Splunk index
index =

# docker splits events when they are larger than 10-100k (depends on the docker version)
# we join them together by default and forward to Splunk as one event
joinPartialEvents = true

# In case your containers report messages with terminal colors or other escape sequences
# you can enable stripping for all the containers in one place.
# It is better to enable it only for the required container with the label collectord.io/strip-terminal-escape-sequences=true
stripTerminalEscapeSequences = false
# Regexp used for stripping terminal colors, it does not strip all the escape sequences
# Read http://man7.org/linux/man-pages/man4/console_codes.4.html for more information
stripTerminalEscapeSequencesRegex = (\x1b\[\d{1,3}(;\d{1,3})*m)|(\x07)|(\x1b]\d+(\s\d)?;[^\x07]+\x07)|(.*\x1b\[K)

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key for hash based sampling (should be a regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =


# Application Logs
[input.app_logs]

# disable container logs monitoring
disabled = false

# root location of mounts (applies to hostPath mounts only), if the hostPath differs inside the container from the path on the host
root = /rootfs

# how often to review the list of available volumes
syncInterval = 5s

# glob matching pattern for log files
glob = *.log*

# files are read using a polling scheme; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for the new files under logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_logs

# specify Splunk index
index =

# we split files using the new line character; with this configuration you can specify what defines the new event
# after a new line
eventPatternRegex = ^[^\s]
# Maximum interval of messages in pipeline
eventPatternMaxInterval = 100ms
# Maximum time to wait for the messages in pipeline
eventPatternMaxWait = 1s
# Maximum message size
eventPatternMaxSize = 100kb

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key for hash based sampling (should be a regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# Configure how long Collectord should keep the file descriptors open for files that have not been forwarded yet
# When using PVC, and if the pipeline is lagging behind, Collectord holding open fds for files can cause long termination
# of pods, as kubelet cannot unmount the PVC volume from the system
maxHoldAfterClose = 1800s


# Input syslog(.\d+)? files
[input.files::syslog]

# disable host level logs
disabled = false

# root location of log files
path = /rootfs/var/log/

# regex matching pattern
match = ^(syslog|messages)(.\d+)?$

# limit the search to one level only
recursive = false

# files are read using a polling scheme; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for the new files under logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_host_logs

# specify Splunk index
index =

# field extraction
extraction = ^(?P<timestamp>[A-Za-z]+\s+\d+\s\d+:\d+:\d+)\s(?P<syslog_hostname>[^\s]+)\s(?P<syslog_component>[^:\[]+)(\[(?P<syslog_pid>\d+)\])?: (.+)$

# timestamp field
timestampField = timestamp

# format for timestamp
# the layout defines the format by showing how the reference time, defined to be `Mon Jan 2 15:04:05 -0700 MST 2006`, would be formatted
timestampFormat = Jan 2 15:04:05

# Adjust date, if month/day aren't set in format
timestampSetMonth = false
timestampSetDay = false

# timestamp location (if not defined by format)
timestampLocation = Local

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key for hash based sampling (should be a regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =


# Input all *.log(.\d+)? files
[input.files::logs]

# disable host level logs
disabled = false

# root location of log files
path = /rootfs/var/log/

# regex matching pattern
match = ^(([\w\-.]+\.log(.[\d\-]+)?)|(docker))$

# files are read using a polling scheme; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for the new files under logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_host_logs

# specify Splunk index
index =

# field extraction
extraction =

# timestamp field
timestampField =

# format for timestamp
# the layout defines the format by showing how the reference time, defined to be `Mon Jan 2 15:04:05 -0700 MST 2006`, would be formatted
timestampFormat =

# timestamp location (if not defined by format)
timestampLocation =

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key (should be a regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =


[input.journald]

# disable host level logs
disabled = false

# root location of log files
path.persistent = /rootfs/var/log/journal/
path.volatile = /rootfs/run/log/journal/

# when we reach the end of journald, how often to poll
pollingInterval = 250ms

# override type
type = docker_host_logs

# specify Splunk index
index =

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key (should be a regexp with the named match pattern `key`)
samplingKey =

# how often to reopen the journald to free old files
reopenInterval = 1h

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# Move the journald logs reader to a separate process, to prevent the process from crashing in case of corrupted log files
spawnExternalProcess = false

# Docker input (events)
[input.docker_events]

# disable docker events
disabled = false

# (obsolete) interval to poll for new events in docker
# docker events are streamed in realtime, recreating connections every ~docker.timeout
# eventsPollingInterval = 5s

# override type
type = docker_events

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


[input.docker_api::containers]

# disable listing of docker containers
disabled = false

path = /containers/json
inspectPath = /containers/{{.Id}}/json
interval = 5m
query = all=1
apiVersion =

# override type
type = docker_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


[input.docker_api::images]

# disable listing of docker images
disabled = false

path = /images/json
inspectPath = /images/{{.Id}}/json
interval = 5m
query = all=0
apiVersion =

# override type
type = docker_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

[input.docker_api::system]

# disable docker system information
disabled = false

path = /system/df
listPath.Volumes = Volumes
interval = 5m
apiVersion =

# override type
type = docker_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# Prometheus auto-discovery
# Collectord can automatically scrape Prometheus metrics from Docker containers
# that have the appropriate labels (annotations) set
[input.prometheus_auto]

# disable prometheus auto discovery
disabled = false

# override type
type = docker_prometheus

# specify Splunk index
index =

# how often to scrape (default, can be overridden per container via annotations)
interval = 60s

# include help text from prometheus metrics
includeHelp = false

# timeout for scraping
timeout = 30s

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# authorization headers for prometheus endpoints
# authorization.<key> = Bearer <token>
# Then use collectord.io/prometheus.myapp-authorizationkey = <key> in container labels

# Splunk output
[output.splunk]

# Splunk HTTP Event Collector url
url =
# You can specify multiple splunk URLs with
#
# urls.0 = https://server1:8088/services/collector/event/1.0
# urls.1 = https://server1:8088/services/collector/event/1.0
# urls.2 = https://server1:8088/services/collector/event/1.0
#
# Limitations:
# * The urls cannot have different paths.

# Specify how the URL should be picked (in case multiple are used)
# urlSelection = random|round-robin|random-with-round-robin
# where:
# * random - choose random url on first selection and after each failure (connection or HTTP status code >= 500)
# * round-robin - choose url starting from first one and bump on each failure (connection or HTTP status code >= 500)
# * random-with-round-robin - choose random url on first selection and after that in round-robin on each
# failure (connection or HTTP status code >= 500)
urlSelection = random-with-round-robin

# Splunk HTTP Event Collector Token
token =

# Allow invalid SSL server certificate
insecure = false
# minTLSVersion = TLSv1.2
# maxTLSVersion = TLSv1.3

# Path to CA certificate
caPath =

# CA Name to verify
caName =

# path for client certificate (if required)
clientCertPath =

# path for client key (if required)
clientKeyPath =

# Events are batched with the maximum size set by batchSize and stay in the pipeline for no longer
# than set by frequency
frequency = 5s
batchSize = 768K
# limit by the number of events (0 value has no limit on the number of events)
events = 50

# Splunk through proxy
proxyUrl =

# authentication with basic authorization (user:password)
proxyBasicAuth =

# Splunk acknowledgement url (.../services/collector/ack)
ackUrl =
# You can specify multiple splunk URLs for ackUrl
#
# ackUrls.0 = https://server1:8088/services/collector/ack
# ackUrls.1 = https://server1:8088/services/collector/ack
# ackUrls.2 = https://server1:8088/services/collector/ack
#
# Make sure they are in the same order as the urls for url, so that this Splunk instance will be
# able to acknowledge the payload.
#
# Limitations:
# * The urls cannot have different paths.

# Enable index acknowledgment
ackEnabled = false

# Index acknowledgment timeout
ackTimeout = 3m

# Timeout specifies a time limit for requests made by collectord.
# The timeout includes connection time, any
# redirects, and reading the response body.
timeout = 30s

# in case the pipeline can post to multiple indexes, we want to avoid the possibility of blocking
# all pipelines because just some events have an incorrect index
dedicatedClientPerIndex = true

# (obsolete) in case some indexes aren't used anymore, how often to destroy the dedicated client
# dedicatedClientCleanPeriod = 24h

# possible values: RedirectToDefault, Drop, Retry
incorrectIndexBehavior = RedirectToDefault

# gzip compression level (nocompression, default, 1...9)
compressionLevel = default

# number of dedicated splunk output threads (to increase throughput above 4k events per second)
threads = 2
# Default algorithm between threads is roundrobin, but you can change it to weighted
threadsAlgorithm = weighted

# if you want to exclude some preindexed fields from events
# excludeFields.docker_container_ip = true

# By default if there are no indexes defined on the message, Collectord sends the event without the index, and
# Splunk HTTP Event Collector is going to use the default index for the Token. You can change that, and tell Collectord
# to ignore all events that don't have an index defined explicitly
; requireExplicitIndex = true

# You can define if you want to truncate messages that are larger than 1M in length (or define your own size, like 256K)
; maximumMessageLength = 1M

# For messages generated from logs, include unique `event_id` in the event
; includeEventID = false

# Dedicated queue size for the output, default is 1024; larger queue sizes will require more memory,
# but will allow handling more events in case of network issues
queueSize = 1024

# How many digits after the decimal point to keep for timestamps (0-9)
# Defaults to 3 (milliseconds)
# Change to 6 for microseconds
# Change to 9 for nanoseconds
; timestampPrecision = 3

# Pipe to join events (container logs only)
[pipe.join]

# disable joining events
disabled = false

# Maximum interval of messages in pipeline
maxInterval = 100ms

# Maximum time to wait for the messages in pipeline
maxWait = 1s

# Maximum message size
maxSize = 100K

# Default pattern to indicate a new message (should not start with a space)
patternRegex = ^[^\s]


# (deprecated, use container annotations for setting up join rules)
# Define special event join patterns for matched events
# Section consists of [pipe.join::<name>]
# [pipe.join::my_app]
## Set match pattern for the fields
#; matchRegex.docker_container_image = my_app
#; matchRegex.stream = stdout
## All events start from '[<digits>'
#; patternRegex = ^\[\d+

# You can configure global replace rules for the events, which can help to remove sensitive data
# from logs before they are sent to Splunk. Those rules will be applied to all pipelines for container logs, host logs,
# application logs and events.
# In the following example we replace password=TEST with password=********
; [pipe.replace::name]
; patternRegex = (password=)([^\s]+)
; replace = $1********

# You can configure global hash rules for the events, which can help to hide sensitive data
# from logs before they are sent to outputs. Those rules will be applied to all pipelines for container logs, host logs,
# application logs and events.
# In the following example we hash IP addresses with fnv-1a-64
; [pipe.hash::name]
; match = (\d{1,3}\.){3}\d{1,3}
; function = fnv-1a-64

# Collectord reports if entropy is low
; [diagnostics::node-entropy]
; settings.path = /rootfs/proc/sys/kernel/random/entropy_avail
; settings.interval = 1h
; settings.threshold = 800

# Collectord can report if a node reboot is required
[diagnostics::node-reboot-required]
settings.path = /rootfs/var/run/reboot-required*
settings.interval = 1h

# See https://www.kernel.org/doc/Documentation/admin-guide/hw-vuln/index.rst
# And https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-devices-system-cpu
[diagnostics::cpu-vulnerabilities]
settings.path = /rootfs/sys/devices/system/cpu/vulnerabilities/*
settings.interval = 1h