Configuration
The default configuration file 001-general.conf is mapped to /config/001-general.conf inside the image.
You can override configurations by placing other files in the same folder; the collectord reads all the *.conf files
in the /config folder in alphabetical order.
With the configuration files, you can override general settings for how collectord behaves, what data it forwards to
Splunk, how it connects to Splunk, and the default configurations for data pipelines (logs and metrics).
You can also configure data forwarding for specific containers using annotations
attached to these containers: define join rules for multi-line events, strip terminal colors,
redirect events to /dev/null based on patterns, define field extraction, replace patterns in the logs,
and configure application logs for the containers.
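Annotations are attached to containers as Docker labels. As a sketch (the image name my-app is a placeholder; the label name comes from the reference configuration below, where it is used to strip terminal escape sequences for a single container):

```shell
# Start a container with a collectord annotation attached as a Docker label.
# This label enables stripping terminal escape sequences for this container
# only, instead of enabling it globally in the configuration file.
docker run -d \
  --label collectord.io/strip-terminal-escape-sequences=true \
  my-app
```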
Overriding configuration by embedding configuration files
You can create your own configuration files that override the default values in 001-general.conf. Place only
the values that you want to replace inside this file; for example, create a file 002-conf.conf
# accepting license
[general]
acceptLicense = true

# Configuring Splunk output
[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
insecure = true

# Overriding default indices
[input.system_stats]
index = docker_stats

[input.mount_stats]
index = docker_stats

[input.net_stats]
index = docker_stats

[input.net_socket_table]
index = docker_stats

[input.proc_stats]
index = docker_stats

[input.files]
index = docker_logs

[input.app_logs]
index = docker_logs

[input.files::syslog]
index = docker_logs

[input.files::logs]
index = docker_logs

[input.docker_events]
index = docker_logs
Create a Dockerfile
FROM outcoldsolutions/collectorfordocker:25.10.3

COPY 002-conf.conf /config/002-conf.conf
Build the image
docker build -t example.com/collectorfordocker:25.10.3 .
Use this image to start the collectord, following the instructions on how we deploy the collectord.
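As a minimal sketch (the mounts mirror the /rootfs paths used by the default configuration in the reference 001-general.conf below; verify against the official deployment instructions before use):

```shell
# Start the custom image; each volume maps a host path to the /rootfs
# location the default configuration expects (docker socket, docker root
# folder, proc and cgroup filesystems).
docker run -d \
  --name collectorfordocker \
  --volume /var/run/docker.sock:/rootfs/run/docker.sock:ro \
  --volume /var/lib/docker:/rootfs/var/lib/docker:ro \
  --volume /proc:/rootfs/proc:ro \
  --volume /sys/fs/cgroup:/rootfs/sys/fs/cgroup:ro \
  example.com/collectorfordocker:25.10.3
```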
Overriding configuration with the environment variables
You can override configurations with environment variables in the format
--env "COLLECTOR__<ANYUNIQUENAME>=<section>__<key>=<value>"
An example identical to the configuration file above
--env "COLLECTOR__ACCEPTLICENSE=general__acceptLicense=true" \
--env "COLLECTOR__SPLUNK_URL=output.splunk__url=https://hec.example.com:8088/services/collector/event/1.0" \
--env "COLLECTOR__SPLUNK_TOKEN=output.splunk__token=B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
--env "COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true" \
--env "COLLECTOR__SYSTEMSTATS_INDEX=input.system_stats__index=docker_stats" \
--env "COLLECTOR__MOUNTSTATS_INDEX=input.mount_stats__index=docker_stats" \
--env "COLLECTOR__NETSTATS_INDEX=input.net_stats__index=docker_stats" \
--env "COLLECTOR__NETSOCKETTABLE_INDEX=input.net_socket_table__index=docker_stats" \
--env "COLLECTOR__PROCSTATS_INDEX=input.proc_stats__index=docker_stats" \
--env "COLLECTOR__CONTAINERLOGS_INDEX=input.files__index=docker_logs" \
--env "COLLECTOR__APPLICATIONLOGS_INDEX=input.app_logs__index=docker_logs" \
--env "COLLECTOR__SYSLOG_INDEX=input.files::syslog__index=docker_logs" \
--env "COLLECTOR__HOSTLOGS_INDEX=input.files::logs__index=docker_logs" \
--env "COLLECTOR__EVENTS_INDEX=input.docker_events__index=docker_logs"
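To illustrate how each environment variable value maps onto the configuration, here is a small Python sketch (an illustration only, not part of collectord) that splits the part after COLLECTOR__<ANYUNIQUENAME>= into its section, key, and value:

```python
def parse_override(value: str):
    """Split 'section__key=value' into (section, key, value).

    The section name may itself contain dots or '::' (e.g.
    'input.files::syslog'); only the first '__' and the first '='
    after it act as separators.
    """
    section, rest = value.split("__", 1)
    key, val = rest.split("=", 1)
    return section, key, val

print(parse_override("general__acceptLicense=true"))
# → ('general', 'acceptLicense', 'true')
```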
Attaching EC2 Metadata
You can include EC2 metadata with the forwarded data (logs and metrics) by specifying the desired field name and the path from the Instance Metadata and User Data.
# Include EC2 Metadata (see list of possible fields https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
# Should be in format ec2Metadata.{desired_field_name} = {url path to read the value}
# ec2Metadata.ec2_instance_id = /latest/meta-data/instance-id
# ec2Metadata.ec2_instance_type = /latest/meta-data/instance-type
As an example, if you configure Collectord with environment variables, you can include specific EC2 Metadata fields with
...
--env "COLLECTOR__EC2_INSTANCE_ID=general__ec2Metadata.ec2_instance_id=/latest/meta-data/instance-id" \
--env "COLLECTOR__EC2_INSTANCE_TYPE=general__ec2Metadata.ec2_instance_type=/latest/meta-data/instance-type" \
...
Placeholders in index and sources
You can apply dynamic index names in the configurations to forward logs or stats to a specific index based on the meta fields. For example, you can define an index as
[input.files]

index = docker_{{docker_cluster}}
Similarly, you can change the source of all the forwarded logs:
[input.files]

source = /{{docker_cluster}}/{{docker_container_name}}
Reference 001-general.conf
# collectord configuration file
#
# Run collectord with flag -conf and specify location of the configuration file.
#
# You can override all the values using environment variables with the format like
# COLLECTOR__<ANYNAME>=<section>__<key>=<value>
# As an example you can set dataPath in [general] section as
# COLLECTOR__DATAPATH=general__dataPath=C:\\some\\path\\data.db
# This parameter can be configured using -env-override, set it to empty string to disable this feature

[general]

# Please review license https://www.outcoldsolutions.com/docs/license-agreement/
# and accept license by changing the value to *true*
acceptLicense = false

# location for the database
# is used to store position of the files and internal state
dataPath = ./data/

# log level (trace, debug, info, warn, error, fatal)
logLevel = info

# http server gives access to two endpoints
# /healthz
# /metrics
httpServerBinding =

# telemetry report endpoint, set it to empty string to disable telemetry
telemetryEndpoint = https://license.outcold.solutions/telemetry/

# license check endpoint
licenseEndpoint = https://license.outcold.solutions/license/

# license server through proxy
# This configuration is used only for the Outcold Solutions License Server
# For license server running on-premises, use configuration under [license.client]
licenseServerProxyUrl =

# authentication with basic authorization (user:password)
# This configuration is used only for the Outcold Solutions License Server
# For license server running on-premises, use configuration under [license.client]
licenseServerProxyBasicAuth =

# license key
license =

# docker daemon hostname is used by default as hostname
# use this configuration to override
hostname =

# Default output for events, logs and metrics
# valid values: splunk and devnull
# Use devnull by default if you don't want to redirect data
defaultOutput = splunk

# Default buffer size for file input
fileInputBufferSize = 256b

# Maximum size of one line the file reader can read
fileInputLineMaxSize = 1mb

# Include custom fields to attach to every event, in the example below every event sent to Splunk will have
# indexed field my_environment=dev. Field names should match ^[a-z][_a-z0-9]*$
# A better way to configure that is to specify labels for Docker Hosts.
# ; fields.my_environment = dev

# Include EC2 Metadata (see list of possible fields https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
# Should be in format ec2Metadata.{desired_field_name} = {url path to read the value}
# ec2Metadata.ec2_instance_id = /latest/meta-data/instance-id
# ec2Metadata.ec2_instance_type = /latest/meta-data/instance-type

# subdomain for the annotations added to the pods, workloads, namespaces or containers, like splunk.collectord.io/..
annotationsSubdomain =

# configure global thruput per second for forwarded logs (metrics are not included)
# for example if you set `thruputPerSecond = 512Kb`, that will limit the amount of logs forwarded
# from the single Collectord instance to 512Kb per second.
# You can configure thruput individually for the logs (including specific container logs) below
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# For input.files::X and application logs, when glob or match are configured, Collectord can automatically
# detect gzipped files and skip them (based on the extensions or magic numbers)
autoSkipGzipFiles = true


[license.client]
# point to the license located on the HTTP web server, or hosted by the Collectord running as a license server
url =
# basic authentication for the HTTP server
basicAuth =
# if SSL, ignore the certificate verification
insecure = false
# CA Path for the Server certificate
capath =
# CA Name for the Server certificate
caname =
# license server through proxy
proxyUrl =
# authentication with basic authorization (user:password)
proxyBasicAuth =


# forward internal collectord metrics
[input.collectord_metrics]

# disable collectord internal metrics
disabled = false

# override type
type = docker_prometheus

# how often to collect internal metrics
interval = 1m

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# specify Splunk index
index =

# whitelist or blacklist the metrics
whitelist.1 = ^file_input_open$
whitelist.2 = ^file_input_read_bytes$
whitelist.3 = ^docker_handlers$
whitelist.4 = ^pipe$
whitelist.5 = ^pipelines_num$
whitelist.6 = ^splunk_post_bytes_sum.*$
whitelist.7 = ^splunk_post_events_count_sum.*$
whitelist.8 = ^splunk_post_failed_requests$
whitelist.9 = ^splunk_post_message_max_lag_seconds_bucket.*$
whitelist.10 = ^splunk_post_requests_seconds_sum.*$
whitelist.11 = ^splunk_post_retries_required_sum.*$

# connection to docker host
[general.docker]

# url for docker API, only unix socket is supported
url = unix:///rootfs/run/docker.sock

# path to docker root folder (can fallback to use folder structure to read docker metadata)
# used to discover application logs and collect mount stats (usage)
dockerRootFolder = /rootfs/var/lib/docker/

# Timeout for http responses to docker client. The streaming requests depend on this timeout.
timeout = 1m

# regex to find docker container cgroups (helps excluding other cgroups with matched ID)
containersCgroupFilter = ^/([^/\s]+/)*(((docker-|docker/|ecs/[a-f0-9\-]{36}/)[0-9a-f]{64}(\.scope)?)|(libpod-[0-9a-f]{64}(\.scope)?/container))$

# api version for internal calls
apiVersion = v1.21

# cgroup input
# sends stats for the host and cgroups (containers)
[input.system_stats]

# disable system level stats
disabled.host = false
disabled.cgroup = false

# cgroups fs location
pathCgroups = /rootfs/sys/fs/cgroup

# proc location
pathProc = /rootfs/proc

# how often to collect cgroup stats
statsInterval = 30s

# override type
type.host = docker_stats_v2_host
type.cgroup = docker_stats_v2_cgroup

# specify Splunk index
index.host =
index.cgroup =

# set output (splunk or devnull, default is [general]defaultOutput)
output.host =
output.cgroup =


# mount input (collects mount stats where docker runtime is stored)
[input.mount_stats]

# disable system level stats
disabled = false

# how often to collect mount stats
statsInterval = 30s

# override type
type = docker_mount_stats

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


# diskstats input (collects /proc/diskstats)
[input.disk_stats]

# disable system level stats
disabled = false

# how often to collect disk stats
statsInterval = 30s

# override type
type = docker_disk_stats

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


# proc input
[input.proc_stats]

# disable proc level stats
disabled = false

# proc location
pathProc = /rootfs/proc

# how often to collect proc stats
statsInterval = 60s

# override type
type = docker_proc_stats_v2

# specify Splunk index
index.host =
index.cgroup =

# proc filesystem includes by default system threads (there can be over 100 of them)
# these stats do not help with the observability
# excluding them can reduce the size of the index, improve the performance of the searches and reduce the usage of the collector
includeSystemThreads = false

# set output (splunk or devnull, default is [general]defaultOutput)
output.host =
output.cgroup =

# Hide arguments for the processes, replacing with HIDDEN_ARGS(NUMBER)
hideArgs = false

# network stats
[input.net_stats]

# disable net stats
disabled = false

# proc path location
pathProc = /rootfs/proc

# how often to collect net stats
statsInterval = 30s

# override type
type = docker_net_stats_v2

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


# network socket table
[input.net_socket_table]

# disable net stats
disabled = false

# proc path location
pathProc = /rootfs/proc

# how often to collect net stats
statsInterval = 30s

# override type
type = docker_net_socket_table

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# group connections by tcp_state, localAddr, remoteAddr (if localPort is not the port it is listening on)
# that can significantly reduce the amount of events
group = true


# Log files
[input.files]

# disable container logs monitoring
disabled = false

# root location of docker log files, expecting that this folder has a folder for each container,
# where the folder name is the containerID
path = /rootfs/var/lib/docker/containers/

# (obsolete, ignored) glob matching pattern for log files
# glob = */*-json.log*

# files are read using a polling schema; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for new files under the logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_logs

# specify Splunk index
index =

# docker splits events when they are larger than 10-100k (depends on the docker version)
# we join them together by default and forward to Splunk as one event
joinPartialEvents = true

# If your containers report messages with terminal colors or other escape sequences
# you can enable stripping for all the containers in one place.
# It is better to enable it only for the required container with the label collectord.io/strip-terminal-escape-sequences=true
stripTerminalEscapeSequences = false
# Regexp used for stripping terminal colors, it does not strip all the escape sequences
# Read http://man7.org/linux/man-pages/man4/console_codes.4.html for more information
stripTerminalEscapeSequencesRegex = (\x1b\[\d{1,3}(;\d{1,3})*m)|(\x07)|(\x1b]\d+(\s\d)?;[^\x07]+\x07)|(.*\x1b\[K)

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key for hash based sampling (should be regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# Application Logs
[input.app_logs]

# disable application logs monitoring
disabled = false

# root location of mounts (applies to hostPath mounts only), if the hostPath differs inside the container from the path on the host
root = /rootfs

# how often to review the list of available volumes
syncInterval = 5s

# glob matching pattern for log files
glob = *.log*

# files are read using a polling schema; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for new files under the logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_logs

# specify Splunk index
index =

# we split files using the new line character; with this configuration you can specify what defines a new event
# after a new line
eventPatternRegex = ^[^\s]
# Maximum interval of messages in pipeline
eventPatternMaxInterval = 100ms
# Maximum time to wait for the messages in pipeline
eventPatternMaxWait = 1s
# Maximum message size
eventPatternMaxSize = 100kb

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key for hash based sampling (should be regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# Configure how long Collectord should keep the file descriptors open for files that have not been forwarded yet
# When using PVC, if the pipeline is lagging behind, Collectord holding an open fd for files can cause long termination
# of pods, as kubelet cannot unmount the PVC volume from the system
maxHoldAfterClose = 1800s

# Input syslog(.\d+)? files
[input.files::syslog]

# disable host level logs
disabled = false

# root location of log files
path = /rootfs/var/log/

# regex matching pattern
match = ^(syslog|messages)(.\d+)?$

# limit search to only one level
recursive = false

# files are read using a polling schema; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for new files under the logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_host_logs

# specify Splunk index
index =

# field extraction
extraction = ^(?P<timestamp>[A-Za-z]+\s+\d+\s\d+:\d+:\d+)\s(?P<syslog_hostname>[^\s]+)\s(?P<syslog_component>[^:\[]+)(\[(?P<syslog_pid>\d+)\])?: (.+)$

# timestamp field
timestampField = timestamp

# format for timestamp
# the layout defines the format by showing how the reference time, defined to be `Mon Jan 2 15:04:05 -0700 MST 2006`, would be formatted
timestampFormat = Jan 2 15:04:05

# Adjust date, if month/day aren't set in format
timestampSetMonth = false
timestampSetDay = false

# timestamp location (if not defined by format)
timestampLocation = Local

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key for hash based sampling (should be regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =


# Input all *.log(.\d+)? files
[input.files::logs]

# disable host level logs
disabled = false

# root location of log files
path = /rootfs/var/log/

# regex matching pattern
match = ^(([\w\-.]+\.log(.[\d\-]+)?)|(docker))$

# files are read using a polling schema; when the EOF is reached, how often to check if files got updated
pollingInterval = 250ms

# how often to look for new files under the logs path
walkingInterval = 5s

# include verbose fields in events (file offset)
verboseFields = false

# override type
type = docker_host_logs

# specify Splunk index
index =

# field extraction
extraction =

# timestamp field
timestampField =

# format for timestamp
# the layout defines the format by showing how the reference time, defined to be `Mon Jan 2 15:04:05 -0700 MST 2006`, would be formatted
timestampFormat =

# timestamp location (if not defined by format)
timestampLocation =

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key (should be regexp with the named match pattern `key`)
samplingKey =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =


[input.journald]

# disable host level logs
disabled = false

# root location of log files
path.persistent = /rootfs/var/log/journal/
path.volatile = /rootfs/run/log/journal/

# when the end of journald is reached, how often to poll
pollingInterval = 250ms

# override type
type = docker_host_logs

# specify Splunk index
index =

# sample output (-1 does not sample, 20 - only 20% of the logs should be forwarded)
samplingPercent = -1

# sampling key (should be regexp with the named match pattern `key`)
samplingKey =

# how often to reopen the journald to free old files
reopenInterval = 1h

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# configure default thruput per second for each container log
# for example if you set `thruputPerSecond = 128Kb`, that will limit the amount of logs forwarded
# from the single container to 128Kb per second.
thruputPerSecond =

# Configure events that are too old to be forwarded, for example 168h (7 days) - that will drop all events
# older than 7 days
tooOldEvents =

# Configure events that are too new to be forwarded, for example 1h - that will drop all events that are 1h in future
tooNewEvents =

# Move Journald logs reader to a separate process, to prevent the process from crashing in case of corrupted log files
spawnExternalProcess = false

# Docker input (events)
[input.docker_events]

# disable docker events
disabled = false

# (obsolete) interval to poll for new events in docker
# docker events are streamed in realtime, recreating connections every ~docker.timeout
# eventsPollingInterval = 5s

# override type
type = docker_events

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


[input.docker_api::containers]

# disable this docker API input
disabled = false

path = /containers/json
inspectPath = /containers/{{.Id}}/json
interval = 5m
query = all=1
apiVersion =

# override type
type = docker_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


[input.docker_api::images]

# disable this docker API input
disabled = false

path = /images/json
inspectPath = /images/{{.Id}}/json
interval = 5m
query = all=0
apiVersion =

# override type
type = docker_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

[input.docker_api::system]

# disable this docker API input
disabled = false

path = /system/df
listPath.Volumes = Volumes
interval = 5m
apiVersion =

# override type
type = docker_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# Splunk output
[output.splunk]

# Splunk HTTP Event Collector url
url =
# You can specify multiple Splunk URLs with
#
# urls.0 = https://server1:8088/services/collector/event/1.0
# urls.1 = https://server1:8088/services/collector/event/1.0
# urls.2 = https://server1:8088/services/collector/event/1.0
#
# Limitations:
# * The urls cannot have different paths.

# Specify how the URL should be picked (in case multiple are used)
# urlSelection = random|round-robin|random-with-round-robin
# where:
# * random - choose a random url on first selection and after each failure (connection or HTTP status code >= 500)
# * round-robin - choose urls starting from the first one and bump on each failure (connection or HTTP status code >= 500)
# * random-with-round-robin - choose a random url on first selection and after that in round-robin on each
# failure (connection or HTTP status code >= 500)
urlSelection = random-with-round-robin

# Splunk HTTP Event Collector Token
token =

# Allow invalid SSL server certificate
insecure = false
# minTLSVersion = TLSv1.2
# maxTLSVersion = TLSv1.3

# Path to CA certificate
caPath =

# CA Name to verify
caName =

# path for client certificate (if required)
clientCertPath =

# path for client key (if required)
clientKeyPath =

# Events are batched with the maximum size set by batchSize and staying in pipeline for not longer
# than set by frequency
frequency = 5s
batchSize = 768K
# limit by the number of events (0 value has no limit on the number of events)
events = 50

# Splunk through proxy
proxyUrl =

# authentication with basic authorization (user:password)
proxyBasicAuth =

# Splunk acknowledgement url (.../services/collector/ack)
ackUrl =
# You can specify multiple Splunk URLs for ackUrl
#
# ackUrls.0 = https://server1:8088/services/collector/ack
# ackUrls.1 = https://server1:8088/services/collector/ack
# ackUrls.2 = https://server1:8088/services/collector/ack
#
# Make sure that they are in the same order as the urls, so that this Splunk instance will be
# able to acknowledge the payload.
#
# Limitations:
# * The urls cannot have different paths.

# Enable index acknowledgment
ackEnabled = false

# Index acknowledgment timeout
ackTimeout = 3m

# Timeout specifies a time limit for requests made by collectord.
# The timeout includes connection time, any
# redirects, and reading the response body.
timeout = 30s

# in case the pipeline can post to multiple indexes, we want to avoid the possibility of blocking
# all pipelines, because just some events have an incorrect index
dedicatedClientPerIndex = true

# (obsolete) in case if some indexes aren't used anymore, how often to destroy the dedicated client
# dedicatedClientCleanPeriod = 24h

# possible values: RedirectToDefault, Drop, Retry
incorrectIndexBehavior = RedirectToDefault

# gzip compression level (nocompression, default, 1...9)
compressionLevel = default

# number of dedicated splunk output threads (to increase throughput above 4k events per second)
threads = 2
# (BETA) Default algorithm between threads is roundrobin, but you can change it to weighted
threadsAlgorithm = weighted

# if you want to exclude some preindexed fields from events
# excludeFields.docker_container_ip = true

# By default if there are no indexes defined on the message, Collectord sends the event without the index, and
# Splunk HTTP Event Collector is going to use the default index for the Token. You can change that, and tell Collectord
# to ignore all events that don't have an index defined explicitly
; requireExplicitIndex = true

# You can define if you want to truncate messages that are larger than 1M in length (or define your own size, like 256K)
; maximumMessageLength = 1M

# For messages generated from logs, include a unique `event_id` in the event
; includeEventID = false

# Dedicated queue size for the output, default is 1024, larger queue sizes will require more memory,
# but will allow handling more events in case of network issues
queueSize = 1024

# How many digits after the decimal point to keep for timestamps (0-9)
# Defaults to 3 (milliseconds)
# Change to 6 for microseconds
# Change to 9 for nanoseconds
; timestampPrecision = 3

# Pipe to join events (container logs only)
[pipe.join]

# disable joining events
disabled = false

# Maximum interval of messages in pipeline
maxInterval = 100ms

# Maximum time to wait for the messages in pipeline
maxWait = 1s

# Maximum message size
maxSize = 100K

# Default pattern to indicate a new message (should not start with a space)
patternRegex = ^[^\s]


# (deprecated, use container annotations for setting up join rules)
# Define special event join patterns for matched events
# Section consists of [pipe.join::<name>]
# [pipe.join::my_app]
## Set match pattern for the fields
#; matchRegex.docker_container_image = my_app
#; matchRegex.stream = stdout
## All events start from '[<digits>'
#; patternRegex = ^\[\d+

# You can configure global replace rules for the events, which can help to remove sensitive data
# from logs before they are sent to Splunk. Those rules will be applied to all pipelines for container logs, host logs,
# application logs and events.
# In the following example we replace password=TEST with password=********
; [pipe.replace::name]
; patternRegex = (password=)([^\s]+)
; replace = $1********

# You can configure global hash rules for the events, which can help to hide sensitive data
# from logs before they are sent to outputs. Those rules will be applied to all pipelines for container logs, host logs,
# application logs and events.
# In the following example we hash IP addresses with fnv-1a-64
; [pipe.hash::name]
; match = (\d{1,3}\.){3}\d{1,3}
; function = fnv-1a-64

# Collectord reports if entropy is low
; [diagnostics::node-entropy]
; settings.path = /rootfs/proc/sys/kernel/random/entropy_avail
; settings.interval = 1h
; settings.threshold = 800

# Collectord can report if a node reboot is required
[diagnostics::node-reboot-required]
settings.path = /rootfs/var/run/reboot-required*
settings.interval = 1h

# See https://www.kernel.org/doc/Documentation/admin-guide/hw-vuln/index.rst
# And https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-devices-system-cpu
[diagnostics::cpu-vulnerabilities]
settings.path = /rootfs/sys/devices/system/cpu/vulnerabilities/*
settings.interval = 1h