Outcold Solutions LLC

Monitoring OpenShift - Version 5

Annotations

You can define annotations on namespaces, workloads, and pods. Annotations allow you to change how the collector forwards data to Splunk, and they tell the collector where to discover application logs.

The complete list of available annotations is at the bottom of this page.

Overriding indexes

Using annotations, you can override the index to which data is forwarded from a specific namespace, workload, or pod. You can define one index for the whole object with collectord.io/index, or specific indices for container logs (collectord.io/logs-index), container stats (collectord.io/stats-index), process stats (collectord.io/procstats-index), and events (collectord.io/events-index, which can be applied only to a whole namespace).

For example, to override the index for a specific namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: team1
  annotations:
    collectord.io/index: openshift_team1

This annotation tells the collector to forward all data from this namespace, including pod and collector stats, logs, and events, to the index named openshift_team1.

When you change annotations on existing objects, it can take up to 2x[general.kubernetes]/timeout (2x5m by default) for the change to take effect. That is how often the collector reloads metadata for already monitored pods and compares the differences. You can always force the change by recreating the monitored pod (wait [general.kubernetes]/metadataTTL after applying the change) or by restarting the collector.

To forward stats from pods, containers, and processes to a dedicated index, you can define:

apiVersion: v1
kind: Namespace
metadata:
  name: team1
  annotations:
    collectord.io/index: openshift_team1
    collectord.io/procstats-index: openshift_team1_stats
    collectord.io/netstats-index: openshift_team1_stats
    collectord.io/nettable-index: openshift_team1_stats
    collectord.io/stats-index: openshift_team1_stats

This tells the collector that all data from the namespace team1 should be forwarded to the index openshift_team1, while pod and container stats (collectord.io/stats-index), process stats (collectord.io/procstats-index), and network stats (collectord.io/netstats-index and collectord.io/nettable-index) should be forwarded to the index openshift_team1_stats. That means events, container logs, and application logs will be forwarded to the openshift_team1 index.

collectord.io/logs-index overrides the index only for the container logs. If you want to override the index for application logs, use collectord.io/index or collectord.io/volume.{N}-logs-index.
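As a sketch (the pod name, index names, and volume layout below are illustrative), you can route container logs and application logs to separate indices:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  annotations:
    # container logs (stdout/stderr) go to this index
    collectord.io/logs-index: openshift_container_logs
    # application logs from the first annotated volume go to this index
    collectord.io/volume.1-logs-name: logs
    collectord.io/volume.1-logs-index: openshift_application_logs
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
```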

Similarly, you can override the source, sourcetype, and host.
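A minimal sketch, with illustrative values, of overriding the source, sourcetype, and host for a whole namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team1
  annotations:
    collectord.io/source: /openshift/team1   # override source for all data
    collectord.io/type: team1_sourcetype     # override sourcetype (illustrative value)
    collectord.io/host: team1-host           # override host (illustrative value)
```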

Overriding index, source and type for specific events

Available since version 5.2

When your container runs multiple processes, sometimes you want to override the source or sourcetype only for specific events in the container (or application) logs. You can do that with the override annotations.

As an example, we will use the nginx image, which produces logs like:

172.17.0.1 - - [12/Oct/2018:22:38:05 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
2018/10/12 22:38:15 [error] 8#8: *2 open() "/usr/share/nginx/html/a.txt" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /a.txt HTTP/1.1", host: "localhost:32768"
172.17.0.1 - - [12/Oct/2018:22:38:15 +0000] "GET /a.txt HTTP/1.1" 404 153 "-" "curl/7.54.0" "-"

If we want to override the source of the access-log events and keep the predefined source for all other logs:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    collectord.io/logs-override.1-match: ^(\d{1,3}\.){3}\d{1,3}
    collectord.io/logs-override.1-source: /openshift/nginx/web-log
spec:
  containers:
  - name: nginx
    image: nginx

The collector will override the source for the matched events, so with our example you will end up with events similar to:

source                    | event
------------------------------------------------------------------------------------------------------------------------
/openshift/nginx/web-log  | 172.17.0.1 - - [12/Oct/2018:22:38:05 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
/openshift/550...stderr   | 2018/10/12 22:38:15 [error] 8#8: *2 open() "/usr/share/nginx/html/a.txt" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /a.txt HTTP/1.1", host: "localhost:32768"
/openshift/nginx/web-log  | 172.17.0.1 - - [12/Oct/2018:22:38:15 +0000] "GET /a.txt HTTP/1.1" 404 153 "-" "curl/7.54.0" "-"

Replace patterns in events

You can define replace patterns with annotations. This allows you to hide sensitive information, or drop unimportant information, from the messages.

Replace patterns for container logs are configured with a pair of annotations grouped by the same number, such as collectord.io/logs-replace.1-search and collectord.io/logs-replace.1-val. The first specifies the search pattern as a regular expression, the second the replacement. In replacement patterns you can use placeholders for matches, like $1, or ${name} for named groups.

We use the Go regular expression library for the replace pipes. You can find more information about the syntax in the regexp package documentation and the re2 syntax reference. We recommend https://regex101.com for testing your patterns (set the flavor to golang).

Using nginx as an example, our logs follow the default pattern:

172.17.0.1 - - [31/Aug/2018:21:11:26 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
172.17.0.1 - - [31/Aug/2018:21:11:32 +0000] "POST / HTTP/1.1" 405 173 "-" "curl/7.54.0" "-"
172.17.0.1 - - [31/Aug/2018:21:11:35 +0000] "GET /404 HTTP/1.1" 404 612 "-" "curl/7.54.0" "-"

Example 1. Replacing IPv4 addresses with X.X.X.X

To hide IP addresses in the logs by replacing all IPv4 addresses with X.X.X.X:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    collectord.io/logs-replace.1-search: (\d{1,3}\.){3}\d{1,3}
    collectord.io/logs-replace.1-val: X.X.X.X
spec:
  containers:
  - name: nginx
    image: nginx

With this replace pattern, the events in Splunk will be:

X.X.X.X - - [31/Aug/2018:21:11:26 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
X.X.X.X - - [31/Aug/2018:21:11:32 +0000] "POST / HTTP/1.1" 405 173 "-" "curl/7.54.0" "-"
X.X.X.X - - [31/Aug/2018:21:11:35 +0000] "GET /404 HTTP/1.1" 404 612 "-" "curl/7.54.0" "-"

You can also keep the first octet of the IPv4 address:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    collectord.io/logs-replace.1-search: (?P<IPv4p1>\d{1,3})(\.\d{1,3}){3}
    collectord.io/logs-replace.1-val: ${IPv4p1}.X.X.X
spec:
  containers:
  - name: nginx
    image: nginx

That results in

172.X.X.X - - [31/Aug/2018:21:11:26 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
172.X.X.X - - [31/Aug/2018:21:11:32 +0000] "POST / HTTP/1.1" 405 173 "-" "curl/7.54.0" "-"
172.X.X.X - - [31/Aug/2018:21:11:35 +0000] "GET /404 HTTP/1.1" 404 612 "-" "curl/7.54.0" "-"

Example 2. Dropping messages

With replace patterns you can drop messages that you don't want to see in Splunk. In the example below we drop all log messages resulting from GET requests with a 200 response:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    collectord.io/logs-replace.1-search: '^.+\"GET [^\s]+ HTTP/[^"]+" 200 .+$'
    collectord.io/logs-replace.1-val: ''
    collectord.io/logs-replace.2-search: '(\d{1,3}\.){3}\d{1,3}'
    collectord.io/logs-replace.2-val: 'X.X.X.X'
spec:
  containers:
  - name: nginx
    image: nginx

In this example we have two replace pipes. They are applied in alphabetical order (replace.1 comes first, before replace.2).

X.X.X.X - - [31/Aug/2018:21:11:32 +0000] "POST / HTTP/1.1" 405 173 "-" "curl/7.54.0" "-"
X.X.X.X - - [31/Aug/2018:21:11:35 +0000] "GET /404 HTTP/1.1" 404 612 "-" "curl/7.54.0" "-"

Escaping terminal sequences, including terminal colors

Some applications do not turn off terminal colors automatically when they run inside a container. For example, if you run a container with an attached tty and request colored output:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-shell
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    tty: true
    command: [/bin/sh, -c,
             'while true; do ls --color=auto /; sleep 5; done;']

You will find messages similar to the following in Splunk:

[01;34mboot  etc  lib   media  opt  root  sbin  sys  usr
[0mbin   dev  home  lib64  mnt  proc  run   srv  tmp  var

You can easily strip them with the annotation collectord.io/logs-escapeterminalsequences='true':

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-shell
  annotations:
    collectord.io/logs-escapeterminalsequences: 'true'
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    tty: true
    command: [/bin/sh, -c,
             'while true; do ls --color=auto /; sleep 5; done;']

This way you will see the logs in Splunk as you would expect:

bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr

In the collector configuration file, [input.files]/stripTerminalEscapeSequencesRegex defines the default regexp used for removing terminal escape sequences, and [input.files]/stripTerminalEscapeSequences defines whether the collector should strip terminal escape sequences by default (defaults to false).

Extracting fields from the container logs

You can use fields extraction to extract timestamps from the messages, and to extract fields that Splunk will index to speed up search.

Using the same example with nginx we can define fields extraction for some of the fields.

172.17.0.1 - - [31/Aug/2018:21:11:26 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
172.17.0.1 - - [31/Aug/2018:21:11:32 +0000] "POST / HTTP/1.1" 405 173 "-" "curl/7.54.0" "-"
172.17.0.1 - - [31/Aug/2018:21:11:35 +0000] "GET /404 HTTP/1.1" 404 612 "-" "curl/7.54.0" "-"

Important note: the first unnamed capture group is used as the message of the event.

Example 1. Extracting the timestamp

Assuming we want to keep the whole message as is and extract just the timestamp, we can define the extraction pattern with a regexp, specify that the timestampfield is timestamp, and define the timestampformat.

We use the Go time parsing library, which defines formats using the reference date Mon Jan 2 15:04:05 MST 2006. See the Go documentation for details.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    collectord.io/logs-extraction: '^(.*\[(?P<timestamp>[^\]]+)\].+)$'
    collectord.io/logs-timestampfield: timestamp
    collectord.io/logs-timestampformat: '02/Jan/2006:15:04:05 -0700'
spec:
  containers:
  - name: nginx
    image: nginx

This way you will get messages in Splunk with the exact timestamp specified in your container logs.

Example 2. Extracting the fields

Sometimes you want to extract some fields and keep the message shorter; for example, once you have extracted the timestamp, there is no need to keep it in the raw message. In the example below we extract the ip_address field and the timestamp, and keep the rest as the raw message.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    collectord.io/logs-extraction: '^(?P<ip_address>[^\s]+) .* \[(?P<timestamp>[^\]]+)\] (.+)$'
    collectord.io/logs-timestampfield: timestamp
    collectord.io/logs-timestampformat: '02/Jan/2006:15:04:05 -0700'
spec:
  containers:
  - name: nginx
    image: nginx

That results in messages

ip_address | _time               | _raw
-----------|---------------------|-------------------------------------------------
172.17.0.1 | 2018-08-31 21:11:26 | "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"
172.17.0.1 | 2018-08-31 21:11:32 | "POST / HTTP/1.1" 405 173 "-" "curl/7.54.0" "-"
172.17.0.1 | 2018-08-31 21:11:35 | "GET /404 HTTP/1.1" 404 612 "-" "curl/7.54.0" "-"

Defining Event pattern

With the annotation collectord.io/logs-eventpattern you can define how the collector identifies new events in the pipe. The default event pattern is defined in the collector configuration as ^[^\s] (anything that does not start with a whitespace character).

The default pattern works in most cases, but fails in some, such as Java exceptions, where lines that belong to the same event (for example, the exception message under a WARN line) do not start with a whitespace character.

In the example below we intentionally made a mistake in the ElasticSearch configuration (s-node should be single-node) to produce an error message:

apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch-pod
spec:
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    env:
    - name: discovery.type
      value: s-node

Results in

[2018-08-31T22:44:56,433][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/92] [Main.cc@109] controller (64 bit): Version 6.4.0 (Build cf8246175efff5) Copyright (c) 2018 Elasticsearch BV
[2018-08-31T22:44:56,886][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: Unknown discovery type [s-node]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.4.0.jar:6.4.0]
    at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.4.0.jar:6.4.0]
Caused by: java.lang.IllegalArgumentException: Unknown discovery type [s-node]
    at org.elasticsearch.discovery.DiscoveryModule.<init>(DiscoveryModule.java:129) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.node.Node.<init>(Node.java:477) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.node.Node.<init>(Node.java:256) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:326) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-6.4.0.jar:6.4.0]
    ... 6 more
[2018-08-31T22:44:56,892][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

With the default pattern, the warning line [2018-08-31T22:44:56,886][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main] will not include the whole call stack.

We can define that every log event in this container starts with the [ character, using the regular expression:

apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch-pod
  annotations:
    collectord.io/logs-eventpattern: '^\['
spec:
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    env:
    - name: discovery.type
      value: s-node

Application Logs

Sometimes it is hard, or just not practical, to redirect all logs from a container to its stdout and stderr. In those cases you keep the logs inside the container; we call them application logs. With the collector you can easily pick up these logs and forward them to Splunk. No additional sidecars or processes are required inside your container.

Let's take a look at the example below. We have a postgresql container that writes most of its logs to the path /var/log/postgresql inside the container. We define for this container a volume (emptyDir driver) named logs and mount it at /var/log/postgresql/. With the annotation collectord.io/volume.1-logs-name=logs we tell the collector to pick up all logs matching the default glob pattern *.log* in that volume (the default glob pattern is set in the collector configuration, and you can override it with the annotation collectord.io/volume.{N}-logs-glob) and forward them automatically to Splunk.

When you need to forward logs from multiple volumes of the same container, group the settings by number: all collectord.io/volume.1-* annotations apply to the first annotated volume, and all collectord.io/volume.2-* annotations apply to the second.
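As a sketch (the volume names, glob patterns, and mount paths are illustrative), a pod with two annotated volumes could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  annotations:
    # settings numbered .1- apply to the first volume
    collectord.io/volume.1-logs-name: app-logs
    collectord.io/volume.1-logs-glob: '*.log'
    # settings numbered .2- apply to the second volume
    collectord.io/volume.2-logs-name: audit-logs
    collectord.io/volume.2-logs-glob: 'audit-*.log'
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
    - name: audit-logs
      mountPath: /var/log/audit
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: audit-logs
    emptyDir: {}
```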

Example 1. Forwarding application logs

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  annotations:
    collectord.io/volume.1-logs-name: 'logs'
spec:
  containers:
  - name: postgres
    image: postgres
    command:
      - docker-entrypoint.sh
    args:
      - postgres
      - -c
      - logging_collector=on
      - -c
      - log_min_duration_statement=0
      - -c
      - log_directory=/var/log/postgresql
      - -c
      - log_min_messages=INFO
      - -c
      - log_rotation_age=1d
      - -c
      - log_rotation_size=10MB
    volumeMounts:
      - name: data
        mountPath: /var/lib/postgresql/data
      - name: logs
        mountPath: /var/log/postgresql/
  volumes:
  - name: data
    emptyDir: {}
  - name: logs
    emptyDir: {}

In the example above, the logs from the container will have a source similar to logs:postgresql-2018-08-31_232946.log.

2018-08-31 23:31:02.034 UTC [133] LOG:  duration: 0.908 ms  statement: SELECT n.nspname as "Schema",
      c.relname as "Name",
      CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' WHEN 'p' THEN 'table' END as "Type",
      pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
    FROM pg_catalog.pg_class c
         LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind IN ('r','p','')
          AND n.nspname <> 'pg_catalog'
          AND n.nspname <> 'information_schema'
          AND n.nspname !~ '^pg_toast'
      AND pg_catalog.pg_table_is_visible(c.oid)
    ORDER BY 1,2;
2018-08-31 23:30:53.490 UTC [124] FATAL:  role "postgresql" does not exist

Example 2. Forwarding application logs with fields extraction and time parsing

With the annotations for application logs you can define fields extraction and replace patterns, and override the indexes, sources, and hosts.

As an example, with the extraction pattern and timestamp parsing you can do

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  annotations:
    collectord.io/volume.1-logs-name: 'logs'
    collectord.io/volume.1-logs-extraction: '^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} [^\s]+) (.+)$'
    collectord.io/volume.1-logs-timestampfield: 'timestamp'
    collectord.io/volume.1-logs-timestampformat: '2006-01-02 15:04:05.000 MST'
spec:
  containers:
  - name: postgres
    image: postgres
    command:
      - docker-entrypoint.sh
    args:
      - postgres
      - -c
      - logging_collector=on
      - -c
      - log_min_duration_statement=0
      - -c
      - log_directory=/var/log/postgresql
      - -c
      - log_min_messages=INFO
      - -c
      - log_rotation_age=1d
      - -c
      - log_rotation_size=10MB
    volumeMounts:
      - name: data
        mountPath: /var/lib/postgresql/data
      - name: logs
        mountPath: /var/log/postgresql/
  volumes:
  - name: data
    emptyDir: {}
  - name: logs
    emptyDir: {}

This way you extract the timestamps and remove them from the _raw message:

_time               | _raw
2018-08-31 23:31:02 | [133] LOG:  duration: 0.908 ms  statement: SELECT n.nspname as "Schema",
                    |     c.relname as "Name",
                    |     CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' WHEN 'p' THEN 'table' END as "Type",
                    |     pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
                    |   FROM pg_catalog.pg_class c
                    |        LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
                    |   WHERE c.relkind IN ('r','p','')
                    |         AND n.nspname <> 'pg_catalog'
                    |         AND n.nspname <> 'information_schema'
                    |         AND n.nspname !~ '^pg_toast'
                    |     AND pg_catalog.pg_table_is_visible(c.oid)
                    |   ORDER BY 1,2;
2018-08-31 23:30:53 |  UTC [124] FATAL:  role "postgresql" does not exist

Volume types

The collector supports two volume types for application logs: emptyDir and hostPath. The collector configuration has two settings that help it autodiscover application logs: [general.kubernetes]/volumesRootDir for discovering volumes created with emptyDir, and [input.app_logs]/root for discovering host mounts, considering that they will be mounted at a different path inside the collector container.
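For reference, a minimal sketch of an application-log volume backed by hostPath instead of emptyDir (the pod name and paths are illustrative; the host directory must be discoverable under the path configured in [input.app_logs]/root):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  annotations:
    collectord.io/volume.1-logs-name: logs
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    hostPath:
      path: /var/log/app-pod   # illustrative host directory
```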

Forwarding Prometheus metrics

Available since Monitoring OpenShift 5.1.

You can define one or more ports that the collector should use to collect metrics in Prometheus format and forward them to Splunk. The collector addon, which we deploy as part of collectorforopenshift.yaml, has a stanza [input.prometheus_auto] that defines pod metrics autodiscovery. The stanza determines the default interval, type, and timeout for the HTTP client.

The addon watches for new pods in the cluster, and when it sees annotations that define a Prometheus endpoint, it sets up the collection right away. At a minimum, you need to configure a port, for example collectord.io/prometheus.1-port=9888.

For example, we use the sophos/nginx-prometheus-metrics image, which has a built-in nginx plugin that exports metrics in Prometheus format. We define the port the addon needs to use to collect this data, and the path:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: MyApp
  annotations:
    collectord.io/prometheus.1-port: '9527'
    collectord.io/prometheus.1-path: '/metrics'
spec:
  containers:
  - name: nginx
    image: sophos/nginx-prometheus-metrics

For details about metrics format, please read our documentation on Prometheus metrics.

See the reference below for the other annotations available for Prometheus metrics.

Change output destination

By default, the collector forwards all data to Splunk. You can configure containers to redirect data to devnull instead with the annotation collectord.io/output=devnull. This annotation changes the output for all data related to the container. Alternatively, you can change the output only for logs with the annotation collectord.io/logs-output=devnull.
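For example (the pod name here is illustrative), to drop all data from a noisy pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: noisy-pod
  annotations:
    # forward nothing from this pod: logs and stats go to devnull
    collectord.io/output: devnull
spec:
  containers:
  - name: app
    image: nginx
```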

By changing the default output for specific data, you can invert how you forward data to Splunk. Instead of forwarding all logs by default, configure the collector with --env "COLLECTOR__LOGS_OUTPUT=input.files__output=devnull" so that container logs are not forwarded by default, and then mark the containers whose logs you want to see in Splunk with collectord.io/logs-output=splunk.

For example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: MyApp
  annotations:
    collectord.io/logs-output: 'splunk'
spec:
  containers:
  - name: nginx
    image: nginx

Troubleshooting

Check the collector logs for warning messages about annotations. You can find out if you made a typo in an annotation if you see warnings like:

WARN 2018/08/31 21:05:33.122978 core/input/annotations.go:76: invalid annotation ...

Some pipes, like the fields extraction and time parsing pipes, add an error to the field collector_error, so you can identify events that failed to be processed by the pipe.

Reference

  • General annotations
    • collectord.io/index - change the index for all the data forwarded for this Pod (metrics, container logs, application logs)
    • collectord.io/source - change the source for all the data forwarded for this Pod (metrics, container logs, application logs)
    • collectord.io/type - change the sourcetype for all the data forwarded for this Pod (metrics, container logs, application logs)
    • collectord.io/host - change the host for all the data forwarded for this Pod (metrics, container logs, application logs)
    • collectord.io/output - (5.2+) change the output to devnull or splunk
  • Annotations for container logs
    • collectord.io/logs-index - change the index for the container logs forwarded from this Pod
    • collectord.io/logs-source - change the source for the container logs forwarded from this Pod
    • collectord.io/logs-type - change the sourcetype for the container logs forwarded from this Pod
    • collectord.io/logs-host - change the host for the container logs forwarded from this Pod
    • collectord.io/logs-eventpattern - set the regex identifying the event start pattern for Pod logs
    • collectord.io/logs-replace.{N}-search - define the search pattern for the replace pipe
    • collectord.io/logs-replace.{N}-val - define the replace pattern for the replace pipe
    • collectord.io/logs-extraction - define the regexp for fields extraction
    • collectord.io/logs-timestampfield - define the field for timestamp (after fields extraction)
    • collectord.io/logs-timestampformat - define the timestamp format
    • collectord.io/logs-timestampsetmonth - define if month should be set to current for timestamp
    • collectord.io/logs-timestampsetday - define if day should be set to current for timestamp
    • collectord.io/logs-timestamplocation - define timestamp location if not set by format
    • collectord.io/logs-joinpartial - join partial events
    • collectord.io/logs-escapeterminalsequences - escape terminal sequences (including colors)
    • collectord.io/logs-override.{N}-match - (5.2+) match for override pattern
    • collectord.io/logs-override.{N}-index - (5.2+) override index for matched events
    • collectord.io/logs-override.{N}-source - (5.2+) override source for matched events
    • collectord.io/logs-override.{N}-type - (5.2+) override type for matched events
    • collectord.io/logs-output - (5.2+) change the output to devnull or splunk (this annotation cannot be specified for stderr and stdout)
    • Specific to stdout: with the annotations below you can define configuration that applies only to stdout
      • collectord.io/stdout-logs-index
      • collectord.io/stdout-logs-source
      • collectord.io/stdout-logs-type
      • collectord.io/stdout-logs-host
      • collectord.io/stdout-logs-eventpattern
      • collectord.io/stdout-logs-replace.{N}-search
      • collectord.io/stdout-logs-replace.{N}-val
      • collectord.io/stdout-logs-extraction
      • collectord.io/stdout-logs-timestampfield
      • collectord.io/stdout-logs-timestampformat
      • collectord.io/stdout-logs-timestampsetmonth
      • collectord.io/stdout-logs-timestampsetday
      • collectord.io/stdout-logs-timestamplocation
      • collectord.io/stdout-logs-joinpartial
      • collectord.io/stdout-logs-escapeterminalsequences
      • collectord.io/stdout-logs-override.{N}-match - (5.2+)
      • collectord.io/stdout-logs-override.{N}-index - (5.2+)
      • collectord.io/stdout-logs-override.{N}-source - (5.2+)
      • collectord.io/stdout-logs-override.{N}-type - (5.2+)
    • Specific to stderr: with the annotations below you can define configuration that applies only to stderr
      • collectord.io/stderr-logs-index
      • collectord.io/stderr-logs-source
      • collectord.io/stderr-logs-type
      • collectord.io/stderr-logs-host
      • collectord.io/stderr-logs-eventpattern
      • collectord.io/stderr-logs-replace.{N}-search
      • collectord.io/stderr-logs-replace.{N}-val
      • collectord.io/stderr-logs-extraction
      • collectord.io/stderr-logs-timestampfield
      • collectord.io/stderr-logs-timestampformat
      • collectord.io/stderr-logs-timestampsetmonth
      • collectord.io/stderr-logs-timestampsetday
      • collectord.io/stderr-logs-timestamplocation
      • collectord.io/stderr-logs-joinpartial
      • collectord.io/stderr-logs-escapeterminalsequences
      • collectord.io/stderr-logs-override.{N}-match - (5.2+)
      • collectord.io/stderr-logs-override.{N}-index - (5.2+)
      • collectord.io/stderr-logs-override.{N}-source - (5.2+)
      • collectord.io/stderr-logs-override.{N}-type - (5.2+)
  • Annotations for container stats
    • collectord.io/stats-index - change the index for the container metrics forwarded from this Pod
    • collectord.io/stats-source - change the source for the container metrics forwarded from this Pod
    • collectord.io/stats-type - change the sourcetype for the container metrics forwarded from this Pod
    • collectord.io/stats-host - change the host for the container metrics forwarded from this Pod
    • collectord.io/stats-output - (5.2+) change the output to devnull or splunk
  • Annotations for container processes stats
    • collectord.io/procstats-index - change the index for the container process metrics forwarded from this Pod
    • collectord.io/procstats-source - change the source for the container process metrics forwarded from this Pod
    • collectord.io/procstats-type - change the type for the container process metrics forwarded from this Pod
    • collectord.io/procstats-host - change the host for the container process metrics forwarded from this Pod
    • collectord.io/procstats-output - (5.2+) change the output to devnull or splunk
  • Annotations for container network stats
    • collectord.io/netstats-index - change the index for the container network metrics forwarded from this Pod
    • collectord.io/netstats-source - change the source for the container network metrics forwarded from this Pod
    • collectord.io/netstats-type - change the type for the container network metrics forwarded from this Pod
    • collectord.io/netstats-host - change the host for the container network metrics forwarded from this Pod
    • collectord.io/netstats-output - (5.2+) change the output to devnull or splunk
  • Annotations for container network socket table
    • collectord.io/nettable-index - change the index for the container network socket table forwarded from this Pod
    • collectord.io/nettable-source - change the source for the container network socket table forwarded from this Pod
    • collectord.io/nettable-type - change the type for the container network socket table forwarded from this Pod
    • collectord.io/nettable-host - change the host for the container network socket table forwarded from this Pod
    • collectord.io/nettable-output - (5.2+) change the output to devnull or splunk
  • Annotations for events (can be applied only to namespaces)
    • collectord.io/events-index - change the index for the events of specific namespace
    • collectord.io/events-source - change the source for the events of specific namespace
    • collectord.io/events-type - change the source type for the events of specific namespace
    • collectord.io/events-host - change the host for the events of specific namespace
  • Annotations for application logs
    • collectord.io/volume.{N}-logs-name - name of the volume attached to Pod
    • collectord.io/volume.{N}-logs-index - target index for logs forwarded from the volume
    • collectord.io/volume.{N}-logs-source - change the source for logs forwarded from the volume
    • collectord.io/volume.{N}-logs-type - change the type for logs forwarded from the volume
    • collectord.io/volume.{N}-logs-host - change the host for logs forwarded from the volume
    • collectord.io/volume.{N}-logs-eventpattern - change the event pattern defining new event for logs forwarded from the volume
    • collectord.io/volume.{N}-logs-replace.{N}-search - specify the regex search for replace pipe for the logs
    • collectord.io/volume.{N}-logs-replace.{N}-val - specify the regex replace pattern for replace pipe for the logs
    • collectord.io/volume.{N}-logs-extraction - specify the fields extraction regex for the logs
    • collectord.io/volume.{N}-logs-timestampfield - specify the timestamp field
    • collectord.io/volume.{N}-logs-timestampformat - specify the format for timestamp field
    • collectord.io/volume.{N}-logs-timestampsetmonth - define if month should be set to current for timestamp
    • collectord.io/volume.{N}-logs-timestampsetday - define if day should be set to current for timestamp
    • collectord.io/volume.{N}-logs-timestamplocation - define timestamp location if not set by format
    • collectord.io/volume.{N}-logs-glob - set the glob pattern for matching logs
    • collectord.io/volume.{N}-logs-match - set the regexp pattern for matching logs
    • collectord.io/volume.{N}-logs-recursive - set if walker should walk the directory recursive
    • collectord.io/volume.{N}-logs-override.{N}-match - (5.2+) match for override pattern
    • collectord.io/volume.{N}-logs-override.{N}-index - (5.2+) override index for matched events
    • collectord.io/volume.{N}-logs-override.{N}-source - (5.2+) override source for matched events
    • collectord.io/volume.{N}-logs-override.{N}-type - (5.2+) override type for matched events
  • Annotations for forwarding metrics in Prometheus format
    • collectord.io/prometheus.{N}-port - (required) specify the port that needs to be used to collect metrics
    • collectord.io/prometheus.{N}-index - specify target index for the metrics
    • collectord.io/prometheus.{N}-source - specify target source for the metrics (default is /openshift/{openshift_pod_id})
    • collectord.io/prometheus.{N}-type - specify the type for the metrics (default is set in [input.prometheus_auto] of collector configuration)
    • collectord.io/prometheus.{N}-host - specify the host for the metrics (by default node name is used)
    • collectord.io/prometheus.{N}-path - specify the path in url for the metrics
    • collectord.io/prometheus.{N}-interval - how often to collect metrics (default is set in [input.prometheus_auto] of collector configuration)
    • collectord.io/prometheus.{N}-includehelp - include help for metrics (default is false)
    • collectord.io/prometheus.{N}-insecure - insecure if scheme is https (default is false)
    • collectord.io/prometheus.{N}-scheme - scheme (http or https, default is http)
    • collectord.io/prometheus.{N}-username - (5.2+) basic auth username
    • collectord.io/prometheus.{N}-password - (5.2+) basic auth password

About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift, and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications, which give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use, easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.