Outcold Solutions LLC

Monitoring Docker, OpenShift and Kubernetes - Version 5.6

February 19, 2019

Version 5.6 brings dark theme support, a refreshed Logs dashboard (free-text search and more control), dashboard auto-refresh, and bug fixes. Collectord 5.6 adds support for log sampling (random-based and hash-based), along with small improvements and bug fixes.

Auto-refresh

Every dashboard now has an option that lets you specify how often the dashboard should refresh. This is useful when you keep a dashboard open for a long time to monitor the performance of your applications.

Auto-refresh

Dark Theme

Every dashboard has a link that allows you to switch to the dark theme.

There is a limitation in Splunk that does not allow us to persist this preference, so if you navigate using the navigation bar at the top, the theme switches back to light.

Combined with the auto-refresh configuration, this lets you keep the dashboards open on a large screen.

Dark theme

Updated Logs dashboard

We have updated the Logs dashboard: you can now add a free-text search and specify a limit for the number of log lines you want to see. We also added a visualization that helps you quickly identify the time range of the log messages.

Logs

Log sampling

Example 1. Random-based sampling

When an application produces a high volume of logs, it can be enough to look at a sampled portion of them to understand how many requests fail or how the application behaves. You can add an annotation to specify the percentage of log lines that should be forwarded to Splunk.

In the following example the application produces 300,000 log lines; with the sampling percent set to 20, only about 60,000 of them will be forwarded to Splunk.

docker run -d --rm \
    --label 'collectord.io/logs-sampling-percent=20' \
    docker.io/mffiedler/ocp-logtest:latest \
    python ocp_logtest.py --line-length=1024 --num-lines=300000 --rate 60000 --fixed-line
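
If you run Collectord on Kubernetes or OpenShift, the same setting is applied with pod annotations; the sketch below assumes the annotation uses the same name as the Docker label above (check the Collectord annotations documentation for your version), and the pod name is a placeholder.

# apply the sampling annotation to a running pod (pod name is a placeholder)
kubectl annotate pod ocp-logtest collectord.io/logs-sampling-percent=20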

Example 2. Hash-based sampling

In situations where you want to look at the pattern for a specific user, you can sample logs based on a hash of a key value, which guarantees that log lines with the same key are sampled together: if one of them is forwarded to Splunk, all of them are.
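
Conceptually (this is only an illustration, not Collectord's actual implementation), hash-based sampling hashes the extracted key and keeps a line only when the hash falls into the sampled portion of the hash space, so the decision is deterministic per key. A minimal shell sketch over a hypothetical access.log:

# illustration only: keep a line when the hash of its key (here, the first field)
# falls into the first 20% of the hash space; the same key always gets the same decision
while read -r line; do
    key=${line%% *}
    h=$(printf '%s' "$key" | cksum | cut -d ' ' -f 1)
    [ $((h % 100)) -lt 20 ] && echo "$line"
done < access.log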

In the following example we define the key (it has to be a named submatch pattern) as an IP address at the beginning of the log line.

docker run -d --rm \
    --label 'collectord.io/logs-sampling-percent=20' \
    --label 'collectord.io/logs-sampling-key=^(?P<key>(\d+\.){3}\d+)' \
    nginx

To test it, we can run 500 containers with different IP addresses (make sure to replace 172.17.0.7 below with the IP address of the nginx container we started above).
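
One way to look up that IP address (replace the placeholder with the ID or name of the nginx container):

# print the IP address assigned to the nginx container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <nginx-container>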

Make sure that you have enough capacity to run 500 additional containers.

seq 500 | xargs -L 1 -P 10 docker run -d -it alpine:3.8 sh -c 'apk add bash curl 1>/dev/null 2>&1 && bash -c "(while true; do curl --silent 172.17.0.7:80; sleep 5; done;) 1>/dev/null 2>&1"'

In this example, only data from 69 containers was forwarded to Splunk (with a much higher number of distinct key values the share gets close to 20%), and each forwarded IP address has more than one event in Splunk, confirming that lines with the same key are sampled together.

Forward annotations

You can configure collectorforopenshift and collectorforkubernetes to include annotations, similarly to how we attach labels. Labels in Kubernetes usually carry data that identifies pods or workloads; annotations mostly carry data intended for other tools and Kubernetes components, but sometimes they contain valuable information that you might want to forward to Splunk as well.

As an example, if you want to forward the openshift.io/display-name annotation for OpenShift Projects, you can add the following configuration to the ConfigMap:

[general.kubernetes]

includeAnnotations.displayName = openshift\.io\/display-name

After that, you can find these annotations attached to the logs and stats.

Attached Annotations
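
For reference, OpenShift sets the openshift.io/display-name annotation when a project is created with a display name; the project name below is just an example.

# create a project whose display name is stored in the openshift.io/display-name annotation
oc new-project logging-demo --display-name='Logging Demo'
# the annotation appears under metadata.annotations of the backing namespace
oc get namespace logging-demo -o yaml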

Change in licensing

We have changed the way we generate the unique InstanceID for every Collectord instance, to take into account that you might need to run multiple Collectord instances per host/node in order to send data to different Splunk clusters. Now every Collectord instance on a given host gets the same host-unique InstanceID, which allows us to count them as one host.

You can find more information about other minor updates by following the links below.

Release notes

Upgrade instructions

Installation instructions


About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications that give you insights across all container environments. We help businesses reduce the complexity related to logging and monitoring by providing easy-to-use and easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to help you keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.