Outcold Solutions LLC

Monitoring Docker, OpenShift and Kubernetes - Version 5.9 - Support for multiple Splunk Clusters, streaming API Objects

May 15, 2019

With this release we improved the capabilities for streaming data to multiple Splunk Clusters, added support for deploying multiple Collectord instances on the same node (in case you need to stream the same data to multiple clusters), and added a new capability to stream objects and changes from the API Server.

This release also includes a fix for the journald input. We found that in previous versions Collectord could hold file descriptors of rotated journald files. If you are using the journald input (enabled by default), please upgrade.

Streaming API Objects

Starting with version 5.9 you can stream all changes from the Kubernetes and Docker API server to Splunk. That is useful if you want to monitor all changes to Workloads or ConfigMaps in Splunk, or if you want to recreate the Kubernetes Dashboard experience in Splunk. With the default configuration we do not forward any objects from the API Server except events.

Please follow the updated documentation to set up streaming of Kubernetes and Docker API Objects to Splunk.


Support for multiple Splunk Clusters or Splunk Tokens

In case you want to use multiple HTTP Event Collector Tokens, or forward data from some namespaces to different Splunk Clusters, you can define more than one Splunk Output in the configuration.

In the default ConfigMap (or the configuration for Docker) we include only the default Splunk output under the stanza [output.splunk]. You can define additional outputs and give them names, for example an output named prod1:

[output.splunk::prod1]
url = https://prod1.hec.example.com:8088/services/collector/event/1.0
token = AF420832-F61B-480F-86B3-CCB5D37F7D0D

See the documentation for details on how to configure outputs.

Using annotations you can override the default Splunk output and define the Splunk Cluster to which you want to redirect the data from a namespace, pod, or container. In the example below, we forward all data from a specific namespace to the Splunk output prod1.

apiVersion: v1
kind: Namespace
metadata:
  name: prod1-namespace
  annotations:
    collectord.io/output: 'splunk::prod1'
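The same output override can be applied to an individual pod. A sketch, assuming a pod named example-pod with a placeholder image; only the collectord.io/output annotation comes from this release:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # placeholder name
  annotations:
    # forward all data from this pod to the Splunk output named prod1
    collectord.io/output: 'splunk::prod1'
spec:
  containers:
    - name: app                  # placeholder container
      image: example/app:1.0     # placeholder image
```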

Improved support for multiple Collectord deployments

If you need to stream the same data to multiple Splunk deployments, you can easily deploy more than one Collectord instance on the same node. Some configuration changes are required to ensure that the deployments do not conflict with each other, primarily regarding the location of the database that stores acknowledgement data.

Before 5.9, annotations were applied to all Collectord deployments. Starting with version 5.9 you can define a subdomain for annotations under [general] with the key annotationsSubdomain, for example

[general]
annotationsSubdomain = prod1

After that, for this specific deployment you can use annotations like prod1.collectord.io/index=foo.
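Applied to a pod, the subdomain-scoped annotation looks like the sketch below, assuming a Collectord deployment configured with annotationsSubdomain = prod1; the pod name, image, and index value foo are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                  # placeholder name
  annotations:
    # only the Collectord deployment configured with
    # annotationsSubdomain = prod1 applies this annotation
    prod1.collectord.io/index: 'foo'
spec:
  containers:
    - name: app                      # placeholder container
      image: example/app:1.0         # placeholder image
```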

You can find more information about other minor updates by following the links below.

Release notes

Upgrade instructions

Installation instructions

docker, kubernetes, openshift, splunk

About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications that give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use and easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to keep all your metrics and logs in one place, allowing you to quickly address complex questions about container performance.