FAQ and common questions
I do not see IO metrics on the host dashboards
We use metrics from the blkio cgroup controller, which is not always enabled. As a workaround, you can use the sum of the proc metrics, which we also collect for processes.
I received a license. How do I apply it?
The collector reads the license key from its configuration file.
If you are using the collector for OpenShift or Kubernetes, you can set the license in the ConfigMap included in our configuration.
Find the line starting with license = and add your license key as the value. The license key should not contain any spaces.
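For illustration, the relevant fragment of the configuration could look like the following (an INI-style sketch; the section name and placeholder are assumptions, so check the actual ConfigMap in your deployment manifest):

```
[general]
# Paste the key exactly as received, with no spaces or line breaks
license = <your license key>
```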
After setting the license, you need to restart the collectors. Modifying only a ConfigMap does not trigger a restart of the Pods.
You can simply delete all the running pods in our namespace, and the scheduler will create new pods.
In the case of OpenShift:
oc delete --namespace collectorforopenshift pods --all
In the case of Kubernetes:
kubectl delete --namespace collectorforkubernetes pods --all
If you are using version 3 of our solution for Monitoring OpenShift and Kubernetes, we deploy the collector in the
default namespace. In that case, you will need to manually delete all pods, and they will be rescheduled with the
updated configuration.
For the collector for Docker, you can modify the collector.yaml
configuration file if you already maintain your own copy, or you
can set the license key with an environment variable.
Can we use your applications with Splunk versions below 6.5?
Our collector sends data to the HTTP Event Collector and relies on indexed field extractions, a feature introduced in Splunk 6.5. One workaround is to deploy a fleet of Heavy Forwarders running the latest version of Splunk with the HTTP Event Collector enabled, and configure them to forward data to your indexers. For details see
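As a sketch of the forwarding half of this workaround, the outputs.conf on each heavy forwarder could look like the following (the group name and indexer addresses are placeholders for your environment):

```
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# Replace with the host:port pairs of your receiving indexers
server = indexer1.example.com:9997, indexer2.example.com:9997
```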
How much data does your application generate?
Because we use indexed field extractions in the HTTP Event Collector, in our tests we saw less than a 5% increase in Splunk licensing cost for logs.
You can also run a search in Splunk to see the licensing usage for every source type:
index=_internal source=*metrics.log | eval MB=kb/1024 | search group="per_sourcetype_thruput" | timechart span=1h sum(MB) by series
In our tests on Kubernetes nodes with 20-30 containers, we saw around 200MB of indexed data per node per day.
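To turn this per-node figure into a rough cluster-wide estimate, you can multiply it out as below (a sketch only: the node count is hypothetical, and the 200MB/day baseline comes from our tests with 20-30 containers per node, so your workloads may differ):

```python
def estimated_daily_indexing_mb(nodes, mb_per_node_per_day=200):
    """Rough estimate of indexed log data per day for a cluster.

    The default of 200 MB/node/day is based on our tests with
    20-30 containers per node; adjust it for your own workloads.
    """
    return nodes * mb_per_node_per_day

# Example: a hypothetical 10-node cluster
print(estimated_daily_indexing_mb(10))  # 2000 MB/day, i.e. about 2 GB
```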