Splunk Indexes

By default, collectorforkubernetes forwards all events to the default index of the HTTP Event Collector token. Every HTTP Event Collector token has a list of indexes it is allowed to write to, and one index from this list is used as the default when the sender does not specify a target index. The application assumes that you write data to indexes that are searchable by default for your Splunk role. The main index, for example, is searchable by default.

If you use an index that is not searchable by default for your Splunk role, you will not see data on the dashboards.
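
To confirm that the data is arriving at all, you can search the index explicitly (the index name below is an example; use the one configured for your HTTP Event Collector token):

index=kubernetes_logs | head 100

If events show up here but not on the dashboards, the index is simply not searched by default.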

To fix this, you can include this index in the Indexes searched by default for your role under Settings - Access controls - Roles.

(Screenshot: Splunk - Indexes searched by default)

Or you can change the search macros we use in the application and include the list of indexes you use for Monitoring Kubernetes events. You can find the search macros in the Splunk Web UI under Settings - Advanced search - Search macros, or override $SPLUNK_HOME/etc/apps/monitoringkubernetes/default/macros.conf with $SPLUNK_HOME/etc/apps/monitoringkubernetes/local/macros.conf.

(Screenshot: Monitoring Kubernetes - Macros)

Starting with version 5.10, we include a base macro, macro_kubernetes_base, where you can define the list of indexes once; all other macros inherit this configuration.

macro_kubernetes_base = (index=kubernetes_stats OR index=kubernetes_logs)

If you want a more precise configuration, you can modify specific macros instead.

macro_kubernetes_stats = (index=kube_system_stats sourcetype=kubernetes_stats)

You only need to update the following macros:

  • macro_kubernetes_events - all Kubernetes events.
  • macro_kubernetes_host_logs - host logs.
  • macro_kubernetes_logs - container logs.
  • macro_kubernetes_proc_stats - proc metrics.
  • macro_kubernetes_net_stats - network metrics.
  • macro_kubernetes_net_socket_table - network socket tables.
  • macro_kubernetes_stats - system and container metrics.
  • macro_kubernetes_mount_stats - container runtime storage usage metrics.
  • macro_kubernetes_prometheus_metrics - metrics in Prometheus format.
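
As a sketch, overriding the base macro and one specific macro in $SPLUNK_HOME/etc/apps/monitoringkubernetes/local/macros.conf could look like the following (the index names are the examples used above; adjust them to your environment):

[macro_kubernetes_base]
definition = (index=kubernetes_stats OR index=kubernetes_logs)

[macro_kubernetes_stats]
definition = (index=kube_system_stats sourcetype=kubernetes_stats)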

Using dedicated indexes for different types of data

Considering the application's access patterns and the content of the events, we recommend splitting logs and metrics into dedicated indexes. For example, kube_system_logs for events, container logs, and host logs; kube_system_stats for proc and system metrics; and kube_prometheus for Prometheus metrics. You can also specify a dedicated index for every type of data that Collectord forwards.

Using dedicated indexes also allows you to specify different retention policies for logs and metrics.
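
As an illustration, a minimal indexes.conf sketch with different retention periods might look like this (the index names follow the examples above, and the frozenTimePeriodInSecs values are only placeholders for your own retention policy):

[kube_system_logs]
homePath   = $SPLUNK_DB/kube_system_logs/db
coldPath   = $SPLUNK_DB/kube_system_logs/colddb
thawedPath = $SPLUNK_DB/kube_system_logs/thaweddb
# example: keep logs for 30 days
frozenTimePeriodInSecs = 2592000

[kube_system_stats]
homePath   = $SPLUNK_DB/kube_system_stats/db
coldPath   = $SPLUNK_DB/kube_system_stats/colddb
thawedPath = $SPLUNK_DB/kube_system_stats/thaweddb
# example: keep metrics for 90 days
frozenTimePeriodInSecs = 7776000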

You can do that in the Configuration Reference file by setting the index value of each input to the destination index you want to use.

data:
  collector.conf: |
    ...
    
    [input.system_stats]
    
    ...
    
    # specify Splunk index
    index = 
    
    ...
    
    [input.proc_stats]
    
    ...
    
    # specify Splunk index
    index = 
    
    ...
    
    [input.net_stats]
    
    ...
    
    # specify Splunk index
    index = 
    
    ...
    
    [input.net_socket_table]
    
    ...
    
    # specify Splunk index
    index = 
    
    ...
    
    [input.mount_stats]
    
    ...
    
    # specify Splunk index
    index = 
    
    ...
    
    [input.files]
    
    ...
    
    # specify Splunk index
    index = 
    
    ...
    
    [input.files::syslog]
    
    ...
    
    # specify Splunk index
    index =

    ...
    
    [input.files::logs]
    
    ...
    
    # specify Splunk index
    index =
    
    ...
    
    [input.kubernetes_events]
    
    ...
    
    # specify Splunk index
    index =
    
    ...
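
For example, with the dedicated index names suggested above (the names themselves are only examples), the relevant stanzas in collector.conf could be filled in like this:

# metrics inputs go to the metrics index
[input.system_stats]
index = kube_system_stats

[input.proc_stats]
index = kube_system_stats

# Kubernetes events go to the logs index
[input.kubernetes_events]
index = kube_system_logs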

Configuring dedicated indexes, source and sourcetype for Namespaces

You can also override the target index for specific namespaces in Kubernetes. Collectord watches for annotations on namespaces, workloads, and pods. For example, to index all container logs, metrics, and events from the namespace namespace1 in the index kubernetes_namespace1, you can annotate this namespace with

kubectl annotate namespaces namespace1 \
  collectord.io/index=kubernetes_namespace1
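
Because Collectord also watches workload and pod annotations, you can route the data of a single workload to its own index in the same way; the deployment and index names below are hypothetical.

kubectl annotate deployment my-deployment -n namespace1 \
  collectord.io/index=kubernetes_my_deployment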

You can learn more about the available annotations in the Annotations documentation.

