OpenShift Objects

Starting with version 5.9, you can stream all the changes from the Kubernetes API server to Splunk. This is useful if you want to monitor all changes for the workloads or ConfigMaps in Splunk, or if you want to recreate a Kubernetes Dashboard experience in Splunk. With the default configuration, we don’t forward any objects from the API Server except events.

Configuration

In the ConfigMap for collectorforopenshift, you can add additional sections to 004-addon.conf. In the example below, we forward all objects every 10 minutes (refresh) and stream all changes immediately for pods, deployments, and ConfigMaps.

[input.kubernetes_watch::pods]

# disable the input
disabled = false

# Set how often the watch request should refresh the whole list
refresh = 10m

apiVersion = v1
kind = Pod
namespace =

# override type
type = openshift_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# exclude managed fields from the metadata
excludeManagedFields = true


[input.kubernetes_watch::deployments]

# disable the input
disabled = false

# Set how often the watch request should refresh the whole list
refresh = 10m

apiVersion = apps/v1
kind = Deployment
namespace =

# override type
type = openshift_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# exclude managed fields from the metadata
excludeManagedFields = true


[input.kubernetes_watch::configmap]

# disable the input
disabled = false

# Set how often the watch request should refresh the whole list
refresh = 10m

apiVersion = v1
kind = ConfigMap
namespace =

# override type
type = openshift_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

# exclude managed fields from the metadata
excludeManagedFields = true

If you need to stream other kinds of objects, you can find the list of available objects in the Kubernetes API reference. You need to find the correct apiVersion and kind. All object kinds in the core group have an apiVersion of v1. Other groups, like apps, use an apiVersion of the form apps/v1. You can also specify a namespace if you want to forward objects only from a specific namespace.
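For example, a stanza for streaming Service objects could look like the following (the stanza name services is arbitrary; only apiVersion, kind, and namespace matter):

```conf
[input.kubernetes_watch::services]
disabled = false
refresh = 10m
# Service belongs to the core group, so apiVersion is just v1
apiVersion = v1
kind = Service
# leave empty to watch all namespaces
namespace =
type = openshift_objects
```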

Modifying objects

Available in collectorforopenshift:5.19+

If you want to hide sensitive information or remove some properties from the objects before streaming them to Splunk, you can use the modifyValues.* settings in the ConfigMap.

As an example, if you give Collectord the ability to query Secrets from the Kubernetes API (you need to add secrets under resources for the collectorforopenshift ClusterRole, see below), you can define a new input:

[input.kubernetes_watch::secrets]
disabled = false
refresh = 10m
apiVersion = v1
kind = Secret
namespace =
type = openshift_objects
index =
output =
excludeManagedFields = true
# hash all fields before sending them to Splunk
modifyValues.object.data.* = hash:sha256
# remove annotations like last-applied-configuration not to expose values by accident
modifyValues.object.metadata.annotations.kubectl* = remove

In that case, all the values for keys under object.data will be hashed, and annotations that start with kubectl will be removed (this is a special case to remove the last-applied-configuration annotation, which can expose those secrets).

The syntax of modifyValues. is simple: everything after the prefix is a path with a simple glob pattern, where * can appear at the beginning or at the end of a path property. The value is either remove or hash:{hash_function}. The list of hash functions is the same as what can be applied with annotations.
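For example, assuming the object has a data section and annotations (the annotation key below is made up for illustration):

```conf
# trailing glob: hash every key under object.data
modifyValues.object.data.* = hash:sha256
# leading glob: remove every annotation whose name ends with -token (illustrative key)
modifyValues.object.metadata.annotations.*-token = remove
```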

Filtering objects

Available in collectorforopenshift:5.21+

If you want to filter objects based on namespaces, you can configure that under a specific input as follows:

[input.kubernetes_watch::pods]
# You can exclude objects by namespace with a blacklist, or whitelist only the required namespaces
# blacklist.kubernetes_namespace = ^namespace0$
# whitelist.kubernetes_namespace = ^((namespace1)|(namespace2))$

For example, you can tell Collectord to stream all the pods except those from the namespace0 namespace:

[input.kubernetes_watch::pods]
blacklist.kubernetes_namespace = ^namespace0$

Or stream only the pods from the namespace1 and namespace2 namespaces:

[input.kubernetes_watch::pods]
whitelist.kubernetes_namespace = ^((namespace1)|(namespace2))$

ClusterRole rules

Please check the collectorforopenshift.yaml configuration for the list of apiGroups and resources that Collectord has access to. In the example above, we request streaming of ConfigMaps, which we don't provide access to in the default configuration. To be able to stream this type of object, we need to add an additional resource to the ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: collectorforopenshift
  name: collectorforopenshift
rules:
- apiGroups:
  - ""
  - apps
  - batch
  - extensions
  - monitoring.coreos.com
  - apps.openshift.io
  - build.openshift.io
  resources:
  - alertmanagers
  - buildconfigs
  - builds
  - cronjobs
  - daemonsets
  - deploymentconfigs
  - deployments
  - endpoints
  - events
  - jobs
  - namespaces
  - nodes
  - nodes/metrics
  - nodes/proxy
  - pods
  - prometheuses
  - replicasets
  - replicationcontrollers
  - scheduledjobs
  - services
  - statefulsets
  - configmaps
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get

Applying the changes

After you make changes to the ConfigMap and ClusterRole, recreate or restart the addon pod (the pod with a name similar to collectorforopenshift-addon-XXX). You can just delete this pod, and the Deployment will recreate it for you.

Searching the data

With the configuration in the example above, Collectord resends all the objects every 10 minutes and streams all the changes immediately. If you are planning to run the join command or populate lookups, make sure that your search time range covers more than the refresh interval; you can use, for example, 12 minutes.
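For example, a scheduled search over the last 12 minutes could keep a lookup of pods up to date (the lookup name openshift_pods.csv is just an example):

```spl
sourcetype="openshift_objects" source="/openshift//v1/pod" earliest=-12m |
spath output=uid path=object.metadata.uid |
stats latest(_raw) as _raw by uid, openshift_namespace |
spath output=pod_name path=object.metadata.name |
table uid, openshift_namespace, pod_name |
outputlookup openshift_pods.csv
```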

The source name

By default, the source will be in the format /openshift/{namespace}/{apiVersion}/{kind}, where namespace is the namespace used to make the request (from the configuration, not the actual namespace of the object), and apiVersion and kind also come from the configuration that makes the request to the API Server. With an empty namespace, the source contains a double slash, as in /openshift//v1/pod.

Attached fields

With the events, we also attach openshift_namespace and openshift_node_labels, which will help you find the objects from the right namespace or from the right cluster (if the node labels have a cluster label).
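For example, the attached field makes it easy to narrow a search to a single namespace without parsing the event body first:

```spl
sourcetype="openshift_objects" openshift_namespace="namespace1" |
spath output=name path=object.metadata.name |
table _time, name
```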

Event format

We forward objects wrapped in the watch object, which means that every event has an object field and a type (ADDED, MODIFIED, or DELETED).
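An ADDED event for a Pod would look roughly like the following (trimmed for brevity; the exact fields depend on the object):

```json
{
  "type": "ADDED",
  "object": {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
      "name": "example-pod",
      "namespace": "namespace1"
    }
  }
}
```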

Searching the data

Considering that in the same time frame you can have the same object more than once (for example, if the object has been modified several times within 10 minutes), you need to group the objects by a unique identifier:

sourcetype="openshift_objects" source="/openshift//v1/pod" |
spath output=uid path=object.metadata.uid |
stats latest(_raw) as _raw, latest(_time) as _time by uid, openshift_namespace |
spath output=name path=object.metadata.name |
spath output=creationTimestamp path=object.metadata.creationTimestamp |
table openshift_namespace, name, creationTimestamp

Example. Extracting limits

A more complicated example shows how to extract container limits and requests (CPU, memory, and GPU):

sourcetype="openshift_objects" source="/openshift//v1/pod" |
spath output=uid path=object.metadata.uid |
stats latest(_raw) as _raw, latest(_time) as _time by uid, openshift_namespace |
spath output=pod_name path=object.metadata.name |
spath output=containers path=object.spec.containers{} |
mvexpand containers |
spath output=container_name path=name input=containers |
spath output=limits_cpu path=resources.limits.cpu input=containers |
spath output=requests_cpu path=resources.requests.cpu input=containers |
spath output=limits_memory path=resources.limits.memory input=containers |
spath output=requests_memory path=resources.requests.memory input=containers |
spath output=limits_gpu path=resources.limits.nvidia.com/gpu input=containers |
spath output=requests_gpu path=resources.requests.nvidia.com/gpu input=containers |
table openshift_namespace, pod_name, container_name, limits_cpu, requests_cpu, limits_memory, requests_memory, limits_gpu, requests_gpu

About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift, and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications that give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use and easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to help you keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.
