Outcold Solutions LLC

Monitoring Kubernetes - Version 5

Streaming Kubernetes Objects from the API Server

Starting with version 5.9 you can stream all the changes from the Kubernetes API server to Splunk. That is useful if you want to monitor all changes to Workloads or ConfigMaps in Splunk, or if you want to recreate a Kubernetes Dashboard experience in Splunk. With the default configuration we don't forward any objects from the API Server except events.

Configuration

In the ConfigMap configuration for collectorforkubernetes you can add additional sections to 004-addon.conf. In the example below we forward all objects every 10 minutes (refresh) and stream all changes right away for Pods, Deployments and ConfigMaps.

[input.kubernetes_watch::pods]

# disable this input
disabled = false

# set how often the watch request should refresh the whole list
refresh = 10m

apiVersion = v1
kind = pod
namespace =

# override type
type = kubernetes_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


[input.kubernetes_watch::deployments]

# disable this input
disabled = false

# set how often the watch request should refresh the whole list
refresh = 10m

apiVersion = apps/v1
kind = Deployment
namespace =

# override type
type = kubernetes_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =


[input.kubernetes_watch::configmap]

# disable this input
disabled = false

# set how often the watch request should refresh the whole list
refresh = 10m

apiVersion = v1
kind = ConfigMap
namespace =

# override type
type = kubernetes_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

If you need to stream other kinds of objects, you can find the list of available kinds in the Kubernetes API reference. You need to find the correct apiVersion and kind. Object kinds in the core group have an apiVersion of v1. Other groups, like apps, have an apiVersion such as apps/v1. You can also specify a namespace if you want to forward objects only from a specific namespace.
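For example, a stanza for streaming Services could look like the sketch below. Service is a core-group kind, so its apiVersion is v1; the section name services is just a label we chose for this example.

```ini
# hypothetical example: stream Service objects from all namespaces
[input.kubernetes_watch::services]

disabled = false

# resend the full list every 10 minutes
refresh = 10m

# Services belong to the core group, so apiVersion is just v1
apiVersion = v1
kind = Service
namespace =

type = kubernetes_objects
index =
output =
```

Remember that the ClusterRole also needs get, list and watch access to the services resource for this to work.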

ClusterRole rules

Please check the collectorforkubernetes.yaml configuration for the list of apiGroups and resources that collectord has access to. In the example above we request streaming of ConfigMaps, which we don't grant access to in our default configuration. To be able to stream this type of object, we need to add an additional resource to the ClusterRole.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: collectorforkubernetes
  name: collectorforkubernetes
rules:
- apiGroups:
  - ""
  - apps
  - batch
  - extensions
  - monitoring.coreos.com
  - etcd.database.coreos.com
  - vault.security.coreos.com
  resources:
  - alertmanagers
  - cronjobs
  - daemonsets
  - deployments
  - endpoints
  - events
  - jobs
  - namespaces
  - nodes
  - nodes/proxy
  - pods
  - prometheuses
  - replicasets
  - replicationcontrollers
  - scheduledjobs
  - services
  - statefulsets
  - vaultservices
  - etcdclusters
  - configmaps
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get

Applying the changes

After you make changes to the ConfigMap and ClusterRole, recreate or restart the addon Pod (the Pod with a name similar to collectorforkubernetes-addon-XXX). You can just delete this Pod, and the Deployment will recreate it for you.
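Assuming a standard installation, these steps can be sketched with kubectl as below; the namespace and label selector are assumptions, so adjust them to match your deployment, and replace the XXX suffix with the actual Pod name from the get pods output.

```
# apply the updated manifest with the ConfigMap and ClusterRole
kubectl apply -f collectorforkubernetes.yaml

# find the addon Pod (namespace and label are assumptions for this example)
kubectl get pods -n collectorforkubernetes -l app=collectorforkubernetes

# delete the addon Pod; the Deployment recreates it with the new configuration
kubectl delete pod -n collectorforkubernetes collectorforkubernetes-addon-XXX
```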

Searching the data

With the configuration in the example above, collectord resends all the objects every 10 minutes and streams all the changes right away. If you are planning to run the join command or populate lookups, make sure that your search covers more than the refresh interval; you can use, for example, 12 minutes.
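As an illustration, a scheduled search that populates a lookup of currently known Pods might look like the sketch below. It runs over the last 12 minutes (slightly more than the 10 minute refresh) so that every object is guaranteed to appear at least once; the lookup name current_pods.csv is an assumption for this example.

```
sourcetype="kubernetes_objects" source="/kubernetes//v1/pod" earliest=-12m |
spath output=uid path=object.metadata.uid |
stats latest(_raw) as _raw by uid |
spath output=name path=object.metadata.name |
spath output=namespace path=object.metadata.namespace |
table uid, namespace, name |
outputlookup current_pods.csv
```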

The source name

By default the source is in the format /kubernetes/{namespace}/{apiVersion}/{kind}, where namespace is the namespace used to make the request (from the configuration, not the actual namespace of the object), and apiVersion and kind also come from the configuration used to make the request to the API Server. For example, the Pods configuration above leaves the namespace empty, so the source is /kubernetes//v1/pod.

Attached fields

With each event we also attach the kubernetes_namespace and kubernetes_node_labels fields, which help you find objects from the right namespace or the right cluster (if the node labels include a cluster label).

Event format

We forward objects wrapped in a watch object, which means that every event has an object field and a type field (ADDED, MODIFIED or DELETED).
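A forwarded Pod event looks roughly like the following; the metadata values are illustrative, and the real object carries the full Pod spec and status, which are abbreviated here.

```json
{
  "type": "MODIFIED",
  "object": {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
      "name": "example-pod",
      "namespace": "default",
      "uid": "2a3f5c7e-0000-0000-0000-000000000000",
      "creationTimestamp": "2019-05-21T17:03:11Z"
    }
  }
}
```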

Objects


Considering that in the same time frame you can see the same object more than once (for example, if the object has been modified several times within 10 minutes), you need to group the objects by a unique identifier.

sourcetype="kubernetes_objects" source="/kubernetes//v1/pod" |
spath output=uid path=object.metadata.uid |
stats latest(_raw) as _raw, latest(_time) as _time by uid, kubernetes_namespace |
spath output=name path=object.metadata.name |
spath output=creationTimestamp path=object.metadata.creationTimestamp |
table kubernetes_namespace, name, creationTimestamp

Table


About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications, which give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.