Streaming OpenShift Objects from the API Server
Starting with version 5.9 you can stream all the changes from the Kubernetes API server to Splunk. That is useful if you
want to monitor all changes to the Workloads or ConfigMaps in Splunk, or if you want to recreate a Kubernetes Dashboard
experience in Splunk. With the default configuration we don't forward any objects from the API Server except events.
In the ConfigMap configuration for collectorforopenshift you can add additional sections for the objects you want to forward.
In the example below we forward all objects every 10 minutes (the refresh interval) and stream all changes right away for
Pods, Deployments and ConfigMaps.
[input.kubernetes_watch::pods]

# disable events
disabled = false

# Set the timeout for how often watch request should refresh the whole list
refresh = 10m

apiVersion = v1
kind = pod
namespace =

# override type
type = openshift_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

[input.kubernetes_watch::deployments]

# disable events
disabled = false

# Set the timeout for how often watch request should refresh the whole list
refresh = 10m

apiVersion = apps/v1
kind = Deployment
namespace =

# override type
type = openshift_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =

[input.kubernetes_watch::configmap]

# disable events
disabled = false

# Set the timeout for how often watch request should refresh the whole list
refresh = 10m

apiVersion = v1
kind = ConfigMap
namespace =

# override type
type = openshift_objects

# specify Splunk index
index =

# set output (splunk or devnull, default is [general]defaultOutput)
output =
If you need to stream other kinds of objects, you can find the list of available objects in the Kubernetes API reference.
You need to find the correct apiVersion and kind. All the object kinds in the core group have the apiVersion
v1. Other groups, like apps, have an apiVersion in the form apps/v1. You can also specify a namespace if you want to forward
objects only from a specific namespace.
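For example, a watch on Services limited to a single namespace could look like the following (the section name and the namespace value are illustrative, not part of the default configuration):

```ini
[input.kubernetes_watch::services]
disabled = false
refresh = 10m
apiVersion = v1
kind = Service
# forward objects only from this namespace (illustrative value)
namespace = default
type = openshift_objects
```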
Please check the collectorforopenshift.yaml configuration for the list of apiGroups and resources that collectord has access to.
In the example above we request streaming of ConfigMaps, to which we do not grant access in our default configuration.
To be able to stream this type of object, we need to add an additional resource to the ClusterRole.
apiVersion: v1
kind: ClusterRole
metadata:
  labels:
    app: collectorforopenshift
  name: collectorforopenshift
rules:
- apiGroups:
  - ""
  - apps
  - batch
  - extensions
  - monitoring.coreos.com
  - apps.openshift.io
  - build.openshift.io
  resources:
  - alertmanagers
  - buildconfigs
  - builds
  - cronjobs
  - daemonsets
  - deploymentconfigs
  - deployments
  - endpoints
  - events
  - jobs
  - namespaces
  - nodes
  - nodes/metrics
  - nodes/proxy
  - pods
  - prometheuses
  - replicasets
  - replicationcontrollers
  - scheduledjobs
  - services
  - statefulsets
  - configmaps
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get
Applying the changes
After you make changes to the ConfigMap and ClusterRole, recreate or restart the addon Pod (the pod with a name similar to
collectorforopenshift-addon-XXX). You can just delete this pod and the Deployment will recreate it for you.
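With oc, that can look like the following (the file name, namespace and label selector below are assumptions based on the defaults; adjust them to your environment):

```shell
# re-apply the modified ConfigMap and ClusterRole (file name assumed)
oc apply -f collectorforopenshift.yaml

# delete the addon pod; its Deployment recreates it automatically
# (namespace and label selector are assumptions, verify with `oc get pods`)
oc delete pods -n collectorforopenshift -l app=collectorforopenshift-addon
```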
Searching the data
With the configuration in the example above, collectord will resend all the objects every 10 minutes and stream all
the changes right away. If you are planning to run the join command or populate lookups, make sure that your
search time range covers more than one refresh interval.
The source name
By default the source will be in the format /openshift/{namespace}/{apiVersion}/{kind}, where namespace is the namespace
that is used to make the request (from the configuration, not the actual namespace of the object), and apiVersion and kind,
also from the configuration, are the values that make up the API request to the API Server.
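As an illustration, the pods watch configured above has an empty namespace, apiVersion v1 and kind pod, so its source resolves to /openshift//v1/pod, matching the example search later on this page (the helper function below is hypothetical, not part of collectord):

```python
def watch_source(namespace: str, api_version: str, kind: str) -> str:
    """Build the Splunk source for a kubernetes_watch input.

    Hypothetical helper illustrating the /openshift/{namespace}/{apiVersion}/{kind}
    format; all three values come from the input configuration, and namespace
    may be empty when no namespace is configured.
    """
    return "/openshift/{}/{}/{}".format(namespace, api_version, kind)

# The pods watch above has no namespace configured:
print(watch_source("", "v1", "pod"))  # -> /openshift//v1/pod
```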
With the events we also attach the openshift_namespace and openshift_node_labels fields, which help you to find the objects
from the right namespace or from the right cluster (if the node labels include a cluster label).
We forward objects wrapped in the watch object, which means that every event has an object field and a type field (ADDED, MODIFIED or DELETED).
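The event shape is the standard Kubernetes watch response; a minimal sketch of consuming one such event (the handler below is a hypothetical illustration, not collectord code):

```python
import json

def describe_watch_event(raw: str) -> str:
    """Summarize a Kubernetes watch event: every event carries a `type`
    (ADDED, MODIFIED or DELETED) and the full resource under `object`."""
    event = json.loads(raw)
    obj = event["object"]
    meta = obj["metadata"]
    return "{} {}/{} ({})".format(
        event["type"], meta.get("namespace", ""), meta["name"], obj["kind"])

# Example event as it would appear in the stream (illustrative values)
raw_event = json.dumps({
    "type": "MODIFIED",
    "object": {
        "kind": "Pod",
        "metadata": {"namespace": "default", "name": "nginx-1"},
    },
})
print(describe_watch_event(raw_event))  # -> MODIFIED default/nginx-1 (Pod)
```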
Searching the data
Considering that in the same time frame you can have the same object more than once (for example, if the object has been modified several times in 10 minutes), you need to group the objects by their unique identifier.

sourcetype="openshift_objects" source="/openshift//v1/pod"
| spath output=uid path=object.metadata.uid
| stats latest(_raw) as _raw, latest(_time) as _time by uid, openshift_namespace
| spath output=name path=object.metadata.name
| spath output=creationTimestamp path=object.metadata.creationTimestamp
| table openshift_namespace, name, creationTimestamp
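The deduplication performed by the stats latest(...) by uid step can be sketched in plain Python (a hypothetical illustration of the logic, not Splunk code): keep only the most recent event per object uid.

```python
def latest_by_uid(events):
    """Keep the most recent event per object uid, mirroring
    `stats latest(_raw) ... by uid` in the search above."""
    latest = {}
    for event in events:
        uid = event["object"]["metadata"]["uid"]
        current = latest.get(uid)
        if current is None or event["_time"] > current["_time"]:
            latest[uid] = event
    return list(latest.values())

# Two versions of the same Pod plus one other Pod (illustrative values):
events = [
    {"_time": 1, "object": {"metadata": {"uid": "a", "name": "nginx-1"}}},
    {"_time": 2, "object": {"metadata": {"uid": "a", "name": "nginx-1"}}},
    {"_time": 1, "object": {"metadata": {"uid": "b", "name": "nginx-2"}}},
]
print(len(latest_by_uid(events)))  # -> 2
```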