Monitoring Multiple Clusters and ACL
Our dashboards allow you to filter nodes based on node labels.
If you have two clusters, prod and dev, each with its own set of nodes, you can apply labels to every node with kubectl. As an example, in the dev cluster, for the node master1 you can append the label example.com/cluster: dev. Start by editing the node object:
$ kubectl edit nodes/master1
Find the labels section and append the new label:
labels:
  beta.kubernetes.io/arch: amd64
  beta.kubernetes.io/os: linux
  kubernetes.io/hostname: master1
  node-role.kubernetes.io/master: ""
  example.com/cluster: dev
If you do that for all of the nodes in all of your clusters, you will be able to use these labels on most of the dashboards of our applications. With the given example, you will be able to filter by the labels example.com/cluster: prod and example.com/cluster: dev.
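As an alternative to editing every node object by hand, you can attach the label with kubectl label. A minimal sketch for the dev cluster; master1 is the node from the example, node1 stands for any other node, and --overwrite is needed only if the label already exists:

$ kubectl label nodes master1 example.com/cluster=dev
$ kubectl label nodes node1 example.com/cluster=dev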
Our collector reads node labels only at startup. To apply this change, you need to restart the collector.
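If the collector runs as a DaemonSet, one way to restart it is a rolling restart. The DaemonSet and namespace names below (collectorforkubernetes) are assumptions; adjust them to your deployment:

$ kubectl rollout restart daemonset/collectorforkubernetes -n collectorforkubernetes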
ACL for Clusters
All searches in the application are powered by macros. If you want to separate access to the data for specific clusters or namespaces, you can define different target indexes for those clusters or namespaces and update the macros to use these indexes.
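As a sketch of the idea, a macros.conf stanza could point the searches at cluster-specific indexes. The macro name below is hypothetical; use the macro names that ship with the application, and the index names are from the example that follows:

# macros.conf (macro name is hypothetical)
[macro_kubernetes_logs]
definition = (index=kubernetes_prod OR index=kubernetes_dev)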
For example, let's assume you have Admins, Team1, and Team2 organizations in your company. You want Admins to see data from the Production and Development environments and all namespaces, Team1 to see only data from NamespaceTeam1, and Team2 to see only data from NamespaceTeam2.
You can define six indexes: kubernetes_prod, kubernetes_prod_team1, kubernetes_prod_team2, kubernetes_dev, kubernetes_dev_team1, and kubernetes_dev_team2.
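If you prefer configuration files over Splunk Web, a minimal indexes.conf sketch for one of the six indexes on the indexers; the other five follow the same pattern:

# indexes.conf (sketch, standard $SPLUNK_DB layout)
[kubernetes_prod]
homePath   = $SPLUNK_DB/kubernetes_prod/db
coldPath   = $SPLUNK_DB/kubernetes_prod/colddb
thawedPath = $SPLUNK_DB/kubernetes_prod/thaweddb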
Create two HTTP Event Collector tokens. One for the Production cluster with the default index kubernetes_prod; allow this token to also write to kubernetes_prod_team1 and kubernetes_prod_team2. Another token for the Development cluster with the default index kubernetes_dev; allow this token to also write to kubernetes_dev_team1 and kubernetes_dev_team2.
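In inputs.conf terms the two tokens could look like the sketch below; the token values are placeholders, generate your own:

# inputs.conf on the instance that terminates HTTP Event Collector traffic
[http://kubernetes_prod]
token = 00000000-0000-0000-0000-000000000001
index = kubernetes_prod
indexes = kubernetes_prod, kubernetes_prod_team1, kubernetes_prod_team2

[http://kubernetes_dev]
token = 00000000-0000-0000-0000-000000000002
index = kubernetes_dev
indexes = kubernetes_dev, kubernetes_dev_team1, kubernetes_dev_team2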
For the Kubernetes cluster running in the Production environment use the first token; for the cluster running in the Development environment use the second token. Use annotations to override the indexes for the namespaces NamespaceTeam1 and NamespaceTeam2 to redirect their data to the team-specific indexes.
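Assuming the collector supports the collectord.io/index annotation (verify the exact annotation name against the collector configuration reference for your version), a sketch for the Production cluster follows; on the Development cluster point the same namespaces at the kubernetes_dev_* indexes:

$ kubectl annotate namespace NamespaceTeam1 collectord.io/index=kubernetes_prod_team1
$ kubectl annotate namespace NamespaceTeam2 collectord.io/index=kubernetes_prod_team2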
In Splunk, change the macros to always search across all of the indexes with index=kubernetes_*. Create three roles in Splunk: a first role, Admins, with access to all created indexes; a second role, Team1, with access to kubernetes_prod_team1 and kubernetes_dev_team1; and a third role, Team2, with access to kubernetes_prod_team2 and kubernetes_dev_team2. Now, depending on who is logged in to Splunk, users will see a different set of data in the application. Team1 and Team2 will not be able to see system-related information, only logs and metrics from the Pods running in their namespaces. Admins will be able to see all the information.
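A sketch of the three roles in authorize.conf; the role names and the inherited built-in user role are assumptions, adjust them to your authentication setup:

# authorize.conf (sketch)
[role_admins]
importRoles = user
srchIndexesAllowed = kubernetes_*

[role_team1]
importRoles = user
srchIndexesAllowed = kubernetes_prod_team1;kubernetes_dev_team1

[role_team2]
importRoles = user
srchIndexesAllowed = kubernetes_prod_team2;kubernetes_dev_team2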