When more than one cluster forwards data into the same Splunk deployment, every dashboard turns into a guessing game: which cluster did this pod come from, and which environment is this node in? The fix is to tag the data at the source so you can filter cleanly in Splunk.
Identify the cluster with the configuration
The simplest tag is a cluster name. Set it once on each Collectord deployment and every event from that cluster carries it:
    [general]

    ...

    fields.kubernetes_cluster = -

For example, on a development cluster:
    [general]

    ...

    fields.kubernetes_cluster = development
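Once the field is set, you can check it with a quick search in Splunk. A minimal example, assuming the field arrives on events as kubernetes_cluster (the suffix of the fields.* setting) and without assuming a particular index; narrow the index to your own setup:

    index=* kubernetes_cluster=development
    | stats count by sourcetype

If the search returns events, any dashboard filter that accepts the cluster field will work the same way.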
Cluster labels
A second axis — useful when you want to filter by environment, region, or any other dimension your nodes already encode — is node labels. Most dashboards in the app let you filter on them.
Take two clusters, prod and dev, each with master1, node1, and node2. Tag every node in the dev cluster with example.com/cluster: dev and every node in prod with example.com/cluster: prod using kubectl:
    $ kubectl edit nodes/master1

Find the labels list and append the new label:
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/hostname: master1
      node-role.kubernetes.io/master: ""
      example.com/cluster: dev

Repeat for every node in every cluster. The dashboards will then let you filter on example.com/cluster=dev or example.com/cluster=prod.
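If editing each node by hand is tedious, kubectl label applies the same change in one command per cluster. A sketch, assuming your kubeconfig has contexts named dev and prod (both names are assumptions, substitute your own):

    $ kubectl --context dev label nodes master1 node1 node2 example.com/cluster=dev
    $ kubectl --context prod label nodes master1 node1 node2 example.com/cluster=prod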
Collectord reads node labels at startup only. After changing labels, restart Collectord so it picks them up.
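A sketch of that restart, assuming Collectord runs as a DaemonSet named collectorforkubernetes in a namespace of the same name; the names in your deployment manifest may differ:

    $ kubectl -n collectorforkubernetes rollout restart daemonset/collectorforkubernetes
    $ kubectl -n collectorforkubernetes rollout status daemonset/collectorforkubernetes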
ACL for Clusters
When different teams should see different data — Team1 their namespace, Team2 theirs, Admins everything — push the separation down to Splunk indexes and let Splunk’s role system enforce it. Every search in the app runs through macros, so once the indexes are wired up, the macros do the rest.
A worked example: you have Admins, Team1, and Team2. Admins should see Production and Development across all namespaces, Team1 should see only NamespaceTeam1, and Team2 should see only NamespaceTeam2.
Define six indexes — one default per cluster, plus one per team per cluster:
- kubernetes_prod_team1
- kubernetes_prod_team2
- kubernetes_prod
- kubernetes_dev_team1
- kubernetes_dev_team2
- kubernetes_dev
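If you provision indexes through configuration rather than Splunk Web, the definitions go in indexes.conf on the indexers. A minimal sketch for two of the six (repeat the pattern for the rest), with no retention or sizing tuning:

    [kubernetes_prod]
    homePath   = $SPLUNK_DB/kubernetes_prod/db
    coldPath   = $SPLUNK_DB/kubernetes_prod/colddb
    thawedPath = $SPLUNK_DB/kubernetes_prod/thaweddb

    [kubernetes_prod_team1]
    homePath   = $SPLUNK_DB/kubernetes_prod_team1/db
    coldPath   = $SPLUNK_DB/kubernetes_prod_team1/colddb
    thawedPath = $SPLUNK_DB/kubernetes_prod_team1/thaweddb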
Then create two HEC tokens. The Production token defaults to kubernetes_prod and is allowed to write to kubernetes_prod_team1 and kubernetes_prod_team2. The Development token defaults to kubernetes_dev and is allowed to write to kubernetes_dev_team1 and kubernetes_dev_team2.
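If you manage HEC tokens in configuration files instead of Splunk Web, they live in inputs.conf on the instance that terminates HEC. A sketch with placeholder token values; the stanza names are arbitrary:

    [http://kubernetes_prod]
    token = <production-token-guid>
    index = kubernetes_prod
    indexes = kubernetes_prod, kubernetes_prod_team1, kubernetes_prod_team2
    disabled = 0

    [http://kubernetes_dev]
    token = <development-token-guid>
    index = kubernetes_dev
    indexes = kubernetes_dev, kubernetes_dev_team1, kubernetes_dev_team2
    disabled = 0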
Use the Production token on the production cluster, the Development token on the development cluster, and use annotations on NamespaceTeam1 and NamespaceTeam2 to redirect their data into kubernetes_prod_team1, kubernetes_prod_team2, kubernetes_dev_team1, and kubernetes_dev_team2 respectively.
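Collectord reads index overrides from annotations on the namespace. A sketch using the collectord.io/index annotation (the annotation name here is an assumption; check the annotation reference for your Collectord version), run against the production cluster:

    $ kubectl annotate namespace NamespaceTeam1 collectord.io/index=kubernetes_prod_team1
    $ kubectl annotate namespace NamespaceTeam2 collectord.io/index=kubernetes_prod_team2

On the development cluster, point the same namespaces at kubernetes_dev_team1 and kubernetes_dev_team2.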
On the Splunk side, change the macros to search across index=kubernetes_*, then create three roles: Admins with access to every index, Team1 with access to kubernetes_prod_team1 and kubernetes_dev_team1, and Team2 with access to kubernetes_prod_team2 and kubernetes_dev_team2. The same dashboards now show different data depending on who’s logged in — Team1 and Team2 see only logs and metrics from their own pods, while Admins see everything including system-level data.
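The role-to-index mapping can be done in Splunk Web or in authorize.conf. A sketch of the three roles, assuming they inherit from the built-in user role; the role names themselves are placeholders:

    [role_kubernetes_admins]
    importRoles = user
    srchIndexesAllowed = kubernetes_*

    [role_kubernetes_team1]
    importRoles = user
    srchIndexesAllowed = kubernetes_prod_team1;kubernetes_dev_team1

    [role_kubernetes_team2]
    importRoles = user
    srchIndexesAllowed = kubernetes_prod_team2;kubernetes_dev_team2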