Outcold Solutions LLC

Monitoring OpenShift - Version 5

Monitoring Multiple Clusters and ACL

Identify the cluster with the configuration

When you apply the collectorforopenshift configuration, specify the cluster name in the `[general]` section:

[general]
...
fields.openshift_cluster = -

For example:

[general]
...
fields.openshift_cluster = development

Cluster labels

Our dashboards allow you to filter nodes based on node labels.

If you have two clusters, prod and dev, and each cluster has the nodes master1, node1, and node2, you can apply labels to every node with oc.

As an example, in the dev cluster you can append the label example.com/cluster: dev to the node master1.

$ oc edit nodes/master1

Find the labels list and append the new label.

  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: master1
    node-role.kubernetes.io/master: ""
    example.com/cluster: dev
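As an alternative to editing the node object by hand, the same label can be applied in one step with oc label (a sketch; oc follows the kubectl label semantics):

```shell
# Append the label to a single node in the dev cluster
oc label node master1 example.com/cluster=dev

# Or label several nodes at once; --overwrite replaces an existing value
oc label nodes master1 node1 node2 example.com/cluster=dev --overwrite
```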

If you do that for all nodes in all of your clusters, you will be able to use these labels on most of the dashboards of our applications. With the given example, you can filter by the labels example.com/cluster=dev and example.com/cluster=prod.

Our collector reads node labels only at startup. To apply this change, you need to restart the collector.
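Assuming the collector is deployed as a DaemonSet named collectorforopenshift in a project of the same name (the names in your deployment may differ), the restart could look like:

```shell
# Restart the collector pods so they re-read node labels
# (DaemonSet and project names are assumptions; adjust to your deployment)
oc rollout restart daemonset/collectorforopenshift -n collectorforopenshift

# On older oc versions without "rollout restart", deleting the pods has the
# same effect, because the DaemonSet recreates them
oc delete pods --all -n collectorforopenshift
```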

ACL for Clusters

All searches in the application are powered by macros. If you want to separate access to the data for specific clusters or namespaces, you can define different target indexes for those clusters or namespaces and update the macros to use these indexes.

For example, let's assume you have Admins, Team1, and Team2 organizations in your company. You want Admins to see data from the Production and Development environments and all namespaces, Team1 to see only data from NamespaceTeam1, and Team2 to see only data from NamespaceTeam2.

You can define several indices:

  • openshift_prod_team1
  • openshift_prod_team2
  • openshift_prod
  • openshift_dev_team1
  • openshift_dev_team2
  • openshift_dev

Create two HTTP Event Collector (HEC) tokens. One is for the Production cluster with the default index openshift_prod; allow this token to also write to openshift_prod_team1 and openshift_prod_team2. The other is for the Development cluster with the default index openshift_dev; allow this token to also write to openshift_dev_team1 and openshift_dev_team2.
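On the Splunk side, a HEC token restricted to a set of indexes is defined with the index (default index) and indexes (allowed list) settings of the http input stanza. A sketch of inputs.conf for the Production token (the stanza name and token value are placeholders):

```
[http://openshift_prod]
disabled = 0
token = 00000000-0000-0000-0000-000000000000
index = openshift_prod
indexes = openshift_prod, openshift_prod_team1, openshift_prod_team2
```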

For the OpenShift cluster running in the Production environment use the first token; for the cluster running in the Development environment use the second token. Use annotations on the namespaces NamespaceTeam1 and NamespaceTeam2 to override the target indexes and redirect their data to openshift_prod_team1, openshift_prod_team2, openshift_dev_team1, and openshift_dev_team2.
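As a sketch, the override can be applied with oc annotate; the annotation name collectord.io/index is an assumption here, so verify it against the annotations reference for your collector version:

```shell
# In the Production cluster, redirect data from Team1's namespace
# (collectord.io/index is an assumed annotation name; check your
# collector's annotations documentation before using it)
oc annotate namespace NamespaceTeam1 collectord.io/index=openshift_prod_team1
```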

In Splunk, change the macros to always search across the indices with index=openshift_*. Create three roles in Splunk: Admins, with access to all created indices; Team1, with access to openshift_prod_team1 and openshift_dev_team1; and Team2, with access to openshift_prod_team2 and openshift_dev_team2. Now, depending on who is logged in to Splunk, the application shows a different set of data. Team1 and Team2 will not be able to see system-related information, only logs and metrics from the Pods running in their namespaces. Admins will be able to see all the information.
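A sketch of the corresponding macros.conf change; the macro name below is a placeholder, so edit the macro names actually shipped with the application:

```
[openshift_index_macro]
definition = index=openshift_*
```

With Splunk's role-based index access in place, the wildcard only expands to the indices each role can read, which is what makes the per-team separation work.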


About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications, which give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.