Monitoring Multiple Clusters
Identify the cluster in the configuration
When you apply the collectorforopenshift configuration, specify the cluster name in the [general] section:
[general]
...
fields.openshift_cluster = -
For example:
[general]
...
fields.openshift_cluster = development
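With this setting, every event forwarded by Collectord carries an openshift_cluster field, so you can scope any ad hoc search to a single cluster. As a sketch, assuming your data lands in the openshift_* indexes used in the ACL example below:
index=openshift_* openshift_cluster=development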
Cluster labels
Our dashboards allow you to filter nodes based on the node labels.
If you have two clusters, prod and dev, and each cluster has master1, node1 and node2 nodes, you can apply labels to every node with oc. As an example, in the dev cluster you can append the label example.com/cluster: dev to the node master1.
$ oc edit nodes/master1
Find the labels list and append the new label.
labels:
  beta.kubernetes.io/arch: amd64
  beta.kubernetes.io/os: linux
  kubernetes.io/hostname: master1
  node-role.kubernetes.io/master: ""
  example.com/cluster: dev
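Alternatively, you can apply the same label without opening the editor:
$ oc label nodes/master1 example.com/cluster=dev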
If you do that for all the nodes in all your clusters, you will be able to use these labels on most of the dashboards in our applications. With the given example, you will be able to filter by the labels example.com/cluster=dev and example.com/cluster=prod.
Collectord reads node labels only at startup, so you need to restart it for this change to take effect.
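A minimal restart sketch, assuming the default collectorforopenshift namespace and DaemonSet name from the installation manifest (adjust both to your deployment):
$ oc rollout restart daemonset/collectorforopenshift -n collectorforopenshift
On older clusters without oc rollout restart, deleting the Collectord pods has the same effect, since the DaemonSet recreates them.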
ACL for Clusters
All searches in the application are powered by macros. If you want to separate access to the data for specific clusters or namespaces, you can define different target indexes for clusters or namespaces and update the macros to use these indexes.
For example, let’s assume you have Admins, Team1, and Team2 organizations in your company. You want Admins to see data from Production and Development environments and all namespaces, Team1 to see only data from NamespaceTeam1, and Team2 to see only data from NamespaceTeam2.
You can define several indices:
openshift_prod_team1
openshift_prod_team2
openshift_prod
openshift_dev_team1
openshift_dev_team2
openshift_dev
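As a sketch, each of these indexes can be defined in indexes.conf on the indexers; the paths below are placeholders, so tune them along with retention settings for your deployment:
# indexes.conf
[openshift_prod]
homePath   = $SPLUNK_DB/openshift_prod/db
coldPath   = $SPLUNK_DB/openshift_prod/colddb
thawedPath = $SPLUNK_DB/openshift_prod/thaweddb
# repeat for openshift_prod_team1, openshift_prod_team2,
# openshift_dev, openshift_dev_team1 and openshift_dev_team2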
Create two HTTP Tokens. One for the Production cluster with the default index openshift_prod; allow this Token to write to openshift_prod_team1 and openshift_prod_team2. Another Token for the Development cluster with the default index openshift_dev; allow this Token to write to openshift_dev_team1 and openshift_dev_team2.
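If you manage HTTP Event Collector tokens in configuration files, the Production Token could look like the following inputs.conf sketch (the token value is a placeholder; generate your own):
# inputs.conf on the Splunk instance that receives HEC traffic
[http://openshift_prod]
token = 00000000-0000-0000-0000-000000000000
index = openshift_prod
indexes = openshift_prod, openshift_prod_team1, openshift_prod_team2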
For the OpenShift cluster running in the Production environment, use the first Token; for the cluster running in the Development environment, use the second Token. Use annotations to override indexes for the namespaces NamespaceTeam1 and NamespaceTeam2 to redirect their data to the indexes openshift_prod_team1, openshift_prod_team2, openshift_dev_team1, and openshift_dev_team2.
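For example, assuming the collectord.io/index annotation described in the Annotations documentation, in the Production cluster you could annotate the namespaces like this:
$ oc annotate namespace NamespaceTeam1 collectord.io/index=openshift_prod_team1
$ oc annotate namespace NamespaceTeam2 collectord.io/index=openshift_prod_team2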
In Splunk, change the macros to always search in the indices index=openshift_*. Create 3 roles in Splunk: one for Admins that has access to all created indices, a second role for Team1 with access to openshift_prod_team1 and openshift_dev_team1, and a third role for Team2 with access to openshift_prod_team2 and openshift_dev_team2. Now, depending on who is logged in to Splunk, you will see a different set of data in the application. Team1 and Team2 will not be able to see system-related information, only logs and metrics from the pods running in their namespaces. Admins will be able to see all the information.
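A sketch of the Splunk side, assuming the macros live in macros.conf (the macro name below is hypothetical; edit the macros shipped with the application) and the roles are defined in authorize.conf:
# macros.conf (hypothetical macro name)
[openshift_base_search]
definition = index=openshift_*

# authorize.conf
[role_openshift_admins]
importRoles = user
srchIndexesAllowed = openshift_*

[role_team1]
importRoles = user
srchIndexesAllowed = openshift_prod_team1;openshift_dev_team1

[role_team2]
importRoles = user
srchIndexesAllowed = openshift_prod_team2;openshift_dev_team2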
Links
- Installation
  - Start monitoring your OpenShift environments in under 10 minutes.
  - Automatically forward host, container and application logs.
  - Test our solution with the embedded 30-day evaluation license.
- Collectord Configuration
  - Collectord configuration reference.
- Annotations
  - Changing index, source and sourcetype for namespaces, workloads and pods.
  - Forwarding application logs.
  - Multi-line container logs.
  - Fields extraction for application and container logs (including timestamp extractions).
  - Hiding sensitive data, stripping terminal escape codes and colors.
  - Forwarding Prometheus metrics from Pods.
- Audit Logs
  - Configure audit logs.
  - Forwarding audit logs.
- Prometheus metrics
  - Collect metrics from the control plane (etcd cluster, API server, kubelet, scheduler, controller).
  - Configure Collectord to forward metrics from services in Prometheus format.
- Configuring Splunk Indexes
  - Using a non-default HTTP Event Collector index.
  - Configure the Splunk application to use indexes that are not searchable by default.
- Splunk fields extraction for container logs
  - Configure search-time field extractions for container logs.
  - Container logs source pattern.
- Configurations for Splunk HTTP Event Collector
  - Configure multiple HTTP Event Collector endpoints for load balancing and failover.
  - Secure HTTP Event Collector endpoint.
  - Configure a proxy for the HTTP Event Collector endpoint.
- Monitoring multiple clusters
  - Learn how to monitor multiple clusters.
  - Learn how to set up ACL in Splunk.
- Streaming OpenShift Objects from the API Server
  - Learn how to stream all changes from the OpenShift API Server.
  - Stream changes and objects from the OpenShift API Server, including Pods, Deployments or ConfigMaps.
- License Server
  - Learn how to configure a remote License URL for Collectord.
- Monitoring GPU
- Alerts
- Troubleshooting
- Release History
- Upgrade instructions
- Security
- FAQ and common questions
- License agreement
- Pricing
- Contact