Monitoring OpenShift - Version 3

Monitoring OpenShift Configuration

Created OpenShift Objects

The configuration file collectorforopenshift.yaml creates several OpenShift objects.

  • ClusterRole collectorforopenshift with limited permissions to get, list, and watch most of the deployment objects. The collector uses this information to enrich logs and stats with OpenShift-specific metadata.
  • ServiceAccount collectorforopenshift is used to connect to the OpenShift API.
  • ClusterRoleBinding collectorforopenshift binds the service account to the cluster role (see the sketch after this list).
  • ConfigMap collectorforopenshift delivers the configuration file to the collector.
  • DaemonSet collectorforopenshift deploys the collector on every node, including master nodes.
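
As an illustration of how these objects fit together, here is a minimal sketch of a ClusterRoleBinding tying the service account to the cluster role. This is not the exact definition shipped in collectorforopenshift.yaml; the apiVersion and the namespace are assumptions, so check the shipped file for the real values.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collectorforopenshift
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collectorforopenshift
subjects:
  - kind: ServiceAccount
    name: collectorforopenshift
    # the project (namespace) name below is an assumption
    namespace: collectorforopenshift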

Please read the comments in the collectorforopenshift.yaml file for more details on all configuration options and the sources of the logs and metrics.

Collector configuration

ConfigMap collectorforopenshift delivers the configuration file to the collector. This is an INI file in which all configuration values are commented out; the commented-out values are the defaults.
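
For orientation, below is a hypothetical fragment of such a ConfigMap; the data key, section, and key names are illustrative only, the real ones are defined in collectorforopenshift.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: collectorforopenshift
data:
  # each data key becomes an INI configuration file for the collector;
  # the file name, section, and key below are illustrative, not the shipped defaults
  collector.conf: |
    [general]
    # commented-out entries document the default values, for example:
    # logLevel = INFO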

Values can be overridden with environment variables using the format specified below:

COLLECTOR__{ANY_NAME}={section}__{key}={value}

Configuring with environment variables is the simplest way to explore and debug quickly, but we recommend writing your configuration file based on the defaults provided with collectorforopenshift.yaml.
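
For example, an override can be added to the env section of the container spec in the collectorforopenshift DaemonSet. The name after COLLECTOR__ can be anything; the section and key used below (output.splunk and url) are only an illustration, so substitute the setting you actually want to change.

env:
  # the value follows {section}__{key}={value}; section/key here are illustrative
  - name: COLLECTOR__SPLUNK_URL
    value: output.splunk__url=https://splunk.example.com:8088/services/collector/event/1.0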

Join Rules

By default, the collector joins a message with the previous one if it starts with spaces. Below you can find how to specify a custom rule, using a Java application as an example.

Suppose this is a sample of the application logs:

[2017-09-04T06:28:05,664][WARN ][MyComponent]
java.security.AccessControlException: access denied
  at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]
  at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]
[2017-09-04T06:28:05,664][WARN ][MyComponent] another message

You can specify a join rule that matches all containers whose name contains my_app, and requires the pattern for a new message to match the regex ^\[\d{4}-.

[pipe.join::my_app]
matchRegex.openshift_container_name = .+my_app.+
patternRegex = ^\[\d{4}-

Cluster labels

Our dashboards allow you to filter nodes based on node labels; if you label your nodes with the names of your clusters, you can easily filter down to a specific cluster. To leverage that capability, you need to label the nodes in your clusters.

For example, if you have two clusters, prod and dev, and each cluster has the nodes master1, node1, and node2, you can apply labels to every node by using oc.

As an example, for the dev cluster, node master1, you can append the label example.com/cluster: dev.

$ oc edit nodes/master1

And add the label:

  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: master1
    node-role.kubernetes.io/master: ""
    example.com/cluster: dev

Do that for all of the nodes in your clusters. After that, you will be able to filter clusters in dashboards by using the node labels example.com/cluster=dev and example.com/cluster=prod.

Our collector reads node labels only at start. To apply this change, you need to restart the collector. One way of doing that is to apply a label to the pod template in the DaemonSet configuration, as sketched below.
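
A minimal sketch of that approach: add a placeholder label to the pod template in collectorforopenshift.yaml and bump its value whenever you need the collector pods to be recreated, so they pick up the new node labels. The label name and value below are placeholders, not something the collector requires.

spec:
  template:
    metadata:
      labels:
        # keep the existing labels from collectorforopenshift.yaml and add a
        # placeholder one; change its value to trigger a restart of the pods
        collector-restarted-at: "2018-01-01"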

collectorforopenshift.yaml

Download the latest configuration file from the Configuration page.

