
Monitoring OpenShift - Version 5

Configurations for Splunk HTTP Event Collector

Configure HTTP Event Collector secure connection

Splunk uses self-signed certificates by default. The collector provides various configuration options for setting up how it should connect to HTTP Event Collector.

Configure trusted SSL connection to the self-signed certificate

If you are using the Splunk self-signed certificate, you can copy the server CA certificate from $SPLUNK_HOME/etc/auth/cacert.pem and create a secret from it.

oc --namespace collectorforopenshift create secret generic splunk-cacert --from-file=./cacert.pem
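
You can verify that the secret was created (a quick check, using the same namespace and secret name as above):

oc --namespace collectorforopenshift get secret splunk-cacert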

For every collectorforopenshift workload (two DaemonSets and one Deployment), you need to attach this secret as a volume.

...
        volumeMounts:
        - name: splunk-cacert
          mountPath: "/splunk-cacert/"
          readOnly: true
        ...
      volumes:
      - name: splunk-cacert
        secret:
          secretName: splunk-cacert
      ...
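
As one possible shortcut, you can attach the secret with oc set volume instead of editing the manifests by hand. This is a sketch; the workload name below is a placeholder for each of your collectorforopenshift DaemonSets and the Deployment:

# Repeat for both DaemonSets and the Deployment (the workload name is an example)
oc --namespace collectorforopenshift set volume daemonset/collectorforopenshift \
    --add --name=splunk-cacert --type=secret --secret-name=splunk-cacert \
    --mount-path=/splunk-cacert --read-only=true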

Then update the ConfigMap under the [output.splunk] section:

[output.splunk]

# Allow invalid SSL server certificate
insecure = false

# Path to CA certificate
caPath = /splunk-cacert/cacert.pem

# CA Name to verify
caName = SplunkServerDefaultCert

In this configuration, we define the path to the CA certificate that the collector should trust and the name of the server specified in the certificate, which is SplunkServerDefaultCert in the case of the default self-signed certificate.
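
If you are not sure which name to put into caName, you can inspect the certificate that the HTTP Event Collector endpoint presents; a sketch, assuming your endpoint is hec.example.com:8088:

openssl s_client -connect hec.example.com:8088 -CAfile cacert.pem </dev/null 2>/dev/null \
    | openssl x509 -noout -subject

The common name (CN) shown in the printed subject is the value to use for caName; for the default Splunk certificate it is SplunkServerDefaultCert.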

After applying this update, the collector uses a trusted SSL connection to HTTP Event Collector.

HTTP Event Collector incorrect index behavior

HTTP Event Collector rejects payloads that target indexes the specified token is not allowed to write to. When you override indexes with annotations, it is a very common mistake to misspell the index name or to forget to allow the token to write to that index in Splunk.

The collector provides the incorrectIndexBehavior setting to control how these errors are handled:

  • RedirectToDefault - the default behavior, which forwards events with an incorrect index to the default index of the HTTP Event Collector.
  • Drop - drops events with an incorrect index.
  • Retry - keeps retrying. With this setting, some pipelines, like process stats, can be blocked for the whole host.

You can specify the behavior in the configuration:

[output.splunk]
incorrectIndexBehavior = Drop
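
For reference, index overrides that can run into this behavior are set with annotations; a minimal sketch, assuming the collectord.io/index annotation and example project and index names:

oc annotate namespace my-project collectord.io/index=openshift_prod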

Using proxy for HTTP Event Collector

If you need to use a proxy for HTTP Event Collector, you can define it in the configuration. If you are using an SSL connection, you need to include the certificate used by the proxy as well (similar to how we attach the certificate for Splunk above).

[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
proxyUrl = http://proxy.example:4321
caPath = /proxy-cert/proxy-ca.pem
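
The proxy certificate can be attached the same way as the Splunk CA certificate above; a sketch, assuming a secret named proxy-cert mounted at /proxy-cert/ in every collectorforopenshift workload:

oc --namespace collectorforopenshift create secret generic proxy-cert --from-file=./proxy-ca.pem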

Using multiple HTTP Event Collector endpoints for Load Balancing and Fail-over

The collector can accept multiple HTTP Event Collector URLs for load balancing (if you are using multiple hosts with the same configuration) and for fail-over.

The collector provides you with 3 different algorithms for URL selection:

  • random - choose a random URL on first selection and after each failure (connection error or HTTP status code >= 500)
  • round-robin - choose URLs starting from the first one and move to the next on each failure (connection error or HTTP status code >= 500)
  • random-with-round-robin - choose a random URL on first selection and after that move in round-robin order on each failure (connection error or HTTP status code >= 500)

The default value is random-with-round-robin.

[output.splunk]
urls.0 = https://hec1.example.com:8088/services/collector/event/1.0
urls.1 = https://hec2.example.com:8088/services/collector/event/1.0
urls.2 = https://hec3.example.com:8088/services/collector/event/1.0

urlSelection = random-with-round-robin

token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0

Enable indexer acknowledgement

HTTP Event Collector provides indexer acknowledgment, which lets you know when a payload has not only been accepted by HTTP Event Collector but also written to the indexer. Enabling this feature can significantly reduce the performance of the clients, including the collector. But if you need guarantees of data delivery, you can enable it for the HTTP Event Collector token and in the collector configuration.

[general]
acceptLicense = true

[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
ackUrl = https://hec.example.com:8088/services/collector/ack
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
ackEnabled = true
ackTimeout = 3m
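
For reference, this is roughly what the acknowledgment exchange looks like at the HTTP level; the collector performs these steps for you. The sketch uses the placeholder URL and token from above and an arbitrary example channel GUID, and it omits TLS options for brevity:

# Send an event with a channel identifier; with acknowledgment enabled,
# the response includes an ackId
curl https://hec.example.com:8088/services/collector/event/1.0 \
    -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
    -H "X-Splunk-Request-Channel: 3a9e4b2f-6d1c-4f8a-9b0e-5c7d2e1f0a6b" \
    -d '{"event": "test"}'

# Ask the ack endpoint whether the event has been indexed
curl "https://hec.example.com:8088/services/collector/ack?channel=3a9e4b2f-6d1c-4f8a-9b0e-5c7d2e1f0a6b" \
    -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
    -d '{"acks": [0]}'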

Client certificates for collector

If you secure your HTTP Event Collector endpoint by requiring client certificates, you can embed them in the image and provide the configuration to use them:

[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
clientCertPath = /client-cert/client-cert.pem
clientKeyPath = /client-cert/client-cert.key
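
As an alternative to embedding the files in the image, you can mount them from a secret, the same way the CA certificate is attached above; a sketch, assuming a secret named client-cert mounted at /client-cert/ in every collectorforopenshift workload:

oc --namespace collectorforopenshift create secret generic client-cert \
    --from-file=./client-cert.pem --from-file=./client-cert.key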

Support for multiple Splunk clusters

If you need to forward logs from the same OpenShift cluster to multiple Splunk clusters, you can configure an additional Splunk output in the configuration:

[output.splunk::prod1]
url = https://prod1.hec.example.com:8088/services/collector/event/1.0
token = AF420832-F61B-480F-86B3-CCB5D37F7D0D

All other configuration values are taken from the default output output.splunk.

Then override the output for specific Pods or Projects with an annotation like collectord.io/output=splunk::prod1.
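
For example, to send everything from a project to the second cluster (the project name is an example):

oc annotate namespace prod1-apps collectord.io/output=splunk::prod1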


About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications that give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use, easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to help you keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.