This guide walks you through deploying Collectord to forward Kubernetes container logs, host logs, and audit logs to ElasticSearch or OpenSearch. A typical install takes under 10 minutes. If you don’t have a license yet, you can request a 30-day evaluation.
Install Collectord for Kubernetes / OpenShift
Note for clusters running Collectord for Splunk
If you’re already running Collectord for Splunk on the same cluster, the ElasticSearch deployment lives alongside it without conflict. A few details worth knowing up front:
- Both Collectord instances share the `collectorforkubernetes` namespace. The ElasticSearch deployment doesn't disturb the Splunk one; they run side by side.
- The ElasticSearch instance ships with `annotationSubdomain` set to `elasticsearch`, so it only reads `elasticsearch.collectord.io/` annotations and ignores plain `collectord.io/` annotations meant for the Splunk instance. This is what keeps the two from stepping on each other's annotations.
- Licensing is shared. Use the same license key for both; running multiple Collectord instances on one cluster doesn't multiply your usage.
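For illustration, with `annotationSubdomain` set to `elasticsearch`, an annotation aimed at the ElasticSearch instance carries the subdomain prefix. The annotation key below is a hypothetical example to show the prefixing, not a documented Collectord option:

```yaml
# Hypothetical pod annotation: the elasticsearch. prefix routes it to the
# ElasticSearch Collectord instance; the Splunk instance ignores it.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    elasticsearch.collectord.io/logs-index: "my-custom-index"
spec:
  containers:
    - name: my-app
      image: my-app:latest
```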
Installation
If you prefer Helm, charts are also available; pick the one that matches your destination. Otherwise, follow the steps below.
1. Download configuration
Grab the latest collectorforkubernetes-elasticsearch.yaml. The manifest creates the collectorforkubernetes namespace and deploys every workload Collectord needs.
2. Configure Collectord
Open the file in your editor and point it at one or more ElasticSearch hosts. Review and accept the license agreement and paste in your license key — request an evaluation key with this form if you need one.
```ini
[general]

acceptLicense = false

license =

fields.orchestrator.cluster.name = -

...

# ElasticSearch output
[output.elasticsearch]

host =

authorizationBasicUsername =
authorizationBasicPassword =

insecure = false
```

A filled-in example:
```ini
[general]

acceptLicense = true

license = ...

fields.orchestrator.cluster.name = development

...

# ElasticSearch output
[output.elasticsearch]

host = https://elasticsearch:9200

authorizationBasicUsername = elastic
authorizationBasicPassword = elastic

insecure = true
```

3. Additional Configurations
If you’re deploying onto a cluster that’s been running for a while and has a lot of historical logs on disk, Collectord will start by forwarding all of them, which can spike both your network and your ElasticSearch ingest. Use the `[general]` settings `thruputPerSecond` to cap throughput and `tooOldEvents` to skip events older than a given age.

Collectord submits an index lifecycle policy named `logs-collectord` that deletes indices older than 30 days. Review `es-default-index-lifecycle-management-policy.json` and adjust the retention to match your needs.

Collectord submits two index templates: one for the `logs-collectord-${COLLECTORD_VERSION}` data stream and one for `logs-collectord-failed-${COLLECTORD_VERSION}`. Review `es-default-index-template.json` and `es-failed-index-template.json` and tune the shard and mapping settings for your environment.

The `logs-collectord-failed-${COLLECTORD_VERSION}` index is where Collectord routes events that fail ingestion into the default index, typically because of a mapping conflict (for example, the type of a field changed and ElasticSearch can no longer index it). Watching this index is a quick way to spot mapping problems.
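As a sketch, the throttling settings mentioned above would sit in the `[general]` section of the manifest. The values here are placeholders; check the comments in the manifest itself for the exact syntax and supported units:

```ini
[general]
# Cap forwarding throughput so replaying historical logs doesn't
# saturate the network or ElasticSearch ingest (placeholder value)
thruputPerSecond = 512Kb
# Skip events older than 7 days instead of replaying all history
tooOldEvents = 168h
```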
4. Apply the configuration
Apply the manifest with kubectl:
```bash
$ kubectl apply -f ./collectorforkubernetes-elasticsearch.yaml
```

Check that the workloads came up:

```bash
$ kubectl get all --namespace collectorforkubernetes
```

Give it a few moments to pull the image and start the containers. Once the pods are Running, head to ElasticSearch or OpenSearch and you should see data arriving.
By default, Collectord forwards container logs, host logs (including syslog), and audit logs (when enabled on the API server).
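Audit logs only flow if auditing is enabled on the API server. As a rough illustration, on a self-managed cluster this typically means starting kube-apiserver with flags like the following; the paths are examples, and you must supply your own audit policy file:

```
# Illustrative kube-apiserver flags; adjust paths for your distribution
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit/audit.log
--audit-log-maxage=30
```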
ElasticSearch configuration
You can start using ElasticSearch right away — open Observability -> Logs and you’ll see your data.
OpenSearch configuration
For a fresh OpenSearch installation, you need to create an index pattern before logs are visible:
- Go to `Management` -> `Stack Management` -> `Index Patterns` -> `Create Index Pattern`
- Use `logs-collectord-*` as the index pattern name.
- Click `Next step`.
- Select `@timestamp` as the time field name.
- Click `Create index pattern`.
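If you'd rather script this step than click through the UI, OpenSearch Dashboards exposes a Kibana-compatible saved-objects API; the endpoint and request shape below are assumptions based on that API rather than something this guide documents. The request body for the index pattern above would look roughly like:

```json
{
  "attributes": {
    "title": "logs-collectord-*",
    "timeFieldName": "@timestamp"
  }
}
```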