On OpenShift, swap `kubectl` for `oc` and use the `collectorforopenshift-syslog` namespace.
Verify configuration
When something looks off, the first thing to do is run collectord verify from inside a Collectord pod. It checks the configuration end-to-end — license, syslog output, container runtime, file inputs — and reports each item as OK or FAILED.
Start by listing the Collectord pods:
```
$ kubectl get pods -n collectorforkubernetes-syslog
NAME                                                   READY   STATUS    RESTARTS   AGE
collectorforkubernetes-syslog-addon-857fccb8b9-t9qgq   1/1     Running   1          1h
collectorforkubernetes-syslog-master-bwmwr             1/1     Running   0          1h
collectorforkubernetes-syslog-xbnaa                    1/1     Running   0          1h
```

Collectord runs as three workloads: a DaemonSet on master nodes (collectorforkubernetes-syslog-master), a DaemonSet on the rest of the nodes (collectorforkubernetes-syslog), and a single Deployment add-on (collectorforkubernetes-syslog-addon). Run verify against one pod from each so every code path is exercised:
```
$ kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-addon-857fccb8b9-t9qgq -- /collectord verify
$ kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-master-bwmwr -- /collectord verify
$ kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-xbnaa -- /collectord verify
```

Each command produces output similar to:
```
Version = 5.2.176
Build date = 181012
Environment = kubernetes


 General:
 + conf: OK
 + db: OK
 + db-meta: OK
 + instanceID: OK
   instanceID = 2LEKCFD4KT4MUBIAQSUG7GRSAG
 + license load: OK
   trial
 + license expiration: OK
   license expires 2018-11-12 15:51:18.200772266 -0500 EST
 + license connection: OK

 Kubernetes configuration:
 + api: OK
 + pod cgroup: OK
   pods = 18
 + container cgroup: OK
   containers = 39
 + volumes root: OK
 + runtime: OK
   docker

 Docker configuration:
 + connect: OK
   containers = 43
 + path: OK
 + cgroup: OK
   containers = 40
 + files: OK

 CRI-O configuration:
 - ignored: OK
   kubernetes uses other container runtime

 File Inputs:
 x input(syslog): FAILED
   no matches
 + input(logs): OK
   path /rootfs/var/log/

Errors: 1
```

The total number of errors appears at the bottom. Not every failure is a real problem; some are expected on smaller or non-standard clusters. The example above is from minikube, where this failure is benign:
- input(syslog): minikube doesn't persist syslog to disk, so those logs aren't available.
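When verify runs on several pods, scanning the reports by eye gets tedious. Below is a minimal triage sketch in shell; the `count_failed` helper is a local function invented for illustration, not a Collectord command, and the sample lines mirror the report above:

```shell
# Count FAILED checks in verify output piped on stdin.
# Note: grep -c exits non-zero when nothing matches (i.e. zero failures).
count_failed() { grep -c 'FAILED'; }

# Demo on two sample lines taken from the report format above:
printf '%s\n' \
  ' + input(logs): OK' \
  ' x input(syslog): FAILED' | count_failed
# prints 1
```

In practice you would pipe a saved verify report (or the `kubectl exec ... /collectord verify` output directly) into the helper instead of the sample lines.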
If you fix a real configuration error, `kubectl apply -f ./collectorforkubernetes-syslog.yaml` won't restart the running pods. Delete them so the workloads recreate them with the new config:

```
kubectl delete pods --all -n collectorforkubernetes-syslog
```
Describe command
When the same setting can be defined on a pod, its workload, the namespace, a CRD Configuration, and the ConfigMap, it can be hard to track which value Collectord is actually using. The collectord describe command resolves a pod’s effective configuration and prints every field. Run it from inside any Collectord pod:
```
kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-master-4gjmc -- /collectord describe --namespace default --pod postgres-pod --container postgres
```

Starting with version 26.04, describe also tags each resolved field with its origin in square brackets:
- `[pod]`: the value comes from a pod annotation
- `[namespace]`: the value comes from a namespace annotation
- `[configuration:<name>]`: the value comes from a Collectord CRD Configuration resource (the `<name>` matches the resource name)
This makes it easy to trace which level of the configuration hierarchy is winning when the same syslog.collectord.io/ annotation is defined at multiple levels — for example, when a CRD-level default is being overridden by a pod-level annotation, or when a namespace annotation is unexpectedly routing logs to a different output:
```
$ kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-fqhmv -- /collectord describe --namespace webportal --pod audit-logger-774675c89c-rpfwx | grep '\['
logs-type [pod] = audit_logs
volume.1-logs-name [pod] = data
volume.1-logs-glob [pod] = *.log
```

This is especially useful when debugging why a pod is routing to an unexpected output, using the wrong sourcetype, or picking up a field extraction you didn't expect.
Collect diagnostic information
When you open a support case, attach a diagnostic bundle so we can reproduce the issue without a back-and-forth. The bundle includes performance, memory, and telemetry metrics, host Linux information, and the Collectord configuration — sensitive output settings are stripped out.
Run all four steps below.
1. Collect diagnostic information
Pick any Collectord pod and run collectord diag. The command takes a few minutes:
```
kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-master-bwmwr -- /collectord diag --stream 1>diag.tar.gz
```

You can extract the archive yourself to see exactly what's in it: performance and memory profiles, basic telemetry metrics, host Linux info, and license metadata.
2. Collect logs
```
kubectl logs -n collectorforkubernetes-syslog --timestamps collectorforkubernetes-syslog-master-bwmwr 1>collectorforkubernetes.log 2>&1
```

3. Run verify
```
kubectl exec -n collectorforkubernetes-syslog collectorforkubernetes-syslog-master-bwmwr -- /collectord verify > verify.log
```

4. Prepare tar archive
```
tar -czvf collectorforkubernetes-$(date +%s).tar.gz verify.log collectorforkubernetes.log diag.tar.gz
```
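Before attaching the archive to a support case, it's worth confirming all three artifacts actually made it in. A small local helper sketch (`check_bundle` is not a Collectord command, just a shell function assumed here for convenience):

```shell
# List the bundle and fail loudly if any of the three artifacts is missing.
# check_bundle is a local helper, not part of Collectord.
check_bundle() {
  for f in verify.log collectorforkubernetes.log diag.tar.gz; do
    # -x matches the whole entry name, -F treats dots literally
    tar -tzf "$1" | grep -qxF "$f" || { echo "missing: $f"; return 1; }
  done
  echo "bundle complete"
}
```

Running `check_bundle collectorforkubernetes-<timestamp>.tar.gz` prints `bundle complete` when every file is present, or names the first missing one.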