
Forwarding Kubernetes logs to ElasticSearch and OpenSearch

Troubleshooting

Verify configuration

Get the list of the pods

$ kubectl get pods -n collectorforkubernetes
NAME                                           READY     STATUS    RESTARTS   AGE
collectorforkubernetes-elasticsearch-addon-857fccb8b9-t9qgq   1/1       Running   1          1h
collectorforkubernetes-elasticsearch-xbnaa                    1/1       Running   0          1h

Considering that we have two different workload types, the DaemonSet and the Deployment addon (collectorforkubernetes-elasticsearch-addon), verify one pod from each workload (in the exec commands below, change the pod names to the pods that are running on your cluster).
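
If you are not sure which pod belongs to which workload, listing both workload types in the namespace can help:

$ kubectl get daemonset,deployment -n collectorforkubernetes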

$ kubectl exec -n collectorforkubernetes collectorforkubernetes-elasticsearch-addon-857fccb8b9-t9qgq -- /collectord verify
$ kubectl exec -n collectorforkubernetes collectorforkubernetes-elasticsearch-xbnaa -- /collectord verify

For each command, you will see output similar to the following

Version = 5.20.400
Build date = 230405
Environment = kubernetes


  General:
  + conf: OK
  + db: OK
  + db-meta: OK
  + instanceID: OK
    instanceID = 2T55I6TH4P09H9CCSLT0CDKCV8
  + license load: OK
  + license expiration: OK
  + license connection: OK

  ElasticSearch output:
  + OPTIONS(url=https://10.211.55.2:9200): OK

  Kubernetes configuration:
  + api: OK
  + volumes root: OK
  + runtime: OK
    containerd

  CRI-O configuration:
  - ignored: OK
    kubernetes uses other container runtime

  Containerd configuration:
  + api: OK
  + files: OK

  File Inputs:
  + input(syslog): OK
    path /rootfs/var/log/
  + input(logs): OK
    path /rootfs/var/log/

  Journald input:
  + input(journald): OK

The output ends with the number of errors found.

If you find an error in the configuration, apply the change with

kubectl apply -f ./collectorforkubernetes-elasticsearch.yaml

and then recreate the pods. You can simply delete all of them in our namespace; the workloads will recreate them:

kubectl delete pods --all -n collectorforkubernetes
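
Once the pods are back, you can confirm that they are Running again with the same command used above:

kubectl get pods -n collectorforkubernetes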

Describe command

When you apply annotations at the namespace, workload, or pod level, it can be hard to track which annotations end up applied to a specific Pod or Container. The collectord describe command shows which annotations are in effect for a specific Pod. You can run this command from any collectord Pod in the cluster

kubectl exec -n collectorforkubernetes collectorforkubernetes-elasticsearch-4gjmc -- /collectord describe --namespace default --pod postgres-pod --container postgres
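
Because any collectord Pod will do, a small sketch like the following picks the first pod in the namespace instead of hard-coding a name (the POD variable and the jsonpath expression are only an illustration, not part of collectord):

POD=$(kubectl get pods -n collectorforkubernetes -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n collectorforkubernetes "$POD" -- /collectord describe --namespace default --pod postgres-pod --container postgres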

Collect diagnostic information

If you need to open a support case, you can collect diagnostic information, including performance data, metrics, and configuration (excluding the connection URL and credentials).

1. Collect diagnostic information by running the following command

Choose the pod from which you want to collect the diagnostic information.

The following command takes several minutes.

kubectl exec -n collectorforkubernetes collectorforkubernetes-elasticsearch-bwmwr -- /collectord diag --stream 1>diag.tar.gz

You can extract the tar archive to review the information that we collect. It includes performance and memory usage data, basic telemetry metrics, an information file with the host Linux version, and basic information about the license.
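
For example, to review the contents of the archive without extracting it:

tar -tzvf diag.tar.gz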

Performance profiles are only collected if you include the --include-performance-profiles flag in the command.
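
For example, assuming the flag is simply appended to the same diag command shown above:

kubectl exec -n collectorforkubernetes collectorforkubernetes-elasticsearch-bwmwr -- /collectord diag --stream --include-performance-profiles 1>diag.tar.gz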

2. Collect logs

kubectl logs -n collectorforkubernetes --timestamps collectorforkubernetes-elasticsearch-bwmwr  1>collectorforkubernetes.log 2>&1
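
If a collectord pod has restarted (see the RESTARTS column in kubectl get pods), the logs from the previous container instance can also be helpful; kubectl can retrieve them with the --previous flag (an optional extra step, and the output file name below is just a suggestion):

kubectl logs -n collectorforkubernetes --timestamps --previous collectorforkubernetes-elasticsearch-bwmwr 1>collectorforkubernetes-previous.log 2>&1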

3. Run verify

kubectl exec -n collectorforkubernetes collectorforkubernetes-elasticsearch-bwmwr -- /collectord verify > verify.log

4. Prepare tar archive

tar -czvf collectorforkubernetes-$(date +%s).tar.gz verify.log collectorforkubernetes.log diag.tar.gz
