Monitoring OpenShift - Version 4

Forwarding Application Logs

There are several cases when you cannot write logs to the standard output of the container, so our collector will not automatically forward these log files.

In most cases this is caused by the requirement to keep two processes running together. This is an anti-pattern that arises from the need to schedule multiple services together: Docker did not have an easy way to do that, and did not provide a simple communication mechanism between these containers. You can find the original Docker documentation on this problem in Run multiple services in a container.

Kubernetes solved this problem by allowing us to schedule multiple containers in the same Pod. You can find the section about running multiple containers in the same Pod in the Kubernetes documentation: Pod Overview. Containers running in the same Pod share the same loopback interface, which allows them to communicate over the local network. Another option is to communicate through a shared volume, as in the example Communicate Between Containers in the Same Pod Using a Shared Volume.
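
For illustration, here is a minimal sketch of the shared-volume pattern (all names and images below are hypothetical): an application container writes log files into an emptyDir volume, and a sidecar container in the same Pod reads them through the same mount.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    # hypothetical application image that writes its log files to /var/log/app/
    image: example/my-app:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app/
  - name: log-reader
    # the sidecar sees the same files through the shared volume
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app/
  volumes:
  - name: shared-logs
    emptyDir: {}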

Of course, there can still be reasons to keep multiple processes in the same container, or situations when you can't redirect logs to the standard output of the container. Most commonly these logs are called application logs. To forward these logs to Splunk, you can still use the collector.

In our configuration file you can find stanzas for the collector logs [input.files], syslog [input.files::syslog], and other logs [input.files::logs] that the collector can find under /rootfs/var/log/, mounted from the host's /var/log/.

As an example, you can add a section for the application logs to the configuration:

[input.files::application-logs]

# disable collection of these logs
disabled = false

# root location of the application log files
path = /rootfs/var/application-logs/

# walk all the directories under the root folder
recursive = true

# glob pattern used to search for files under the root location;
# the pattern is matched against the path relative to the root location
glob = */*.log

# in some cases you might want to use a regex-like match for file names;
# similarly to the glob pattern, the name relative to the root path
# will be matched against this regex
# match =

# files are read using a polling scheme; when the EOF is reached, this defines
# how often to check if files got updated (we use the default of 250ms everywhere)
pollingInterval = 250ms

# how often to look for new files under the logs path
walkingInterval = 5s

# include verbose fields in events (file offset, useful only for developers)
verboseFields = false

# override type
type = kubernetes_application_logs

# specify the Splunk index (or the default index of the token will be used)
index =

# field extraction: you can use a regex with named groups to extract fields;
# see the syslog configuration for an example
extraction =

# if you have extracted fields using a regex, you can specify which field represents the timestamp
timestampField =

# format for the timestamp;
# the layout defines the format by showing how the reference time,
# `Mon Jan 2 15:04:05 -0700 MST 2006`, would be represented;
# see https://golang.org/pkg/time/#Parse for details
timestampFormat =

# timestamp location (if not defined by the format); use it if you want to override the time zone
timestampLocation =

# adjust the date: if the month/day aren't set in the logs, you can tell the collector
# to use the month and day of the current time
timestampSetMonth = false
timestampSetDay = false
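
For illustration only: if the application writes lines like 2019-05-04 11:23:05.234 INFO Started server, the extraction and timestamp settings could look like the following (this regex and layout are assumptions for that sample format, not values required by the collector).

# extract timestamp, level and message with named groups (sample format assumed above)
extraction = ^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) (?P<level>\w+) (?P<message>.+)$
timestampField = timestamp
# Go reference-time layout matching 2019-05-04 11:23:05.234
timestampFormat = 2006-01-02 15:04:05.000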

Now the collector will pick up all the logs that match /rootfs/var/application-logs/*/*.log. For that, you need to mount this volume into the collector container.

...
        volumeMounts:
        - name: application-logs
          mountPath: /rootfs/var/application-logs/
...
      volumes:
      - name: application-logs
        hostPath:
          path: /var/application-logs/

Assuming that the application in the container writes logs to /db/logs/, you can mount a host volume from /var/application-logs/my-db/ into the application container.

...
        volumeMounts:
        - name: application-logs
          mountPath: /db/logs/
...
      volumes:
      - name: application-logs
        hostPath:
          path: /var/application-logs/my-db/
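
To summarize how the paths line up in this example (the file name server.log is only illustrative):

# inside the application container:  /db/logs/server.log
# on the host:                       /var/application-logs/my-db/server.log
# inside the collector container:    /rootfs/var/application-logs/my-db/server.log
#                                    -> matches the glob */*.log relative to the
#                                       configured root /rootfs/var/application-logs/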

Important notes:

  • The collector cannot recognize these logs as logs from a specific pod or container, so only OpenShift node metadata will be attached to these events.
  • If multiple pods of the same type can be scheduled on the same node, you need to make sure that these containers will not write logs to the same location. For example, you can change the application logic to prefix the log files with the hostname of the pod, or mount a per-pod subdirectory, as shown in the sketch after this list.
  • Make sure that these logs are rotated, so they will not fill up the whole disk.
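
One way to get a per-pod directory without changing the application is the subPathExpr field on the volume mount. Below is a sketch for the application container from the example above, under the assumption that your Kubernetes version supports it (subPathExpr became generally available in Kubernetes 1.17).

...
        env:
        # expose the pod name to the container so it can be used in subPathExpr
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: application-logs
          mountPath: /db/logs/
          # each pod writes to its own host subdirectory,
          # /var/application-logs/my-db-<pod name>/, which still matches the glob */*.log
          subPathExpr: my-db-$(POD_NAME)
...
      volumes:
      - name: application-logs
        hostPath:
          path: /var/application-logs/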

After applying these configuration changes and restarting the collector pods, you should see messages about the picked-up files in the collector container logs.
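
For example, assuming the collector is deployed as a DaemonSet named collectorforopenshift in the collectorforopenshift project (adjust both names to your deployment), you can check its logs with:

oc logs --namespace collectorforopenshift daemonset/collectorforopenshift | grep application-logs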

