Outcold Solutions - Monitoring Kubernetes, OpenShift and Docker in Splunk

Getting started with Monitoring Kubernetes, OpenShift and Docker on your development box

If you find any issues or have trouble with this guide, please reach out to us at support@outcoldsolutions.com.

This blog post will guide you through the process of setting up a development environment with Docker, Kubernetes, and OpenShift, and monitoring them with Splunk. This guide mostly references macOS as the development environment, but you can adapt it for other operating systems as well.

We provide configurations that work out of the box in most cases. The good news is that most Kubernetes and OpenShift providers have very similar default configurations. In this blog post I will walk you through the steps you need to perform on a local development box to install all three of our main applications for Monitoring Docker, Kubernetes and OpenShift in Splunk.

Install Splunk Enterprise using Docker

You can install Splunk as usual on your development box, or use the official Splunk Docker image to download and install Splunk Enterprise. Below is a basic configuration to get Splunk up and running; you can find more details about the available configuration options in the documentation.

Note for macOS users: in Docker Desktop settings, under General, enable Use Rosetta for x86_64/amd64 emulation on Apple Silicon.

You can change the default password changeme to a more secure password, and change the token to a unique GUID; make sure to keep these changes in sync with the other commands below.

docker run -d --platform linux/amd64 \
    --name splunk \
    -p 8000:8000 \
    -p 8088:8088 \
    -e 'SPLUNK_GENERAL_TERMS=--accept-sgt-current-at-splunk-com' \
    -e 'SPLUNK_START_ARGS=--accept-license' \
    -e 'SPLUNK_PASSWORD=changeme' \
    -e 'SPLUNK_HEC_TOKEN=00000000-0000-0000-0000-000000000000' \
    -v splunk_etc:/opt/splunk/etc \
    -v splunk_var:/opt/splunk/var \
    splunk/splunk:latest

Give it a few minutes and check the logs from the container with docker logs splunk; at the end you should see the message Ansible playbook complete, will begin streaming var/log/splunk/splunkd_stderr.log.
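You can also verify that the HEC endpoint is reachable before installing anything else. This is a quick sanity check against Splunk's standard HEC health endpoint, which does not require authentication (the Docker image enables SSL on HEC by default, hence -k for the self-signed certificate):

```shell
# A healthy endpoint should respond with {"text":"HEC is healthy","code":17}
curl -k https://localhost:8088/services/collector/health
```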

Open Splunk Web in the browser at http://localhost:8000 and log in with the default user admin and the password changeme.

Splunk

Splunk comes with a trial license that allows 500MB of ingestion a day for 30 days. If you want to restart the trial license, you can stop the container, remove it, remove both volumes with docker volume rm splunk_etc splunk_var, and recreate it.
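The reset steps above can be sketched as:

```shell
# Stop and remove the container, then drop both volumes to reset the trial
docker stop splunk
docker rm splunk
docker volume rm splunk_etc splunk_var
# Re-run the docker run command above to start with a fresh trial license
```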

But if you want to run it for more than 30 days, you can get a developer license from here, which you can renew every 6 months and which allows you to ingest up to 10GB a day.

Install applications for Monitoring Docker, OpenShift and Kubernetes

Depending on which platform you are planning to use, you can install one or more of our applications. Assuming you are running Splunk locally, go to the page http://localhost:8000/en-US/manager/launcher/apps/local and click the Browse More Apps button in the top right corner, then search for a specific application or just for outcold solutions. When you click the Install button, it will ask for your Splunk credentials. Those are the credentials you use to log in to https://splunk.com, including https://splunkbase.splunk.com. If you don't have an account, you will need to create one.

Another option is to download applications directly from SplunkBase:

And upload them to your Splunk instance using the same page http://localhost:8000/en-US/manager/launcher/apps/local, just with the Install App From File button.

Request development license for Collectord

You can request a development license for Collectord from here. The license will be delivered instantly to your inbox. You can renew the license every 180 days using the same form.

Install Collectord on Docker

You can follow our latest installation instructions to learn in more detail how to install Collectord.

But below is an example of how to install Collectord on Docker so that it forwards metrics and logs to the Splunk instance we just started. Just make sure to replace ... with the license key that you received from us.

docker run -d \
    --name collectorfordocker \
    --volume /:/rootfs/:ro \
    --volume collector_data:/data/ \
    --cpus=1 \
    --cpu-shares=204 \
    --memory=256M \
    --restart=always \
    --env "COLLECTOR__SPLUNK_URL=output.splunk__url=https://host.docker.internal:8088/services/collector/event/1.0" \
    --env "COLLECTOR__SPLUNK_TOKEN=output.splunk__token=00000000-0000-0000-0000-000000000000"  \
    --env "COLLECTOR__SPLUNK_INSECURE=output.splunk__insecure=true"  \
    --env "COLLECTOR__ACCEPTLICENSE=general__acceptLicense=true" \
    --env "COLLECTOR__LICENSE=general__license=..." \
    --env "COLLECTOR__CLUSTER=general__fields.docker_cluster=-" \
    --privileged \
    outcoldsolutions/collectorfordocker:25.10.3
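Once the container starts, a quick sanity check that Collectord is running; any HEC connection errors would show up in these logs:

```shell
# Confirm the container is up and inspect its logs
docker ps --filter name=collectorfordocker
docker logs collectorfordocker
```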

Navigate to the Monitoring Docker application in Splunk at http://localhost:8000/en-US/app/monitoringdocker/.

Monitoring Docker

After that, we highly recommend looking at use cases of how you can configure forwarding pipelines with annotations.

Kubernetes using minikube

The easiest way to start playing with Kubernetes is to use minikube. You can follow the official documentation to learn how to install it. But if you already have Homebrew installed on your Mac, you can install minikube with brew install vfkit minikube kubernetes-cli (this also installs the vfkit driver and kubectl).

Depending on your system, you might want to change the default configuration:

  • minikube config set cpus 4 - adjust CPU (depends on how much CPU your development box has)
  • minikube config set disk-size 60GB - give more disk space for minikube VM (default is 20GB)
  • minikube config set memory 4096 - provide more memory (default is 2048)

On macOS, also change the default driver to vfkit and the container runtime to containerd.

  • minikube config set driver vfkit
  • minikube config set container-runtime containerd

After that, you can start a local instance with:

minikube start
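Once minikube reports that the cluster is running, you can verify it with kubectl (installed above as kubernetes-cli):

```shell
# The single minikube node should report a Ready status
kubectl get nodes
kubectl cluster-info
```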

Install Collectord to start collecting and forwarding metrics and logs to the Splunk instance that we just started. Just make sure to replace ... with the license key that you received from us.

For simplicity we will use helm to install Collectord.

Install helm with Homebrew

brew install helm

And after that install Collectord (make sure to replace ... with the license key that you received from us)

helm install collectorforkubernetes \
    --namespace collectorforkubernetes \
    --create-namespace \
    --set collectord.configuration.general.acceptLicense=true \
    --set collectord.configuration.general.license=... \
    --set collectord.configuration.outputs.splunk.default.url=https://host.minikube.internal:8088/services/collector/event/1.0 \
    --set collectord.configuration.outputs.splunk.default.token=00000000-0000-0000-0000-000000000000 \
    --set collectord.configuration.outputs.splunk.default.insecure=true \
    oci://registry-1.docker.io/outcoldsolutions/collectord-splunk-kubernetes
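Before opening Splunk, you can verify that the Collectord pods are running (the namespace matches the --namespace flag used above):

```shell
# All pods in the collectorforkubernetes namespace should be Running
kubectl -n collectorforkubernetes get pods
```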

Navigate to the Monitoring Kubernetes application in Splunk at http://localhost:8000/en-US/app/monitoringkubernetes/.

Monitoring Kubernetes

You can try use cases showing how to transform and forward additional data using annotations for Collectord.

OpenShift using crc

Installing local OpenShift using crc is pretty straightforward as well. But to use the official OpenShift distribution, you will need to create a Red Hat account. If for some reason you cannot create a Red Hat account, you can still use crc to install the OKD distribution of OpenShift.

To use crc with OKD, you can download it from here and set the preset to okd with crc config set preset okd. See Running OKD with CRC.

crc config set kubeadmin-password kubeadmin
crc config set memory 12288
crc config set disk-size 100
crc config set cpus 8
crc config set host-network-access true

In the case of the OpenShift preset, you will also need to set crc config set pull-secret-file ~/Downloads/pull-secret (where ~/Downloads/pull-secret is the path to the pull secret file that you downloaded from your Red Hat account).

Now just set up and start crc:

crc setup
crc start

After that, you can get access to the oc tool (which is just the OpenShift version of kubectl) and log in to the cluster:

eval $(crc oc-env)
oc login -u kubeadmin
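After logging in (using the kubeadmin password set in the configuration above), you can confirm that the cluster is reachable:

```shell
# Verify the logged-in user and that the node is Ready
oc whoami
oc get nodes
```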

If you have not already, install helm with Homebrew:

brew install helm

And install Collectord on the cluster (make sure to replace ... with the license key that you received from us)

helm install collectorforopenshift \
    --namespace collectorforopenshift \
    --create-namespace \
    --set collectord.configuration.general.acceptLicense=true \
    --set collectord.configuration.general.license=... \
    --set collectord.configuration.outputs.splunk.default.url=https://host.crc.testing:8088/services/collector/event/1.0 \
    --set collectord.configuration.outputs.splunk.default.token=00000000-0000-0000-0000-000000000000 \
    --set collectord.configuration.outputs.splunk.default.insecure=true \
    oci://registry-1.docker.io/outcoldsolutions/collectord-splunk-openshift
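As with the Kubernetes setup, you can verify that the Collectord pods are running before opening Splunk:

```shell
# All pods in the collectorforopenshift namespace should be Running
oc -n collectorforopenshift get pods
```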

Navigate to the Monitoring OpenShift application in Splunk at http://localhost:8000/en-US/app/monitoringopenshift/.

Monitoring OpenShift

You can try use cases showing how to transform and forward additional data using annotations for Collectord.

Learning Kubernetes and OpenShift

If you have just started learning Kubernetes and OpenShift, take a look at the Kubernetes tutorials, the OKD documentation, and the Red Hat OpenShift documentation.


About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications that give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use and easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to keep all your metrics and logs in one place, allowing you to quickly address complex questions on container performance.