Outcold Solutions LLC

Monitoring Linux - Version 5

Monitoring Linux Installation

With our solution for Monitoring Linux, you can start monitoring your Linux hosts in under 10 minutes, including forwarding metadata-enriched host logs and metrics. You can request an evaluation license that is valid for 30 days.

Install Monitoring Linux application

Install Monitoring Linux from Splunkbase. You need to install it on Search Heads only.

If you created a dedicated index that is not searchable by default, modify the macro macro_linux_base to include this index.

macro_linux_base = (index=linux)
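
For example, if you send data to a dedicated index named linux_prod (a hypothetical name used here only for illustration), the macro could look like this:

macro_linux_base = (index=linux OR index=linux_prod)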

Enable HTTP Event Collector in Splunk

Outcold Solutions' Collector sends data to Splunk using HTTP Event Collector. By default, Splunk does not enable HTTP Event Collector. Please read HTTP Event Collector walkthrough to learn more about HTTP Event Collector.
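
On a standalone Splunk Enterprise instance, you can typically enable the input and create a token from Splunk Web under Settings > Data Inputs > HTTP Event Collector (use Global Settings to enable the collector and New Token to generate a token); for distributed or cloud deployments, follow the walkthrough linked above.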

The minimum requirement is Splunk Enterprise or Splunk Cloud 6.5. If you are managing Splunk clusters with a version below 6.5, please read our FAQ on how to set up a Heavy Weight Forwarder in between.

After enabling HTTP Event Collector, you need to find the correct URL for HTTP Event Collector and generate an HTTP Event Collector token. For example, if your Splunk instance is running on hostname hec.example.com, listening on port 8088 with SSL enabled, and the token is B5A79AAD-D822-46CC-80D1-819F80D7BFB0, you can test it with the curl command shown below.

$ curl -k https://hec.example.com:8088/services/collector/event/1.0 -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" -d '{"event": "hello world"}'
{"text": "Success", "code": 0}

-k is necessary for self-signed certificates.

If you are using Splunk Cloud, the HTTP Event Collector URL is not the same as the URL for Splunk Web; see Send data to HTTP Event Collector on Splunk Cloud instances for details.
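
For Splunk Cloud Platform deployments, the HTTP Event Collector endpoint typically uses a dedicated http-inputs hostname rather than your Splunk Web hostname; the stack name below is a placeholder:

https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event/1.0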

Install Collector for Linux

Download collectorforlinux.tar.gz and extract it to /opt/collectorforlinux (the archive contains builds for both amd64 and aarch64 architectures).

sudo curl -o /tmp/collectorforlinux.tar.gz https://www.outcoldsolutions.com/docs/monitoring-linux/builds/5.21.410/collectorforlinux.tar.gz
sudo mkdir -p /opt/collectorforlinux
sudo tar -xvf /tmp/collectorforlinux.tar.gz -C /opt/collectorforlinux
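
To verify the extraction, list the installation directory; the exact layout can vary between releases, but you should see at least the bin and etc directories used in the steps below.

ls /opt/collectorforlinux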

Open /opt/collectorforlinux/etc/002-user.conf with your favorite editor, for example

sudo vi /opt/collectorforlinux/etc/002-user.conf

This file contains the basic configuration for collectord used by Monitoring Linux. You can find the default configuration in /opt/collectorforlinux/etc/002-general.conf.

In the file 002-user.conf, modify the Splunk URL and token values, review and accept the license agreement, and include the license key (request an evaluation license key with this automated form).

[general]
acceptLicense = true
license = ...
fields.linux_cluster = dev

[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
insecure = true
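
In this example, insecure = true tells collectord to skip SSL certificate verification, matching the -k flag used in the curl test above; keep it only if your HTTP Event Collector endpoint uses a self-signed certificate. The fields.linux_cluster value is an arbitrary label (dev in this sketch) attached to forwarded data, which you can use to distinguish groups of hosts.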

You can run collectorforlinux from the terminal

sudo /opt/collectorforlinux/bin/collectorforlinux
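
Running collectord in the foreground like this is a quick way to validate the configuration; press Ctrl+C to stop it before installing it as a systemd service in the next step.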

Install the collectorforlinux service with systemd

We provide a service definition that you can link with the systemctl command and run as a daemon in the background

sudo systemctl link /opt/collectorforlinux/bin/collectorforlinux.service
sudo systemctl daemon-reload
sudo systemctl enable collectorforlinux
sudo systemctl start collectorforlinux

Verify that collectord is running with

sudo journalctl -fu collectorforlinux
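
You can also check the service state with systemd and, once data starts flowing, confirm that events reach Splunk with a search scoped by the same macro the application uses (a minimal sketch):

sudo systemctl status collectorforlinux

And in Splunk:

`macro_linux_base` | head 10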

Next steps


About Outcold Solutions

Outcold Solutions provides solutions for monitoring Kubernetes, OpenShift and Docker clusters in Splunk Enterprise and Splunk Cloud. We offer certified Splunk applications that give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use, easy-to-deploy solutions for Linux and Windows containers. We deliver applications that help developers monitor their applications and help operators keep their clusters healthy. With the power of Splunk Enterprise and Splunk Cloud, we offer one solution to help you keep all the metrics and logs in one place, allowing you to quickly address complex questions on container performance.