Configuration
Using images on docker.io (hub.docker.com)
These images are built from scratch (an empty base image).
For the current (most recent) version of OpenShift
Configuration files for previous Collectord releases can be found at github.com/outcoldsolutions/collectord-configurations
Using certified images on registry.connect.redhat.com
These images are built on top of RHEL images; see outcoldsolutions/collectorforopenshift. To pull images from this registry, you need to authenticate, see instructions below.
For the current (most recent) version of OpenShift
Configuration files for previous Collectord releases can be found at github.com/outcoldsolutions/collectord-configurations
registry.connect.redhat.com authentication
registry.connect.redhat.com is not the same as registry.access.redhat.com. The latter hosts Red Hat's own images and works with OpenShift clusters out of the box; the former hosts certified images from partners and requires authentication.
You need to specify a secret to authenticate with registry.connect.redhat.com. To learn how to use other secured registries, follow the link Allowing Pods to Reference Images from Other Secured Registries. Below is an example of how you can authenticate with registry.connect.redhat.com.
After applying the configuration to your OpenShift cluster, you need to create a secret for pulling images from
the Red Hat registry. Make sure you are in the same project/namespace where the collectord is deployed (collectorforopenshift
is the default project/namespace).
$ oc project collectorforopenshift
If you are on Linux (for macOS see below), you can log in to the registry using Docker and use the authentication file to create a secret in the OpenShift cluster.
$ docker login registry.connect.redhat.com
Username: [redhat-username]
Password: [redhat-user-password]
Login Succeeded
Make sure to use your username, not your email, when you log in to this registry. Both allow you to log in, but if you log in with your email, you will not be able to pull the image.
After that, you can create a secret for pulling images using the authentication file just created under $HOME/.docker/config.json.
$ oc --namespace collectorforopenshift secrets new rhcc .dockerconfigjson=$HOME/.docker/config.json
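Note: `oc secrets new` has been removed from newer oc clients. On those versions, an equivalent command (a sketch, assuming the same $HOME/.docker/config.json file) would be:

```shell
# Create the rhcc pull secret from the Docker auth file on newer oc clients,
# where "oc secrets new" is no longer available.
oc create secret generic rhcc \
    --namespace collectorforopenshift \
    --from-file=.dockerconfigjson=$HOME/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson
```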
On macOS, Docker does not store authentication data in config.json (it stores it in the keychain instead), so you cannot use that file to create a secret. Instead, create the secret directly from the command line:
$ oc secrets --namespace collectorforopenshift new-dockercfg rhcc --docker-server=registry.connect.redhat.com --docker-username=<user_name> --docker-password=<password> --docker-email=<email>
Make sure this command is not saved in the shell history, as it contains the password in plain text. See Execute command without keeping it in history. You can execute export HISTFILE=/dev/null
in this terminal session, which will stop recording any commands in the history.
Link the created secret rhcc to the collectorforopenshift service account used by the collectord.
$ oc --namespace collectorforopenshift secrets link collectorforopenshift rhcc --for=pull
If some pods have been created before you linked the secret, you will need to recreate them.
You can delete all the pods under the collectorforopenshift
namespace, and the scheduler will recreate pods with the right
secret for pulling images.
$ oc delete --namespace collectorforopenshift pods --all
Created OpenShift Objects
The configuration file collectorforopenshift.yaml
creates several OpenShift Objects.
- Project collectorforopenshift.
- ClusterRole collectorforopenshift with limited capabilities to get, list and watch most of the various deployment objects. The collectord uses this information to enrich logs and stats with OpenShift-specific metadata.
- ServiceAccount collectorforopenshift, used to connect to the OpenShift API.
- ClusterRoleBinding collectorforopenshift, to bind the service account to the cluster role.
- ConfigMap collectorforopenshift, delivers the configuration file for the collectord.
- DaemonSet collectorforopenshift, deploys the collectord on non-master nodes.
- DaemonSet collectorforopenshift-master, deploys the collectord on master nodes.
- Deployment collectorforopenshift-addon, a single collectord instance that forwards data which needs to be collected from the whole cluster only once.
Please read the comments in the collectorforopenshift.yaml
file to get
more detailed information on all configurations and sources of the logs and metrics.
Collectord configuration
ConfigMap
collectorforopenshift
delivers configuration files for the collectord.
These are ini files where you can find all the default values.
Values can be overridden with environment variables in the format specified below:
COLLECTOR__{ANY_NAME}={section}__{key}={value}
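For example (a hypothetical override, assuming the default configuration has a url key in the [output.splunk] section), the variable name after COLLECTOR__ is arbitrary, and the value encodes section__key=value:

```shell
# Hypothetical example: override [output.splunk] url = ... via an environment variable.
# SPLUNK_URL after COLLECTOR__ is an arbitrary name; the value carries section__key=value.
export COLLECTOR__SPLUNK_URL='output.splunk__url=https://splunk.example.com:8088/services/collector/event/1.0'
echo "$COLLECTOR__SPLUNK_URL"
```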
Configurations with environment variables are the simplest way to explore and debug quickly, but we recommend
writing your configuration file based on the default provided with collectorforopenshift.yaml
.
Using secrets to manage configurations
You can use OpenShift secrets and map them as an environment variable to override configurations for the collectord.
As an example, we will show how you can configure HTTP Event Collector and License with secrets.
First, make sure that the collectorforopenshift
namespace already exists. If it does not exist, you need to create it.
oc create namespace collectorforopenshift
oc create secret generic collectorforopenshift \
--namespace collectorforopenshift \
--from-literal=splunk-token="output.splunk__token=B5A79AAD-D822-46CC-80D1-819F80D7BFB0" \
--from-literal=license="general__license="
In our YAML manifest, find the environment variable configuration for each workload (2 DaemonSets and 1 Deployment) and add the following environment variables:
env:
- name: COLLECTOR__SPLUNK_TOKEN
valueFrom:
secretKeyRef:
name: collectorforopenshift
key: splunk-token
- name: COLLECTOR__LICENSE
valueFrom:
secretKeyRef:
name: collectorforopenshift
key: license
Apply the manifest by following the installation instructions.
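Assuming the manifest file is named collectorforopenshift.yaml, as referenced above, applying it typically looks like:

```shell
# Apply (create or update) all objects defined in the manifest.
oc apply -f collectorforopenshift.yaml
```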
Attaching EC2 Metadata
You can include EC2 metadata with the forwarded data (logs and metrics) by specifying a desired field name and a path from the Instance Metadata and User Data.
# Include EC2 Metadata (see list of possible fields https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
# Should be in format ec2Metadata.{desired_field_name} = {url path to read the value}
# ec2Metadata.ec2_instance_id = /latest/meta-data/instance-id
# ec2Metadata.ec2_instance_type = /latest/meta-data/instance-type
As an example, you can modify the YAML file to include two fields, ec2_instance_id and ec2_instance_type:
[general]
...
ec2Metadata.ec2_instance_id = /latest/meta-data/instance-id
ec2Metadata.ec2_instance_type = /latest/meta-data/instance-type
...
Placeholders in indexes and sources
You can apply dynamic index names in the configurations to forward logs or stats to a specific index based on the meta fields. For example, you can define an index as
[input.files]
index = oc_{{openshift_namespace}}
Similarly, you can change the source of all the forwarded logs:
[input.files]
source = /{{openshift_namespace}}/{{::coalesce(openshift_daemonset_name, openshift_deployment_name, openshift_statefulset_name, openshift_cronjob_name, openshift_job_name, openshift_replicaset_name, openshift_pod_name)}}/{{openshift_pod_name}}/{{openshift_container_name}}
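For illustration (hypothetical names): for a container nginx in a pod web-7d4b9 owned by a Deployment web in a namespace demo, coalesce picks the first non-empty workload name (here the Deployment), and the two templates above would render as:

```
index  = oc_demo
source = /demo/web/web-7d4b9/nginx
```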
Links
- Installation
- Start monitoring your OpenShift environments in under 10 minutes.
- Automatically forward host, container and application logs.
- Test our solution with the embedded 30-day evaluation license.
- Collectord Configuration
- Collectord configuration reference.
- Annotations
- Changing index, source, sourcetype for namespaces, workloads and pods.
- Forwarding application logs.
- Multi-line container logs.
- Fields extraction for application and container logs (including timestamp extractions).
- Hiding sensitive data, stripping terminal escape codes and colors.
- Forwarding Prometheus metrics from Pods.
- Audit Logs
- Configure audit logs.
- Forwarding audit logs.
- Prometheus metrics
- Collect metrics from control plane (etcd cluster, API server, kubelet, scheduler, controller).
- Configure the collectord to forward metrics from the services in Prometheus format.
- Configuring Splunk Indexes
- Using non-default HTTP Event Collector index.
- Configure the Splunk application to use indexes that are not searchable by default.
- Splunk fields extraction for container logs
- Configure search-time field extractions for container logs.
- Container logs source pattern.
- Configurations for Splunk HTTP Event Collector
- Configure multiple HTTP Event Collector endpoints for Load Balancing and Fail-overs.
- Secure HTTP Event Collector endpoint.
- Configure the Proxy for HTTP Event Collector endpoint.
- Monitoring multiple clusters
- Learn how to monitor multiple clusters.
- Learn how to set up ACL in Splunk.
- Streaming OpenShift Objects from the API Server
- Learn how to stream all changes from the OpenShift API Server.
- Stream changes and objects from OpenShift API Server, including Pods, Deployments or ConfigMaps.
- License Server
- Learn how to configure a remote License URL for Collectord.
- Monitoring GPU
- Alerts
- Troubleshooting
- Release History
- Upgrade instructions
- Security
- FAQ and the common questions
- License agreement
- Pricing
- Contact