collectorfordocker forwards all events to the default index specified for the HTTP Event Collector token.
Every HTTP Event Collector token has a list of indexes to which this specific token is allowed to write data. One of the indexes
from this list is also used as the default index when the sender of the data does not specify a target index.
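For reference, both the list of allowed indexes and the default index are part of the HTTP Event Collector token definition. Below is a minimal sketch of such a token in inputs.conf; the token name, GUID and index names are placeholders for illustration.

# Sketch of an HTTP Event Collector token (placeholders only).
# "indexes" lists the indexes this token is allowed to write to;
# "index" is the default used when an event does not specify one.
[http://docker]
token = 00000000-0000-0000-0000-000000000000
indexes = docker_logs,docker_stats
index = docker_logs
disabled = 0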
The application assumes that you are writing data to indexes that are searchable by default by your Splunk role.
As an example, the main index is searchable by default.
If you use a different index, which isn't searchable by default by your Splunk role, you will not see data on the dashboards.
To fix that, you can include this index in the Indexes searched by default for your role under Settings - Access controls - Roles.
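The same role setting can also be managed in configuration. Below is a minimal sketch in authorize.conf, assuming a hypothetical role named docker_user; the list of indexes searched by default is semicolon-separated.

# Sketch: make the Docker indexes searchable by default for a hypothetical "docker_user" role.
[role_docker_user]
srchIndexesDefault = docker_logs;docker_stats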
Alternatively, you can change the Search Macros we use in the application and include the list of indexes you use for the Monitoring Docker
events. You can find the search macros in Splunk Web UI under Settings - Advanced search
- Search macros (or by overriding them in macros.conf).
You need to modify the macro definitions and add the indexes you use, for example:
macro_docker_stats = (index=docker_stats sourcetype=docker_stats)
You need to update the following macros:
- macro_docker_events - all Docker events.
- macro_docker_host_logs - host logs.
- macro_docker_logs - container logs.
- macro_docker_proc_stats - process metrics.
- macro_docker_net_stats - network metrics.
- macro_docker_net_socket_table - network socket tables.
- macro_docker_mount_stats - container runtime storage usage metrics.
- macro_docker_stats - system and container metrics.
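As a sketch of such a change, assuming you also forward system and container metrics to a second, hypothetical index named docker_stats_prod, the definition of macro_docker_stats could be extended to search both indexes:

macro_docker_stats = ((index=docker_stats OR index=docker_stats_prod) sourcetype=docker_stats)

The other macros can be extended in the same way with the indexes you use for the corresponding types of data.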
Using dedicated indexes for different types of data
Considering the application access patterns and the content of the events, we recommend splitting logs from metrics
and using dedicated indexes, for example docker_logs for events, container and host logs, and docker_stats for process and system
metrics. You can also specify a dedicated index for every type of data the collector forwards.
Using dedicated indexes also allows you to specify different retention policies for logs and metrics.
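For reference, here is a minimal sketch of how two such dedicated indexes could be defined on the Splunk indexers in indexes.conf; the retention values are placeholders that you should adjust to your needs.

# Sketch: dedicated indexes with different retention policies (frozenTimePeriodInSecs is in seconds).
[docker_logs]
homePath = $SPLUNK_DB/docker_logs/db
coldPath = $SPLUNK_DB/docker_logs/colddb
thawedPath = $SPLUNK_DB/docker_logs/thaweddb
# example: keep logs for 30 days
frozenTimePeriodInSecs = 2592000

[docker_stats]
homePath = $SPLUNK_DB/docker_stats/db
coldPath = $SPLUNK_DB/docker_stats/colddb
thawedPath = $SPLUNK_DB/docker_stats/thaweddb
# example: keep metrics for 90 days
frozenTimePeriodInSecs = 7776000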
You can override the indexes in the configuration for the collector. Read Collector Configuration for how to apply these changes.
[input.system_stats]
index = docker_stats

[input.proc_stats]
index = docker_stats

[input.net_stats]
index = docker_stats

[input.net_socket_table]
index = docker_stats

[input.mount_stats]
index = docker_stats

[input.files]
index = docker_logs

[input.app_logs]
index = docker_logs

[input.files::syslog]
index = docker_logs

[input.files::logs]
index = docker_logs

[input.docker_events]
index = docker_logs
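Once the collector picks up the new configuration, a quick sanity check (a sketch; run it over a recent time range) is to verify that events arrive in the intended indexes:

index=docker_stats OR index=docker_logs | stats count by index, sourcetype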