When you want Splunk to parse fields out of your container logs at search time, target the extraction at the right containers by matching on Collectord’s source value. Every container log Collectord forwards carries a structured source that encodes the container ID, container name, image name, pod name, namespace, and stream:
/kubernetes/{kubernetes_container_id}/{kubernetes_container_name}/{kubernetes_image_name}/{kubernetes_pod_name}/{kubernetes_namespace}.{docker_stream}

Use that structure in props.conf with wildcards (`*`) for individual segments and `...` to skip several at once; that way you can scope extractions to a single image, a specific container name, or any combination.
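For instance, stanzas like the following sketch the two common scoping patterns. These are hypothetical examples: the namespace `payments`, the container name `my-sidecar`, and the extraction names are placeholders, not values from this document. Recall that in Splunk source matching, `*` stops at `/` while `...` crosses it, which is why `...` is used to skip the multi-segment image name.

```
# Scope to every container in the namespace "payments", any stream
# ("..." skips container ID, container name, image, and pod name):
[source::/kubernetes/.../payments.*]
EXTRACT-payments-example = <your regex here>

# Scope to a specific container name, regardless of image, pod, or namespace:
[source::/kubernetes/*/my-sidecar/...]
EXTRACT-sidecar-example = <your regex here>
```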
For example, to apply an nginx ingress controller access-log extraction to every container running gcr.io/google_containers/nginx-ingress-controller, regardless of container ID, container name, pod, namespace, or stream:
[source::/kubernetes/*/*/gcr.io/google_containers/nginx-ingress-controller:*/*/*]
EXTRACT-nginx-ingress-controller-http = ^(?P<remote_addr>[^ ]+)\s+\-\s+\[(?P<proxy_add_x_forwarded_for>[^\]]+)\]\s+\-\s+(?P<remote_user>[^ ]+)\s+\[(?P<time_local>[^\]]+)[^"\n]*"(?P<request>[^"]+)"\s+(?P<status>\d+)\s+(?P<body_bytes_sent>\d+)\s+"(?P<http_referer>[^"]+)"\s+"(?P<http_user_agent>[^"]+)"\s+(?P<request_length>\d+)\s+(?P<request_time>[^ ]+)\s+\[(?P<proxy_upstream_name>[^\]]+)]\s+(?P<upstream_addr>[^\s]+)\s+(?P<upstream_response_length>\d+)\s+(?P<upstream_response_time>[^\s]+)\s+(?P<upstream_status>\d+)$

If you'd rather override the source or sourcetype on a per-pod or per-namespace basis and key your extractions off those values instead, see Splunk Indexes.
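Before shipping an extraction like this, it helps to exercise the regex outside Splunk. A minimal sketch in Python follows; the named-group syntax is the same in both engines, and the sample access-log line below is invented for illustration, not taken from a real controller:

```python
import re

# The EXTRACT regex from the stanza above, verbatim.
NGINX_ACCESS = re.compile(
    r'^(?P<remote_addr>[^ ]+)\s+\-\s+\[(?P<proxy_add_x_forwarded_for>[^\]]+)\]\s+\-\s+'
    r'(?P<remote_user>[^ ]+)\s+\[(?P<time_local>[^\]]+)[^"\n]*"(?P<request>[^"]+)"\s+'
    r'(?P<status>\d+)\s+(?P<body_bytes_sent>\d+)\s+"(?P<http_referer>[^"]+)"\s+'
    r'"(?P<http_user_agent>[^"]+)"\s+(?P<request_length>\d+)\s+(?P<request_time>[^ ]+)\s+'
    r'\[(?P<proxy_upstream_name>[^\]]+)]\s+(?P<upstream_addr>[^\s]+)\s+'
    r'(?P<upstream_response_length>\d+)\s+(?P<upstream_response_time>[^\s]+)\s+'
    r'(?P<upstream_status>\d+)$'
)

# A made-up log line in the ingress controller's access-log format.
line = ('10.244.0.1 - [10.244.0.1] - - [01/Jan/2024:00:00:00 +0000] '
        '"GET /healthz HTTP/1.1" 200 612 "-" "curl/7.64.0" 75 0.002 '
        '[default-nginx-80] 10.244.1.5:80 612 0.002 200')

# groupdict() yields the same field names Splunk would extract at search time.
fields = NGINX_ACCESS.match(line).groupdict()
```

If `match()` returns `None` for a real log line from your cluster, the field extraction will silently produce nothing in Splunk, so this kind of offline check catches format drift early.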