Monitoring Kubernetes

Audit logs

Forwarding Audit Logs

The Monitoring Kubernetes app ships dashboards built specifically for API server audit data — who did what, against which resource, and how the API responded. Kubernetes doesn’t emit those logs by default, so you’ll need to turn them on at the API server before Collectord can forward them. Follow the Kubernetes Auditing guide for background on the policy format and severity levels.

You need to enable auditing only on the master nodes, by editing the API server's static pod definition. On clusters bootstrapped with kubeadm, the manifest lives at /etc/kubernetes/manifests/kube-apiserver.yaml; on other distributions, look for /etc/kubernetes/manifests/apiserver.json.

Start with the policy. Save the file below to /etc/kubernetes/policies/audit-policy.yaml. Rules are evaluated in order and the first match wins, so the deny rules come first: the policy silences the noisiest system traffic, drops Collectord's own activity, logs secret and configmap mutations at the metadata level, and catches everything else at the request level. (It uses the audit.k8s.io/v1beta1 API; on Kubernetes 1.12 and later you can use audit.k8s.io/v1 instead, and v1beta1 was removed in 1.24.)

For a more exhaustive policy, see the audit profile used by GCE.

audit-policy.yaml

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Do not log from kube-system accounts
  - level: None
    userGroups:
    - system:serviceaccounts:kube-system
  - level: None
    users:
    - system:apiserver
    - system:kube-scheduler
    - system:volume-scheduler
    - system:kube-controller-manager
    - system:node

  # Do not log from collector
  - level: None
    users:
    - system:serviceaccount:collectorforkubernetes:collectorforkubernetes

  # Don't log nodes communications
  - level: None
    userGroups:
    - system:nodes

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
    - /healthz*
    - /version
    - /swagger*

  # Log configmap and secret changes in all namespaces at the metadata level.
  - level: Metadata
    resources:
    - resources: ["secrets", "configmaps"]

  # A catch-all rule to log all other requests at the request level.
  - level: Request
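With this policy in place, a configmap update falls under the Metadata rule. Here's a quick local sketch of what such an event looks like and how to pull the matched level out of it; the JSON is an abridged, hypothetical example (real events carry more fields, and the field values here are illustrative):

```shell
# Abridged, hypothetical audit event for a configmap update under the
# Metadata rule above; real events include more fields (stages, timestamps, IPs).
cat <<'EOF' > /tmp/sample-audit-event.json
{"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","level":"Metadata","stage":"ResponseComplete","verb":"update","user":{"username":"kubernetes-admin"},"objectRef":{"resource":"configmaps","namespace":"default","name":"app-config"},"responseStatus":{"code":200}}
EOF

# Metadata level means who/what/when is recorded, but request and response
# bodies are omitted:
grep -o '"level":"[^"]*"' /tmp/sample-audit-event.json
```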

Now wire the policy into the API server. The configuration below points the API server at the policy file and routes audit events to standard output; since the API server itself runs in a container, Collectord picks them up automatically without any extra mounts or sidecars. You also need to mount the policies directory into the container so the API server can read the file. Edit /etc/kubernetes/manifests/kube-apiserver.yaml:

kube-apiserver.yaml

...
spec:
  containers:
  - command:
    - kube-apiserver
...
    - --audit-policy-file=/etc/kubernetes/policies/audit-policy.yaml
    - --audit-log-path=-
    - --audit-log-format=json
...
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/policies
      name: policies
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/policies
      type: DirectoryOrCreate
    name: policies
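The manifest above mounts the whole policies directory. If you prefer to expose only the policy file itself, a hostPath volume with type File works as well; this is a minor variant sketched under the same paths:

```yaml
# Variant: mount just the policy file instead of the directory.
    volumeMounts:
    - mountPath: /etc/kubernetes/policies/audit-policy.yaml
      name: audit-policy
      readOnly: true
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/policies/audit-policy.yaml
      type: File
    name: audit-policy
```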

The kubelet watches the manifests directory and normally restarts the static pod on its own. If the API server doesn’t pick up the change, restart the kubelet:

bash
sudo systemctl restart kubelet

The Splunk app finds audit events through the macro_kubernetes_audit_logs macro, which scopes searches to logs containing the audit.k8s.io API group:

text
(`macro_kubernetes_logs` OR `macro_kubernetes_host_logs`) "audit.k8s.io"
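To see why that substring is a reliable filter, compare a hypothetical audit event against an ordinary container log line: only the audit event carries the audit.k8s.io API group (both sample lines are illustrative):

```shell
# Two hypothetical log lines Collectord might forward: an audit event
# and a plain container log line.
printf '%s\n' \
  '{"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","verb":"create","objectRef":{"resource":"secrets"}}' \
  'starting application on port 8080' > /tmp/sample-logs.txt

# The Splunk macro narrows its search the same way: only lines containing
# the audit API group match.
grep -c 'audit.k8s.io' /tmp/sample-logs.txt
```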