Creating Risk Acceptances

When a policy is activated, it is enforced for all manifests deployed in the cluster. From a security perspective this is exactly what we want, but as with every rule there can be exceptions. Imagine that you are using software provided by a vendor, which you deploy by rendering a Helm chart. If those manifests do not conform to all your policies, you have several options. First, you could raise the issues with the vendor, or even provide a patch yourself, but this process can take some time. Another option is to make the required changes by post-processing the manifests, e.g. with kustomize. This requires some effort, and you will potentially have to adapt your changes with every version upgrade of the software.
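A minimal sketch of the kustomize approach, assuming the vendor's Helm chart has been rendered to a file named vendor.yaml and the failing policy requires TLS on Ingress objects (the file name, host, and secret name are illustrative, not part of this product):

```yaml
# kustomization.yaml
resources:
  - vendor.yaml          # manifests rendered from the vendor's Helm chart
patches:
  # Add a TLS section to the vendor's Ingress so it conforms to the policy.
  - target:
      kind: Ingress
      name: missing-tls
    patch: |-
      - op: add
        path: /spec/tls
        value:
          - hosts:
              - example.com
            secretName: example-tls
```

Running `kustomize build .` then emits the vendor manifests with the patch applied; the patch must be reviewed again after each chart upgrade.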

For those rare cases where none of these options is feasible, there is a way out. Every policy provides a custom Kubernetes annotation, which you can put on objects you want to exclude from validation. The reason for the exclusion has to be put into the value of the annotation.

The name of the Risk Acceptance annotation is <lower-cased-policy-name>.

Risk Acceptances can be defined on different levels.

On the object itself:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: missing-tls
  annotations:
    <lower-cased-policy-name>: Vendor requires non-tls connections.
spec:
  rules:
    - host:
      http:
        paths:
          - backend:
              serviceName: insecure-service
              servicePort: 8080

On the pod template, if the object specifies a pod via a template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-policies
  template:
    metadata:
      annotations:
        <lower-cased-policy-name>: Vendor does not expose liveness probe.
      labels:
        app: test-policies
    spec:
      containers:
        - name: test-policies
          image: some-software:1.0

On the namespace, to exclude the policy for all objects in that namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-with-policyexclusion
  annotations:
    <lower-cased-policy-name>: This is a playground namespace used for testing.

NOTE: To provide correct Pull Request feedback, Syncier Security Tower must be able to find the namespace definitions in the GitOps repository. Make sure to add the location(s) of the namespace definitions to your cluster configuration.