Enforcing policies with OPA Gatekeeper

This section explains some advanced use cases for policies. For a general introduction to policies, please see the getting started guide.

Syncier Security Tower provides feedback on Pull Requests to check if your manifests conform to the policies activated in the target cluster. This approach can only cover manifests which are statically stored in the GitOps repository. A user with access to the API server will still be able to modify/create resources without adhering to the policies. The same applies to components running inside the cluster with access to the API (e.g. Kubernetes Controllers which act on Custom Resource Definitions (CRDs) and create Pods).

To enforce policies in a running cluster you can use OPA Gatekeeper. Gatekeeper acts as a Kubernetes admission controller and as such can validate all API requests that create, update, or delete resources in the cluster. It provides Constraint CRDs, which allow the user to define rules in Rego, the OPA policy language. The best practice policies provided by Syncier Security Tower are designed to work with Gatekeeper, so knowledge about OPA or Rego is not required!
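
For readers who want to see what happens under the hood: a constraint template bundles the Rego rule together with the definition of the Constraint kind it produces. The following is the generic k8srequiredlabels example from the Gatekeeper documentation, shown only to illustrate the mechanism; it is not one of the Security Tower policies, which ship these files ready-made.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # parameters that constraints of kind K8sRequiredLabels may set
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        # reports a violation for every required label that is missing
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }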

Installing Gatekeeper

Gatekeeper can be installed in different ways. If you want to deploy Gatekeeper via GitOps, the Security Tower CLI provides a command to download the manifests to your repository:

securitytower policies gatekeeper install --path=<gatekeeper target path>

For other installation options, please visit the official documentation.
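
For example, the Gatekeeper project also publishes a Helm chart; at the time of writing an installation looks roughly like this (check the linked documentation for current chart values and versions):

helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper gatekeeper/gatekeeper --namespace gatekeeper-system --create-namespace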

Gatekeeper configuration

Policies need access to resources from the Gatekeeper cache. For example, the Namespace of a resource needs to be available to every policy in order to take namespace-level Risk Acceptances into account. Gatekeeper needs to be configured to sync those resources into the cache.

Installation via the Security Tower CLI already bundles the required configuration, which consists of a Kubernetes resource of kind Config. In case Gatekeeper was not installed via the CLI, please download the configuration file manually and deploy it in the namespace where Gatekeeper is installed.

https://raw.githubusercontent.com/securitytower/downloads/main/gatekeeper/config.yaml
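
For orientation, such a Config looks roughly like the following minimal sketch, which tells Gatekeeper to replicate Namespaces into its cache (the linked file is authoritative and may sync additional kinds):

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system   # adjust if Gatekeeper runs in a different namespace
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"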

Deploying policies to your cluster

When you activate a policy, the following files including the corresponding CRDs for Gatekeeper are generated by Syncier Security Tower and stored in your GitOps repository:

  • policies/enforceimageversion.policy: Contains metadata about the policy.
  • policies/constraint-templates/enforceimageversion-template.yaml: OPA Gatekeeper constraint template.
  • policies/constraints/enforceimageversion-constraint.yaml: OPA Gatekeeper constraint (a generic example is sketched after this list). Enforcing policies will be discussed later.
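
A Gatekeeper constraint instantiates a constraint template and defines which resources it applies to. The generated files differ per policy, but a constraint for the EnforceImageVersion template could look roughly like this (a hypothetical sketch, not the exact file Security Tower generates):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: EnforceImageVersion
metadata:
  name: enforceimageversion
spec:
  match:
    # the workload kinds this constraint is evaluated against
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]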

The Gatekeeper CRDs can be deployed to the cluster so Gatekeeper can pick them up and validate changes on the API server. Depending on which tool you use to synchronize your Git repository with the Kubernetes cluster, the process of setting this up differs.

If you are using Argo CD, you could create an Application that points to the path where the policies are stored. Here is an example configuration, which assumes that you store the policies in a directory called policies in the root of your GitOps repository. You can configure the policies location in the cluster configuration.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: policies
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  source:
    path: policies
    repoURL: https://github.com/your-org/your-repo
    targetRevision: master
    directory:
      jsonnet: {}
      recurse: true
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

To check if policies have been successfully deployed, you should first verify whether the ConstraintTemplate CRDs have been processed by Gatekeeper. The ConstraintTemplate holds the Rego code of a policy which is used to verify manifests. Check on the status of the ConstraintTemplates:

> kubectl get constrainttemplate -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.created}{"\n"}{end}'

enforceimageversion: true

For each ConstraintTemplate, a matching Constraint should be created, which is the instantiation of a policy. You can check on the status of those Constraints. If the following command completes successfully, the policies are set up correctly:

> kubectl api-resources --api-group='constraints.gatekeeper.sh' -o name | xargs -n1 kubectl get -o name

enforceimageversion.constraints.gatekeeper.sh/enforceimageversion

Assuming the EnforceImageVersion policy, which verifies that each container image has an explicit image version specified, is activated in our cluster, we can run a simple test to see if Gatekeeper is working.
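
The deployment_without_image_version.yaml used in this test could look like the following (a hypothetical manifest whose container image carries no explicit tag):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: no-image-version-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: no-image-version
  template:
    metadata:
      labels:
        app: no-image-version
    spec:
      containers:
        - name: nginx
          image: nginx   # no explicit image version, so the policy rejects the request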

> kubectl apply -f deployment_without_image_version.yaml

Error from server ([denied by enforceimageversion] EnforceImageVersion violated: No explicit image version for the container

containerName: nginx
field: deployment.spec.template.spec.containers.image

object:
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: no-image-version-deployment
JSONPath: .spec.template.spec.containers[?(@.name == "nginx")].image)

As you can see, the API server denied the request because the Gatekeeper validating webhook rejected it, and the user is provided with a detailed error message.
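
Conversely, once the manifest pins an explicit version, the same request passes admission. In the hypothetical manifest above it is enough to change the image field (the tag shown is arbitrary):

      containers:
        - name: nginx
          image: nginx:1.25.3   # explicit image version, accepted by the EnforceImageVersion policy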