The meteoric rise of Kubernetes adoption in recent years has spurred many engineering organizations to take advantage of its cost efficiency, robust scaling, and unified interface for application runtime management.

However, as adoption grows, so does the complexity of use cases. Without uniform patterns in the resource manifest repository to standardize best practices, cluster operators can easily find themselves overwhelmed by the huge ecosystem of package managers, resource manifests, and deployment tools in the CNCF landscape. Examples of these include Helm, Kustomize, and jsonnet.

One core tenet of Tubi engineering culture is to invest in a small but focused set of tools. Kustomize has embodied this belief, managing resource manifest complexity since our org-wide adoption of Kubernetes three years ago.

In this article, we share our experience and challenges using Kustomize for application resource manifests.

Why use Kustomize?

When we first adopted Kubernetes, we had many resource manifests with repeated fields.

Examples include:

  • Moving an app to a namespace with namespace: website
  • Annotating Pods with environment: staging
  • Setting the label app: frontend so the Service and Pods can find each other.

We wanted to avoid the copy/pasting between resource manifest files, and minimize the chance of someone forgetting to set a field in a large manifest document.
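As a sketch, the duplication looked something like this (hypothetical frontend manifests, with the repeated fields marked):

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: website   # repeated
  labels:
    app: frontend      # repeated
spec:
  selector:
    app: frontend      # repeated

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: website   # repeated
  labels:
    app: frontend      # repeated
spec:
  selector:
    matchLabels:
      app: frontend    # repeated
  template:
    metadata:
      labels:
        app: frontend  # repeated
      annotations:
        environment: staging  # repeated per environment

Forgetting any one of these fields in any one file silently breaks the wiring between resources.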

When choosing between the various resource manifest pre-processors, consider the following principles:

Minimize developer learning

For application developers transitioning from the world of VMs, learning Kubernetes can be daunting enough. We wanted to minimize the additional learning required by our resource manifest tool by sticking with vanilla, unmodified Kubernetes resource manifests as much as possible.

Client-side compilation

We wanted the chosen manifest tool to run client-side in a self-contained binary. This makes it easy to iterate, debug, and test the rules being applied to modify manifests. This also increases compatibility with various CD systems such as ArgoCD, which supports a list of raw manifests as input for deploying changes.

Avoid templating

When adopting tools that rely on a templating engine, an organization will inevitably maintain an ever-growing list of templates, snippets, and variable constraints over the long term. This complexity warrants additional documentation, versioning, and maintenance. It steepens the learning curve for onboarding new engineers and risks domain-specific knowledge loss when seasoned engineers off-board.

Why we chose Kustomize

Our engineering team eventually landed on Kustomize: a template-free, client-side manifest pre-processor that only introduces two custom resources, the Kustomization and the Component, to aggregate and compile k8s resources.

We still use Helm as a package installer for Kubernetes, because vendor software tends to ship as Helm charts, as with Istio, Prometheus, and Nginx. However, our engineering team limits Helm to vendor-provided software and prefers Kustomize for internal applications.

Introducing Kustomize building blocks

Much complexity can arise from just two core pieces of functionality provided by Kustomize: aggregating resources and patching fields. Let's go over a couple of examples of the problems they solve:

Aggregate resources with Kustomizations

The basic way to use Kustomize is to bundle related resources together.

In the following example, we create a file kustomization.yaml of kind: Kustomization. This file defines a Kustomization.

Input

# folder structure
frontend/
  kustomization.yaml
  resources/
    service.yaml
    deployment.yaml

# contents of frontend/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resources/service.yaml
- resources/deployment.yaml

Output

$ kustomize build frontend
# output
apiVersion: v1
kind: Service
...
---
apiVersion: apps/v1
kind: Deployment
...

All we did was bundle individual resource manifest files into a list of resource manifests. This makes it possible to send the result to a tool like kubectl for deployment or diffing.

$ kustomize build frontend | kubectl diff -f -
# shows diff for both service and deployment with one kubectl command

Patch resource fields

Kustomize also allows us to manipulate fields of any resource included in the Kustomization.

# let's modify the contents of kustomization.yaml thus:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: frontend
commonLabels:
  app: frontend
resources:
- resources/service.yaml
- resources/deployment.yaml
patches:
- target:
    kind: Deployment
    name: frontend
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 15

This Kustomization applies the following:

  • Sets namespace: frontend for all resources
  • Adds the label app: frontend to all resources
  • Adds a label selector to the Service
  • Sets the replica count for the Deployment to 15

Note: the patch: field uses a multi-line string in the JSON Patch format. This snippet can also live in its own file, referenced with a path: field instead of the patch: field. For more information, refer to the official documentation.
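For example, the inline patch above could be extracted into its own file (file name hypothetical):

# contents of frontend/patches/replicas.yaml
- op: replace
  path: /spec/replicas
  value: 15

# contents of frontend/kustomization.yaml (relevant section)
patches:
- target:
    kind: Deployment
    name: frontend
  path: patches/replicas.yaml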

Increase complexity with Bases and Overlays

Now that we have a grasp of the two building blocks of Kustomize, we can expand the complexity of our resource manifest repository.

Aggregate App Bases with a Namespace Overlay

Not only can a Kustomization aggregate resources, but it can also aggregate other Kustomizations. This allows us to define an entire namespace with the following:

# folder structure
apps/
  frontend/
   kustomization.yaml
   resources/
  api/
    kustomization.yaml
    resources/
namespaces/
  web/kustomization.yaml

# contents of namespaces/web/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: web
bases:
- ../../apps/frontend
- ../../apps/api

Kustomizations that are included by other Kustomizations create a parent-child relationship. The child Kustomization that is being included is the Base, and the parent Kustomization that does the including is the Overlay. In our case, there is a Namespace Overlay that includes two Application Bases.

Note: The bases: field is equivalent to the resources: field; by convention, we use bases: to refer to other Kustomizations and resources: for raw Kubernetes resource manifests.

Aggregate Namespace Overlays with a Cluster Overlay

Now that we have a pattern for aggregating Bases, we can define all of the resources for an entire cluster using a Cluster Overlay Kustomization, which is just an aggregate of Namespace Overlays:

# folder structure
apps/
  frontend/
  api/
  internal-site/
  opentelemetry/
  maintenance-cronjobs/
clusters/
  staging/
    kustomization.yaml
    namespaces/
      web/kustomization.yaml
      internal-site/kustomization.yaml
      devops/kustomization.yaml
  production/
    kustomization.yaml
    namespaces/
      web/kustomization.yaml
      internal-site/kustomization.yaml
      devops/kustomization.yaml

# contents of clusters/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- namespaces/web
- namespaces/internal-site

This structure covers the bulk of the aggregation use cases that an organization would need to manage its resources. In summary,

  1. Define app bases in a root-level apps/ prefix folder.
  2. Pick-and-pluck App Bases into Namespace Overlay Kustomizations.
  3. Aggregate several Namespace Kustomizations in a single Cluster Overlay.

Patch across multiple Namespaces Overlays

So far, so good, right? We have a tree structure that traverses from Cluster → Namespace → App → Resources, an intuitive hierarchy that mirrors most developers' understanding of Kubernetes concepts. But how can we take advantage of this structure?

Let's consider our previous folder layout:

clusters/
  staging/
    kustomization.yaml
    namespaces/
      web/kustomization.yaml
      internal-site/kustomization.yaml
      devops/kustomization.yaml

Remember the environment: staging annotation that we would like to apply to all resources in the staging cluster? If we set it using the commonAnnotations field in the file clusters/staging/kustomization.yaml, it will apply to all resources.
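That might look like this (assuming the devops namespace is also included in the cluster Kustomization):

# contents of clusters/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
  environment: staging   # applied to every resource in the cluster
bases:
- namespaces/web
- namespaces/internal-site
- namespaces/devops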

This works well in most cases, but what if we want the annotation to apply only to the web and internal-site namespaces, and not the devops namespace? We would need a common patch across Namespace Overlays.

A naive way of patching across overlays

The first option for solving this problem is to create a patch in the web Namespace Overlay, then copy and paste the patch into the internal-site Namespace Overlay. This solution can work but fails to scale if we have dozens or hundreds of Namespaces; we would have to fork the same patching code over many Namespace Overlays.

Instead, we can create a single common patch, for example at the path clusters/staging/common-patches/environment-annotation-patch.yaml, and include it in each Namespace Overlay, like this:

# clusters/staging/namespaces/web/kustomization.yaml
kind: Kustomization
...
patches:
  - path: ../../common-patches/environment-annotation-patch.yaml

However, attempting to build this Overlay will result in the following error:

Error: trouble configuring builtin PatchStrategicMergeTransformer with config: `
  paths:
  - ../../common-patches/environment-annotation-patch.yaml
`: security; 
file 'clusters/staging/common-patches/environment-annotation-patch.yaml' is not in or below 
  'clusters/staging/namespaces/web/kustomization.yaml'

Likewise, aggregating resources that live outside the Kustomization folder results in a similar error:

Error: accumulating resources: accumulation err='accumulating resources from 
  '../../common-resources/additional-resource.yaml': security; 
file 'clusters/staging/common-resources/additional-resource.yaml' is not in or below 
  'clusters/staging/namespaces/web/kustomization.yaml'

The error is due to a security feature in Kustomize that forces every Kustomization to only include resources and patches from within its own directory. This enforces the good practice of containing the logic for a single app in a single folder, and avoids a huge, sprawling folder structure with cross-references between apps everywhere.

There is a flag you can supply to the kustomize command to bypass this security check, described in the documentation and in this GitHub issue. However, there is a better way…
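For reference, recent Kustomize versions expose this bypass through the --load-restrictor flag; the exact flag name and accepted values have changed between versions, so check kustomize build --help before relying on it:

$ kustomize build --load-restrictor LoadRestrictionsNone clusters/staging/namespaces/web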

Create Reusable Patches with Components

The better way of handling common patches across Overlays is the kind: Component resource. This relatively new part of Kustomize is still undergoing revision, but it is very powerful for sharing logic across Overlays in a scalable way.

The original proposal write-up introduces many high-level concepts that are hard to grasp, so let's walk through a simpler example.

# clusters/staging/components/environment-annotation/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
commonAnnotations:
  provider: kustomize

The example resembles a Kustomization Base with no resources and a single commonAnnotations field, so how does it differ from a Kustomization Base? After all, Components and Kustomizations share many of the same fields, including:

  • commonAnnotations and commonLabels
  • resources
  • patches
  • configMapGenerator

However, there is a huge difference: patches defined in a Component apply to the resources of the parent Overlay, not just resources within the Component itself. This means a Component can be used as a wrapper to bundle patches together. It also means that, if we're not careful, a Component can patch beyond its originally intended scope.
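For example, a Component like the following (name hypothetical) will patch every Deployment in whichever Overlay includes it, even Deployments the Component's author never saw:

# contents of components/default-replicas/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- target:
    kind: Deployment   # matches every Deployment in the including Overlay
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 3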

Patch multiple Namespace Overlays with Components

The way we solve the problem of patching across multiple Namespace Overlays is with the following format:

# folder structure
clusters/
  staging/
    kustomization.yaml
    components/
      environment-annotation/kustomization.yaml
    namespaces/
      web/kustomization.yaml
      internal-site/kustomization.yaml
      devops/kustomization.yaml

# clusters/staging/namespaces/web/kustomization.yaml
# clusters/staging/namespaces/internal-site/kustomization.yaml
kind: Kustomization
...
components:
- ../../components/environment-annotation

The above is equivalent to adding the commonAnnotations: field directly into the web and internal-site Namespace Overlays.
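In other words, including the Component has the same effect as writing:

# clusters/staging/namespaces/web/kustomization.yaml
kind: Kustomization
...
commonAnnotations:
  provider: kustomize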

Patch multiple Cluster Overlays with Components

Similarly, we can define a global-level component to include in any Overlays.

# folder structure
apps/
components/
  global-annotation/kustomization.yaml
clusters/
  staging/
    kustomization.yaml
    namespaces/
      web/kustomization.yaml
      internal-site/kustomization.yaml
      devops/kustomization.yaml
  production/
    kustomization.yaml
    namespaces/
      web/kustomization.yaml
      internal-site/kustomization.yaml
      devops/kustomization.yaml

For example, we can choose to include the global components in the Cluster Overlay:

# clusters/staging/kustomization.yaml
kind: Kustomization
...
components:
- ../../components/global-annotation

Or, in selective Namespace Overlays:

# clusters/staging/namespaces/web/kustomization.yaml
kind: Kustomization
...
components:
- ../../../../components/global-annotation

Bundle Components in an Overlay

Just as a group of Kustomization Bases can be grouped together in an Overlay (a parent Kustomization), a list of Components can be bundled together in a parent Component. For example:

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
components:
- environment-annotation
- team-annotations

Include resources in a Component

Patching is not the only function that Components can provide when included in an Overlay. We can also specify resources in Components.

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
  - ingress.yaml
configMapGenerator: 
  - name: app-config
    files:
      - files/app-config.yaml
patches:
  - path: patches/include-configmap-deployment.yaml

Any Overlay that includes this Component will also include the ConfigMap and Ingress resources. This is useful if the Component includes a patch that assumes the existence of another resource.
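For instance, the include-configmap-deployment.yaml patch referenced above might mount the generated ConfigMap into a Deployment; the contents below are hypothetical (note that Kustomize rewrites the app-config name reference to include the generated hash suffix):

# contents of patches/include-configmap-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend   # hypothetical target
spec:
  template:
    spec:
      volumes:
      - name: config
        configMap:
          name: app-config   # name rewritten by the configMapGenerator
      containers:
      - name: frontend
        volumeMounts:
        - name: config
          mountPath: /etc/app-config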

Follow a Review Criteria for Including Resources in a Component

Using Resources in a Component is very handy, but doing so could also introduce undesirable patterns. A developer who is not familiar with how a Component differs from a Kustomization may accidentally create an App Base using a Component instead of using a Kustomization. Doing so could lead to patches bleeding out into the parent Namespace Overlay, thereby undesirably patching resources outside of the patch's intended scope.

If you choose to allow Components with embedded resources, it is strongly recommended that you keep their usage under strict review criteria.

In summary

Kustomize can help manage much of the complexity and many of the patterns around resource manifests, especially if your engineering organization runs a multi-cluster setup.

Use Kustomize to aggregate and patch.

  • Kustomizations that only include resource manifests are called Bases.
  • Kustomizations that include other Kustomizations are called Overlays.
  • Use the resources: list to refer to Kubernetes Resource manifests.
  • Use the bases: list to refer to other Kustomization Bases.
  • Use patches: to modify YAML fields of resources.

Use a tree hierarchy layout to organize resources.

  • Use an Application Base to bundle resources related to an app.
  • Use a Namespace Overlay to bundle a list of Application Bases.
  • Use a Cluster Overlay to bundle a list of Namespace Overlays.
  • Use patches: in a Namespace Overlay or Cluster Overlay to patch common fields.

Use Components for reusable patches across Overlays.

  • Use Components to bundle patching logic to use across Overlays.
  • Patches included via a child Component apply to the parent Overlay's resources; patches inside an included child Kustomization do not.
  • Carefully monitor the usage patterns of Components in your Kustomize repo to avoid developers unintentionally using Components where a Kustomization is the better option.

If you are passionate about Kubernetes and infrastructure platform engineering, join us! https://corporate.tubitv.com/careers/