Helm File Variable Substitution

I’m new to Kubernetes/Helm and am looking for a way to inject variable-replaced configuration files (and ideally JSON/XML transformations) into a container.

One solution that seems promising is to use Helm files (https://helm.sh/docs/chart_template_guide/accessing_files/) to create a ConfigMap, and then mount the ConfigMap as a volume (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap).
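For reference, a minimal sketch of that approach might look like the following (the `config/` directory, names, and mount path are illustrative, not from my actual chart):

```yaml
# templates/configmap.yaml -- build a ConfigMap from files shipped in the chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}

# excerpt from templates/deployment.yaml -- mount the ConfigMap as a volume
#      volumes:
#        - name: app-config
#          configMap:
#            name: {{ .Release.Name }}-app-config
#      containers:
#        - name: app
#          volumeMounts:
#            - name: app-config
#              mountPath: /app/config
```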

I’m currently using the Octopus Helm Upgrade step (https://octopus.com/docs/deployment-examples/kubernetes-deployments/helm-update), but do not see any options for variable substitution in my helm package. According to https://octopus.com/docs/deployment-process/configuration-features/substitute-variables-in-files these options should be available in package steps, but I don’t see them in my 2020.2.11 version of Octopus.

Is there a way to do variable substitution/transforms in helm package steps? If not, is there another recommended way to get octopus transformed files into a kubernetes deployment? If not, could this feature be considered?

Workaround ideas:

  • In the past I’ve run the “deploy a package” step on a worker filesystem and then deployed the transformed files from the working directory. However, the “helm upgrade” step seems to require a package source rather than a filesystem source.

  • Since custom deployment scripts appear to be supported in the “helm upgrade” step, I could attempt to write my own substitution script, but that seems very error-prone.

  • I could modify my build pipeline to transform files at build time, then write them to individual values.yaml files that Octopus could pass to helm. This seems like the best approach I can come up with, but it adds an extra component that could fail, and a layer of indirection that I’m not sure is very beneficial. The output files could look something like this (file content copied from the “Substitute variables in files” help article):

```yaml
fileContent: |-
  <authentication mode="Forms">
    <forms loginUrl="#{LoginURL}" timeout="2880" />
```

Thanks in advance for the help on this!

Hey Kyle,

The general approach is to have your helm chart accept values for anything that can vary between deployments.
The Octopus Upgrade Helm Chart step supports a number of ways to supply values when deploying the chart, including pulling value files from packages. These value files have variable substitution applied to them automatically.
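To make that concrete, here’s a minimal sketch of the pattern (file names, keys, and the `loginUrl` value are illustrative): Octopus substitutes variables in a supplied values file, and the chart templates turn the value into a ConfigMap.

```yaml
# values.yaml -- chart default; an Octopus-supplied values file would
# override this, with "#{LoginURL}" substituted at deploy time
loginUrl: "https://localhost/login"

# templates/configmap.yaml -- the chart turns the value into a ConfigMap
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: {{ .Release.Name }}-config
# data:
#   web.config: |
#     <authentication mode="Forms">
#       <forms loginUrl="{{ .Values.loginUrl }}" timeout="2880" />
#     </authentication>
```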

The values passed in can then be used by the helm chart to create a ConfigMap.

The nice thing about relying only on passed-in values is that the chart also works nicely outside of Octopus.

Would this work for your scenario?

Hi Michael,

Thanks for the idea!

I’m going to first see if I can make this work by putting my final config files, with Octopus variable syntax, in helm values files (bullet 3 of my workaround ideas). I think this will require the least maintenance for now, but as we transition more to helm/Kubernetes, I like your idea of using helm variables in the ConfigMap instead of Octopus variables.

A bit of backstory on my project: I’m taking an existing application that has historically been deployed through Octopus to VMs and trying to containerize it. It has a lot of custom configuration files, most of them XML-based. Currently, we have a “.config” file with all of our default and local development settings, and a “.octopus.config” XML transform file that gets run through the Octopus “Configuration Transforms” deployment feature.

This specific project is slow moving, and I expect we’re going to have to support both deployment methods (VMs and helm) for at least the next year or two (possibly longer). Having to support both for so long might be our biggest problem, but unfortunately that decision is out of my control. I’ve been putting a lot of thought into how we can support both deployments without having to maintain different config strategies for the same values. Otherwise, the config files are bound to get out of sync at some point.

If I move from XML transforms to Octopus variable-substituted files, we can use the same config file for both deployment methods, which will be a big help with maintainability. The only tricky piece is that I’ll have to write a custom build step that takes the Octopus variable-substituted file we use for VM deployments and generates a helm values.yaml file for helm deployments. It would be great if Octopus supported variable replacement in helm files so I didn’t have to write the custom build step, but hopefully it shouldn’t be too bad.
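In case it helps anyone following along, here’s a rough sketch of what that build step could look like: wrapping a config file (Octopus variable syntax intact) into a values.yaml block scalar. All names (`config_to_values`, `web.config`, `fileContent`) are illustrative, not from any real tooling.

```python
# Hypothetical build step: embed an Octopus-variable config file into a
# Helm values.yaml entry as a literal block scalar, so Octopus can later
# substitute the #{...} placeholders in the values file itself.
from pathlib import Path


def config_to_values(config_path: str, values_path: str,
                     key: str = "fileContent") -> str:
    """Write `values_path` containing the config file under `key`."""
    text = Path(config_path).read_text()
    # "|-" is a YAML literal block scalar: preserves line breaks,
    # strips the trailing newline
    indented = "\n".join("  " + line for line in text.splitlines())
    yaml_doc = f"{key}: |-\n{indented}\n"
    Path(values_path).write_text(yaml_doc)
    return yaml_doc


if __name__ == "__main__":
    # Sample input file with an Octopus variable left unsubstituted
    Path("web.config").write_text(
        '<authentication mode="Forms">\n'
        '  <forms loginUrl="#{LoginURL}" timeout="2880" />\n'
        '</authentication>\n'
    )
    print(config_to_values("web.config", "values.octopus.yaml"))
```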

Once we move completely to helm deployments, I think it will make more sense to replace the Octopus variables in the ConfigMap with helm variables. We could then have a values.local.yaml for local development and a values.octopus.yaml for deployments.
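Something like this, for example (the `loginUrl` key and values are illustrative):

```yaml
# values.local.yaml -- plain values for local development
loginUrl: "https://localhost/login"

# values.octopus.yaml -- Octopus substitutes the variable at deploy time
# loginUrl: "#{LoginURL}"
```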


Hi Kyle,

Another option would be to use the Run a kubectl CLI Script step. Any script run as part of this step gets a kubectl config file for the Kubernetes target it is run against. This means you can call helm yourself to deploy a package manually to your K8S cluster.

The Run a kubectl CLI Script step includes the ability to reference additional files, which in this case would be your helm chart.

You can then make use of the standard file processing features to process the files from that additional package before the script is run.

With this option, any variables in the files from your additional package (i.e. the helm chart package) are processed and ready to be deployed with a simple call to helm.
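As a rough sketch (the package reference name "chart" and release name are illustrative), the script body could be as small as:

```
# Locate the extracted referenced package (the pre-processed helm chart)
CHART_PATH=$(get_octopusvariable "Octopus.Action.Package[chart].ExtractedPath")

# Deploy it with a plain helm call; variable substitution has already
# been applied to the chart files by Octopus before the script runs
helm upgrade --install my-release "$CHART_PATH" \
  --values "$CHART_PATH/values.yaml"
```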


Hi Matt,

This is a great feature to know about. Even if we don’t end up using Run a kubectl CLI Script for this project specifically, I’m sure that it will be helpful to solve some future problem.

I’m going to POC both of these processes over the next few days and find out what works best for us. It’s great to have options, and I’m confident that we’ll be able to find a good solution with all of this information.



This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.