Octopus cannot find Helm packages where the version starts with "v"

When using a Helm feed in Octopus, packages are not found when the package version starts with a "v".

For example, Cert Manager from JetStack is versioned with a leading "v" (e.g. v1.0.0), and Octopus does not find any versions of that package. I’ve attached a HAR file containing the request to the feeds API below, in case it’s helpful.

lumiradx.octopus.app-helm-feed.har (204.9 KB)

In an attempt to fix this, I fudged the version within the Chart.yaml file and added the package to a custom Helm repository (hosted on GitHub Pages). This allowed the Helm feed within Octopus to find the package/version, but when deployed to a Kubernetes cluster the chart failed to pull the required Docker containers - I guess the versions of those are tied to the version of the Helm chart.
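For anyone else trying the same thing, the repackaging I attempted looked roughly like this (the GitHub Pages directory and URL are placeholders for my own setup, so treat it as a sketch rather than a recipe):

### Pull and unpack the original chart.
helm pull JetStack/cert-manager --version 1.0.0 --untar

### Strip the leading "v" from the chart version in Chart.yaml.
sed -i 's/^version: v/version: /' cert-manager/Chart.yaml

### Repackage the chart and rebuild the repository index for GitHub Pages.
helm package cert-manager --destination ./docs
helm repo index ./docs --url https://example.github.io/helm-charts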

When pulling this package using the Helm CLI, it can be referenced as version 1.0.0 (no leading "v"):

helm pull JetStack/cert-manager --version 1.0.0
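If it helps to reproduce this, the versions that the Helm client sees can also be listed with the following (assuming the repository has been added locally as JetStack):

helm search repo JetStack/cert-manager --versions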

Is there a way around this? I’m kind of stuck without being able to pull the Helm chart that I need.

Thanks,
David

Hi @dgard1981,

Thank you for contacting Octopus Support.

Unfortunately, this is likely happening because the leading "v" is not SemVer compliant.
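As a quick illustration, strict SemVer is MAJOR.MINOR.PATCH with no prefix, so a simple shape check accepts 1.0.0 but rejects v1.0.0 (a simplified bash sketch, not the full SemVer grammar, which also allows pre-release and build metadata):

### Simplified SemVer shape check - illustrative only.
for v in 1.0.0 v1.0.0; do
  if [[ "$v" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "$v: valid"
  else
    echo "$v: invalid (a leading prefix is not allowed)"
  fi
done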

However, I’ve reached out internally to see if there is anything that can be done here. I’ll let you know once I hear back.

I appreciate your patience.

Regards,
Donny

Thanks Donny,

I assumed as much about the "v". Hopefully there is a workaround, as it’s a valid Helm chart that I’ve been successfully using through Terraform up until now.

I think in the meantime I’ll try to roll my own script through Octopus to get the chart deployed. I’ll post the results here if I’m able to get it working.

Thanks,
David

So I’ve written my own script for this and it seems to work (at least the pods in the cert-manager namespace on my K8s cluster are running).

I’m running this script on a container image that has kubectl and Helm (v3) installed, using the Run a kubectl CLI Script step, on behalf of a Kubernetes Cluster deployment target.

### Change the permissions on the K8s config file to avoid helm warnings dirtying the logs.
chmod 600 kubectl-octo.yml

### Write replacement values to a YAML file.
cat > values.yaml << EOL
installCRDs: true
ingressShim:
  defaultIssuerName: "letsencrypt"
  defaultIssuerKind: "ClusterIssuer"
  defaultIssuerGroup: "cert-manager.io"
EOL

### Add the LumiraDX Helm chart repository.
helm repo add #{Helm.RepoName} #{Helm.RepoUrl}

### Pull the Helm chart from the LumiraDX repository.
helm pull #{Helm.RepoName}/cert-manager --version #{Octopus.Release.Number} --untar

### Upgrade the Helm chart on the target K8s cluster.
helm upgrade "#{K8s.Release}" "./cert-manager" --install --namespace "#{K8s.Namespace}" --create-namespace --reset-values --values "./values.yaml"
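To check the result afterwards, I just looked at the release and the pods, something like this (using the same variables as above):

### Confirm the release status and that the cert-manager pods are running.
helm status "#{K8s.Release}" --namespace "#{K8s.Namespace}"
kubectl get pods --namespace "#{K8s.Namespace}"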

Hi @dgard1981,

Thank you for getting back to me. I’m glad to hear you were able to write a script to work around this.

I had a conversation with one of our devs who pointed out that the YAML for this helm repo is formatted poorly: https://charts.jetstack.io/index.yaml
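If you’re curious, the raw version strings can be pulled straight out of that index (assuming curl is available; this just prints the first few version fields):

curl -s https://charts.jetstack.io/index.yaml | grep 'version:' | head -n 5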

However, he was intrigued that the helm client is able to work despite this. His suggestion was to call the helm executable directly or use a different Cert Manager chart, such as the one from Bitnami.

I created a GitHub issue for this that you may follow here:

I’m hopeful this will get patched so that we can match the helm client’s handling of this going forward.

If I can assist with anything else, please let me know. Have a good weekend!

Regards,
Donny

Hi Donny,

Thanks for the update.

It looks like we’re all aligned on why it’s not working, so it would be great if a fix came in a (near) future version of Octopus. Although I think it should really be JetStack that fixes things on their end, and I’ve added a comment to an already open issue.

I’ll keep using the script I’ve written for now, as it works.

Thanks,
David


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.