We have multiple users of Octopus that currently target many different versions of Kubernetes clusters, and managing the version of kubectl is becoming an issue.
We currently have clusters ranging from 1.11 to 1.14, so there is no single kubectl client version that will support all of them. A client only works one minor version up or down, so a 1.12 client would work for 1.11 through 1.13, but not for 1.14.
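That version-skew rule can be sketched as a small check (a hedged illustration only; the function name and "major.minor" string format are ours, not part of kubectl or Octopus):

```python
def is_compatible(client, server):
    """Return True if a kubectl client can talk to an API server under the
    +/-1 minor version-skew rule. Versions are "major.minor" strings."""
    c_major, c_minor = (int(p) for p in client.split("."))
    s_major, s_minor = (int(p) for p in server.split("."))
    return c_major == s_major and abs(c_minor - s_minor) <= 1

# A 1.12 client covers 1.11 through 1.13, but not 1.14:
print(is_compatible("1.12", "1.11"))  # True
print(is_compatible("1.12", "1.14"))  # False
```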
What is the recommended way to handle having an appropriate version of kubectl to talk with a target cluster?
I’ll admit upfront our answer to this isn’t one we’re happy with. We’re investigating better solutions at the moment.
But there are a couple of options:
You could configure multiple worker pools, containing workers that have the appropriate version of kubectl installed. For example, a worker pool named kubectl-1.14, and another named kubectl-1.11.
The downside of this approach is you would need a worker pool (with at least one worker) per kubectl version.
There is a variable you can set to control the path to the installed kubectl you wish to use.
The variable is Octopus.Action.Kubernetes.CustomKubectlExecutable
This would allow you to have multiple versions of kubectl installed, and resolve to the correct one using Octopus variable scoping (per project, environment, target, etc.).
This variable should be the absolute path to the kubectl.exe you wish to use, e.g. C:\kubectl\1.14\kubectl.exe
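As a rough sketch, the scoped values for that variable might look like the following (the paths and environment names here are examples only, not a prescribed layout):

```
Variable: Octopus.Action.Kubernetes.CustomKubectlExecutable
  Value: C:\kubectl\1.14\kubectl.exe    Scope: environment "Prod-1.14"
  Value: C:\kubectl\1.11\kubectl.exe    Scope: environment "Legacy-1.11"
```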
We’d welcome your feedback on this. Would either of these approaches fit for you?
The second option would unblock us. Our default would be to use the latest client version, which we would automatically update periodically.
Then, if a team has a cluster on an older version, they can put that specific version of kubectl in a versioned folder and set the above variable in their project for as long as they’re pinned to that legacy version.
Ideally this would be two parts. First, a standard versioned structure for storing kubectl clients. This would either be updated automatically as needed (if external download access is available) or have a standard way to add clients manually for an air-gapped setup.
Second, a way to get the version of a Kubernetes target and ensure that a compatible client version is available (highest available, within one minor version up or down). If none is available, it would either fail (if air-gapped) or download a new client (if external access is available).
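The selection logic in that second part could be sketched roughly like this (the function name and "major.minor" version strings are illustrative assumptions, not an Octopus or kubectl API):

```python
def pick_client(server, available):
    """Pick the highest available kubectl client within one minor version
    of the server, or None if no compatible client is installed."""
    def as_tuple(v):
        major, minor = (int(p) for p in v.split("."))
        return (major, minor)

    s_major, s_minor = as_tuple(server)
    compatible = [
        v for v in available
        if as_tuple(v)[0] == s_major and abs(as_tuple(v)[1] - s_minor) <= 1
    ]
    if not compatible:
        # Caller decides: fail if air-gapped, or download a new client.
        return None
    return max(compatible, key=as_tuple)

# For a 1.14 cluster with only older clients on disk, 1.13 is the best fit:
print(pick_client("1.14", ["1.11", "1.12", "1.13"]))  # 1.13
print(pick_client("1.11", ["1.13", "1.14"]))          # None
```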