Kubernetes deployment to an Azure AKS cluster


(David Gard) #1

Hi,

I have an Azure Kubernetes cluster (AKS) to which I wish to deploy from Octopus.

Currently, I’m having to add a copy of the config to the Octopus server and reference it when using a version of kubectl.exe that matches the version of my AKS cluster, as shown here -

$clusterConfigPath = "#{KubernetesClusterConfigPath}"
$kubectlPath = "#{KubernetesKubectlPath}"

& $kubectlPath apply -f .\k8s\mapproxy-namespace.yaml --kubeconfig=$clusterConfigPath

To avoid the need to do this, I’d like to log in using the az CLI and have that set the context. Unfortunately, I’ve tried this in two different ways and both have failed, as shown below.

Given that a Kubernetes cluster in AKS can be on one of several different versions, is there a reliable way to connect from Octopus without the need to store configs on the server, and maybe even avoid having to download multiple versions of kubectl.exe?

Assuming the config would be set somewhere where kubectl.exe would automatically find it

Trying to do it this way results in a Kubernetes error saying that there is no valid context.

The call to az aks get-credentials works, and a message is displayed saying that the current context has been successfully set in C:\Windows\system32\config\systemprofile\.kube\config.

However, the “no valid context” error suggests that kubectl.exe isn’t looking for the config in that location.

$kubectlPath = "#{KubernetesKubectlPath}"

az aks get-credentials --name #{AzureAksClusterName} --resource-group #{AzureResourceGroup}
& $kubectlPath apply -f .\k8s\mapproxy-namespace.yaml

Here is the relevant output of the above process step -

Merged "lmk-cs-kubernetes-non-live" as current context in C:\Windows\system32\config\systemprofile\.kube\config
error: unable to recognize ".\\k8s\\mapproxy-namespace.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
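A possible variation (untested, and assuming the az CLI’s --file flag and kubectl’s --kubeconfig flag) would be to write the credentials to an explicit path and point kubectl.exe at exactly the same file, so no default-location guessing is involved:

```powershell
# Hypothetical explicit path; any location writable by the service account would do.
$clusterConfigPath = "C:\Octopus\kube\config"
$kubectlPath = "#{KubernetesKubectlPath}"

# --file tells az where to write the merged kubeconfig instead of the
# default %USERPROFILE%\.kube\config.
az aks get-credentials --name #{AzureAksClusterName} --resource-group #{AzureResourceGroup} --file $clusterConfigPath

# Point kubectl at the same file az just wrote.
& $kubectlPath apply -f .\k8s\mapproxy-namespace.yaml --kubeconfig=$clusterConfigPath
```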

Telling kubectl.exe where to find the config after the call to az aks get-credentials

Because the successful call to az aks get-credentials outputs the location of the config, I thought I’d be able to pass that path to kubectl.exe, but it fails saying that the config file cannot be found.

$kubectlPath = "#{KubernetesKubectlPath}"

az aks get-credentials --name #{AzureAksClusterName} --resource-group #{AzureResourceGroup}
& $kubectlPath apply -f .\k8s\mapproxy-namespace.yaml --kubeconfig=C:\Windows\system32\config\systemprofile\.kube\config

Here is the relevant output of the above process step -

Merged "lmk-cs-kubernetes-non-live" as current context in C:\Windows\system32\config\systemprofile\.kube\config
error: CreateFile C:\Windows\system32\config\systemprofile\.kube\config: The system cannot find the file specified.
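Another variation (untested; the file-not-found error could possibly be WOW64 file-system redirection if az and kubectl run at different bitness, since C:\Windows\system32 is redirected for 32-bit processes) would be to keep the config outside system32 and export it via the KUBECONFIG environment variable, which kubectl checks before falling back to the default location:

```powershell
$kubectlPath = "#{KubernetesKubectlPath}"

# Hypothetical location outside system32, to rule out path redirection.
$configPath = "$env:TEMP\aks-kubeconfig"

# Write the kubeconfig to the explicit path, then export it for kubectl.
az aks get-credentials --name #{AzureAksClusterName} --resource-group #{AzureResourceGroup} --file $configPath
$env:KUBECONFIG = $configPath

& $kubectlPath apply -f .\k8s\mapproxy-namespace.yaml
```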

(Matthew Casperson) #2

Hi David, thanks for reaching out.

When a Kubernetes target is configured in Octopus using an Azure Service Principal as the authentication source, kubectl.exe will be configured by a call to az aks get-credentials --resource-group <resource group name> --name <cluster name>. You can see how kubectl.exe is configured at https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Kubernetes/Scripts/KubectlPowershellContext.ps1#L59.

You can then use the Run a kubectl CLI Script step to interact with the cluster in the context of the Azure account.
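For example, a Run a kubectl CLI Script step body can call kubectl directly, since Octopus has already set up the context (a sketch; the namespace name below is hypothetical):

```powershell
# Inside a "Run a kubectl CLI Script" step the kubeconfig and context are
# already configured by Octopus, so no --kubeconfig handling is needed.
kubectl apply -f .\k8s\mapproxy-namespace.yaml
kubectl get pods --namespace mapproxy
```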


Regards
Matt C


(David Gard) #3

Hi Matt,

Thanks for your reply.

I’m trying to add a Kubernetes Cluster deployment target, but it’s failing on the health check -

SetupContext : Could not find kubectl. Make sure kubectl is on the PATH.

I’ve visited the linked documentation, but it’s not clear where the kubectl executable is installed, so I don’t know what to add to the path.

Could you please confirm the following -

Where is the kubectl executable?

Is the kubectl executable installed on the Octopus server by default, and if so, where can I find it?

And assuming the kubectl executable is installed by default, what version is it?

In the meantime, I added the kubectl executable that I downloaded to the PATH to try to get going, but the health check fails with the same error. However, after logging on to the box I notice that PowerShell can find kubectl without me having to use the fully qualified path, and I can see that it is included in the PATH.

To confirm, I restarted both the Tentacle service and the main Octopus service after adding kubectl to the PATH.

Can you run multiple versions of the kubectl executable?

How can I have multiple different versions of kubectl installed and specify which version should be used with a given cluster?

Because kubectl only supports one version forward and backward, it’s quite possible that it will be incompatible with an AKS cluster.

A quote from the Kubernetes issue #57748 -

a client should be skewed no more than one minor version from the master, but may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients.
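(For reference, a quick way to compare the two versions is kubectl’s own version command, a sketch assuming connectivity to the cluster and reusing the existing config variable:)

```powershell
$kubectlPath = "#{KubernetesKubectlPath}"

# Prints both the client (kubectl.exe) and server (cluster) versions,
# making it easy to spot a skew of more than one minor version.
& $kubectlPath version --kubeconfig="#{KubernetesClusterConfigPath}"
```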


(Matthew Casperson) #4

Hi David,

The kubectl executable is not shipped with Octopus, so you are required to install it yourself to make use of the Kubernetes targets (which it sounds like you have done). If kubectl is not being found by Octopus, then you may find that the Octopus service needs to be restarted to pick up any changes to the PATH environment variable.

Workers (https://octopus.com/docs/infrastructure/workers) can be used to accommodate deployments to environments with specific tooling requirements such as specific versions of kubectl. So in a case where two clusters require two specific versions of kubectl, you would configure two pools of workers, and the workers themselves would have the correct versions of the tools made available to them.

Regards
Matt C


(David Gard) #5

Hi Matt,

Thanks again for your reply.

I don’t know why Octopus couldn’t see kubectl on Friday, even after I’d restarted the Octopus service, but it can see it today and I can successfully deploy to an Azure AKS cluster.

I take on board what you say about workers, as it would be impossible for Octopus to know which version of kubectl to ship. However, if it hasn’t already been suggested as a feature, I think it would be great to allow users to specify a path to kubectl as part of the Deployment Target creation, allowing us to have multiple versions on the Octopus server and avoid the need for workers in this instance.

Thanks,
David


(Matthew Casperson) #6

Hi David, thanks for that feedback. There are some internal discussions about how best to deal with external tools like kubectl with Octopus, with the versioning issues you’ve described being one of the motivations for changing the current approach. There is no timeline yet for implementing the changes, but keep an eye on the blog at https://octopus.com/blog/ for any announcements.

You may also like to add a suggestion to the Uservoice page at https://octopusdeploy.uservoice.com/, which gives the community a chance to vote on common improvements.

Regards
Matt C