Octopus health check error when using kubectl

Hi,

I am getting the error "no configuration set to …" when running a Kubernetes deployment target health check. When I check the logs I can see:

Verbose | Temporary kubectl config set to C:\Octopus\Work\20220727143307\kubectl-octo.yml
Verbose | kubectl.exe version --client --short --request-timeout=1m
Error | error: no configuration has been provided
Verbose | Exit code: 1

It looks like Octopus Deploy is overriding the default kubectl config location and, due to the missing kubectl-octo.yml, is not able to perform the health check.

I have configured the system environment variable KUBECONFIG = %USERPROFILE%\.kube\config, but it is not used during the health check.
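For reference, my understanding of kubectl's config resolution order is: an explicit --kubeconfig path wins over $KUBECONFIG, which wins over the default $HOME/.kube/config. A rough sketch (the function name is just mine, not anything Octopus ships):

```shell
# Sketch of kubectl's config lookup precedence (my understanding, not
# Octopus code): --kubeconfig flag > $KUBECONFIG env var > default path.
# This is why a per-step config that Octopus injects wins over the
# system-wide KUBECONFIG I set.
resolve_kubeconfig() {
  flag_path="$1"                # value of an explicit --kubeconfig flag, if any
  if [ -n "$flag_path" ]; then
    echo "$flag_path"           # explicit flag always wins
  elif [ -n "$KUBECONFIG" ]; then
    echo "$KUBECONFIG"          # env var set for the running process wins next
  else
    echo "$HOME/.kube/config"   # the default kubectl falls back to
  fi
}
```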

Any advice on how to solve this problem?

Hey @ErniMJ,

Thanks for reaching out and welcome to the forums!

Can you please DM me a screenshot of your target setup so I can take a look at how it’s configured? If there’s nothing confidential in there you can just reply here with it.

Looking forward to hearing back!

Best,
Jeremy

Hi @jeremy.miller,
I am not able to provide you with screenshots.

Here is the deployment target configuration:
Display name: aks-dev
Enabled: Yes
Environments: DEV
Target Roles: aks
Select an authentication type: Azure Service Principal
Select account: AZ_DEV
AKS cluster name: aks-dev
AKS resource group name: aks-dev-rg
Login with administrator credentials: false
Kubernetes namespace: default
Run directly on a worker

I was able to check the kubectl-octo.yml file and it is empty.
Octopus Deploy server version: v2021.3

Hi @ErniMJ,

Thanks for getting back to us!

It looks like the health check for that cluster is run directly on a worker.
May I ask if the worker that runs the health check has any configuration that may cause the error you’re experiencing?
Is it possible to test running the health check within a container on that worker? You could use one of our worker tools images to see if the results are any different.

Let me know what you think, please!

Kind Regards,
Adam

Hi Adam,

I am using a worker because company policies do not allow running containers on a worker.

When I execute command directly on worker:
kubectl.exe version --client --short --request-timeout=1m
it works fine, but it uses the default kubectl config from $HOME/.kube/config.

From the health check verbose log I can see that Octopus is setting its own config path (Temporary kubectl config set to C:\Octopus\Work\20220727143307\kubectl-octo.yml), but the file is empty.

For some reason the authentication details are not being written to this yml file.
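As a sketch of the check I did (the worker is Windows, but the same idea applies; the helper name is just mine): kubectl reports "no configuration has been provided" when its config file exists but is zero bytes, so it's worth testing for an empty file before blaming kubectl itself.

```shell
# Simple pre-flight check: fail if the generated kubectl config file is
# missing or empty, since an empty file produces the same kubectl error
# as a missing one.
check_config_nonempty() {
  cfg="$1"
  if [ ! -s "$cfg" ]; then    # -s: true only if file exists and is non-empty
    echo "config at $cfg is missing or empty" >&2
    return 1
  fi
  return 0
}
```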

Hey Jack,

It’s very strange that the yml file is empty.

I have a few avenues we can go down:

  1. We can turn on variable logging to make sure the variables for that k8s target are making it to the deployment: How to turn on variable logging and export the task log - Octopus Deploy. Only use this for troubleshooting, then disable it, as it can slow down your deployments. With this enabled, you can look at all the variables in that step to ensure your target info is making it to the deployment.

  2. We can try creating a new target with the same settings and see if maybe it’s an issue with that specific target. Do you have other k8s targets that work, or is this the first one you’re setting up?

  3. Is there any AV running on the machine that may be blocking Octopus from writing to that file? I’m thinking there’s a possibility that the YAML gets created, AV scans it and locks it, and when Octopus tries to write to it, it gets denied because it’s in use. If there is AV, is the Octopus work directory whitelisted?
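If you want to quickly rule the AV/file-lock theory in or out, a rough write probe along these lines (run as the same account the worker service uses; the function name and probe path are just examples, not an Octopus tool) would tell you whether anything is blocking writes in that directory:

```shell
# Rough probe: try to create and write a small file in a directory, then
# report whether the write stuck. If AV or permissions block the write,
# this prints "blocked" instead of "writable".
probe_write() {
  dir="$1"
  probe="$dir/octo-write-probe.tmp"
  if echo "probe" > "$probe" 2>/dev/null && [ -s "$probe" ]; then
    rm -f "$probe"              # clean up the probe file on success
    echo "writable"
  else
    echo "blocked"
  fi
}
```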

Please let me know what you think.

Best,
Jeremy