Answered - Connection Dropped to GKE Cluster

Upgraded Octopus today from 2019.10.5 to 2020.2.14 and have lost the ability to communicate with our GKE clusters. Everything was working fine before the update, but no longer.

I have them listed as “Unhealthy” rather than offline. Trying to run a health check or upgrade Calamari on them results in an error message about not being able to find kubectl:

NotSpecified: Could not find kubectl. Make sure kubectl is on the PATH. See https://g.octopushq.com/KubernetesTarget for more information. 

Any ideas on how I can resolve this? I’ve tried removing and re-adding the clusters, updating the master node on GKE to the latest version, and checking all firewall settings, but haven’t been able to get a different response. Possible bug in this latest version?

Thanks!

-Brian

Hi Brian, thanks for reaching out.

First, I’d like to double-check: is this Octopus Server something you are hosting yourself, or a hosted cloud instance?

It sounds like you did the upgrade yourself, so I’ll assume you have a self-hosted instance.

Kubernetes health checks rely on the kubectl executable being available on the machine where the check runs. By default, health checks run on the Octopus Server (or, more precisely, on the default worker running on the Octopus Server).

You can check whether the Octopus Server has access to kubectl with a Run a Script step that executes a command like Get-Command kubectl.

If kubectl is on the PATH, you would expect output like:

CommandType     Name            Version
-----------     ----            -------
Application     kubectl.exe     0.8.1.0
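
If you want that script step to fail loudly when kubectl is missing, a minimal sketch along these lines would work (plain PowerShell; the exact messages are just for illustration):

# Check whether kubectl is resolvable on the PATH of the machine running this step.
$kubectl = Get-Command kubectl -ErrorAction SilentlyContinue

if ($null -eq $kubectl) {
    Write-Error "kubectl was not found on the PATH."
    exit 1
}

# Report where it was found and its client version.
Write-Host "Found kubectl at $($kubectl.Source)"
kubectl version --client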

If kubectl cannot be found, it will need to be installed. It can be installed with Chocolatey (https://chocolatey.org/packages/kubernetes-cli) or downloaded manually from https://kubernetes.io/docs/tasks/tools/install-kubectl/ and placed somewhere on the PATH.
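
If you go the Chocolatey route, the install is a one-liner from an elevated prompt (assuming Chocolatey itself is already installed), and you can confirm the result from a fresh session:

# Install kubectl via the kubernetes-cli Chocolatey package.
choco install kubernetes-cli -y

# From a new session, confirm it now resolves on the PATH.
Get-Command kubectl
kubectl version --client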

The other thing to be aware of is that Kubernetes targets may be configured to use an external worker for health checks, in which case kubectl needs to be available on that worker as well.
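
Once kubectl is available wherever the health check runs, a quick manual check from that machine can also help rule out credential or networking problems (this is plain kubectl rather than anything Octopus-specific, and assumes a kubeconfig for the cluster is present on that machine):

# Basic connectivity checks against the current cluster context.
kubectl cluster-info
kubectl get nodes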

Regards
Matt

Hi Matt,

Thank you so much for your response! Yes, it is a self-hosted instance. The upgrade was actually part of a migration from Windows Server 2012 to 2016, since 2012 is no longer supported.

Looks like you’re 100% correct: the new server did not have kubectl on it. Got it installed through Chocolatey and my Google K8s clusters popped right up.

10/10 would let you solve all my problems again!

Thanks again!

-Brian
