Unable to validate K8S deployment target

  • Running v2018.9.5
  • Adding a K8S cluster, using token authentication

I’m trying to add a K8S cluster to Octopus using the new integration points and am running into issues during the health check process for the cluster. It currently fails while validating the ‘default’ namespace (or any other namespace in the cluster).

The other issue I’m running into is that the work folder is deleted immediately at the end of the run, preventing me from reviewing the generated config files for accuracy or other problems that could be contributing to these errors. One item that jumped out from a run where I did manage to grab a copy of the work folder: the referenced kubectl-octo.yml file didn’t exist in the folder at the time I copied it.

Also, with the exception of the kubectl version script, my attempts to manually run the generated PowerShell scripts errored out and failed to execute.

Hi Jason, thanks for reaching out.

This issue appears similar to the one you submitted via email previously. In case the response was lost, I have included it here.

From the logs it appears that the Kubernetes account associated with the target does not have access to the namespace. To verify this, you can configure the Kubernetes account locally and run kubectl get namespace <namespace name>.
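For example, with kubectl installed locally, something along these lines should confirm whether the token can see the namespace (the cluster URL and the cluster/user/context names here are placeholders):

    kubectl config set-cluster test-cluster --server=https://your-cluster-url:6443
    kubectl config set-credentials test-user --token=<service-account-token>
    kubectl config set-context test-context --cluster=test-cluster --user=test-user
    kubectl config use-context test-context
    kubectl get namespace default

If that last command returns a Forbidden error, the service account will need a role binding that grants it access to the namespace.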

If you have access to the Work folder, you can prevent it from being deleted with NTFS permissions: if you deny Everyone the Delete and Delete subfolders and files permissions on the Work folder, Octopus will be unable to delete the folder and you can inspect the files. This is only a temporary diagnostic measure, though, and the permissions should be reverted for day-to-day operations.
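For example, from an elevated prompt on the server (assuming the default C:\Octopus\Work location; adjust the path for your installation):

    icacls "C:\Octopus\Work" /deny Everyone:(DE,DC)

and to revert once you have the files you need:

    icacls "C:\Octopus\Work" /remove:d Everyone

DE denies Delete and DC denies Delete child, which together stop the folder contents from being cleaned up.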

Your suggestion to expose the generated config file is something we’ll look to add in a future release. This will make this kind of debugging easier in the future.

To access Slack, you should be able to open it directly. I don’t believe you need an invitation, but if there are any issues please let us know.

Regards
Matt C

Thank you Matt. I indeed never received the email reply (hence the posting in the forum).

Following your suggestion to remove the ‘delete’ permissions, I’m able to see the contents of kubectl-octo.yml, and it’s fairly empty. The contexts and users arrays are both empty, which would explain the lack of permissions/credentials when querying the cluster.
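For comparison, my understanding is that a populated token-auth config should look roughly like this (all values here are placeholders, not our actual cluster details):

    apiVersion: v1
    kind: Config
    clusters:
    - name: octopus-target
      cluster:
        server: https://your-cluster-url:6443
    contexts:
    - name: octopus
      context:
        cluster: octopus-target
        user: octopus-account
        namespace: default
    current-context: octopus
    users:
    - name: octopus-account
      user:
        token: <service-account-token>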

Is this file rewritten after its initial creation to bring in the missing components (which may now be blocked by the permissions change)?

Hi Jason,

We’ve released Octopus 2018.9.7, which now prints the Kubernetes config file to the logs as verbose messages. You can see an example in the screenshot below. Are you able to update to this version and paste in the verbose logs generated by the health check? Can you also paste in a screenshot of the target setup? We can use this information to replicate your configuration locally and see if there is an issue with the way the context file is generated.

Regards
Matt C

Matthew,

Thank you for the very quick reply. We installed the new version, and the additional logging output provided the information we needed to get the cluster integration online.

We’re attempting to use karthequian/helloworld as an initial container deployment test and are running into issues with Octopus rejecting ‘latest’ as a valid version; it seems to accept only numeric version tags.
ERROR: The step failed: ‘latest’ is not a valid version string Parameter name: input

Using a different image (bitnami/memcached) that has a numeric version tag makes Octopus happy, though I’m still running into an issue where the release deploys successfully yet nothing shows up in K8S (doing a simple container-only deploy).
RESULT: Successfully finished memcached on US-PreProd

I’m not sure where to start troubleshooting the false success message.

Thank you,
Jason Ziemba

After doing some more testing, I’m seeing the log say “No acquisition of bitnami/memcached.1.5.12” (note the period in the version, where Docker normally uses a colon to separate the image name from the tag). The very next line in the output is “Acquire Packages completed”. That looks like a false success, since the preceding line shows the package was clearly not acquired. It may be the cause of the false success of the overall deployment, with nothing actually being deployed to the cluster.

Hi Jason,

Docker image tags do need to be semver-compatible to be selected as an Octopus package. The documentation at https://octopus.com/docs/packaging-applications/package-repositories/docker-registries#DockerRegistriesasFeeds-WorkingwithDockerContainerImagesinOctopus has more details on how Octopus reads Docker tags, but the short story is that the tag latest is not recognized as semver, which is why you ran into issues.
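If you want to keep using that image, one workaround is to re-tag it with a semver-style tag and push it to a registry you control (the registry name here is just an example):

    docker pull karthequian/helloworld:latest
    docker tag karthequian/helloworld:latest your-registry/helloworld:1.0.0
    docker push your-registry/helloworld:1.0.0

Octopus will then see 1.0.0 as a valid version when it queries the feed.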

For the package acquisition, Kubernetes steps won’t acquire the images by design. Because the Kubernetes cluster itself downloads and deploys the images, Octopus uses a Docker feed only for getting names and versions of Docker images to build up the YAML. Octopus never downloads the images itself. This is why the Docker images are shown as not being acquired.
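To illustrate, the Docker feed only contributes the image reference inside the generated manifest, along these lines (a simplified sketch, not the exact YAML Octopus produces):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: memcached
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: memcached
      template:
        metadata:
          labels:
            app: memcached
        spec:
          containers:
          - name: memcached
            image: bitnami/memcached:1.5.12  # pulled by the cluster, not by Octopus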

Which step are you using to deploy a container? Are you using kubectl directly, or building a deployment with the Deploy Kubernetes containers step?
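In the meantime, you can check what actually reached the cluster with kubectl (substitute your namespace and deployment name):

    kubectl get deployments,pods --namespace default
    kubectl describe deployment memcached --namespace default

If nothing is listed there, the generated YAML most likely never created a Deployment, and the verbose deployment logs should show what was applied.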

Regards
Matt C