Hello! We have set up the Octopus server on a Windows Server.
We want to set up a Kubernetes Cluster Deployment Target and therefore, need to have kubectl on the server.
I copied the configuration from my own machine (C:\kubectl, which has the binary and is in PATH as well).
It first reports that it can get information from kubectl (so it is installed), but then returns this error:
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it
I am a bit unsure why it doesn’t seem to work…?
P.S.: I saw in the documentation that with AWS accounts (which is what we'll use for the EKS cluster), SSH worker nodes are not possible. Does that mean you cannot use Octopus right now if you have a Linux EKS cluster?
Thanks a lot!
I’m sorry to hear you’ve hit a snag setting up k8s with Octopus. Let’s see if we can get it sorted.
Unfortunately, that is a fairly generic error. Could you possibly supply the raw task log? That will help us diagnose the issue.
You shouldn’t copy the kubeconfig file. You just need the kubectl executable, i.e. by following the Kubernetes installation instructions. Octopus will generate the kubeconfig file based on the details supplied when creating the Kubernetes target. The error you’re seeing typically indicates that kubectl couldn’t find a kubeconfig and fell back to its default server of localhost:8080, where nothing is listening.
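As a quick sanity check on the server, you can confirm the binary itself is reachable without touching any cluster. This is just a generic sketch (assuming a POSIX-style shell; on Windows, `where kubectl` and `kubectl version --client` are the equivalents):

```shell
# Check whether the kubectl binary is on PATH, without contacting any cluster.
if command -v kubectl >/dev/null 2>&1; then
  echo "kubectl found at: $(command -v kubectl)"
  # Client-only version check; this never dials a server:
  kubectl version --client
else
  echo "kubectl is NOT on PATH"
fi
```

If the binary resolves but a server-side command still dials [::1]:8080, that points to the missing/ignored kubeconfig rather than a broken install.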
No, you can certainly use Octopus if you have a Linux EKS cluster. You just can’t currently configure the step to execute on a Linux worker. You will need to execute the step either on a Windows external worker, or on the built-in worker. The built-in worker is the default, and is perfect if you’re running self-hosted Octopus. If you’re using Octopus Cloud, you won’t be able to install kubectl on the server, and so will have to use an external worker.
I suspect you may have already seen it, but this document has some useful information, and this post is a fairly detailed walk-through.