I am trying to create my first kubernetes deployments on a fresh 30 day trial Octopus Cloud.
It seems that the default hosted agent does not have kubectl installed, so I went down the path of switching to the Hosted Ubuntu Worker Pool using the Octopus worker-tools container image. Now I get the following error:
1. The health check of the Deployment Target was green (I am using an Azure Service Principal).
2. The deployment process step is configured as follows:
2.1. Uses the built-in step "Deploy raw Kubernetes YAML"
2.2. Worker Pool: Run on a worker from a specific worker pool: Hosted Ubuntu
2.3. Container Image: Runs inside a container, on a worker: octopusdeploy/worker-tools:5.0.0-ubuntu.22.04
3. Triggering a deployment leads to this error message: The connection to the server localhost:8080 was refused -
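For context, the `localhost:8080` error usually means kubectl is present but has no kubeconfig pointing at a cluster, so it falls back to its default local address. A quick sanity check (assuming Docker is available locally) confirms the worker-tools image itself ships kubectl:

```shell
# Run kubectl inside the same image the step uses; this only checks the
# client binary exists, it does not need a cluster connection.
docker run --rm octopusdeploy/worker-tools:5.0.0-ubuntu.22.04 \
  kubectl version --client
```

If that prints a client version, the missing piece is the cluster context supplied by Octopus at deployment time, not the tooling in the image.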
Thanks for reaching out and for all of the information.
Could I please have permission to log on to your cloud instance and take a look? If so, please DM me your cloud URL.
If not, would it be possible for you to DM me the full task log as well as the process JSON? To get that, go to the Process inside the project, click the 3 dots in the upper right, and click Download JSON.
I think I’ve found the issue. Looking at your deployment process, could you please try selecting unstable2 in the “On Behalf Of” field? I believe this might be a bug, as I don’t remember this step being saveable without a role there.
Looking forward to hearing back how it goes when you get back online tomorrow.
My question is: Is it best practice to create my own build image with all the tools installed? Or are you planning to provide the basic toolset so that your process templates work out of the box?
Our dynamic workers have only a minimal set of tools installed (you can view the exact tools on each image here). Keeping tools updated on these images became too problematic, as we always had several customers who required the old version and a number who wanted the new version.
We introduced the execution containers feature to allow users to load images with the exact tooling they require and update them as they desire.
Our worker tools image includes the tools listed here, and I’m unaware of any plans to add to this list at present.
We have some specific worker tools images available here.
If you require tools not available in any of these images, it will be necessary to build your own image, or to create a static worker within your infrastructure that the deployment can use instead of the dynamic workers.
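As a rough sketch of the "build your own image" route, one could extend the worker-tools image and push it to a registry the workers can pull from. The registry name and the added tool below are just examples, not an official recommendation:

```shell
# Hypothetical example: extend worker-tools with kubelogin, which the
# base image is not guaranteed to include. The base image already
# contains the Azure CLI, so az aks install-cli can fetch the binaries.
cat > Dockerfile <<'EOF'
FROM octopusdeploy/worker-tools:5.0.0-ubuntu.22.04

RUN az aks install-cli \
      --install-location /usr/local/bin/kubectl \
      --kubelogin-install-location /usr/local/bin/kubelogin
EOF

# Build and push to your own registry, then reference this tag in the
# step's "Container Image" setting instead of the stock worker-tools tag.
docker build -t my-registry.example.com/worker-tools-custom:1.0 .
docker push my-registry.example.com/worker-tools-custom:1.0
```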
I had the error “kubelogin not in the path”. This tool is only needed when creating an AKS cluster with integrated AAD. In this case I had to add the `--admin` flag so that the worker does not follow the Azure auth flow with two-factor authentication.
The worker basically got stuck at `kubectl get namespace unstable2 --request-timeout=1m`, asking for an interaction: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CF2WUXFHL to authenticate.
Logging in with --admin credentials does not lead to the interaction.
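The difference between the two flows can be sketched with the Azure CLI (resource group and cluster name below are placeholders):

```shell
# Non-admin kubeconfig: on an AAD-integrated cluster this yields user
# credentials, and kubectl will prompt for the interactive device-code
# login, which a non-interactive worker can never complete.
az aks get-credentials --resource-group <rg> --name <cluster>

# Admin kubeconfig: uses cluster client-certificate credentials instead,
# so no AAD prompt occurs (the caller needs admin access on the cluster).
az aks get-credentials --resource-group <rg> --name <cluster> --admin

kubectl get namespace unstable2 --request-timeout=1m
```

If admin credentials are undesirable, kubelogin can convert the kubeconfig to a non-interactive AAD login mode instead (e.g. `kubelogin convert-kubeconfig -l azurecli`), though that requires kubelogin to be on the worker's path.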
Just for your information: I am back on track now and can focus on setting up the complete environment.