Kubernetes polling Tentacles not being removed after updating

I’ve set up AKS running the Octopus Tentacle Docker image in 3 containers. When I make changes to the containers, the old containers are deleted and new ones are created on AKS, and the new ones are assigned to the worker pool in Octopus as expected.
What it doesn’t do is remove the old containers from the worker pool. What’s the best way to do this without doing it manually?

Hi Ben,

Welcome to the Octopus help boards!

The best way to do this automatically is through the Machine Policies on your Octopus Server.
An option within this functionality allows Octopus to automatically remove any machines that fail a health check after a configurable time frame.

There are more details in our documentation, but should you have any further questions or if this doesn’t solve your issue, please let us know!

Kind Regards,

Sean

Thank you, Sean! That’s just what I needed. Currently the minimum time before a machine gets deleted is about 2 minutes; with containers, however, you really need it to happen immediately.

I added a custom cron expression (* * * * * Mon-Sun), which should run every second, Monday through Sunday, but it doesn’t seem to do anything.

Hey @ben.hodges, thanks for reaching out!

Wanted to share an additional resource here after talking with Sean - you can use the Kubernetes container lifecycle preStop hook to call the Octopus API and instantly remove the container from your worker pool.

There’s a great example of this in the 10 Pillars of Pragmatic Kubernetes Deployments. In the chapter on Verifiable Deployments, you can find a heading titled Removing the workers automatically that has the script and details written out.

You can also see the script used in a step template in the accompanying Ten Pillars Octopus instance - log in as guest at the prior link and you’ll see the script written out in the UI under
Containers → worker → Lifecycle Hooks → PreStop
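To give a rough idea of the shape of that hook, here’s a minimal sketch. The API routes, environment variable names, and the use of `curl`/`jq` are my assumptions for illustration - the authoritative version is the script in the Ten Pillars example linked above:

```yaml
# Sketch of a preStop hook that deregisters a worker via the Octopus API.
# Assumes OCTOPUS_SERVER and OCTOPUS_API_KEY are set on the container and
# that the worker was registered under the pod's hostname. The API routes
# below are illustrative -- confirm them against the Ten Pillars script.
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - |
          # Look up the worker's ID by name, then delete it from the server.
          WORKER_ID=$(curl -s -H "X-Octopus-ApiKey: $OCTOPUS_API_KEY" \
            "$OCTOPUS_SERVER/api/workers?name=$HOSTNAME" | jq -r '.Items[0].Id')
          curl -s -X DELETE -H "X-Octopus-ApiKey: $OCTOPUS_API_KEY" \
            "$OCTOPUS_SERVER/api/workers/$WORKER_ID"
```

Because the preStop hook runs before the container receives its termination signal, the worker disappears from the pool as soon as Kubernetes starts tearing the pod down, rather than waiting for a failed health check.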

Hopefully that gives you a solid example to follow for getting instantaneous worker removal from your pools - feel free to reach out if you run into any issues or have any additional questions, we’re always happy to help!

Hey Cory,

Thanks so much for this, that’s helped massively - it now only takes up to 20 seconds to delete and add 🙂

One more thing I wanted to ask: what would you suggest as the best way to scale the containers based on the number of tasks we have on the go?
E.g. I currently have it set to 3 replicas; say at 9am we have 2 tasks running, but then suddenly at 10am we have 40 tasks in the queue.

I wondered if we would have to set up a Kubernetes scheduled task, or something like KEDA, and do some Invoke-RestMethod calls against the API.


I haven’t used KEDA, but I think you’re right that you’ll need either a configured autoscaling behavior (natively, Kubernetes offers the Horizontal Pod Autoscaler, HPA), or something manual: API scripts and kubectl commands in an Octopus runbook that check the task queue and fire off scaling requests to adjust the replica count.
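Whichever mechanism does the triggering, the core of it is just mapping queue depth to a replica count. Here’s a minimal sketch of that logic in Python - the thresholds and limits are illustrative defaults, not Octopus recommendations, and you’d feed in the queued-task count from the Octopus API yourself:

```python
import math

def desired_replicas(queued_tasks: int,
                     tasks_per_worker: int = 5,
                     min_replicas: int = 3,
                     max_replicas: int = 20) -> int:
    """Map Octopus task-queue depth to a worker replica count.

    tasks_per_worker, min_replicas, and max_replicas are illustrative
    placeholders -- tune them for your own workload.
    """
    # One worker can absorb several queued tasks, so round up the ratio,
    # then clamp the result between the configured floor and ceiling.
    wanted = math.ceil(queued_tasks / tasks_per_worker)
    return max(min_replicas, min(max_replicas, wanted))

# With the defaults above: 2 queued tasks stays at the floor of 3 replicas,
# while a burst of 40 tasks scales up to 8 workers.
print(desired_replicas(2))   # 3
print(desired_replicas(40))  # 8
```

The clamping matters: without a floor you’d scale to zero workers overnight and have nothing to pick up the first morning task, and without a ceiling a backlog spike could exhaust your cluster.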

I can do some more digging on this; it’s not something I’ve encountered yet. I’ll do some work next week to confirm that you can use HPA or KEDA to scale up worker nodes based on behavior in the cluster, and I’ll reach back out when I know more!

Hey Cory,

Any luck yet? Yes, that’s right, but HPA is more for auto-scaling based on resource metrics, and we’d want to auto-scale based on events, i.e. the number of tasks in the queue.

I started looking at running a PowerShell script as an AKS CronJob, but there might be a better way of doing this. I’d be surprised if we’re the only company wanting to do this.
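The shape I had in mind is roughly this - the image, script path, and secret name are placeholders, and the ConfigMap/volume that would mount the script isn’t shown:

```yaml
# Sketch of an AKS CronJob that runs a worker-scaling script every minute.
# Note that cron granularity is one minute at best -- no per-second runs.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: octopus-worker-scaler
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: scaler
              image: mcr.microsoft.com/powershell:latest
              # Placeholder script: would query the Octopus task queue via
              # Invoke-RestMethod and patch the worker Deployment's replicas.
              command: ["pwsh", "/scripts/scale-workers.ps1"]
              envFrom:
                - secretRef:
                    name: octopus-api-credentials
```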


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.