We have a number of development Kubernetes clusters that are shut down overnight and on weekends to save costs. These are configured as deployment targets in Octopus.
My problem is that Octopus Deploy detects them as unhealthy during the night and then keeps showing them as unhealthy during the day, too (“Last health check 7 hours ago”). Only after clicking the “Check health” button do they go green again.
I therefore created a machine policy with a corresponding health check schedule based on a cron expression. However, it appears that machine policies cannot be assigned to Kubernetes Cluster deployment targets? I did assign the policy to the Worker running the health check, but that doesn’t seem to change anything for the deployment target itself.
So, how can I change the health check schedule for Kubernetes Cluster deployment targets? Or, if that’s not possible, how do I get Octopus Deploy to re-check health of those targets more often?
Thank you for contacting Octopus Support. I’m sorry you are having trouble with health checks on your K8s targets during off hours.
You are correct. There is not a way in the Octopus UI to change the machine policy for a Kubernetes Cluster target. K8s health checks do work a bit differently compared to other targets. That being said, I don’t see any reason why creating a new machine policy with just the cron expression changed and assigning it to the K8s Cluster would not work in this case. However, I haven’t tested this specific scenario personally.
I did run a quick test and confirmed the API script does work on K8s Clusters and successfully changes the machine policy. If you would like to give it a try and see how it behaves tonight, you can change the assigned Machine Policy via this script. Likewise, if it doesn’t work as planned, you can use the same script to change it back.
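For reference, reassigning a target’s machine policy through the Octopus REST API can be sketched roughly like the following. This is a hedged illustration, not the official support script: the server URL, API key, space ID, and the target/policy names are placeholders you would substitute for your own.

```python
import json
import urllib.request


def with_policy(machine: dict, policy_id: str) -> dict:
    """Return a copy of the machine resource with MachinePolicyId replaced."""
    updated = dict(machine)
    updated["MachinePolicyId"] = policy_id
    return updated


def _get_json(url: str, api_key: str):
    req = urllib.request.Request(url, headers={"X-Octopus-ApiKey": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def reassign_policy(server: str, api_key: str, space_id: str,
                    machine_name: str, policy_name: str) -> None:
    base = f"{server}/api/{space_id}"

    # Look up the deployment target and the machine policy by name.
    machine = next(m for m in _get_json(f"{base}/machines/all", api_key)
                   if m["Name"] == machine_name)
    policy = next(p for p in _get_json(f"{base}/machinepolicies/all", api_key)
                  if p["Name"] == policy_name)

    # PUT the modified machine resource back with the new policy ID.
    body = json.dumps(with_policy(machine, policy["Id"])).encode()
    req = urllib.request.Request(
        f"{base}/machines/{machine['Id']}", data=body, method="PUT",
        headers={"X-Octopus-ApiKey": api_key,
                 "Content-Type": "application/json"})
    urllib.request.urlopen(req)


# Example (placeholder values):
# reassign_policy("https://octopus.example.com", "API-XXXXXXXX",
#                 "Spaces-1", "dev-k8s-cluster", "Office Hours Policy")
```

The round-trip GET/modify/PUT pattern keeps all other machine settings intact; only `MachinePolicyId` changes.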
Let me know if this works for you once you get a chance to observe the behavior.
Okay, so you meant I could assign the Machine Policy via the API; it’s just the UI that doesn’t expose the setting? Interesting find, I’ll try it when I find the time!
Hi @donny.bell ,
Thanks again for the suggestion. We’ve tried it and you can indeed set the Machine Policy via the API. The UI will then even list the Kubernetes deployment targets on the “Usage” tab of the Machine Policy.
Unfortunately, the deployment target still does not respect the Machine Policy’s “Health Checks” cron expression. The expression is `*/30 8-18 * * Mon-Fri` (i.e., Monday to Friday, 8 AM to 6 PM, every 30 minutes), but I’m still seeing “Last health check 7 hours ago”, which would indicate the last health check occurred around 2 AM. (Note that the same Machine Policy works nicely with our VM-based Workers.)
Any idea what we could do? Or is it simply that Octopus doesn’t (fully) support Kubernetes clusters that are shut down during the night?
Thank you for getting back to us. I’m sorry to hear the API workaround didn’t work out for you.
Another approach would be to create a scheduled Runbook that disables the affected Targets at the end of the day and re-enables them at the start of the following day. We have a ready-made API script you can use as a starting point to implement this.
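As a rough illustration of what such a Runbook step could do (a hedged sketch against the REST API, not Octopus’s ready-made script; the server URL, API key, space ID, and target names are placeholders), a scheduled step could flip the `IsDisabled` flag on the affected targets:

```python
import json
import urllib.request


def set_disabled(machine: dict, disabled: bool) -> dict:
    """Return a copy of the machine resource with IsDisabled set."""
    updated = dict(machine)
    updated["IsDisabled"] = disabled
    return updated


def toggle_targets(server: str, api_key: str, space_id: str,
                   target_names: set, disabled: bool) -> None:
    headers = {"X-Octopus-ApiKey": api_key,
               "Content-Type": "application/json"}
    base = f"{server}/api/{space_id}"

    # Fetch all deployment targets in the space.
    req = urllib.request.Request(f"{base}/machines/all", headers=headers)
    with urllib.request.urlopen(req) as resp:
        machines = json.load(resp)

    # Disable (or re-enable) only the targets we were asked about.
    for m in machines:
        if m["Name"] in target_names:
            body = json.dumps(set_disabled(m, disabled)).encode()
            put = urllib.request.Equest if False else urllib.request.Request(
                f"{base}/machines/{m['Id']}", data=body, method="PUT",
                headers=headers)
            urllib.request.urlopen(put)


# Evening Runbook run (placeholder values): disable the clusters.
# toggle_targets("https://octopus.example.com", "API-XXXXXXXX",
#                "Spaces-1", {"dev-k8s-cluster"}, disabled=True)
# Morning Runbook run: re-enable them.
# toggle_targets("https://octopus.example.com", "API-XXXXXXXX",
#                "Spaces-1", {"dev-k8s-cluster"}, disabled=False)
```

Disabled targets are skipped by health checks entirely, so they never show as unhealthy while the clusters are powered off.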
Regarding support for custom machine policies on Kubernetes targets, this is not currently on our feature roadmap. However, if this is something you would like to see in a future version of Octopus, I recommend using the
+ Submit Idea button in the upper right corner of our Roadmap page.
If you have any additional questions or if we can assist with anything else, please don’t hesitate to reach out.
Thanks for the workaround!
Regarding the roadmap - I’d rather consider it a bug that configuring the health check schedule doesn’t work for Kubernetes deployment targets (especially since you can assign the policy via the API), but I do understand there might be more pressing things to implement.