I am trying to create machine policies on Octopus Cloud using Terraform, but I am getting the following error:
Error: validation failure in UpdatePath; Key: 'MachinePolicy.ConnectionConnectTimeout' Error:Field validation for 'ConnectionConnectTimeout' failed on the 'min' tag
Terraform code:
variable.tf
variable "connection_connect_timeout" {
  description = "Connect timeout - 5 minutes, meaning that the Octopus server will wait up to 5 minutes when attempting to establish a connection to target machines before timing out"
  type        = number
  default     = 30000
}
This is the same for the parameter connection_retry_time_limit as well. As per the Octopus provider documentation, these values are numbers. I tried different numbers there, but every value gives me the same error.
Could you please have a look, or can you tell me the possible values (or the type of values) for these parameters?
Thanks for reaching out to Octopus Support, and I’m sorry you’re running into this issue creating a new Machine Policy with our Terraform provider.
First, thanks for the details you sent on your Terraform setup. It helped immensely in my testing! I found that the minimum time we can set for the Connect Timeout field is 10 seconds. The value used in Terraform is in nanoseconds, so the minimum value you can pass is 10000000000 (i.e., 10 seconds).
When I updated the default in my Terraform configuration to 10000000000, it got past the error and created the machine policy successfully. If you update yours to a valid value, hopefully you will be able to move forward as well.
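For reference, here is how the variable could look with a valid default (a sketch based on your snippet; the values in the comments assume the nanosecond units described above, so double-check them against the provider documentation):

```hcl
variable "connection_connect_timeout" {
  description = "Connect timeout for establishing a connection to target machines"
  type        = number
  default     = 10000000000 # 10 seconds in nanoseconds (the minimum allowed)
  # For the 5 minutes mentioned in your original description, use 300000000000.
}
```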
Please let me know if this helps or if you have any other questions.
I am having another issue related to the machine policy. The machine policy is created and assigned to the deployment target, and I can see it starting the process to check the deployment target's health. But it couldn't identify the EKS cluster that is assigned to the policy. It shows the following message:
Starting health check for machines with policy: deployment-target-machine-policy.
August 21st 2023 16:46:19 Info
There are no active deployment targets to check
Raw log:
Task ID: ServerTasks-6205
Related IDs: Spaces-124
Task status: Success
Task queued: Monday, 21 August 2023 11:16:17 AM +00:00
Task started: Monday, 21 August 2023 11:16:18 AM +00:00
Task completed: Monday, 21 August 2023 11:16:19 AM +00:00
Task duration: less than a second
Server version: 2023.3.10333
Server node: octopus-i061490-cc766c948-42cn5
| Success: Check target health for deployment-target-machine-policy
11:16:19 Info | Starting health check for machines with policy: deployment-target-machine-policy.
11:16:19 Verbose | Found 1 matching machine
11:16:19 Info | There are no active deployment targets to check
But I can see that the deployment target is listed in the policy usage tab.
I sent a DM earlier this week asking for information on your Cloud instance. If you could get back to me there, I can continue investigating. If you don’t see the DM, just ping me here, and I can resend.
Thanks for your patience while I investigated this some more.
What you’re seeing looks to be a minor bug in the Octopus UI. Kubernetes Cluster targets don’t use a Machine Policy for regular health checks. Instead, a "Check target health for cloud targets" system task runs daily and performs a connection test against the cluster.
When a Kubernetes Cluster is first added as a Deployment Target (either through the UI or via Terraform in your case), a MachinePolicyId value is still set, which causes it to show in the usage tab for that Machine Policy. While it doesn’t affect functionality, seeing the target listed in the UI is a bit misleading.
We are in the process of changing the way we handle Kubernetes targets, so this may not be an issue down the road. I’ll reach out to our engineering team to mention this behavior and see whether we could hide cloud targets from the Machine Policy usage tab to avoid confusion in the short term.
I’ll let you know if anything comes from that discussion, and please let me know if you have any other questions.
Thanks for getting back to us, and I’ll make sure to mention your request in the conversation Dan has started with our engineers.
As a workaround, you could re-run the task on a schedule via a runbook that sends an API request to the Octopus Server.
If you find the task ID of the cloud target health check task (it appears in the URL when viewing the task), you can modify the below script to add that ID, and it will re-run the task.
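The script itself wasn't included above, but a rough sketch of the approach might look like the following. The server URL, API key, and task ID are placeholders, and the "clone the task's Name and Arguments via POST /api/tasks" pattern is an assumption drawn from common community scripts, so please verify it against the Octopus REST API documentation:

```python
# Sketch: re-queue an Octopus server task (e.g. the cloud-target health check)
# via the REST API. Placeholder values below must be replaced with your own.
import json
import urllib.request

OCTOPUS_URL = "https://your-instance.octopus.app"  # placeholder
API_KEY = "API-XXXXXXXX"                           # placeholder
TASK_ID = "ServerTasks-6205"                       # the health check task's ID


def build_rerun_payload(task: dict) -> dict:
    """Build the body for POST /api/tasks by cloning the original task's
    Name and Arguments, which (by assumption) queues an equivalent task."""
    return {
        "Name": task["Name"],
        "Description": f"Re-run of {task['Id']}",
        "Arguments": task["Arguments"],
        "SpaceId": task.get("SpaceId"),
    }


def rerun_task(task_id: str) -> dict:
    headers = {"X-Octopus-ApiKey": API_KEY, "Content-Type": "application/json"}
    # Fetch the original task so its name and arguments can be cloned.
    req = urllib.request.Request(f"{OCTOPUS_URL}/api/tasks/{task_id}", headers=headers)
    with urllib.request.urlopen(req) as resp:
        task = json.load(resp)
    # Submit a new task with the same name and arguments.
    body = json.dumps(build_rerun_payload(task)).encode()
    req = urllib.request.Request(
        f"{OCTOPUS_URL}/api/tasks", data=body, headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Running `rerun_task(TASK_ID)` from a scheduled runbook would then re-queue the health check on whatever cadence you choose.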
I hope this helps, please let us know if you have any further questions and we’ll get back to you with any details from the discussion Dan has started.