Error: Error: validation failure in UpdatePath; Key: 'MachinePolicy.ConnectionConnectTimeout' Error:Field validation for 'ConnectionConnectTimeout' failed on the 'min' tag

Hi Octopus Team,

I am trying to create machine policies on Octopus Cloud using Terraform, and I am getting the following error:

Error: validation failure in UpdatePath; Key: 'MachinePolicy.ConnectionConnectTimeout' Error:Field validation for 'ConnectionConnectTimeout' failed on the 'min' tag

Terraform code:

variable.tf

variable "connection_connect_timeout" {
  description = "Connect timeout-  5 minutes, meaning that the Octopus server will wait up to 5 minutes when attempting to establish a connection to target machines before timing out"
  type        = number
  default     = 30000
}
main.tf

resource "octopusdeploy_machine_policy" "this" {
  provider = octopusdeploy.space

  name                            = var.name
  description                     = "Machine policy"
  connection_connect_timeout      = var.connection_connect_timeout
  connection_retry_count_limit    = var.connection_retry_count_limit
  connection_retry_sleep_interval = var.connection_retry_sleep_interval
  # connection_retry_time_limit     = var.connection_retry_time_limit

  machine_cleanup_policy {
    delete_machines_behavior         = var.delete_machines_behavior
    delete_machines_elapsed_timespan = var.delete_machines_elapsed_timespan
  }

  machine_connectivity_policy {
    machine_connectivity_behavior = var.machine_connectivity_behavior
  }

  machine_health_check_policy {

    powershell_health_check_policy {
      run_type    = "Inline"
      script_body = <<-EOF
        $freeDiskSpaceThreshold = 5GB

        # CheckDriveCapacity is defined earlier in the full default Octopus health
        # check script; a minimal version is sketched here so the snippet stands alone.
        function CheckDriveCapacity([HashTable]$driveInfo) {
            if ($driveInfo.FreeSpace -lt $freeDiskSpaceThreshold) {
                Write-Warning "Drive $($driveInfo.Name) is running low on free space."
            }
        }

        Try {
            Get-WmiObject win32_LogicalDisk -ErrorAction Stop | ? { ($_.DriveType -eq 3) -and ($_.FreeSpace -ne $null) } | % { CheckDriveCapacity @{Name = $_.DeviceId; FreeSpace = $_.FreeSpace} }
        } Catch [System.Runtime.InteropServices.COMException] {
            # Fall back to volume and mapped-disk queries if the logical disk query fails.
            Get-WmiObject win32_Volume | ? { ($_.DriveType -eq 3) -and ($_.FreeSpace -ne $null) -and ($_.DriveLetter -ne $null) } | % { CheckDriveCapacity @{Name = $_.DriveLetter; FreeSpace = $_.FreeSpace} }
            Get-WmiObject Win32_MappedLogicalDisk | ? { ($_.FreeSpace -ne $null) -and ($_.DeviceId -ne $null) } | % { CheckDriveCapacity @{Name = $_.DeviceId; FreeSpace = $_.FreeSpace} }
        }
      EOF
    }

    bash_health_check_policy {
      run_type    = "Inline"
      # No bash health check is used here, so the script body is intentionally empty.
      script_body = <<-EOF
      EOF
    }
  }

  machine_update_policy {
    calamari_update_behavior = "UpdateAlways"
    tentacle_update_behavior = "NeverUpdate"
  }

  # polling_request_maximum_message_processing_timeout = var.polling_request_maximum_message_processing_timeout
  # polling_request_queue_timeout                      = var.polling_request_queue_timeout
}

The same error occurs with the connection_retry_time_limit parameter as well. As per the Octopus provider documentation, these values are numbers. I have tried different numbers there, and everything gives me this error.

Could you please have a look, or can you tell me the possible values (or types of values) for these parameters?

Thanks,
Arun S Raj

Hi @arun.raj,

Thanks for reaching out to Octopus Support, and I’m sorry you’re running into this issue creating a new Machine Policy with our Terraform provider.

First, thanks for the details you sent on your Terraform setup. It helped immensely in my testing! I found that the minimum time we can set for the Connect Timeout field is 10 seconds. The value we use in Terraform is in nanoseconds, so the minimum value you can pass is 10000000000 (or 10 seconds).

When I updated the default in my Terraform configuration to 10000000000, it got past the error and created the machine policy successfully. If you update yours to a valid value, you should hopefully be able to move forward as well.
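
For reference, here’s what the corrected variable in your variable.tf could look like, keeping the 5-minute timeout described in your configuration (a sketch; only the default needs to change):

variable "connection_connect_timeout" {
  description = "Connect timeout - 5 minutes, meaning that the Octopus Server will wait up to 5 minutes when attempting to establish a connection to target machines before timing out"
  type        = number
  # 5 minutes = 300 seconds x 1,000,000,000 ns/s; the minimum accepted value is 10000000000 (10 seconds)
  default     = 300000000000
}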

Please let me know if this helps or if you have any other questions.

Thanks!
Dan

Hello, @dan_close

Thanks for looking into this. Changing the values to nanoseconds fixed the issue.

I would also like to request that your team update these details in the Terraform provider documentation as well.

Thanks,
Arun S Raj

Hi @dan_close,

I am having another issue related to machine policies. The machine policy is created and assigned to the deployment target, and I can see it starting the process to check the deployment target’s health, but it couldn’t identify the EKS cluster that is assigned to the policy. It shows the following message:

Starting health check for machines with policy: deployment-target-machine-policy.
August 21st 2023 16:46:19 Info
There are no active deployment targets to check

Raw log:

Task ID:        ServerTasks-6205
Related IDs:    Spaces-124
Task status:    Success
Task queued:    Monday, 21 August 2023 11:16:17 AM +00:00
Task started:   Monday, 21 August 2023 11:16:18 AM +00:00
Task completed: Monday, 21 August 2023 11:16:19 AM +00:00
Task duration:  less than a second
Server version: 2023.3.10333
Server node:    octopus-i061490-cc766c948-42cn5

                    | Success: Check target health for deployment-target-machine-policy
11:16:19   Info     |   Starting health check for machines with policy: deployment-target-machine-policy.
11:16:19   Verbose  |   Found 1 matching machine
11:16:19   Info     |   There are no active deployment targets to check

But I can see that the deployment target is listed in the policy usage tab.

Am I doing anything wrong here? Should I change anything?

Thanks,
Arun S Raj

Can I get an update on this?

Hi @arun.raj,

I sent a DM earlier this week asking for information on your Cloud instance. If you could get back to me there, I can continue investigating. If you don’t see the DM, just ping me here, and I can resend.

Thanks,
Dan

Hi @arun.raj,

Thanks for your patience while I investigated this some more.

What you’re seeing looks to be a minor bug in the Octopus UI. Kubernetes Cluster targets don’t use a Machine Policy for regular health checks. Instead, a “Check target health for cloud targets” system task runs daily and performs a connection test to the cluster.

When a Kubernetes Cluster is first added as a Deployment Target (either through the UI or via Terraform in your case), a MachinePolicyId value is still set, which causes it to show in the usage tab for that Machine Policy. While it doesn’t affect functionality, seeing the target listed in the UI is a bit misleading.
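
To illustrate, here’s a rough sketch of how a Kubernetes target created in Terraform ends up linked to a policy (a hypothetical, trimmed-down example; the resource name comes from the octopusdeploy provider, while the target name, URL, and variables are placeholders, and authentication is omitted):

resource "octopusdeploy_kubernetes_cluster_deployment_target" "eks" {
  name         = "eks-cluster"                                # hypothetical name
  cluster_url  = "https://example-eks-endpoint.amazonaws.com" # placeholder URL
  environments = [var.environment_id]
  roles        = ["eks"]
  # Authentication configuration omitted for brevity.

  # This ID is recorded even though Kubernetes targets are health-checked by the
  # daily "Check target health for cloud targets" system task rather than by the
  # policy, which is why the target shows up in the policy's usage tab.
  machine_policy_id = octopusdeploy_machine_policy.this.id
}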

We are in the process of making changes to the way we handle Kubernetes targets, so this may not be an issue down the road. I’ll reach out to our engineering team to mention this behavior and see if there’s a possibility of hiding Cloud targets from the Machine Policy usage tab to avoid confusion in the short term.

I’ll let you know if anything comes from that discussion, and please let me know if you have any other questions.

Thanks!
Dan

Hi @dan_close,

Thanks for the update.

Is there a way we can make the system check the health of deployment targets more frequently? For example, every 5 minutes instead of once a day.

Thanks,
Arun

Hi @arun.raj,

Thanks for getting back to us, and I’ll make sure to mention your request in the conversation Dan has started with our engineers.

As a workaround, you could re-run the task on a schedule via a runbook sending an API request to the Octopus Server.

If you find the task ID for the cloud target health check task (it’s in the URL when viewing the task), you can modify the script below to add the ID, and it will re-run the task.

# Replace octopusURL, Spaces-ID, and ServerTasks-ID with your instance URL,
# your space ID, and the health check task ID you noted from the URL.
$APIKey = "API-XXXXXXXXXXXXXXXX" # an Octopus API key with permission to re-run tasks
$header = @{ "X-Octopus-ApiKey" = $APIKey }

Invoke-WebRequest -Method POST -Uri "https://octopusURL/api/Spaces-ID/tasks/rerun/ServerTasks-ID" -Headers $header
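
For the 5-minute cadence you mentioned, that script could live in a script step within a runbook that has a scheduled trigger attached, so the health check task is re-run automatically on whatever interval you choose.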

I hope this helps. Please let us know if you have any further questions, and we’ll get back to you with any details from the discussion Dan has started.

Kind Regards,
Adam
