How can I configure my Polling Tentacles to connect to my Octopus Deploy High Availability instance sitting behind an AWS Load Balancer?

I want to use polling tentacles, but my Octopus Deploy server is sitting behind a load balancer in AWS. What do I need to configure in my AWS Load Balancer to support this?

Before diving into the configuration, it is important to note how polling tentacles work. A polling tentacle connects to the Octopus REST API over port 80 or 443 when it registers itself with the Octopus Server. After that, it connects to the Octopus Server over port 10943 (the default configuration).

Octopus Deploy High Availability (HA) adds another wrinkle into the mix. A polling tentacle needs to check each node in the HA cluster. Imagine you had a three-node cluster:

  • Node 1
  • Node 2
  • Node 3

The polling tentacle would need to check Node 1 for work, then Node 2, then Node 3. When a deployment is kicked off, it is placed into the task queue, and any node in the HA cluster can pick up the work. The polling tentacle needs to see which node picked up that work, and the only way to do that right now is to query each node in the HA cluster.

To do that, you need to configure the tentacle to poll multiple nodes. I’ll spare you the reading; the command to register additional nodes on your tentacle is:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6

There is one little problem with polling tentacles and an AWS load balancer (or load balancers in general). For this to work, there has to be some way for the polling tentacle to connect directly to a node. That leaves us with a bit of a catch-22: we want to use a load balancer, yet we need a way to connect directly to a specific server through that load balancer.

What we are going to do is expose a different port per HA node:

  • Node 1: Port 10943
  • Node 2: Port 10944
  • Node 3: Port 10945

Picking the right kind of load balancer is what trips up most people. For polling tentacles to work, you will need to select the Network Load Balancer option. Polling tentacles won’t work very well with an Application Load Balancer, because they connect over HTTP/HTTPS (for tentacle registration) as well as a raw TCP port, and an Application Load Balancer only handles HTTP/HTTPS traffic.
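If you prefer to script this step, a minimal sketch using the AWS CLI could look like the following. The load balancer name, scheme, and subnet IDs below are placeholder assumptions; adjust them for your own VPC.

rem Create the Network Load Balancer (name and subnet IDs are placeholders)
aws elbv2 create-load-balancer ^
    --name octopus-nlb ^
    --type network ^
    --scheme internet-facing ^
    --subnets subnet-aaaaaaaa subnet-bbbbbbbb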

After the load balancer has been created, you’ll need to create a target group for each of your HA nodes for the polling tentacles to connect to. Each of these target groups will use port 10943.

But each of these target groups will only have one target (EC2 instance) in it: the node it represents.

The health check will be a standard HTTP health check; it will hit the /api endpoint or the /api/octopusservernodes/ping endpoint.

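For reference, here is a hedged AWS CLI sketch of creating one of those per-node target groups and registering its single EC2 instance. The names, VPC ID, instance ID, and ARN are placeholders; repeat the same two commands for Node 2 and Node 3.

rem Target group for Node 1 only; the health check hits the Octopus API over HTTP
aws elbv2 create-target-group ^
    --name octopus-node1-polling ^
    --protocol TCP --port 10943 ^
    --vpc-id vpc-aaaaaaaa ^
    --target-type instance ^
    --health-check-protocol HTTP ^
    --health-check-port 80 ^
    --health-check-path /api/octopusservernodes/ping

rem Register the single EC2 instance hosting Node 1
aws elbv2 register-targets ^
    --target-group-arn <node1-target-group-arn> ^
    --targets Id=i-aaaaaaaaaaaaaaaaa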

This is different from the target group for the UI. That target group will use port 80 or port 443 and have all the EC2 instances in your cluster in it.
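The UI target group can be scripted the same way. A rough sketch, again with placeholder values, and assuming SSL/TLS is off-loaded at the load balancer so the nodes serve the portal on port 80:

rem Target group for the Octopus web UI; all three nodes get registered here
aws elbv2 create-target-group ^
    --name octopus-web-ui ^
    --protocol TCP --port 80 ^
    --vpc-id vpc-aaaaaaaa ^
    --target-type instance ^
    --health-check-protocol HTTP ^
    --health-check-port 80 ^
    --health-check-path /api/octopusservernodes/ping

aws elbv2 register-targets ^
    --target-group-arn <web-ui-target-group-arn> ^
    --targets Id=i-aaaaaaaaaaaaaaaaa Id=i-bbbbbbbbbbbbbbbbb Id=i-ccccccccccccccccc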

Okay, back to the load balancer. We are going to add a listener for each of the ports we want to expose (80 or 443, 10943, 10944, and 10945). For the polling tentacle listeners, the protocol will be TCP and the port will be 10943 (or 10944 or 10945), with each listener forwarding to the target group of the matching node.
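Scripted with the AWS CLI, the polling tentacle listeners might look something like this (the ARNs are placeholders; each exposed port forwards to a single node’s target group):

rem One TCP listener per exposed port, each pointing at one node's target group
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 10943 --default-actions Type=forward,TargetGroupArn=<node1-target-group-arn>
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 10944 --default-actions Type=forward,TargetGroupArn=<node2-target-group-arn>
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 10945 --default-actions Type=forward,TargetGroupArn=<node3-target-group-arn>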

For tentacles, all communication is encrypted. For the UI, we recommend following the AWS documentation on off-loading SSL/TLS connections to the network load balancer.
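One way to do that off-loading, assuming you already have a certificate in AWS Certificate Manager, is a TLS listener on the load balancer that forwards to the port 80 UI target group. This is only a sketch with placeholder ARNs, not the only option:

rem TLS terminated at the NLB; plain HTTP on port 80 between the NLB and the Octopus nodes
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TLS --port 443 --certificates CertificateArn=<acm-certificate-arn> --default-actions Type=forward,TargetGroupArn=<web-ui-target-group-arn>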

Once this is configured, you would run this command for nodes 2 and 3 (replacing the URL and API key, of course):

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server:10944 --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server:10945 --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6