Octopus Deploy Worker Process in V2018.7.4

In our organization we have Octopus v2018.7.4 in High Availability mode with 2 server nodes.
We set up the external worker by running the following command:

.\Tentacle.exe register-worker --instance MyInstance --server "https://example.com/" --comms-style TentaclePassive --apikey "API-zzzzzzzzzzzzzzzz" --workerpool "Default Worker Pool"

After running the above command, we noticed in the web portal under Infrastructure --> Worker Pools that two workers had been created, one for each node, with the Tentacle URLs pointing at the specific Octopus Server nodes, e.g. https://<node1>:10953/ and https://<node2>:10953/

We also had an older Octopus Server instance, v2018.4.1 (from before external workers were introduced). We upgraded this instance to v2018.7.4 and ran the same command against it, but this time only a single external worker was created, with its Tentacle URL pointing to https://localhost:10953/.

Our questions:
Our requirement is to run a single external worker on each Octopus Server node in our HA setup. To achieve this:

  • Should we have a single external worker registered with its Tentacle URL pointing to localhost, assuming the HA setup will load-balance across the worker pool internally?
  • Or should we register a separate worker for each server node, each with its own Tentacle URL?


Thanks for getting in touch with your question.

In the default configuration, all your Octopus Servers have what we've called a built-in worker on each server node. So, in your HA cluster, when a deployment gets run, one of the nodes in the cluster will pick up that deployment and orchestrate the whole process. When a step runs that needs a worker (e.g. an Azure or AWS deploy, or a script that runs on the server), the node in your HA cluster that's running the deployment will invoke the built-in worker locally to run the step in our Calamari tool.

If you add external workers, then the system has a view of what workers have been added and it picks (with some load balancing considerations) a worker to execute the task.

The first point to note is that if you have a two-node cluster with a Tentacle registered as a worker on each server node, then yes, the system will distribute the load over those two workers as it sees fit. However, note that this can mean Node A offloads work to the worker running on Node B's machine, which is different from how the built-in worker behaves.

The second point is that localhost might not behave how you expect in this HA context. If there were just one worker registered on localhost, both servers would interpret that address as their own localhost, so each server would try to run tasks on its local Tentacle. The problem comes with the Tentacle thumbprint and certificate: you'd have to jump through some hoops to make those the same on both machines. Beyond that, things like health checks, Calamari upgrades, and Tentacle updates might not work as expected.
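A quick way to see the certificate problem is to compare thumbprints on each node; `show-thumbprint` is part of the standard Tentacle CLI, and the instance name below is the one from your registration command:

```shell
# Run on each HA node. For a shared localhost registration to work,
# both nodes' Tentacles would have to report the same thumbprint,
# which is not the case for independently installed Tentacles.
.\Tentacle.exe show-thumbprint --instance MyInstance
```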

I think if you want a worker on each server node, then the two best options are:

  • register each worker with its own URL (note that in this case each node will also send work to the other node's worker), or
  • use the built-in worker on each node, but configure it to use a different account to the server account, see here
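For the first option, a sketch of what the registration might look like, run once on each node. This assumes `register-worker` accepts the same `--publicHostName` option as `register-with`, and the hostnames are placeholders for your own machines:

```shell
# On node 1:
.\Tentacle.exe register-worker --instance MyInstance --server "https://example.com/" --comms-style TentaclePassive --apikey "API-zzzzzzzzzzzzzzzz" --workerpool "Default Worker Pool" --publicHostName "node1.example.com"

# On node 2:
.\Tentacle.exe register-worker --instance MyInstance --server "https://example.com/" --comms-style TentaclePassive --apikey "API-zzzzzzzzzzzzzzzz" --workerpool "Default Worker Pool" --publicHostName "node2.example.com"
```

With each worker registered against a resolvable hostname rather than localhost, either server node can reach either worker, and health checks and Tentacle updates target the right machine.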

Sorry that answer got a bit long. Hope it helps. If you have any further questions, please get back to me.


Thanks @Michael_Compton . That answers our questions.
