We have not used workers and I have just added a worker to the Default Worker Pool for the purpose of running deployment steps that make use of Docker containers.
The immediate side effect was that all of our old deployment projects seem to fail whenever the worker is offline (unhealthy).
Why? I wouldn’t expect those to be related, and if the worker is unhealthy, shouldn’t the “work” fall back to the Octopus Server? How can I stop this behaviour from adversely affecting deployment projects that were never set up to use workers?
Also, can I force execution on the Octopus Server’s built-in worker, instead of a worker in the Default Worker Pool, for steps I choose?
Thanks for your help!
Thanks for getting in touch, and sorry to hear that you ran into this behaviour.
What happened here is that as soon as you add a worker to the default worker pool, it disables the built-in worker (formerly “run on server”), so all tasks that would normally run on the server will instead attempt to run on your worker. We strongly recommend setting up a separate worker pool for this worker, as that allows you to keep the built-in worker and select it as a runtime option for any step.
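The dispatch behaviour described above can be sketched in a few lines of Python. This is purely illustrative (it is not Octopus source code, and the worker name is hypothetical): once any worker is registered in the Default Worker Pool, “run on server” work is routed to the pool instead of the built-in worker, even if those workers are offline.

```python
# Illustrative model of the behaviour described above, not Octopus code.
from dataclasses import dataclass, field

@dataclass
class WorkerPool:
    name: str
    workers: list = field(default_factory=list)  # external workers registered in the pool

def resolve_run_on_server_target(default_pool: WorkerPool) -> str:
    """Where does a 'run on server' step actually execute?"""
    if default_pool.workers:
        # Built-in worker is disabled; work goes to the external workers,
        # even if they happen to be offline/unhealthy.
        return f"worker pool '{default_pool.name}'"
    return "built-in worker (Octopus Server)"

print(resolve_run_on_server_target(WorkerPool("Default Worker Pool")))
# built-in worker (Octopus Server)

print(resolve_run_on_server_target(WorkerPool("Default Worker Pool", ["docker-worker-01"])))
# worker pool 'Default Worker Pool'
```

This is why the old projects began failing: they silently switched from the built-in worker to the unhealthy external worker.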
Initially we didn’t highlight this change enough when a worker was added to the default pool, so as of Octopus 2018.7.13 we added a warning to the UI alerting you that adding the worker will alter the default behaviour of Octopus.
Once you have multiple worker pools, you can configure a step to execute on your Octopus Server by choosing the relevant run-on-worker option (or Run on a worker on behalf of each deployment target) and selecting the
Default Worker Pool as your worker pool.
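A minimal sketch of that resolution logic, under the assumptions of this thread (the “Docker Workers” pool name is hypothetical): an empty Default Worker Pool means the built-in worker handles the step, while a step pointed at a pool containing external workers runs there.

```python
# Illustrative model of per-step pool selection, not Octopus code.
def resolve_step_execution(selected_pool: str, pool_workers: dict) -> str:
    """Return where a worker-based step will actually run."""
    workers = pool_workers.get(selected_pool, [])
    if not workers and selected_pool == "Default Worker Pool":
        # No external workers registered => the built-in worker stays active.
        return "built-in worker (Octopus Server)"
    return f"external worker in '{selected_pool}'"

# 'Docker Workers' is a hypothetical separate pool for the Docker steps.
pools = {"Default Worker Pool": [], "Docker Workers": ["docker-worker-01"]}
print(resolve_step_execution("Default Worker Pool", pools))
# built-in worker (Octopus Server)
print(resolve_step_execution("Docker Workers", pools))
# external worker in 'Docker Workers'
```

Keeping the Default Worker Pool empty and placing the Docker worker in its own pool is what lets old projects keep running on the server while new steps target the external worker.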
Apologies that you had a bad experience with workers, please let me know if you have any feedback regarding our UI or warnings here!
Thank you for your response.
After posting my question here, I have kept the new worker I added to the
Default Worker Pool disabled. This seems to re-enable the built-in worker on the Octopus Server, and I have had no issues.
Next, I tried to create a new worker pool; if I understand correctly, putting my new worker into a pool other than the
Default Worker Pool means the built-in worker won’t get disabled…
Unfortunately, I am not able to create a new worker pool, because I get:
There was a problem with your request.
You cannot create another worker pool.
This will exceed the limits of your current license.
Please contact firstname.lastname@example.org to upgrade your licence
So I am a bit confused here: on the one hand, I don’t need multiple worker pools, I just thought I could use one as a workaround, but I can’t really, because of a licensing restriction?
How can I use a worker in a new deployment project’s steps and still have the built-in Octopus Server worker running steps in old/existing deployment projects?
Thanks for your feedback on this one. You are correct that if you place your worker in a new worker pool, it won’t disable the built-in worker, but, as you encountered, you are restricted by your license here.
This sparked an internal conversation: we have been too restrictive with workers on our old license model, to the point that they are unusable. We will update the license model shortly (we’re not yet 100% sure in which release) to allow for the built-in worker (aka
run on server) and one external worker in its own worker pool. This will allow you to use a worker to perform the Docker deployments as you were planning.
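The licensing change can be sketched as a simple pool-count check. The limits below are assumptions mirroring this thread (old license: Default Worker Pool only; updated license: one additional pool), not values taken from Octopus’s licensing code.

```python
# Illustrative sketch of the relaxed license limit, not Octopus code.
def can_create_worker_pool(existing_pools: int, licensed_max: int) -> bool:
    """True if the license permits creating one more worker pool."""
    return existing_pools < licensed_max

OLD_LICENSE_MAX = 1  # Default Worker Pool only -> "cannot create another worker pool"
NEW_LICENSE_MAX = 2  # Default Worker Pool + one external pool

print(can_create_worker_pool(1, OLD_LICENSE_MAX))  # False: the error hit above
print(can_create_worker_pool(1, NEW_LICENSE_MAX))  # True: the workaround becomes possible
```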
Please keep an eye out on our release notes so that you are aware when this change is made, and thanks for bringing this to our attention.
I shall keep an eye on the upcoming releases, waiting for this change.
It is certainly a move in the right direction.
Just wanted to let you know that this change has shipped in Octopus 2018.9.2; if you upgrade, you can create a second worker pool and add a worker to it!
Let me know how you go, feel free to get in touch if you have any questions,
I appreciate the quick turnaround and the delivery of this change in the recent version.
It did resolve my issue - thank you for your help!
That’s great news, glad we were able to get this sorted for you!
Let me know if there is anything else you need,