Here’s my situation:
We have started using an Octopus Linux worker with Docker to run deployment steps in parallel, with great success. However, despite passing --rm to our docker run commands, containers from failed deployments hang around. We had around 80 failed deployments last week, so there were a lot of stuck containers.
Through some googling, I have found a command that works flawlessly:
docker container stop $(docker container ls -q --filter "name=C-Deployments") — note the name filter does a substring match, so the trailing wildcard isn’t needed, and quoting it avoids shell globbing. I can run this through the Task Console, targeting the Worker from there. But I can’t seem to do this in either a process step or a Runbook. What, if anything, am I missing? How can I get a process step or a Runbook to target the Default Worker Pool?
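For anyone following along, here’s a slightly more defensive sketch of that cleanup as a script step (hypothetical; it assumes your containers share the C-Deployments name prefix, and it also removes exited containers rather than just stopping running ones):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Containers started by our deployment steps share this name prefix
# (hypothetical; adjust to match your own naming convention).
FILTER='name=C-Deployments'

# -a includes exited containers, -q prints only IDs; "|| true" keeps the
# step from failing outright if docker is unavailable on the worker.
ids=$(docker container ls -aq --filter "$FILTER" 2>/dev/null || true)

if [ -z "$ids" ]; then
  echo "No leftover containers matched filter: $FILTER"
else
  # Stop running containers gracefully first, then remove them all.
  # $ids is deliberately unquoted so the IDs word-split into arguments.
  docker container stop $ids
  docker container rm $ids
fi
```

Pasting something like this into a Runbook script step (rather than shelling out from the Task Console) is what I’m trying to achieve.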
In general, you would control this via the execution location on the script step in your runbook’s process, which is where you specify the worker pool the script runs on. To run it on the Default Worker Pool, just select it from the dropdown here:
I hope this answers your question, but if you have any follow-ups, please let us know!
Appreciate the reply. I’ve got the Runbook step set up the way your screenshot shows; however, when I go to run it, I still have to choose an Environment and a Tenant:
Is this a result of the project in which this runbook has been made? Will it still run on the worker, despite being targeted to a specific environment/tenant?
That should continue to use the worker pool configured on the step, regardless of the environment or tenant you select. We do have the option to use a variable for the worker pool, but if yours is set up like my screenshot above, with the pool set explicitly, it will use that specific value.
If that doesn’t line up with what you’re seeing, feel free to DM me a raw task log from the runbook run, and I’ll take a look to see if anything is amiss.