One deployment process task per worker

server
usability
(Jon Vaughan) #1

Last week we raised the node task cap on both servers from 5 to 10, giving us a combined task cap of 20.

Around the same time, we noticed a few PowerShell issues suggesting that Octopus workers were running multiple tasks on them in parallel.

This was confirmed yesterday when a PowerShell script that creates its own working directory was hit by two deployments of the same project being kicked off into two separate environments at the same time.

They both ran on the same worker, and we had a file clash in the working directory as one release was adding files while the other was trying to remove them at the same time.

This isn’t the behaviour we want. How do we configure Octopus so that one worker only processes one task at a time?

In the meantime, I have dropped the cap count back down and modified the task so it can safely run in parallel. But this isn’t the behaviour I was expecting; I’m looking to have one worker run one task.

How can I configure this, please?

(Kenneth Bates) #3

Hi Jon,

Thanks for getting in touch! When using the default working directories, we have safeguards in place to prevent these file clashes, though since you’re creating your own working directories, I can see a couple of options.

  1. Include the environment name variable (#{Octopus.Environment.Name}) in the working directory path your script creates, to ensure this kind of collision doesn’t happen.

  2. Set the task cap to exactly match your worker count.
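Option 1 above can be sketched as follows. This is a minimal Python sketch of the idea only; the actual steps in this thread would be PowerShell, and `make_working_dir` and its parameters are illustrative, not an Octopus API. In a real script step the environment and project names would come from the #{Octopus.Environment.Name} and #{Octopus.Project.Name} variables.

```python
import os
import tempfile

def make_working_dir(environment: str, project: str) -> str:
    # Include the environment (and project) name in the path so that two
    # simultaneous deployments of the same project to different environments
    # never share a working directory.
    path = os.path.join(tempfile.gettempdir(), "deploy-work",
                        f"{project}-{environment}")
    os.makedirs(path, exist_ok=True)
    return path
```

Because the environment name is baked into the path, two deployments of the same project to different environments land in different directories even when they run on the same worker.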

Do either of those options get this working as intended/required? Let me know what you think or if you have any further questions or concerns going forward. :slight_smile:

Best regards,

Kenny

(Jon Vaughan) #4

Hey Kenny,

Point 1.)
The clashing working directory was just down to our bad code and has been addressed, but it highlighted a misconception we had held that it was one worker, one task.

Point 2.)
Will setting the task cap to match our worker count solve the problem? Our workers are divided as follows:
lower environments (6 workers)
upper environments (4 workers)

Let’s say we have 6 tasks running in our lower environments and a task cap of 10. A new project is deployed into the lower environments; we are still under the task cap of 10, so it will run on a worker that is already running another task, causing the parallel execution issue we are looking to avoid?

Thanks
Jon

(Jon Vaughan) #5

Hi Octopus,

Do you have anything to enforce one task per worker?

I have seen now and again when running a deployment/health check/tentacle upgrade that Octopus says it’s waiting for a task to finish before the current one starts. So it feels like there is something there.

Thanks
Jon

(Jon Vaughan) #6

Just bouncing this ticket for a response.

(Kenneth Bates) #7

Hi Jon,

Thanks for responding again on this thread to resurrect its visibility! My sincere apologies that this one fell through the cracks.

It sounds like you might be hitting a previously known and fixed bug that caused tasks to hang, similar to what you’ve described, which was a result of incorrectly trying to acquire the machine-level isolation lock. Can you let me know which version of Octopus you’re currently experiencing this issue on? This specific bug (linked below) was fixed in 2019.8.5, so if applicable in your scenario I’d be interested to hear if upgrading fixes the issue.

I look forward to hearing back!

Best regards,

Kenny

(Jon Vaughan) #8

Hi Kenny,

Thanks for the reply.

Our Octopus is 2019.12.1 LTS.

Also, our problem isn’t one of locked tasks; it’s the opposite: contrary to what we thought, we are seeing workers processing multiple tasks in parallel.

An example would be that we have steps that install and uninstall PowerShell modules, and we don’t want to be in a position where one task is running and another task comes along and starts to uninstall resources the first task is using.

So in an ideal world there would be a setting that forced workers to pull tasks sequentially.

Thanks
Jon

(Jon Vaughan) #9

Hey Kenny,

Sorry to keep digging this one up, but it’s really important that we get to the bottom of this. Our code has in places been written on the understanding that workers run one task at a time, but from what we have seen this doesn’t appear to be the case. Before we start refactoring a lot of our step templates, which is a big job, we would like to see what other options we have.

I think from this document I can see that workers do process tasks in parallel:


“and a single worker can run multiple actions in parallel.”

questions:

1.) Can we switch off the ability for a worker to run tasks/actions in parallel? Before workers, we had steps that installed modules, deployed, and cleaned up after themselves, such as deleting modules. However, this process doesn’t work if steps can run in parallel, as on the worker one task might be using the module while another is removing it.

2.) If the ability to run multiple actions/tasks in parallel can’t be switched off, do you have any suggestions to work around it? Refactoring the steps/inline scripts is a lot of work.

Thanks
Jon

(Kenneth Bates) #10

Hi Jon,

Thanks for bringing this one back up, and my apologies about this one slipping through the cracks. I brought this good question up to my team, and while we could think of some other potential options, unfortunately I don’t think there’s a way in Octopus to handle this in a great way.

The first and probably best of the not-great solutions would be to run this on a Tentacle instead; unlike Workers, Tentacles by default won’t run these tasks in parallel.

Some other options might be to configure some very custom worker pools so a pool is dedicated to the process, i.e. one pool per environment per project, but that’s certainly not pretty. You could possibly also write a custom mutex using some sort of marker file on the system, or rewrite the scripts to take an exclusive lock around these sensitive bits.
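The marker-file mutex idea could look roughly like this. It is a Python sketch of the pattern only (the real steps in this thread would be PowerShell, where a named System.Threading.Mutex would serve the same purpose), and `MarkerFileLock` and its parameters are hypothetical names, not anything Octopus provides. The key is that creating a file with the exclusive-create flag can only succeed for one process at a time.

```python
import os
import time

class MarkerFileLock:
    """Cooperative mutex built on an exclusively-created marker file.

    Each task tries to create the marker with O_EXCL; only one creation
    can succeed, so other tasks on the same worker poll until the holder
    deletes the file (or they hit the timeout)."""

    def __init__(self, path: str, timeout: float = 60.0, poll: float = 0.5):
        self.path = path
        self.timeout = timeout
        self.poll = poll

    def __enter__(self):
        deadline = time.monotonic() + self.timeout
        while True:
            try:
                # O_CREAT | O_EXCL fails with FileExistsError if the
                # marker already exists, i.e. another task holds the lock.
                fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return self
            except FileExistsError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"could not acquire lock {self.path}")
                time.sleep(self.poll)

    def __exit__(self, *exc):
        # Release the lock by removing the marker file.
        os.remove(self.path)
```

A step would then wrap its sensitive section (e.g. installing or uninstalling a module) in `with MarkerFileLock(...)`, so a second task on the same worker blocks until the first finishes. Note this only protects tasks that all use the lock; it doesn't stop Octopus itself from scheduling tasks in parallel.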

I’m sorry it’s not a better answer, but I hope this helps in some way. Let me know what you think or if you have any further questions or concerns moving forward at all. :slight_smile:

Best regards,

Kenny