Unable to change 'Execution Location' to worker

Hi -

In our current model for SQL releases, deployments are executed by a special tentacle with the role name ‘OctopusProxySQL’. During production releases we have numerous projects running simultaneously, and at any one moment there may be multiple SQL deployments executing. We’re using the ‘Octopus.Acquire.MaxParallelism’ and ‘Octopus.Action.MaxParallelism’ variables to increase the number of simultaneous executions, and it works great; however, we’re running into an issue where several projects run their retention policies at the same time on the ‘OctopusProxy’ (and it looks like the concurrency variables aren’t being used during that process). This creates a queue of server tasks which need to be completed before the overall deployments are considered complete. During some of our larger production releases this can add 10-15 minutes to the deployment times for projects unlucky enough to be at the end of the queue.

As a potential workaround I started to explore the possibility of using workers to deploy SQL scripts; however, for some reason I can’t change the execution location in my test project.

After combing through the step config and template, it’s not clear why this isn’t allowed. Ideally I’d like to change the target to a 4-node worker pool dedicated to deploying SQL changes to all our environments. Is this by design, or am I missing something?

Thanks.

Hi,

First of all, welcome to the Octopus Forums!

Thanks for reaching out.

This is actually by design. ‘Deploy a Package’ steps are designed to be used on Tentacles only, because Workers are transient and selected at random.

You should be able to achieve this by basing your step template on ‘Run a Script’ and referencing a package or using a package parameter. We have many SQL step templates that use workers and this approach, for example: https://library.octopus.com/step-templates/e4a60d6f-036f-425d-a3f7-793034fc0f49/actiontemplate-sql-deploy-dacpac-from-package-parameter
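As a very rough sketch of what that could look like (the package alias DacPac, the SqlPackage call, and the connection string variable below are placeholders for illustration, not anything from your project), a Python-based ‘Run a Script’ step with a referenced package, targeting a worker pool, might be as simple as:

```python
# Sketch of a Python "Run a Script" step body executing on a worker.
# The referenced package alias "DacPac" and the variable names are assumptions.
import os
import subprocess

# get_octopusvariable is provided by Octopus inside Python script steps.
extracted = get_octopusvariable("Octopus.Action.Package[DacPac].ExtractedPath")
conn_str = get_octopusvariable("Project.Database.ConnectionString")  # hypothetical project variable

dacpac = os.path.join(extracted, "Database.dacpac")  # placeholder file name

# Assumes SqlPackage is installed and on the worker's PATH.
result = subprocess.run([
    "sqlpackage",
    "/Action:Publish",
    f"/SourceFile:{dacpac}",
    f"/TargetConnectionString:{conn_str}",
])

if result.returncode != 0:
    raise SystemExit(f"SqlPackage failed with exit code {result.returncode}")
```

The referenced package is configured on the step itself, so Octopus extracts it onto the worker before the script runs and exposes the extracted path through that Octopus.Action.Package variable.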

Please let me know if you think that’s something you are able to implement on your side or if we need to dig in a bit more.

Best,
Jeremy

Bummer. Thanks for the quick reply though. There isn’t much appetite for converting our portfolio to use a new template, but it’s good to know there’s an option.

Does the retention policy process respect the ‘Octopus.Acquire.MaxParallelism’ and ‘Octopus.Action.MaxParallelism’ variables during deployment? Thanks.

Hi @ShannonN,

You’re very welcome. I’m sorry I didn’t have better news for you.

By that, do you mean: will the retention policy still run if you have modified those variables to enable parallelism beyond the default values?

Please let me know.

Best,
Jeremy

So we define the parallelism variables in a library set and scope different values depending on server role. When projects inheriting the variables target the same server role, everything works as expected (concurrent deployments) until the retention policy runs. At that point we see a bottleneck of server tasks which extends the overall deployment time.

It doesn’t appear that the parallelism variables are respected during the retention policy execution, but I wanted to get confirmation.

Thanks.

Thanks for the information.

I’m not sure if this is by design or not, so I would need to talk to our developers. Can I ask what types of tasks it was waiting on? Also, what version of Octopus are you currently on?

Please let me know.

Best,
Jeremy

Hi,

I have a quick question: have you also set the OctopusBypassDeploymentMutex variable in all of the projects that are clashing?

For targets not to wait on a task, that variable will need to be set in every project that is expected to run simultaneously. More information is here: Run multiple processes on a target simultaneously - Octopus Deploy
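If it helps, here is a rough, illustrative sketch (the server URL and variable set name are placeholders, not from your setup) of how you could use the REST API to spot any project that doesn’t include the shared variable set, and therefore might be missing the mutex variable:

```python
# Illustrative only: list projects that do NOT include a shared library
# variable set (placeholder name "Deployment Concurrency").
import os
import requests

OCTOPUS_URL = "https://octopus.example.com"   # placeholder server URL
HEADERS = {"X-Octopus-ApiKey": os.environ["OCTOPUS_API_KEY"]}

all_sets = requests.get(f"{OCTOPUS_URL}/api/libraryvariablesets/all", headers=HEADERS).json()
shared_set = next(s for s in all_sets if s["Name"] == "Deployment Concurrency")

projects = requests.get(f"{OCTOPUS_URL}/api/projects/all", headers=HEADERS).json()
missing = [p["Name"] for p in projects
           if shared_set["Id"] not in p.get("IncludedLibraryVariableSetIds", [])]

print("Projects NOT including the shared set:", missing or "none")
```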

Please let me know.

Best,
Jeremy

Hi Jeremy -

We have ‘Octopus.Acquire.MaxParallelism’, ‘Octopus.Action.MaxParallelism’ & ‘OctopusBypassDeploymentMutex’ defined in a library set available to all our projects, and we scope the values differently depending on the target environment, role, etc. Based on my observations the issue isn’t with the actual execution of the process step; it’s with the retention policy that runs at the end of the release deployment. If more than one project runs a retention policy against the same server, the policies are executed serially (regardless of the parallelism and mutex variables), which causes a queue of server tasks that blocks deployments from finishing.
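For context, here is roughly how that scoping can be checked by pulling the set back over the REST API (the server URL, set name, and values below are placeholders rather than our real configuration):

```python
# Rough sketch: list how the parallelism/mutex variables are scoped in a
# library variable set (placeholder name "Deployment Concurrency").
import os
import requests

OCTOPUS_URL = "https://octopus.example.com"   # placeholder server URL
HEADERS = {"X-Octopus-ApiKey": os.environ["OCTOPUS_API_KEY"]}

all_sets = requests.get(f"{OCTOPUS_URL}/api/libraryvariablesets/all", headers=HEADERS).json()
lib_set = next(s for s in all_sets if s["Name"] == "Deployment Concurrency")

# The variables and their scopes live on the associated variable set resource.
variables = requests.get(
    f"{OCTOPUS_URL}/api/variables/{lib_set['VariableSetId']}", headers=HEADERS
).json()["Variables"]

for v in variables:
    if v["Name"] in ("Octopus.Acquire.MaxParallelism",
                     "Octopus.Action.MaxParallelism",
                     "OctopusBypassDeploymentMutex"):
        print(v["Name"], v["Value"], v.get("Scope", {}))
```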

As a workaround I’m doing a pilot to rework our SQL deployment process to use workers instead of dedicated tentacles, so for now I think we’re good.

One more quick question about how workers use the ‘Octopus.Acquire.MaxParallelism’, ‘Octopus.Action.MaxParallelism’ & ‘OctopusBypassDeploymentMutex’ variables: what values are used when the parallelism and mutex variables don’t have a “default” scope defined?

Thanks for the help!

Hey Shannon,

Thanks for the update. I had some discussions with others, and it makes sense that the retention policy requires the mutex: each run has to modify the DeploymentJournal, so by nature they can’t do it at the same time. Retention in general should be very fast though, so I’m surprised you noticed it. Were the other tasks it was waiting on something other than retention?

I’m more curious for myself and other potential users that hit this in the future.

As for the default values, the parallelism variables default to 10.

Please let me know, and if we don’t talk, I hope you have a great rest of your week.

Best,
Jeremy

Thanks. In our environment we have a dedicated tentacle for SQL deployments, so the work folders contain a large number of files. Given that we have hundreds of projects that deploy SQL changes and auditing requires that we keep releases for one year, the retention policy takes a while to run. The pilot I’m working on (moving deployments to worker pools) should help.

Just to be sure I’m understanding the parallelism and mutex variables correctly, is the default for workers 10 regardless of the scoping? Is that configurable, and if so, how?

Thanks again!

Hey Shannon,

You’re very welcome!

Yeah, that explains why your retention may be clashing. If you need any advice on anything specific while converting your process to workers, I can have our solutions team take a look.

This section in our docs calls out the parallelism in workers. The default is 10, and you can configure it in a very similar way to Tentacles: Workers - Octopus Deploy

Please let me know what you think.

Best,
Jeremy

Thanks Jeremy. We’re good for now. Appreciate the help.


You’re very welcome! Please let me know if you have any other questions as you move along with the transition.

Best,
Jeremy
