I’m trying to run two deployments to the same environment for different tenants. I want some steps to run only one instance at a time because they affect shared resources, while other steps should be able to run in parallel because they only touch tenant-specific resources.
Step 1: Deploy a Package step, run on targets in the OctopusServer role (there’s one of these), to migrate the tenant database. This can be slow and is independent between tenants, so I want multiple instances of it executing concurrently.
Step 2: Azure Script step, run on the Octopus Server. I want only one instance of this running at a time: Azure rate-limits API calls, and the step crashes if I start too many.
Step 3: Upgrade a Helm Chart step, run on the Octopus Server on behalf of Kubernetes deployment targets (only one k8s cluster, so one target here). I want only one instance of this running at a time, because it configures a shared load balancer.
Step 1 initially ran one at a time, as expected from the docs quoted below. I configured OctopusBypassDeploymentMutex scoped to just step 1, and that seems to have done it: step 1 now runs concurrently and is working as desired.
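For reference, here’s roughly how I have that variable set up (a sketch of my Variables tab; the step name shown is a stand-in for what step 1 is actually called in my project):

```
OctopusBypassDeploymentMutex = True    (scope: Step = "<step 1, the Deploy a Package step>")
```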
Based on “By default, Octopus will only run one process on each deployment target at a time, queuing the rest” from Run multiple processes on a target simultaneously - Octopus Deploy, I expected steps 2 and 3 to allow only one instance at a time by default, but both steps 2 and 3 execute concurrently with other tenants’ deployments. How do I get steps 2 and 3 to queue instead of running concurrently? Thanks for your help!
Have you thought about creating an internal tenant and running steps 2 & 3 only on that tenant, leveraging tenant tags?
I’m just going to do some digging, but you can run multiple tenant deployments to the same environments with some tweaking of Octopus system variables. I’ll get back to you later today with further information.
Thanks for your help. I don’t think tenant tags work well for us. In our case, we do want steps #2 and #3 to run for each tenant because they’re doing different things based on the tenant.
Step #2 configures different Azure resources for each tenant; it’s just that the Azure rate limits are shared across those resources.
Similarly, step #3 configures the same load balancer resource for every tenant, but the configuration differs because it changes different URLs depending on the tenant.
Just to clarify your last sentence: I already have multiple concurrent deployments running now, with step #1 behaving as desired. The problem is that steps #2 and #3 are also running concurrently when I’d like them to queue.
I might have a solution for you, but I haven’t tested it yet. By default, steps running on the Octopus Server (as the built-in worker) run tasks in parallel, which is what you want for your first step. Could you try adding another value to your OctopusBypassDeploymentMutex variable, set to False with no scoping?
I’m hoping this runs the other steps sequentially across the different tenant deployments. I’ll be testing this myself and will report back later.
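Concretely, the idea is to end up with two values on the same variable, something like this (a sketch; your step name will differ):

```
OctopusBypassDeploymentMutex = True     (scope: Step = "<your step 1>")
OctopusBypassDeploymentMutex = False    (no scope, so it applies to everything else)
```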
Thanks, I had the same idea, but it didn’t work when I tested it. Setting OctopusBypassDeploymentMutex=False for everything still allowed steps 2/3 to run while other tenants’ deployments were running their own steps 2/3.
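As a possible stopgap, I’ve been sketching a manual lock at the top of the script for step 2 so that different tenants’ deployments serialize themselves. This is only a sketch, assuming all deployments run the step on the same Windows machine with Python available; the lock path and do_shared_resource_work() are placeholders I made up, not anything from Octopus:

```python
# Sketch: serialize a step across concurrent deployments with a file lock.
# Assumes every deployment runs this step on the same Windows machine, so the
# lock file is visible to all of them. Path and function name are placeholders.
import msvcrt
import os
import time

LOCK_PATH = r"C:\Octopus\Temp\shared-resource.lock"  # arbitrary location
os.makedirs(os.path.dirname(LOCK_PATH), exist_ok=True)

def do_shared_resource_work():
    # Placeholder for the real step body (e.g. the Azure calls in step 2).
    print("running the serialized work for this tenant")

lock_file = open(LOCK_PATH, "a")  # "a" creates the file if it doesn't exist
lock_file.seek(0)
while True:
    try:
        # Lock one byte at offset 0; raises OSError while another deployment holds it.
        msvcrt.locking(lock_file.fileno(), msvcrt.LK_NBLCK, 1)
        break
    except OSError:
        time.sleep(10)  # another tenant's deployment owns the lock; wait and retry

try:
    do_shared_resource_work()
finally:
    msvcrt.locking(lock_file.fileno(), msvcrt.LK_UNLCK, 1)
    lock_file.close()
```

It wouldn’t cover the Helm step (step 3) directly, though, so I’d still much rather have Octopus queue these natively.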
I already had a Tentacle installed on the Octopus Server and added it as a worker too. Is there any reason that would cause problems? I’d rather not add another worker machine if possible.
I think what I’m hearing is that the built-in worker allows parallel tasks, but the Tentacle worker will only execute one task at a time, which would solve the issue with steps 2/3 above. Is that correct? I’m trying to understand how migrating to a worker pool will affect these steps and the others that previously ran on the built-in worker.
I ran into some problems after adding the external worker. With it in place, deployments fail with “Could not find package file: C:\Octopus\Files\[packagename]@S1.2212.2000.0001@A911E515A1242347BC2F4843D1BF022E.nupkg”. Yet at the start of the deploy, the Acquire packages step for the Octopus Server logs “Package OcuveraMasterDbUp version 1.2212.2000.0001 found in cache. No need to upload this 37.897 MB package. Using C:\Octopus\Files\[packagename]@S1.2212.2000.0001@A911E515A1242347BC2F4843D1BF022E.nupkg”. It seems like something is clearing the package cache between acquisition and execution? Switching back to the built-in worker worked around this problem, but that puts me back on the issue where the built-in worker’s concurrency doesn’t behave as expected. Do you have any ideas why it might fail to find a package it just logged as present?
Last week when we talked, you were going to try to reproduce the different concurrency behavior of the built-in worker vs. a dedicated worker. Do you have any updates to share on this?