Limit Concurrent Deployments Across Projects

We have a couple of scenarios where we are limited in the number of concurrent deployments to a particular API, and the limit differs per environment. Unfortunately, we have multiple projects that hit this same API. I understand (from previous threads) that there is no out-of-the-box support for cross-project deployment limitations like this. Is this on the roadmap at all? I think it's definitely a useful scenario.

My other reason for creating this topic is to share some scripts I have been using for the past year to support the scenario above. Admittedly, there is one downside: the tasks that queue up still run from an Octopus perspective, so they count against the node task cap. Other than that, these work perfectly and are flexible enough that we have 4 different "queues" that all use these scripts, with each queue having between 2 and 81 projects.

The process is described below, and if there's any interest, I can share the actual scripts (after scrubbing some internal documentation and references):

  1. The “Lock Mutex” script runs on the Octopus Server and acquires a system-level mutex. The script accepts a parameter denoting the “lock type”, which is just a string. Any projects that share a lock type are limited in their concurrency. For example, one of the lock types we use has the value “CRM-#{Octopus.Environment.Name}”. This allows multiple deployments when going to different environments, but only a single deployment per environment.
  2. After acquiring the system-level mutex, the script dumps several important pieces of deployment data (such as project name, environment name, release number, and deployment ID) into a text file in a designated folder on the Octopus Server. The text file is necessary because a system-level mutex is released as soon as the process holding it ends, which means Octopus would release the mutex the moment the “Lock Mutex” script completes. The text file is named after the lock type; if the file already exists with content, the script sits and waits until the file is deleted or emptied before claiming it.
  3. The rest of the deployment process runs.
  4. The second script, “Unlock Mutex”, just empties the contents of the file, which allows the next deployment task to recognize the empty file and continue down its deployment path.
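To make the flow concrete, here is a minimal Python sketch of the file-based lock/unlock steps. The real scripts run on the Octopus Server (presumably PowerShell) and pair this with a system-level mutex; the folder path, polling interval, function names, and file layout here are all my own illustrative assumptions.

```python
import json
import os
import time

LOCK_DIR = r"C:\OctopusLocks"  # hypothetical designated folder on the Octopus Server
POLL_SECONDS = 15              # hypothetical wait between checks of the lock file


def lock_mutex(lock_type: str, deployment: dict, lock_dir: str = LOCK_DIR) -> str:
    """Wait until the lock file for this lock type is free, then claim it.

    The lock file is named after the lock type; a non-empty file means
    another deployment currently holds the lock.
    """
    lock_file = os.path.join(lock_dir, f"{lock_type}.lock")
    while os.path.exists(lock_file) and os.path.getsize(lock_file) > 0:
        time.sleep(POLL_SECONDS)  # another deployment holds the lock; keep waiting
    # Claim the lock by writing our deployment data into the file.
    with open(lock_file, "w") as f:
        json.dump(deployment, f)
    return lock_file


def unlock_mutex(lock_type: str, lock_dir: str = LOCK_DIR) -> None:
    """Empty the lock file so the next waiting deployment can proceed."""
    lock_file = os.path.join(lock_dir, f"{lock_type}.lock")
    if os.path.exists(lock_file):
        open(lock_file, "w").close()  # truncate to zero bytes
```

Note that a bare check-then-write like this has a race window between the size check and the write; wrapping the check in the system-level mutex, as the described process does, is what closes that gap.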

Some other things to note on these scripts:

  • The “Unlock Mutex” script has to have its run condition set to “Always” so that in the event of a deployment failure it still unlocks the file.
  • In the event of a canceled deployment, the mutex will still unlock, because the “Lock Mutex” script queries the API every 5 minutes to confirm that the deployment ID recorded in the file is still executing. If it finds that deployment is in a completed state but has not properly released the file, it forces a release.
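The stale-lock check in the second bullet can be sketched like this. The Octopus API lookup is injected as a callable so the logic is testable without a live server; the field names, state values, and function names are illustrative assumptions, not the actual script.

```python
import json
import os

# Assumed set of terminal task states reported by the Octopus API.
COMPLETED_STATES = {"Success", "Failed", "Canceled", "TimedOut"}
STALE_CHECK_SECONDS = 300  # the scripts re-check the lock holder every 5 minutes


def release_if_stale(lock_file: str, fetch_task_state) -> bool:
    """Force-release the lock if the deployment that wrote it has finished.

    fetch_task_state takes the deployment/task ID recorded in the lock file
    and returns its current state as reported by the Octopus API. Returns
    True if the lock was force-released.
    """
    if not os.path.exists(lock_file) or os.path.getsize(lock_file) == 0:
        return False  # nothing currently holds the lock
    with open(lock_file) as f:
        holder = json.load(f)
    if fetch_task_state(holder["deployment_id"]) in COMPLETED_STATES:
        # Holder finished (e.g. was canceled) without unlocking; force a release.
        open(lock_file, "w").close()
        return True
    return False
```

A waiting “Lock Mutex” script would call this on each stale-check interval while it polls the lock file.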

Please let me know if any of the above actually makes sense and if anyone would like the scripts.
