Duplicate Octopus Deployment targets within an Octopus Environment


We are deploying database updates from a single server (with a Tentacle installed) to all our SQL Servers via a dacpac, changing the connection string (stored in an Octopus variable) per target. This works fine where there is a single SQL Server deployment target within an Octopus environment, as an Octopus variable can be scoped to hold the corresponding connection string. However, where we have multiple SQL Servers within one Octopus environment we can't use variable scoping, because the deployment target and Octopus environment are the same for all of them.

To get around this, I’ve created duplicate deployment targets for the environment (with distinct names), all pointing at the same server. This way the variables can be scoped correctly.

I can’t see any reference to this in the documentation, so I want to check whether this is the correct way to manage this scenario?



Hi Chris,

Thanks for reaching out. I’ll need a bit more information about your scenario and needs to give you a proper answer. A possible strategy could be:

Let’s say you have 3 SQL databases that you want to deploy to every time you deploy to a specific Octopus Environment. You could have one deploy step for each SQL database (see steps.jpg), where each step uses the variable ConnectionString. Then in your project variables you would create the variable ConnectionString once per database deploy step, and each copy would have a different connection string and be scoped to a different deploy step. This would save you from having to create a deployment target per DB like you are doing right now.
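To make the idea concrete, here is a small Python sketch of how step-scoped variable resolution behaves conceptually (this is illustrative only, not Octopus internals; the step names and connection strings are made up):

```python
# Illustrative sketch: one ConnectionString variable per deploy step,
# each copy scoped to the step it belongs to.
variables = [
    {"name": "ConnectionString", "value": "Server=QA-SQL1;Database=App", "step": "Deploy DB 1"},
    {"name": "ConnectionString", "value": "Server=QA-SQL2;Database=App", "step": "Deploy DB 2"},
    {"name": "ConnectionString", "value": "Server=QA-SQL3;Database=App", "step": "Deploy DB 3"},
]

def resolve(name, step):
    """Return the value of the variable copy whose scope matches the running step."""
    for v in variables:
        if v["name"] == name and v["step"] == step:
            return v["value"]
    raise KeyError(f"{name} is not scoped to step {step!r}")
```

Each step references the same variable name, and the scope decides which value wins at deploy time.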

Let me know if this approach would work in your case. If you think it won’t, tell me a bit more about your needs and, if possible, send a screenshot of your current working deployment process so I can see all of your steps.

Best regards,


Thanks for that. The problem is that for any environment there might be one or more SQL Servers to deploy to (e.g. the Octopus QA environment might contain the QA1 and QA2 physical environments, and UAT might contain UAT1, UAT2 and UAT3). All of these SQL Server deploys use the same NuGet package containing a dacpac. Because of this we can’t set up duplicate Octopus process steps, as the number of targets is variable.

I’ve also noticed that the solution I’ve documented, creating duplicate targets, has the drawback of only running the deployments sequentially. The log (attached) shows the Tentacle on the target waiting for the previous job to complete.

Another consideration is that we’re trying to predominantly use Octopus variable sets to control variables. I’ve written a PowerShell script to export these out of Octopus so we can source-control changes. Because variable sets only scope on environments, roles and deployment targets, we can’t scope by deployment step with this approach.
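For reference, the core of the export is along these lines (shown here as a simplified Python sketch rather than the actual PowerShell; the endpoint paths are the Octopus REST API ones as I understand them, so verify against your server’s /api documentation):

```python
import json
import urllib.request

# Sketch: export Octopus library variable sets into a stable,
# diff-friendly form for source control. Server URL and API key
# handling are placeholders to adapt to your setup.

def fetch_json(server, path, api_key):
    """GET a JSON resource from the Octopus REST API."""
    req = urllib.request.Request(
        f"{server}{path}", headers={"X-Octopus-ApiKey": api_key}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def flatten_variable_set(payload):
    """Reduce a /api/variables/{id} payload to a sorted list for clean diffs."""
    rows = [
        {
            "Name": v["Name"],
            "Value": v.get("Value"),
            "Scope": v.get("Scope", {}),
        }
        for v in payload.get("Variables", [])
    ]
    return sorted(
        rows,
        key=lambda r: (r["Name"], json.dumps(r["Scope"], sort_keys=True)),
    )
```

Sorting by name and serialized scope keeps the exported file stable between runs, so only real changes show up in version control.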

I’m thinking the best option is to bite the bullet, install Tentacles on all the SQL Servers and manage this the way Octopus intended? Having a centralised Tentacle agent doesn’t seem to quite fit the way Octopus works.



Duplicate targets below: SQL_Deploy_QA1 and SQL_Deploy_QA2 point at the same Tentacle URL

Log of sequential running of deploys:


Hi Chris,

Thanks for sharing that info. I can totally see why my approach would not work in your case. Following the Octopus way and using multiple Tentacles might be the way to go here.

One thing you could do to ease the Tentacle management process is to keep using a central Tentacle machine, but with multiple Tentacle instances. Your environments will still have many targets, but they will all be running from the same VM. You could even script the Tentacle provisioning to speed up the process in case you need to add more Tentacles in the future.
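As a starting point for scripting that provisioning, something like the following could generate the Tentacle.exe command sequence for N instances on one VM (a sketch only: the command names are from the Tentacle command line as I recall it, and the paths, ports and instance names are assumptions to adapt):

```python
# Sketch: build the Tentacle.exe commands needed to provision several
# listening Tentacle instances on a single VM, one port each.
TENTACLE = r'"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe"'

def provision_commands(instance_names, base_port=10933):
    """Return the command lines to create, configure and start each instance."""
    cmds = []
    for i, name in enumerate(instance_names):
        port = base_port + i  # each instance listens on its own port
        cmds += [
            f'{TENTACLE} create-instance --instance "{name}" '
            f'--config "C:\\Octopus\\{name}\\Tentacle.config"',
            f'{TENTACLE} new-certificate --instance "{name}"',
            f'{TENTACLE} configure --instance "{name}" --port {port} --noListen "False"',
            f'{TENTACLE} service --instance "{name}" --install --start',
        ]
    return cmds
```

You would then run the generated commands on the VM (and register each instance with the Octopus Server), giving each instance a distinct name such as SQL_Deploy_QA1 and SQL_Deploy_QA2.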

Regarding the sequential deployments, there’s a setting you can modify to override this behavior and force the Tentacle to run multiple processes at the same time: http://docs.octopusdeploy.com/display/OD/Run+multiple+processes+on+a+Tentacle+Simultaneously

Hope that helps,

Thanks. I think installing Tentacles on each of the target SQL Server VMs is the way to go here.

Interestingly, I did try setting the variable OctopusBypassDeploymentMutex to True, and the processes still waited their turn. I think this might be because it’s the same step running twice on the same deployment server (a consequence of the way I’ve duplicated the deployment targets within one Octopus environment). From what you say, the tool isn’t designed to be used like this, and hence the OctopusBypassDeploymentMutex setting does not behave as intended.