We had an idea that we wanted to run by you. As an example, we have two servers and thirty websites of our software deployed to each of those servers (60 sites total). Right now, our sites are set up as tenants and tagged with “Server 1” or “Server 2”, per the server they are on. While we can tell “Server 1” and “Server 2” to deploy at the same time, doing so puts a higher load on each server, since we are upgrading all thirty sites on each server at once.
What we were looking to accomplish is that, while we still tag our sites per the server they are on, we would also use a “wave” tag. Fifteen sites on “Server 1” and fifteen sites on “Server 2” would be “Wave 1”; the remaining fifteen on each would be “Wave 2”. We would like the deployment to be smart enough that even though we schedule “Wave 1” and “Wave 2” at the same time, “Wave 2” can’t start until all of “Wave 1” is finished. We could use a scheduled deployment for “Wave 2”; however, there could be idle time between the two where no deployments are running, since “Wave 1” could be finished before “Wave 2” is scheduled to start. This would reduce the load on our servers and hopefully allow our deployments to run a bit faster.
Essentially, we are looking for a way that we can tell the deployment to do “Wave 1” first and then dynamically start “Wave 2” when all of “Wave 1” is complete without having to use a scheduled or manual deployment for “Wave 2”. Is this possible?
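The wave split described above could be sketched like this (an illustrative Python sketch only; the server names and wave size are placeholders, not anything Octopus-specific):

```python
def assign_waves(sites_by_server, wave_size):
    """Split each server's sites into two waves: the first `wave_size`
    sites on each server get "Wave 1", the rest get "Wave 2"."""
    waves = {"Wave 1": [], "Wave 2": []}
    for server, sites in sites_by_server.items():
        waves["Wave 1"].extend(sites[:wave_size])
        waves["Wave 2"].extend(sites[wave_size:])
    return waves

# 30 sites per server, split 15/15: each wave then holds 15 sites
# from Server 1 plus 15 from Server 2.
waves = assign_waves(
    {"Server 1": [f"s1-site-{i}" for i in range(30)],
     "Server 2": [f"s2-site-{i}" for i in range(30)]},
    wave_size=15,
)
```

With this split, each wave touches both servers, so each server only ever upgrades fifteen sites at a time.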
Automation & Installation Engineer
Cartegraph Systems, LLC.
Thanks for getting in touch!
Unfortunately, I do not have a solid solution for your scenario. Could I ask what problem you are trying to solve? For example, is it that you don’t want 30 deployments running against the same server, which might slow down server performance?
Looking forward to hearing from you soon!
We were brainstorming possible ways to make our software upgrades faster. We already have a good process in place, but we are looking to see if we can shave off a few seconds here or there. It doesn’t mean much from the perspective of one deployment, but when running hundreds of them, it adds up to something significant.
When we initially deploy our application for a client, only 1 or 2 deployments are running on a server at a time. When we upgrade, we upgrade all 30 on a server at once, and each of the deployments runs in parallel.
We’ve noticed that the process causes very high CPU usage on the Tentacle server, which could be due to the steps we are running, or to Octopus itself. That high CPU usage could slow some of the steps down.
We were looking to see if we could start the deployment for all 30 sites at once, but only have 15 run at a time while the other 15 wait. Once the first 15 are completed, the second 15 run. We could use scheduled deployments for the second 15, but that could leave us with some “idle” time, where the first 15 are done but the second 15 haven’t started, since we haven’t reached the scheduled deployment time yet. We could also manually trigger the second 15 once we know the first 15 are done, but that would require somebody to be logged into Octopus when the last deployment of “Wave 1” finishes to kick off the second 15. We were hoping that we could send a deployment to all 30, but let the first 15 finish and then dynamically start the second 15, without a scheduled deployment or manual intervention.
That looks interesting to me. By default, Octopus will only run one process on each target at a time, queuing the rest. So even if there are 30 deployments, they will run one by one. Did you by any chance set OctopusBypassDeploymentMutex for the project? You can find more information about the deployment mutex here.
Yes, we do have that setting set to “true”. That also brought me back to a few settings that we changed many releases ago: Octopus.Acquire.MaxParallelism and Octopus.Action.MaxParallelism. We have both of those set to 1000. I know that the Octopus.Acquire.MaxParallelism setting applies to the Tentacle on a machine, but the Octopus.Action.MaxParallelism one applies across machines.
Is there a setting like Octopus.Action.MaxParallelism that would instead apply to the Tentacle on a specific machine, limiting the number of concurrently running tasks on that machine rather than across machines? Or am I not interpreting that correctly?
What is the current task cap on the node (https://octopus.com/docs/support/increase-the-octopus-server-task-cap)? By default it is 5, and the Tentacle only runs the tasks it is given.
Octopus.Action.MaxParallelism is also used for rolling deployments and can be overridden with a project variable; it controls how many deployment targets can run concurrently. Thinking out loud: if you set it to 1, Server 1 and Server 2 will not run concurrently. Would that suit you?
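To illustrate what a MaxParallelism-style cap does, here is a toy model in Python (this is only a sketch of the concept, not Octopus’s implementation): the worker-pool size bounds how many targets are deployed to at once, so a value of 1 would serialize the targets.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def run_deployments(targets, max_parallelism):
    """Run fake deployments with at most `max_parallelism` in flight;
    return the peak number observed running concurrently."""
    peak = 0
    active = 0
    lock = threading.Lock()

    def deploy(target):
        nonlocal peak, active
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # stand-in for the real deployment work
        with lock:
            active -= 1

    # The pool size plays the role of the MaxParallelism cap.
    with ThreadPoolExecutor(max_workers=max_parallelism) as pool:
        list(pool.map(deploy, targets))
    return peak
```

With `max_parallelism=1`, the 30 targets would be processed strictly one at a time; with 15, two “batches” worth of work overlap freely but never more than 15 at once.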
I have also attached the system variables in case you would like to dig in a bit more.
I hope this helps!
We have our task cap set to 120 concurrently running tasks.
Unfortunately, setting Octopus.Action.MaxParallelism to 1 as a project variable isn’t really what we are looking for. What we are wondering is whether there is a way to cap a specific Tentacle service so that it only runs a certain number of concurrent tasks at a time.
An example: I have Server 1 and Server 2, each with a Tentacle service installed. I want Server 1 to run 15 concurrent tasks at a time, even though I triggered 30 deployments for it. For Server 2, I want it to run all 30 tasks concurrently, all triggered at once. Both are using the same project and release. Is there a limit that can be set on Server 1’s Tentacle service itself to restrict the number of concurrently running tasks?
Currently the best way to support what you are doing is through the Task Cap in Octopus. You might set this to, say, 15 tasks. This won’t deploy in “waves”, but it will keep the number of concurrent deployments below 15. There are some obvious limitations to this approach; in particular, the setting is global to the whole Octopus node, so it affects other projects and deployments.
We don’t yet have a way to configure the degree of parallelism on a more granular basis, such as per Tentacle. We have some plans around redesigning our task scheduling system to enable more configuration in this area. Some of our ideas are captured in this GitHub issue; keep an eye on it going forward.
In the meantime, the only other course of action you could take is custom scripting against our API. The multiple-wave approach might be the best option, but unfortunately it is not a trivial thing to set up with a custom script. I would only do this as a last resort, as it is the most complex solution and puts a larger maintenance burden on you. Let me know if you want to go down this avenue and need any help scripting against the API.
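For reference, a wave orchestrator against the Octopus REST API might look roughly like the sketch below. This is a hedged outline only: the server URL and API key are placeholders, and the endpoint paths, task states, and deployment payload fields are assumptions to verify against your server’s API documentation before use.

```python
import json
import time
import urllib.request

OCTOPUS_URL = "https://your-octopus-server"  # placeholder
API_KEY = "API-XXXXXXXX"                     # placeholder

def api_get(path):
    # Assumed auth header for the Octopus REST API.
    req = urllib.request.Request(
        OCTOPUS_URL + path, headers={"X-Octopus-ApiKey": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wave_finished(task_states):
    """True once every Wave 1 task has reached a terminal state."""
    terminal = {"Success", "Failed", "Canceled", "TimedOut"}
    return all(state in terminal for state in task_states)

def wait_for_wave(task_ids, poll_seconds=30):
    """Poll the Wave 1 server tasks until all are terminal."""
    while True:
        states = [api_get(f"/api/tasks/{t}")["State"] for t in task_ids]
        if wave_finished(states):
            return states
        time.sleep(poll_seconds)

def deploy_tenant(release_id, environment_id, tenant_id):
    """Create one Wave 2 tenant deployment (payload shape is an
    assumption -- check it against your server's /api docs)."""
    body = json.dumps({
        "ReleaseId": release_id,
        "EnvironmentId": environment_id,
        "TenantId": tenant_id,
    }).encode()
    req = urllib.request.Request(
        OCTOPUS_URL + "/api/deployments", data=body,
        headers={"X-Octopus-ApiKey": API_KEY,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The overall flow would be: trigger the Wave 1 deployments, collect their server task IDs, call `wait_for_wave`, then call `deploy_tenant` for each Wave 2 tenant. You would also want to decide how to handle a failed Wave 1 task before releasing Wave 2.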
Hope that helps and sorry this doesn’t work out of the box right now.
Thanks for the update and the helpful information! We weren’t expecting that there would be a solution, but we figured that we would ask.
That Github issue is exactly what we are looking for. We will keep monitoring it to see if any of the changes become part of the product.
We thought about using the API as a last resort but have steered clear of that.
We will let you know if we have any other questions.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.