We have one Tentacle worker installed at a remote location that performs deployments of our NuGet package to all tenants. Our current task cap is set to 4 (so 4 simultaneous tenant deployments on the same worker). I noticed a nice behaviour: when the Tentacle fetches the package from the Octopus feed, it caches it so that the other 3 simultaneous threads don't download it again. This is extremely important because our package is over 800 MB, so the download takes time.
This is really nice because it saves time and bandwidth.
Now, we’d like to take this one step further, because the extraction of such a big package (with a lot of small files in it) also takes 1-2 minutes. To make things worse, the anti-virus (if enabled) extends this to 5-7 minutes, and that applies to every tenant deployment. Given that we have ~20 tenants, the total deployment time is extremely long. Unfortunately, we can’t do much about it: we can’t disable the anti-virus or add an exclusion to it (because of company policy), nor can we increase the number of workers at that location.
What would really help is to let the anti-virus do what it needs to do, but then, after extraction, reuse the already-extracted package between deployments - similarly to how the package itself is smartly cached. This would greatly improve the total deployment time.
The step I use is the “Run a Script” step, where we execute a custom PowerShell solution.
I can see how this kind of feature would be useful; unfortunately, there isn’t anything built-in that would achieve this.
It may be worth submitting this idea via our roadmap page for our product team to review.
I’ve tried to think of a way to achieve this, and the best I could think of would be to create a new project with two steps. The first step deploys the package to the worker into a custom installation folder and then the second step triggers the deployment of the main project using a step like Deploy a Child Octopus Deploy Project.
It would require some re-working of your main project to make use of the package within the custom installation directory rather than using a referenced package, though.
Thanks for getting back to me. I just added the feature suggestion to the roadmap page following your advice.
With the idea that you presented, I would like to know more:
Would I still be able to control the package retention? I assume that if I take the “custom installation folder” approach, then the content of that folder would not be subject to the package retention policy.
How could I ensure that the first step - deploying to the “custom installation folder” - runs only once, for the first tenant, even if multiple tenant deployments are running in parallel?
I’ve spent some time working on a mock-up of what I suggested and it looks like I was mistaken. I thought that a single Deploy a Child Octopus Deploy Project could have a list of tenant names added but this isn’t the case.
You would instead have to have a separate step for each tenant which I imagine would become a pain to manage very quickly.
I am wondering if we could get this working with something like this:
Step 1 - Script checks the custom folder to confirm files are there (and possibly checks the version number). Sets an output variable to true if files exist.
Step 2 - Deploys the package to the custom folder. Has a run condition set so it only executes if Step 1 doesn’t find the files.
Rest of Steps
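To make the idea a bit more concrete, the Step 1 check could be sketched roughly like this in a PowerShell script step. This is only a minimal sketch, not a tested implementation: the folder path, the version file, and the output variable name are all assumptions you would adapt to your project.

```powershell
# Step 1 - "Run a Script" on the worker.
# Checks whether the extracted package is already present and up to date.
$customFolder    = "C:\Octopus\CustomPackages\MyApp"            # assumed custom installation folder
$expectedVersion = $OctopusParameters["Octopus.Release.Number"]
$versionFile     = Join-Path $customFolder "version.txt"        # assumed marker written by the deploy step

$filesAreCurrent = (Test-Path $versionFile) -and
                   ((Get-Content $versionFile -ErrorAction SilentlyContinue) -eq $expectedVersion)

# Expose the result to later steps as an Octopus output variable.
Set-OctopusVariable -name "FilesExist" -value $filesAreCurrent.ToString()
```

Step 2 could then use a variable run condition along the lines of `#{unless Octopus.Action[Check custom folder].Output.FilesExist == "True"}True#{/unless}` (the step name “Check custom folder” is assumed), and the deploy step would write the new version number into `version.txt` after a successful extraction.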
There are a couple of issues that I think may need to be considered.
As you touched on with your question, the custom folder will not be deleted after the deployment completes. So, the Step 1 script would need to be able to check what version the files in that folder are and trigger the deploy step if the files are out of date.
It looks like your step is running on a worker pool. Is there a single worker in this pool or multiple? If it is a single worker, then it will likely be necessary to configure a second Tentacle instance designating that machine as a target too for the package deploy step. If there are multiple workers, we can’t guarantee which worker each tenant deployment will use, so they would all need to be targets and the package pushed to them all.
How will the step timings line up? If you’re deploying to 20 tenants at the same time, I’m unsure whether the Step 1 scripts will all run at the same time, find the folder empty/out of date, and then all trigger the package deploy step.
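On the timing question, one possible mitigation - again only a sketch, with an assumed mutex name - is to serialize the check-and-extract work with a machine-wide named mutex inside the script, so that even if all 20 deployments start at once, only one performs the extraction and the rest see the populated folder:

```powershell
# Acquire a machine-wide named mutex so only one tenant deployment at a time
# performs the folder check and, if needed, the extraction.
$mutex = New-Object System.Threading.Mutex($false, "Global\MyAppPackageExtraction")
[void]$mutex.WaitOne()
try {
    # ... check the folder version here and extract the package only if it is stale ...
}
finally {
    $mutex.ReleaseMutex()
    $mutex.Dispose()
}
```

Note that this only helps if the check and the extraction happen within the same script step; the mutex cannot be held across separate Octopus steps.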
I’ll work on putting together a sample project configured in this way to see if I can get the answers to any of those issues (and discover if there are other issues to consider). Due to the public holidays here it will likely be Tuesday before I can do this though.
Thanks so much for your engagement and for helping us solve the problem.
I think I’ll give up on the idea, because all the proposed solutions are too difficult/risky to implement, so I’m going to switch my attention elsewhere for now. There’s no need to spend your time on a PoC project.
I wish you a great long weekend!