Not sure package downloading is efficient! Help!

We deploy a project to one Linux server with ~120 tenants. What we have observed is that each tenant downloads the same package file instead of first checking for a cached copy. The issue is that at approximately 40 concurrent tenant downloads, each subsequent deployment seems to wait for a download slot. We have seen waits of 45 minutes and more just for the previous tenants to finish downloading.

What makes it more frustrating is that once it has waited 45 minutes, the process then checks for a cached file, recognizes that one exists on the server, and uses it for the install. Why did it wait first instead of checking for the cached file first?

I have been playing around with the idea of pre-downloading the file to a folder on the server and creating a bash install script to deploy it to all the tenants' folders.
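
The idea would be something along these lines. This is only a sketch, and the staging path and tenant folder layout below are placeholders, not our real paths:

```shell
#!/usr/bin/env bash
# Sketch: download the package ONCE to a shared staging folder, then copy it
# into each tenant folder only if that tenant doesn't already have it.
# Paths and the one-directory-per-tenant layout are assumptions.
set -euo pipefail

stage_package() {
  local package="$1"       # pre-downloaded zip, e.g. /opt/staging/app-1.2.3.zip
  local tenants_root="$2"  # e.g. /srv/tenants, containing one dir per tenant
  local name
  name=$(basename "$package")
  for tenant in "$tenants_root"/*/; do
    if [ ! -f "${tenant}${name}" ]; then
      cp "$package" "${tenant}${name}"
      echo "staged ${name} for $(basename "$tenant")"
    fi
  done
}
```

Each tenant's install step would then pick the file up locally instead of downloading it again.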

Any thoughts on better ways around this issue? We are running into maintenance window issues, which will only get worse as we add more tenants!

Hi @bgeorge
Thanks for reaching out.
That is certainly a good question and I hope we can help you with this caching issue.

Initially I would say that with tenants, Octopus treats each tenant's deployment as an individual deployment, so the cache is checked at the beginning of each tenant's deployment. This may be the issue here: because each deployment is specific to its tenant, the cache check may not find a valid entry at the start of each one.

I would need to verify that in our test labs to make sure that behaviour is consistent with the issue you are having.

In the meantime, your idea of pushing the package to the Tentacle as a first step in your multi-tenant deployment might populate the cache for all subsequent steps in each of the deployments.

If that still suffers from the same issue, then pushing the package into a dedicated folder and deploying from there seems the next best workaround. You can still control the deployment from within Octopus using the bash script, and reap the benefits of the logging and control mechanisms Octopus provides.
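
To illustrate the check-first pattern, a deploy script along these lines would look in the dedicated folder before ever attempting a download. This is a hedged sketch, and the folder layout and function names are hypothetical:

```shell
#!/usr/bin/env bash
# Sketch: the deploy step consults a dedicated package folder first and only
# falls back to a download when the file is genuinely absent.
# cache_dir / tenant_dir layout is an assumption for illustration.

find_cached_package() {
  local cache_dir="$1" name="$2"
  if [ -f "${cache_dir}/${name}" ]; then
    echo "${cache_dir}/${name}"   # cache hit: reuse the staged file
    return 0
  fi
  return 1                        # cache miss: caller downloads once, then retries
}

deploy_tenant() {
  local tenant_dir="$1" cache_dir="$2" name="$3"
  local pkg
  if pkg=$(find_cached_package "$cache_dir" "$name"); then
    cp "$pkg" "$tenant_dir/"      # extraction/installation would follow here
  else
    echo "package ${name} not staged; download it once into ${cache_dir}" >&2
    return 1
  fi
}
```

Because the download happens at most once per package version, the 40-download queue never builds up.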

For now, if the Transfer a package step helps you out, do let us know.

If this isn’t a suitable solution I can do some tests in our labs to verify the behaviour and see if there is an issue in our code or if there is a better workaround.

To help us with diagnosis, if you don’t mind sending us your Task log for a recent deployment where this issue was happening, we can then check for any known issues against your version of Server and anything else that might affect the caching.

I would definitely be keen to get a good solution for you here and get to the bottom of the exact behaviour of the caching process across multiple deployments.

Kind Regards,
Paraic

Thank you Paraic,

Posting here is new to me. How would you like me to send the task log? I have seen posts with logs within the comments; however, the entire task log is 1200 lines long. Or would you only want the deployment section and above, which would cut it down to only 200 lines? Much of what is below the deployment is restarting processes.

Thank you,
-Brad

Hi @bgeorge
Thanks for getting back to us.

Since the log file likely has information you don’t want made public, you can message me directly and upload the whole file as a zip. It’s all text, so it should compress nicely.

Let me know if that works for you.

Kind Regards,
Paraic

Hi Brad,
In chatting with my colleagues and looking at your deploy process, a variation of the Transfer Package solution we looked at might work better for you.

Using a Runbook, you could run the Transfer a Package step whenever your build server has a new artefact for you to deploy. The build server can trigger this using the Octo CLI tool (which also runs on Linux), which will then transfer the zip file to a default folder on your destination server.

After the Runbook completes it can then trigger the rest of your deployment process. You can also include the other SFTP/SSH components in your Runbook to tidy up the deploy steps.

So essentially you are splitting the artefact transfer out from the deployment so that it runs only once; afterwards, either a trigger or a manual step runs the actual deployment.

The Runbook can be highly automated so your pipeline builds, triggers the Runbook and transfers the artefact, which then deploys, giving you a seamless CD experience.

This should still preserve the individual Tenant environments without necessitating the transfer of the artefact for each tenant.
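
As a rough illustration, the build server could trigger the transfer Runbook with something like the below. The project and runbook names are placeholders, and the exact flag spellings can vary between CLI versions, so do check `octo run-runbook --help` for your version:

```shell
#!/usr/bin/env bash
# Hedged sketch: trigger the "transfer" Runbook from the build pipeline via
# the Octo CLI. Names, server URL, and API key are placeholders.

trigger_transfer_runbook() {
  local cmd=(octo run-runbook
    --project "Tenant App"
    --runbook "Transfer Artefact"
    --environment "Production"
    --server "${OCTOPUS_SERVER:-https://octopus.example.com}"
    --apiKey "${OCTOPUS_API_KEY:-API-PLACEHOLDER}")

  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the command instead of running it, e.g. for local verification.
    printf '%s ' "${cmd[@]}"; echo
  else
    "${cmd[@]}"
  fi
}
```

The Runbook's completion can then trigger the tenant deployments, which all find the artefact already on the server.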

Let me know if this works for you or you need further help with setting this up.

Kind Regards,
Paraic

Hey Paraic,

As you can see below, we tried to create a simple script to check for a zip file and transfer it, and it doesn’t find it. It seems it is still pushing the temp files, even though there is nothing in our project that suggests doing so!

From your last comment, I have not worked with Runbooks, and may need your guidance on how to set it up if that’s the solution.

Summary of our current testing:
Here is our test project summary, it’s rather simple:
Task Progress
This task started 17 minutes ago and ran for 19 seconds
Step 1: Verify Package Exists on Host (*simply checks for the zip file on our server)
Server1.xyzzy.COM
Server2.xyzzy.COM

Acquire packages
Step 2: Artifact Package Transfer (*if the file doesn’t exist, then transfer the file)
Server1.xyzzy.COM
Server2.xyzzy.COM
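
Step 1 is essentially just this kind of check (paths and the output variable name are simplified here; `set_octopusvariable` is the helper Octopus injects into bash script steps, and the stub is only so the sketch runs outside Octopus):

```shell
#!/usr/bin/env bash
# Simplified sketch of our "Verify Package Exists on Host" step.
# Octopus injects set_octopusvariable into bash steps; stub it for local runs.
if ! type set_octopusvariable >/dev/null 2>&1; then
  set_octopusvariable() { echo "set $1=$2"; }
fi

verify_package() {
  local package_path="$1"   # e.g. /opt/octopus/packages/app.zip (placeholder)
  if [ -f "$package_path" ]; then
    set_octopusvariable "PackageExists" "True"
  else
    set_octopusvariable "PackageExists" "False"
  fi
}
```

Step 2 then only runs when the variable says the file is missing (via a variable run condition referencing the step's output variable).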

This project seems super simple; however, in the details you can see it still opens a connection to transfer a temp file: (*details only from the beginning of the first server)
Executing Verify Package Exists on Host (type Run a Script) on Server1.xyzzy.COM
October 19th 2021 14:30:00Verbose
Establishing SSH connection…
October 19th 2021 14:30:01Verbose
Using ssh-rsa to authenticate SSH Endpoint fingerprint
October 19th 2021 14:30:01Verbose
SSH connection established
October 19th 2021 14:30:02Verbose
SSH connection disposed.
October 19th 2021 14:30:02Verbose
Using Calamari.linux-x64 18.1.4
October 19th 2021 14:30:02Verbose
Requesting upload…
October 19th 2021 14:30:02Verbose
Establishing SSH connection…
October 19th 2021 14:30:03Verbose
Using ssh-rsa to authenticate SSH Endpoint fingerprint
October 19th 2021 14:30:03Verbose
SSH connection established
October 19th 2021 14:30:04Verbose
Beginning streaming transfer of command.sh to $HOME.octopus\OctopusServer\Work\20211019213000-119121-1534
October 19th 2021 14:30:04Verbose
Establishing SFTP connection…
October 19th 2021 14:30:05Verbose
Using ssh-rsa to authenticate SSH Endpoint fingerprint
October 19th 2021 14:30:05Verbose
SFTP connection established
October 19th 2021 14:30:06Verbose
Stream transfer complete
October 19th 2021 14:30:06Verbose
Requesting upload…
October 19th 2021 14:30:06Verbose
Beginning streaming transfer of Variables.secret to $HOME.octopus\OctopusServer\Work\20211019213000-119121-1534
October 19th 2021 14:30:07Verbose
Stream transfer complete
October 19th 2021 14:30:07Verbose
Requesting upload…
October 19th 2021 14:30:07Verbose
Beginning streaming transfer of Variables.Bash.secret to $HOME.octopus\OctopusServer\Work\20211019213000-119121-1534
October 19th 2021 14:30:08Verbose
Stream transfer complete
October 19th 2021 14:30:08Verbose
Calamari Version: 18.1.4
October 19th 2021 14:30:08Verbose
Environment Information:
October 19th 2021 14:30:08Verbose
OperatingSystem: Unix 3.10.0.1160
October 19th 2021 14:30:08Verbose
OsBitVersion: x64
October 19th 2021 14:30:08Verbose
Is64BitProcess: True
October 19th 2021 14:30:08Verbose
Running on Mono: False
October 19th 2021 14:30:08Verbose
CurrentUser: svc-octopus-deploy
October 19th 2021 14:30:08Verbose
MachineName: Server1
October 19th 2021 14:30:08Verbose
ProcessorCount: 4
October 19th 2021 14:30:08Verbose
CurrentDirectory: /home/xyzzy.com/svc-octopus-deploy/.octopus/OctopusServer/Work/20211019213000-119121-1534
October 19th 2021 14:30:08Verbose
TempDirectory: /tmp/
October 19th 2021 14:30:08Verbose
HostProcess: Calamari (1423)
October 19th 2021 14:30:09Verbose
Performing variable substitution on ‘/home/xyzzy.com/svc-octopus-deploy/.octopus/OctopusServer/Work/20211019213000-119121-1534/Script.sh’
October 19th 2021 14:30:09Verbose
Executing ‘/home/xyzzy.com/svc-octopus-deploy/.octopus/OctopusServer/Work/20211019213000-119121-1534/Script.sh’
October 19th 2021 14:30:09Verbose
Setting Proxy Environment Variables
October 19th 2021 14:30:09Verbose
Bash Environment Information:
October 19th 2021 14:30:09Verbose
OperatingSystem: Linux Server1.xyzzy.COM 3.10.0-1160.36.2.el7.x86_64 #1 SMP Thu Jul 8 02:53:40 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
October 19th 2021 14:30:09Verbose
CurrentUser: svc-octopus-deploy
October 19th 2021 14:30:09Verbose
HostName: Server1.xyzzy.COM
October 19th 2021 14:30:09Verbose
ProcessorCount: 4
October 19th 2021 14:30:09Verbose
CurrentDirectory: /home/xyzzy.com/svc-octopus-deploy/.octopus/OctopusServer/Work/20211019213000-119121-1534
October 19th 2021 14:30:09Verbose
TempDirectory: /tmp
October 19th 2021 14:30:09Verbose
HostProcessID: 1434

Thanks again for your assistance! If more details would help, please let me know.

-Brad

Hi Brad,
Thanks for getting back to us.

The Runbook feature is a very useful one, especially for your situation where you need a specific step run outside of your normal deploy process. To see the difference between a Runbook and a Deployment, check this link:

I would definitely recommend getting familiar with Runbooks as they come in handy in all kinds of situations.
In this case the Runbook would run a set of commands at your specified time to upload the zip file and then transfer it to the destination target servers. This can also be done via the Transfer a Package step as part of the Runbook.

To get the most effective result from the Runbook, you can try manually running through the steps you need first and then capturing them in the Runbook steps.
We have a very good tutorial on using Runbooks, with examples, on our YouTube channel:

Hopefully you will be able to see how you can split out the transfer of the package versus the execution of the rest of the deploy as outlined in the previous message.

Let me know if you need more help.

Kind Regards,
Paraic