Not sure downloading packages is efficient! Help!

We deploy a project to one Linux server with ~120 tenants. What we have observed is that each tenant downloads the same package file before checking for a cached file to use. The issue is that once approximately 40 tenants are downloading, the next tenant seems to wait for a free download slot. We have seen waits of 45 minutes and more just for the previous tenants to finish downloading.

What makes it more frustrating is that once it has waited 45 minutes, the process then checks for a cached file, recognizes that there is one on the server, and uses it for the install. Why did it wait first instead of checking for the cached file first?

I have been playing around with the idea of pre-downloading the file to a folder on the server and creating an install script (bash) to deploy it to all the tenants’ folders.
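Roughly what I have in mind is something like the sketch below. The paths, package name, and tenant folder layout are just placeholders for how I imagine our server would be laid out, not something we have set up yet:

```bash
#!/usr/bin/env bash
# Sketch: copy a pre-downloaded package into every tenant folder and unpack it.
# PACKAGE_DIR, PACKAGE_FILE, and TENANTS_ROOT are placeholder values.
set -euo pipefail

PACKAGE_DIR="/opt/staging/packages"
PACKAGE_FILE="myapp.1.2.3.zip"
TENANTS_ROOT="/var/www/tenants"

for tenant_dir in "${TENANTS_ROOT}"/*/; do
    echo "Installing ${PACKAGE_FILE} into ${tenant_dir}"
    cp "${PACKAGE_DIR}/${PACKAGE_FILE}" "${tenant_dir}"
    unzip -o -q "${tenant_dir}${PACKAGE_FILE}" -d "${tenant_dir}app"
done
```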

Any thoughts on better ways around this issue? We are running into maintenance window issues, which will only get worse as we add more tenants!

Hi @bgeorge
Thanks for reaching out.
That is certainly a good question and I hope we can help you with this caching issue.

Initially I would say that with tenants, Octopus treats each tenant’s deployment as an individual deployment, so it checks the cache at the beginning of the deployment for that tenant. This may be the issue here, since the cache check at the beginning of each deployment will not find a valid entry because each deployment is specific to the tenant.

I would need to verify that in our test labs to make sure that behaviour is consistent with the issue you are having.

In the meantime, your idea of pushing the package to the Tentacle as a first step in your multi-tenant deployment might prime the cache for all subsequent steps in each of the deployments.

If it still suffers from the same issue as above, then pushing the package into a dedicated folder and deploying from there seems the next best workaround. You can still control the deployment from within Octopus using the bash script, and reap the benefits of the logging and control mechanisms we have in Octopus.
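As a rough sketch of that workaround, a Run a Script (Bash) step in your tenanted process could copy from the dedicated folder instead of downloading the package again. The staging path and package name below are placeholders, and I’m assuming the standard get_octopusvariable helper is available in your Bash steps:

```bash
# Run a Script (Bash) step, executed once per tenant deployment.
STAGING_DIR="/opt/staging/packages"   # placeholder: pre-staged package location
PACKAGE_FILE="myapp.1.2.3.zip"        # placeholder: package name

# Tenant name supplied by Octopus for the current deployment.
TENANT_NAME=$(get_octopusvariable "Octopus.Deployment.Tenant.Name")
TENANT_DIR="/var/www/tenants/${TENANT_NAME}"

cp "${STAGING_DIR}/${PACKAGE_FILE}" "${TENANT_DIR}/"
unzip -o -q "${TENANT_DIR}/${PACKAGE_FILE}" -d "${TENANT_DIR}/app"
```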

For now, if the Transfer a Package step helps you out, do let us know.

If this isn’t a suitable solution I can do some tests in our labs to verify the behaviour and see if there is an issue in our code or if there is a better workaround.

To help us with diagnosis, if you don’t mind sending us your Task log for a recent deployment where this issue was happening, we can then see if there are any known issues against your version of Octopus Server and anything else that might affect the caching.

I would definitely be keen to get a good solution for you here and get to the bottom of the exact behaviour of the caching process across multiple deployments.

Kind Regards,
Paraic

Thank you Paraic,

Posting here is new to me, so how would you like me to send the task log? I have seen posts with logs within the comments; however, the entire task log is 1,200 lines long. Or would you only want the deployment section and above, which would cut it down to only 200 lines? Much of what is below the deployment is restarting processes.

Thank you,
-Brad

Hi @bgeorge
Thanks for getting back to us.

Since the log file likely has information you don’t want made public, you can message me directly and upload the whole file as a zip file. It’s all text, so it should compress nicely.

Let me know if that works for you.

Kind Regards,
Paraic

Hi Brad,
After chatting with my colleagues and looking at your deploy process, I think a variation of the Transfer a Package solution we looked at might work better for you.

Using a Runbook, you could run the Transfer a Package step whenever your build server has a new artefact for you to deploy. The Runbook can be triggered by the build server using the Octo CLI tool (which also runs on Linux), and it will then transfer the zip file to a default folder on your destination server.
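For example, your build server could invoke something along these lines once a new artefact is ready. The project, Runbook, and environment names are placeholders, and I’m assuming the build agent has an API key it can use:

```bash
# Trigger the Runbook from the build server (names below are placeholders).
octo run-runbook \
  --project "My Project" \
  --runbook "Transfer Package" \
  --environment "Production" \
  --server "https://your-octopus-server" \
  --apiKey "$OCTOPUS_API_KEY"
```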

After the Runbook completes, it can then trigger the rest of your deployment process. You can also include the other SFTP/SSH components in your Runbook to tidy up the deploy steps.

So essentially you are splitting the artefact transfer out from the deployment so that it runs only once, and then afterwards, either via a trigger or a manual step, you run the actual deployment.
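As a rough example of that second half, the deployment itself could then be kicked off with something like the following, either from the build server or manually. The project, release number, environment, and tenant tag are placeholders for your setup:

```bash
# Kick off the tenanted deployment once the package is already on the server.
octo deploy-release \
  --project "My Project" \
  --releaseNumber "1.2.3" \
  --deployTo "Production" \
  --tenantTag "Hosting/Linux Server" \
  --server "https://your-octopus-server" \
  --apiKey "$OCTOPUS_API_KEY"
```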

The Runbook can be highly automated, so your pipeline builds, triggers the Runbook to transfer the artefact, and then deploys, giving you a seamless CD experience.

This should still preserve the individual tenant environments without needing to transfer the artefact for each tenant.

Let me know if this works for you or you need further help with setting this up.

Kind Regards,
Paraic