Retention Policy settings

Dear Octopus team,

Recently we discovered that our Octopus Tentacles hold more than the 3 packages we told them to hold. We’ve noticed that if a deployment needed multiple tries, each try got its own package.

What I can’t understand is: if we retry a deployment, shouldn’t it reuse (or delete) the older copy of the same package? On one server we ended up with more than 21 packages (spread over 4 releases).

The question in all of this: are we doing something wrong when deploying? Is there a way to retry a failed deployment without it creating a new deploy_# folder (not the normal retry, but clicking on the failed deployment and hitting some special button that reruns the same deployment)? And is there a button that automatically cleans out the older _# packages?

regards,
Wim

Hi Wim,

Thanks for getting in touch!

The packages shouldn’t be pushed to the Tentacles again if they’ve already been downloaded, and I’m not seeing the same thing happen on my local instance.

Could you tell me what version of Octopus you’re using?
Also, can you confirm that it’s the actual NuGet package that’s being repeatedly downloaded? They’ll end up in the Files folder of the Tentacle’s home directory.
Do you have any logs from the repeated deployment you can share?

The more information you can give, the better!

Thanks,
Damo

Hi Damo,

We’re using Octopus Deploy version 3.0.19.2485 (3.0.21 gave us issues because our deployment server has no internet access).

The logs are inconclusive. I saw that the NuGet packages are downloaded only once, but they get extracted to a new deploy_# folder each time a deployment fails and we press ‘retry’. (To simulate this, deny the Octopus Tentacle rights to the folder you’re deploying to.)

The attached file shows the package ‘work?’ folder, and the _# number matches the number of tries we had. So, to rephrase my question a bit: why does Octopus extract the package to a new folder matching the retry count? (The package hasn’t changed, nor has the variable set.)

Each retry can add up to 300 MB of disk space, which creates overhead for our infra guys: they have to keep an eye on disk space and clean the disk when it fills up (usually after a deployment fails due to lack of disk space).

If you need more details, I’ll see what I can supply.

Regards,
Wim

Untitled.png

Hi Wim,

Thanks for the reply!

As you’ve found out, Octopus will extract the package to a new deployment-specific folder each time there’s a deployment. If you’ve specified a custom installation directory, the contents will then get moved to that folder. If not, you’ll get a new folder each time.

Based on your description, it sounds like the move is what’s failing. In that case, Octopus won’t clean up the original folder. There are a few reasons for this, aside from the fact that this should be a bit of a one-off situation until you’ve fixed the underlying issue.

First, we always extract to a new folder so as not to break anything that’s currently using the previous version (to minimise downtime). If you have a custom folder specified, we leave it alone until we’ve completed all the related package steps (e.g. configuration transforms, variable replacements, any post-deployment scripts); only then will we move it.

Second, in the case of retries we don’t know at exactly which point your previous deployment failed. It may have failed due to permissions, disk space, post-deployment scripts, etc. That means we can’t just use the previously-extracted package as it may not be complete or correct. Similarly, we don’t just want to overwrite it in case you’re actually using the folder - you may have fixed it manually to get up and running again.
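If it helps to picture the sequence, here’s a rough Python sketch of the pattern. This is illustrative only, not our actual implementation; the folder names, paths, and the stand-in deployment step are all just examples:

```python
import shutil
from pathlib import Path
from typing import Optional

def next_extraction_dir(root: Path, package: str, version: str) -> Path:
    """First attempt gets MyPackage.1.0.0; retries get ..._1, ..._2, and so on."""
    candidate, n = root / f"{package}.{version}", 0
    while candidate.exists():
        n += 1
        candidate = root / f"{package}.{version}_{n}"
    return candidate

def deploy(root: Path, package: str, version: str,
           custom_install_dir: Optional[Path]) -> None:
    work_dir = next_extraction_dir(root, package, version)
    work_dir.mkdir(parents=True)
    # Stand-in for the real work: extraction, configuration transforms,
    # variable replacement, post-deployment scripts. Any of these can fail,
    # and if one does, work_dir is left behind -- that's your _# folder.
    (work_dir / "app.dll").write_bytes(b"...")
    # Only once everything above has succeeded do the contents get moved
    # into the custom installation directory (if one is configured).
    if custom_install_dir is not None:
        shutil.copytree(work_dir, custom_install_dir, dirs_exist_ok=True)
```

The key point is that the move into the final folder only happens after every step has succeeded, and a retry starts over with a fresh extraction folder rather than touching the old one.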

I hope this helps explain what’s happening!
Damo

Thank you for your reply, Damo,

I’m going to see if we can work these things out with our development guys, or if we’re going to write a script that cleans out these sub-folders for us on a daily basis. Something along these lines is what I have in mind, see the sketch below.
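Untested, and the application path is just an example that would need to match our Tentacle’s home directory:

```python
import re
import shutil
from collections import defaultdict
from pathlib import Path

APPS = Path(r"C:\Octopus\Applications")  # example path; adjust to the Tentacle home
KEEP = 3                                 # how many attempts to keep per package/version

# Group sibling folders by base name, treating "MyPackage.1.0.0" and
# "MyPackage.1.0.0_4" (a retry) as members of the same group.
groups = defaultdict(list)
for folder in APPS.iterdir():
    if folder.is_dir():
        groups[re.sub(r"_\d+$", "", folder.name)].append(folder)

# Within each group, keep the KEEP newest folders and delete the rest.
for base, folders in groups.items():
    folders.sort(key=lambda f: f.stat().st_mtime, reverse=True)
    for old in folders[KEEP:]:
        print(f"removing {old}")
        shutil.rmtree(old, ignore_errors=True)
```

We’d schedule it to run daily.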

thanks again!
Cheers,
Wim