TaskLogs not deleted in accordance with retention policy

We have our retention policies currently set to keep the two latest releases for each lifecycle. We do have a single lifecycle with a phase that overrides the retention policy to a single release. From what I can tell, the task logs have never been cleaned up; we have logs dating back to the original deployment. We only recently noticed this issue as it was beginning to eat up a lot of space.

Hi,

Thanks for reaching out, I’d be happy to help figure out what’s going on.

We do have a few caveats when it comes to retention policies that could be preventing the files from being removed. Our documentation outlines how the policies work here, and how they apply to Tentacles specifically here.

So I can be clear on your exact configuration, could you please share a screenshot of the log files you are referring to, as well as your retention policy settings? I’d just like to confirm how everything is set up and make sure we are both referring to the same files, along with the file path they are stored under and the size the folder has grown to.

If you still have any questions after checking out our documentation, let me know!

Regards,

Finn

Hi Finn,

Here is a sample of the logs I’m referring to.

image

These logs are located at Octopus\TaskLogs. This directory has grown to 10.8 GB.

The retention policies for our lifecycles are set up as follows.

Thanks,
Alex

Hi Alex,

Thanks for that info, 10 GB of logs is definitely larger than it should be!

I think the best option, so I can figure out what’s going on quickly, is for you to send through the task log for a recent execution of the ‘Apply retention policies’ task. This will show which policies are being used as well as the reasoning behind the decisions to retain or delete.

The Task Log can be retrieved by navigating to Tasks, selecting ‘Show Advanced Filters’, and changing the ‘Task type’ filter to ‘Apply retention policies’. Select the most recent execution, then ‘Task log’ and ‘Download’.
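If you’d prefer to grab it programmatically, here’s a rough sketch using our REST API (not official tooling). The server URL and API key are placeholders, and the “Retention” name filter is my assumption for the internal name of the ‘Apply retention policies’ task type, so please double-check it against /api/tasks on your instance:

```python
import requests

OCTOPUS_URL = "https://your-octopus-server"   # placeholder - your server URL
API_KEY = "API-XXXXXXXXXXXXXXXX"              # placeholder - a read-only API key is enough
HEADERS = {"X-Octopus-ApiKey": API_KEY}

# List the most recent task filtered by task type name.
# "Retention" is my assumption for the internal name of 'Apply retention policies'.
tasks = requests.get(
    f"{OCTOPUS_URL}/api/tasks",
    headers=HEADERS,
    params={"name": "Retention", "take": 1},
).json()["Items"]

if tasks:
    task_id = tasks[0]["Id"]
    # /raw returns the plain-text log for a server task.
    raw_log = requests.get(f"{OCTOPUS_URL}/api/tasks/{task_id}/raw", headers=HEADERS).text
    with open(f"{task_id}-retention.log", "w", encoding="utf-8") as f:
        f.write(raw_log)
    print(f"Saved task log for {task_id} ({len(raw_log)} characters)")
else:
    print("No retention tasks found - check the name filter.")
```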

The log file can be uploaded securely here. Please let me know if you have any issues or questions.

Regards,

Finn

Hi Finn,

I’ve uploaded our most recent ‘Apply retention policies’ log to the link provided.

Thanks,
Alex

Hi Alex,

Thanks for that log file. While I wasn’t able to identify the cause of the excessive 10 GB, it does show that the retention policy is finding some releases to delete; however, many are still being retained. I’ll need some more info to pin down the exact cause of the growth.

My first impression is that it might have something to do with the configuration or usage of the ‘Anywhere’ Lifecycle or the ‘Anywhere’ & ‘Any’ channels. Could you please send me screenshots of their configurations so I can try to reproduce them on my end?

I’d also like to check the oldest log file you can see in the TaskLogs folder. We’ve seen past issues where releases can get orphaned after an Octopus Server upgrade. I see you are currently on 2020.3.2; have you upgraded from a previous version, or is this the version that was originally installed?
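To make the oldest files easier to spot, a quick sketch like the one below should list them by last-modified time. The C:\Octopus\TaskLogs path and the .txt extension are assumptions based on a default install, so adjust them to match your server:

```python
from datetime import datetime
from pathlib import Path

# Default home directory layout assumed; adjust to your install.
TASKLOGS_DIR = Path(r"C:\Octopus\TaskLogs")

# Sort every log file (assumed *.txt) by last-modified time, oldest first.
files = sorted(TASKLOGS_DIR.rglob("*.txt"), key=lambda p: p.stat().st_mtime)

# Print the ten oldest files with their timestamps and sizes.
for f in files[:10]:
    modified = datetime.fromtimestamp(f.stat().st_mtime)
    size_mb = f.stat().st_size / (1024 * 1024)
    print(f"{modified:%Y-%m-%d %H:%M}  {size_mb:8.2f} MB  {f.name}")
```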

The files can again be uploaded securely here. Please let me know if you have any issues.

Regards,

Finn

Hi Finn,

Here’s our configuration for the Anywhere Lifecycle.

image

Here is the configuration for an Any channel on one of our projects.

I’ve uploaded our oldest log file with the link provided.

As for the version, I believe we’ve upgraded a few times since the original deployment.

Thanks,
Alex

Hey Alex,

Thanks for that, I’ll work on reproducing your setup and let you know if I need anything further.

Unfortunately I can’t seem to find the oldest log file that was uploaded. If you could please upload it again, that would be much appreciated. I’ve made another link to use here.

Let me know if you have any issues.

Regards,

Finn

Hi Finn,

I’ve uploaded our oldest log file to the link provided.

Thanks,
Alex

Hi @ABarbour,

Thank you for getting back to us and providing the log file as requested. I’m just stepping in for Finn until the AU team comes back online.

Can you confirm whether the server task from the “oldest log file” still shows up in Octopus? You can navigate directly to it via:
http://YOUR_OCTOPUS_SERVER_URL/app#/Spaces-1/tasks/ServerTasks-2244

You may get an error when opening that link (for example, if the task belongs to a different space). If you do, be sure to adjust the URL to include the correct SpaceId.

However, you may instead get the message "The resource 'ServerTasks-2244' was not found."
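If clicking through the UI is awkward, the same check can be done against the REST API. The sketch below uses placeholder values for the server URL, API key and task ID; a 200 response means the task record still exists, while a 404 corresponds to the ‘not found’ message above:

```python
import requests

OCTOPUS_URL = "http://YOUR_OCTOPUS_SERVER_URL"  # placeholder
API_KEY = "API-XXXXXXXXXXXXXXXX"                # placeholder
TASK_ID = "ServerTasks-2244"

# Look up the server task directly; tasks are server-scoped, so no SpaceId is needed here.
resp = requests.get(
    f"{OCTOPUS_URL}/api/tasks/{TASK_ID}",
    headers={"X-Octopus-ApiKey": API_KEY},
)

if resp.status_code == 200:
    task = resp.json()
    print(f"{TASK_ID} still exists: {task.get('Description')} (state: {task.get('State')})")
elif resp.status_code == 404:
    print(f"{TASK_ID} was not found - the task record has been removed.")
else:
    print(f"Unexpected response: {resp.status_code}")
```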

Let me know what you find out based on the above at your earliest convenience.

Regards,
Donny

Hi Donny,

Looks like the task for that log file still appears in Octopus.

image

Thanks,
Alex

Hi Alex,

Thanks for confirming the oldest log is still present!

That does imply that the retention policy is working as expected and that no task logs are being orphaned. However, a video call might be the next best step so we can see exactly what’s going on. I can schedule one if you’d like; just let me know which time and timezone would suit you.

Apologies, I removed my previous message as I want to confirm whether a script to clear out the old logs is feasible; even so, that approach would only resolve this temporarily. The example here shows that more than two tasks are being retained, so altering the retention policies for certain projects would definitely be the long-term solution.

Regards,

Finn

Hi Finn,

Sorry for the delayed response. I think a video call would be a good idea as well. Next Tuesday 1:30-4:00pm CST and Wednesday 3:00-4:30pm CST are available for us.

We also noticed, while looking through the recent ‘Apply retention policies’ logs, that some fairly old releases are not getting deleted because they were never deployed to our higher environments. As you mentioned, adjusting the retention policies for these projects might be necessary.

Thanks,
Alex

Hi Alex,

No problem, thank you for providing those suitable timeframes!

Unfortunately I won’t be able to make it as I’m based in Australia; however, I’ll get our US team to reach out to schedule a call. I’ll make sure whoever attends is up to speed on the issue and ready to help out.

Regards,

Finn

Hi @ABarbour,

Thank you for your patience.

I just sent you a calendar invite to Zoom on Tuesday at 3pm ET/2pm CT.

If you need to reschedule, just let me know.

Regards,
Donny

Hi @ABarbour,

Thank you for taking the time to meet with me today.

Let us know once you’ve had a chance to discuss your ideal retention scenario internally. Meanwhile, I will reach out to our Customer Solutions team as discussed on the call.

If you have any additional questions, don’t hesitate to ask.

Regards,
Donny