Octopus Store Tasklogs

Hi,
I was wondering if there is a setting or an automatic way to reduce the number of task logs that are saved in the Octopus store location. At the moment we have over 7GB of task log files stored in the “Store\TaskLogs” directory for our Octopus Server install.

Cheers,
Chris

Hi @chris.walsh,

Thanks for posting your question, and welcome to the community!

That’s a lot of disk space for task logs to occupy, so I have a few questions that may help me better understand what could be causing this:

  • How big is each task log? I’m just trying to determine if it’s a few large logs or cumulative small ones (there’s a quick sketch after this list that can help you check).
  • Have you configured any retention policies? These should help clean up older tasks (and task logs) based on how you set up the policies.
  • Have you configured any variable logging for your project(s)? This is the primary culprit for large task logs, so make sure this is disabled if you aren’t using it.
  • Are there any scripts in your deployment process(es) that are overly verbose or are encountering issues that cause a lot of information to be dumped to the log?
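
If it helps with the first question, here’s a quick Python sketch that summarises the log files by size. The path is just the default store location, so adjust it for your install:

```python
# Rough sketch: summarise the task log files by count and size.
import os

TASKLOG_DIR = r"C:\Octopus\Store\TaskLogs"  # assumption: default store location

sizes = sorted(
    (os.path.getsize(os.path.join(TASKLOG_DIR, name))
     for name in os.listdir(TASKLOG_DIR)
     if os.path.isfile(os.path.join(TASKLOG_DIR, name))),
    reverse=True,
)

print(f"{len(sizes)} files, {sum(sizes) / 1024**3:.2f} GiB total")
print("10 largest (MiB):", [round(s / 1024**2, 1) for s in sizes[:10]])
```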

Let me know on the above and we’ll go from there.

Best,
Patrick

Hi Patrick,

The task logs range from 1KB to around 2.5MB, and there are around 20,852 log files in the folder in total.
Looking through some of the files, I can see the retention policy for some of the projects listed as Keep all, even though I have the retention policies set to keep only the last 2-5 releases.

We had some trouble recently with a custom Python script, run via a PowerShell step from Octopus, that was hanging and returning the Python stdout in the deployment task log, but that has since been fixed.

I also checked, and there is no variable logging enabled.

Hey Chris,

Thanks for all of the information.

Just jumping in for Patrick as he won’t be on for a bit.

When you said “Looking through some of the files, I can see the retention policy for some of the projects listed as Keep all”, where are you seeing this specifically?

If you have logs in that folder for tasks belonging to releases that have already been deleted by retention, you are free to delete those logs to free up space.
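
If you’d rather script that than pick through 20k files by hand, here’s a rough sketch of the idea. It isn’t something we ship officially, so please test it (and take a backup) first; it assumes the log files are named after their task IDs, e.g. ServerTasks-12345.log.txt, and uses the /api/tasks endpoint to work out which tasks the server still knows about:

```python
# Rough cleanup sketch: remove task logs whose task no longer exists on the
# server. The file naming convention and paths are assumptions -- verify
# them against your own TaskLogs folder before running.
import os
import requests

OCTOPUS_URL = "https://your-octopus-server"  # placeholder: your instance URL
API_KEY = "API-XXXXXXXX"                     # placeholder: an API key
TASKLOG_DIR = r"C:\Octopus\Store\TaskLogs"   # assumption: default store path

headers = {"X-Octopus-ApiKey": API_KEY}

# Page through /api/tasks and collect the IDs of every task still on the server.
live_ids = set()
skip = 0
while True:
    page = requests.get(f"{OCTOPUS_URL}/api/tasks?skip={skip}&take=200",
                        headers=headers).json()
    live_ids.update(item["Id"] for item in page["Items"])
    skip += 200
    if skip >= page["TotalResults"]:
        break

# Delete any log file whose task ID is no longer present.
for name in os.listdir(TASKLOG_DIR):
    task_id = name.split(".")[0]  # "ServerTasks-12345.log.txt" -> "ServerTasks-12345"
    if task_id.startswith("ServerTasks-") and task_id not in live_ids:
        os.remove(os.path.join(TASKLOG_DIR, name))
        print(f"Removed {name}")
```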

Please let me know.

Best,
Jeremy

Hi Jeremy,

I’ve added a screenshot with an example of what I’m talking about. The project “Party Hub Proxy” has existed for the last 3-4 years and has had retention policies applied to it from the lifecycle I assigned when I created the project in Octopus. You can see that it still lists some recent releases as Keep all.

I also checked the largest file, which is filled with logs relating to Calamari upgrades on my Tentacles.

I should also mention that the 20k files in the task log folder have all been created within the last year, with the earliest dated 2nd Jan 2020.

Hi Chris,

Within the lifecycle, it is possible to set retention policies on a per-phase basis that override the retention policy set for the lifecycle as a whole. Is it possible that the Development (CI) phase has this configured?
e.g. (screenshot showing a per-phase retention policy override in the lifecycle settings)

If that doesn’t shed any light on the situation, we’d likely need to take a look at the full retention policy task log and the JSON output of <octopusURL>/api/<spaceID>/lifecycles?skip=0&take=2147483647

I don’t believe there should be any sensitive data within that but if you have any issues uploading it here, you can email it through to support@octopus.com
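
If it’s useful, here’s a rough Python sketch for pulling that JSON down and flagging any phase-level overrides while you’re at it. The URL, space ID and API key are placeholders, and the field names are from memory, so double-check them against the actual output:

```python
# Rough sketch: fetch all lifecycles and report their retention settings,
# including any per-phase overrides.
import requests

OCTOPUS_URL = "https://your-octopus-server"  # placeholder: your instance URL
SPACE_ID = "Spaces-1"                        # placeholder: your space ID
API_KEY = "API-XXXXXXXX"                     # placeholder: an API key

resp = requests.get(
    f"{OCTOPUS_URL}/api/{SPACE_ID}/lifecycles?skip=0&take=2147483647",
    headers={"X-Octopus-ApiKey": API_KEY},
)
resp.raise_for_status()

for lifecycle in resp.json()["Items"]:
    top = lifecycle["ReleaseRetentionPolicy"]
    print(f'{lifecycle["Name"]}: keep forever={top["ShouldKeepForever"]}, '
          f'keep={top["QuantityToKeep"]} {top["Unit"]}')
    for phase in lifecycle["Phases"]:
        # A phase only carries its own policy when it overrides the lifecycle default.
        rp = phase.get("ReleaseRetentionPolicy")
        if rp:
            print(f'  phase {phase["Name"]} overrides: keep forever={rp["ShouldKeepForever"]}, '
                  f'keep={rp["QuantityToKeep"]} {rp["Unit"]}')
```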

Regards,
Paul

Thanks Paul.
I hadn’t noticed that in the lifecycle settings for my project the overall retention policy is set to Keep all, with a limit set on each phase instead.

I’m still not sure whether this affects the task log files themselves. Is there a recommended way to maintain them or reduce the number that get produced?

There isn’t really any way to reduce the number of task logs being produced, but when a release is deleted, either manually or via retention, any task logs related to that release should be deleted along with it.
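
As an example of forcing that by hand, deleting a release through the API should take its task logs with it. Everything here is a placeholder, so substitute your own values:

```python
# Minimal sketch: delete a single release via the REST API; its task logs
# should be cleaned up along with it.
import requests

OCTOPUS_URL = "https://your-octopus-server"  # placeholder: your instance URL
API_KEY = "API-XXXXXXXX"                     # placeholder: an API key
RELEASE_ID = "Releases-123"                  # placeholder: the release to remove

resp = requests.delete(f"{OCTOPUS_URL}/api/releases/{RELEASE_ID}",
                       headers={"X-Octopus-ApiKey": API_KEY})
resp.raise_for_status()
print(f"Deleted {RELEASE_ID}")
```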

There have been a few issues around retention policies recently that may also be in play here and causing some of this behaviour.

Thanks Paul,

We’re using version 2019.12.1; do you know if these issues were present in this version?

It is difficult to say. We log the version in which we encountered an issue, but there is no guarantee that it didn’t exist unnoticed for a number of versions prior to that.

If the main policy within the lifecycle is set to Keep All, and the Development (CI) phase is set to something lower but the logs show the Keep All value still being applied, then it does suggest that the issue may lie there.
