Tentacle.exe Memory usage keeps increasing

Tentacle Version - 1.6.3.1723

Hello

The Tentacle on our dev machine is used throughout the day, in some cases deploying 70 web applications. We have noticed that the Tentacle.exe process appears to use a large amount of memory and not release it.

It's currently at 642K with no deployments in progress. Our second machine (a different deploy set) is currently at 192K. We would expect memory usage to go up during a deployment and then come back down afterwards…

This issue is currently stopping us from using Octopus on our live servers, as we can't accept the memory usage.

Are you able to shed any more light?

Hi Graham,

How large are the packages that you are deploying on average?

Paul

Hi Paul

The largest is 400MB; the average is around 30MB.

Graham

Any update on this issue?

Hi Graham,

The issue comes down to the way NuGet deals with files: the NuGet libraries tend to load entire files into memory as byte arrays. This memory will eventually be garbage collected, but in the meantime Task Manager can make it seem like Tentacle is using a lot of memory.
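
To illustrate the difference (a rough sketch, not Tentacle's or NuGet's actual code), compare buffering a package file in memory with streaming it straight to disk:

```csharp
// Sketch only - not Tentacle's or NuGet's actual code. Contrasts buffering
// a package file in memory (what the NuGet libraries tend to do) with
// streaming it straight to disk.
using System.IO;

static class PackageExtraction
{
    // Buffers the whole file in managed memory first. For a 400MB package
    // this means hundreds of megabytes of Large Object Heap allocations,
    // which linger until the garbage collector decides to run.
    public static void ExtractBuffered(Stream packageFile, string destination)
    {
        byte[] contents;
        using (var memory = new MemoryStream())
        {
            packageFile.CopyTo(memory);
            contents = memory.ToArray();
        }
        File.WriteAllBytes(destination, contents);
    }

    // Copies in small chunks; peak managed memory stays at the size of the
    // copy buffer no matter how large the package is.
    public static void ExtractStreamed(Stream packageFile, string destination)
    {
        using (var target = File.Create(destination))
        {
            packageFile.CopyTo(target);
        }
    }
}
```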

For Octopus 2.0 we'll shortly be doing a round of testing with large packages, and because of this we're using our own code to handle NuGet packages rather than the NuGet libraries. Unfortunately it's a bit late to make changes like this in 1.6.

Paul

Paul

“This memory will eventually be garbage collected” - I deployed to an environment on the 9th of December, and right now that Tentacle's memory usage is 72MB… when you restart the Tentacle it's about 16MB.

Another environment was deployed to at 00:37 this morning; its Tentacle is currently at 101MB.

When the Tentacle is idle, what memory usage do you expect it to go down to?

As a very basic example, I deployed one package to an environment: the Tentacle starts at 16MB, then goes up to 30MB… and 24 hours later it's still at 30MB.

Something is not being disposed of… none of the Tentacles across our environments are even close to each other in memory usage, even after they have been idle for hours.

Graham

Hi Graham,

There are a few parts to this. As Tentacle does work, the CLR requests more memory from Windows. When that work is done and the objects are no longer needed, they become candidates for garbage collection. But the garbage collector is tuned to only run when it is optimal to do so. So the garbage collector may not run at all, or, even if it does run, that memory may remain allocated to the Tentacle process anyway because the OS has no other use for it yet.

So even though the Tentacle hasn't done any work in 24 hours, the CLR/Windows have decided that it may as well keep that extra 14MB allocated to it rather than taking it away, because it will probably want it at some point in the future and the OS isn't under pressure to find memory for other processes yet. The CLR does this on purpose so that it doesn't keep asking the OS for 14MB, then returning it, and asking again, resulting in fragmentation and wasted time.
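
You can see this behaviour with a few lines of C# (this illustrates the CLR in general, and is not anything Octopus-specific):

```csharp
// Illustration of general CLR behaviour, not Octopus-specific code.
// Allocates ~100MB, drops it, forces a collection, and prints the working
// set at each stage - Task Manager will typically still show an elevated
// figure after the collection.
using System;
using System.Collections.Generic;
using System.Diagnostics;

class WorkingSetDemo
{
    static long WorkingSetMb()
    {
        using (var self = Process.GetCurrentProcess())
        {
            return self.WorkingSet64 / (1024 * 1024);
        }
    }

    static void Main()
    {
        Console.WriteLine("At start:   {0}MB", WorkingSetMb());

        // Simulate a deployment: hold ~100MB of buffers, then drop them.
        var buffers = new List<byte[]>();
        for (int i = 0; i < 100; i++)
        {
            buffers.Add(new byte[1024 * 1024]);
        }
        Console.WriteLine("After work: {0}MB", WorkingSetMb());

        buffers = null; // the arrays are now garbage

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // Even though the arrays have been collected, the CLR may keep the
        // freed segments reserved for this process until the OS asks for
        // the memory back.
        Console.WriteLine("After GC:   {0}MB", WorkingSetMb());
    }
}
```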

Seeing Tentacle use 100MB of memory even when it isn't deploying isn't unexpected, and it's not something we can actually control (we can't force the CLR to give that memory back). If you kept deploying and that usage grew to 500MB, or 1GB, or the process crashed with an out-of-memory exception, then that would be a sign of an actual memory leak, or something we could fix.

I guess what I'm saying is: if the memory usage goes up and then stays relatively constant (at, say, 100-200MB), that's expected even if it never goes down again, because as a .NET process we can't actually do much about that. If it grows and grows and grows and eventually crashes, however, then that's a bug we would need to fix.
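
If you want to tell the two cases apart, something like this (a hypothetical monitor, not an Octopus tool) will log the Tentacle's memory usage over time so you can see whether it plateaus or keeps growing:

```csharp
// Hypothetical monitor, not an Octopus tool: samples the Tentacle process's
// working set and private bytes once a minute. A plateau is expected; steady
// unbounded growth would point to a genuine leak.
using System;
using System.Diagnostics;
using System.Threading;

class TentacleMemoryLog
{
    static void Main()
    {
        while (true)
        {
            foreach (var tentacle in Process.GetProcessesByName("Tentacle"))
            {
                using (tentacle)
                {
                    Console.WriteLine("{0:u} PID {1}: working set {2}MB, private bytes {3}MB",
                        DateTime.UtcNow,
                        tentacle.Id,
                        tentacle.WorkingSet64 / (1024 * 1024),
                        tentacle.PrivateMemorySize64 / (1024 * 1024));
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}
```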

The following page on StackOverflow may provide some more details:

Hope that helps,

Regards,

Paul Stovell
Octopus Deploy
W: octopusdeploy.com | T: @octopusdeploy

Thanks for the explanation, Paul. Based on my original example, the memory did go up to 642MB and we had to restart the Tentacle; and based on your third paragraph, you wouldn't expect the Tentacle to go above 500MB. So is there a bug?

What kind of memory usage do your most active Tentacles show?

…I can't be the only person seeing these issues!

Graham

Hi Graham,

Thanks for the clarification; I should have realised you meant 642MB in the original post, not 642KB. It does sound like there is a bug there if that memory is never released. We won't be able to fix this in Octopus 1.6, but I have an open issue for Octopus 2.0 to test deployments with very large NuGet packages, to make sure we use memory as efficiently as possible and that it becomes available to the GC. You can view the status and add comments here:

Paul

Okay, thanks. My current workaround is to restart the Tentacle as the last step in my deployment… but I'm guessing that may cause issues if two deployments are executing on the same machine?