Octopus Server Memory Leak

Hi,
We’re currently evaluating Octopus and are noticing high memory use in version 3. The server process appears to grab more and more memory over time. There are several threads on this forum mentioning similar behaviour, but most have no resolution.

Are you aware of this issue? Is work currently underway to fix it?

I noticed you use a lot of text blobs in your data model to store JSON. We had a memory leak issue with the XmlSerializer in .NET: calling Dispose() or Close() would not release the memory for certain overloads. In our case we were using:

the XmlSerializer(Type, Type[]) constructor, i.e. new XmlSerializer(type, new[] { type2, type3, … }). This is listed as a known issue on TechNet, and the only resolution is to minimise the number of times you instantiate the serializer object.

This might be unrelated, but I’m mentioning it in case you are using .NET serialization a lot.
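
In practice, minimising the instantiations just means caching the serializer instances and reusing them. Something along these lines (a rough sketch, not our exact code; the class and method names are made up):

    using System;
    using System.Collections.Concurrent;
    using System.Xml.Serialization;

    // Only the XmlSerializer(Type) and XmlSerializer(Type, string) constructors cache the
    // temporary assembly they generate; the other overloads emit a new assembly on every
    // call, and those assemblies are never unloaded. Reusing one instance per type avoids that.
    static class SerializerCache
    {
        static readonly ConcurrentDictionary<Type, XmlSerializer> Cache =
            new ConcurrentDictionary<Type, XmlSerializer>();

        // Assumes the same set of extra types is always used for a given root type.
        public static XmlSerializer For(Type rootType, Type[] extraTypes)
        {
            return Cache.GetOrAdd(rootType, t => new XmlSerializer(t, extraTypes));
        }
    }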

Hi Donsmy,
From our load testing we haven’t come across any clear memory leaks on the server. Looking through previous tickets, the only other issues I could see relate to v2.6 and lower; the server has since been completely re-architected in 3.x, so they should no longer apply. As you noted, we do store some JSON directly in the database, but I’m not aware of any memory leaks in Newtonsoft.Json that might be causing the problem you describe.
We will take another look and try to uncover any leak; in the meantime, could you provide more details about what you’ve observed? What sort of increase are we talking about, and does it occur with some regularity when you perform certain actions?
If we can find the source of this, it is definitely something we would want to reduce.
Thanks
Robert

Hey Robert,

Thanks for responding so quickly. I’ve got the 64-bit version of the deployment server installed on a Rackspace virtual machine running Server 2012 R2. The VM has 8 GB of RAM.

As we’re at the start of setting up our CI/CD environment, we haven’t done much with the Octopus configuration yet. Effectively we have one listening Tentacle installed on the same VM as the Octopus server. Currently there is only one environment configured, and the machine is listed under that environment. We have not yet installed any deployment packages, and everything else should still be in the default configuration.

This is what we are noticing…

We start the server and the initial memory use is around 50,000 K (roughly 50 MB). Every 10 seconds or so, about 100 K is added to that. We left it overnight yesterday, and this morning it had grown to around 500 MB.
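
In case you want to reproduce the numbers on your side, something like this quick sketch captures the same kind of samples we’ve been watching (the process name "Octopus.Server" is an assumption on my part; use whatever shows up in Task Manager):

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Samples the working set of the Octopus server process every 10 seconds and prints it
    // in the same units Task Manager shows (K).
    class MemorySampler
    {
        static void Main()
        {
            while (true)
            {
                foreach (var p in Process.GetProcessesByName("Octopus.Server"))
                {
                    p.Refresh();
                    Console.WriteLine("{0:u}  WorkingSet = {1:N0} K", DateTime.Now, p.WorkingSet64 / 1024);
                }
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }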

We did fiddle with the server config a bit. Initially we pointed it at a SQL Server 2014 instance on the same machine as the Octopus server. I then pointed it at a high-availability cluster by editing the .config file and changing the server name.
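
For completeness, the only part of the connection string that changed was the server name, pointing it at the availability group listener instead of the local instance. Conceptually the change is just this (the listener and database names below are placeholders, and MultiSubnetFailover is only the usual recommendation for AG listeners, not something Octopus requires):

    using System.Data.SqlClient;

    class ConnectionStrings
    {
        // Before: the local default instance.
        static readonly string Local = new SqlConnectionStringBuilder
        {
            DataSource = "localhost",
            InitialCatalog = "Octopus",
            IntegratedSecurity = true
        }.ConnectionString;

        // After: the availability group listener (hypothetical name).
        static readonly string Cluster = new SqlConnectionStringBuilder
        {
            DataSource = "sql-ag-listener.OurDomain.com",
            InitialCatalog = "Octopus",
            IntegratedSecurity = true,
            MultiSubnetFailover = true
        }.ConnectionString;
    }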

The only other thing we fiddled with was getting it to work with host headers and a subdomain: https://deploy.OurDomain.com. Other than initially assuming IIS was involved, this didn’t present much of a problem, and the URL and SSL certificate are working well.

We’ve been monitoring it today, and while you do see a small reduction in memory use at times (less than 100 K), Octopus is steadily growing every hour. We’ll leave it overnight again and see if it gets back up to 500 MB.

If you need access to our data, let me know and I’ll send you a SQL Server 2014 backup of the Octopus database.

Thanks
Don

Just an update, Robert. We left it running overnight, and the memory use for the Octopus server seems to have stabilised at between 400 and 430 MB. Is this what you’d expect for an almost empty Octopus configuration?

Hi Donsmy,
We ran a load test for over a week that deployed non-stop; memory usage got quite high quickly (about 2 GB), but it was fairly stable for the duration and showed no discernible growth. I will look into running a similar test but letting the server idle for some time afterwards to make sure the memory comes back down, though I have no reason to think it won’t recover most of it.
Considering that even when “idle” the server is constantly doing background work (checking pending tasks, performing health checks, checking licence details, etc.), I think the level you’ve seen it stabilise at seems reasonable. Let me know if/how this is causing a problem and we can investigate whether there is anything obvious that could be cleaned up more efficiently.
Cheers,
Robert

Hey Robert,

Thanks for getting back to me. I think we’re a little more comfortable now that we know memory use isn’t out of control. We’ll continue with our evaluation and see whether it’s still stable after we start deploying packages and adding environments.

Thanks for the great support so far.

Don