Octopus Database import from 2.6 to 3.17 stuck with 100% memory usage

Hi,

I have been trying to import an Octopus 2.6.4 data backup into the latest version, 3.17. The import, however, gets stuck on the message “Match source documents to destination documents or new Ids” for hours. The log file doesn’t get updated either, and the Octopus.Migrator process seems to consume all available memory. Is there any other way to get this working?

LOG: OctopusMigrator.2017-09-30 22_03_25.6823.txt

2017-09-30 22:03:27.3534 21452 6 DEBUG Reader took 0ms (0ms until the first record) in transaction '': SELECT * FROM dbo.[ServerTask] ORDER BY [Id]
2017-09-30 22:03:27.3534 21452 1 INFO 33 destination documents added
2017-09-30 22:03:27.3534 21452 1 INFO Step took 00:00:00s
2017-09-30 22:03:27.3534 21452 1 INFO
2017-09-30 22:03:27.3534 21452 1 INFO Match source documents to destination documents or new Ids

Thanks,
Abi

Hi Abi,

I’m sorry to hear your migration hasn’t been smooth.

Can I ask what size the 2.6 backup file is?

Migrations can take a while to run, and the process is memory-hungry. But there are some strategies to optimize this:

Remove unnecessary data from your 2.6 instance

Our strongest recommendation is to use retention policies in your 2.6 instance to remove some data.

Ideally, you want to get the document count in RavenDB down to under 150k documents (we’d be interested to know how many you currently have). You can find the document count by viewing Raven through the Octopus Manager; it is shown in the footer of the Raven studio.
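
If it’s easier, the same count can be read from Raven’s HTTP API. A rough PowerShell sketch, assuming the embedded RavenDB is listening on the common 2.x default port of 10930 (check the Raven URL shown in Octopus Manager for your actual port):

    # Read database statistics from the embedded RavenDB over HTTP.
    # The port (10930) is an assumption based on the common 2.x default.
    # If your data lives in a named database, query /databases/<name>/stats instead.
    $stats = Invoke-RestMethod -Uri "http://localhost:10930/stats"
    # CountOfDocuments is the figure to get under ~150k before migrating.
    $stats.CountOfDocuments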

You can also run the process mentioned in this ticket to shave some documents that won’t be imported anyway.

You can always retain the original large backup somewhere if you require it for audit purposes.

As a pre-emptive warning: if you have a very large deployment history, it may be best to run the retention policies in stages, so as not to cause Raven to freeze with a large number of deletions.
For example, if you have days-to-keep set to 500, dial it down to 400, then 300, etc.

Give the machine as much RAM as possible

The more RAM the server executing the migrator has access to, the faster the process will complete. If you can temporarily dial this up to a big number (e.g. 64GB), this will help.

No Logs and/or MaxAge

There are configuration options which allow you to ignore task logs, and/or documents past a certain age. This can greatly speed up the migration.

The logs can be imported later using the --onlylogs option if required.
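
For reference, a run using those options might look something like the PowerShell sketch below. The --maxage, --nologs and --onlylogs options are as discussed above; the install path and the --file/--password arguments are from memory, so please verify them against the migrator’s built-in help before running:

    # First pass: skip task logs and anything older than 30 days.
    # The install path, --file and --password are assumptions - verify against the migrator's help.
    & "C:\Program Files\Octopus Deploy\Octopus\Octopus.Migrator.exe" import `
        --file="C:\Backups\OctopusExport.octobak" --password="YourPassword" `
        --maxage=30 --nologs

    # Later, if the task logs are needed after all, import just those:
    & "C:\Program Files\Octopus Deploy\Octopus\Octopus.Migrator.exe" import `
        --file="C:\Backups\OctopusExport.octobak" --password="YourPassword" `
        --onlylogs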

I hope this helps. Even with these strategies, a migration may take multiple hours.

Please let us know the result.

Regards,
Michael

Hi Michael,

Thank you for your reply.

The backups were initially ~3GB, with 28GB of activity logs. When I tried migrating on a 24GB box (Intel Xeon E5620 @ 2.40GHz), I was getting an OutOfMemoryException. I then moved the ActivityLogs folder out, after which the backup size came down to 777MB.

Raven DB document count = 361911
Backup file size = 777MB

I executed the migration script on the command line with the --maxage="30" --nologs parameters. It has been running for 48+ hours now, and I do see a few entries in the log file. Attached is the log file after 48+ hours of execution. I wonder how long this process is going to take and whether it’s actually updating the database. I checked the dbo.Users table in the database that I configured for v3.17, but there were no rows other than the one added during installation.

I am more concerned about retaining Octopus configurations, variable sets, and deployment plans/configuration than about retaining artifacts, previous releases, and history. We have 37 Tentacles and a lot of configurations/variable sets that we cannot recreate manually when we migrate to the latest Octopus version. Can I manually delete documents in the following lists: Artifacts (36190), Releases (~3914), Events (161813), Deployments (42460) to reduce the backup size? Will this reduce the time required for the migration process?

Thanks,
Abi

command.PNG

OctopusMigrator.2017-10-02_11_55_31.0256.txt (5 MB)

Abi,

I have looked at your log file, and nothing seems amiss.

A backup of that size can take multiple days to migrate.

I would not recommend deleting documents manually; instead, apply retention policies to your 2.6 instance.
This would allow you to remove many of the releases, deployments, events, artifacts, etc. that you mention you are not particularly concerned about.

Please keep us updated with how it goes.

Hi Michael,

Thanks for taking the time to look at the logs. I have changed the retention policy to retain only the last 5 builds, or builds from the last 30 days. However, for the retention policies to take effect, I would need to deploy all of the projects (around 240). This is not feasible for us at the moment, but we will be enforcing retention policies going forward.

So, after 60+ hours of execution, the migration process has reached the “Convert documents” step. It is nearing the 72-hour mark now. Based on the number of documents I posted earlier, would it be possible to estimate how long this step, and the entire process, will take to complete?

Thanks,
Abi

Abi,

The Octopus Server retention policies actually run on a schedule, every 4 hours. It is only the Tentacle retention policies that run after a deployment.

I wish I could tell you how much time is remaining, but there are many factors at play.
“Convert documents” means it has read all the documents from the 2.6 backup, and has begun converting and writing them to the 3.x SQL Server database.
The fact that you ran it with --maxage will help now, as the older releases, deployments, events, etc. will not need to be converted.
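
If you want to reassure yourself that the conversion step really is writing to SQL Server, you can poll the row counts in the new database and watch them grow. A minimal PowerShell sketch, assuming integrated security and a database named Octopus (both assumptions; dbo.Users and dbo.ServerTask are tables already mentioned in this thread):

    # Poll row counts in a couple of the 3.x document tables.
    # The server and database names are assumptions - match them to your 3.17 instance.
    $conn = New-Object System.Data.SqlClient.SqlConnection("Server=localhost;Database=Octopus;Integrated Security=SSPI")
    $conn.Open()
    foreach ($table in "Users", "ServerTask") {
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = "SELECT COUNT(*) FROM dbo.[$table]"
        "{0}: {1} rows" -f $table, $cmd.ExecuteScalar()
    }
    $conn.Close()

Re-running this every so often while “Convert documents” is in progress should show the counts increasing if the import is making headway.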