Upgrade (migrate) from 2.6 to 3.1.3 running forever at "Import Task logs" step

I’m trying to upgrade from 2.6 to 3.1.3 using the “Import Data” approach. It has now been running for more than 24 hours. The current step is ‘Import Task logs’, and it had already reached this step before I left the office yesterday. It seems to be taking forever.

The server has 4 GB of RAM and is unresponsive because the migration process has consumed more than 3.5 GB of it so far. It looks like a memory leak.

Logs are zipped and attached.

Please help.

OctopusServer.zip (7 MB)

Hi Tiger,

Thanks for getting in touch. The migration can definitely take a long time depending on the size of your database. Large databases can often take a very long time (24+ hours) depending on the server machine’s specs. The high memory usage is odd, because the migrator generally only keeps track of IDs in memory. It could be that the garbage collector isn’t running very often.

I assume you’ve attempted the migration three times, since you attached three log files. Did the migration complete?

Let me know how you go.


Unfortunately, it has now been running for more than 72 hours. The entire machine is extremely slow due to frequent pagefile I/O operations.


The log files I sent are all from a single migration run. Since it has lasted more than 24 hours, I assume there is one log file per day.

Due to this issue, we can’t move forward with officially migrating our 2.6 instance to the latest version.


We’re experiencing the exact same problem right now. We have a couple of Octopus instances on separate machines. One RavenDB backup file is 1.2 GB, and its migration has been running for about 28 hours now. Another is 800 MB and has been running for about 8 hours. A third was only 100 MB and finished in just a few minutes. Both large ones are at the “Import Task logs” step and are still writing to the logs, so it’s not actually stuck, just really slow, it seems.

Hi Tiger and Erik,

I’m sorry that you’ve experienced issues with the migration. If the migration hasn’t completed since your last message, I’d suggest killing the process and trying one of the following.

  1. Consider truncating your history - you can call the migrator with the -maxage= argument to limit the amount of historical data you bring across from Octopus 2.6. This will also make the migration process much faster in your situation. (If you’re using the UI to run the Migrator, show the script it generates and just add -maxage=90 to keep only the last 90 days of historical data.)

  2. If you need to keep all of the history, consider moving the *.octobak file to a machine with plenty of RAM and trying the migration there.
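As a sketch of option 1: if the UI generates an import command, append the -maxage argument to it before running. The file path and password below are placeholders, not values from your setup, and the exact script the UI generates may differ slightly:

```
REM Sketch only - run the script shown by the Octopus Manager UI,
REM with -maxage=90 appended to keep only the last 90 days of history.
Migrator.exe import --file="C:\Backups\OctopusExport.octobak" --password="YOUR_PASSWORD" -maxage=90
```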

We have had a few really big migrations (databases multiple gigabytes in size) that have run for several days, and the “Import Task logs” step is often the slowest part of the migration. If it is still writing to the log files, then it’s likely that it’s still running.

Let me know how you go!


Hi Rob,

Thanks for your reply. Both my migrations finally ended and I thought I’d post the times taken here for reference:

The 1.2 GB octobak file (complex structure, with hundreds of projects containing many steps each) took about 42.5 hours on Windows Server 2012 R2 with 4 GB RAM and a dual-core 2.6 GHz CPU.

The 0.8 GB octobak file (simpler structure, with about 50 projects and fewer steps) took about 13.5 hours on the same specs, although about halfway through we expanded the RAM from 4 GB to 10 GB.

Both migrations were successful in the end.

If I were to do it again, I would take your advice and limit the historical data to one year (the minimum our compliance needs allow).



This post saved me. At least now I know it can take that long.

I was having the same issue: a ~1 GB octobak file, running for more than 36 hours with RAM usage at ~99%, on Windows Server 2008 R2 with a 2.7 GHz dual-core CPU and 4 GB RAM.

I’ll wait another ~12 hours, as per the post above.

Is there a way to tell from the log files what percentage is complete and what remains (so we can decide whether to stop and restart with only 120 or 240 days of historical logs)?

Octopus should have added a progress bar, or at least mentioned sample import times in the upgrade wiki, so that migrations could be planned effectively. We are down now.