Database backups not finishing

Sometimes our automated database backups seem to hang at 20% for hours. This is not the first time this has happened to us; the issue was reported in another thread, but for an earlier version of Octopus. We're currently looking to upgrade to 2.5.12 in a few weeks, but we'd still like to have solid backups in place beforehand. I've cancelled the hung backup, but I'm growing a little concerned ahead of the planned upgrade.


Hi Ian!

Really sorry about this issue. There’s a bug in System.IO.Packaging (which we use to write the backup zip file) that causes deadlocks if two bits of code try to create a backup at the same time. We’ve fixed it in 2.6.

If none of your backups are finishing, can you try restarting the Octopus server? This should clear the deadlock.


Thanks Paul for the info! I’ll be restarting the Octopus server tonight and I’ll let you know how it goes.

Thanks again for the info!


Hi Paul, while trying to restart the service, the Octopus service has been stuck restarting for over 40 minutes. Using Process Explorer, I looked for any handles or DLL locks on Raven and found that the Octopus service still holds those locks. What should I do at this point?

We’ve restarted the server as the service was unable to stop. We’re now trying a manual database backup.

Success! Manual database backup worked. Thanks again, Paul. Looking forward to 2.6! :slight_smile:

Slightly off-topic - for database backups, what are the main factors in determining the speed of a backup? Since this incident, we monitor the RavenDB backups like a hawk (excuse the pun) and we've noticed that the time taken is neither constant nor predictable. The range is usually approximately 35 minutes to 1 hour, but today our backup took almost 2 hours to finish, and we were growing concerned. I'm curious whether there are any factors we can control to expedite the backup process.


Hi Ian,

For an explanation of exactly what happens, I might have to wait until the new year for a response. In 3.0 your backups will become much more reliable, as the data moves to SQL.
The only serious cause for concern would be a backup that doesn't complete and, after being cancelled, cannot be run manually.
Another would be backups that are really large compared to previous ones, but on your version of Octopus this shouldn't be an issue.


Our current DB size is relatively small (~500MB), but the growth rate is increasing noticeably. I realize 3.0 will be a major shift in terms of the control we have over the database and over high availability/disaster recovery. Right now, my concern is simply keeping the 2.5.x server intact until 3.0 arrives, and that means good backups.
Since our DB issue, we're backing up only once a day instead of 4-6 times a day, and I'm trying to make sure the backups keep working consistently, just in case.

Thanks, Vanessa!

Hi Ian,

Yeah, I totally understand, and I wasn't trying to imply that backups aren't important; there's just not much we can change in the meantime.
I spoke to Paul about this: server load, Octopus usage, and things like running deployments can all impact the speed of backups. If you are running a deployment or any other high-priority task, the backup task gets a lower priority and can therefore take longer. Scheduling backups for an off-peak time (if you have one!) should produce more consistent completion times.
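Since backup duration varies with server load, it can help to track completion times over a window and flag outliers rather than worrying about any single slow run. Here's a minimal sketch of that idea in Python; the log format, timestamps, and the 1.5x threshold are all assumptions for illustration, not anything built into Octopus:

```python
from datetime import datetime

# Hypothetical (start, end) timestamps recorded for each backup task.
# The values below are illustrative assumptions only.
backups = [
    ("2014-12-01 02:00", "2014-12-01 02:38"),
    ("2014-12-02 02:00", "2014-12-02 02:55"),
    ("2014-12-03 02:00", "2014-12-03 03:52"),  # ran long
]

def flag_slow_backups(runs, factor=1.5):
    """Return (start, minutes) for runs exceeding `factor` times the average duration."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end in runs
    ]
    avg = sum(durations) / len(durations)
    return [
        (start, round(mins))
        for (start, _), mins in zip(runs, durations)
        if mins > factor * avg
    ]

print(flag_slow_backups(backups))  # → [('2014-12-03 02:00', 112)]
```

A run flagged this way during a deployment window is probably just the priority effect described above; one flagged during an off-peak window is worth a closer look.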


Hi Vanessa,

I meant to respond a while ago. Our backups are doing fine now; we monitor them every day and things are working just fine. We were just growing concerned about the amount of time they take. Our DB backups are now pushing 650MB, still rather small, but the growth is trending upwards significantly. That said, I'm going to close this issue.