ModifyResponder error post Octopus Server upgrade

We recently upgraded our Octopus Server from 2020.5.9 to 2021.1.7665, and since the upgrade users are reporting this error when they try to update project templates:

Error while executing SQL command in transaction ‘ModifyResponder ProjectModifyResponder’: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated. The command being executed was: DELETE FROM [dbo].[EventRelatedDocument] WHERE ([EventId] = @eventid) SQL Error -2 - Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated. The wait operation timed out.

I’m unable to find any relevant logs in the Octopus Server application logs.

Hi @amnelson,

Thanks for reaching out to Octopus Deploy, sorry to see you’re having issues.

Usually, we see this type of error when index fragmentation is very high on the SQL Database. It would be a good idea to look into this as a first step. An easy way to check fragmentation is to run the Community Step Template SQL - Query Octopus Database for Fragmentation.
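If you would like to check fragmentation directly instead, here is a minimal sketch using SQL Server's sys.dm_db_index_physical_stats DMV (the database name OctopusDeploy and the 30% / 1000-page thresholds are illustrative assumptions; adjust for your environment):

```sql
-- List indexes in the Octopus database with average fragmentation above 30%
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID('OctopusDeploy'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000   -- ignore tiny indexes where fragmentation is noise
ORDER BY ips.avg_fragmentation_in_percent DESC;
```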

If you would like to run the check manually, this article outlines the steps as well as resolution: How to identify and resolve SQL Server Index Fragmentation

If you haven’t seen this already, we have a database maintenance page here that might be helpful.
As with any Database work, it’s recommended you have a current backup before running any maintenance.
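As a sketch of what that maintenance looks like, fragmented indexes can be rebuilt or reorganized like this (the index and table names below are illustrative, not necessarily the ones in your database):

```sql
-- Rebuild when fragmentation is high (commonly > 30%)
ALTER INDEX [IX_EventRelatedDocument_EventId] ON [dbo].[EventRelatedDocument] REBUILD;

-- Reorganize for moderate fragmentation (commonly 5-30%)
ALTER INDEX [IX_EventRelatedDocument_EventId] ON [dbo].[EventRelatedDocument] REORGANIZE;

-- Refresh statistics afterwards so the optimizer sees the new layout
UPDATE STATISTICS [dbo].[EventRelatedDocument];
```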

Please let me know if you have any more questions.


All fragmented indexes identified by the scripts provided have been rebuilt but users are still seeing the errors.

Hi @amnelson,

Thanks for getting back to me so quickly. Sorry to see that didn’t help!

How do your SQL Metrics look when you are experiencing this error? Are CPU/Memory particularly taxed?

If your metrics look fine, we have seen some performance improvements by increasing the Max Pool Size on the SQL Connection.

To increase your Max Pool Size, you will need to find your config file and alter the connection string line.

Firstly, make sure you have a working and valid backup of Octopus Deploy and your Master Key tucked away safely. We recommend having regular backups at all times.


  1. Put your instance into maintenance mode and stop the instance via Octopus Manager.

  2. Also in Octopus Manager, you will find the location of your home directory.

  3. Within your home directory, you will find a “.config” file. This will contain your SQL Connection string.

  4. You can add the parameter “Max Pool Size” to the database connection string with a number higher than the default. Our docs use 200 as an example (the default is 100).
    Here’s an example:
    Data Source=(local);Initial Catalog=OctopusDeploy-MyDB;Integrated Security=True;Max Pool Size=xxx;

Your connection string will be slightly different, with regards to the Initial Catalog value.

  5. Save the file, start your Octopus Deploy instance back up, and try again.

Please let me know if this is something you can try.


Our connection count for the database never goes above 20, which is well under the max pool size of 100. Our DBAs have confirmed that resource utilization on the database is low as far as CPU/memory is concerned. Our DBA also ran a trace while our end user replicated the issue and found no failure for the command execution, even though the Octopus application UI reports one. I also don’t find any exceptions being logged in the Octopus Server application…
The error is specific to only this one workflow: adding a project template variable to a project. Do you know what happens behind the scenes, and what calls to the database happen during that process, that could help us identify potential issues?

Hi @amnelson,

Thanks for getting back to me.

Would it be possible to change your log level for Octopus Server to Trace, as outlined in this document: Log files - Octopus Deploy? Then have someone reproduce the problem while you tail the most recent log; by default it would be in C:\Octopus\Logs. Once you have the new log, could you send over a copy?

Looking forward to hearing back.


I will look at getting application-side logging turned up and provided to you. In the meantime, our DBA has identified the following:

Ok, so further digging indicates that there is some blocking going on.

The DELETE statement is being blocked by the following. This is an example query preventing the DELETE statement from getting the lock:

<?query --
(@0__Id nvarchar(100), @0__JSON nvarchar(max), @0__EventId nvarchar(100), @0__RelatedDocumentId nvarchar(100), @0__RelatedDocumentIdPrefix nvarchar(100),
 @1__Id nvarchar(100), @1__JSON nvarchar(max), @1__EventId nvarchar(100), @1__RelatedDocumentId nvarchar(100), @1__RelatedDocumentIdPrefix nvarchar(100))
INSERT INTO [dbo].[EventRelatedDocument] ([EventId], [RelatedDocumentId], [RelatedDocumentIdPrefix])
VALUES (@0__EventId, @0__RelatedDocumentId, @0__RelatedDocumentIdPrefix),
       (@1__EventId, @1__RelatedDocumentId, @1__RelatedDocumentIdPrefix)
--?>
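For reference, blocking like this can be surfaced live with SQL Server's DMVs; here is a minimal sketch (the column selection is illustrative):

```sql
-- Show requests that are currently blocked and the session blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```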

Is there a way to increase the timeout that the application uses for queries/statements?

Hi @amnelson,

I appreciate the quick response.

I am going to forward this to our engineers, as they are based out of Australia it will be a few hours before they will be able to look at it. We will keep you updated with any information when we have it.

I will also send them the logs when you’re able to get them, feel free to direct message them to me if you do not want them public.

Thanks for your patience while we work through this.


Hi @amnelson,

Just stepping in for Garrett while he’s offline.

Our engineers are looking into this and would like to know if you are able to profile a SQL query such as:
delete from [EventRelatedDocument] where [EventId] = 'FakeId'
and share the Actual Execution Plan?
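One way to capture the actual plan is with SET STATISTICS XML, which returns the plan as XML alongside the results; here is a sketch that wraps the delete in a transaction and rolls it back so no rows are actually removed ('FakeId' is the placeholder value from above):

```sql
SET STATISTICS XML ON;   -- emit the actual execution plan with the result set

BEGIN TRANSACTION;
DELETE FROM [EventRelatedDocument] WHERE [EventId] = 'FakeId';
ROLLBACK TRANSACTION;    -- undo the delete; we only wanted the plan

SET STATISTICS XML OFF;
```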

I’ve created a link for you to securely upload any files here. Please feel free to let me know if you have any questions or run into any issues!

Best Regards,

The profile has been uploaded for you.


Hi @amnelson,

Thanks for sending those files over.

We appreciate your patience while we work through this and dig into them. We will keep you updated with any information as we get it.


This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.