Deployment error SQL Error - elastic pool request limit reached

(Chris Thwaites) #1

Our Octopus Deploy cloud instance has been running very slowly recently, but today it has actually started failing with Microsoft Azure database issues, which I assume are at your end?

Here’s the issue we’re getting in the task log on the “apply retention policy” step:

The step failed: Resource ID : 1. The request limit for the elastic pool is 600 and has been reached. See ‘http://go.microsoft.com/fwlink/?LinkId=267637’ for assistance.

And here’s the full error stack:

SQL Error 10936 - Resource ID : 1. The request limit for the elastic pool is 600 and has been reached. See ‘http://go.microsoft.com/fwlink/?LinkId=267637’ for assistance.
Microsoft.Data.SqlClient.SqlException
at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at Microsoft.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at Microsoft.Data.SqlClient.TdsParser.TdsExecuteTransactionManagerRequest(Byte[] buffer, TransactionManagerRequestType request, String transactionName, TransactionManagerIsolationLevel isoLevel, Int32 timeout, SqlInternalTransaction transaction, TdsParserStateObject stateObj, Boolean isDelegateControlRequest)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransactionYukon(TransactionRequest transactionRequest, String transactionName, IsolationLevel iso, SqlInternalTransaction internalTransaction, Boolean isDelegateControlRequest)
at Microsoft.Data.SqlClient.SqlInternalConnection.BeginSqlTransaction(IsolationLevel iso, String transactionName, Boolean shouldReconnect)
at Microsoft.Data.SqlClient.SqlConnection.BeginTransaction(IsolationLevel iso, String transactionName)
at Microsoft.Data.SqlClient.SqlConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at System.Data.Common.DbConnection.System.Data.IDbConnection.BeginTransaction(IsolationLevel isolationLevel)
at Nevermore.RelationalTransaction..ctor(RelationalTransactionRegistry registry, RetriableOperation retriableOperation, IsolationLevel isolationLevel, ISqlCommandFactory sqlCommandFactory, JsonSerializerSettings jsonSerializerSettings, RelationalMappings mappings, IKeyAllocator keyAllocator, IRelatedDocumentStore relatedDocumentStore, String name, ObjectInitialisationOptions objectInitialisationOptions)
at Octopus.Core.RelationalStorage.RawRelationalStore.BeginTransaction(RetriableOperation retriableOperation, String name) in C:\buildAgent\work\abb2fbfce959a439\source\Octopus.Core\RelationalStorage\RawRelationalStore.cs:line 26
at Octopus.Server.Web.Infrastructure.OctopusRelationalStore.BeginTransaction(RetriableOperation retriableOperation, String name) in C:\buildAgent\work\abb2fbfce959a439\source\Octopus.Server\Web\Infrastructure\OctopusRelationalStore.cs:line 65
at Octopus.Server.Orchestration.ServerTasks.Deploy.ExecutionPlanService`3.Persist(DeploymentPlan plan) in C:\buildAgent\work\abb2fbfce959a439\source\Octopus.Server\Orchestration\ServerTasks\Deploy\ExecutionPlanService.cs:line 48
at Octopus.Server.Orchestration.ServerTasks.Deploy.ExecutionTaskController`1.ExecuteBase() in C:\buildAgent\work\abb2fbfce959a439\source\Octopus.Server\Orchestration\ServerTasks\Deploy\ExecutionTaskController.cs:line 127
at Octopus.Server.Orchestration.ServerTasks.RunningTask.RunMainThread() in C:\buildAgent\work\abb2fbfce959a439\source\Octopus.Server\Orchestration\ServerTasks\RunningTask.cs:line 101

This error doesn’t look to me like it’s caused by our setup, but I’m happy to be corrected if someone can tell me whether there’s some sort of cleanup I need to do?

Thanks,
Chris

(Justin Walsh) #3

Hi Chris,

Thanks for reaching out, and apologies for the inconvenience caused by this.

Sadly, this is something on our end, but rest assured that our engineers are on the case. It looks like the spike that was causing this error has passed, but we want to make sure the root cause is fixed so that we don’t see it happen again.
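For reference, if you ever need to check this kind of pressure on an Azure SQL elastic pool that you manage yourself, recent pool utilisation is exposed through the sys.elastic_pool_resource_stats catalog view in the logical server’s master database; its max_worker_percent column reflects concurrent requests relative to the pool limit that error 10936 reports. A sketch of such a query (the TOP count and column selection are just illustrative):

```sql
-- Run against the master database of the Azure SQL logical server.
-- Shows the most recent utilisation snapshots for each elastic pool;
-- max_worker_percent approaching 100 means the pool's concurrent
-- request (worker) limit, the one error 10936 reports, is being hit.
SELECT TOP (20)
    end_time,
    elastic_pool_name,
    avg_cpu_percent,
    max_worker_percent,
    max_session_percent
FROM sys.elastic_pool_resource_stats
ORDER BY end_time DESC;
```

In your case this all lives on our side of the fence, so there’s nothing you need to run; it’s just how this class of error is diagnosed.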

Please don’t hesitate to reach out if you have any further questions.

(Production Info) #4

Appreciate the info, Justin. We are still experiencing 500 errors on our end, so please keep us updated as to when this issue is resolved.