An existing connection was forcibly closed by the remote host

Hi!

We’ve been trying to solve this error all day and we cannot fix it :frowning:

We have many Octopus projects, each pointing to its own Azure account. Each project has 7 deployment steps (one per deployment slot).

We’ve been cancelling and retrying, without success.

Everything had been working perfectly until yesterday… any help? We’re desperate!

May 8th 2018 13:43:35 Error

System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
   at System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)
   --- End of inner exception stack trace ---
   at System.Net.Security._SslStream.EndRead(IAsyncResult asyncResult)
   at System.Net.TlsStream.EndRead(IAsyncResult asyncResult)
   at System.Net.Connection.ReadCallback(IAsyncResult asyncResult)
   --- End of inner exception stack trace ---
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
   --- End of inner exception stack trace ---
   at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task)
   at Microsoft.WindowsAzure.WebSpaceOperationsExtensions.List(IWebSpaceOperations operations)
   at Calamari.Azure.Integration.Websites.Publishing.ServiceManagementPublishProfileProvider.GetPublishProperties(String subscriptionId, Byte[] certificateBytes, String siteName, String serviceManagementEndpoint) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Integration\Websites\Publishing\ServiceManagementPublishProfileProvider.cs:line 19
   at Calamari.Azure.Deployment.Conventions.AzureWebAppConvention.GetPublishProfile(VariableDictionary variables) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Deployment\Conventions\AzureWebAppConvention.cs:line 150
   at Calamari.Azure.Deployment.Conventions.AzureWebAppConvention.Install(RunningDeployment deployment) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Deployment\Conventions\AzureWebAppConvention.cs:line 33
   at Calamari.Deployment.ConventionProcessor.RunInstallConventions() in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari\Deployment\ConventionProcessor.cs:line 60
   at Calamari.Deployment.ConventionProcessor.RunConventions() in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari\Deployment\ConventionProcessor.cs:line 28

Running rollback conventions…

An error occurred while sending the request.
System.Net.Http.HttpRequestException
   at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task)
   at Microsoft.WindowsAzure.WebSpaceOperationsExtensions.List(IWebSpaceOperations operations)
   at Calamari.Azure.Integration.Websites.Publishing.ServiceManagementPublishProfileProvider.GetPublishProperties(String subscriptionId, Byte[] certificateBytes, String siteName, String serviceManagementEndpoint) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Integration\Websites\Publishing\ServiceManagementPublishProfileProvider.cs:line 19
   at Calamari.Azure.Deployment.Conventions.AzureWebAppConvention.GetPublishProfile(VariableDictionary variables) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Deployment\Conventions\AzureWebAppConvention.cs:line 150
   at Calamari.Azure.Deployment.Conventions.AzureWebAppConvention.Install(RunningDeployment deployment) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Deployment\Conventions\AzureWebAppConvention.cs:line 33
   at Calamari.Deployment.ConventionProcessor.RunInstallConventions() in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari\Deployment\ConventionProcessor.cs:line 60
   at Calamari.Deployment.ConventionProcessor.RunConventions() in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari\Deployment\ConventionProcessor.cs:line 50
   at Calamari.Azure.Commands.DeployAzureWebCommand.Execute(String[] commandLineArguments) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari.Azure\Commands\DeployAzureWebCommand.cs:line 83
   at Calamari.Program.Execute(String[] args) in Z:\buildAgent\workDir\733e182abdf775f7\source\Calamari\Program.cs:line 46
--Inner Exception--
The underlying connection was closed: An unexpected error occurred on a receive.
System.Net.WebException
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
--Inner Exception--
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
System.IO.IOException
   at System.Net.Security._SslStream.EndRead(IAsyncResult asyncResult)
   at System.Net.TlsStream.EndRead(IAsyncResult asyncResult)
   at System.Net.Connection.ReadCallback(IAsyncResult asyncResult)
--Inner Exception--
An existing connection was forcibly closed by the remote host
System.Net.Sockets.SocketException
   at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
   at System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)


I am seeing the same issue today. Can you confirm what version of Octopus you are running? We are still on 13.6.0.

Hi!

We’re on the latest version: v2018.4.11.

I guess we’ll have to wait to see if one of the support team picks this up.


I am experiencing the same problem. I hope the support team gives us a solution.

I cannot get it working either… I thought I had broken my environment, but now I see it’s general…

Here is a list of the things we have tried:

  1. Restarted the Octopus service
  2. Restarted the server
  3. Checked our Azure subscriptions under Environments > Accounts > Azure Subscriptions. The Save and Test verifies the account as OK

We have found the issue to be intermittent, with about 75% of deployments failing.
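
One extra check that might help narrow it down: the stack trace fails while Calamari talks to the Service Management endpoint, so it is worth confirming that a TLS session can even be negotiated from the Octopus server. The sketch below is only a diagnostic aid (not part of Octopus or Calamari) and assumes Python 3 on the server and the default endpoint, management.core.windows.net:

```python
# Minimal TLS connectivity check against the (assumed default) Service
# Management endpoint. It only verifies that a TLS handshake completes and
# reports the negotiated protocol version; it does not authenticate.
import socket
import ssl

HOST = "management.core.windows.net"  # assumed default Service Management endpoint
PORT = 443

context = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"Connected to {HOST}:{PORT} using {tls.version()}")
except OSError as err:
    print(f"Connection to {HOST}:{PORT} failed: {err}")
```

If this also fails intermittently from the Octopus server, the resets are happening at the transport level rather than in the deployment step itself.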

We’ve also done 1 and 2.

Since it fails 75% to 80% of the time, it still succeeds about 20% of the time, so from Azure’s point of view (subscription, permissions, certificates…) everything seems to be OK.

It does seem like the issue could be on the Azure side though. At least for me, it only seems to be app services that are the issue. Cloud services always deploy fine for me.

Is anyone experiencing issues outside the Azure ecosystem? Can anyone confirm they are also seeing this issue with App Services only?

Another one chiming in with exactly the same issue.

Tried similar steps mentioned above. Also tried re-uploading the management certificate but it hasn’t helped.

Cloud services also seem to deploy fine for us, so it just leaves web apps.

Hi Vicent, hi everyone,

Thanks for getting in touch and sorry to hear about this issue.

Azure recently announced that from June 30th 2018 they are retiring support for the Service Management API. This affects Azure Web Apps / App Services.

If you’re suddenly experiencing issues deploying to your Azure App Service and you’re using a Management Certificate account (which uses the Service Management API), we advise you to create a new Service Principal account moving forward and see if that fixes the issues you are seeing. Our guess at this stage is that Azure has started trialling this migration in selected regions earlier than the announced date :frowning:

Unfortunately we do not allow changing the type of an Azure account in Octopus, so you will need to find all the places where the old Azure Management Certificate account was used and replace it manually with your new Azure Service Principal account.
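
For anyone who has not created a Service Principal before, one way to do it (purely as an illustration, not an official Octopus procedure) is via the Azure CLI’s az ad sp create-for-rbac command. The sketch below just wraps that call from Python and assumes the CLI is installed and you have already run az login; the account name, role and scope are examples only. The values it prints map onto the fields of an Octopus Azure Service Principal account, alongside your Subscription ID:

```python
# Illustrative helper: creates a Service Principal with the Contributor role by
# shelling out to the Azure CLI, then prints the values an Octopus
# "Azure Service Principal" account needs. Assumes "az" is on the PATH and
# "az login" has already been run (on Windows you may need to invoke az.cmd).
import json
import subprocess

SP_NAME = "octopus-deploy-sp"                              # example name only
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # replace with your subscription ID

result = subprocess.run(
    [
        "az", "ad", "sp", "create-for-rbac",
        "--name", SP_NAME,
        "--role", "Contributor",
        "--scopes", f"/subscriptions/{SUBSCRIPTION_ID}",
    ],
    capture_output=True, text=True, check=True,
)
sp = json.loads(result.stdout)

print("Application ID (Client ID):", sp["appId"])
print("Tenant ID:                 ", sp["tenant"])
print("Application Password/Key:  ", sp["password"])
```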

We have been trying to reproduce this issue without any luck so far. We’re continuing to try various combinations of Azure regions, OS and .NET Framework versions to see what triggers it.

Sorry we don’t have better news at this stage.

We have created a central GitHub issue here that you can track to be notified of any updates.

Cheers
Mark


Changing to an Azure Service Principal account seems to have solved the issue for me.

Many thanks.

Thanks Mark,

This solution is also working for us.

Many Thanks

Works for us as well!

Thanks for your assistance.

(and bad for Azure)

Hi,
We are deploying to an Azure VM and are experiencing the same issue.
Question: Do you think that we need to create a Service Principal Account for that too?
(as we are not deploying web apps directly to Azure, but to a VM hosted on Azure)
We are running Octopus 2018.8.6

Hi Magnus,

Sorry to hear you’re having trouble.

No, if you’re deploying to a VM on Azure (via a Tentacle) and not using a Web App step directly, this would be a different issue unrelated to the Azure account types.

Could you please open a new issue on our support forum (or email our support team directly at support@octopus.com) and include a raw task log of the failing deployment so we can see the full details of the error, and we can take it from there.

Thanks
Mark