Azure deployment - mismatch of certificates

I’m trying to run an Azure deployment, but it’s failing at the ‘Creating a new deployment’ step with the following error message:

    New-AzureDeployment : An exception occurred when calling the ServiceManagement API. HTTP Status Code: 400. Service Management Error Code: BadRequest. Message: The certificate with thumbprint 24972df7a3bd59e0baa703bc6507ca3c3cc5bfa9 was not found. Operation Tracking

I can’t figure out where this thumbprint is coming from as it doesn’t match that of the certificate that Octopus Server generated and that I uploaded to Azure. Even after regenerating the certificate (following the steps [here](http://help.octopusdeploy.com/discussions/problems/23875-regenerate-azure-certificate)) it still tries to use the same thumbprint.

I’ve exhausted everything I can think of, so hopefully someone can help me out here.

Cheers
Richard

Hi Richard,

Sorry for the delay in getting back to you. Could you please tell us what version of Octopus you are currently running?

Thank you,
Henrik Andersson

Hi Henrik

I’m using the latest version, 2.5.10.

Thanks

Sent from my iPhone

Hi Richard,

Just today, I have been wrestling with this exact same issue.

What I had done was enable remote desktop connections for the roles within my cloud service without uploading the certificate configured in the remote desktop settings.

So I guess my question is: have you enabled remote desktop connections for your roles in your cloud project? If so, you will need to export the certificate (with its private key) from the cert store and upload it to your cloud service.

The first two answers in this SO question, http://stackoverflow.com/questions/18780281/azure-cloud-deployment-fails-certificate-with-thumbprint-was-not-found, give detailed steps on how to do this, in case you are not sure how to export the certificate.
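For reference, here’s a rough PowerShell sketch of those steps. The thumbprint, paths, password and service name are all placeholders for your own values, and it assumes the classic Azure PowerShell module (the same one that provides New-AzureDeployment) plus the built-in PKI module:

    # Find the RDP password-encryption certificate in the current user's personal store
    # (use the thumbprint from the error message / your .cscfg - this one is a placeholder)
    $thumbprint = '24972DF7A3BD59E0BAA703BC6507CA3C3CC5BFA9'
    $cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Thumbprint -eq $thumbprint }

    # Export it together with its private key to a password-protected .pfx
    $pfxPassword = 'placeholder-password'   # illustration only - use a real secret
    $securePassword = ConvertTo-SecureString $pfxPassword -AsPlainText -Force
    Export-PfxCertificate -Cert $cert -FilePath 'C:\temp\rdp-cert.pfx' -Password $securePassword

    # Upload the .pfx to the cloud service so the deployment can resolve the thumbprint
    Add-AzureCertificate -ServiceName 'MyCloudService' -CertToDeploy 'C:\temp\rdp-cert.pfx' -Password $pfxPassword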

Hope that helps!

Henrik Andersson

Hi Henrik,

My cloud service currently has remote desktop connections disabled, so it doesn’t look like this is the issue.

However, I switched to using a different cloud service name, this time one that already existed and that had been manually deployed to previously, and magically everything worked. My original cloud service that I was trying to deploy to was brand new and had never been deployed to - I wonder if this was causing the issue. I haven’t had time yet to go back and test this with the original cloud service, but will let you know when I do.

Thanks

Ok, this thread is a bit old, but if I’m right, Richard is a colleague of mine and I’m just now tripping over the vestiges of this issue in the exact same deployment Richard was having trouble with, so it’s still current 🙂

Actually, I reckon Henrik is right. At some point in the past you turned on Remote Desktop Access during a manual deployment, and that did three things:

  1. Created a new certificate “CN=Windows Azure Tools” in your personal store, which it used to encrypt your remote user credentials. (The generated friendly name is based on the solution you deployed.)
  2. Uploaded that certificate to Azure so the credentials could be decrypted when needed.
  3. Created a whole bunch of entries in your .cscfg file, pointing to all of this, that VS won’t let you remove.

It did all of this automatically, making it very easy not to notice and very likely that you’d then lose the certificate forever, as it exists only in your personal certificate store. It doesn’t travel with the source; only the thumbprint reference does.

These settings became part of the .cscfg file, were checked in with the source, and became part of your deployment. Once that has happened, the .cscfg can only be deployed if the “CN=Windows Azure Tools” certificate with the given thumbprint exists in the Azure environment being deployed to. It doesn’t matter whether the enable-remote-desktop flag is currently true or false; the deployment will fail if the certificate is not found. This is why it works for an environment that has been manually deployed to before (by you specifically and no other person, since anyone else would generate a brand new certificate): the manual deployment uploaded the certificate.
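If you want to confirm this, you can compare the thumbprint the deployment is demanding against what the target cloud service actually has. A minimal sketch, assuming the classic Azure PowerShell module and a placeholder service name:

    # Thumbprint the deployment insists on (from the error message / .cscfg)
    $wanted = '24972DF7A3BD59E0BAA703BC6507CA3C3CC5BFA9'

    # Look for it among the certificates currently uploaded to the target cloud service
    $found = Get-AzureCertificate -ServiceName 'MyCloudService' |
        Where-Object { $_.Thumbprint -eq $wanted }

    if ($found) { 'Certificate is present' } else { 'Certificate is missing - deployment will fail' }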

Here’s an example of the settings it makes in the .cscfg file:

    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="myaccounthere" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="encryptedpassword" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2016-08-19T23:59:59.0000000+01:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="false" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="MyCertificateThumbprint-TheOneThatTheDeploymentWantsToExistForeverAfter" thumbprintAlgorithm="sha1" />
    </Certificates>

It also changes the .csdef file to import two modules:

    <Imports>
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>

You can fix the issue either by uploading the certificate or by removing the added information from the .cscfg and .csdef files and rebuilding. You have to modify the files manually, as the VS tooling won’t let you delete them in the UI.
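Which fix is open to you depends on whether the original certificate still exists on the machine that did the manual deployment. A quick way to check, using nothing beyond PowerShell’s built-in Cert: drive:

    # Look for the RDP encryption certificate that VS generated;
    # if it's here, you can export and upload it (see the sketch earlier in the thread),
    # otherwise strip the RemoteAccess/RemoteForwarder entries and rebuild
    Get-ChildItem Cert:\CurrentUser\My |
        Where-Object { $_.Subject -eq 'CN=Windows Azure Tools' } |
        Select-Object Thumbprint, FriendlyName, NotAfter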

^ Thanks to John Swallow for this post. I was experiencing this issue today, found this thread within minutes, and was able to send it to the developer and have them remove the offending .cscfg entries. Thanks again!