Adding MSDeploy.contentPath

Hi,

I have 2 releases/projects which deploy from an on-prem Octopus Server to Azure App Services via an on-prem Tentacle in a worker pool.

Both releases have effectively two steps in the process:

  • Deploy an Azure AppService using the standard template to an Azure App Service staging slot
  • Swap the staging and production slots using an AzureRM PowerShell script (roughly as sketched below)
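
For context, the swap in the second step is essentially the AzureRM equivalent of the following (a minimal sketch, not our exact script; the resource group and app names are placeholders):

# Sketch only - placeholder names, assuming the AzureRM.Websites module is available on the worker
$resourceGroup = "my-resource-group"
$appName       = "my-app-service"

# Swap the staging slot into production
Switch-AzureRmWebAppSlot -ResourceGroupName $resourceGroup -Name $appName `
    -SourceSlotName "staging" -DestinationSlotName "production"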

7 days ago we upgraded to 2021.1 build 7379, from 2020.11 or thereabouts. The Tentacle is currently 6.1.670.

Now, in several places the Octopus Portal recommends an upgrade, such as “Upgrade Calamari” in the above screenshot. We have tried this option several times to no avail - so we think we are all up to date.

4 days ago we deployed the releases to Azure through the worker. The process failed in the second step, though the deployment to the app service was successful. To complete the deployment we swapped the slots manually through the Azure Portal, ending up with the following state (putting aside the dates, which relate to today, as we have tried re-deploying without success):

Deployments

Today, having corrected a variable within the project settings, updated the snapshot variables and restarted both the Octopus Server and the Tentacle worker (which both releases use) in the hope of rectifying the error in the second step, and after also…

manually replacing the target staging slot with an empty slot - by creating an empty slot and swapping it in within the Azure Portal (we do this to get around another issue we face)…

we re-deployed.
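
For reference, the slot-clearing workaround above amounts to roughly the following in AzureRM PowerShell (a sketch with placeholder names - in practice we do it through the Azure Portal):

# Sketch only - placeholder names; we normally do this through the Azure Portal
$resourceGroup = "my-resource-group"
$appName       = "my-app-service"

# Create a fresh, empty slot and swap it into staging so staging starts clean
New-AzureRmWebAppSlot -ResourceGroupName $resourceGroup -Name $appName -Slot "empty"
Switch-AzureRmWebAppSlot -ResourceGroupName $resourceGroup -Name $appName `
    -SourceSlotName "empty" -DestinationSlotName "staging"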

Then, instead of failing on the second step…it failed on the first. The relevant lines from the task log are as follows:

Something in common with both releases, which are now failing consistently, and different from the initial release and all other releases I have witnessed in our projects, is the line:

Adding MSDeploy.contentPath (MSDeploy.contentPath)

This is not occurring with the same release deployed to another environment and app service via a different Tentacle, albeit a different version … 6.1.736.

Apart from upgrading the Tentacle to the same version, which we are looking to do, any ideas where this mysterious new line comes from?

Cheers

John

Hey John,

Thanks for reaching out and for all of the detailed information.

I just want to confirm that this specific tentacle was working before you upgraded your Octopus Server. Did you also upgrade the Octopus Tentacle version at this time? If so, do you know the previous versions of Server and Tentacle? It looks like Server was 2020.1.#?

The upgrade button not going away is actually a known visual bug which you can track here: Calamari 'upgrade available' message displaying despite Tentacle not needing upgrade. · Issue #6909 · OctopusDeploy/Issues · GitHub

I think the best first test would be to manually upgrade the Tentacle like you suggested and see if it works at that point.

Would you be able to direct message me 2 task logs? One from a working Tentacle and one from a failing one?

Please let me know how the test goes.

Best,
Jeremy

Here’s a copy of what I’ve messaged privately…

Hey Jeremy,

I am going to give you 4 for good measure.

We have updated both our Tentacles to 6.1.736 with no change in the behaviour we are seeing.

The first two are to our production environment.

The first is from 4 days ago, when we were able to deploy to our app service; the second is from today using 6.1.736, where we get the differing behaviour. Both of these deployments are to empty slots.

... file removed
... file removed

The next 2 are to our development environment, which is under a different Azure subscription/resource group - previous investigations have also found these to be in different data centres. We have Octopus set up to use a different worker when deploying to non-production environments…it was an architecture design thing.

... file removed
... file removed

The third log shows a deployment to a slot which we haven’t cleared. The fourth is a log to a slot we have cleared. Both have worked as expected.

Regards

John

Hey John,

Thanks for all of the information.

I’m sure you saw it, but the biggest difference I see is that the working one seems to be adding a folder with dependencies and the other is not; then Octopus says “(13/07/2021 08:56:21) An error occurred when the request was processed on the remote computer.”

I had a discussion with some colleagues and we were curious if you could test this with one of our newer step types. It was introduced in 2021.1 and will eventually be necessary, as Microsoft will be deprecating the MSDeploy method of deploying web apps. We have also seen other customers run into MSDeploy issues, and this step type has alleviated them.

The step is Deploy an Azure App Service

Here is a blog post about it with more information: Improved Azure App Service deployments - Octopus Deploy

The only downside is you will need to create an Azure Web App deployment target.
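
If you do end up going down that path later, the target can also be created dynamically from a PowerShell step rather than by hand - something along these lines (a sketch with placeholder names; please double-check the exact parameters against the dynamic infrastructure documentation):

# Sketch only - placeholder names; runs inside an Octopus PowerShell step, with
# dynamic infrastructure allowed on the target environment
New-OctopusAzureWebAppTarget -Name "My Web App (Staging)" `
    -AzureWebApp "my-app-service" `
    -AzureResourceGroupName "my-resource-group" `
    -OctopusAccountIdOrName "my-azure-account" `
    -OctopusRoles "web-app" `
    -UpdateIfExisting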

Please let me know if you can test that and how it goes.

Best,
Jeremy

Hi Jeremy,

I’m afraid using the new step is not going to be a practical option in this instance, as we are in the final stages of releasing to production - a last-minute change of this nature will not go down well within my organisation.

With respect to your comment about “adding a folder with dependencies” are you referring to the line in the log:

Adding MSDeploy.contentPath (MSDeploy.contentPath).

If so what would cause this?

This is the same package and same process; the only things different between our first deployment of this release to this environment 4-5 days ago and now are a variable update and some server reboots.

This would lead me to believe that the Octopus upgrade is a key factor in the behaviour we are now seeing.

Which leads me to my next question, and I feel foolish asking it but…

We upgraded from 2020.4.11 to 2021.1 build 7379 last Wednesday. Is it possible to reinstall the earlier version of Octopus Server and for it to still function reliably?

Regards

John

Hey John,

The part I was referring to was this:

14:46:09   Verbose  |       Adding directory (************\App_Data).
14:46:09   Verbose  |       The dependency check 'DependencyCheckInUse' found no issues.
14:46:09   Verbose  |       Received response from agent (HTTP status 'OK').

The non-working log only has the final line, not the first two. The broken one also says Adding MSDeploy.contentPath, and the working one doesn’t.

I’m not sure on the cause of this. I am going to reach out to some colleagues and get some extra eyes on this.

As for reverting to an older version, unfortunately, your only option would be to restore the DB backup from before the upgrade. You would lose any work you did post-upgrade, but you could potentially look at the audit log to recreate your work if it isn’t too much.

Please let me know if you have any questions in the meantime while I discuss this with my colleagues.

Best,
Jeremy

Cheers Jeremy,

Thanks for confirming what I expected would be the case in trying to revert to a previous version.

The \App_Data folder lines are what I would expect to see as the installation process manages the files at the destination - which is now not occurring, or we are not getting to that point.

Instead what we have is:

“Adding MSDeploy.contentPath (MSDeploy.contentPath).” Which I believe is resulting in:

“An error occurred when the request was processed on the remote computer.”

The “remote computer” in this case is Azure. There is plenty online relating this line to Azure deployments from one source or another.

We are raising this with Microsoft to see if they can tell us from their end what this error may have been.

I am currently associating “Adding MSDeploy.contentPath (MSDeploy.contentPath).” with the creation of a virtual folder / sub-application, as you might do in IIS - there are some hits on this online - but this doesn’t fit with what I have seen on the Linux-based services we are using. Typically we end up with a directory structure:

home\site\wwwroot\all our files here

So in terms of App_Data:

home\site\wwwroot\App_Data

Currently I’m thinking the deployment is trying to create

home\site\wwwroot\<something else perhaps>\App_Data

But I am at a loss as to where this may be coming from, and it is failing before even trying to create the “App_Data” folder.
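
One way we can check what has actually landed in the slot is the Kudu VFS API - a rough sketch below (the hostname and credentials are placeholders; the publishing credentials come from the slot’s publish profile):

# Sketch only - placeholder host and credentials; lists what is actually in wwwroot
$user = '$my-app-service__staging'     # publishing user name from the slot's publish profile
$pass = 'publishing-password-here'
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))

Invoke-RestMethod -Uri "https://my-app-service-staging.scm.azurewebsites.net/api/vfs/site/wwwroot/" `
    -Headers @{ Authorization = "Basic $auth" }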

The “Adding MSDeploy.contentPath (MSDeploy.contentPath)” line is occurring in two of our projects - both of which deployed fine before the weekend without encountering this line or error.

I look forward to anything else you or your colleagues may be able to add.

Regards

John

Hi John,

Just stepping in for Jeremy as he’s signed off for the day.

I’m curious to see if Microsoft is able to shed any light on this issue, as we have seen past issues where they change the endpoint unexpectedly. In the meantime, I thought I would describe an issue I’ve seen with another customer that could be relevant here.

We initially bundled versions of the Azure Resource Manager PowerShell modules (AzureRM) and the Azure CLI for convenience; however, these are now out of date and we are advising customers to provide their own versions of these tools - see the documentation here.

I see that the successful deployments are using the Development workers, whereas the Production workers are the ones failing. Is it possible that the Development workers have a different version of the AzureRM modules installed from the Production workers? Could you please confirm whether these workers are using the tools bundled with Octopus or tools pre-installed on the worker, and if the latter, that the versions match those of the Development workers?
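
One quick way to compare, for example, would be to run something like this on each worker (a sketch - e.g. via the Script Console or a small script step):

# Sketch only - list the AzureRM module versions available locally on the worker
Get-Module -ListAvailable -Name AzureRM* | Select-Object Name, Version | Sort-Object Name

# And, if the Azure CLI is installed on the worker, its version
az --version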

Looking forward to hearing back from you and getting to the bottom of this! Feel free to let me know if you have any questions or would like me to elaborate further.

Best Regards,

Finnian

Hi Finnian,

We definitely use the tools bundled with Octopus. Our worker Tentacles are just vanilla installs with custom proxy settings.

My current thinking on this issue is that when Octopus / MSDeploy is negotiating with Azure about what needs to be done, it is now taking a different route for these targets.

Regards

John

Hey John,

Have you by chance heard back from Microsoft at all?

And just to confirm: even after the upgrade, some targets work but others don’t? (Same process, same web app, etc.)

Please let me know.

Best,
Jeremy

Hi Jeremy,

We are in the process of getting through to them, which involves going through several departments of a third party. I am hoping that by this time tomorrow we will be speaking directly with Microsoft.

We have taken steps to deploy directly to production and non-production app services using a like-for-like deployment process, and are currently satisfied that the only difference in play is the Azure app services we are deploying to.

I currently believe something about them is triggering the behaviour we are seeing.

I will let you know how we get on.

Thank you for your ongoing support.

Cheers

John

Hey John,

You’re very welcome and thanks for the clarification.

Please let me know how it goes with Microsoft and if you need anything else from me.

Best,
Jeremy

Hi Jeremy,

Had a meeting with Microsoft today, and wouldn’t you know it… the deployments went through as we sat there watching them process. Arrggghhhh…

Not that we have done anything at our end. At any rate, we have provided Microsoft with a range of logs to see if they can determine any cause from what they have at their end. I’ll let you know if they come back with anything interesting.

Thanks again

John


Hey John,

That’s just the worst when that happens. We’ve all been there. At least it seems like they fixed whatever it was on their end, and we will know in the future that this error likely needs to go straight to Microsoft, which should save the next person a bit of time.

You’re very welcome, and thanks for being patient and for updating me. If they get back to you and you have the time, please feel free to update the thread for anyone else who runs into this error in the future.

If we don’t speak, I hope you have a great rest of your week.

Best,
Jeremy

Hi Jeremy,

Well here it is in terms of what we understand the problem was:

> Engineers have identified that the failure occurred because the MSDeploy server for your Linux App Service Plan was in a bad state. The issue was automatically mitigated when the MSDeploy server performed an unscheduled reinitialization.

I’m not going mad! It is interesting to note that we had a similar issue earlier in the year when MSDeploy was “Pre-authenticating” or thereabouts in the logs. It too resolved itself in a fashion we could not explain at the time.

We will be looking into the new Deploy an Azure App Service step to see if it improves things.

Cheers

John

Hey John,

Thanks so much for the update. It’s a shame that the message it was giving us wasn’t, on the surface, really indicative of an Azure App Service problem. At least we (and others finding this thread) will know for the future.

Please let us know if you have any questions if/when you pivot to the new step.

Best,
Jeremy
