We would like to have a single Azure App Service that hosts both our front end and API. To achieve this, we would like to have two virtual applications set up as follows:
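Roughly speaking, the mappings look like this (the physical paths shown here are representative):

```
Virtual path    Physical path
/               site\wwwroot\website
/api            site\wwwroot\api
```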
When our project is built a nupkg is created with the following folder structure:
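Roughly like so (the folder names are illustrative):

```
MyApp.1.0.0.nupkg
├── website\    <- front-end build output
└── api\        <- API build output
```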
I have used both the Az CLI and the new Az PowerShell Publish-AzWebApp cmdlet to deploy this as a zip archive, and this works fine. However, because Octopus only supports the older AzureRM PowerShell module, which doesn't have an equivalent publish cmdlet, I opted to use the "Deploy an Azure Web App" step in Octopus.
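For reference, the Az deployment that works for us looks something like this (the resource group and app name are placeholders):

```powershell
# Deploy the built package as a zip archive using the Az.Websites module.
# The resource group, app name, and archive path below are placeholders.
Connect-AzAccount
Publish-AzWebApp -ResourceGroupName "my-rg" `
                 -Name "my-app-service" `
                 -ArchivePath "C:\build\MyApp.zip"
```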
When I run this step, instead of putting the contents of the nupkg in the site\wwwroot folder on the App Service, the contents are uploaded to site\wwwroot\website, as seen in the screenshot below.
This then obviously leads to the site not starting, because IIS does not find any applications in the physical paths that have been registered. It seems to me that Octopus is querying the web app to find its virtual application paths and then deciding to deploy the application to the physical path that maps to the / virtual path.
To be clear, I do not have anything set in the Physical Path property of the Octopus step, and if I do set that to a value of foo, for example, then Octopus deploys the contents of the nupkg to site\wwwroot\website\foo. I have also run further tests to confirm this behaviour by changing the App Service to have a single virtual application mapping / (the default setup). Octopus then deploys the files to site\wwwroot, but the application still fails to start, as there is no application entry point in the root of our nupkg.
It seems to me that I have a couple of options:
- Change the structure of the nupkg so that the website project is published to the root, with a subfolder for the API. I can then revert the first virtual application back to the default physical path, Octopus will deploy the contents to the wwwroot folder, and the application path mappings will match again.
- Run some PowerShell pre-deployment to temporarily change the virtual application paths and then revert them after deployment.
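A sketch of what option 2 might look like, using the Az module (this is untested, the resource names are placeholders, and inside Octopus it would need the AzureRM equivalents since only that module is supported there):

```powershell
# Untested sketch: temporarily point the root virtual application back at
# site\wwwroot so Octopus deploys to the expected location, then restore.
$app = Get-AzWebApp -ResourceGroupName "my-rg" -Name "my-app-service"

# Remember the current virtual application mappings so they can be restored.
$original = $app.SiteConfig.VirtualApplications

# Temporarily remap the / virtual path to the default physical path.
$root = $app.SiteConfig.VirtualApplications | Where-Object { $_.VirtualPath -eq "/" }
$root.PhysicalPath = "site\wwwroot"
Set-AzWebApp -WebApp $app

# ... run the Octopus deployment step here ...

# Afterwards, restore the original mappings with another Set-AzWebApp call.
```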
Option 1 is undesirable to me, as I would rather keep the projects in separate folders to avoid potential weirdness from having applications nested inside each other on disk, and option 2 is undesirable as it makes the deployment process more complicated. Is there another way around this, or am I doing something more fundamentally wrong with the way the virtual applications are structured?
Please could you advise?
Hi choc13, thanks for getting in touch,
I managed to reproduce this scenario and looked into the underlying cause. This won't work very well with the current Azure Web App step, since mapping the root route / to something other than site\wwwroot causes MSDeploy/Web Deploy to interpret that as the new root for everything. I have created a new issue for us to support zip deployments, which you can use to track progress. This should alleviate this pain; however, it does mean that the only viable option for now is a workaround.
Could you also clarify which version of Octopus you are currently using? We bundle a fairly recent version of the Azure PowerShell modules, along with the Azure CLI, in newer versions. Of course, if you have access to the worker/server, you could also install your own Azure PowerShell modules.
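As an interim workaround, if the bundled Azure CLI is available on your version, a custom script step could perform the zip deployment directly, bypassing the virtual application path detection. Something along these lines (the names are placeholders, and the step is assumed to have already authenticated against your Azure account):

```powershell
# Zip-deploy straight to site\wwwroot via the Azure CLI.
# The resource group, app name, and archive path below are placeholders.
az webapp deployment source config-zip `
    --resource-group "my-rg" `
    --name "my-app-service" `
    --src "MyApp.zip"
```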
Thanks for the response, and for reproducing the issue and confirming the underlying cause. I had a hunch this might be the case after looking through the verbose deployment logs; they output the folders that are synchronised, which seemed to suggest Octopus was connecting to the root directory of the App Service.
For now, I have decided to go for the first solution I mentioned in my original post. The long term goal for us is to use the new Az PowerShell module so that we can have our deployment scripts in the repo alongside the code, so I will stick with this solution until we are able to use Az PowerShell.
On that note, the version of Octopus we are running is 2019.1.3. When I tried to run the Az PowerShell cmdlets they weren't found, so it seems they are not available in that version. Are you able to confirm the version in which they were first bundled? Also, will you be providing a step template like the current "Run an Azure PowerShell Script"? It is really useful how that template makes it easy to connect to Azure as a service principal, so that we can then just invoke the deployment scripts from within our nupkg. Additionally, do you see yourselves adding support for running Az CLI commands as an authenticated service principal in a similar way?
Finally, while I do have access to the server, it has restricted internet access, so it is quite difficult for us to install additional PowerShell modules and keep them up to date. We would rather wait until they are bundled as part of Octopus, to make maintenance easier and avoid any potential incompatibility issues.
Apologies, you are correct: we have not bundled the Az PowerShell modules; I misinterpreted "Az" to mean the Azure CLI. We have recently been discussing this and are generally leaning towards not bundling tooling if at all possible, as we don't want to bundle every version of every tool for every possible platform. We are still trying to determine the exact direction on this, including whether the current Azure PS modules should be replaced by the new Az PowerShell modules; we may have to introduce other conveniences in order to maintain backwards compatibility. Regardless of direction, however, we will make it easy to use our built-in accounts in whichever context they are being used.
Regarding your question about the Azure CLI and authenticating using a service principal: this is already supported as part of the "Run an Azure PowerShell Script" step, as can be seen here.