Acquire Packages step prefixing a 'v' to the package tag name

I’ve posted a bit before about using the Deploy an Azure App Service step. It had all been working fine for months, until yesterday when I created a new environment to deploy to.

For some reason, when the packages are acquired for an image tag name in the Azure Container Registry, it prefixes the tag name ‘develop’ with a v, so the step looks for ‘vdevelop’. Yet if I run the same step with the same package in a different environment, it doesn’t add the v prefix and works fine.

I’ve encountered this problem before, and it always happened when I cloned the step and tried to use it elsewhere. I now create the steps manually, and until yesterday that was working fine.

I can attach two log files for the deployments, one with a failure and one without.

The server is on 2022.2 (Build 7580), Windows 64-bit.

Hey @JamesH,

Thank you for contacting Octopus Support and sorry to hear you are having issues with one of your deployments.

It’s very odd that it is putting an extra character in the tag for only one environment! This might be a daft question, but you don’t have any variables set on that project which are scoped to different environments, do you? That is the only thing I can think of that would explain it picking up the ‘v’ for one environment.

Is it the new environment you created that is picking up the extra ‘v’, or is it an older environment? Also, just to make sure we are talking about the same thing: when you say it’s adding the ‘v’ for one environment, you mean you created (and are deploying to) a new Octopus Environment, not an Azure environment?

If you create a new release and deploy it, does it still put the ‘v’ in the tag for the same environment?

How have you got your deployment step set up? When you say tag, are you talking about the container image?

I have created a secure link here where you can upload the log files. I can take a look at them and see if I can spot anything.

If you have any variables set for that project, it would be worth turning on Variable Logging and getting the task logs, just so we can make sure that a variable is not being used somewhere it should not be.
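For reference, variable logging is usually switched on by adding debug variables to the project. The names below are taken from the Octopus documentation on debugging variables (OctopusPrintVariables is also the one that shows up in the task logs later in this thread):

```text
OctopusPrintVariables          = True   # log raw variable values at Verbose level
OctopusPrintEvaluatedVariables = True   # log variables after evaluation
```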

Also, if you are willing to share it, could we get a copy of the deployment process, please? You can take a look at it first and scrub out any sensitive information, such as Azure subscription details, if required.

I look forward to hearing from you. Let us know once those files have been uploaded, as we are not notified when a customer uploads to the secure site.

Kind Regards,


I’ve uploaded two log files, with Succeeded and Failed added to the names respectively.

This is the package name it should be using

Thanks for your help

Hi Clare, I’ve now also uploaded a failed release raw log with variable logging turned on.



And I’ve just added an image of the process step.

Hey @JamesH,

Great, thank you, we have got them. I’ll take a look and see if I can spot anything, and I will let you know the outcome of the investigation.

Kind Regards,


Hey @JamesH,

This post is a little long, so please bear with me.

I have taken a look at your logs, and it doesn’t seem like the ‘vdevelop’ is the issue; it appears in both the successful and failed logs.

Failed logs:

18:41:30   Verbose  |     Package rm-ui vdevelop is required by action 'Deploy UI to Azure Staging'
18:41:30   Verbose  |     Package octopusdeploy/worker-tools v3.3.2-ubuntu.18.04 is required by action 'Deploy UI to Azure Staging'
18:41:30   Verbose  |     No packages are required on the Octopus Server

Successful logs:

18:37:13   Verbose  |     Package rm-ui vdevelop is required by action 'Deploy UI to Azure Staging'
18:37:13   Verbose  |     Package octopusdeploy/worker-tools v3.3.2-ubuntu.18.04 is required by action 'Deploy UI to Azure Staging'
18:37:13   Verbose  |     Package octopusdeploy/worker-tools v3.3.2-ubuntu.18.04 is required by action 'Swap Slots, Staging into Production'
18:37:13   Verbose  |     Package octopusdeploy/worker-tools v3.3.2-ubuntu.18.04 is required by action 'Delete Staging Slot'
18:37:13   Verbose  |     No packages are required on the Octopus Server

Where it seems to fail is on the Step 3: Deploy UI to Azure Staging step, as we mentioned (I have scrubbed out sensitive info and replaced it with ‘xxxx’s):

Failed log:

18:41:54   Info     |       Updating web app to use image from registry
18:41:54   Verbose  |       Retrieving config (this is required to update image)
18:41:55   Error    |       Running rollback behaviours...
18:41:55   Error    |       Operation returned an invalid status code 'NotFound'

Successful Log:

18:38:43   Info     |       Updating web app to use image from registry
18:38:44   Verbose  |       Retrieving config (this is required to update image)
18:38:44   Verbose  |       Retrieving app settings
18:38:54   Info     |       Updating app settings with container registry

So it looks like it’s the config needed to update the image that it cannot find for some reason. If we look at the failed log with variable logging on:

[ConfigJson] = '/opt/#{ContainerName}/config.json'

I think (but am not 100% sure) that this is the variable it’s using, and this is what it can’t find.

Container name is:

[ContainerName] = '#{ImageName}#{unless Octopus.Deployment.Tenant.Name == ""}-#{Octopus.Deployment.Tenant.Name | ToLower}#{/unless}-#{Octopus.Environment.Name | ToLower}'

Image name is:

[ImageName] = 'rm-ui'

Would it be possible to get me a successful deployment log with variable logging enabled, so I can compare the two? I can’t quite see why this is failing from the failed log alone, without seeing what the successful one shows.

You can upload it to the same link. If you can get us that log, let me know when it’s uploaded and I can start to compare the two.

Kind Regards,


Some of those variables that aren’t populated now are part of our old on-prem Docker servers that we are moving over to Azure.

I’ve run this in an environment where it works, with logging enabled. There are a couple of extra steps that can be discounted (creating a slot, swapping slots and deleting the slot), as the slot isn’t populated for the failing environment.

The file is now uploaded

Looking at this failed log step, I wonder if the authentication is working correctly, since it cannot see the image and tag in the repo (and I think the ‘vdevelop’ is possibly a red herring, as it is named correctly in the lookup):

18:41:54   Info     |       Updating web app to use image from registry
18:41:54   Verbose  |       Retrieving config (this is required to update image)
18:41:55   Error    |       Running rollback behaviours...
18:41:55   Error    |       Operation returned an invalid status code 'NotFound'

Hey @JamesH,

I would say the ‘vdevelop’ is a red herring here, since it shows up in both the successful and failed logs, and the step that fails does try to find the same package as the step that succeeds.

In the logs for the failed deployment, here is where it’s failing:

10:09:05   Verbose  |       [Volumes] = '/var/log/rm-ui:/var/log/rm-ui|/opt/rm-ui-ppe-development/config.json:/app/static/config.json'
10:09:05   Verbose  |       [WaitInSeconds] = '90'
10:09:05   Verbose  |       [WCF_NM_File_Services] = 'https://#{HostName}/NMFileService.asmx'
10:09:05   Verbose  |       [WCF_NM_RMUploadService] = 'https://#{HostName}/RMUploadService.asmx'
10:09:05   Verbose  |       [WCF_NM_Services] = 'https://#{HostName}/NMServices.svc'
10:09:06   Verbose  |       Deployment type: Container
10:09:06   Info     |       Updating web app to use xxx/rm-ui:develop from
10:09:06   Verbose  |       Retrieving config (this is required to update image)
10:09:06   Error    |       Running rollback behaviours...
10:09:06   Error    |       Operation returned an invalid status code 'NotFound'

Successful logs:

11:18:58   Verbose  |       [Volumes] = '/var/log/n-rm-ui:/var/log/rm-ui|/opt/rm-ui-ppe-nightly/config.json:/app/static/config.json'
11:18:58   Verbose  |       [WaitInSeconds] = '90'
11:18:58   Verbose  |       [WCF_NM_File_Services] = 'https://#{HostName}/NMFileService.asmx'
11:18:58   Verbose  |       [WCF_NM_RMUploadService] = 'https://#{HostName}/RMUploadService.asmx'
11:18:58   Verbose  |       [WCF_NM_Services] = 'https://#{HostName}/NMServices.svc'
11:18:59   Verbose  |       Deployment type: Container
11:18:59   Info     |       Updating web app to use image from registry
11:19:00   Verbose  |       Retrieving config (this is required to update image)
11:19:00   Verbose  |       Retrieving app settings

So the config logs are in different places, which makes sense as they are on different servers. One does say /var/log/rm-ui:/ in the failed logs and /var/log/n-rm-ui:/ in the successful ones, but I don’t think that is the issue here.

I am going to get this in front of our engineers to see if they can spot the issue, as they know the backend code better than me, and what config that step looks for.

One thing that would be good to check, if you can: are you able to update that web app on the failing target outside of Octopus? If you run the Azure CLI commands to update the app on the Octopus Server, does it work, or does it throw the same error? I don’t think it’s down to authentication, as you would get unauthorised errors, and searching your failed logs for ‘authorised’ doesn’t yield any results.

I will let you know what our engineers say, but if you can try updating the web app outside of Octopus, that would be a good test.

Kind Regards,


Those unpopulated variables aren’t relevant, as the project is pulling them in from a previous library variable set.

I have been using the az CLI to create the apps from scratch; to keep costs down, we delete and recreate the web apps as needed for testing, so we don’t pay for them when they aren’t in use. I’m about to do the same with this one and will see if it works. I’ll get back to you on this.


I created three scripts that update the above app. I ran them as a single runbook step and they run fine:

az webapp create -g $AzureResourceGroup -n $AzureAppName -p $AzureAppServicePlan -i $containerUrl --tags octopus-environment=$EnvironmentName octopus-role=$AzureDeploymentRole
az webapp vnet-integration add -g $AzureResourceGroup -n $AzureAppName --vnet $AzureVnet --subnet $AzureSubnet
az webapp config access-restriction add -g $AzureResourceGroup -n $AzureAppName --rule-name ApplicationGatewayAccess --action allow --vnet-name $AzureVnet --subnet $AzureSubnetGateway --priority 300

I’ll upload the Application Settings next and make sure that works.

Hey @JamesH,

Thanks for running those tests and keeping us informed. If your application tests are successful, then it’s definitely an Octopus issue, and I can update the engineers with the fact that it works outside of Octopus.

I just checked the thread I posted for the engineers, and they did provide a response to the ‘vdevelop’ query we had:

That vdevelop thing is something we do in the logs. We prepend v to any package version. Probably a good question as to whether we should be doing that at all anymore considering we cater for docker which doesn’t necessarily have semantic versions

So this can officially be ignored as something that would not contribute to this issue; they will probably look at getting rid of the ‘v’ in future releases, too.

I look forward to hearing the results of your tests.

Kind Regards,


Hi Clare

Just to let you know, the configs are working fine too for manual deployments from a runbook for that environment.

I set up a new environment too, and it errors with the Octopus-created step template but runs fine with the manual one in a runbook.



Hey @JamesH,

Thank you for confirming this works in a runbook; very odd!

I have given all of this information to our engineers, who are looking into this for you. Hopefully they can spot something in the logs that I have not managed to find which points to why this is happening.

The other thing the engineers would like, if possible, is for you to export your deployment process JSON and upload it via the support site. I know you sent over a picture of one of the steps, but the engineers need the full process JSON if possible. Again, please take a look at it and redact any sensitive information before sending it over.

Once they have all of this, they will have a good idea of what is going on. The link I gave you before for our secure site should still be valid, but let me know if it has expired and I will generate a new one for you. Let me know once the files have been uploaded, too, and I can get them straight to our engineers.

Thank you for your patience whilst we continue to investigate this for you.

Kind Regards,


Hi Clare

I’ve uploaded successful runbook deployment logs, the process for the runbook, and the process for the main project.

I’ve done it for the simplest project so it’s easier to read.

This now happens on all projects with a new environment.


Hey @JamesH,

Brilliant, thanks for the logs. The engineers can now really dig into this and figure out what is going on. It’s frustrating that this is now happening for all new environments, but that is a good piece of information for us to have, as it might help us narrow this down.

I will keep you informed of our investigations into this but please reach out in the meantime if you have any other questions.

Kind Regards,


Hey @JamesH,

Thanks for your patience as we looked into this. I’m one of the engineers who is a custodian of the Deploy an App Service step, jumping in for @clare.martin here due to timezones.

I think I’ve figured out what the root cause of your issue may be. I noticed that the deployment process you uploaded to us is using Deployment Slots.

The process uses a variable of #{AzureDeploymentSlot} in the Deployment Slot field of the “Deploy an App Service” step. Based on the logs you submitted with OctopusPrintVariables turned on, it looks like in your variables for the project, you have only defined a value for this AzureDeploymentSlot variable scoped to the environments that use a slot. There is no value for the other environments that don’t expect to use a slot.

In our variable templating/substitution engine, if a substitution template references an unknown variable, we always evaluate it as the template itself, not as an empty string. So “#{DoesntExist}” is returned literally as “#{DoesntExist}”, not as an empty string.

In your case, I suspect the “Deploy an App Service” step is trying to deploy to a slot literally called “#{AzureDeploymentSlot}”.
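To illustrate the behaviour, here is a simplified sketch in Python (illustrative only, not our actual templating engine):

```python
import re

def evaluate(template, variables):
    """Simplified sketch of the substitution rule: a known variable is
    replaced by its value; an unknown one is left as the literal template."""
    def substitute(match):
        name = match.group(1)
        # Unknown variable: return the template text itself, NOT an empty string.
        return variables.get(name, match.group(0))
    return re.sub(r"#\{([\w.]+)\}", substitute, template)

variables = {"AzureDeploymentSlot": "staging"}
print(evaluate("#{AzureDeploymentSlot}", variables))  # -> staging
print(evaluate("#{DoesntExist}", variables))          # -> #{DoesntExist}
```

So a Deployment Slot field containing only the unknown template ends up as the literal text, and Azure then reports NotFound for that slot.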

I get that this can be counter-intuitive, and it’s definitely not helped by the fact that we don’t give you any hints in the logs as to what’s happening here. I’m sorry about that, and it’s something I’ll add to our backlog to improve. I think we could also make the “unknown variable” behaviour more explicit in our product documentation; I couldn’t find a clear call-out that this is how it behaves. I’ll look into that, too.

On the upside, if my assessment is correct, you should fairly easily be able to fix these failing deployments. All you need to do is add an extra, empty value for AzureDeploymentSlot, scoped to the environments which don’t use deployment slots, so that for those environments the Deploy an App Service step gets an empty string for Deployment Slot and proceeds with deploying to the main/default slot in the App Service.

As per our Scoping Variables documentation, scoping works from most-to-least tightly scoped. So you may be able to simply leave this empty value of AzureDeploymentSlot unscoped, which will then make it the fallback for any environment which doesn’t have/need a Deployment Slot.
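That fallback rule can be sketched roughly like this (illustrative only, not the real scoping algorithm; the environment names are hypothetical, loosely based on the container names in your logs):

```python
def resolve(values, environment):
    """Pick the most tightly scoped value: a value scoped to the current
    environment wins; an unscoped value acts as the fallback."""
    scoped = [v for v in values if v.get("scope") == environment]
    if scoped:
        return scoped[0]["value"]
    unscoped = [v for v in values if v.get("scope") is None]
    return unscoped[0]["value"] if unscoped else None

slot_values = [
    {"value": "staging", "scope": "PPE-Nightly"},  # environments that use a slot
    {"value": "", "scope": None},                  # empty, unscoped fallback
]
print(repr(resolve(slot_values, "PPE-Nightly")))      # -> 'staging'
print(repr(resolve(slot_values, "PPE-Development")))  # -> ''
```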

So the process would be:

1. Find the AzureDeploymentSlot variable.
2. Add a value using the contextual menu.
3. Leave that value empty, and do not scope it.

I hope this helps you get your deployments to a nice bright green! Please let us know how you go, and do reach out if you have any questions or need clarification on the above.


Hi Matt

Thanks for your help. Interestingly, on another project I had just created a step that used an unpopulated slot variable for use in the new environments, and I was about to test it.

But I have just tested the blank, unscoped slot variable approach, and it works fine.

I will try that on my other projects now and make sure they are working.

More feedback in the logs regarding failures would be greatly appreciated.

Many thanks for sorting this.


Hey @JamesH,

Brilliant news that your testing has been successful so far. I agree with Matt: having more information in the logs would have made this a lot easier to troubleshoot. We will look at improving in that area.

Let me know if you run into trouble with the other projects.

Kind Regards,