When using a worker that sits in an Azure Virtual Network with an Azure Monitor Private Link Scope, the Azure Script step freezes when it attempts to log in to Azure via the Service Principal.
Looking at the Azure CLI telemetry logs in the work folder, I can see failed private link messages. I assume this is because the CLI is trying to log to an endpoint under monitor.azure.com, which is now resolved by my Private DNS Zone in Azure. I disabled Azure CLI logging via an environment variable, which appears to have stopped the logging, but the hang persists.
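For reference, this is roughly what I set on the worker to silence it (assuming the standard Azure CLI telemetry switches; adjust if your environment differs):

```bash
# Disable Azure CLI telemetry on the worker - these are the standard switches
# as far as I know, set before the step runs
export AZURE_CORE_COLLECT_TELEMETRY=0

# or, equivalently, via the CLI's own config:
az config set core.collect_telemetry=false
```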
If I create a plain Script step and log in using the same command that the Azure Script step runs, it works perfectly and does not hang.
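For completeness, this is essentially what I ran in the plain Script step (the variable names are placeholders for our Service Principal details):

```bash
# Service Principal login run manually from a plain Script step -
# the same kind of login the Azure Script step performs.
# APP_ID, APP_SECRET, TENANT_ID and SUBSCRIPTION_ID are placeholders.
az login --service-principal \
  --username "$APP_ID" \
  --password "$APP_SECRET" \
  --tenant "$TENANT_ID"
az account set --subscription "$SUBSCRIPTION_ID"
```

Run that way, the login completes normally and nothing hangs.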
We’ve disabled the Azure CLI with a variable for now, but it would be good to understand the cause and get the Azure CLI back to a usable state.
Thank you for contacting Octopus Support and welcome to the forums!
I am sorry you are hitting the Azure CLI issue. Fortunately, we are aware of an issue with the Azure CLI, and our engineers are currently working through it to pinpoint where the problem lies.
We have had a few customers contact us today regarding this, so getting a fix out is high on the priority list.
I will let you know once we have a workaround, if there is one, or when a fix is out. It looks like you have managed to disable Azure CLI auth for now, but we really want a better workaround for our customers, so I will keep you informed.
Sorry for the double post, but we have just gotten some direction on possible workarounds you could implement:
The current workaround for everyone affected is to use Azure CLI 2.39 on your workers. Our current Octopus Worker Tools image, if you are using it, still ships a working version of the Azure CLI, so you should not need to change any workers based on our image.
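If you manage your own worker images, something along these lines can pin the CLI back to 2.39 on an Ubuntu-based worker (a sketch only; the exact package version suffix depends on your distro release, so check what apt reports first):

```bash
# Sketch for an Ubuntu-based worker: check which azure-cli package versions
# are available, then pin/downgrade to a 2.39.x build.
apt-cache policy azure-cli

# The "-1~jammy" suffix is an example - use the matching suffix for your release.
sudo apt-get install -y --allow-downgrades azure-cli=2.39.0-1~jammy

az version   # confirm the worker now reports 2.39.x
```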
An even quicker option is to select “Use Azure Tools bundled with Octopus” in the Octopus UI for now (that option is visually discouraged in the UI, but it is a quick workaround to get you deploying if you need it).
One more update on this: we have created a public GitHub issue you can subscribe to for updates, which also includes notes from our engineers on the best course of action to take for a workaround at the moment.