We have a space where projects use the default Windows worker to run kubectl commands.
This stopped working sometime last night (NZT).
If I try to run a health check on one of our AKS cluster deployment targets, it fails.
The error is below.
I can work around this by pointing the clusters at our own worker that has kubectl installed, but ideally I'd prefer not to.
Leasing WindowsDefault dynamic worker…
Obtained WindowsDefault worker lease successfully.
Could not find kubelogin. Make sure kubelogin is on the PATH.
Successfully authenticated with the Azure CLI
Creating kubectl context to AKS Cluster in resource group myresourcegroup called mycluster (namespace mynamespace) using a AzureServicePrincipal
NotSpecified: Method 'get_SerializationSettings' in type 'Microsoft.Azure.Management.Internal.Resources.ResourceManagementClient' from assembly 'Microsoft.Azure.PowerShell.Clients.ResourceManager, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' does not have an implementation.
At C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1:84 char:5
Disable-AzContextAutosave -Scope Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
at Initialize-AzContext, C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1: line 84
at ConnectAzAccount, C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1: line 129
at , C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1: line 139
at , C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Bootstrap.Octopus.AzurePowershellContext.ps1: line 1136
The Az modules aren't installed on Dynamic Workers by default, so they must have been installed by another project or process targeting that worker.
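If you'd like to confirm what's on a leased worker, a quick script step with standard PowerShell (nothing Octopus-specific here) will list any Az modules that are present; on a clean Dynamic Worker this should return nothing:

```powershell
# List any Az module versions already present on this worker.
Get-Module -ListAvailable -Name Az, Az.* |
    Select-Object Name, Version, Path |
    Sort-Object Name
```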
To resolve this and prevent it in the future, I'd recommend switching to our Execution Containers so that the correct tooling is always available and the execution environment can't be impacted by other projects or processes targeting the same worker: Dynamic Worker pools | Documentation and Support
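If you're managing the project with config-as-code, the change looks roughly like the sketch below (OCL). The step name, pool name, and feed name are placeholders you'd swap for your own, and you'd pick a worker-tools image tag that bundles the tooling you need; in the UI it's the same idea via the step's container image setting.

```
step "aks-health-check" {
    name = "AKS health check"

    action {
        action_type = "Octopus.Script"
        worker_pool = "hosted-windows"   # placeholder: your Dynamic Worker pool

        # Running inside a container pins the tooling (kubectl, Az, etc.)
        # so other projects sharing the worker can't affect it.
        container {
            feed = "docker-hub"          # placeholder: your container registry feed
            image = "octopusdeploy/worker-tools:ubuntu.22.04"  # pick a tag that suits you
        }

        properties = {
            Octopus.Action.Script.ScriptSource = "Inline"
            Octopus.Action.Script.Syntax = "PowerShell"
            Octopus.Action.Script.ScriptBody = "kubectl get nodes"
        }
    }
}
```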
Alternatively, leasing a new worker should resolve the issue, though it will likely recur if the process that installed the Az modules is still active. Let me know if you'd like a new worker leased to get you unblocked in the meantime!
Hope that helps, but feel free to reach out with any questions at all!
We’ve got different spaces and different projects all doing different things right now.
Time for a clean-up and a move to execution containers, as you suggest.
This is now a high priority for us to sort out.
Until then, can you please unblock us by leasing a new worker for us?
Nothing and nobody on our side should be installing the Az modules on a default worker.
I'll try to ensure we don't see it happening again.
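In the meantime I'm thinking of adding a guard like this sketch to our shared script steps, so nothing quietly installs Az onto a shared worker (the error message wording is just ours):

```powershell
# Fail fast instead of installing Az onto a shared dynamic worker.
# Steps that genuinely need Az should run in an execution container.
if (-not (Get-Module -ListAvailable -Name Az.Accounts)) {
    throw "Az modules are not available on this worker. Run this step in an execution container instead of calling Install-Module here."
}
Import-Module Az.Accounts
```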