get_SerializationSettings error on the default Windows worker

We have a space where projects use the default Windows worker to run kubectl commands.
This stopped working sometime last night NZT.

If I try to run a health check on one of our AKS cluster deployment targets, it fails with the error below.

I can work around this by pointing the clusters at our own worker that has kubectl installed, but would ideally prefer not to.

Leasing WindowsDefault dynamic worker…
Obtained WindowsDefault worker lease successfully.
Could not find kubelogin. Make sure kubelogin is on the PATH.
Successfully authenticated with the Azure CLI
Creating kubectl context to AKS Cluster in resource group myresourcegroup called mycluster (namespace mynamespace) using a AzureServicePrincipal
NotSpecified: Method 'get_SerializationSettings' in type 'Microsoft.Azure.Management.Internal.Resources.ResourceManagementClient' from assembly 'Microsoft.Azure.PowerShell.Clients.ResourceManager, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' does not have an implementation.
At C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1:84 char:5
+     Disable-AzContextAutosave -Scope Process
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
at Initialize-AzContext, C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1: line 84
at ConnectAzAccount, C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1: line 129
at <ScriptBlock>, C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Octopus.AzurePowershellContext.ps1: line 139
at <ScriptBlock>, C:\Octopus\Tentacle\Work\20230823023837-1981305-3130\Bootstrap.Octopus.AzurePowershellContext.ps1: line 1136

Hi @Phil.Evans,

Thanks for reaching out, great to hear from you again!

I recall seeing this error recently and it’s caused by having both Az and AzureRM modules installed on the worker. See this GH issue for more details: Not able to connect to Azure with Az.Accounts version 2.12.3 (get_SerializationSettings) from within Visual studio code / Azure Automation runbook on Hybrid Worker · Issue #21960 · Azure/azure-powershell · GitHub
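
If you'd like to confirm what's present on the worker, a quick script step along these lines should list any conflicting modules (a minimal sketch using the standard Get-Module cmdlet; the exact output will depend on what's been installed):

# List the Az.Accounts and AzureRM modules visible to this session;
# having both families installed side by side is what triggers the
# get_SerializationSettings error described in the GitHub issue above.
Get-Module -ListAvailable -Name Az.Accounts, AzureRM* |
    Select-Object Name, Version, ModuleBase |
    Sort-Object Name, Version |
    Format-Table -AutoSize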

The Az modules aren't installed on Dynamic Workers by default, so they must have been installed by another project or process targeting that worker.

To resolve this and prevent it in future, I'd recommend switching to our Execution Containers so that the correct tooling is always available and the execution environment can't be impacted by other projects/processes targeting the same worker: Dynamic Worker pools | Documentation and Support
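
As an illustration, a script step running inside an execution container image such as octopusdeploy/worker-tools could guard against missing tooling up front (a hedged sketch; the tool list here is an assumption based on your deployment, not something the container requires):

# Fail fast if the CLIs this step relies on aren't on the PATH
# inside the execution container.
foreach ($tool in 'kubectl', 'kubelogin', 'az') {
    if (-not (Get-Command $tool -ErrorAction SilentlyContinue)) {
        throw "Required tool '$tool' was not found on the PATH."
    }
}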

Alternatively, leasing a new worker should resolve the issue, although it will likely recur if the process that installed the Az modules is still active. Let me know if you'd like a new worker leased to get you unblocked in the meantime!
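
If you'd rather try cleaning the current worker yourself first, removing the AzureRM modules is the usual fix for this conflict. A rough sketch, assuming the modules were installed via Install-Module (modules copied in by other means would need their folders removed manually):

# Uninstall every AzureRM module so only the Az modules remain.
Get-Module -ListAvailable -Name AzureRM* |
    Select-Object -ExpandProperty Name -Unique |
    ForEach-Object { Uninstall-Module -Name $_ -AllVersions -Force }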

Hope that helps but feel free to reach out with any questions at all!

Best Regards,

Hi there @finnian.dempsey,

Thanks as always for the info.

We’ve got different spaces and different projects all doing different things right now.
Time for a cleanup and a move to Execution Containers, as you suggest.
This is now a high priority for us to sort out.

Until that time, can you please unblock us by getting a new worker leased for us?
We should not have anything or anybody trying to install the Az modules on a default worker.
I’ll try to ensure we don’t see it happening again.

Cheers,
Phil

Hi Phil,

If you can confirm the URL of the instance, I can remove the current worker.

Regards,
Paul

Hi Paul,

URL: https://vista.octopus.app

Kind regards,

Phil

Hey @Phil.Evans,

Cheers for confirming that!

I’ve flagged the worker so a new one should be leased for future deployments, but please let us know if there are any issues with it!

Best Regards,

Yep. That’s working fine, now.

Thanks as always, Finnian.

We’ll do our best not to break this one.
