Tentacle upgrades are deadlocked and blocking other jobs

We have a problem with Tentacle automatic upgrades.
Each time an upgrade is triggered, it kicks off two jobs to upgrade the same Tentacle. The two jobs block one another, and they also block other jobs that are unrelated to Tentacle upgrades.

In the logs below, the two tasks refer to one another while trying to upgrade the same machine: DEV-SS201-0C5B8

Is there a setting we need to change, or is there a bug in this Octopus version (2023.2.12046)?

There are also scenarios where Octopus gives the following error, but the other job continues to block and never completes:

Warning: Another upgrader is already running. Exiting on the assumption that the other process will take care of the upgrade. 


No exit code log exists at 'D:\Octopus\Upgrade\20230718213627-NKP8V'
July 18th 2023 16:37:43 Verbose
Process C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe in D:\Octopus\Work\20230718213739-8755020-8 exited with code 0
July 18th 2023 16:37:43 Verbose
Exit code: 0
July 18th 2023 16:37:47 Verbose
Acquiring isolation mutex RunningScript with NoIsolation in ServerTasks-8755020
July 18th 2023 16:37:47 Wait
Waiting for the script in task ServerTasks-8755035 to finish as that script requires that no other Octopus scripts are executing on this target at the same time.
80 additional lines not shown
July 18th 2023 16:38:02 Verbose
WSManStackVersion              3.0

And from the second upgrade task:

No exit code log exists at 'D:\Octopus\Upgrade\20230718213631-DYM23'
July 18th 2023 16:37:26 Verbose
Process C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe in D:\Octopus\Work\20230718213720-8755035-2 exited with code 0
July 18th 2023 16:37:26 Verbose
Exit code: 0
July 18th 2023 16:37:31 Verbose
Acquiring isolation mutex RunningScript with NoIsolation in ServerTasks-8755035
July 18th 2023 16:37:31 Wait
Waiting for the script in task ServerTasks-8755020 to finish as that script requires that no other Octopus scripts are executing on this target at the same time.
July 18th 2023 16:37:32 Verbose
Executable directory is C:\Windows\system32\WindowsPowershell\v1.0
July 18th 2023 16:37:32 Verbose
Executable name or full path: C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe
July 18th 2023 16:37:32 Verbose
Starting C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe in working directory 'D:\Octopus\Work\20230718213731-8755035-5' using 'OEM United States' encoding running as 'NT AUTHORITY\SYSTEM'
July 18th 2023 16:37:33 Verbose
Process C:\Windows\system32\WindowsPowershell\v1.0\PowerShell.exe in D:\Octopus\Work\20230718213731-8755035-5 exited with code 0
July 18th 2023 16:37:33 Verbose
Using Calamari.win-x64 25.5.8

Hi @paul.benoit,

Thanks for getting in touch about this issue, great to hear from you again!

it kicks off two jobs to upgrade the same Tentacle

That sounds like there could be multiple deployment targets/workers registered with the same thumbprint/subscription ID. Are you aware of any instances that have been added twice (as a target, a worker, or both)?

You can use the /machines REST API endpoint to list information about your targets/workers and find any duplicates.
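Here's a minimal PowerShell sketch of that check. The server URL, API key, and the unscoped /api path are placeholders to adjust for your setup (e.g. /api/Spaces-1/... for a specific space):

```powershell
# Minimal sketch: pull every target and worker, then group by thumbprint
# to surface anything that has been registered more than once.
$OctopusUrl = "https://your-octopus-server"   # placeholder: your server URL
$ApiKey     = "API-XXXXXXXX"                  # placeholder: a valid API key
$headers    = @{ "X-Octopus-ApiKey" = $ApiKey }

$targets = Invoke-RestMethod "$OctopusUrl/api/machines/all" -Headers $headers
$workers = Invoke-RestMethod "$OctopusUrl/api/workers/all"  -Headers $headers

(@($targets) + @($workers)) |
    Group-Object -Property Thumbprint |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object {
        Write-Host "Duplicate thumbprint $($_.Name):"
        $_.Group | ForEach-Object { Write-Host "  $($_.Id)  $($_.Name)" }
    }
```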

Do the machines with this issue have a machine policy that sets Tentacle upgrades to occur automatically or manually? See Machine policies | Documentation and Support.
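If it helps, a quick sketch for listing each policy's Tentacle update behaviour is below. The MachineUpdatePolicy.TentacleUpdateBehavior field name is my reading of the policy JSON, so check the raw response from your server if it differs:

```powershell
# Sketch: list each machine policy and its Tentacle update behaviour.
# Reuses $OctopusUrl, $ApiKey and $headers from the previous snippet.
$policies = Invoke-RestMethod "$OctopusUrl/api/machinepolicies/all" -Headers $headers
$policies | ForEach-Object {
    # Field names are an assumption; inspect the raw JSON if they differ.
    Write-Host "$($_.Name): $($_.MachineUpdatePolicy.TentacleUpdateBehavior)"
}
```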

Looking forward to getting to the bottom of this, feel free to reach out with any questions at all!

Best Regards,

The target is only listed once in the Infrastructure tab.
Each of the jobs is running on a different node. The jobs are waiting for each other to complete on the same target, with different nodes trying to run the same upgrade job.

It does look like a certain set of dev instances has the policy set to upgrade automatically, and those are the ones I have noticed causing problems. For now, I will change that to manual. I think that should fix it, since we generally install the newest Tentacle on new machines anyway.
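For anyone wanting to make the same change via the API rather than the UI, a sketch is below. The policy ID is hypothetical, and "NeverUpdate" is my assumption for the value that maps to manual upgrades:

```powershell
# Sketch: switch a machine policy's Tentacle upgrades from automatic to manual.
# "MachinePolicies-42" is a hypothetical ID; "NeverUpdate" is assumed to mean manual upgrades.
$policy = Invoke-RestMethod "$OctopusUrl/api/machinepolicies/MachinePolicies-42" -Headers $headers
$policy.MachineUpdatePolicy.TentacleUpdateBehavior = "NeverUpdate"
Invoke-RestMethod "$OctopusUrl/api/machinepolicies/$($policy.Id)" `
    -Method Put -Headers $headers `
    -ContentType "application/json" `
    -Body ($policy | ConvertTo-Json -Depth 10)
```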


Hi @paul.benoit,

Cheers for confirming that!

I’ll work on a repro to check if there’s an issue with our auto tentacle upgrader for HA nodes and keep you posted with any findings.

Feel free to reach out with any questions or updates in the meantime!

Best Regards,
