Linux tentacle stuck performing another task

When I try to run a health check on one specific Linux tentacle (the others seem to be fine), even though there are no running tasks in the Tasks pane, I always get:

Task ID: ServerTasks-12509
Task status: Executing
Task queued: Tuesday, March 1, 2016 10:21 AM
Task started: Tuesday, March 1, 2016 10:21 AM
Task duration: 5 minutes
Server version: 3.3.0+Branch.master.Sha.1dfeb7d3a9d11a9b519f46d42f6fcddba9539237

                | == Running: Check <servername redacted> health ==

10:21:32 Info | Starting health check for a limited set of deployment targets
10:21:32 Info | 1 machines will have their health checks taken.
| == Running: Check deployment target: ==
10:21:32 Verbose | This Tentacle is currently busy performing a task that cannot be run in conjunction with any other task. Please wait…

Even restarting the machine didn’t help.

Hi Ryan,

Thanks for getting in touch! Are you able to perform an Octopus Server service restart?
We haven't seen this before, so it may take further investigation — especially since SSH targets don't actually have Tentacle installed on them.

A restart of the service should ensure all running tasks are cancelled.
Let me know if that resolves the issue; otherwise we may have to check the tasks' statuses in the database.
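In case it helps, this is the kind of restart I had in mind — a sketch assuming a default Windows install where the service is named "OctopusDeploy" (your service name may differ):

```shell
# Restart the Octopus Server service so any stuck server tasks are
# cancelled. Run from an elevated command prompt on the Octopus Server
# machine. "OctopusDeploy" is the default service name; adjust it if
# your instance was installed under a different name.
net stop "OctopusDeploy"
net start "OctopusDeploy"
```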


Indeed, when I installed the 3.3.1 update, it seemed to clear up the issue. I will keep an eye out to see if it comes back again.

This issue seems to still exist, and I think we can reproduce it. If you are deploying a somewhat large package to a Linux target, and during the Acquire Package step Octopus decides to run a health check on that node, it gets into a deadlocked state where both the deployment and the health check tasks say:
Cannot start this task yet. There is already another task running that cannot be run in conjunction with any other task. Please wait…

The only fix is to cancel the tasks and restart the Octopus Server service.

Hi Ryan,

Could you confirm what version you are on now?



Hi Ryan,

Sorry you haven’t heard from us on this one. I’ve spent quite a bit of time trying to reproduce this in 3.4.10, unfortunately without any luck. However, we have made several changes around process locking on Linux targets in our 3.5 release. To be clear, none of them are intended to specifically fix the issue you are seeing, but you may wish to upgrade.

Another option, as a workaround if you are still seeing this linked to health checks, is to assign a custom Machine Policy to the affected machine to reduce the frequency of health checks, or even disable them. The docs on Machine Policies can be found here:


Haven’t seen it since 3.5. I will reopen if it reappears.