Thank you for getting back to me with those answers,
You are correct: if you cancel a task on the Octopus Server and it hangs, the cancel request is never sent to the Tentacle. Any further tasks you send to that Tentacle will then queue behind the stuck one, because a Tentacle only runs one task at a time and queues the rest.
We do have a way to mitigate this, though: the OctopusBypassDeploymentMutex feature, which you can read about here. If you set that variable on the projects you want to deploy in parallel on a target, those other tasks won't queue while you are struggling to cancel one.
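As a rough sketch, the project variable would look something like this (the variable name is the real one; the value and scoping shown are just an illustration of a typical setup):

```
Name:  OctopusBypassDeploymentMutex
Value: True
Scope: (optionally scope it to the specific environments or targets you need)
```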
As your Tentacles are on 6.2.277, you should be able to clear those stuck tasks by restarting the Tentacle. It's not ideal, but it's the only way to get them unstuck in Octopus itself.
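If it helps, restarting the Tentacle from the command line looks roughly like this (the service and instance names below are the installation defaults and may differ on your machines, so treat this as a sketch):

```shell
# Windows (PowerShell, default service name):
#   Restart-Service "OctopusDeploy Tentacle"

# Linux (default install path and instance name assumed):
sudo /opt/octopus/tentacle/Tentacle service --instance Tentacle --restart
```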
The other thing to mention is that this might be one of a few old bugs we had in the 2022.3.x versions where cancelling tasks was an issue. I imagine you might be running into this one:
Cancelled Tasks (e.g. a deployment) completes but the Task is never marked as completed and stays in the cancelling state with SQL Error 1222.
You should be able to check your Octopus Server logs for something like "SQL Error 1222 - Lock request time out period exceeded" to see whether you are bumping into that old GitHub issue. It was fixed in 2022.3.10827; you mentioned you are on 2022.3 but not which hotfix, so I am not sure whether you are above that version.
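A quick way to check is to grep the server logs for that error (the log path below is an assumption; point it at wherever your server logs actually live, e.g. C:\Octopus\Logs on a default Windows install):

```shell
# Search the Octopus Server logs for the lock timeout error.
# LOG_DIR is an assumed default; override it for your install.
LOG_DIR="${LOG_DIR:-/Octopus/Logs}"
grep -rn "Error 1222" "$LOG_DIR" 2>/dev/null
```

Any hits would suggest you are on the affected code path and the 2022.3.10827 fix applies.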
It looks like you are forcing a hang with a test script, so am I right in saying this issue only occurs when you deliberately force the deployment to hang? If so, that is good news: it means there is no underlying issue with your Octopus instance, and you are essentially breaking things on purpose to test whether you can cancel hanging deployments.
If so, it looks like you may be affected by one of the bugs in the 2022.3 versions, and it might be worth upgrading to at least 2022.4 to see if that resolves the cancellation issue. Otherwise you will have to keep restarting Tentacles to force-cancel those hanging tasks (and note that once you upgrade past Tentacle 6.2.277, a Tentacle restart will no longer work and you will have to restart the Octopus Server instead).
I hope that makes sense and answers all of your questions. Let me know what you think of the GitHub issue and whether you can see any SQL 1222 errors in your Octopus Server logs; if so, upgrading your instance will be the fix going forward.