Deployment never finishes


Occasionally our deployments get stuck and never time out, which then causes every subsequent deployment to that environment to time out.

I’ve looked at the Octopus log on the target machine and the following line is logged immediately after our deploy step has finished:

2014-11-07 11:46:19.6654 ERROR Undeliverable message detected: Pipefish.Messages.Delivery.DeliveryFailureEvent from: ScriptActionOrchestrator-D64-AXJ_WfayWw@SQ-PSCAPPD00176-6AD625F6 to: TentacleScriptRun-AXA-AXJ_XbHGcg@SQ-PSCAPPT00131-14C54AC8

  • Body is: {"FailedMessage":{"Id":"e99bfe98-4858-429d-9858-1e5d0bb34574","From":{"SerializedValue":"TentacleScriptRun-AXA-AXJ_XbHGcg@SQ-PSCAPPT00131-14C54AC8"},"Headers":{"Expires-At":"635535675647201316","In-Reply-To":"5676698b-7fd3-420a-ab0e-dd02ab7072f3","MessageStore-Envelope-NoFwd":"08D1C8C17C2AC2100005DB34"},"To":{"SerializedValue":"ScriptActionOrchestrator-D64-AXJ_WfayWw@SQ-PSCAPPD00176-6AD625F6"},"MessageType":"Octopus.Platform.Deployment.Messages.Deploy.Script.TentacleScriptRunEvent","Body":{"$type":"Octopus.Platform.Deployment.Messages.Deploy.Script.TentacleScriptRunEvent, Octopus.Platform.Deployment","CreatedArtifacts":[],"OutputVariables":[],"RetentionTokens":[]}},"Error":{"Message":"Actor does not exist","Detail":null}}

After I manually cancel the deployment the following message is logged:

03:32:40 Verbose | Delivery of a Pipefish.Messages.Supervision.CancelCommand failed
| Actor does not exist
| Octopus.Server version

Is this a potential Octopus bug or does it indicate problems with our own setup?



Hi Kieran,

Thanks for getting in touch! To figure this out, we need the full deployment log of the timed-out process.

This should help us pinpoint the error.



I’m having this same problem: something is failing or hanging up Tentacles and causing a logjam.


I’ve attached the raw log and also the log from the Tentacle. There seems to be a difference between these files: the Tentacle log correctly records all of the messages from our second step, whereas the raw log appears to truncate a large number of them.

In the meantime I’ve changed the tentacle to polling and the issue appears to have disappeared.



OctopusTentacleLog.txt (41 KB)

ServerTasks-17615.log.txt (24 KB)

We are having the same issue on version . The problem is happening approximately once a week.

We are having the same issue on our side, and we are also using version 2.5. This is what we are seeing:

A request for the next file chunk could not be delivered.
Actor does not exist
Octopus.Server version

Hi Mike,

In 2.6 we changed how Octopus Server sends files to the Tentacle, from the file-chunk method to a streaming method.
We expect this to resolve the issue, and it has also made deployments 4x faster.


Hi Vanessa,
it still happens for us with 2.6 too.

Hi Janos,

Are you able to provide us with full deployment logs?


Hi Vanessa,

I’ve attached the raw log. Thanks for your help!


ServerTasks-8925.log.txt (1 MB)

Hi Janos,

I can see that you are on version 2.6.3. We are aware of a race condition that was fixed in 2.6.5, so I would ask that you upgrade.

We also had some customers who experienced this hanging, and they found that doing the following on the Tentacles in question resolved their issues:

  1. Uninstalled the Tentacle (the uninstallation properly removed the Tentacle installation folder in C:\Program Files).
  2. Deleted the service using "sc.exe".
  3. Deleted the Octopus directory on C:\.
  4. Reinstalled version
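
For reference, a rough sketch of those steps as commands run from an elevated command prompt. The service name, MSI filename, and C:\Octopus path below are assumptions, not something confirmed in this thread; check your own install (e.g. with `sc.exe query`) before deleting anything:

```bat
:: 1. Uninstall the Tentacle (assumes the original MSI is at hand)
msiexec /x "Octopus.Tentacle.msi" /qn

:: 2. Delete any leftover Windows service ("OctopusDeploy Tentacle" is a
::    guessed name; confirm yours with: sc.exe query state= all)
sc.exe stop "OctopusDeploy Tentacle"
sc.exe delete "OctopusDeploy Tentacle"

:: 3. Delete the Octopus directory on C:\ (path is an assumption)
rmdir /s /q "C:\Octopus"

:: 4. Reinstall the Tentacle from the downloaded MSI
msiexec /i "Octopus.Tentacle.msi" /qn
```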

So if a straight upgrade to 2.6.5 does not help, then the Tentacle reinstall is the next step.

Hopefully one or both of these steps will resolve this issue for you.