We register new Tentacles automatically when they power on using "Tentacle.exe register-with". Lately the Tentacles have been registering fine and appearing in the GUI, but within seconds they display the following error:
Pipefish.PipefishException: The request failed: BadRequest
The incoming request was on a communication link (subscription) that is no longer valid. Reset connectivity to perform a new handshake and reestablish communication.
at Pipefish.Transport.SecureTcp.MessageExchange.Client.ClientWorker.PerformExchange() in y:\work\3cbe05672d69a231\source\Pipefish.Transport.SecureTcp\MessageExchange\Client\ClientWorker.cs:line 335
at Pipefish.Transport.SecureTcp.MessageExchange.Client.ClientWorker.Run() in y:\work\3cbe05672d69a231\source\Pipefish.Transport.SecureTcp\MessageExchange\Client\ClientWorker.cs:line 182
If I hit Reset in the GUI they reconnect with no problem, but this manual intervention breaks our deployment automation.
Any ideas what could cause the communication link to immediately become invalid after registering?
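For reference, the registration step at boot looks roughly like this (the server URL, API key, role, and environment are placeholders for our real values):

"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" register-with --server "https://our-octopus-server" --apiKey "API-XXXXXXXXXXXXXXXX" --role "web-server" --environment "Production" --name "%COMPUTERNAME%" --console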
Thanks for getting in touch! I have seen this happen and then fix itself within a minute. I don't know if that is too long a gap between registering and when you need to deploy.
Also what version of Octopus/Tentacle are you using?
It's caused us a few problems when people have tried to deploy shortly after a Tentacle has automatically registered itself, so if there's any way to identify what's causing it, that would be great.
We're using Octopus 2.5.11.614 with the latest Tentacle release (2.5.12.666).
I have created a GitHub issue to look into this and see if we can make the process behave better.
You can track it at the above link.
Could you also provide details of the servers, such as the OS of both the Server and the Tentacles, plus any network information you think might be relevant, so we can properly replicate the issue? You can reply here or comment directly in the issue.
The Tentacles are all Server 2012 R2 hosted in various Azure regions. The Octopus Deploy server is Server 2012 (non R2) hosted in AWS.
Multiple Tentacles are behind a static NAT in Azure, so when a new one registers itself, its IP address will already be in use in Octopus, although the hostname and port are different each time.
…and just to add to this, I’ve tested with registering a new Tentacle and even after 10 mins I’m still getting the same error. That’s with the server automatically retrying the connection periodically. The only way to get it to connect is to manually hit reset.
Our current workflow is that new cloud servers/Tentacles are spawned during scaling; they register themselves in Octopus and then automatically deploy the latest release to themselves. At the moment this is broken, because the new Tentacle connection fails and therefore so does the automatic deployment.
Just to let you know that I found a simple workaround for this. Stopping and then starting the Tentacle service immediately after it registers itself is enough to reset the connection, i.e. at the end of the 'register-with' script I added:
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" service --stop
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" service --start
That is great! We are looking at changing the process slightly in 3.0, which will include a similar fix, but I am happy to know you have a workaround until then.
In a nutshell, my script installs the Tentacle as the blog post does, and then I add additional Tentacle instances, which was failing: the first instance would always work and the subsequent instances would always fail. Resetting the connection fixed the issue, albeit inconsistently, but adding the stop/start to my script made it work like a charm.
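For anyone hitting the same thing, the per-instance portion of my script is roughly the following (the instance name, paths, port, and registration arguments are placeholders; the sequence just mirrors the standard Tentacle.exe setup commands):

"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" create-instance --instance "Tentacle2" --config "C:\Octopus\Tentacle2.config" --console
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" new-certificate --instance "Tentacle2" --console
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" configure --instance "Tentacle2" --home "C:\Octopus\Tentacle2" --app "C:\Octopus\Tentacle2\Applications" --port "10934" --console
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" service --instance "Tentacle2" --install --start --console
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" register-with --instance "Tentacle2" --server "https://our-octopus-server" --apiKey "API-XXXXXXXXXXXXXXXX" --role "web-server" --environment "Production" --console
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" service --instance "Tentacle2" --stop
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" service --instance "Tentacle2" --start

Note the port has to be unique per instance on the same machine (10933 is the default, so 10934 here for the second instance), which also lines up with why each registration shows a different port behind our NAT.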