What healthcheck can I use from ECS running octopusdeploy/tentacle?

I’m going to run the octopusdeploy/tentacle Docker image as a task inside AWS Elastic Container Service (ECS). The tentacle will run in “polling” mode because we also use Octopus Cloud and I don’t want to grant that system permissions to reach into our AWS accounts. That means the tentacle should not start in listening mode and should not open the tentacle port.

Is there a container health check (CMD-SHELL) that I can add to my task definition to validate that the process inside the container is running correctly, and thus force a restart if it isn’t?

Sincerely
Pete

Hey @peter_m_mcevoy,

Thanks for reaching out!

If you set the ServerPort environment variable to the polling port (default 10943) and leave the listening port blank, the tentacle will run in polling mode: it registers with Octopus Cloud over HTTPS and then polls the server on 10943. Or have you already tried that?
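
For example, a polling tentacle container can be started with something like the sketch below. This is untested and the variable names are from my memory of the image docs, so please double-check them against the current image; in ECS the same variables would go in the container definition’s environment block rather than on the command line.

```bash
# Untested sketch: variable names/values are assumptions based on the
# octopusdeploy/tentacle image docs -- verify against the current image.
# No ListeningPort is set, so the tentacle should register in polling mode.
docker run -d \
  -e ACCEPT_EULA="Y" \
  -e ServerUrl="https://yourinstance.octopus.app" \
  -e ServerApiKey="API-XXXXXXXXXXXXXXXX" \
  -e ServerPort="10943" \
  -e Space="Default" \
  -e TargetEnvironment="Development" \
  -e TargetRole="aws-scripts" \
  octopusdeploy/tentacle
```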

I checked the docs, and they could do with an update to explain this a little better.

Thanks,
Adam

Hi Adam,
Yep, I’m aware of the config needed to get a polling tentacle; the docs were OK on that point. They also say that the tentacle will not open the listening port, which is correct.

However, that means my container orchestrator has no mechanism to monitor the tentacle container and restart it if the container process fails.

Note that this is completely different from the Octopus Server health check.

Cheers
Pete

Hey @peter_m_mcevoy,

Thanks for the additional context.

I don’t know of a health check that you can run from inside the container to check the status of the tentacle install. I was thinking that you shouldn’t need one because the container should restart if the tentacle registration fails, but it does that over HTTPS, and I’m assuming you want a health check that communicates with the Octopus server over 10943. Is that the case?

Are you planning to use this as a worker or deployment target?

Thanks,
Adam

Hi Adam,
Thanks for your assistance so far. Regarding whether this is a worker or deployment target: at the moment we are using these tentacles as targets to run deployment scripts in the various AWS environments that we have. No actual binary or asset is required on the target, as the scripts interact with the AWS API in that environment, so in a way it is a worker too.

> the container should restart if the tentacle registration fails

Well, there are many reasons a process can fail at runtime well after registration, and it’s the job of the container orchestrator to kill the existing container and start a new one. If the process were a web application, the health check would normally be a curl command run inside the container against the application port to confirm the process is still healthy; if that did not return 200 OK, ECS would start a new container and kill the old one. But in this tentacle case the process is not listening, so I can’t probe the tentacle port.
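
To make that concrete, for a hypothetical web app listening on port 8080 I would normally use something like the line below as the CMD-SHELL command in the task definition’s healthCheck (the endpoint and port here are just placeholders):

```bash
# Hypothetical web-app example: probe a local endpoint and exit non-zero
# (failing the health check) if it doesn't return a 2xx response.
curl -fsS http://localhost:8080/health || exit 1
```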

> I’m assuming you want a health check that communicates with the Octopus server over 10943

Not entirely, but an inability to “see” the server could be one health problem that a restart might fix. Since this is a polling tentacle, if the container can’t talk to the server, Octopus will mark the tentacle as “unavailable”.

I’ve also set up a Machine Policy to remove and deregister the tentacle if the server does not hear from it within 5 minutes, so the constant kill/start cycle will clean itself up automatically.

Does tentacle.exe have a “status” command that could output a message I could grep for success?

Pete

Hey @peter_m_mcevoy,

Again, thanks for more context!

I’ve looked, and we don’t have anything you could reference as a tentacle status unless you grepped the log files from the tentacle. I understand your use case now, and we need to look at a good way of addressing this.
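
If you do go down the log-grepping route, a health check along the lines of the sketch below might work. I haven’t tested it, and the process name, the availability of pgrep/tail/grep in the image, and the log file path are all assumptions you’d need to verify inside the container:

```bash
# Untested sketch of a CMD-SHELL health check for a polling tentacle.
# Assumptions to verify in the actual image: the process name matched by
# pgrep, that pgrep/tail/grep are installed, and the log file location.

# Fail the check if the Tentacle process has died.
pgrep -f Tentacle > /dev/null || exit 1

# Optionally fail if recent log lines contain FATAL entries
# (the log path below is a placeholder -- check where your container logs).
if tail -n 50 /path/to/TentacleHome/Logs/OctopusTentacle.txt 2>/dev/null | grep -q "FATAL"; then
  exit 1
fi

exit 0
```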

How about calling the AWS API to restart the container as part of the machine policy in Octopus? You could edit the policy to call the AWS ECS API and remove or restart the container using its hostname.
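
As a very rough, untested sketch, assuming the policy script has the AWS CLI and credentials available (the cluster and service names are placeholders, and with multiple tentacle tasks you would want to match the machine name to a specific task rather than cycling them all):

```bash
# Untested sketch: stop the ECS task(s) behind an unhealthy tentacle so the
# service scheduler starts replacements. Cluster/service names are
# placeholders; a real script would map the Octopus machine name to one task.
CLUSTER="my-cluster"
SERVICE="octopus-tentacle"

# List the tasks for the service and stop them; ECS launches new tasks
# to satisfy the service's desired count.
for TASK in $(aws ecs list-tasks --cluster "$CLUSTER" --service-name "$SERVICE" \
                --query 'taskArns[]' --output text); do
  aws ecs stop-task --cluster "$CLUSTER" --task "$TASK" \
    --reason "Tentacle reported unhealthy by Octopus machine policy"
done
```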

Thanks