If only one server node is active behind the F5 load balancer, the health check succeeds on all polling Tentacles.
If two server nodes are active behind the F5, the health check succeeds on some polling Tentacles (roughly half) and fails on the others.
It appears that when the health check job is initiated on one server node but a Tentacle's poll lands on the other node (because the load balancer distributes connections), the first node concludes that the polling Tentacle never responded and marks it as unhealthy.
Can you confirm this could be the behavior and offer a solution?
Hi Michael - I didn't know about this configuration on the polling Tentacle side. What I have is a single entry for the load balancer itself, instead of one entry for each server node. I'll modify the configuration as documented, then report back with the result. I think it should resolve the issue.
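For anyone hitting the same issue: a rough sketch of the reconfiguration, assuming the Octopus `Tentacle.exe poll-server` command documented for High Availability setups. The node URLs, ports, and API key below are placeholders, not values from this thread:

```
# Register the polling Tentacle against each server node directly,
# bypassing the load balancer VIP, so every node can reach the Tentacle
# during its own health checks. (Placeholder hostnames and API key.)
Tentacle.exe poll-server --server=https://octopus-node1.example.com --apikey=API-XXXXXXXX
Tentacle.exe poll-server --server=https://octopus-node2.example.com --apikey=API-XXXXXXXX

# Then restart the Tentacle service so the new subscriptions take effect.
Tentacle.exe service --stop --start
```

The key point is that each polling Tentacle needs a polling subscription per server node; a single entry pointing at the load balancer means only whichever node the balancer happens to route to will see the Tentacle's responses.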