We manage our own Octopus servers.
We have hundreds of tenants, each representing one customer. Each customer provides a connection based on what their ISPs can provide and what is appropriate for their business. With locations scattered around the world, we deal with varying degrees of bandwidth and connection quality. Fundamentally we have little control over those connections, other than expecting them to be always-on broadband.
There is an edge-caching concept that WSUS can use, called BranchCache, that we investigated. There are no compelling reasons for us to implement it, since the delivery model for WSUS is lazy and doesn't have this type of failure mode. For Octopus, it's an interesting idea, but I'm not sure it addresses the underlying issue.
More efficient use of available bandwidth. Delivering large updates to our fleet is a chokepoint we manage on other systems.
Having Tentacles source packages from a system on-site should improve update performance at bandwidth-limited sites.
Since the timeouts still exist, the package size and bandwidth restrictions may cause delivery to the edge itself to fail.
Would the edge server be a true proxy for all Tentacle communication between the site and the server, or only for package delivery?
How would it recover when the edge server is unavailable, or becomes unavailable mid-deployment?
Scale issues may exist as packages accumulate on a single remote system. In our case the Tentacles are mostly inexpensive, unique physical systems with limited storage.
Perhaps more interesting would be an edge server that acts as a… deployment agent… where deployments to related Tentacles are offloaded from the primary Octopus server to the edge.
What we expected is that release steps (especially the package deployment step) would tolerate slow transfers and recover from short connection disruptions. This package is only 200 MB, but we have update packages in the works that will be closer to 500 MB.
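To make the expectation concrete: the recovery behavior we had in mind is an ordinary resumable transfer, where a dropped connection costs only the unfinished chunk rather than the whole package. This is a minimal sketch of that idea, not how Octopus actually moves packages; `read_range` and `resumable_fetch` are hypothetical names, and the source is abstracted as a callable so the retry logic stands alone.

```python
import time

def resumable_fetch(read_range, total_size, chunk_size=64 * 1024,
                    max_retries=5, backoff=0.01):
    """Fetch total_size bytes via read_range(offset, length),
    resuming from the last good offset after a transient failure."""
    data = bytearray()
    retries = 0
    while len(data) < total_size:
        length = min(chunk_size, total_size - len(data))
        try:
            data.extend(read_range(len(data), length))
            retries = 0  # progress made, so reset the retry budget
        except ConnectionError:
            retries += 1
            if retries > max_retries:
                raise  # give up only after repeated failures with no progress
            time.sleep(backoff * retries)  # back off, then resume at the same offset
    return bytes(data)
```

With something like this, a 500 MB transfer over a slow link only ever re-requests the chunk in flight when the connection hiccups, instead of timing out the whole step.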
One point I accidentally left out of my original post: we also have trouble with Tentacles after this timeout error. At that point they show as offline, and even manual health checks failed until we restarted their Octopus service. I presume that's related, but I don't have more information other than that it happened regularly.