Idea: Package Caching or Pre-Staging

One of my deployment projects has 21 steps / packages. Most of these are sent to as many as 5 machines in my production environment. I have a crappy connection to my production environment with very limited upstream bandwidth. As a result it takes over an hour to push all of the packages out for a production deployment.

## Solution 1
Pre-stage packages. In the Octopus Portal there could be a button next to the Deploy button, or a checkbox similar to the Force Redeployment option, indicating that the packages for a given release should be sent to the Tentacles but that deployment should not begin. Depending on the implementation, this may already be possible with the Manual deployment step that is currently in development.

## Solution 2
Add support for a local package cache. This would be a server placed in the remote environment* that mirrors copies of all Octopus packages on the main server. Packages would be synced to this cache server, and the machines in that environment* would be configured to look to the cache server for packages first.

*A physical environment such as a remote data center, not necessarily an Octopus environment.
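The sync step in Solution 2 could be sketched as a simple hash-based mirror: copy a package to the cache only when the cache doesn't already hold an identical copy, so a flaky link only ever transfers what's missing. This is a minimal illustration, not anything Octopus provides; the `.nupkg` directory layout and the `sync_packages` helper are assumptions for the sketch.

```python
import hashlib
import shutil
from pathlib import Path


def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def sync_packages(source: Path, cache: Path) -> list[str]:
    """Mirror *.nupkg files from source into cache.

    Files already present in the cache with the same size and hash are
    skipped, so re-running after a dropped connection only sends the
    remainder. Returns the names of the files actually copied.
    """
    cache.mkdir(parents=True, exist_ok=True)
    copied = []
    for pkg in sorted(source.glob("*.nupkg")):
        target = cache / pkg.name
        if (target.exists()
                and target.stat().st_size == pkg.stat().st_size
                and file_digest(target) == file_digest(pkg)):
            continue  # identical copy already cached; skip the transfer
        shutil.copy2(pkg, target)
        copied.append(pkg.name)
    return copied
```

Because the check is idempotent, the sync can be re-run as often as needed over a poor connection, and the second run over an up-to-date cache copies nothing.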


I love this suggestion. I think solution 1 would be the better approach (solution 2 requires you to find an out-of-band way to upload to the cache, and copying from the cache to the local machine could still sometimes be slow).

I can definitely imagine Octopus having a new kind of task that pre-stages packages. What's nice is that you could also re-run that task multiple times until it works if the connection is flaky. Once the packages are uploaded, you can tell your users the application is going down, do the deployment quickly, and bring it up again. I love it.

Thanks for the suggestion,