We’ve been using Octopus to deploy to an on-premises Service Fabric cluster for a while, but have found it quite slow, possibly due to the distance (across the Pacific Ocean) and the network conditions between the Octopus Server and the SF cluster.
Could you consider adding an option to the standard SF deployment step that lets us switch on -CompressPackage, so the transfer is faster?
In the meantime, do you have any suggestions or workarounds for adding this option myself? I tried a customized deployment script, but it doesn’t seem straightforward to just copy the contents of DeployToServiceFabric.ps1 (it failed when reading variables such as $OctopusParameters["Octopus.Action.ServiceFabric.ApplicationPackagePath"]).
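For reference, here is a rough sketch of how a custom deployment script might read the package path from the Octopus variable and upload it with compression enabled. This is only an illustration, not a supported drop-in replacement: it assumes the Service Fabric SDK’s PowerShell module is available on the machine running the step and that a cluster connection has already been established, and the image store path ("MyApp") is a placeholder you would replace with your own.

```powershell
# Sketch only: assumes Connect-ServiceFabricCluster has already succeeded
# and the ServiceFabric PowerShell module is loaded.

# Read the package path from the same variable the standard step uses.
$packagePath = $OctopusParameters["Octopus.Action.ServiceFabric.ApplicationPackagePath"]

# Upload the package to the image store with compression enabled.
# -CompressPackage compresses the code/config/data packages before transfer,
# which can reduce upload time considerably over a slow or distant link.
Copy-ServiceFabricApplicationPackage `
    -ApplicationPackagePath $packagePath `
    -ApplicationPackagePathInImageStore "MyApp" `
    -CompressPackage

# Registration and create/upgrade would then follow as usual, e.g.:
# Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyApp"
```

Depending on your SDK version and cluster configuration, Copy-ServiceFabricApplicationPackage may also need an -ImageStoreConnectionString argument.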
Thanks for getting in touch! The likely bad news is that I’m not sure whether we would consider expanding the Service Fabric capabilities, and I know we have no plans to do so at the moment. However, I think there’s a good way to approach this requirement and improve the speed of the deployment.
You could add a worker that’s physically closer to your Service Fabric cluster and configure the step to run on it (by adding the worker to a new worker pool and scoping the step to that pool). With delta compression enabled as well, I think you’d see an improvement. Would you be willing to try that approach and let me know how much of a difference it makes?
I hope this helps, and I look forward to hearing your thoughts!
I didn’t know about Workers and just tried one. It’s working and has solved my problem.
Thanks for keeping in touch and letting me know the outcome of that! Great to hear it’s helped solve this problem, and please don’t hesitate to reach out if you have any questions or concerns in the future.