I recently upgraded Octopus Deploy from an earlier version (2.5.?), and there has been a slight change in behaviour that I have a question about.
Previously I could deploy a release to an environment, and if there were no machines in the appropriate roles in that environment, the deployment would succeed with warnings.
Now I don’t seem to be able to do that.
I can understand the change, and I agree that a deployment that doesn’t actually do anything should fail in most cases. However, it has left me in an awkward position with our environments.
Previously I could “cache” deployments so that when the appropriate machines appeared in the environment they would automatically get the last release that was “successfully” deployed to that environment.
We evolved this solution as a result of using Auto Scaling Groups in AWS. Our Auto Scaling Groups are configured via CloudFormation to install the last successful release of a component to an environment on instance startup. We do this rather than use the “latest” deployment, as that led to issues with the Production environment auto scaling some of its instances to the version currently in CI.
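The selection logic we run on instance startup amounts to "most recent release that actually succeeded in *this* environment". A minimal Python sketch of that rule, against made-up data shaped loosely like a deployment history (the function and field names here are illustrative assumptions, not the real Octopus API):

```python
def last_successful_release(deployments, environment):
    """Return the most recent release successfully deployed to the given
    environment, or None if there has never been one.

    `deployments` is assumed to be ordered oldest-first, each entry
    carrying the environment name, release version, and final state.
    """
    candidates = [
        d for d in deployments
        if d["environment"] == environment and d["state"] == "Success"
    ]
    return candidates[-1]["release"] if candidates else None


# Example: Production only ever picks up releases that succeeded in
# Production, never whatever happens to be the latest deployment in CI.
history = [
    {"environment": "CI", "release": "1.4.0", "state": "Success"},
    {"environment": "Production", "release": "1.3.2", "state": "Success"},
    {"environment": "Production", "release": "1.4.0", "state": "Failed"},
]
print(last_successful_release(history, "Production"))  # 1.3.2
```

Filtering on "last successful in this environment" rather than "latest deployment anywhere" is exactly what stops Production instances from pulling the version currently in CI.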
The problem I describe above might have disappeared in this later version of Octopus Deploy, so I’m basically just looking for some guidance, if you have any.
Thanks for getting in touch!
We have a few customers who follow a similar process to you. However, it’s very rare for a deployment to go to an environment with no machines. As I’m sure you appreciate, it’s difficult (and a bit dangerous) to say a deployment has “succeeded” if it never went anywhere.
I’d be interested in knowing a bit more about your process. Is this a common scenario or just something that happens on a rare occasion due to losing a node, or AWS auto-scaling back to zero machines?
We are definitely planning to add support for more elastic environments which would make this technique redundant, however this won’t make it into the 3.0 release.
Probably the most straightforward option would be to establish a “canary” machine for use in each environment so there’s a deployment target set up. If testing the actual deployment process itself doesn’t matter, you could set up a new “canary” role and include a single placeholder step (e.g. PowerShell that does nothing) that runs on that role. That should allow the deployment to succeed.
Hope that helps!
The canary idea is a good one, but it will mean I need to alter all of my projects in Octopus, and remember to add that step to all new projects in the future.
Our issue is kind of a chicken and egg problem.
The environment setup (via CloudFormation; you can see an example of this in a repository that I spun off for load test workers at https://github.com/ToddBowles/Solavirum.Testing.JMeter) requires the deployment to succeed before the environment can be considered successful, while the deployment requires the environment to exist before it can succeed.
I used to be able to keep the environment down and still deploy releases (which happens automatically from TeamCity), and then when the environment was created it would automatically pick up the latest version that had been “deployed”. The same issue can occur when we scale our staging environments back to zero instances overnight and over weekends. Granted, builds are not overly likely during those times, but they could still happen.
I’ve worked around the issue by essentially allowing the environment to fail deployments and still succeed overall and have accepted for now that the environment must be up in order for deployments to succeed.
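In CloudFormation terms, that workaround looks roughly like the fragment below (a sketch only, not our real template: the resource name and `deploy-last-release.ps1` script are placeholders for whatever fetches and installs the last successful release from Octopus). The key point is that `cfn-signal` reports success to the stack even when the deployment step fails:

```yaml
# Hypothetical launch configuration fragment; names are placeholders.
UserData:
  Fn::Base64: !Sub |
    <powershell>
    # Attempt the deployment, but do not let its exit code decide
    # whether the environment itself comes up.
    & C:\bootstrap\deploy-last-release.ps1 -Environment "${EnvironmentName}"

    # Signal success to CloudFormation regardless of the deployment
    # outcome, so the stack can complete even when there is nothing
    # deployable in Octopus yet.
    cfn-signal.exe --success true --stack ${AWS::StackName} --resource WorkerGroup --region ${AWS::Region}
    </powershell>
```

The trade-off is exactly the one described above: the environment always comes up, but a genuinely broken deployment no longer fails the stack, so that failure has to be surfaced some other way.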
Does that help to add some context?
Thanks for the reply and for that info, it does add some context.
I think allowing the deployment to fail is a good option, but you obviously lose that visibility in TeamCity if there is a real failure.
The canary suggestion may be easier with 3.0 when it comes out (very soon), as you can add a deployment target without requiring a machine and tentacle. For example, you could create an Offline Target and just point it at a folder that gets cleaned up regularly; that target could be given any (or all) roles, and the result would just be a file dump you could ignore.
What if all your environments are Azure Websites? Technically we don’t even need to add a machine at all; everything can be done as a step, including a step that deploys an Azure web app.
We actually have better support for “elastic environments” on our roadmap. You should see some progress on this fairly soon, and probably another RFC. That would enable a much better story for what you’re talking about.