Deploy to multiple instances (from separate API calls) simultaneously

My current setup has AWS instances that are created by an ASG and request deployments from Octopus via an API call once they’re ready.

I’ve noticed that if 2 or more boxes come up at the same time, the deployments to them run serially, and due to some longer-running processes and delays, it sometimes takes so long that my ELB health check fails.

Is there a way to make multiple deployments to the same environment and project, but to different machine targets, run in parallel when using the API to deploy? Ideally I’d like an option I can enable so that, rather than queuing a deployment, Octopus pushes it out right away when the machines don’t conflict. I’m not sure whether Octopus is designed to handle that, though, so if not, any other suggestions would be helpful.
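(For reference, each instance triggers its own deployment with something roughly like the following. This is only a minimal sketch against the Octopus REST API; the server URL, API key, and all IDs are placeholders, and the SpecificMachineIds field is what scopes the deployment to just the new box.)

```python
# Minimal sketch: ask Octopus to deploy an existing release to a single
# machine via the deployments API. Server URL, API key, and IDs below
# are placeholders for illustration only.
import json
import urllib.request

OCTOPUS_URL = "https://octopus.example.com"
API_KEY = "API-XXXXXXXXXXXXXXXX"

payload = {
    "ReleaseId": "Releases-123",           # release to deploy
    "EnvironmentId": "Environments-1",     # target environment
    "SpecificMachineIds": ["Machines-42"], # limit the deployment to this box
}

request = urllib.request.Request(
    f"{OCTOPUS_URL}/api/deployments",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "X-Octopus-ApiKey": API_KEY,
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(request) as response:
    deployment = json.loads(response.read())
    print("Created deployment:", deployment["Id"])
```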

Thanks,
Jason

Hi Jason,

Thanks for reaching out! We are about to release version 3.4, which has a feature called Elastic Environments that does out of the box what you are trying to do using the API.

Being so close to the 3.4 RTM (only a couple of weeks away), I’d strongly recommend taking a look at our blog posts about that release and that particular feature. In the meantime you can give our 3.4 beta a try and see whether the current state of these features fits your scenario.

3.4 Release blog post: https://octopus.com/blog/octopus-deploy-3.4-eap-beta2 (includes links to get started with the beta)

Elastic Environments feature: https://octopus.com/blog/whats-new-elastic-transient-environments

Let me know how that goes,
Dalmiro

I took a closer look at this today and it appears to solve most of my issues.
There is one thing that’s stopping me from making the change right now and I was wondering if you could give some clarification on it.

“If the initial deployment of a release was successful but an automatic deployment of that release fails, Octopus will stop automatically deploying that release.”

I’ve occasionally seen a deployment fail (an issue with the EC2 instance, someone terminating it, etc.). If that stops automatic deployments, someone would need to be constantly monitoring for failures to let it recover.

Is there a way currently (or planned) to make automatic deployments continue and just send a notification on the failure instead?

Thanks,
Jason

Hi Jason,

We’re currently working on the notification side of your question; this will let you subscribe teams/users to certain events that happen, such as failed deployments.

Regarding failed deployments stopping further auto-deployments, you can use the Machine Connectivity setting on the project to remove machines that either are, or become, unavailable during a deployment (see our Deploying to transient targets docs for more information).

I hope that helps!

Thank you and best regards,
Henrik