Octopus Deploy deployment in AWS CloudFormation (RDS + EC2)

Hi guys,

We're about to move our environment into AWS, and I want to put Octopus Deploy and the MSSQL Server into CloudFormation so the whole thing can be spun up again in an automated fashion.

My Question(s):
Is this possible to do completely end to end without any user interaction?
My main concern is things like updating Octo to a new version, which requires updating information within SQL Server. Can the EC2 instance be removed and the Octopus Server MSI simply executed again, so that it sees its existing data in the SQL database and everything just starts working again?

If anyone has set up a fully automated end-to-end deployment of Octo + RDS, please let me know!

Many thanks

Hi Flynn,

Thanks for getting in touch! Sorry about the delay in getting back to you. I have the most experience here at Octopus in guiding customers through automation, High Availability and disaster recovery scenarios.
I know you haven't mentioned all of those in your ticket, but we'll need to consider each of them to achieve what I believe you're after.

Octopus Server, like Tentacle, can be automated in its setup, but there are considerations to take into account when the automation is for recovery. Automating a brand-new Octopus Server instance is straightforward; rebuilding a previously existing instance introduces complexities.

I'll say this first: our High Availability model is designed specifically for this purpose. With it, recovering from the loss of a server is very easy, and many customers have scripted the recovery of a node.
That being said, you do not need High Availability for your scenario, but you do have to borrow from its configuration, and you will have to accept some downtime between the old server and instance going offline and the new one being provisioned and installed.
High Availability would eliminate that downtime.

The major issue with what you are after is that we store a number of things on the server's file system. Task logs, packages and artifacts all need to either live in shared storage or be replicated to a backup that can be restored as part of the automation. A shared drive is the best solution, as it will always have the most up-to-date data, whereas replication or synchronization may miss data depending on timing.
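As a rough sketch, the shared-storage approach means pointing the server's directories at a file share with the `path` command. The share paths below are hypothetical examples, and exact arguments can vary between Octopus versions, so verify against the command-line help for your release:

```powershell
# Point task logs, artifacts and the built-in package repository at shared
# storage so a rebuilt server sees the same data as the one it replaces.
# The \\fileshare paths are placeholders - use your own file share.
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" path `
    --instance "OctopusServer" `
    --artifacts "\\fileshare\Octopus\Artifacts" `
    --taskLogs "\\fileshare\Octopus\TaskLogs" `
    --nugetRepository "\\fileshare\Octopus\Packages"
```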

Automating Octopus Server to connect to an existing database requires the Master Key, so you will need to take measures to secure it.
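One possible approach, sketched below, is to capture the Master Key from your current instance and keep it in a secure store your automation can read. Since you're in AWS, SSM Parameter Store is one option; the parameter name is an assumption, and the last line requires the AWS Tools for PowerShell:

```powershell
# Read the Master Key from an existing, working instance.
$masterKey = & "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" `
    show-master-key --instance "OctopusServer"

# Example only: store it as an encrypted parameter so the rebuild
# automation can retrieve it later. "/octopus/master-key" is a
# hypothetical parameter name.
Write-SSMParameter -Name "/octopus/master-key" -Type "SecureString" -Value $masterKey
```

Whatever store you use, treat the Master Key like any other production secret: without it the new server cannot decrypt the sensitive values in the existing database.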

The easiest way to determine the scripts you need for this process is to go through the setup once in the Octopus Manager wizard and use its Show Script feature. You will need to add some extra configuration, such as changing your directories for tasks, artifacts and packages. You will also need to give the new instance the same node name as the one that previously existed, so it can take over correctly as the 'same' instance/node.
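Putting those pieces together, a rebuild script might look something like the sketch below. This is illustrative only: the connection string is a placeholder, the Master Key retrieval assumes the hypothetical SSM parameter mentioned above, and the exact commands differ between Octopus versions (older 3.x releases used `configure --storageConnectionString` rather than the `database` command), so compare it against your own Show Script output:

```powershell
$exe = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"

# Retrieve the Master Key saved from the original instance
# (hypothetical parameter name; requires AWS Tools for PowerShell).
$masterKey = (Get-SSMParameter -Name "/octopus/master-key" -WithDecryption $true).Value

# Create the instance and point it at the EXISTING database, supplying
# the Master Key so the new server can decrypt existing data.
& $exe create-instance --instance "OctopusServer" --config "C:\Octopus\OctopusServer.config"
& $exe database --instance "OctopusServer" `
    --connectionString "Server=mydb.rds.amazonaws.com;Database=Octopus;Trusted_Connection=True;" `
    --masterKey $masterKey

# Install and start the Windows service. Note: the node name defaults to
# the machine name, so launch the replacement EC2 instance with the same
# computer name as the server it is taking over from.
& $exe service --instance "OctopusServer" --install --reconfigure --start
```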

We have some documentation about HA setup which you should read regarding the shared storage: http://docs.octopusdeploy.com/display/OD/Configuring+Octopus+for+High+Availability

If you have any questions or would like me to clarify anything, please let me know.
I can give more detailed instructions, such as setting the node name, if required, or tell you more about High Availability if you think you want a 'zero downtime' environment.