I am seeking a second opinion on an architectural approach. We will have a simple setup: a management subnet where Octopus sits, and two environments, UAT and Production, each consisting of several VMs. Our infrastructure is provisioned on AWS by Terraform/Puppet scripts (and some other tools in between).
Now the requirement is to make UAT deployable on demand for short periods (to save AWS costs). This process must be automated as far as possible so that we don't spend too much time on it every time we need it. Both the infrastructure and the applications should be versioned, so we know exactly which infrastructure version is deployed on UAT or Production. The idea was to make the infrastructure scripts an Octopus package, deployed like any other application prior to the business applications (perhaps with a trigger for the application deployments once the machines become available). We would have control over the package version, and all of it would be managed in one place.
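As a rough illustration of the "infrastructure as a versioned package" idea, here is a minimal Python sketch that bundles a directory of infrastructure scripts into a ZIP named in the `PackageId.Version.zip` convention that Octopus's built-in package feed understands. The package id `Acme.Infrastructure`, the version `1.2.0`, and the placeholder `main.tf` are all made-up names for the example:

```python
import tempfile
import zipfile
from pathlib import Path

def package_infrastructure(source_dir: str, package_id: str,
                           version: str, out_dir: str) -> str:
    """Bundle infrastructure scripts into an Octopus-style versioned ZIP
    (PackageId.Version.zip), so the infra code gets the same versioning
    treatment as the application packages."""
    archive = Path(out_dir) / f"{package_id}.{version}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(source_dir).rglob("*"):
            if path.is_file():
                # Store paths relative to the bundle root.
                zf.write(path, path.relative_to(source_dir))
    return str(archive)

# Demo with a throwaway directory standing in for the real Terraform repo.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "infra"
    src.mkdir()
    (src / "main.tf").write_text("# terraform config placeholder\n")
    pkg = package_infrastructure(str(src), "Acme.Infrastructure", "1.2.0", tmp)
    print(Path(pkg).name)  # Acme.Infrastructure.1.2.0.zip
```

In practice a build server would produce this ZIP and push it to the Octopus feed, after which the infrastructure version is visible and selectable alongside the application versions.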
My concern is that Octopus isn't actually the right tool for this job: that it shouldn't orchestrate, and should only be used to deploy our .NET applications. So I'm looking for a second opinion on this solution, and would be grateful for one.
I've seen articles about using DSC as an Octopus step to configure a server (Paul's blog), and also cases where people use Octopus for full infrastructure provisioning with a CloudFormation community step template (Discussion).
Thanks for getting in touch! Octopus is definitely a great tool for implementing your idea. While Octopus was originally marketed at .NET applications, it has since grown a lot and can do much more.
I don't think we have any community step templates that deploy Terraform and/or Puppet code, so if you do end up writing a custom step template, we'd definitely appreciate a PR to the Community Library.
I think there is still some confusion about this in DevOps teams. Even if you Google Octopus, you will see "Automated deployment for .NET", so I think some marketing work around that could help.
We have been discussing two approaches in our team, with Jenkins in the mix (I didn't mention that earlier as I wasn't sure about it):
Where Jenkins is responsible for compiling .NET and deploying the AWS infrastructure (invoking Terraform/Puppet/other tools), and Octopus is triggered by Jenkins at the end to deploy only the .NET applications. That was the DevOps choice, based on the premise that, in their experience, Octopus is good specifically at .NET.
OR where Jenkins is responsible for compiling .NET and packaging the infrastructure scripts (via OctoPack), and Octopus is responsible for deploying all of them. That was my choice, as somebody keeping an eye on the wider project (I come from a .NET development background, not pure DevOps).
So when I see people pairing Jenkins with Octopus and ask why (Jenkins is, after all, what DevOps teams use for deployment), my answer would be: Octopus takes the deployment responsibility away from Jenkins, and deployment is exactly what Octopus focuses on and is particularly good at.
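To make the handoff in the first approach concrete, here is a small sketch of the command Jenkins would run as its final stage, using the Octopus CLI's `create-release` command to create and deploy a release. The project name, version, environment, and server URL are placeholders for whatever your setup uses:

```python
import shlex

def octo_release_command(project: str, version: str, environment: str,
                         server: str) -> list:
    """Build the `octo create-release` invocation Jenkins would run once
    the build and packaging stages are done. The API key is intentionally
    left out here; a real pipeline would append it from a secret store or
    environment variable at execution time."""
    return [
        "octo", "create-release",
        "--project", project,
        "--version", version,
        "--deployTo", environment,
        "--server", server,
    ]

cmd = octo_release_command("Acme.Web", "1.2.0", "UAT",
                           "https://octopus.example.com")
print(shlex.join(cmd))
# A real pipeline would execute it, e.g.:
#   subprocess.run(cmd + ["--apiKey", os.environ["OCTOPUS_API_KEY"]], check=True)
```

This keeps the boundary clean either way: Jenkins builds and pushes packages, and a single CLI call hands the deployment responsibility over to Octopus.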
In terms of a solution, I couldn't experiment on the actual AWS environment, but locally I came up with this process (Octopus steps):
A health check step that checks only the Octopus Server role. I don't know if this is a bug or a feature, but even with the machine policies set so that a failed health check doesn't generate a warning, the default health check performed at the start of the deployment still produced one. To work around that, I put an explicit health check step at the beginning that checks only the Octopus Server (the other machines, by requirement, don't exist yet; they are about to be provisioned). As far as I can see, putting it first overrides the original default check (?), which was failing because it checked all server roles.
A PowerShell script step run on the Octopus Server that takes the delivered AWS infrastructure bundle (ZIP package) and runs Terraform/Puppet/… (these scripts also install the Tentacles on the new machines).
Another health check step, this time actually checking whether the target machines are available. Because the first step checks only the Octopus Server, I could set the machine policy to produce a warning when targets are unavailable (we don't have any load balancers in this scenario right now). I have also set this health check to:
add machines that became available (so the newly created ones)
fail if they are not (as mentioned, with no load balancer I require all the machines to be there at this point)
Run other steps like DbUp database updates, then the actual business application deployments.
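The essence of that second health check is a wait-for-availability loop: keep polling the newly provisioned targets, add each one as it comes up, and fail if any is still missing when the timeout elapses. A minimal sketch of that logic, with a stubbed probe and made-up machine names (`uat-web-01`, `uat-db-01`) standing in for the real Tentacle health check:

```python
import time

def wait_for_targets(targets, probe, timeout_s=300, interval_s=10):
    """Poll every target until all are healthy or the timeout elapses.
    Mirrors the second health-check step: newly provisioned machines are
    added as they come up, and the step fails if any is still missing
    (with no load balancer, every machine must be present)."""
    deadline = time.monotonic() + timeout_s
    healthy = set()
    while time.monotonic() < deadline:
        for target in targets:
            if target not in healthy and probe(target):
                healthy.add(target)
        if healthy == set(targets):
            return True
        time.sleep(interval_s)
    missing = set(targets) - healthy
    raise TimeoutError(f"targets never became healthy: {sorted(missing)}")

# Demo with a stub probe: the DB machine needs a few polls to come up.
attempts = {"uat-web-01": 0, "uat-db-01": 0}
def fake_probe(name):
    attempts[name] += 1
    return attempts[name] >= (3 if name == "uat-db-01" else 1)

print(wait_for_targets(["uat-web-01", "uat-db-01"], fake_probe,
                       timeout_s=5, interval_s=0.01))  # True
```

In the real process Octopus's own health check step does this for you; the sketch is just to show the failure semantics being relied on here.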
So, as I said, this was tested locally with VMs and hasn't been battle-tested in the enterprise scenario. However, after we discussed it yesterday (before your response, unfortunately), the DevOps team convinced the main decision maker to go with the mix of Jenkins doing infrastructure and Octopus doing just .NET. I don't feel comfortable with that, but I've noticed Octopus hasn't yet made enough of an impact on our DevOps team to be seen as something more than a .NET tool.
I agree that our marketing message might not be attractive to people from outside the .NET world. I've shared your comment internally.
That being said, when you look under the covers you can see that Octopus can be seen as a general purpose orchestration and deployment engine which happens to streamline certain scenarios, like the ones used by .NET developers.
Below I've included just a few examples that show how you can deploy code that is not related to .NET:
I can think of a few reasons why you would want to use Octopus to deploy both infrastructure and code. In most cases infrastructure and code are coupled, so it is valuable to see their deployed versions in one place, and Octopus can do that for you.
The same applies to configuration values, especially sensitive ones. If infrastructure and code require access to the same configuration values, then having both in Octopus makes them much easier to manage, audit, and secure.
Please let me know if there is anything else I can do to help.