Octopus lift and shift

We are currently in the early stages of planning a move of our on-prem Octopus Server instance. It's currently sitting on a Xen server which is due to be decommissioned, and we need to move the Octopus Server and database instance to a Hyper-V server in the same datacentre.

This move is highly critical due to the number of Tentacles we have (1000+) and the near 24/7 usage.

We are looking for some guidance on best practices for this kind of work, as our primary concern is that we end up leaving 1000+ Octopus tentacles in an unusable state and bringing down our deployment process for days to come. The tentacles are installed by the Ops team and we as CICD don't have easy access to them, so it's very important that whatever we do doesn't impact the existing tentacles. An example of this could be that we accidentally update the tentacles to a new version that isn't compatible with the version of Octopus we are setting up.

Current position:

Octopus Server: Octopus 3.15.7, hosted in the datacentre on a Xen server (Windows Server 2012 R2).

Database: SQL Server (version?!?!), hosted in the datacentre on a Xen server (Windows Server 2012 R2).

Tentacles: 1000+ polling Tentacles in the datacentres (Ireland and US)

New position:

New Octopus Server instance hosted in the datacentre on Hyper-V (Windows Server 2019)

New SQL Server instance, with the database migrated, hosted in the datacentre on Hyper-V (Windows Server 2019)

Tentacles: 1000+ polling Tentacles in the datacentre (not being moved)

Nice to haves after the migration:

Upgrade to the latest Octopus LTS

Move to HA Octopus

Okta

The current proposed solution:

Drain Octopus, put it into maintenance mode and finally switch the service off.

Take a backup of the SQL Server instance and restore it on the newly created Hyper-V instance.

Install Octopus Server (3.15.7, the same version as on the Xen server) on the Hyper-V instance, ensuring we use the same database Master Key.

Configure the Octopus and Database servers with the same firewall rules.

Configure Octopus to use the newly installed SQL Server database.

Start up the newly created Octopus instance and do some local testing (at this point all tentacles will still be pointing to the old instance.)

Point the CNAME from the Xen Octopus to the Hyper-V Octopus. (This is the point of major concern: accidentally updating 1000+ tentacles.)

Questions:

Does the above process seem like it will be the best approach for our lift and shift?

Is it sufficient to move the database or do we have to update settings in the database?

Are there any settings on Octopus Server that will need to be updated? For example, on our new instance, do we need to set the paths for the files (logs / artifacts / packages)?

Is there anything we can do to dry-run this?

Can we prevent the tentacles from being updated by locking their versions? Or is this overkill as we are doing a like-for-like move on versions (3.15.7 -> 3.15.7)?

Is there anything we need to look out for when changing the operating system from Windows Server 2012 R2 to Windows Server 2019? I'm guessing 3.15.7 has no known issues running on Windows Server 2019?

Our other Octopus installs (4 separate setups) are all using listening tentacles (or workers), and we have seen previously that when we update the CNAME they happily connect to the new instance. However, we have limited knowledge of polling tentacles; will they act the same way?

Hi @Jon_Vaughan2

Thanks for getting in touch with Octopus!

As there was a lot of information in your original post, I have tried to answer your questions under some headings below. As with any type of migration, trying to reduce the number of moving parts will help here, so please disregard any information you consider irrelevant to whatever plan you settle on :slight_smile:.

I’ve also written up some advice on some of your future aspirations covering an upgrade, High Availability, and Okta.

Moving Octopus Server and Database

If you just want to move the Octopus Server and database (without any upgrade), then we have a guide on our website on Moving the Octopus Server and database | Documentation and Support

Proposed Migration Plan

Your proposed solution looks very good. I would only add that you would also need to copy the following directories from your original server to the new server (each of these folders is located in C:\Octopus in standard installations).

  • Artifacts
  • Task Logs
  • Packages
    • This folder only needs to be moved if using the built-in package repository. External feed details are stored in the database, and they will connect automatically.

The above is taken from our migration process guide here.

The copy step should be done prior to this step:

Start up the newly created Octopus instance and do some local testing (at this point all tentacles will still be pointing to the old instance.)

CNAME Record and Polling tentacles

Just a specific mention on this as you noted this is of major concern to you.

Assuming the CNAME record you will move was used with your polling tentacles when they were first registered (e.g. https://CNAME-RECORD-TO-SERVER-ADDRESS:10943), then this looks spot on.

By changing the CNAME record, the change of hosting server should be transparent to the polling tentacles. I'd also recommend reducing the TTL on the CNAME record.
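
If useful, you can check the record and its current TTL from any of the Windows servers involved with the DnsClient module's Resolve-DnsName cmdlet (the hostname below is a placeholder):

# Show the CNAME record, where it points, and the TTL currently being served
Resolve-DnsName -Name "octopus.yourdomain.example" -Type CNAME |
    Select-Object Name, Type, TTL, NameHost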


Specific Questions

Does the above process seem like it will be the best approach for our lift and shift?

A: Yes, I think this approach looks good, please refer to the Proposed Migration Plan for additional comments on your plan :slight_smile:.

Is it sufficient to move the database or do we have to update settings in the database?

A: Yes, you shouldn’t need to update the settings in the database unless you plan to change the directories for Artifacts, Packages, and Logs

Are there any settings on Octopus Server that will need to be updated? For example, on our new instance, do we need to set the paths for the files (logs / artifacts / packages)?

A: As above. Please see here if you do plan to alter the standard installation folders.
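
For completeness, changing those folders is done from the command line on the server rather than in the database. A rough example (I'm recalling the flag names from memory, so please confirm them with Octopus.Server.exe path --help on your version; the paths are placeholders):

# Point the instance at new artifact, task log and built-in package folders
Octopus.Server.exe path --instance "OctopusServer" --artifacts "D:\Octopus\Artifacts" --taskLogs "D:\Octopus\TaskLogs" --nugetRepository "D:\Octopus\Packages"

# Restart the service so the new paths take effect
Restart-Service "OctopusDeploy"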

Is there anything we can do to dry-run this?

A: Yes, you can indeed dry-run a large majority of your proposed plan. My suggestion would be to do a dry-run up to the point of switching the CNAME record.

One thing that might catch you out is that any project triggers may attempt to fire against your deployment targets when you start the new instance up.

To counter this, you can disable all deployment targets and any project triggers.

  1. You can run an API script to disable all project triggers on your test migrated instance. You can find a sample script here - OctopusDeploy-Api/REST/PowerShell/Projects/DisableAllProjectTriggers.ps1 at master · OctopusDeploy/OctopusDeploy-Api · GitHub
  2. You can run an API script to disable all machines on your test migrated instance, to be absolutely sure that nothing will deploy. Again a sample can be found here - https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Targets/EnableOrDisableAMachine.ps1. You can then enable them as you run through testing.

:warning: Please note: these scripts are given with no warranty. I have not run or verified them. You will need to double-check the scripts before running them. :warning:
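
To give a flavour of the approach those sample scripts take, a minimal sketch of disabling every deployment target over the REST API might look like the below (the server URL and API key are placeholders, and the linked scripts remain the better starting point):

$octopusUrl = "https://octopus.yourdomain.example"   # placeholder
$apiKey     = "API-XXXXXXXXXXXXXXXX"                 # placeholder
$headers    = @{ "X-Octopus-ApiKey" = $apiKey }

# Fetch every registered machine, mark it disabled, and save it back
$machines = Invoke-RestMethod -Uri "$octopusUrl/api/machines/all" -Headers $headers
foreach ($machine in $machines) {
    $machine.IsDisabled = $true
    Invoke-RestMethod -Uri "$octopusUrl/api/machines/$($machine.Id)" -Method Put -Headers $headers -ContentType "application/json" -Body ($machine | ConvertTo-Json -Depth 10)
}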

Can we prevent the tentacles from being updated by locking their versions? Or is this overkill as we are doing a like-for-like move on versions (3.15.7 → 3.15.7)?

A: I wouldn’t lock their versions personally as you are doing a like-for-like version migration. You can lock them if you want to be sure though.

Is there anything we need to look out for when changing the operating system from Windows Server 2012 R2 to Windows Server 2019? I'm guessing 3.15.7 has no known issues running on Windows Server 2019?

A: Nothing specific that I am aware of. However, I haven't tested a migration of this specific version, so you should verify this yourself. Generally speaking, I would check things such as the version of PowerShell, and any other tooling that may be installed by default on Windows Server 2019, such as the .NET Framework version.
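
If it helps, both of those can be checked quickly on the new Windows Server 2019 host; per Microsoft's documentation, a .NET release value of 528040 or higher corresponds to 4.8:

# PowerShell version shipped with the OS
$PSVersionTable.PSVersion

# .NET Framework 4.x release number from the registry
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release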

Our other Octopus installs (4 separate setups) are all using listening tentacles (or workers), and we have seen previously that when we update the CNAME they happily connect to the new instance. However, we have limited knowledge of polling tentacles; will they act the same way?

A: See the section on CNAME above. I have not tested this myself, but I believe it should be transparent to the polling tentacles providing the TTL is low, and a health-check is performed after start-up of your new instance.

You should be able to test the CNAME change by simulating a change on one of your polling tentacles (or install one locally, providing it can access the new instance) using a HOSTS file entry.
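
As a rough illustration of that test (the IP address and hostname are placeholders), the HOSTS entry can be added from an elevated PowerShell session on the test machine:

# Resolve the new CNAME to the new server's IP for this machine only
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.50    octopus.newdomain.example"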

Upgrading Octopus

We have a page that discusses upgrading in-depth, and you can read this on Upgrading a modern version of Octopus | Documentation and Support. The TL;DR version is to ensure you have something you can roll back to in case of a failure in your upgrade process.

I’d recommend testing out the upgrade, to ensure that there is no interruption to your deployments.

The 2 main options you have are:

  1. An in-place upgrade, where you would perform a database backup and a file system backup (if your Octopus Server is hosted on a virtual machine, you could also take a snapshot). This option is easier but slightly riskier as the upgrade is in place.
  2. A new server and database (lift and shift). This would involve cloning the database and the machine that Octopus Server is running on, and then testing the upgrade in isolation. This option is less risky, but there is also a lot more work involved.

In most cases, an in-place upgrade is fine and this is what I would recommend, particularly when you are moving between “modern” versions (considered to be v3.x and up), as is the case for you.

At the time of writing, the latest LTS version is 2020.1.5.

Please note: the minimum requirements for Octopus Server have been raised from version 2020.1 onwards. The TL;DR is that to install 2020.1 we recommend:

  • Windows Server 2016 or higher
  • SQL Server 2017

You can see the announcement on the reasons for this change on Raising the minimum requirements for hosting and using Octopus Server - Octopus Deploy

Based on your new position, 2020.1 could be installed as long as you also choose to use SQL Server 2017 or higher.
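
Since the SQL Server version was listed as unknown in your original post, a quick way to confirm it is the snippet below (the server name is a placeholder and it uses Windows authentication); a ProductVersion beginning with 14 corresponds to SQL Server 2017:

# Query the SQL Server engine version directly
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=YOUR-SQL-SERVER;Database=master;Integrated Security=True")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('ProductLevel') AS Level"
$reader = $cmd.ExecuteReader()
if ($reader.Read()) { Write-Host "SQL Server $($reader['Version']) ($($reader['Level']))" }
$conn.Close()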

I’d also recommend checking for any breaking changes from 3.15.7 to 2020.1.5. To compare the release notes between versions and view breaking changes, please see https://octopus.com/downloads/compare?from=3.15.7&to=2020.1.5


Move to HA Octopus

We have a page that discusses configuring Octopus High Availability, and you can read this on https://octopus.com/docs/administration/high-availability/configure

I’d pay particular attention to a couple of sections of this guide:

The polling tentacle set-up is of importance when going to a HA configuration. Listening Tentacles require no special configuration for High Availability. Polling Tentacles, however, poll a server at regular intervals to check if there are any tasks waiting for the Tentacle to perform. In a High Availability scenario Polling Tentacles must poll all of the Octopus Servers in your configuration.

Note: There are ways to automate this that don’t involve restarting the tentacle. You can run a script using the Script Console, or use a worker on behalf of all the deployment targets, to configure the additional nodes. Please see here for the command line you would need.
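
For reference, the command is along these lines (syntax from memory, so please verify it against the page linked above; the node address and API key are placeholders), run on each polling tentacle once per additional node:

# Tell an existing polling tentacle to also poll a second Octopus HA node
Tentacle.exe poll-server --instance "Tentacle" --server "https://octopus-node2.yourdomain.example" --apikey "API-XXXXXXXXXXXXXXXX"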


Okta

We have documentation covering Okta authentication on https://octopus.com/docs/security/authentication/okta-authentication


I hope that helps, and best of luck with your migration!

Kind Regards,
Mark

Thanks Mark for the detailed response. Some great points in there that we had overlooked, such as moving the artifacts/task logs/packages etc., and links to invaluable documents.

Some interesting points have come up around the CNAME. We have two changes in this area that we would like to make.

1.) Update to HTTPS, as we would like to authenticate with Okta (plus it’s good practice).
2.) We have updated our domain, so we would like to update this as well: http://octopus.[olddomain] to http(s)://octopus.[newdomain]

As I stated in the initial post, our primary concern is updating the polling tentacles as we don’t have easy access to them (They are owned by a separate team).

So the current thought process is to look into potentially redirecting either protocol (http -> https), the CNAME domain (octopus.[olddomain] -> octopus.[newdomain]) or ideally both.

Do you know of any other customers who have taken this approach with polling tentacles? Are there any recommendations in this area?

Hi @Jon_Vaughan2,

Thanks for getting back in touch!

Update to HTTPS
When you mention updating to HTTPS, are you referring to the Octopus Web portal? If so, it’s worth saying that the polling tentacles (and indeed listening tentacles) would already be communicating securely, as they use TLS.

I’ve not tested it, but I would think that you should be able to change to HTTPS for the web portal without impacting the polling tentacles, as they communicate over TCP Port 10943 by default (instead of 443).

You can also add multiple host-headers for Octopus Server to listen on, and as polling tentacles are secure as above, you could add a new listening domain for Okta (either in the short or long term).
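
As a rough sketch of what adding a second host-header could look like (I haven’t verified the exact option name on 3.15.7, so please check Octopus.Server.exe configure --help; the prefixes are placeholders):

# Listen on both the old and the new host-headers, then restart the service
Octopus.Server.exe configure --instance "OctopusServer" --webListenPrefixes "https://octopus.olddomain.example,https://octopus.newdomain.example"
Restart-Service "OctopusDeploy"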

Updating domain
I’m not aware of any customers who have taken this approach, but I tried it out myself; see below.

I tried creating a CNAME record called newoctopus.home.local on my local home domain.


I then navigated to the Octopus Server on port 10943 using the new CNAME record.

It’s also worth noting that the Octopus Server node is not listening on that specific host-header (e.g. newoctopus.home.local).


I then tried adding a new polling tentacle, using the new CNAME record (which also doesn’t use https in its URL).

This all worked successfully, and the polling tentacle was registered in my Octopus Server.
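
For reference, registering a polling tentacle against a given server address looks roughly like the below (flags from memory; the API key, environment, and role values are placeholders):

# Register a polling tentacle (TentacleActive) against the new CNAME on port 10943
Tentacle.exe register-with --instance "Tentacle" --server "https://newoctopus.home.local" --comms-style "TentacleActive" --server-comms-port 10943 --apiKey "API-XXXXXXXXXXXXXXXX" --environment "Test" --role "test-role" --console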

Summary

It looks like the CNAME should work for you, and depending on your requirement for HTTPS, that should also be possible. I would caveat this by saying you should test this. If you don’t have access to the existing polling tentacles, but you do have access to the Octopus Server, I’d consider installing a polling tentacle, either on the same machine or on the same network (where it can communicate with the Octopus Server).

Hi @mark.harrison

Thanks again for the information. As part of the migration we are looking at exploring all options, which I have tried to summarise as:

  • CNAME redirect (As discussed in this post)
    • PROS:
      • Doesn’t impact existing tentacles, so gives a rollback position by updating the CNAME back.
      • Allows legacy scripts that still use the old URL to continue working.
    • CONS:
      • Can we redirect all traffic? Something in the back of my mind says we had problems with another project doing this.
  • Update all polling tentacles URLs to the new server and do CNAME redirect for legacy scripts.
    • PROS:
      • Cleaner solution, as the tentacles will point directly to the new server.
      • CNAME redirect to support any custom scripts.
    • CONS:
      • We don’t know how to update all tentacles easily yet, as you probably can’t use the polling tentacle to update its own endpoint (unless Octopus has a solution?).
      • Once they are updated, if it fails we have 1000+ tentacles in an incorrect state.
  • Same as above, no redirect for custom script
    • PROS:
      • Quickly identifies custom scripts, as they will fail early.
    • CONS:
      • Custom scripts using the old endpoint will fail. There could be a large number of them, with an unknown impact on deployments (could all deploys fail?).
  • two polling tentacles per server
    • PROS:
      • A truer blue/green deployment with duplicate infrastructure and easier rollback.
    • CONS:
      • Difficult to install 1000+ tentacles? We can probably use the existing tentacle and REST API calls to set up a new tentacle.
      • Can both tentacles run on port 10943?
  • Change from polling to listening (this isn’t a valid option as Octopus doesn’t support it)
    • PROS:
      • Once updated to listening, the migration is easier as the server will be looking for the tentacle, so no update needed.
    • CONS:
      • Firewall changes on each of the 1000+ tentacles. (Additional checks show that this isn’t an issue as the port is open.)
      • Octopus 3.14 doesn’t support changing the tentacle communication type via the interface.
  • Two tentacles per server, the second tentacle being listening (alternative of point 4.)
    • PROS:
      • Blue/green-ish; shouldn’t impact the existing setup, just installing a new tentacle, so a cleaner rollback scenario.
      • Deploy new tentacles via the existing Octopus polling tentacle (using the Script Console) and REST API calls to get role / environment / machine policy.
      • Tentacles on different ports, so no potential clash.
      • Listening ports already open.
      • Listening tentacles are how we have our other installations
    • CONS:
      • Still updating 1000+ tentacles, so we need to be sure one generic tentacle install script can cater for all installations.
      • Risk associated with damaging 1000+ servers with a bad installation script.
      • Ops will have to change their tentacle installation scripts to be listening and not polling.
      • An additional script will need to be run afterwards to remove the old polling tentacles.

At the moment I’m leaning towards the two-tentacle approach: the existing one a polling tentacle and the newly installed one a listening tentacle. It aligns the installation with our existing installations, which all use listening tentacles (or workers).

My initial thought was to go via the CNAME redirect, but firstly it’s not a “clean” installation, as we will always have the old CNAME hanging around with a redirect. There’s also something in the back of my mind about a redirect problem I had last year (I need to investigate this further).

So, with this in mind, I’m investigating further the polling and listening tentacle on the same server. Do you have any opinion on this approach? Does it seem sound? Are there any cons that I’m unaware of?

Sorry again for all the information, but I feel it’s better to give you more information than less.

Additional question: should we do the Octopus Deploy Server version upgrade as part of the migration, or after the migration as a separate task? The main issue I can think of is that the tentacle version already installed on the tentacle servers can’t be upgraded and is Octopus.Tentacle.3.7.7-x64.msi. I’m just going to create a new listening tentacle. Is it compatible with the latest Octopus LTS? Is there anything else I need to be aware of before considering the Octopus Deploy Server upgrades?

Hi @Jon_Vaughan2,

Thanks for the additional summary you have provided.

I think it’s important, when you want a migration to go smoothly, to consider all of the benefits and risks as you have done, even more so when you don’t have easy access to all of the infrastructure.

I like the idea of installing the tentacles side-by-side. In order to minimize the impact of the CONS you have listed, I can really only offer what is probably obvious advice, which is to test as much of the process as is reasonably practical. For this approach, that would be to test the install script and the script needed to remove the old polling tentacles.

Given the approach is a side-by-side, I’d also consider ways to “rollback” the addition of the listening tentacles. That should be pretty straightforward if you keep the polling tentacles online until you are satisfied the listening tentacles are working as you expect.

In terms of the upgrade to Octopus Deploy Server, this largely depends on what infrastructure you choose to run your new version on. As mentioned in my previous post, from 2020.1, the minimum OS and SQL Server requirements have been raised.

It’s a bit of a chicken-and-egg situation: on the one hand, it’s always a good idea to be on the latest version; on the other, reducing your surface area of work is beneficial. My personal preference would therefore be to keep moving parts to a minimum with a migration of this size, so my recommendation would be to consider an upgrade to the latest version afterwards.

For tentacle compatibility, you can look at this page - Compatibility | Documentation and Support

In addition to what I have already discussed, I’d repeat one point from my earlier reply:

I’d also recommend checking for any breaking changes from 3.15.7 to whichever version you choose to go to. To compare the release notes between versions and view breaking changes, please use the version comparison page on the Octopus Deploy website.

Hi Mark,

We have pretty much decided that the right approach for us is to:

In AWS, build out a latest Octopus HA instance (2020.x).
On prem, have two tentacle instances: the original polling tentacle pointing to the on-prem Octopus Server, and a new listening tentacle pointing back to the newly installed AWS 2020.x build.

The concern here is that we would want to upgrade the tentacle version to 4.0+ to keep compatibility with the 2020.x Octopus. However, as the tentacle instances come from one installation and one service, we’re worried that if we chose to upgrade the tentacles to the latest version via the Octopus 2020.x instance, both tentacles would be upgraded.

As one of our primary aims is to have as close to zero impact as possible on the existing on-prem infrastructure, so we have a fallback position if something goes wrong, this would go against what we are trying to achieve.

Am I right in saying that if a server has one install of Octopus Tentacle, yet more than one tentacle instance, then when you upgrade one tentacle instance’s version, both will be upgraded as they come from the same installation and service?

Thanks

Hi @Jon_Vaughan2

Thanks for getting back in touch - I’ve answered your query below

You are correct in your assumption: all of the instances on a machine point at the same install directory. For an x64 install, the default directory is C:\Program Files\Octopus\Tentacle
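
To illustrate the model (the instance names and config paths are just examples), multiple instances are created from that single set of binaries, each with its own configuration, but they all run the same Tentacle.exe:

# Both instances share the binaries under C:\Program Files\Octopus\Tentacle
Tentacle.exe create-instance --instance "PollingTentacle" --config "C:\Octopus\PollingTentacle\Tentacle.config"
Tentacle.exe create-instance --instance "ListeningTentacle" --config "C:\Octopus\ListeningTentacle\Tentacle.config"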

One option, if you want to keep exactly the existing versions across the board, is to install Octopus Server 3.15.7 on the new AWS instance, do your migration, and then look to upgrade after the move of the database and the server hosting the Octopus install.

The alternative to this is to update your existing on-prem tentacles to 4.0 and then run the 2020.x version in AWS with the same version tentacle.

With either of the two options above, there is a trade-off.

  • The first option means you maintain complete backward compatibility as everything is in sync, but you still need to upgrade your tentacles later if you wish to use Server 2020.x.
  • The latter means you can use the 2020.x Octopus Server version as it’s compatible with Tentacle 4.0+, but you need to update your existing on-prem tentacles from 3.15.7 to 4.0+.

Kind Regards
Mark

So the option of two installs and two instances is off the table?

Can’t I leave one tentacle @ C:\Program Files\Octopus\Tentacle (polling port)

and the new tentacle @ C:\Program Files\Octopus\TentacleListening (listening port)?

The installer looks to allow me to select the installation location.

And there shouldn’t be a port clash, as one is polling and one is listening.

Hi @Jon_Vaughan2

Unfortunately yes. I actually tried this out before replying earlier, using version 3.15.7 (and using the version name in the directory structure for the install).

When you install the new tentacle MSI (I tried 4.0.0), it will remove the other installation (the 3.15.7 one). This will break your 3.15.7 tentacle, as it won’t find the previous install directory when trying to run the existing instance of the tentacle.

Kind Regards
Mark

Thanks Mark, all the answers are much appreciated; we are getting there slowly on our side :slight_smile:

The next question is about option 2. I’m looking at the on-prem Octopus and the documentation and don’t see a way via the interface to upgrade the tentacles to the latest version.

If an upgrade via the UI isn’t possible on 3.15.7, is there another way you would recommend? Ideally we would look at upgrading per environment to minimize the exposure to any issues/problems.

Hi @Jon_Vaughan2

You’re very welcome! :slight_smile:

Unfortunately, there isn’t a way to update to a newer version of the tentacle from the web portal. It is only aware of the tools (known as Calamari), which can be updated from an older version to one compatible with the Octopus Server version that is running (effectively, the Calamari version goes hand-in-hand with the Octopus Server version).

As the Octopus tentacle is highly backward compatible, it would make sense to consider upgrading to the latest version of the tentacle, which at the time of writing is 5.0.13 (Download Octopus Tentacle - Octopus Deploy). That way, you only need to do the update to the tentacles once.

Automating the install of the tentacle

Since the MSI can be executed from the command line, you can automate its execution. You can see some details on the exact commands to use in our doco: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/automating-tentacle-installation
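
As a rough sketch only (the linked documentation is authoritative; the paths, port, and names here are illustrative), a scripted install of a listening tentacle follows this shape:

# Silent install of the Tentacle MSI
Start-Process msiexec.exe -Wait -ArgumentList "/i Octopus.Tentacle.5.0.13-x64.msi /quiet"

# Create and configure a new instance, then install and start the service
$tentacle = "C:\Program Files\Octopus\Tentacle\Tentacle.exe"
& $tentacle create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config"
& $tentacle new-certificate --instance "Tentacle"
& $tentacle configure --instance "Tentacle" --port 10933 --noListen "False"
& $tentacle configure --instance "Tentacle" --trust "YOUR-OCTOPUS-SERVER-THUMBPRINT"
& $tentacle service --instance "Tentacle" --install --start
# ...followed by a register-with call against the Octopus Server (see the linked doc)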

In terms of automating this, tools such as Chocolatey exist to help.

If you wanted to run this from within Octopus itself then it could be done using a one-off Project or in the Script Console. Both give you the ability to run the MSI for specific environments.

The Script Console could be beneficial in this instance though, since you wouldn’t have to add roles to all of your machines in order to deploy to them.

Running the install as a bare PowerShell command causes you an issue. This is because as soon as you execute the MSI, the tentacle service running the command within PowerShell will shut down as part of the install, so the task would report itself as a failure.

I had a quick test on a local 3.15.7 version of Octopus, and I managed to come up with a script which worked for me on a single deployment target running Tentacle version 3.15.7 and reported as completed.

Sample Upgrade Script

:warning: Please note: This script is provided as-is. You should verify the script for correctness and run your own tests before running on any of your Environments :warning:

Write-Host "Downloading Tentacle MSI"
Invoke-WebRequest -Uri "https://download.octopusdeploy.com/octopus/Octopus.Tentacle.5.0.13-x64.msi" -OutFile "C:\Octopus.Tentacle.5.0.13-x64.msi"

Write-Host "Creating PS script to install"
Set-Content -Path C:\UpdateTentacle.ps1 -Value "Start-Process  msiexec.exe -Wait -ArgumentList '/I C:\Octopus.Tentacle.5.0.13-x64.msi'"

Write-Host "Adding command to start service"
Add-Content -Path C:\UpdateTentacle.ps1 -Value "Start-Service 'OctopusDeploy Tentacle'"

Write-Host "Calling UpdateTentacle.ps1 script (not waiting for exit)"
Start-Process powershell.exe -ArgumentList '-File C:\UpdateTentacle.ps1'


A few moments later, I ran a health check on my machine, test-machine.

The health check showed that it’s running the latest tentacle version and the latest version of Calamari that is supported on this version of Octopus Server.

Some caveats

The usual caveats apply and should be considered as part of your decision-making process:

  • The script should be verified for correctness before running on your live Octopus instance.
  • The install may still fail, and if the service doesn’t start up again, you would have to manually intervene on each machine to investigate and correct any issues.
  • Testing the update on a few machines first, plus a project deployment, would be highly recommended. This ensures there are no issues in both the update of the tentacle and your deployment processes.

I hope that helps!

Best regards,
Mark