We’re having some trouble with our migration & upgrade strategy (v2.6 to v3.0):
The Octopus Deploy instance needs to move to a new server, and the decision was made to upgrade from v2.6 to v3.0 at the same time.
The approach is as follows:
- Provision new server, install v3.0
- Run the migration utility on v3.0 using the latest v2.6 backup as the source
- Provision a new machine for the tentacle and install the v3.0 tentacle.
- Add tentacle to server using the thumbprint reported by the new server (which is the same as the original server’s thumbprint)
- Tentacle is added successfully to new server.
- Running a health check on the tentacle fails with the following: An error occurred when sending a request to ‘https://mytentacle:10933/’, before the request could begin: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
- Upon further investigation, the tentacle log file reports that a connection was attempted from a server with a different thumbprint than what the server UI is reporting.
Any ideas / suggestions?
We’ve chosen this strategy in order to start deployments using v3.0 in parallel to v2.6, until we’re happy that everything works smoothly before dropping v2.6, and then doing another final migration.
Thanks for getting in touch! The data migration will overwrite the thumbprint with the one from your 2.6 backup; this is to facilitate a seamless upgrade. If you haven’t yet migrated your data, the installation will have generated a new thumbprint for that server.
Does it sound like either of these could be the cause?
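If it helps to confirm which certificate each side is actually using, both executables can print their own thumbprint, which you can compare against the server UI and the thumbprint reported in the tentacle log. A sketch, assuming the default v3.0 install paths (adjust for your environment):

```shell
# On the Octopus server machine (default install path assumed):
cd "C:\Program Files\Octopus Deploy\Octopus"
Octopus.Server.exe show-thumbprint

# On the tentacle machine:
cd "C:\Program Files\Octopus Deploy\Tentacle"
Tentacle.exe show-thumbprint
```

If the server’s reported thumbprint differs from the one in the tentacle log, the server process is still using a certificate other than the migrated one.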
Side-by-side upgrades are our recommended approach, as they give you the most peace of mind and let you test both the migration and Hydra processes while maintaining an active instance.
I’m happy to help guide you or provide a strategy. Let me know what you find.
The fact that the thumbprint would be overwritten from the backup makes sense. I can also confirm that we’re seeing the same thumbprint in the new server UI as in the old.
What I don’t understand is why the server is trying to communicate with the tentacle using a different thumbprint - presumably the new one generated during installation, rather than the thumbprint from the backup?
I managed to solve the issue by simply restarting the Octopus server. I should probably have done this after the data migration, but never did.
The thumbprints seem to be in sync now.
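For anyone who hits the same thing: the fix was just a restart of the Windows service. Assuming the default v3.0 service name (`OctopusDeploy` - check yours in services.msc), it was roughly:

```shell
# Restart the Octopus server service so it picks up the migrated certificate
net stop "OctopusDeploy"
net start "OctopusDeploy"
```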
Thanks for the help!
Glad you got it sorted. I haven’t heard of a restart being needed before; perhaps the certificate was cached and the earlier service restart didn’t force a refresh.
Just to note, since you’re planning a slow migration: if you do a full migration to get a base data set in, any data deleted from the source will remain in the destination.
The backup only overwrites and inserts; it never treats missing records as ‘deleted’. One customer whose migration took a month ended up with some variable anomalies, because records that had been deleted at the source remained in the upgraded data.
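To make the overwrite-and-insert semantics concrete, here is a hypothetical illustration in plain bash (not Octopus code): keys present in the source replace or add to the destination, but a key deleted at the source is left untouched.

```shell
#!/usr/bin/env bash
# Destination (the migrated v3.0 data set):
declare -A dest=([var1]=old1 [var2]=old2 [var3]=old3)
# Source (a later v2.6 backup); var3 was deleted at the source:
declare -A src=([var1]=new1 [var2]=new2)

# Migration: overwrite or insert every source key, never delete.
for k in "${!src[@]}"; do
  dest[$k]=${src[$k]}
done

for k in "${!dest[@]}"; do echo "$k=${dest[$k]}"; done | sort
# var3 still appears with its old value, even though it no longer exists at the source
```

This is why a final clean-up migration (or a fresh full migration) is worth doing before dropping v2.6.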
Let me know if you run into any problems.