Unexpected upgrade history detected

Continuing the discussion from The INSERT statement conflicted with the FOREIGN KEY constraint:

We recently discovered an issue with the shared storage in our HA setup. Packages uploaded to the internal NuGet repository were being written to a local path on one of the nodes (C:\repository\Spaces-1\feeds-builtin) instead of to the shared storage path we had previously configured. We thought this would be easy to solve, so we disabled one of the nodes in order to update the shared storage paths on it (planning to do the same for the other node afterwards). But we ran into the following issue when running the command Octopus.Server.exe path --clusterShared '...':

Unexpected upgrade history detected. You probably need to upgrade to a later version of Octopus Server.
Extra upgrade script(s) found: Script0319RepairEntityJsonBlobs, Script0320RepairMachineAndActionTemplateJsonBlobs, Script0331CleanupGuestPrivateSpaces

This seems like it might be related to running those SQL scripts we received in our previous issue. We also tried running with --skipDatabaseCompatibilityCheck and --skipDatabaseSchemaUpgradeCheck, and that allowed the shared storage commands to complete. But now we see the same error in the logs when trying to start the disabled node:

Octopus.Shared.ControlledFailureException: Unexpected upgrade history detected. You probably need to upgrade to a later version of Octopus Server.
Extra upgrade script(s) found: Script0319RepairEntityJsonBlobs, Script0320RepairMachineAndActionTemplateJsonBlobs, Script0331CleanupGuestPrivateSpaces
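
For reference, the commands we ran on the disabled node looked roughly like this (the instance name and share path are placeholders here, not our real values):

# Stop the Octopus service on the node we had disabled.
.\Octopus.Server.exe service --instance OctopusServer --stop

# Repoint the shared storage; the two skip flags are what let this complete.
.\Octopus.Server.exe path --instance OctopusServer --clusterShared '\\fileshare\OctopusShared' --skipDatabaseCompatibilityCheck --skipDatabaseSchemaUpgradeCheck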

What is the best way forward here? Do we just upgrade to the latest version, and is this safe given the current state of the database? We are currently running version 2022.1.2278.

Hey @Øyvin_Richardsen,

Thank you for contacting us and updating us on the latest error you are getting. Having looked at the previous forum post you linked, I do recall you having a lot of issues with the HA upgrade and the way it was performed.

You managed to get into a good, working state at the end of the last post. The SQL queries we gave you were to replace some missing indexes so your instance would pass its integrity checks. We give those missing-index scripts out to a lot of customers who have issues with their integrity checks; the scripts are generated specifically for each customer, but they would not produce the errors you are now seeing.

I imagine those errors are down to the upgrade process performed last time: while we were working through the previous ticket, a number of steps caused some nodes to be upgraded in the wrong order, which left the DB in an inconsistent state and caused the missing indexes.

Is this the first time you have restarted those nodes since your last comment on the last ticket? Are all of the nodes running the same version of Octopus?

Are you able to send us the Octopus Server logs for each node so we can take a look? It might be that you only need to upgrade one node so it is compatible with the DB. Another thing to look at is the installation history table in the DB; I have sketched a query below that will show it. That table records what version of Octopus the DB has seen for each node, so you can marry it up with what your nodes are actually running and spot any mismatch. They should all be running the same Octopus version.
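
Something along these lines will dump that table (the SQL server and database names below are placeholders, the table name assumes a default install, and it needs the SqlServer PowerShell module):

# Dump the installation history every node has written to the shared DB.
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Database 'OctopusDeploy' `
    -Query 'SELECT * FROM dbo.OctopusServerInstallationHistory ORDER BY [Id]'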

I have set up a private link here for you to upload the logs to. If you can also check the DB and let us know what version it says you should be running, we can go from there.

I look forward to hearing from you,

Kind Regards,

Clare

Thanks for the quick response!

Both nodes report the same version:

PS C:\Octopus> .\Octopus.Server.exe version
2022.1.2278

I am sure both nodes have been restarted between the last issue and now. We added some more steps to our automated setup, like setting the task cap for each node and enabling LDAP.
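
For context, the new automation steps look roughly like this (the instance name and task cap value are placeholders):

# Cap the number of concurrent tasks this node will pick up.
.\Octopus.Server.exe node --instance OctopusServer --taskCap 5

# LDAP is enabled in a separate step via the LDAP provider's settings;
# the exact flags depend on the extension version, so I have left them out here.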

I uploaded the logs for the node that is failing, but only from the point where we started the current maintenance. Let me know if you need more, and I’ll try to accommodate.

But I think I might have found the issue here:

OctopusServerInstallationHistories-121   <first node>    2022.1.2278   2022-04-05 13:13:41.8676033 +02:00   {}
OctopusServerInstallationHistories-141   <second node>   2022.1.2278   2022-04-05 14:44:17.1991875 +02:00   {}
OctopusServerInstallationHistories-161   <unknown node>  2022.1.2637   2022-06-01 12:10:59.8053668 +00:00   {}
OctopusServerInstallationHistories-181   <unknown node>  2022.1.2663   2022-06-13 08:11:57.6743137 +00:00   {}

I did a little digging, and it turns out someone was testing out the Octopus Deploy Terraform provider on a separate node (the <unknown node> above) that was supposed to run against an empty database, but they accidentally connected it to the production database instead. This probably also explains the issue with shared storage.
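
In other words, all it took was the test node's configure step pointing at the wrong connection string; something like this (the instance name and connection string are made up for illustration) is enough:

# With a newer Octopus build, this applies that build's upgrade scripts
# to whatever database the connection string points at.
.\Octopus.Server.exe database --instance TerraformTest --connectionString 'Server=SQLPROD01;Database=OctopusDeploy;Trusted_Connection=True;' --upgrade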

I'm guessing this means we need to upgrade all nodes to at least version 2022.1.2663 and hope that we do not see any issues with missing indexes like we did last time. We will also look into measures we can take to avoid something like this happening again.
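
One measure we are considering is a pre-flight check in our setup scripts that refuses to run schema-changing commands against a SQL host that is not on an allow-list for the environment; a minimal sketch (host names are placeholders):

# Allow-list of SQL hosts this environment's scripts may touch.
$allowedHosts = @('SQLTEST01')
$connectionString = 'Server=SQLPROD01;Database=OctopusDeploy;Trusted_Connection=True;'

# Parse the host out of the connection string.
$builder = New-Object System.Data.Common.DbConnectionStringBuilder
$builder.set_ConnectionString($connectionString)  # setter call avoids PowerShell's indexer quirk
$server = [string]$builder['Server']

if ($allowedHosts -notcontains $server) {
    throw "Refusing to run against unexpected SQL host '$server'."
}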

Hey @Øyvin_Richardsen,

Good spot with those upgrade logs! Was that from the DB?

Unfortunately, yes: as soon as you set up another node and connect it to the Octopus HA DB, it will upgrade the DB to that node's version (if the node is on a higher version than the DB). So your other nodes will need to be upgraded to 2022.1.2663 so they are all in a consistent state. Those rows show as Unknown because the nodes weren't set up through the Octopus Manager.
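
If you want to double-check a node before upgrading it, you can compare what the binary reports against the newest version recorded in the DB; a rough sketch (server and database names are placeholders, the table and column names assume a default install, and the string MAX is only safe while your build numbers have the same number of digits):

# What this node's binary reports (run from the Octopus install directory).
$nodeVersion = .\Octopus.Server.exe version

# Highest version any node has written to the shared DB (needs the SqlServer module).
$dbVersion = (Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Database 'OctopusDeploy' `
    -Query 'SELECT MAX([Version]) AS V FROM dbo.OctopusServerInstallationHistory').V

"Node binary: $nodeVersion vs DB high-water mark: $dbVersion"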

Once you have upgraded all the nodes, can you run a system integrity check for us and post the results if it fails? We can then send out another script for any missing indexes.

I did take a look at the logs you uploaded, but they just confirm what you have already pointed out: there is a DB mismatch, and you need to upgrade because the DB contains upgrade scripts your Octopus version doesn't have, which is why the service fails to restart.

Let me know if you get stuck with the upgrades; at least it will be a good opportunity to test out any new processes you put in place after last time!

I look forward to hearing if the integrity check runs ok,

Kind Regards,

Clare

We managed to squeeze in a short maintenance window tonight, and after upgrading both nodes to version 2022.1.2849 everything seems fine again.


Hey @Øyvin_Richardsen,

Awesome news that you are now up and running again; thanks for letting us know! It looks like you managed to get onto an even higher build number too, which is good.

Reach out if you need anything further,

Kind Regards,

Clare
