Unexpected upgrade history detected during migration from Server to Container

I’m trying to migrate from Octopus Windows Server v2023.2 (Build 9088-hotfix.9778), which I upgraded to match the Cloud version so that I could do a project export/import, to the Octopus Container octopusdeploy/octopusdeploy:2023.1.10597, which is the latest container version.

I’m seeing the following error during task start-up:

Checking database schema upgrade history...
Database upgrade "abandoned" after 431ms

Unexpected upgrade history detected. You probably need to upgrade to a later version of Octopus Server. Extra upgrade script(s) found: Script0374RemoveTypesFromHealthCheckServerTasks, Script0375CreateKubernetesTaskResourceStatusTable, Script0375RemoveMachineHealthCheck, Script0376RemoveInsightsFeaturesConfiguration

Octopus.Shared.ControlledFailureException: Unexpected upgrade history detected. You probably need to upgrade to a later version of Octopus Server. Extra upgrade script(s) found: Script0374RemoveTypesFromHealthCheckServerTasks, Script0375CreateKubernetesTaskResourceStatusTable, Script0375RemoveMachineHealthCheck, Script0376RemoveInsightsFeaturesConfiguration

Clearly this is due to the version mismatch, but the container version is lagging far behind server and cloud. Do you expect to rev the published container version to one that would match the hotfix version I have?

Sincerely
Pete

Hi @peter_m_mcevoy,

Thanks for reaching out! I’d be happy to help get your 2023.2 server instance migrated to a container.

Do you expect to rev the published container version to one that would match the hotfix version I have?

Our Docker Hub images follow our general on-premises releases, so 2023.2 should be coming very soon, but we do have another repository for images that are still being tested.

You should be able to use the following image:
docker.packages.octopushq.com/octopusdeploy/octopusdeploy:2023.2.9088-hotfix.9778
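
For illustration, pulling and running that image would look roughly like the following; the port and environment variables are based on the standard options documented for the octopusdeploy/octopusdeploy image, and the connection string and master key values are placeholders for your own settings:

docker pull docker.packages.octopushq.com/octopusdeploy/octopusdeploy:2023.2.9088-hotfix.9778

docker run -d --name octopusdeploy \
  -p 8080:8080 \
  -e ACCEPT_EULA=Y \
  -e DB_CONNECTION_STRING="<your existing Octopus SQL Server connection string>" \
  -e MASTER_KEY="<your existing master key>" \
  docker.packages.octopushq.com/octopusdeploy/octopusdeploy:2023.2.9088-hotfix.9778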

Hope that helps, feel free to reach out with any questions or issues at all!

Best Regards,

Brilliant! That worked a treat :slight_smile:

(Although I now have other issues on startup, that’s on my side - I think I’m not giving the server enough time to start…)

Regarding the appropriate time for me to swap from the “hotfix” channel to the main release channel on Docker Hub, do I just need to wait for a version higher than 2023.2.9088? Will that upgrade from the hotfix?

Pete

That’s correct - once 2023.2 is fully released, which we’re hoping will be within the next week or so, the available container version will be a build number higher than the hotfix, allowing you to upgrade.

Thank you both for the swift response and resolution.

Pete

Hi Paul,
I’ve noticed today that there are a few new container releases in the Docker Hub repo that appear to be higher than the hotfix version you pointed me at. However, when I visit this “compare” page, I’m seeing that the upgrade is not supported.

Can you advise?

Pete

Hey Pete,

Sorry you’re running into this issue again. The way we create these hotfix and main-branch releases can be a little off with regard to the build numbers.

Taking a look at this, 2023.2.9088-hotfix.9778 was released in May but is not public (as it was released to our Cloud customers); it is, however, set to pre-release.

2023.2.12209 was released yesterday as a public version, so in theory you should be able to upgrade to it, but I can see that, as you mentioned, the comparison page throws errors when you select those two versions to compare the upgrade.

If you select 2023.2.9087 to 2023.2.12209 on the comparison page, though, that works fine.

2023.2.9087 was released in April, so it’s even older than 2023.2.9088-hotfix.9778, but it’s not pre-release or public.

I have sent this through to our R&D team to see if we are missing something here, or if they can just untick a pre-release box on 2023.2.9088-hotfix.9778 and that will fix this. You may be able to do the upgrade just fine, but I would rather consult the engineers and make sure everything will run smoothly for you during the upgrade.

I do think the comparison page is kicking up a fuss because 2023.2.9088-hotfix.9778 is a pre-release, so hopefully it’s just a matter of unticking that box. You can’t even upgrade from that version to the latest 2023.3, and that is definitely newer than 2023.2.9088-hotfix.9778.

I will get back to you with what the engineers say.

Kind Regards,
Clare

Yep - I kinda guessed as much. TBH, I was surprised to see the pre-release/hotfix build numbers on that comparison page at all.

I can wait for your update - I was just keen to get back onto the “official” track (and to see whether newer build numbers clear a Snyk critical alert).

Pete

Hey Pete,

Some good news: the engineers got back to me last night and said a merge from the hotfix branch was missed, which is why the comparison page reports no valid upgrade path. You can safely upgrade to 2023.2.12209.
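
Once you are ready, moving back to the Docker Hub channel should just be a matter of pointing your deployment at the public tag, along the lines of:

docker pull octopusdeploy/octopusdeploy:2023.2.12209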

The engineers are going to clear up that upgrade path, but the comparison will continue to say the upgrade is invalid until we get a version that contains the hotfix commits.

Hopefully that helps alleviate any concerns you have with the upgrade; from a PR point of view you can upgrade to 2023.2.12209. I would still err on the side of caution, though: perform a database integrity check before you upgrade, then take a DB backup (you don’t need to encrypt the master key), then upgrade and run another integrity check.

This is the standard process for upgrading anyway, but I thought I would highlight it so you know your DB tables are good pre-upgrade, you have a backup if you do need to revert, and once you upgrade you can make sure your DB is still in a good state.
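
If it is useful, here is a minimal sketch of taking that backup with sqlcmd before you swap the container over; the server name, database name, and backup path are placeholders for your own environment, and you would add -E or -U/-P for your authentication method:

sqlcmd -S your-sql-server -Q "BACKUP DATABASE [OctopusDeploy] TO DISK = N'/var/backups/OctopusDeploy-pre-upgrade.bak' WITH COMPRESSION, CHECKSUM"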

We are here every step of the way in case those integrity checks don’t pass or the upgrade does not go smoothly, but we don’t usually see issues.

Let us know once you have performed the upgrade so we know it has gone well. Also, the comparison page just takes the PR updates from GitHub when we push builds out, and if a hotfix contains breaking changes or a flagged fix, it is logged on the comparison page, which is why you see some hotfixes on there.

Let me know if you need any more information, and I hope the upgrade goes well!

Kind Regards,
Clare

Apologies for not getting back sooner, Clare - with time off and subsequent work priorities when I got back, I still haven’t had a chance to look at this new build and the upgrade. I hope to by the end of the week… I’ll revert when I can.

Pete

Hey @peter_m_mcevoy,

No worries at all - we are all busy bees, and upgrades can sometimes take a while depending on what’s going on work-wise.

Thank you for keeping me in the loop about where you are at with the upgrade. Reach out if it does not work and we will investigate further for you.

Kind Regards,
Clare

I’ve successfully updated the instance in our PCI Staging account to v2023.2 (Build 12513). I was not aware of the Diagnostic check in advance, but when I ran it before the upgrade everything was “Passed”, so I was comfortable proceeding.

One gotcha did arise: we are running the container as a single task in AWS ECS. I believed that I could use the ECS “Update Service | Force New Deployment” function to start a new task pointing at the new build and terminate the old task. However, the start-up logs for the new instance had this message:

Detected instance connected to this database (DingPciStag). All nodes must be shut down before proceeding with database upgrade. If a process exited uncleanly it may take a couple minutes before its offline status can be detected.

When ECS redeploys, it starts a new task first and waits for it to be healthy before terminating the previous one, so as to get close to zero downtime. The solution was easy: scale the service to zero tasks first, and then bring it back up with 1 task.
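
For anyone following the same path, here is a rough sketch of that scale-down/scale-up with the AWS CLI; the cluster name, service name, and task definition revision are placeholders for your own setup:

# Scale to zero so the old node releases the database
aws ecs update-service --cluster my-cluster --service octopus-server --desired-count 0
aws ecs wait services-stable --cluster my-cluster --services octopus-server

# Bring the service back up on the task definition that points at the new image
aws ecs update-service --cluster my-cluster --service octopus-server --desired-count 1 --task-definition octopus-server:42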

I’ll be scheduling the update of our live PCI environment for sometime next week, following this exact procedure.

Thanks Clare!

Pete

(PS: I was a little disappointed that there are still a few Snyk vulnerabilities listed against the image.)

Hey @peter_m_mcevoy,

Great news that you managed to get the upgrade to work! I am glad we could sort the images out so that you had a valid upgrade path.

Thank you for posting the gotcha too; we have recently learnt that we now log that warning about instances not being shut down. We do have a document on upgrading Octopus HA configurations, which I am sure you have seen, but it does mention there will be a brief period of downtime because you are required to shut all the nodes down before an upgrade.

Some of our customers were running into issues because their nodes were not shutting down properly before an upgrade, so we have now put measures in place to stop that, which ensures a much smoother upgrade.

It’s great that you managed to sort your scripts to account for this, so thank you for posting that up for us and other customers.

As for the vulnerabilities, we do patch our latest releases and account for any new CVEs when they come out, but we do not backport those security patches. So when you download an Octopus version, it should be up to date for any CVEs we had logged at the time we created that release.

If you are seeing any vulnerabilities, it is probably because they came out after we patched the latest release; any upgrades you do will contain the latest security patches, but they are not backported to older versions.

I hope that helps clarify why you may be seeing some vulnerabilities still.

If you have any other queries surrounding this, let us know and I will do my best to help.

Kind Regards,
Clare
