The INSERT statement conflicted with the FOREIGN KEY constraint

After upgrading to version 2022.1 (Build 2278), we have started getting this error a lot across different tasks (creating releases, triggering health checks or Calamari updates via the API, registering Tentacles, etc.). It might also be relevant that, as part of this upgrade, we changed our installation from a single server node to two nodes with shared storage.

The full error message typically looks something like this:

[Octopus Deploy] Octopus Server returned an error: Error while executing SQL command in transaction 'CreateDeployment.Create|80001141-0002-ad00-b63f-84710c7967bb|T250': The INSERT statement conflicted with the FOREIGN KEY constraint "FK_EventRelatedDocument_EventId". The conflict occurred in database "octopus", table "dbo.Event", column 'Id'.

The error message is pretty much identical in all cases, except for the GUID and the name of the transaction (CreateDeployment.Create, CreateTask.CreateTask, /api/(?<baseSpaceId>Spaces-\d+)/machines, etc.).
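For anyone hitting the same message, the foreign key named in the error can be inspected with plain T-SQL; only the constraint name from the error message is assumed here:

-- Show which tables the constraint from the error links together.
SELECT fk.name,
       OBJECT_NAME(fk.parent_object_id) AS child_table,
       OBJECT_NAME(fk.referenced_object_id) AS parent_table
FROM sys.foreign_keys AS fk
WHERE fk.name = 'FK_EventRelatedDocument_EventId';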

In some cases it seems to be related to these tasks being triggered from multiple sources in parallel, and it goes away if we force them to trigger sequentially. But this is not feasible in every case, since we run a lot of deployments across different teams every day, and the teams mostly manage their own Octopus Tentacles.

We're also seeing this error a lot in the server log, and are not sure if it's related:

An error occurred while trying to fetch the number of active SQL transactions: "Error while executing SQL command in transaction 'SqlTransactionMetricsProducer.UpdateMetrics': VIEW SERVER STATE permission was denied on object 'server', database 'master'.
The user does not have permission to perform this action.
The command being executed was:
SELECT COUNT(*) FROM sys.dm_tran_active_transactions" System.Exception: Error while executing SQL command in transaction 'SqlTransactionMetricsProducer.UpdateMetrics': VIEW SERVER STATE permission was denied on object 'server', database 'master'.
The user does not have permission to perform this action.
The command being executed was:
SELECT COUNT(*) FROM sys.dm_tran_active_transactions

Is there anything we can do to mitigate this?

Good morning @yvin_Richardsen,

Thank you for contacting Octopus Support and sorry to hear you are having issues after upgrading.

I will tackle your second issue first, if you don't mind, as it is a known issue that we have a GitHub issue in place for. The VIEW SERVER STATE permission error can be ignored, as it does not affect Octopus in any way. It should be fixed in version 2022.1.2300, the next release after the version you upgraded to.
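If you would rather silence the warning in the meantime, your DBA can grant that permission to the SQL login Octopus uses. A minimal sketch, assuming a placeholder login name of [OctopusLogin] (replace with your own):

-- Run against the master database as a sysadmin.
-- [OctopusLogin] is a placeholder for the login your Octopus Server connects with.
USE [master];
GRANT VIEW SERVER STATE TO [OctopusLogin];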

As for your first error:

The INSERT statement conflicted with the FOREIGN KEY constraint "FK_EventRelatedDocument_EventId". The conflict occurred in database "octopus", table "dbo.Event", column 'Id'.

I know you alluded to the error being the same across the board, but are you getting this exact line in all of the errors you are seeing, or are they different key constraints in different tables?

Are you able to run a system integrity check for me, please, and post the raw results?
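In the meantime, if you want a quick manual look at the child table from the constraint name, this plain T-SQL lists whatever indexes currently exist on it (nothing Octopus-specific is assumed beyond the table name in your error):

-- List the indexes currently present on the child table from the error.
SELECT i.name, i.type_desc, i.is_unique
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.EventRelatedDocument')
  AND i.name IS NOT NULL
ORDER BY i.name;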

I look forward to hearing from you,

Kind Regards,

Clare Martin

It's the same key constraint, but different events/tasks. A few more examples:

Error while executing SQL command in transaction '/api/(?<baseSpaceId>Spaces-\d+)/machines|64a0bc8f-91e8-45c0-8302-20c581dfef83|T372': The INSERT statement conflicted with the FOREIGN KEY constraint "FK_EventRelatedDocument_EventId"
Error while executing SQL command in transaction 'CreateTask.CreateTask|80002697-0002-e900-b63f-84710c7967bb|T465': The INSERT statement conflicted with the FOREIGN KEY constraint "FK_EventRelatedDocument_EventId".

I will run the integrity check and post the results shortly.

The integrity check did indeed find some errors.

Missing item: IDX dbo.IX_Blob_BlobId BlobId
Missing item: IDX dbo.IX_Blob_BlobId CLUSTERED 0
Missing item: IDX dbo.IX_Blob_BlobId ExtensionId
Missing item: IDX dbo.IX_BuildInformationCreatedDate CreatedDate
Missing item: IDX dbo.IX_BuildInformationCreatedDate NONCLUSTERED 0
Missing item: IDX dbo.IX_BuildInformationPackageIdPerSpace NONCLUSTERED 0
Missing item: IDX dbo.IX_BuildInformationPackageIdPerSpace PackageId
Missing item: IDX dbo.IX_BuildInformationPackageIdPerSpace SpaceId
Missing item: IDX dbo.IX_DeploymentSettings_DataVersion DataVersion
Missing item: IDX dbo.IX_DeploymentSettings_DataVersion NONCLUSTERED 0
Missing item: IDX dbo.IX_DeploymentSettings_SpaceId_ProjectId CLUSTERED 0
Missing item: IDX dbo.IX_DeploymentSettings_SpaceId_ProjectId ProjectId
Missing item: IDX dbo.IX_DeploymentSettings_SpaceId_ProjectId SpaceId
Missing item: IDX dbo.IX_DynamicInfrastructureLifecycle_BusinessProcess BusinessProcessState
Missing item: IDX dbo.IX_DynamicInfrastructureLifecycle_BusinessProcess LastModified
Missing item: IDX dbo.IX_DynamicInfrastructureLifecycle_BusinessProcess NONCLUSTERED 0
Missing item: IDX dbo.IX_EventRelatedDocument_EventId_RelatedDocumentIdPrefix EventId
Missing item: IDX dbo.IX_EventRelatedDocument_EventId_RelatedDocumentIdPrefix NONCLUSTERED 0
Missing item: IDX dbo.IX_EventRelatedDocument_EventId_RelatedDocumentIdPrefix RelatedDocumentId
Missing item: IDX dbo.IX_EventRelatedDocument_EventId_RelatedDocumentIdPrefix RelatedDocumentIdPrefix
Missing item: IDX dbo.IX_EventRelatedDocument_RelatedDocumentIdPrefix EventId
Missing item: IDX dbo.IX_EventRelatedDocument_RelatedDocumentIdPrefix NONCLUSTERED 0
Missing item: IDX dbo.IX_EventRelatedDocument_RelatedDocumentIdPrefix RelatedDocumentId
Missing item: IDX dbo.IX_EventRelatedDocument_RelatedDocumentIdPrefix RelatedDocumentIdPrefix
Missing item: IDX dbo.IX_GitCredential_SpaceId NONCLUSTERED 0
Missing item: IDX dbo.IX_GitCredential_SpaceId SpaceId
Missing item: IDX dbo.IX_HalibutMessageQueueItem_SequenceNumber CLUSTERED 0
Missing item: IDX dbo.IX_HalibutMessageQueueItem_SequenceNumber Direction
Missing item: IDX dbo.IX_HalibutMessageQueueItem_SequenceNumber Endpoint
Missing item: IDX dbo.IX_HalibutMessageQueueItem_SequenceNumber SequenceNumber
Missing item: IDX dbo.IX_MachineHealthCheck_BusinessProcess BusinessProcessState
Missing item: IDX dbo.IX_MachineHealthCheck_BusinessProcess LastModified
Missing item: IDX dbo.IX_MachineHealthCheck_BusinessProcess NONCLUSTERED 0
Missing item: IDX dbo.IX_MachineScriptTask_BusinessProcess BusinessProcessState
Missing item: IDX dbo.IX_MachineScriptTask_BusinessProcess LastModified
Missing item: IDX dbo.IX_MachineScriptTask_BusinessProcess NONCLUSTERED 0
Missing item: IDX dbo.IX_MessageBusCursor_ConsumerGroupId ConsumerGroupId
Missing item: IDX dbo.IX_MessageBusCursor_ConsumerGroupId NONCLUSTERED 0
Missing item: IDX dbo.IX_MessageBusEvent_SequenceNumber CLUSTERED 0
Missing item: IDX dbo.IX_MessageBusEvent_SequenceNumber SequenceNumber
Missing item: IDX dbo.IX_ProjectTrigger_RunbookId NONCLUSTERED 0
Missing item: IDX dbo.IX_ProjectTrigger_RunbookId RunbookId
Missing item: IDX dbo.IX_ProjectTrigger_RunbookId SpaceId
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled Assembled
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled ChannelId
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled Id
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled NONCLUSTERED 0
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled ProjectDeploymentProcessSnapshotId
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled ProjectId
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled ProjectVariableSetSnapshotId
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled SpaceId
Missing item: IDX dbo.IX_Release_SpaceId_ProjectId_ChannelId_Assembled Version
Missing item: IDX dbo.IX_Runbook_DataVersion DataVersion
Missing item: IDX dbo.IX_Runbook_DataVersion NONCLUSTERED 0
Missing item: IDX dbo.IX_Runbook_DataVersion SpaceId
Missing item: IDX dbo.IX_Runbook_ProjectId NONCLUSTERED 0
Missing item: IDX dbo.IX_Runbook_ProjectId ProjectId
Missing item: IDX dbo.IX_Runbook_Published_PublishedRunbookSnapshotId NONCLUSTERED 0
Missing item: IDX dbo.IX_Runbook_Published_PublishedRunbookSnapshotId PublishedRunbookSnapshotId
Missing item: IDX dbo.IX_Runbook_Published_PublishedRunbookSnapshotId SpaceId
Missing item: IDX dbo.IX_Runbook_SpaceId NONCLUSTERED 0
Missing item: IDX dbo.IX_Runbook_SpaceId SpaceId
Missing item: IDX dbo.IX_RunbookProcess_ProjectId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookProcess_ProjectId ProjectId
Missing item: IDX dbo.IX_RunbookProcess_SpaceId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookProcess_SpaceId SpaceId
Missing item: IDX dbo.IX_RunbookRun_Index Created
Missing item: IDX dbo.IX_RunbookRun_Index EnvironmentId
Missing item: IDX dbo.IX_RunbookRun_Index Id
Missing item: IDX dbo.IX_RunbookRun_Index Name
Missing item: IDX dbo.IX_RunbookRun_Index NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRun_Index ProjectId
Missing item: IDX dbo.IX_RunbookRun_Index RunbookSnapshotId
Missing item: IDX dbo.IX_RunbookRun_Index TaskId
Missing item: IDX dbo.IX_RunbookRun_ProjectId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRun_ProjectId ProjectId
Missing item: IDX dbo.IX_RunbookRun_RunbookId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRun_RunbookId RunbookId
Missing item: IDX dbo.IX_RunbookRun_SpaceId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRun_SpaceId SpaceId
Missing item: IDX dbo.IX_RunbookRun_TenantId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRun_TenantId TenantId
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory Created
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory DeployedBy
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory EnvironmentId
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory Name
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory ProjectId
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory RunbookId
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory RunbookSnapshotId
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory TaskId
Missing item: IDX dbo.IX_RunbookRun_UpdateRunbookRunHistory TenantId
Missing item: IDX dbo.IX_RunbookRunHistory_IsPublished IsPublished
Missing item: IDX dbo.IX_RunbookRunHistory_IsPublished NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRunRelatedMachine_RunbookRun_Machine MachineId
Missing item: IDX dbo.IX_RunbookRunRelatedMachine_RunbookRun_Machine NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookRunRelatedMachine_RunbookRun_Machine RunbookRunId
Missing item: IDX dbo.IX_RunbookSnapshot_Assembled Assembled
Missing item: IDX dbo.IX_RunbookSnapshot_Assembled NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookSnapshot_DataVersion DataVersion
Missing item: IDX dbo.IX_RunbookSnapshot_DataVersion NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookSnapshot_FrozenRunbookProcessId FrozenRunbookProcessId
Missing item: IDX dbo.IX_RunbookSnapshot_FrozenRunbookProcessId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookSnapshot_ProjectId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookSnapshot_ProjectId ProjectId
Missing item: IDX dbo.IX_RunbookSnapshot_RunbookId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookSnapshot_RunbookId RunbookId
Missing item: IDX dbo.IX_RunbookSnapshot_SpaceId NONCLUSTERED 0
Missing item: IDX dbo.IX_RunbookSnapshot_SpaceId SpaceId
Missing item: IDX dbo.IX_ServerTask_BusinessProcess BusinessProcessState
Missing item: IDX dbo.IX_ServerTask_BusinessProcess LastModified
Missing item: IDX dbo.IX_ServerTask_BusinessProcess NONCLUSTERED 0
Missing item: IDX dbo.IX_ServerTask_BusinessProcess SpaceId
Missing item: IDX dbo.IX_Subscription_DataVersion DataVersion
Missing item: IDX dbo.IX_Subscription_DataVersion NONCLUSTERED 0 

Hey @yvin_Richardsen,

Thank you for posting the results of the integrity check. That is a lot of missing indexes, and it would explain the issues you are facing with your instance.

I am putting together a SQL query for you to create those indexes, which should hopefully get you back up and running again.

It is taking me a while, as there are quite a few missing indexes, but I will post the script as soon as it is ready for you to run.

I just wanted to keep you in the loop and to let you know I am working on your issue.

If you need anything else in the meantime please reach out.

Kind Regards,

Clare Martin

We appreciate the quick response. I'm not sure why these indexes are missing, but I'm guessing there might have been an issue with the database migration during the upgrade. We upgraded from an older version (2019.9.10), so there were probably quite a few schema changes. The migration itself was not monitored very closely, but I believe it took close to 30 minutes. We have made some adjustments to our system architecture that should allow us to upgrade much more frequently, which should hopefully make things smoother going forward.

Good afternoon @yvin_Richardsen,

Sorry that took so long; I had a nightmare with both of my test instances, so it took a while to get them up and running again.

I now have the SQL query you need to run against your Octopus database to create those indexes. Please ensure you perform a database and master key backup first, just in case you need to revert.

I cannot paste the index re-creation SQL query here, as it has too many characters, so I have put it in a text document for you:

FixedIndexes.txt (12.2 KB)
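For anyone reading this later without the attachment: the statements in such a script follow the pattern below. This is an illustrative sketch only, using one of the indexes from your integrity check output; the key column order here is an assumption, and the authoritative definitions must come from the schema for your exact Octopus version.

-- Recreate one missing index, guarded so the script is safe to re-run.
-- The column order shown is an assumption, not the official Octopus definition.
IF NOT EXISTS (
    SELECT 1 FROM sys.indexes
    WHERE name = 'IX_EventRelatedDocument_EventId_RelatedDocumentIdPrefix'
      AND object_id = OBJECT_ID('dbo.EventRelatedDocument')
)
BEGIN
    CREATE NONCLUSTERED INDEX [IX_EventRelatedDocument_EventId_RelatedDocumentIdPrefix]
        ON dbo.EventRelatedDocument (EventId, RelatedDocumentId, RelatedDocumentIdPrefix);
END;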

Once this has run, can you perform the integrity check again, please, and see if it now passes?

Kind Regards,

Clare Martin

Just on another note about why this might have happened:

Did you migrate to HA while still on 2019.9.10 and then upgrade after the migration was complete?

If you set up a new instance on 2022.1.2278 and then overwrote the database with the migrated data, that could potentially cause what you are seeing.

Supported:

2019.9.10 --migrate--> 2019.9.10 --upgrade--> 2022.1.2278
2019.9.10 --upgrade--> 2022.1.2278 --migrate--> 2022.1.2278

Not Supported:

2019.9.10 --migrate--> 2022.1.2278

Hopefully doing the re-index helps though and gets you back up and running.

Kind Regards,

Clare Martin

Thanks! We will schedule a maintenance window, run the queries, and then run a new integrity check.

I think what you are describing is essentially what we did when we upgraded. We disabled the existing server node (without upgrading), and then set up a completely new node on 2022.1.2278, which probably counts as migrating to HA, even if only one node is active? Then we added another node on 2022.1.2278.

Hey @yvin_Richardsen,

We have some documentation on migrating over to HA if you haven't seen it.

It has some really good information in it, but I just wanted to point out the section below, as it helps with the rest of my response:

After configuring a load balancer and moving the database and files, adding a new node is trivial.

  1. Create a new server to host Octopus Deploy.
  2. Install the same version by downloading it from our download archive.
  3. When the Octopus Manager loads, click the Add this instance to a High Availability cluster and follow the wizard.
  4. Add that server to your load balancer.

So from what you are describing, it seems like you disabled the first node (the only one), then added a new node using Add this instance to a High Availability cluster in the wizard, but the new node was on 2022.1.2278. You mentioned you then added another node on 2022.1.2278, but didn't mention what you did with the original node (on 2019.9.10). Did you re-enable that and then upgrade it, or does that node no longer get used?

I only ask because by adding the new Octopus node you essentially upgraded the DB to 2022.1.2278, but if you then switched on the old node at any point (the one on 2019.9.10), it would have deleted the indexes that 2022.1.2278 created, as it would have seen them as not needed.

So ideally I would have upgraded the stand-alone Octopus instance to 2022.1.2278, made sure all your projects and everything else worked, then followed the migration guide to change the DB, add a load balancer, and add a new Octopus instance on 2022.1.2278. That way you would have known your Octopus instance worked on the new version before migrating, the DB would always have been on the same Octopus version, and if something did go wrong there would have been no doubt about whether the upgrade or the migration caused the issue.

From my experience, it's always best to do one big change at a time, test it, ensure it all works, and then do the next big change, rather than doing two big changes at once. (This is a note for future customers, by the way, in case they come across this and want to do the same thing: upgrade and migrate to HA.)

Hopefully, though, the index creation will get you back up and working properly, and you can enjoy your HA Octopus environment.

Just as a side note, we have some documentation here on how to upgrade your HA instances, if you haven't seen it yet, which will prove useful in the future.

I look forward to hearing if that query resolves the issues you are seeing,

Reach out if you need anything in the meantime,

Kind Regards,

Clare Martin

I did some more digging into our upgrade process, and I will try to outline it here for clarity. It looks like we stumbled somewhere, and that is what is causing these problems. In retrospect, I completely agree with your assessment of doing one big change at a time, but in this particular case, I think it is likely that we could have stumbled anyway.

First of all, we strive to automate the setup of all our infrastructure, and our chosen tool for this is Ansible. So when we set up an Octopus Deploy server node, everything is handled by Ansible, from creating the new VM in our datacenter, to downloading and running the Octopus Deploy installer and setting up the server instance by using Octopus.Server.exe commands. Before the actual upgrade, we set up a parallel environment where we tested that our Ansible scripts would handle the upgrade from 2019.9.10 to 2022.1.2278 and register everything correctly. But we did not test the HA migration, and this was probably our first mistake.

The old server node on 2019.9.10 was set up manually, so we wanted to phase it out completely to ensure everything could be handled by Ansible, making it easy to recover in case of disaster. This is the reason why we chose to set up a completely new node instead of upgrading the existing one. So the old node was shut down, but not "unregistered". At some point after the upgrade, someone booted this VM to check a few things, and this may have caused the service to start automatically and try to connect to the database. This was possibly another mistake.

Our Octopus Deploy database was already managed separately by our database team, so no further action was required to make HA mode work. But we had a separate issue during the upgrade. We tested our Ansible setup a few weeks before the actual maintenance window, and during that time the database team migrated the Octopus Deploy database to a newer version of SQL Server (without notifying us). This somehow corrupted the user configuration in the database, which caused the new 2022.1.2278 node to fail to connect to the database when we set it up during the maintenance window, so we spent some time troubleshooting and fixing this before Ansible could finish setting up the new node.
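I did not capture the exact fix the database team applied, but the symptoms matched the classic orphaned-user problem you can get after moving a database between SQL Server instances. Assuming that was the cause (an assumption on my part), the standard check and remap looks like this, with placeholder names:

-- Run in the Octopus database after moving it to a new SQL Server instance.
-- Reports database users whose SIDs no longer match a server login:
EXEC sp_change_users_login 'Report';
-- Re-attach a user to the correct login ([OctopusUser] and [OctopusLogin] are placeholders):
ALTER USER [OctopusUser] WITH LOGIN = [OctopusLogin];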

We of course had to set up shared storage and move all BLOB data, but this was a trivial exercise.

This brings us to our final (and probably critical) mistake. Since the third step in adding a new node (Add this instance to a High Availability cluster) is a manual one, it was never added to our Ansible setup, and it was most likely forgotten in the span of those weeks before the maintenance window. Instead, the second node was registered with Octopus.Server.exe in exactly the same way as the first one. But I'm still a little bit confused, since I tried to start the Octopus Manager on both of our server nodes today but was unable to find the Add this instance to a High Availability cluster option anywhere. Are we missing something, or is there something I have misunderstood? And is there currently a way to automate this final step too?

Hi @yvin_Richardsen,

Thank you for that detailed description of your upgrade and migration process.

It does seem like there were a lot of 'gotcha' issues there, but I think the main one for the index issue would be:

So the old node was shut down, but not "unregistered". At some point after the upgrade, someone booted this VM to check a few things, and this may have caused the service to start automatically and try to connect to the database. This was possibly another mistake.

If you booted the old node up, it would have connected to the DB. If that happened, it would have seen the newer indexes (being the latest instance to access the DB) and deleted them, as it would have considered them not required (we do this to keep the DB from growing huge with old indexes that are not needed for that Octopus version). We are looking to improve the way Octopus handles this in the future, as we do get a lot of customers with indexing issues after an upgrade (usually in an HA environment), but this only tends to happen when our upgrade process for HA instances is not followed verbatim. Sometimes it happens because something goes wrong and a service starts by accident.

Have you confirmed your HA instance is working as expected? Have you performed a failover test just to make sure your nodes are actually part of the HA cluster? It would be a bit of a shock if one of your nodes went down and HA didn't kick in and flip over to the other node if you needed to get a deployment out at a critical time.

If you have done a failover test and you are happy then I think we can leave the setup issues and get that index recreation done to get your integrity tests to pass. You should then hopefully get rid of the Foreign Key issue and be able to deploy and perform Octopus tasks as expected.

On your question about adding a new node using Add this instance to a High Availability cluster: you would need to create a new instance in the Octopus Manager for that option to appear.

This should also appear if you have a new server and go through the Octopus Install wizard on setup.

Your first node (the 2019.9.10 one) should have been your first Octopus server in the HA cluster; you then should have created the new node (also on 2019.9.10) and added that to the cluster. I think you may have made your migration a lot harder by creating a new server (on 2022.1.2278), adding that to the existing DB, and THEN adding another server to the HA cluster. Using that Add this instance to a High Availability cluster button links the node to the cluster and allows it to share the DB with no issues.

I'm pretty sure if you point a new instance to an existing database that has a server already configured, it will automatically add it as another HA node.

To automate things like this, I usually go through the Octopus install wizard manually and click the relevant buttons. If you use the wizard to add a node to the HA cluster, it takes you through a series of options, including which existing DB to use, a unique server name, the master key for the existing DB, the port number to use for the web portal, etc. Right at the end of the wizard, before you install, it lets you see the script.

This is the easiest way for you to script this, but go through the wizard yourself, because your settings and DB will be different from my example. We have some documentation on this (for a setup with Active Directory), but I would go through the wizard manually to get the full script you need.

Does that information help at all? Sorry it was another long post but I wanted to try and answer all your questions in as much detail as possible.

Let me know how the index creation goes. I would definitely perform a failover test to ensure your instances have actually been set up for HA and are not just two separate nodes sharing your DB.
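One extra sanity check you can do from the database side, assuming your schema matches the versions I have looked at, where cluster members are recorded in dbo.OctopusServerNode (worth verifying against your own database before relying on it):

-- Assumed table name: dbo.OctopusServerNode holds one row per cluster member.
-- Both of your nodes should be listed here.
SELECT *
FROM dbo.OctopusServerNode;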

I imagine you have already gone through our docs on HA, but I will link them here for you. We have a lot of information on setup, upgrades, migration, scaling, etc., so you may find something useful in there, since it looks like you are scripting a lot of this.

Kind Regards,

Clare Martin

We have tested the HA setup, and it is working as expected. We have already been able to use this to update service certificates and add new bindings (which require a restart of the service) without downtime or a maintenance window, by ensuring the node we are updating is drained before restarting. We also see that tasks are split between the nodes when both are active.

Based on your assessment that the old node is the cause of our problems, I think our automated setup of new nodes is correct, also with respect to HA. I thought maybe the Add this instance to a High Availability cluster was something completely different, but the Octopus.Server.exe commands seem to match the ones we are currently running. I will double-check to make sure.

For the record, we are generally very happy with how easy it is to automate the setup of both single nodes and HA mode. Supplementing with some API interaction, we can now do pretty much all maintenance almost completely hands-off.

The SQL queries have been sent to our database team, and we are still waiting for an update from them. But thank you for taking the time to look into our issue! This has been a great help in understanding how the upgrades and migrations are performed.

Hey @yvin_Richardsen,

Thank you for getting back to me. I am glad it's all working as expected; the last thing we want is a customer running into issues on their instance and it creating a headache during deployments or a critical update.

I also noticed there is no specific 'HA' command when you view the script from the HA wizard. I think it just sees the DB and master key and assumes it's an HA node, which is neat!

Let me know if you run into issues with the SQL query as we are always here to help.

What you have done with automating your HA setup is pretty cool; some of the ways our customers use Octopus are really creative and even surprise Support sometimes (in a good way). So it's nice to see how you have set this up and that it works well for you.

I am also glad you find it easy to automate both single and HA nodes; it's nice to hear our customers are enjoying using Octopus!

The hard part of migrating is now done. Upgrading HA clusters is pretty straightforward as long as the services for the nodes stay stopped and only start when they should, so hopefully you will experience a more painless process moving forward.

Reach out if there is anything else you need, and thank you for all your detailed comments and responses. Another customer might see this and get ideas of their own on how to automate their setup, so you will have helped some other Octopus users out there too!

Kind Regards,

Clare Martin

The integrity check reports no issues after running the SQL queries, and we have confirmed that some operations that previously failed now work as they should. Thanks for the help!

Hey @yvin_Richardsen,

Fantastic news that your instance is now passing its integrity checks and is looking in a much better state.

Thank you for your patience whilst we worked through this and for all your detailed answers to our questions.

Don't hesitate to post again if there are any other issues you experience in the future; we love to help here at Octopus Support!

Kind Regards,

Clare Martin
