Sporadic "Missing Resources" Error When Deploying

Occasionally we run into an issue where release teams can’t deploy to an environment (specifically our “Production” environment).

The target environment is part of the project’s lifecycle, and there are no step restrictions for Prod. Our workaround has been to change the project’s lifecycle to a different one that includes the same QA environment that was used for testing. After making that change, the “Production” target environment becomes available. The frequency seems to be increasing, so any guidance would be helpful. Thanks!

Hi Shannon,

Thanks for getting in touch! I’m sorry to see you’re hitting this unexpected issue. The fact that it’s intermittent is very odd. Usually this issue occurs because the user attempting to initiate the deployment doesn’t have adequate permissions to deploy to this environment (the environment the release has currently reached in the lifecycle progression).

Do you notice any pattern at all with regard to the users who encounter this issue, the channels the releases are in, or anything else? Does refreshing the browser page allow this environment to populate in that field?

Are you experiencing this issue in a cloud hosted instance? If so, would you be willing to grant me permission to create a support user to have a direct look, while also pointing me to your URL? (No changes will be made, everything is audited and the user will be deleted afterwards.)

If you’re on-prem, and if it’s easily reproducible, do you see anything in the browser’s dev console reported as you hit this issue?

I look forward to trying to get to the bottom of this one!

Best regards,


Thanks Kenny. I was able to reproduce the issue myself as a sys admin. I’ve dug through the audit logs and can’t find any project changes (lifecycle changes, deleted releases, retention policy deletions) that could explain it. Someone on our team suggested that maybe we have too many environments (we’ve gone through a few standup / decom cycles since we brought Octopus onsite). Any chance there’s merit to that theory?

Hi @ShannonN,

Thank you for getting back to us.

To get a better idea of what is causing this Missing Resource issue, are you able to capture a HAR file on a page where this occurs and share that with us? Also, it would be helpful if you could generate a System Diagnostics Report and share that as well.

I’ve created a secure upload link for you to share those files with our Support Team.

Please let us know once you are able to upload the files at your earliest convenience.

Best Regards,

Thanks Donny. I’ve uploaded the diagnostic file and will instruct my team on how to create the HAR file if/when the issue arises again.

Hi @ShannonN,

Thank you for getting back to me and providing the System Diagnostics Report.

I noticed a few SQL database timeouts in your Octopus Server logs and in the System Integrity Check included with the System Diagnostics Report.

It looks like you are currently running 2021.1.7788. Our Dev Team has done quite a bit of work to help improve performance in this area. While I didn’t notice anything specifically in the logs that points to the Environments showing a “Missing Resource” icon, I suspect this may be due to the UI rendering faster than the SQL db query is able to return the results. Are you in a position to consider testing an upgrade to our latest self-hosted version of Octopus, 2022.4.8471?

If you are unable to consider an upgrade at this time, another way to potentially improve performance is to check the index fragmentation of your Octopus SQL database indexes and rebuild any indexes with a high page count and poor fragmentation. Can you run the following SQL query against your Octopus database, export the results in CSV format, and share them via the secure upload link?

SELECT S.name AS 'Schema',
T.name AS 'Table',
I.name AS 'Index',
DDIPS.avg_fragmentation_in_percent,
DDIPS.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS DDIPS
INNER JOIN sys.tables T ON T.object_id = DDIPS.object_id
INNER JOIN sys.schemas S ON T.schema_id = S.schema_id
INNER JOIN sys.indexes I ON I.object_id = DDIPS.object_id
AND DDIPS.index_id = I.index_id
WHERE DDIPS.database_id = DB_ID()
AND I.name IS NOT NULL
AND DDIPS.avg_fragmentation_in_percent > 0
ORDER BY DDIPS.avg_fragmentation_in_percent DESC

Let me know at your earliest convenience.

Best Regards,

Thanks Donny. We’re running v2022.4 (build 8423) in Dev, but that environment isn’t used as much. It’s definitely something we can consider.

I ran into this myself only a few minutes before posting this. I’ve attached the HAR file.
devopsdeploy.corp.lpl.com.har (1.1 MB)

Once our DBA team runs the query I’ll post the results. Thanks again!

Hi Shannon,

Thank you for following up with the update and HAR file. Unfortunately, I’ve yet to spot a smoking gun that would explain this strange issue. One more question that may prove vital, in addition to the fragmentation query Donny sent: are you hitting this problem in a high availability (HA) setup? If so, it’s possible this could be the result of some data replication issue (though I’m not aware of it manifesting in this way before).

Best regards,


Hi Kenny. Please find the attached output from the script run. Let us know if there are any red flags.

Octopus.csv (8.5 KB)

Hi Kenny. Please let me know if there is anything that stands out.

Hi Shannon,

Thank you for following up and pinging me on this one. My sincere apologies for the delay; I was off sick for a few days.

I appreciate you running and providing the results of the fragmentation query. The fragmentation and page counts reported do seem exceedingly high on a number of tables, so addressing this should be very beneficial regardless of whether or not it is the root cause of this issue.

I’ve found the following external resource helpful in the past on resolving fragmentation issues in SQL databases.


Specifically in your case, I’d recommend rebuilding the indexes with fragmentation percentages exceeding 50%, especially those with large page counts. I’d be keen to see if that helps, and by how much. :slight_smile:
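To illustrate, a rebuild is a single statement per index. The schema, table, and index names below are placeholders; substitute the ones reported in your Octopus.csv results:

-- Rebuild one heavily fragmented index (names here are placeholders)
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;

-- Or rebuild every index on a given table in one go
ALTER INDEX ALL ON dbo.ExampleTable REBUILD;

Keep in mind a rebuild takes the index offline for the duration unless your SQL Server edition supports WITH (ONLINE = ON), so it’s best run during a quiet period.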

I hope that helps, and I look forward to hearing back!

Best regards,


Thanks Kenny.

So we’ve never set up a maintenance routine, but after reviewing the fragmentation results file it’s clear we need to do something. I’ve reviewed the recommended strategy on your website; however, it’s a little light on details. We tend not to do anything directly to the database, to be sure we don’t unintentionally cause harm.

Is the suggested approach to defragment existing indexes (those listed in the table) at some scheduled interval rather than creating custom ones? Ideally I would like specific instructions that I can hand off to our DBA team to implement. Please advise. Thanks.

Hi @ShannonN

Thanks for getting back to us.

We have some documentation that goes into more detail on the types of maintenance that should be conducted to keep fragmentation down.
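As a rough sketch of what a scheduled routine might look like for your DBA team (the table names are placeholders, and the assumption is this runs during a quiet maintenance window, e.g. as a SQL Server Agent job, with the table list driven by the fragmentation query results):

-- Weekly maintenance sketch; substitute tables flagged by the fragmentation query
ALTER INDEX ALL ON dbo.ModeratelyFragmentedTable REORGANIZE;
UPDATE STATISTICS dbo.ModeratelyFragmentedTable;  -- REORGANIZE does not refresh statistics
ALTER INDEX ALL ON dbo.HeavilyFragmentedTable REBUILD;  -- REBUILD refreshes statistics itself

REORGANIZE is always an online, lightweight operation, so it is the safer default for routine maintenance; reserve REBUILD for heavily fragmented indexes.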

Hopefully this helps.

Kind Regards,

Thanks Dom. We were able to identify some indexes that required attention and we think the issue is resolved :crossed_fingers:


Hi Dom -

We encountered another ‘missing resource’ error when deploying to the same environment (Production in our case). Is there a specific table/index that’s responsible for deploying to a specific environment?

Hi Shannon,

Thanks for following up, though I’m sorry to see you’ve hit this issue again! Good question, though we may want to take an updated look at the results from the same fragmentation script Donny provided previously to see where things stand at this point. Seeing these new results should hopefully point us in the right direction on how to approach this.

I’ve created a new secure upload link in case you’re able to supply that updated info.

I look forward to hearing back!

Best regards,


Thanks Kenny. I’ve submitted the request and will post the results when I get them.

Results posted. Thanks!

Hi Shannon,

Thanks for following up and sending that through! If I remember your first fragmentation results correctly, the fragmentation and page counts look a bit better now than before, though it would be worth continuing to bring them down quite a bit. The general recommendation in the previously linked external resource is:

  • When fragmentation is between 15% and 30%: REORGANIZE
  • When fragmentation is greater than 30%: REBUILD
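Extending Donny’s earlier fragmentation query, here’s a sketch that maps each index to the recommended action under those thresholds, which your DBA team could use as a worklist:

SELECT S.name AS 'Schema',
T.name AS 'Table',
I.name AS 'Index',
DDIPS.avg_fragmentation_in_percent,
CASE WHEN DDIPS.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
     ELSE 'REORGANIZE'
END AS 'RecommendedAction'
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS DDIPS
INNER JOIN sys.tables T ON T.object_id = DDIPS.object_id
INNER JOIN sys.schemas S ON T.schema_id = S.schema_id
INNER JOIN sys.indexes I ON I.object_id = DDIPS.object_id
AND DDIPS.index_id = I.index_id
WHERE I.name IS NOT NULL
AND DDIPS.avg_fragmentation_in_percent >= 15
ORDER BY DDIPS.avg_fragmentation_in_percent DESC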

There are still a significant number of indexes with very high fragmentation (30 indexes at 30% or higher), and a number with very large page counts. I’m hoping that addressing these will help prevent this issue going forward.

Let me know how you go, or if you have any further questions!

Best regards,