Bug Report: Variables intermittently not being replaced in parallel project deploys from scheduled trigger

I believe this might be similar to an issue that was already reported:

We are seeing that, when a project release is deployed from a scheduled trigger to many tagged tenants in parallel, variable substitution sometimes fails. In our case, oddly, it only happens in one project, for all tagged tenants, and only the connectionStrings section is not being replaced. If we go back and manually redeploy the same release to all tagged tenants, that seems to correct the issue. Obviously that is not a scalable long-term solution, though.
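
For context, the manual "fix" is really just creating a fresh deployment of the same release for each tagged tenant. A rough sketch of how that could be scripted against the Octopus REST API is below; the server URL, API key, release/environment IDs, and tag name are placeholders, and it assumes the default space and that the tenant resource lists its tags under TenantTags.

```python
# Rough sketch of the manual workaround: re-deploy an existing release to
# every tenant carrying a given tag, via the Octopus REST API.
# The URL, API key, IDs, and tag name below are placeholders.
import requests

OCTOPUS_URL = "https://octopus.example.com"      # placeholder server URL
API_KEY = "API-XXXXXXXXXXXXXXXX"                 # placeholder API key
RELEASE_ID = "Releases-1234"                     # release whose substitution failed
ENVIRONMENT_ID = "Environments-1"                # the single target environment
TENANT_TAG = "Deploy Group/Nightly"              # canonical tag name (placeholder)

headers = {"X-Octopus-ApiKey": API_KEY}

# Fetch all tenants and keep the ones carrying the tag
# (assumes each tenant resource exposes its tags as "TenantTags").
tenants = requests.get(f"{OCTOPUS_URL}/api/tenants/all", headers=headers).json()
tagged = [t for t in tenants if TENANT_TAG in t.get("TenantTags", [])]

# Queue one deployment of the release per tagged tenant.
for tenant in tagged:
    payload = {
        "ReleaseId": RELEASE_ID,
        "EnvironmentId": ENVIRONMENT_ID,
        "TenantId": tenant["Id"],
    }
    resp = requests.post(f"{OCTOPUS_URL}/api/deployments", headers=headers, json=payload)
    resp.raise_for_status()
    print(f"Queued deployment {resp.json()['Id']} for tenant {tenant['Name']}")
```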

Any assistance would be appreciated!

Hi @msayers,

Thanks for posting, and welcome to the Octopus community.

Sorry to hear you're encountering this issue, and it does seem similar to the one you linked. I was wondering if you could share a raw task log from a run that succeeded and a raw task log from a run that failed?

You can use the following link to upload these to us securely: Support - Octopus Deploy

Looking forward to hearing back from you.

Best,
Patrick

Hi Patrick,

The two log files are uploaded. Task 442766 happened this past Saturday, and 456648 was a "by hand" deploy to multiple tenants today. Note the two extra database-related matches found in 456648 that were missing from the scheduled-trigger run (442766).
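
(In case it's useful, the way I compared them was just to pull out the substitution-related lines from each raw task log and diff the two sets. A rough sketch is below; the file names and the search string are placeholders for whatever your logs actually contain.)

```python
# Quick-and-dirty comparison of two raw task logs: collect the lines that
# mention a marker string in each log, then show what the second run logged
# that the first one did not. File names and the marker are placeholders.
from collections import Counter

MARKER = "connectionStrings"   # placeholder search term

def matching_lines(path: str) -> Counter:
    """Count the distinct lines in the log that contain the marker."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return Counter(line.strip() for line in f if MARKER in line)

scheduled = matching_lines("ServerTasks-442766.log.txt")   # scheduled-trigger run
manual = matching_lines("ServerTasks-456648.log.txt")      # manual "by hand" run

# Lines that appear in the manual run but were missing from the scheduled one.
missing = manual - scheduled
print(f"{sum(missing.values())} matching line(s) only in the manual run:")
for line in missing:
    print("  ", line)
```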

Let me know if you need anything else.

Thanks,
Matt

Hi Matt,

Thanks very much for providing those.

Tentatively, this does look like you're encountering the same issue as the one linked, but I'm reaching out internally to see if I can get an exact answer for you on this.

In the meantime, there is a suggested workaround on the GitHub issue from that other thread: Runbooks fail to evaluate tenant variables after the first environment when scheduled to run in parallel to different environments · Issue #7042 · OctopusDeploy/Issues · GitHub

I'm not sure how feasible it is for your use case, but if you're looking to deploy to multiple environments with the same tenant, you could copy the project process steps for the number of environments required and scope the steps to those environments.

Otherwise, there's a chance this will be fixed in the release scheduled for next week (if all goes according to plan). If you're comfortable performing a minor upgrade and it isn't too impactful, this might be something worth waiting for. The version to upgrade to will be posted in the GitHub release once it's available.
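
If it helps with planning the upgrade, the root of the REST API reports the server version, so you can confirm what you're running now and verify the upgrade afterwards. A minimal example is below; the URL and API key are placeholders (the root document may not even require authentication, but including the key does no harm).

```python
# Minimal check of the Octopus Server version via the REST API root document,
# which includes a "Version" field. URL and API key are placeholders.
import requests

resp = requests.get(
    "https://octopus.example.com/api",
    headers={"X-Octopus-ApiKey": "API-XXXXXXXXXXXXXXXX"},
)
resp.raise_for_status()
print("Octopus Server version:", resp.json()["Version"])
```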

Let me know what you think.

Kind regards,
Patrick

Hi Patrick,

You are very welcome.

In our case, we are only deploying to a single environment. A project scheduled trigger kicks off multiple deployments of a release to one environment, but against many tenants, so I'm not sure that workaround would apply or be feasible for us.

We are definitely not against taking minor updates; we try to stay fairly close to the latest version. If you can let me know when that version becomes available, I'll happily get it scheduled on our side to see if that resolves it.

Hi Matt,

I'm sorry the workaround isn't very helpful, but I agree that doesn't seem very viable in your case.

That's great to hear regarding the update, and I'll do my best to keep you posted on when the new version with this fix becomes available. You might also hit 'Subscribe' on the GitHub issue to track it, as it's probably a bit more reliable than me notifying you.

Best,
Patrick

Hi Patrick,

It is hard to tell with 100% certainty, but it sure seems like release 2021.2.7808 resolved the issues reported here. We've yet to find any variable-replacement failures from parallel project deploys via scheduled triggers. :crossed_fingers: I think we can consider this resolved. If something pops up again, we will report it in a new topic.

Thank you again for your fast and helpful support!

Sincerely,
Matt Sayers

