Outstanding Retention tasks were not completed before the next task was due to be scheduled

I get the following warning in the diagnostics (Detailed Server Logs):
2016-12-11 06:28:06 Warning
Outstanding Retention tasks were not completed before the next task was due to be scheduled. If this error persists, check the Tasks tab for any running Retention tasks, and cancel them manually.
I get this error every 4 hours.
In the retention task overview everything looks fine: all the icons are green.
When I open the last retention task, it tells me the following: ‘This task started 2 hours ago and ran for less than a second’.
I have included the raw output of the last retention task.
I have also included the output of the last Tentacle health check tasks (because at the same moment, I see that an environment gives a warning).
I’m running Octopus 3.4.13.
Can you tell me what could possibly be going wrong?

ServerTasks-13257.log.txt (29 KB)

ServerTasks-13259.log.txt (20 KB)

Hi,
Thanks for getting in touch. The retention policy run at 6:28 does indeed look like it completed successfully. Are you saying that there is a second task log at 6:28 saying that “the retention task was not completed…”?
Do you have another server running somewhere that might be trying to run in parallel and is queuing up these tasks at the same time? Perhaps from an upgrade or server migration?
I’m not sure I understand where the health check fits into this, though. The health check appears to run at 8:28, whereas the warning took place at 6:28 while the other retention task was running.
Let me know if it’s possible that you have another Octopus Server instance running; it may be trying to perform the same retention task at the same moment.
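If it helps to rule that out, here is a minimal sketch (assuming Python with the psutil package is available on the box; the "octopus" name match is just a guess at how the service was registered) that lists every Windows service mentioning Octopus, so a second installed instance would show up immediately:

    # List Windows services that look like Octopus Server instances.
    # Requires: pip install psutil (Windows only).
    import psutil

    for svc in psutil.win_service_iter():
        # Match on the service name or display name; adjust the match if
        # your instances were registered under different names.
        if "octopus" in svc.name().lower() or "octopus" in svc.display_name().lower():
            print(f"{svc.name():40} {svc.status():10} {svc.binpath()}")

If that prints more than one running service on a machine, the second process would explain the overlapping retention runs.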
Cheers,
Rob

Hello Rob,
There is a second Octopus instance, which is NOT supposed to be looking at the same environments. This second instance gives the same warnings (Outstanding Retention tasks were not completed before the next task was due to be scheduled.)
Should I make a list of the machines on both instances and compare them?

The retention task of the first instance (octacaf01) runs at 06:28:21 every 4 hours
The retention task of the second instance (octisodev01) runs at 05:34:15 every 4 hours

There was a migration to 3.4.13 last Friday, and during this migration a deployment was in progress, which of course failed. Could this be the cause of the retention warning?

About the health check: I misread 8:28 as 6:28 and thought that both tasks ran at the same time. They didn’t, so please disregard that.
Yours Sincerely,
Eric

Hi,
Yes, if you have multiple independent Octopus Deploy services running at the same time pointing at the same database, then I would expect them both to try to run the retention policy at the same time, resulting in the error you are seeing. We don’t support having multiple server instances pointing at the same database at the same time unless you are running in High Availability mode, where there is a master/slave type architecture. If you are doing this for a DR scenario, then I would recommend you keep the second server process offline unless it is needed, to avoid these kinds of scenarios.
What architecture are you trying to configure that requires multiple instances running at the same time?
Rob

Sorry Rob, there seems to be a misunderstanding. The two Octopus Deploy instances we have DO NOT share a database. There is no connection between them (one we use for development and one we use for production).
Eric

Hi,
So to confirm: you have several instances of Octopus Deploy set up, but none of them are configured to use the same database? And both of them show the same message saying “the retention task was not completed…”?
Could you please send through screenshots of both of your instances’ task screens so that I can take a look at what is showing up on each at the same point in time? It certainly seems like there is some interaction going on between the two.
If you go to Tasks -> Script Console, and run an ad-hoc script on your development instance, does the task show up in your production instance task list?
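If screenshots are awkward, something along these lines would also work: a small Python sketch that dumps the recent task list from both instances over the REST API so they can be compared at the same point in time. The URLs and API keys below are placeholders, and the /api/tasks parameters and fields are from memory of the 3.x API, so treat it as a starting point rather than gospel:

    # Dump the most recent tasks from both instances for a side-by-side
    # comparison. Requires: pip install requests.
    # The URLs and API keys below are placeholders -- substitute your own.
    import requests

    INSTANCES = {
        "octacaf01":   ("http://octacaf01", "API-XXXXXXXXXXXXXXXX"),
        "octisodev01": ("http://octisodev01.idodev.loc", "API-YYYYYYYYYYYYYYYY"),
    }

    for name, (url, api_key) in INSTANCES.items():
        resp = requests.get(
            url + "/api/tasks",
            params={"take": 20},  # the 20 most recent tasks
            headers={"X-Octopus-ApiKey": api_key},
        )
        resp.raise_for_status()
        print(f"--- {name} ---")
        for task in resp.json()["Items"]:
            print(task["StartTime"], task["Name"], task["State"])

If a retention task appears in both lists with the same start time, that would confirm the two instances are somehow sharing work.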
Cheers
Rob

Hello Rob,

  1. I have included a document with screenshots in which I ran a script in the Script Console. As you can see, this task does not show up on the other Octopus instance.
  2. I have included a document with screenshots of the task screens (filter = retention) and the output of the diagnostics of both instances. I have also included the original system reports.
Bye,
Eric

Proof_of_having_2_seperated_octopusdelpoy_servers.docx (304 KB)

Screenshots_of_retention_tasks_on_both_systems.docx (157 KB)

Octacaf01_-_OctopusDeploy-636173888342408852.zip (1 KB)

Octisodev01.idodev.loc_-_OctopusDeploy-636173883498490251.zip (31 KB)

Hi Eric,
Looking at the logs, it still feels like there is some other Octopus Server process running that is trying to execute the retention policy task every 4 hours, just one minute after the other server. It might be easier to have a look at your instances if we can schedule a support call so that I can screen share and take a look at the servers directly.
Closer to the booked time I will send through the GoToMeeting details that we use to perform these support calls.

Let me know if you have any questions regarding this call,
Cheers,
Rob

Hi Eric,
Looking forward to having the support call soon to dig a little deeper into this issue. As described earlier, we use GoToMeeting for these calls, as it allows us to easily screen share.
Please see the connection details below.

Please join my meeting from your computer, tablet or smartphone.

You can also dial in using your phone.
Australia: +61 2 8355 1034

Access Code: 598-146-829

More phone numbers
United States: +1 (312) 757-3119

First GoToMeeting? Try a test session: http://help.citrix.com/getready

Let me know if you need any further information. Talk to you soon.
Cheers,
Rob

Hi Eric,
Thanks again for taking the time to join the support call. Hopefully in the new year we can dig further into this issue once you have access to the database. In the meantime, I’m still wondering if there is another server running somewhere firing off those tasks.

You mentioned that you just recently upgraded; could you describe your upgrade process? Did you install the Octopus Server service on a whole new machine? Is there definitely no other Octopus Deploy service running on these machines that perhaps someone else set up? I would also be interested if you could run a little experiment for me: if you temporarily stop your server instance and run SQL Server Profiler pointed at your database instance, do you see any connections or commands being executed against your Octopus database after a few minutes?
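As a scriptable alternative to Profiler (a sketch only: the server and database names are placeholders, the ODBC driver name depends on what is installed, and the database_id column on sys.dm_exec_sessions needs SQL Server 2012 or later), this lists every session currently connected to the Octopus database. With your Octopus Server service stopped, any row that still shows up is coming from somewhere else:

    # List sessions connected to the Octopus database (run while the
    # Octopus Server service is stopped). Requires: pip install pyodbc,
    # plus VIEW SERVER STATE permission on the SQL Server instance.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"   # driver name may vary
        "SERVER=your-sql-server;DATABASE=master;Trusted_Connection=yes;"
    )
    rows = conn.execute(
        """
        SELECT session_id, host_name, program_name, login_name, login_time
        FROM sys.dm_exec_sessions
        WHERE database_id = DB_ID('OctopusDeploy')  -- adjust to your DB name
        """
    ).fetchall()

    for r in rows:
        print(r.session_id, r.host_name, r.program_name, r.login_name, r.login_time)

The host_name and program_name columns should point straight at whichever machine and process is still talking to the database.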

Thanks again,
Rob