Recurring expensive query running 40 times an hour: related to scheduled tasks?

reliability
(Cwhite) #1

I'm not sure where this query is coming from and was hoping for some pointers. It seems related to scheduled tasks:

(@startId bigint,@endId bigint)SELECT
StartEvent.[AutoId] AS StartAutoId,
StartEvent.[Occurred] AS StartTime,
EndEvent.EndAutoId,
EndEvent.EndTime,
EndEvent.Deployment_Id,
EndEvent.Deployment_Name,
EndEvent.Deployment_Created,
EndEvent.Deployment_EnvironmentId,
EndEvent.Deployment_ProjectId,
EndEvent.Deployment_ReleaseId,
EndEvent.Deployment_TaskId,
EndEvent.Deployment_JSON,
EndEvent.Deployment_DeployedBy,
EndEvent.Deployment_TenantId,
EndEvent.Deployment_DeployedToMachineIds,
EndEvent.Deployment_ChannelId,
EndEvent.Deployment_SpaceId
FROM (
SELECT EndEvent.[AutoId] AS EndAutoId
,EndEvent.[Occurred] AS EndTime
,Deployment.[Id] AS Deployment_Id
,Deployment.[Name] AS Deployment_Name
,Deployment.[Created] AS Deployment_Created
,Deployment.[EnvironmentId] AS Deployment_EnvironmentId
,Deployment.[ProjectId] AS Deployment_ProjectId
,Deployment.[ReleaseId] AS Deployment_ReleaseId
,Deployment.[TaskId] AS Deployment_TaskId
,Deployment.[JSON] AS Deployment_JSON
,Deployment.[DeployedBy] AS Deployment_DeployedBy
,Deployment.[TenantId] AS Deployment_TenantId
,Deployment.[DeployedToMachineIds] AS Deployment_DeployedToMachineIds
,Deployment.[ChannelId] AS Deployment_ChannelId
,Deployment.[SpaceId] AS Deployment_SpaceId
,EndEvent.RelatedDocumentIds
FROM [Event] AS EndEvent
INNER JOIN EventRelatedDocument
ON EndEvent.Id = EventRelatedDocument.EventId
INNER JOIN Deployment
ON Deployment.Id = EventRelatedDocument.RelatedDocumentId
WHERE EndEvent.Category IN ('DeploymentSucceeded', 'DeploymentFailed')
AND EndEvent.AutoId > @startId
AND EndEvent.AutoId <= @endId
) EndEvent
CROSS APPLY
(
-- Find the equivalent StartEvent (DeploymentQueued) for our EndEvent (DeploymentSucceeded)
SELECT TOP 1 *
FROM [Event]
WHERE Category = 'DeploymentQueued'
AND [Event].RelatedDocumentIds = EndEvent.RelatedDocumentIds
AND [Event].AutoId < EndEvent.EndAutoId
AND [Event].ProjectId = EndEvent.Deployment_ProjectId
ORDER BY [Event].AutoId DESC
) StartEvent
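For anyone investigating similar cost from this query: assuming the [Event] table has no index covering this access pattern (an assumption worth checking against sys.indexes before acting), an index along these lines might reduce the scan inside the CROSS APPLY, which looks up events by Category and then filters on AutoId and ProjectId. The index name and included columns below are a hypothetical sketch, not part of the vendor's shipped schema:

```sql
-- Hypothetical index sketch: supports the CROSS APPLY lookup that searches
-- [Event] by Category, orders by AutoId DESC, and filters on ProjectId.
-- Verify column names and existing indexes on your own schema first.
CREATE NONCLUSTERED INDEX IX_Event_Category_AutoId
    ON [Event] (Category, AutoId DESC)
    INCLUDE (ProjectId, RelatedDocumentIds, Occurred);
```

Whether the optimizer uses it depends on the actual row distribution, so compare the execution plan before and after rather than assuming an improvement.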

(Cwhite) #4

This problem seems to have started after a Windows update. Also interesting: I don't see this code anywhere in the DB, so it must come from some .NET process.

(Kenneth Bates) #5

Hi @cwhite,

Thanks for getting in touch! This query is used as part of the auto deploy feature. It fetches deployments that have completed (along with some other related info) in order to help us discover, through a subsequent query, which newly appeared machines need to receive those deployments. It runs as a scheduled task every 30 seconds on the leader node.

Could you elaborate on how you’re seeing it being expensive? Are you seeing a resource spike or similar? What kind of database are you using (AzureSQL, etc.)? Also, what version of Octopus are you currently running?

I look forward to hearing back and getting to the bottom of this one!

Best regards,

Kenny

(Cwhite) #6

It’s an Azure SQL DB, which I am currently trying to migrate off of using the Export feature in the Octopus Manager tool (DTUs are a pain). We tried upgrading to see if that would ward it off, so the Octopus instance currently says it is on 2019.10.8; it was on something similar before this. It’s still doing the same thing. I’m going to upload the specs, but you can see that after we upgraded to Windows 19.09 (I believe, I will double-check) this started spiking really high in terms of DTUs. If you have any ideas for getting the DB off of Azure, I’m definitely interested.

We are seeing that our update may have failed, so perhaps something is in a broken state.

The pictures below are from the performance monitor tool they have.

image

(Cwhite) #7

The above may not make much sense or look like a problem, but that is where the majority of the CPU goes, and after our attempted Windows update our DTU percentage usage went way up.

It goes without saying that we have tried scaling this machine up to the max to cover for the lag, but it isn’t really helping.

(Cwhite) #8

For some reason the query is not causing as much impact as it was before. We haven’t really done anything.

(Cwhite) #9

We migrated from our Azure DB to SQL Server 2019 GA using a bacpac file.
Thanks, you can close the thread.
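For anyone following the same migration path, a bacpac export/import can be scripted with Microsoft's SqlPackage CLI rather than done through a GUI. The server names, database name, and credentials below are placeholders, not values from this thread:

```shell
# Export the Azure SQL database to a bacpac file
# (placeholder connection details; substitute your own)
SqlPackage /Action:Export \
  /SourceConnectionString:"Server=tcp:yourserver.database.windows.net;Database=Octopus;User Id=youruser;Password=yourpassword" \
  /TargetFile:"Octopus.bacpac"

# Import the bacpac into the on-premises SQL Server 2019 instance
SqlPackage /Action:Import \
  /SourceFile:"Octopus.bacpac" \
  /TargetConnectionString:"Server=localhost;Database=Octopus;Integrated Security=true"
```

Stop the Octopus service before the export so the bacpac captures a consistent snapshot.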

(Kenneth Bates) #10

Hi @cwhite,

Thanks for keeping in touch and letting me know the outcome here! I’m sorry about the delayed response, as I was out for the last few days on leave. Glad to hear you’re good to go, and please don’t hesitate to reach out if you have any questions or concerns in the future. :slight_smile:

Best regards,

Kenny