SQL Dacpac deployment failed

23:45:49 Info | Gathering credentials
23:45:49 Info | Gathering logins
23:45:49 Info | Gathering server audits
23:45:49 Info | Extracting schema (Complete)
23:45:49 Info | Initializing deployment (Start)
23:45:49 Info | Initializing deployment (Complete)
23:45:49 Info | Analyzing deployment plan (Start)
23:45:49 Info | Analyzing deployment plan (Complete)
23:45:49 Info | Updating database (Start)
23:45:49 Info | Hangfire SQL objects installed
23:45:49 Info | Update complete.
23:45:49 Info | Updating database (Complete)
23:45:50 Verbose | Updating manifest with output variables
23:45:50 Verbose | Updating manifest with action evaluated variables
23:45:51 Verbose | Released worker from lease WorkerTaskLeases-1755484
23:45:51 Fatal | The remote script failed with exit code 1
23:45:51 Fatal | The action SQL - Deploy DACPAC on Workers-Pool-LE failed

The deployment completes successfully in the database, but the step fails in Octopus.
We faced the same issue a while back, and at that time you advised us to upgrade the version of the SQL - Deploy DACPAC step. After that we didn't see the issue for some months, but now the same issue is popping up again.
The version of the "SQL - Deploy DACPAC" step is 16 now, which is up to date.
Kindly help us.

Hi @YuvarajDuraisamy,

Thanks for reaching out to Octopus Support!
Sorry to hear you’ve experienced issues.

Would you mind uploading the raw task log and process json for the deployment/project to our support files repo, please?
With this I can better understand what’s going wrong and how you have everything configured.

Please let me know if you run into any issues.

Kind Regards,
Adam

I have uploaded the files you requested, kindly check.

Hi @YuvarajDuraisamy,

I took a look at the task log and it appears that the task is erroring when attempting to collect artifacts to upload to the task log within Octopus.

This is done at the end of the deployment, which would explain why everything succeeds before the error.

As the target for this deployment is running Windows, it may be worth checking the Event Viewer logs on server-ITAAG89 around the time this task was executed, as they may provide more information about what went wrong; whether it's permissions, a security issue, etc.

Please let me know if you’re able to do this and what you’re able to find.

Kind Regards,
Adam

Let me check and get back to you.


I have verified the event log on the Windows server. There were no errors logged at the time of the event.

Our jobs run in parallel, connecting to the DB and then doing the DB update. Could that be causing the issue?

Hi @YuvarajDuraisamy,

Thanks for getting back to me.

If something else is accessing the directory Octopus is trying to retrieve the artifacts from, it's possible that something like a mutex lock is stopping Octopus from being able to retrieve that file, causing the deployment to error at the end.

Is there any way you could attempt to do these jobs separately to test if there’s success?

Kind Regards,
Adam

When we deploy the applications individually, the step works fine without any issues.
We get the error when we deploy the applications together.

Hey @YuvarajDuraisamy,

It would appear then that running these in parallel is what’s causing the issue…

Is there any way it is viable for you to separate these in your deployment process going forward?
Or is it a requirement that these are done at the same time?

Kind Regards,
Adam

(screenshot of settings attached)
We are using these values as Octopus recommended in the application. Is there any other option to avoid this error?

We can't deploy the applications separately every time.

Hi @YuvarajDuraisamy,

Just stepping in for Adam while he's offline. It looks like you might need to configure a named mutex for any of the shared resources that are being targeted at the same time:

If you need even more finely grained control to a shared resource, we recommend using a named mutex around the process. To learn more about how you can create a named mutex around a process using PowerShell, see this log file example.
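To illustrate the idea, here's a minimal sketch of cross-process mutual exclusion in Python, using an exclusive lock file as a portable stand-in for the Windows named mutex the docs describe (the lock name, timeout, and polling interval below are illustrative, not values Octopus requires):

```python
import os
import tempfile
import time


class NamedLock:
    """Cross-process mutual exclusion via an exclusive lock file.

    Only one process holding the same `name` can enter the critical
    section at a time; others poll until the lock is released or the
    timeout expires. Analogous to wrapping a process in a named mutex.
    """

    def __init__(self, name, timeout=30.0, poll=0.2):
        self.path = os.path.join(tempfile.gettempdir(), name + ".lock")
        self.timeout = timeout
        self.poll = poll
        self.fd = None

    def __enter__(self):
        deadline = time.monotonic() + self.timeout
        while True:
            try:
                # O_CREAT | O_EXCL makes creation atomic: the open fails
                # if another process already holds the lock file.
                self.fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                return self
            except FileExistsError:
                if time.monotonic() > deadline:
                    raise TimeoutError("could not acquire " + self.path)
                time.sleep(self.poll)

    def __exit__(self, *exc):
        os.close(self.fd)
        os.remove(self.path)


# Usage: wrap the shared-resource work so parallel deployments serialize.
# with NamedLock("dacpac-deploy"):
#     run_deployment_step()
```

In a real deployment you'd give each shared resource its own lock name, so only the steps that actually contend with each other are serialized.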

Let us know if you have any questions or run into any issues!

Best Regards,