Our deployment process deploys components to a staging server, creates an AMI (Amazon Machine Image) from that server, and then shuts the staging server down.
This worked fine until we upgraded Octopus Server to 2023.1.9749. Now a step called “Release packages” has been added at the end that attempts to connect to the target and fails with a warning.
Would it be possible to suppress the warning or work around this somehow?
FYI: here is the failure message. Remember the target server has been deliberately shut down here:
Warning: MYSHUTDOWNSTAGINGSERVER
17:00:39 Info | Releasing package lock for MYSHUTDOWNSTAGINGSERVER.
17:02:28 Error | Failed to release package lock for MYSHUTDOWNSTAGINGSERVER
| An error occurred when sending a request to 'https://99.99.99.99:10933/', before the request could begin: The client was unable to establish the initial connection within 00:01:00.
| Halibut.HalibutClientException: An error occurred when sending a request to 'https://99.99.99.99:10933/', before the request could begin: The client was unable to establish the initial connection within 00:01:00.
| ---> Halibut.HalibutClientException: The client was unable to establish the initial connection within 00:01:00.
| at Halibut.Transport.TcpClientExtensions.Connect(TcpClient client, String host, Int32 port, TimeSpan timeout, CancellationToken cancellationToken)
| at Halibut.Transport.TcpConnectionFactory.CreateConnectedTcpClient(ServiceEndPoint endPoint, ILog log, CancellationToken cancellationToken)
Thanks for getting in touch! Sadly, Octopus Tentacle isn’t very good at handling its server being restarted or shut down during a deployment. This is something we have attempted to work around in multiple ways, but as far as I’m aware we have yet to make it work. Once the server goes offline, the connection to the Octopus Server is severed and the deployment target is no longer available for the duration of that task.
It is possible to restart a target server with a runbook, but that would require you to break your deployment into separate pre-restart and post-restart projects, which rather defeats the purpose of using a runbook in this scenario.
If you just want to ignore the error and continue the deployment without it failing, there are a couple of options. The first thing you could try is modifying the run conditions on the subsequent steps so that they run even when a previous step fails. Alternatively, you could enable guided failure mode for your project, which creates a manual intervention when a deployment fails, giving you the option to assess the failure and then continue the deployment if acceptable.
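For reference, as well as the “Always run” condition, run conditions can also be driven by a variable expression using the Octopus.Deployment.Error system variable; something along the lines of #{unless Octopus.Deployment.Error}True#{/unless} runs a step only when nothing has failed, and #{if Octopus.Deployment.Error}True#{/if} only when something has (worth double-checking the exact syntax against the docs for your version).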
Let me know what you think of these options or if you would like any further help at all.
Thanks, Daniel.
The problem we were getting was only a warning, so the job completes OK, but I wanted to suppress it so everything looks clean and we don’t need to investigate.
We have some code that looks as though it will do the job. It needs to run as a separate PowerShell step to force the repository to pick up the correct state of the server.
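In case it helps anyone else, here is a rough sketch of the sort of step we mean (not the exact script; the Octopus URL, API key and target name below are placeholders, and disabling the target via the REST API is just one way to stop Octopus from trying to contact it; triggering a health check task would be another):

$octopusUrl = "https://my-octopus-server"      # placeholder: your Octopus Server URL
$apiKey     = "API-XXXXXXXXXXXXXXXX"           # placeholder: an API key allowed to edit machines
$targetName = "MYSHUTDOWNSTAGINGSERVER"        # the deployment target that gets shut down
$headers    = @{ "X-Octopus-ApiKey" = $apiKey }

# Look the deployment target up by name
$machine = (Invoke-RestMethod -Uri "$octopusUrl/api/machines/all" -Headers $headers) |
    Where-Object { $_.Name -eq $targetName }

# Mark it as disabled so Octopus stops trying to contact it once the server is down
$machine.IsDisabled = $true
Invoke-RestMethod -Uri "$octopusUrl/api/machines/$($machine.Id)" -Method Put -Headers $headers -Body ($machine | ConvertTo-Json -Depth 10) -ContentType "application/json"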
Thanks for the update here! I’m glad to hear that you were able to get this working. Please don’t hesitate to get in touch at any time if you have any further issues or questions.