Connection Issue when deploying large .tar.gz package

I have an issue deploying a rather large tar.gz package to a Linux machine. The package appears to be transferred to the target machine, but before the post-deployment script runs the connection seems to time out. Unfortunately, transferring the package takes very long (about two hours in this case), and I cannot reduce that time.

I tried a few settings on the target server side (to keep it from disconnecting the client), but it did not help. Do you have any ideas, or is there a client-side keepalive setting that I can enable?

Here's the Octopus log:

Task ID:        ServerTasks-18779
Task status:    Failed
Task queued:    Monday, July 18, 2016 1:38 PM
Task started:   Monday, July 18, 2016 1:38 PM
Task duration:  2 hours
Server version: 3.3.18+Branch.master.Sha.35d5fa30e1297f96d082e900e1d7a50edff2d789

                    | == Failed: Deploy Model release 2016.0712.1012 to Prod ==
13:38:31   Verbose  |   Guided failure is not enabled for this task
15:31:44   Fatal    |   The deployment failed because one or more steps failed. Please see the deployment log for details.
                    |   == Failed: Acquire packages ==
13:38:31   Info     |     Acquiring packages
13:38:31   Info     |     Making a list of packages to download
13:38:31   Info     |     Looking up the package location from the built-in package repository...
13:38:31   Verbose  |     SHA1 hash of package [Packagename] is: 3b057efdea0d95c73c4e51ebeede4a518b918e2a
15:31:44   Fatal    |     The step failed: One or more child activities failed.
15:31:44   Verbose  |     Acquire Packages completed
                    |     Failed: [Targetmachine]
15:31:43   Verbose  |       Disposing SFTP connection...
15:31:44   Fatal    |       A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
                    |       Running: Upload package [Packagename] version 2016.0712.1012
13:38:31   Info     |         Uploading package [Packagename]
13:38:31   Verbose  |         Requesting upload...
13:38:31   Verbose  |         Establishing SSH connection...
13:38:32   Verbose  |         SSH connection established
13:38:32   Verbose  |         Beginning streaming transfer of [Packagename].2016.0712.1012.tar.gz-46f295f1-0612-47f4-9db2-0a189540a072 to $HOME\.octopus\OctopusServer\Files
13:38:32   Verbose  |         Establishing SFTP connection...
13:38:33   Verbose  |         SSFTP connection established
15:31:24   Verbose  |         Stream transfer complete
                    |   Canceled: Step 1: Deploy Model Package
15:31:44   Verbose  |     Step "Deploy Model Package" runs only when all previous steps succeeded; skipping

Thanks and best regards,

Hi Daniel,
It sounds as though the SSH connection is timing out while the upload takes place. During an SSH deployment task, an SSH connection is first established to perform various tasks, such as checking for the existence of the package or creating the working directories. A separate SFTP connection is then set up to transfer the file across to the remote server. Since the transfer in your case takes so long, the SSH connection is timing out.

I have added a ticket in GitHub to perform a keep-alive ping on the SSH connection, but you may be able to work around the issue by increasing the timeout on the server side: modify the /etc/ssh/sshd_config file and increase ClientAliveInterval and ClientAliveCountMax so that the server doesn't drop the connection. Keep in mind that you will need to restart your ssh service for the changes to take effect.
Give this a go and let me know if it solves your problem.
Thanks for letting us know.

Hi Robert,
Thanks for your help. I tried the settings you mentioned on the Linux machine, but it did not work. I also tried connecting with an SSH client from the Octopus server to the Linux machine: same result. However, with the keep-alive option turned on in the client, I did not get disconnected. So I expect your proposed solution will work.
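For reference, the client-side keep-alive behaviour described above can be configured in the SSH client config rather than per session (the interval values here are illustrative):

```
# ~/.ssh/config  (on the connecting client)
Host *
    # Send a keep-alive probe to the server every 30 seconds
    ServerAliveInterval 30
    # Disconnect only after this many consecutive unanswered probes
    ServerAliveCountMax 4
```

The same option can be passed ad hoc on the command line, e.g. `ssh -o ServerAliveInterval=30 user@host`.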

This seems like a bug, but it was labelled as an enhancement. This issue is currently preventing us from deploying one of our projects. Is there any chance of getting this looked at in the near future?

Is there any news on this?

This item is marked as an enhancement because, although there is a workaround (adjusting the server timeout), the behaviour could do with some tweaking to help with edge cases where large packages are being transferred. Have you tried the workaround suggested above, modifying the server timeout periods? It is possible that the timeout is still occurring in the SSH client library regardless of the server changes.
In any case, as noted above, there is a GitHub ticket linked in the post above that you can subscribe to for further news. As you can hopefully appreciate, we have several other pieces of work in progress, but we continue to work through the ticket backlog as efficiently as possible.

Hi Daniel & Ryan,
We have pushed out an update in 3.4.8 which changes how we use the SSH library so that a keep-alive check is performed every 30 seconds. When possible, please give this latest build a try and let me know if you encounter any issues.