FTP deployments behaving oddly after upgrade to 2.x

Hi, we recently upgraded from 1.6 to 2.x (currently on 2.3.3) and we’ve noticed some odd behavior in our projects that use an FTP step for deployment. An initial deployment works fine, but subsequent deployments within that same day do not seem to actually go through. The deployment reports no errors and appears to complete successfully, but the newer files never show up on the FTP site. Usually after waiting several hours, or until the next day, another deployment will work as expected again.

It also looks like the detailed FTP file-copying output is no longer present in the logs in 2.x, so I can’t tell for sure whether files are actually being copied during the deployments that don’t seem to take effect, and I’m not sure how to troubleshoot this.

The only change we made was to account for the fact that in 1.6 the server performed the FTP step, whereas now it is done by a Tentacle. We tagged one of our Tentacles with a role named ‘FTPer’, and when a deployment does go through, that arrangement seems to work fine.

Could there be any sort of caching going on that might cause the files not to actually be copied on a subsequent deployment? We have tried all of the options for forcing a package download and re-install on the Tentacle, but the problem still occurs.

Has anyone else seen this issue?

Hi - I haven’t heard any other reports of this behaviour, but given the changes to FTP in 2.x (we upgraded the client library we use, among other things) it’s possible something odd is going on.

If you open the task log in 2.0 you should be able to click “Verbose” to see the FTP transfer output - can you please take a look and let me know what you see?

Cheers,
Nick

Ah, thanks. I didn’t know about the Verbose mode; now I can see all of the specific behavior.

So from what I can tell it looks like it’s doing what it’s supposed to, but the end result is that the updated files just don’t seem to be present on the FTP site. While it very well could be an issue on the FTP side, nothing has changed there since moving from Octopus 1.6 to 2.x.

One thing we’ve tried that seems to help is manually deleting the current contents of the remote folder before doing the deployment, so we are looking to automate that step. Before I go down the road of writing a PowerShell script to do that, I wanted to make sure there isn’t something in Octopus already that would do it for me.

In the File Transfer Options area of the FTP step I don’t see an option for purging the root directory before copying; is there any undocumented way to do that? I’m also assuming that the Custom Install Directory option doesn’t apply to FTP steps, or at least not to the remote FTP folder - is that accurate? My understanding is that it would only purge the custom location where the Tentacle unpacks the package before starting the FTP transfer.

So my next step is to write a PowerShell script to purge the remote folder before deployment, unless there is built-in functionality for that, and we’ll see if that resolves our issue. It’s hard to say this is anything Octopus is doing wrong, as it looks to be doing exactly what it’s supposed to.
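
For anyone else hitting this, the rough shape of the script I have in mind is below - the host, credentials and path are placeholders, and it only clears files at the top level of the remote folder:

```powershell
# Sketch of a pre-deployment purge of the remote FTP folder.
# $ftpHost, $remotePath and the credentials are placeholders, not real values.
$ftpHost    = "ftp://ftp.example.com"
$remotePath = "/site/wwwroot"
$cred       = New-Object System.Net.NetworkCredential("ftpUser", "ftpPassword")

function Invoke-FtpRequest([string]$uri, [string]$method) {
    $request = [System.Net.WebRequest]::Create($uri)
    $request.Credentials = $cred
    $request.Method = $method
    return $request.GetResponse()
}

# List the entries in the remote folder (NLST); output format varies by server.
$response = Invoke-FtpRequest "$ftpHost$remotePath/" ([System.Net.WebRequestMethods+Ftp]::ListDirectory)
$reader   = New-Object System.IO.StreamReader($response.GetResponseStream())
$entries  = @()
while (-not $reader.EndOfStream) { $entries += $reader.ReadLine() }
$reader.Close(); $response.Close()

# Delete each file so the next FTP sync starts from an empty folder.
# Subdirectories would need RemoveDirectory plus recursion, left out for brevity.
foreach ($entry in $entries) {
    $name = [System.IO.Path]::GetFileName($entry)
    Write-Host "Deleting $name"
    (Invoke-FtpRequest "$ftpHost$remotePath/$name" ([System.Net.WebRequestMethods+Ftp]::DeleteFile)).Close()
}
```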

Thanks for your help
-Scott

Hi Scott - sounds a bit puzzling. We don’t provide a “purge” option for the FTP step today; writing PowerShell is probably the fastest way to get this done.

If you turn up more detailed info or find any other clues please let us know.

Cheers,
Nick

This is happening to me also. Maybe someone with more knowledge of how the FTP sync function works could chime in?

Could it be related to time zone or clock differences between the servers? That might explain why the sync is not detecting files as modified.
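
To illustrate what I mean (the three-hour offset below is just an example):

```powershell
# Illustrative only: how a "is the local file newer?" check can go wrong when the
# FTP server's clock or time zone is ahead of the Tentacle's.
$localModified  = Get-Date                  # file was just rebuilt "now" on the Tentacle
$remoteModified = (Get-Date).AddHours(3)    # the stale remote copy, listed with a clock 3 hours ahead

if ($localModified -gt $remoteModified) {
    Write-Host "File would be transferred"
} else {
    Write-Host "File would be skipped as unchanged"   # what appears to be happening here
}
```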

I am also considering going the route of a custom PowerShell script. If this can’t be fixed, it would be nice to have the option to use the old FTP functionality.

Hi Tom - yes, a date/time issue definitely seems to be a top contender for causing this.

Between 1.x and 2.x we moved forward quite a few versions in the FTP sync library we use, which may be at the root of this, but given how often clocks are off and so on, I think we’d be better off simply overriding the sync option and forcing all files to be pushed.
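
In the meantime, a rough sketch of that workaround as a pre-deployment PowerShell script might look something like the below - the folder, host, path and credentials are placeholders rather than Octopus-supplied variables, and it assumes the remote subfolders already exist:

```powershell
# Sketch of a "push everything" upload that ignores modification times entirely.
# $localRoot, $ftpHost, $remotePath and the credentials are placeholders.
$localRoot  = "C:\Octopus\Applications\MyApp\1.0.0"
$ftpHost    = "ftp://ftp.example.com"
$remotePath = "/site/wwwroot"
$client     = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("ftpUser", "ftpPassword")

Get-ChildItem $localRoot -Recurse -File | ForEach-Object {
    # Preserve the relative folder structure on the remote side.
    # Note: plain FTP will not create missing remote directories for you.
    $relative = $_.FullName.Substring($localRoot.Length).TrimStart('\').Replace('\', '/')
    Write-Host "Uploading $relative"
    $client.UploadFile("$ftpHost$remotePath/$relative", $_.FullName) | Out-Null
}
```

Uploading unconditionally sidesteps the timestamp comparison entirely, at the cost of transferring files that haven’t changed.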

Thanks for the feedback.

Nick

Wow, I’m so glad I’m not the only one here seeing this. I was seeing this on version 2.3.3. I just upgraded to 2.4.7 and now I keep getting errors like this when trying to push to an Azure website.

Connecting…

Info 15:48:06
Connected; beginning synchronization…

Info 15:48:16
Synchronize complete

Info 15:48:16
Total operations: 5

Fatal 15:48:16
5 out of 5 transfers failed
EnterpriseDT.Net.Ftp.FTPSyncException: 5 out of 5 transfers failed
   at EnterpriseDT.Net.Ftp.FTPTask.r4M6UXrqub()
   at YCI9u27a1oMu68GXnaS.OFnVm47SuspJNvpIdPM.uoCtiI730uhppJYuAkP.get_ReturnValue()
   at YCI9u27a1oMu68GXnaS.OFnVm47SuspJNvpIdPM.LIVb42QLjtN(IAsyncResult )
   at YCI9u27a1oMu68GXnaS.OFnVm47SuspJNvpIdPM.r1rb4k9V5sa(FTPSyncRules , String , String )
   at EnterpriseDT.Net.Ftp.ExFTPConnection.Synchronize(String localDirectory, String serverDirectory, FTPSyncRules syncRules)
   at Octopus.Tentacle.Integration.Ftp.FtpSynchronizer.SynchronizationSession.Execute() in y:\work\refs\heads\master\source\Octopus.Tentacle\Integration\Ftp\FtpSynchronizer.cs:line 89
   at Octopus.Tentacle.Integration.Ftp.FtpSynchronizer.Synchronize(FtpSynchronizationSettings settings) in y:\work\refs\heads\master\source\Octopus.Tentacle\Integration\Ftp\FtpSynchronizer.cs:line 16
   at Octopus.Tentacle.Procedures.Implementations.Ftp.FtpUploadProcedure.Upload(ProcedureState state, CancellationToken cancel) in y:\work\refs\heads\master\source\Octopus.Tentacle\Procedures\Implementations\Ftp\FtpUploadProcedure.cs:line 69
   at Pipefish.Async.CaptiveThread`1.ThreadAction(Action`2 action, Guid operationId, IActivitySpace space, Guid captiveThreadId) in c:\TeamCity\buildAgent\work\cf0b1f41263b24b9\source\Pipefish\Async\CaptiveThread.cs:line 114

Any updates on this? We are on version 2.6.1.796 and are having these issues as well.

Hi Erik,

For us, the resolution was to move away from using FTP; it had only ever been a temporary measure until we had Tentacles on those particular servers. We were fairly sure we had narrowed it down to time zone differences between the two sides, though we never confirmed that or needed to drive it to resolution.

Sorry I couldn’t be more help.
-Scott

I also had to switch to using Web Deploy instead of FTP.

Yeah, of course we aim to use Tentacles on all machines, but several machines are only accessible via FTP, so we are stuck with sometimes having to manually retry a build 1-5 times.

Paul: I’m not sure what “Web Deploy” with Octopus is - is it some sort of custom deploy step? And how could I use that to deploy to an FTP host?

Hi Erik,

This is our Web Deploy library script: https://library.octopusdeploy.com/#!/step-template/actiontemplate-web-deploy-publish-website-(msdeploy)
In 3.0 this will be its own step as part of our project processes. Our FTP step is being deprecated due to both lack of use and inconsistencies.
FTP clients themselves have built-in reconnect and retry behaviour, and even these can fail in environments like Azure, where connections are unstable and unreliable.
There is a lot of information available online regarding Web Deploy, including all of the official Microsoft documentation: http://www.iis.net/learn/publish/using-web-deploy/use-the-web-deployment-tool
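
For a rough idea of what a Web Deploy publish looks like under the covers, here is a sketch of calling msdeploy.exe directly - the site name, publish URL and credentials are placeholders you would take from your own site’s publish profile:

```powershell
# Sketch of publishing a folder to a site with msdeploy.exe (Web Deploy 3).
# All names, URLs and credentials below are placeholders.
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"

& $msdeploy `
    '-verb:sync' `
    '-source:contentPath=C:\Packages\MyApp' `
    '-dest:contentPath=MySite,computerName=https://mysite.scm.azurewebsites.net:443/msdeploy.axd?site=MySite,userName=$MySite,password=xxxxxxxx,authType=Basic' `
    '-allowUntrusted'
```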

Vanessa