I’m running into something that, at first glance, looks like a bug in Octopus Deploy. We have a deployment step that executes a PowerShell script. This PowerShell script downloads a SQL backup and attempts to restore it.
We are noticing that during execution of the step, Octopus prematurely completes it, marking the step as successful, even though no errors have occurred AND the script is still running in the background!!
Is this a known issue that has been fixed in a newer version?
We also have a paid version of Octopus, so if there is another process I need to follow to get better/faster support, that would be great to know about as well.
Current Server Version:
Thanks for contacting us about this issue.
I suspect it may be an issue with the script reporting an exit code early, which Octopus interprets as a success. Is it possible for you to share the PowerShell script here? We should be able to determine what the issue might be and suggest some possible changes.
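To illustrate the kind of thing Sean is describing, here is a minimal Python sketch (purely hypothetical, not the user's actual script): if a wrapper starts a long-running task in the background and exits without waiting for it, the wrapper's own exit code is 0 while the work is still running, and Octopus would read that 0 as success.

```python
import subprocess
import sys

# Fire-and-forget: Popen returns immediately, before the child process
# finishes. A wrapper script that exits here reports success (exit code 0)
# even though the real work is still going.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(2)"])
started_still_running = child.poll() is None  # child has not finished yet

# Correct pattern: block until the child completes, then propagate its
# exit code so the deployment step only succeeds when the work is done.
exit_code = child.wait()
print(started_still_running, exit_code)  # → True 0
```

The same pattern applies in PowerShell: anything launched as a background job (or a sub-process that isn't waited on) lets the parent script return early with a clean exit code.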
Also, you can always reach out to us in support via our email at email@example.com if you wish to log a ticket in our system. We try to get any tickets, whether on this public forum or logged in our ticket queue, replied to as quickly as we can.
Hey @sean.stanway ,
I can’t post the whole script here, but I can send it to the support email. Let me do that now.
If you want I can also send you a private repository link that you can upload to, where only us in Octopus can see the files. It’s entirely up to you though!
Hey @sean.stanway ,
I have sent an email with the script to firstname.lastname@example.org with the following subject: ( Sean.Stanway : Octopus deploy step prematurely exits with success ).
Let me know once you get it.
Also, the script has Write-Verbose calls all over the place. After the download I would expect to see further verbose messages, like one indicating the download was complete (which is missing). Does this mean it hasn’t hit that line of code yet?
I’m stepping in for Sean as he has gone offline for the day, but I’m happy to help!
In taking a look through everything you provided, I’m wondering if you might be hitting an operating system/framework limitation in downloading such a large file in one go, which is causing the odd behavior/output in your deployment process.
To account for this, you could potentially chunk up this download by leveraging the byte-range fetch feature of the AWS S3 GetObject API endpoint to download this large file in smaller parts rather than as one large object.
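To sketch what byte-range chunking looks like (names here are illustrative, not from the original script): you split the object's total size into inclusive byte offsets, and each pair becomes an HTTP `Range` header for a separate GetObject request that can be retried on its own.

```python
def byte_ranges(total_size: int, chunk_size: int):
    """Yield (start, end) inclusive byte offsets covering the whole object."""
    for start in range(0, total_size, chunk_size):
        end = min(start + chunk_size, total_size) - 1
        yield start, end

# A 100 MB object fetched in 32 MB chunks -> four Range headers.
ranges = list(byte_ranges(100 * 1024**2, 32 * 1024**2))
headers = [f"bytes={s}-{e}" for s, e in ranges]
print(headers[0], len(headers))  # → bytes=0-33554431 4
```

Each chunk is then written to the target file at its own offset; if one request fails, only that chunk needs to be re-fetched rather than the whole backup.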
Another AWS feature that might help with this is S3 Transfer Acceleration, which can speed up transfers to and from S3 by as much as 50–500% (this is dependent on distance and file size).
I’ve linked the documentation for these features for your review, in case you would like to investigate these more on your side.
I hope that the information I provided above helps make your deployment process more resilient, but let me know how it goes in experimenting with these features.
Hey @britton.riggs ,
As a side note, this deployment does work but sometimes we fail with this error so it’s not a 100% failure rate. Just wanted to make sure this was understood.
In regards to the AWS stuff, I will look through these and get back to you on whether this is something we can/want to do.
Thank you for clarifying that this doesn’t happen every time. I didn’t explain it well in my initial response, but when transferring a large file in one request like this, your process runs the risk of failing or exhibiting odd behavior in the event of any network issues/latency between the target machine and AWS. By breaking the transfer into smaller chunks, you reduce the risk of the request failing or the data being corrupted mid-transfer.
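The per-chunk retry idea can be sketched like this (a hypothetical helper, assuming `fetch_chunk` stands in for a ranged GetObject call): retrying a small chunk after a transient failure is cheap, whereas a failed multi-gigabyte single transfer must restart from zero.

```python
import time

def fetch_with_retry(fetch_chunk, start, end, attempts=3, backoff=1.0):
    """Fetch one byte range, retrying transient errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fetch_chunk(start, end)
        except IOError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(backoff * 2 ** attempt)

# Demo with a flaky fake fetcher that fails once, then succeeds.
calls = {"n": 0}
def flaky_fetch(start, end):
    calls["n"] += 1
    if calls["n"] == 1:
        raise IOError("transient network error")
    return b"x" * (end - start + 1)

data = fetch_with_retry(flaky_fetch, 0, 1023, backoff=0.01)
print(len(data), calls["n"])  # → 1024 2
```

A real implementation would also verify each chunk (e.g. against a checksum or the expected length) before writing it to disk, which is where the corruption risk mentioned above is caught.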
I hope this additional information adds some context to my previous post.
Let me know how it goes in investigating/experimenting with those features, but I hope they help reduce the intermittent issue you currently see in your deployment process.
I had a quick look, but does this only work for clients outside of the AWS network? Currently, the file is being downloaded from S3 by an EC2 instance in our AWS VPC.
We are currently using the AWS PowerShell module and I don’t see an option for this.
Just stepping in for Britton while he’s offline.
AWS S3 Transfer Acceleration is great for improving data transfer speed between regions that are geographically distant from each other, while a VPC endpoint will improve the performance of data between EC2 and S3 in the same region. I’d definitely recommend checking out the AWS post here regarding the different options for getting the best performance!
I’ve also seen that the AWS PowerShell module might not be as performant as the AWS CLI; this could be outdated info by now, but it might be worth testing: amazon web services - AWS PowerShell Significantly Slower the AWS CLI - Server Fault
Let us know if you have any questions at all!
This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.