Hello, I’m having a similar issue to this topic. I noticed that the issue was fixed according to the instructions on this GitHub Page. However, we aren’t using a reverse proxy server, so, as far as we know, we’re unable to solve this issue by adding an IIS rewrite rule. IIS isn’t even running on the Azure Government-hosted Windows server where our Octopus Deploy service is running. Our repository is hosted in BitBucket. On the process page within Octopus, I’m able to select any branch from the drop-down list that doesn’t have a forward slash in it. Any ideas on what might be causing this issue for us? Thanks
Thanks for reaching out and I’m sorry to see you’re running into this issue as well.
Could you let me know which version of Octopus Deploy you’re running locally? I’m going to see if I can reproduce this behavior and speak to the engineers.
I note that you don’t have a reverse proxy server, but just to be thorough: is there any device in between BitBucket and Octopus that might be rewriting the URL? I assume the Azure Gov region is fairly locked down; I’m just trying to get a full picture here. I appreciate the information.
Looking forward to hearing back.
Thanks for looking into the issue for us.
Our Octopus Deploy Server version is: 2022.1.2412.
I discussed the reverse proxy server with my IT team and as far as they know there isn’t anything between BitBucket and Octopus that is rewriting the URL.
I uploaded our logs to the link you provided.
Thanks for the logs; they were helpful.
It does look like verbatim the same issue you cited in your first post, minus the reverse proxy settings for IIS. If you haven’t already, I’m wondering if you’re able to log in to the Octopus machine locally to test whether the URL still returns a 404 for that branch. It might shed some light on whether anything between your workstation and the Octopus server itself is causing the feature/renaming branch name not to be escaped.
We’ve done some testing here, and when escaped properly the GET from Octopus typically looks like this:
"HTTPS" "GET" to ""/api/Spaces-1/projects/Projects-21/feature%2fnet6-upgrade/deploymentsettings" completed with 200 in 00:00:00.0401089 (040ms) by "UserName"
Your logs are resolving in this format:
"HTTP" "GET" to "0.0.0.0""/api/Spaces-#/projects/Projects-#/git/branches/feature/renaming-long-paths" completed with 404 in 00:00:00.0074341 (007ms) by "UserName"
Looking forward to hearing back.
Unfortunately, I do receive the same error after logging in to our Octopus machine. It looks exactly like the screenshot I sent in my original post (including the console log errors).
Although this might not help us solve the problem, it’s probably worth noting that we also have an Azure DevOps build process that uses the OctopusCreateRelease task. That task is essentially utilizing the octo.cmd create-release command. In that command, we’re telling Octopus which branch to use to get the “Config to Code” configuration. However, we’re seeing the same issue there too:
I’m just stepping in for Garrett as he’s offline at the moment. Thanks for attempting to access your branch directly from the Octopus server. We were hoping that would keep the traffic local; however, depending on your network configuration, that may not be the case.
The 404 is coming directly from the Octopus server, so it’s not surprising that you’re seeing it in your ADO pipeline as well. All we can tell on this side is that something on the network seems to be decoding the URL when the request is sent to the Octopus server.
Aside from the IIS reverse proxy mentioned in the original GitHub issue, this type of behavior can be caused by firewalls or other traffic-inspection devices. If it’s not too much trouble, could your network team take another look at any inbound devices that may be sitting in front of your Octopus server? I will also start a conversation with our engineering team to see if we can find any other possible causes of this behavior.
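To illustrate what a decoding device in the path does to the request, here is a toy Python sketch of a router that, like the API route in the logs above, expects the branch to arrive as a single percent-encoded path segment. The route prefix and IDs are illustrative, not Octopus internals:

```python
from urllib.parse import unquote

def route_matches(path: str) -> bool:
    # Toy router: accepts exactly one branch segment after the
    # ".../git/branches/" prefix. Hypothetical, for illustration only.
    prefix = "/api/Spaces-1/projects/Projects-21/git/branches/"
    if not path.startswith(prefix):
        return False
    branch = path[len(prefix):]
    return "/" not in branch  # must be a single path segment

good = "/api/Spaces-1/projects/Projects-21/git/branches/feature%2frenaming-long-paths"
print(route_matches(good))  # True -> 200

# An intermediary (e.g. a rewriting proxy) decodes the URL in transit:
bad = unquote(good)
print(route_matches(bad))  # False -> 404, extra "/" broke the route
```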
I look forward to hearing back and please let us know if you have any questions for us.
Thanks for your response. I’m also assuming there may be something sitting between the inbound traffic and the Octopus server, but I’m unsure how that could be, since the Octopus server is hosted in the Azure cloud. We’ll take a closer look at the configuration of the virtual machine the Octopus software is running on to see if we can find anything that might sit between the two. I’ll report back with our results once we’ve taken a second look.
I installed Wireshark on our Octopus server and, by looking at the IP addresses in the incoming requests, found that we do in fact have an IIS reverse proxy website set up on a different server in the Azure cloud. It was probably set up long ago by someone at our company who no longer works for us. I wasn’t expecting the reverse proxy website to be located on a different server (apparently my IT team didn’t think it would be there either). I followed the steps (found here: GitHub Page) for updating the reverse proxy rewrite rule, and everything is working now.
Thanks for your assistance!
Great detective work! I’m glad you were able to track that down and get everything working again. Let us know if you ever need anything else down the road.
This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.