We discovered this problem when one of our applications was accidentally deployed to Production instead of to QA (!). We use separate DNS names for the QA and Prod environments. For example, “SQL001.company.com” maps to a machine in Production, while the corresponding machine in QA is reached via “SQL001.qa.company.com”.
In Octopus we have two different environments (Prod and QA), with machines whose URLs use these DNS names. So the machine in the Prod environment uses https://sql001.company.com:10933/ and the machine in QA uses https://sql001.qa.company.com:10933/.
So what happened was that an application deployed to the QA environment ended up on the machine in the Prod environment. At first I almost blamed the people who had created the DNS entries, but fortunately, before I did, I noticed on the “Connectivity” tab of the QA machine (in Octopus) that the address actually in use was the Production machine’s URL (DNS name), not the URL specified in the machine settings!
After changing the URL a couple of times on both the QA machine and the Prod machine, it seems Octopus is mixing up which URL it should be using. Even deleting and recreating the machine in QA would not make it use the correct URL with the correct DNS name. It apparently has something to do with the URLs being so similar, since creating a machine with a completely different URL works. It should also be noted that we have about six machines with this setup, and two of them are affected by this bug.
Is this a known problem? Is it fixed in later versions?
We are using Octopus 2.5.10.567.
Hi Matsmortensen,
Thanks for getting in touch!
There was a known issue in 2.5 where, if machines shared the same unique identifier or “SQUID”, the server could get confused about which one to deploy to. As a result, we prevented duplicate SQUIDs in version 2.6 and beyond. This often happens when machines are cloned as VMs. You can find more information in the Installation section of our Installing Tentacles documentation.
To verify this is the problem, can you send a screenshot of the settings for the affected machines, as well as the Tentacle config files (usually C:\Octopus\Tentacle\Tentacle.config on the Tentacle machines)?
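As a quick way to check for the duplication described above, something like the following could compare the SQUID recorded in two Tentacle.config files. This is only a sketch: the config layout shown is an assumption for illustration (the real file is XML and the key name may differ between Octopus versions), and the two sample files stand in for configs copied from the Prod and QA machines.

```shell
#!/bin/sh
# Hedged sketch: detect a duplicated SQUID between two Tentacle.config files.
# The file contents below are an assumed stand-in for the real XML config.

# Sample "config" files standing in for the Prod and QA machines.
printf '<set key="Squid">SQ-EXAMPLE-1234</set>\n' > prod-tentacle.config
printf '<set key="Squid">SQ-EXAMPLE-1234</set>\n' > qa-tentacle.config

extract_squid() {
  # Pull the first SQ-... token out of the file.
  grep -o 'SQ-[A-Za-z0-9-]*' "$1" | head -n 1
}

prod_squid=$(extract_squid prod-tentacle.config)
qa_squid=$(extract_squid qa-tentacle.config)

if [ "$prod_squid" = "$qa_squid" ]; then
  echo "DUPLICATE SQUID: $prod_squid"
else
  echo "SQUIDs differ: $prod_squid vs $qa_squid"
fi
```

On real machines you would point `extract_squid` at the actual Tentacle.config paths instead of the generated samples.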
Kind Regards,
Damo
Thanks for the quick response! We are cloning the machines from Prod to QA, so it seems this is the explanation. The Tentacle configs are identical on both servers (as is everything else), which means the SQUIDs are duplicated. I hadn’t noticed the warning about cloning VMs in your documentation.
It doesn’t quite explain why only two of the eight configured machines are misbehaving, but it might be a little random which tentacle the Octopus server decides to communicate with.
You say duplicate SQUIDs are prevented in version 2.6 and beyond. What happens, then, if you clone machines? Do you get an error?
I guess the easiest way to fix this right now is to reinstall the Tentacles in our QA environment (I assume this will give us new SQUIDs?), but we also need a way to automate this when servers are cloned from Production. I know we can run a PowerShell script, but it is still yet another thing we need to do in QA after cloning.
Apparently a reinstall does not update the SQUID or certificate, so I ended up running the following script, which works:
Tentacle.exe --service --stop
Tentacle.exe new-squid
Tentacle.exe new-certificate
Tentacle.exe --service --start
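To make those four steps repeatable after each clone, a small wrapper along these lines could help. This is a sketch under assumptions, not a tested recipe: the Tentacle.exe path and the DRY_RUN switch are my inventions, and by default it only prints the commands instead of running them.

```shell
#!/bin/sh
# Hedged sketch: re-key a freshly cloned Tentacle so it gets a new SQUID
# and certificate. TENTACLE path and DRY_RUN switch are assumptions.
TENTACLE="${TENTACLE:-C:/Octopus/Tentacle/Tentacle.exe}"
DRY_RUN="${DRY_RUN:-1}"   # set to 0 on a real machine to actually run the commands

run() {
  # In dry-run mode, print the command instead of executing it.
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

rekey() {
  run "$TENTACLE" --service --stop
  run "$TENTACLE" new-squid
  run "$TENTACLE" new-certificate
  run "$TENTACLE" --service --start
}

rekey
```

Wiring this into whatever provisioning step already runs after cloning would remove the manual step mentioned above.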
Hi Matsmortensen,
Thanks for the quick reply and I’m glad you got it sorted out!
Thanks,
Damo