Why does my deployment target get modified?

We have deployment environments consisting of Azure VM Scale Sets. Each Azure VM has a Tentacle installed and shows up in the environment as a deployment target. The problem is that the Tentacles change their thumbprints from time to time, for reasons we don't understand. At that moment they become unhealthy, as the server cannot communicate with them anymore. We noticed that when this happens we get a very specific type of record in the Audit log - please see the picture.

So at some moment an automated process running under my own credentials modifies the deployment target and changes its thumbprint! Could you please give us any clue as to what process it could be and why it is doing this?

Thank you!
Konstantin

Hi Konstantin,

Thanks for getting in touch. I must admit my experience with Azure Scale Sets is limited, but what you’re describing sounds like it would be typical of the VMs being recreated and getting re-registered with a different thumbprint.
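For context, the kind of provisioning script a scale set's custom script extension typically runs when a new VM comes up looks roughly like this (just a sketch; the instance name, server URL, API key, environment and role below are placeholders, not details from your setup):

  # create and configure a new Tentacle instance on the fresh VM
  Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config"
  Tentacle.exe new-certificate --instance "Tentacle"
  Tentacle.exe configure --instance "Tentacle" --port 10933 --noListen "False"
  # register it with the Octopus Server using an API key
  Tentacle.exe register-with --instance "Tentacle" --server "https://your-octopus-server" --apiKey "API-XXXXXXXXXXXX" --environment "Production" --role "web" --comms-style TentaclePassive --force
  # install and start the Tentacle windows service
  Tentacle.exe service --instance "Tentacle" --install --start

The new-certificate step generates a fresh certificate on every new VM, so each registration arrives with a different thumbprint, and --force allows it to overwrite an existing target, which would show up in the audit log as the target being modified under whichever user owns the API key.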

Could you share any details of how the scale set is configured? Given the audit entry shows up as you, do you have an API key that you've entered somewhere in the VM setup? From what I can make out in the screenshot, it's using an API key named Azure. The call was also made with OctopusClient 5.0.0, which is fairly recent. Would you be able to include the rest of the details from that line? (If you hover over it, it will show the complete user-agent details for the origin of the call.)

Also, just to make sure I’m checking the right code, which version of Octopus are you currently running?

Regards
Shannon

We are running Octopus v2018.10.1 LTS.
We suspected the VMs were being recreated and re-registered with a different thumbprint, but that does not seem to be the case. Instead, the thumbprint seems to change during application of our custom machine policy, which is an exact copy of the default machine policy.
Those “API key created” events at random times seem to be caused by machine policy applications, but I’m not aware of any API key called Azure.

It shows the Windows Server version and then NoneOrUnknown at the end. Not very descriptive.

The user agent string carries information about the caller’s context. From what you’ve provided, we know it was something using v5.0.0 of the OctopusClient library, running on Windows Server, and that it wasn’t a known build server type (the NoneOrUnknown relates to whether OctopusClient can detect that it’s running in TeamCity, Bamboo, Azure DevOps etc.). There would also be an octo value in there if the call came from octo.exe rather than a hand-written script calling OctopusClient directly. So if NoneOrUnknown is the last thing you see, it’s not octo.exe; it’s a script.

Could you try switching on request logging to see if you can correlate the IP addresses of the callers? It might show that it’s the VMs, or something else in Azure, firing when the scale set changes.

Could you also check your profile and have a look under the API keys? Based on that audit log there should be one in there called Azure. If there is, removing it would likely highlight what’s using it, as whatever is making the calls will start failing.
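If it’s easier to check from the command line, the same information should be available through the REST API, something along these lines (the server URL, API key and user id below are placeholders, and I’m assuming the usual /api/users/me and /api/users/{id}/apikeys routes here):

  # find your own user id
  curl -H "X-Octopus-ApiKey: API-XXXXXXXXXXXX" https://your-octopus-server/api/users/me
  # list the API keys registered against that user
  curl -H "X-Octopus-ApiKey: API-XXXXXXXXXXXX" https://your-octopus-server/api/users/Users-1/apikeys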

– Could you try switching on the request logging to see if you can correlate the IP address of the callers?

How would I do that?

Sorry, you run Octopus.Server.exe configure --requestLoggingEnabled=true on the command line. I think the service needs a restart to pick this change up too. The data should end up in the server log.
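Spelled out, that’s something like the following, run from the Octopus Server install directory (this assumes the default instance; add --instance <name> if you’re running a named one):

  Octopus.Server.exe configure --requestLoggingEnabled=true
  # restart the service so the setting takes effect
  Octopus.Server.exe service --stop
  Octopus.Server.exe service --start

Once it’s back up, the incoming requests, including the callers’ IP addresses, should start showing up in the server log.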