Proper way to deploy to Windows Cluster with multiple resources

We are having trouble properly configuring a deploy environment when we need to deploy to a clustered resource. I believe it is not actually possible to achieve this in the current version. Here are our requirements:

  • We have 2 nodes in the cluster, with multiple roles. Each of these roles can be on either node at any time. Each of these represents a deploy target for us.
  • The Tentacles should be able to run in polling mode.

From my understanding, we need a separate instance corresponding to each role. The question is: how do we configure things so that the Tentacle is aware of which instances are currently active on a node, depending on which node the role is assigned to? We cannot put the Tentacle.exe into the role resources, because we have multiple roles and the Tentacle.exe is common to all instances (it lives on the C: drive).

Any ideas on this? Should I move it to the Problems forum?

Hi, sorry for the delayed reply.

First I’ll say that it’s been a while since I did anything with Windows clusters. Octopus/Tentacle doesn’t natively do anything with clusters (they aren’t cluster aware), so there are really two ways people approach clustering with Octopus.

  1. Install Tentacle and make it run on only the active node in the cluster, treating the cluster as one single “machine” in Octopus. I’m actually not sure how successful this has been.
  2. Install Tentacle on both nodes in the cluster, and in Octopus treat them as two completely separate machines. The fact that they are clustered doesn’t really matter.

Assuming you go for option 2, just install the two Tentacle instances, register them as machines in Octopus, and tag them with the appropriate roles. Treat them as if they aren’t in a cluster from Octopus’ point of view.
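If it helps, the per-node registration is just the normal command-line setup. A rough sketch (server URL, API key, environment, and role below are placeholders, and the exact flags depend on your Tentacle version; polling mode shown since you mentioned it):

```powershell
# Run on each node, from the Tentacle install directory.
# All values are placeholders - adjust for your environment.
.\Tentacle.exe register-with --instance "Tentacle" `
  --server "https://your-octopus-server" --apikey "API-XXXXXXXX" `
  --environment "Production" --role "app-server" `
  --comms-style TentacleActive --server-comms-port 10943  # polling mode
```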

Hope this helps, let me know if this makes sense or if I’ve misunderstood anything.

Paul

Paul,

Thank you for the response. Unfortunately, this does not fit our requirements, and in fact, doesn’t fit the idea of having a cluster in general. In a cluster, it is possible to have two (or more) active nodes, and services (roles) can be running on any node at any time. This means that Octopus can never know which machine to deploy to at any point in time. We have currently worked around this by using a PowerShell script to fail all services over to one node for the purposes of deployment, but this is a hack, to put it simply.
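For illustration, our workaround is essentially this kind of pre-deployment step (a sketch only - "NODE1" is a placeholder, and it assumes the FailoverClusters PowerShell module):

```powershell
# Hack: fail every group over to one designated node before deploying,
# so Octopus has a single predictable target. "NODE1" is a placeholder.
Import-Module FailoverClusters

Get-ClusterGroup |
    Where-Object { $_.OwnerNode.Name -ne "NODE1" } |
    ForEach-Object { Move-ClusterGroup -Name $_.Name -Node "NODE1" }
```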

What we would need from Octopus is the ability to install the Tentacle EXE as a service in each cluster role, so that the EXE moves with the cluster role if it fails over or is moved to another node. In this case, each cluster role also has its own DNS name and IP, which move along with the resources. So for the Tentacle, it could be pretty transparent.

As it stands, though, it appears to be possible to install only one EXE per machine. Or is there another way?

Just to make sure I understand the scenario:

I’d imagined Tentacle to be like Remote Desktop - e.g., just because you have a cluster for IIS or something else, you might still need to be able to remote desktop into each individual machine. It sounds like in your case, it’s important for Tentacle to run on whichever machines are the active nodes for that cluster role. Thanks for the explanation!

Architecturally this isn’t something we could have supported in Octopus 2.6, but it is something we should be able to support in 3.0. After we ship the 3.0 pre-release, we’ll investigate exactly what is involved in making Tentacle cluster aware.

Paul

Dear Paul,

Basically, it seems you’ve understood correctly. While I do need to RDP to a node from time to time, that is at a lower level than where I believe Octopus should be aware. The cluster presents a group of resources (including a drive, IP, and domain name), and it is at this level that the Tentacle should be aware of itself. The restriction of installing one Tentacle per machine unnecessarily removes a layer of abstraction that would help solve the clustering issue. Looking forward to seeing progress here! Thanks!

Ryan,

The trouble is, it depends on what you are trying to achieve. I’ve tried (in anger) a couple of different ways of doing this, but the key is to know up front whether you are deploying onto the cluster nodes, or ‘into’ a cluster resource group (i.e. to a cluster service like SQL Server).

Sometimes you are deploying something that actually has to be on each node (e.g. assemblies into the GAC). In this case you’ll need a separate Tentacle per node, as per a normal Octopus deployment.

Other times you are deploying something ‘into’ a cluster resource itself. Let’s keep it simple and say we’re just dropping a file onto a disk that’s owned by a resource group. In that case the deployment, as you say, has to run on the node that’s active for that group (otherwise the disks aren’t mounted). For this scenario, the simplest option seems to be to enrol the Tentacle as a cluster service. You’ll need to install the Tentacle across both (all) nodes, and ensure each Tentacle shares the same private key (Paul wrote a note on that somewhere). Registering that service as a cluster service ensures that only one instance of the Tentacle service is running at any one time, so as far as the Octopus server is concerned, it only ever sees one Tentacle.
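For example, assuming the Tentacle’s windows service is already installed on both nodes, the cluster-side registration looks something like this (group, resource, and service names are placeholders - check the real service name with Get-Service first):

```powershell
# Add the instance's windows service to the resource group as a
# Generic Service, so the cluster manager starts it only on the
# node that currently owns the group. All names are placeholders.
Add-ClusterResource -Name "Tentacle (GroupA)" `
    -ResourceType "Generic Service" -Group "GroupA"

Get-ClusterResource "Tentacle (GroupA)" |
    Set-ClusterParameter -Name ServiceName -Value "OctopusDeploy Tentacle: GroupA"

Start-ClusterResource -Name "Tentacle (GroupA)"
```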

If you want to do this for multiple cluster resource groups, you have to install multiple (side-by-side) tentacles, one for each resource group, all of which have to appear as separate tentacles to the Octopus server.

You can see how this gets messy and complicated quite quickly. Plus, if you also need to target the nodes themselves (the GAC scenario), you still need the per-node tentacles, which is a pain.

One get-out is if you can deploy into the cluster resource group remotely, such as when you are deploying to a cluster-aware service like SQL Server. In this case it doesn’t matter where the deployment runs from, so you can do it from the ‘node’ tentacles, though you will have to be careful to either handle the deployment running ‘twice’, or avoid it (by having a role for the SQL deployment which only one node ever holds - it depends on whether you want the ability to deploy to be itself fault-tolerant).

So my TL;DR:

  • there are scenarios where a Tentacle might need to deploy to a cluster node
  • there are scenarios where a Tentacle might need to deploy to a cluster resource group
  • there are scenarios where you can deploy to a cluster resource remotely, which is easier to handle, but can have ‘doubling up’ issues

At present, catering for all of these cleanly requires separate side-by-side Tentacle installations, and any ‘cluster aware’ Tentacle functionality would have to cope with at least the first two scenarios (I have separate thoughts about making the third work better).

Piers,

Thank you for the great overview. I completely agree. The point where we hit a wall was:
“If you want to do this for multiple cluster resource groups, you have to install multiple (side-by-side) tentacles, one for each resource group, all of which have to appear as separate tentacles to the Octopus server.”

At least in version 2.6, it was not possible to install Tentacles side-by-side: attempting to re-run the MSI would not give you the option of having two completely separate Tentacles on the same machine. Installing two instances (within the same installation) was of course possible, but this doesn’t help when you need to separate the installations completely to allow a resource group to move between nodes.

I’m not sure if this changed in 3.0, but I don’t recall reading anything about it.

We are actually using Octopus 2.4. When I say ‘installing Tentacles side-by-side’, it’s the instances that I’m referring to.

It’s been a while since I looked at this, but if I remember correctly, all I did was use the command-line tool to register the extra Tentacle instances (because I wanted to re-use the key from the other node).
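From memory, the steps were along these lines (a sketch only - instance names, paths, and the port are placeholders, and the exact commands and flags vary between Tentacle versions):

```powershell
# Run from the Tentacle install directory. One extra instance per
# resource group; all values here are placeholders.
.\Tentacle.exe create-instance --instance "GroupA" --config "C:\Octopus\GroupA\Tentacle.config"
.\Tentacle.exe configure --instance "GroupA" --home "C:\Octopus\GroupA" --port 10934

# Re-use the other node's key. import-certificate is the newer way to
# do this (assumes you've exported that node's certificate to a .pfx);
# on older versions the certificate had to be copied across by hand.
.\Tentacle.exe import-certificate --instance "GroupA" --from-file "C:\Keys\GroupA.pfx"

# Install and start the windows service for this instance.
.\Tentacle.exe service --instance "GroupA" --install --start
```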

Someone from Octopus might correct me here, but my understanding is that the ‘Tentacle Install’ pretty much just extracts the management UI and drops the binaries on the disk. The actual ‘Tentacle’ (as seen by the Octopus server) is the instance - a collection of registry settings plus a discrete Windows Service registration.

All Tentacle instances on a node will share the same binary, but that doesn’t prevent you from taking the registered service which pertains to a specific instance and associating it with a resource group.

My bad, but the terminology is a bit confusing here.

Yes, it is quite confusing. :)

The problem here comes from the situation described earlier, where a resource may be present on only one node at a time. If the tentacles are installed identically on both nodes, how does Octopus correctly deploy to only the active node? It would always try to deploy to both, and one would always fail. So essentially, the Tentacle installation, complete with executable and registration, needs to move along with the resource group.

Ryan:

No. If you install a Tentacle instance onto both nodes, and add the Windows service associated with that Tentacle instance to the cluster resource group (as a ‘generic service’), then the cluster manager will ensure that only one instance of that windows service is ever running at any one time, and it will be running on the active node for that resource group.
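You can convince yourself of this by failing the group over and checking both nodes (illustrative only - node, group, and service names are placeholders, and PowerShell remoting is assumed):

```powershell
# Fail the group over, then check that the Tentacle service is
# running on the new owner node only. All names are placeholders.
Move-ClusterGroup -Name "GroupA" -Node "NODE2"

Invoke-Command -ComputerName NODE1, NODE2 {
    $svc = Get-Service "OctopusDeploy Tentacle: GroupA" -ErrorAction SilentlyContinue
    "{0}: {1}" -f $env:COMPUTERNAME, $svc.Status
}
```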

You can do this with two totally independent Tentacle instances, but I find the most useful approach is to set up both Tentacle instances using the same private key (i.e. a command-line install). You can then register them as a single entity with the Octopus server, using an IP address associated with the cluster resource group. The Octopus server never knows there are ‘two’ Tentacle instances involved, because they share the same key, appear at the same network address, and only one is ever running at the same time.
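The registration then happens once, against the group’s address, roughly like this (a sketch - host name, server, API key, and role are placeholders, and this is the newer command-line syntax):

```powershell
# Register against the resource group's DNS name, so the Octopus
# server reaches whichever node currently owns the group.
# Run from the Tentacle install directory; values are placeholders.
.\Tentacle.exe register-with --instance "GroupA" `
  --server "https://your-octopus-server" --apikey "API-XXXXXXXX" `
  --environment "Production" --role "groupa-service" `
  --publicHostName "groupa.yourcluster.local" `
  --comms-style TentaclePassive
```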

In this kind of topology you will need to make sure that the package you are installing doesn’t just sit in the default install location, however - either move the default drop location for that Tentacle to a mount point that’s associated with the cluster resource group, or have the deployment itself copy artifacts somewhere else. Otherwise, after failover, what you’ve ‘installed’ won’t be there.
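Concretely, something like this (placeholder paths, with ‘S:’ standing in for a clustered disk owned by the group):

```powershell
# Move the instance's application (package drop) directory onto a
# clustered disk, so deployed files follow the group on failover.
.\Tentacle.exe configure --instance "GroupA" --app "S:\Octopus\Applications"
```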

At this point I’m going to have to ask what are you trying to achieve in your install? Then we can stop talking in the abstract case.

I’ll try to explain with an example, plus a few questions that highlight the issues. I’ll do my best to stick to the actual terminology from MS:
https://msdn.microsoft.com/en-us/library/aa372869(v=vs.85).aspx

Let’s say you have a Windows cluster comprised of two nodes (physical machines). On each node, you have two groups, each comprised of a Generic Service resource, a Physical Disk resource, an IP Address resource, and a DNS Name resource. Note that none of these groups are duplicates or copies of each other: they each represent distinct services to be deployed.
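(For concreteness, each such group is the kind of thing you’d get from something like the following - all values are placeholders; Add-ClusterGenericServiceRole creates the network name and IP resources along with the service.)

```powershell
# Create a group containing a Generic Service, a Physical Disk,
# an IP Address, and a DNS (network) Name. Values are placeholders.
Add-ClusterGenericServiceRole -ServiceName "MyServiceA" -Name "GroupA" `
    -StaticAddress 10.0.0.21 -Storage "Cluster Disk 2"
```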

Also assume that each of these groups corresponds to a different Octopus Deploy role. In other words, they will be deployed to by different Octopus Deploy processes/projects.

Any of these groups can be on either node at any time.

If I follow your suggestion and configure both nodes with the same Octopus configuration, Octopus only sees one entity, as you correctly pointed out. This is a problem, though, because the question then becomes: which node will receive the actual deployment? And since each group has its own IP, which IP will be used for the Tentacle communication?

Reading your response, it seems your assumption is that we will only have one cluster resource group. Of course, the described solution would work for that case, since the Tentacle moves with the one and only resource group. Once another cluster group is added, which can be on a separate node from the first, the solution falls apart (since Octopus itself won’t let you have two installations (EXEs) on the same node). In fact, this is exactly what we attempted, and the solution didn’t work for us.

EDIT: As I am rereading, I see your earlier comment: “If you want to do this for multiple cluster resource groups, you have to install multiple (side-by-side) tentacles, one for each resource group, all of which have to appear as separate tentacles to the Octopus server.” So it seems we are in complete agreement, EXCEPT that the actual installation of side-by-side tentacles is not possible due to the way that the Tentacles register themselves on the server. If there is a way to do this (supported by Octopus), please send details, and we can try this.