Octo 3.4.0-beta0002 feedback - intersecting targets, roles, and tenants

We've been experimenting with 3.4 beta 2, and the addition of tenants as a first-class dimension will be very useful and will greatly simplify our environments.

However, in a few situations it has complicated things.

We have 10 clients, which we will now track as tenants. In our enterprise we have 5 services, and we track the deployment of each of them with roles, applied to steps in the project and to deployment targets in the environments. Depending on the environment, these services might run on one common server, 2 servers, or a handful of servers (but in no case is there a separate server for every client).

Currently, in v3.3, each Octopus environment maps to an environment/client combination, so the servers that host these services appear more than once as deployment targets, across multiple Octopus environments, and each instance has the roles assigned that are appropriate for that client.

In 3.4, when we create a deployment target, we assign it one or more roles and one or more tenants (or tenant tags).

This doesn't quite work for the server(s) that host these 5 services. It is NOT the case that every tenant assigned to such a deployment target runs all five of the services (as defined by the roles). It's more like Tenant 1 uses services 1, 2, and 3; Tenant 2 uses services 3, 4, and 5; and Tenant 3 uses services 1, 3, and 5; but they all run on the same server.

What would be useful to us is the ability to assign tenants to each role on a deployment target, e.g. Role 1: Tenants A, B, C; Role 2: Tenants C, D, E; Role 3: Tenants A, C, E; and so on. Or the reverse, e.g. Tenant A: Roles 1, 2, 3; Tenant B: Roles 3, 4, 5; Tenant C: Roles 1, 3, 5.
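To make the idea concrete, here is a minimal sketch in plain Python of what a per-role tenant mapping on a single deployment target might look like, and how a deployment for a given tenant/role pair would decide whether the target participates. All the names are hypothetical; nothing here corresponds to an existing Octopus API.

```python
# Illustrative model of the *proposed* per-role tenant assignment on one
# deployment target. All names are made up for the example.

deployment_target = {
    "name": "shared-app-server-01",
    # Proposed shape: each role on the target carries its own tenant list,
    # instead of one flat tenant list for the whole target.
    "roles": {
        "role-1": {"Tenant-A", "Tenant-B", "Tenant-C"},
        "role-2": {"Tenant-C", "Tenant-D", "Tenant-E"},
        "role-3": {"Tenant-A", "Tenant-C", "Tenant-E"},
    },
}

def target_applies(target, role, tenant):
    """Would this target take part in a deployment of `role` for `tenant`?"""
    return tenant in target["roles"].get(role, set())

# Tenant-A runs the service behind role-1 on this server...
assert target_applies(deployment_target, "role-1", "Tenant-A")
# ...but not the one behind role-2, even though the same box hosts it for others.
assert not target_applies(deployment_target, "role-2", "Tenant-A")
```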

Our workaround looks like it will be to create multiple deployment targets for the same physical server and apply one role to each target, each with its own combination of tenants (or tags). This should work, but it isn't ideal, because Octopus doesn't recognize that these multiple targets are really the same physical server. Whenever there is an environment-wide task (health check, Tentacle upgrade), it sends the commands to the same server multiple times, which often causes problems, and the targets get out of sync (upgrade a Tentacle through one target, and the other targets don't realize they've been upgraded).

This isn't a must-have, but if it could be considered for a future release, that would be great. Thanks.

Hi Mike,
It's great to hear that you have already started thinking about how your existing architecture will fit into the 3.4 multi-tenanted way of doing things. I certainly hope you will find it simpler to use and manage your complex deployments. We have purposefully tried to keep the relationship between tenant-tags and other entities in the system as simple as possible, to make it obvious what the results of any given configuration will be. I can imagine it getting quite confusing (and difficult to manage) if some tags were assigned to a specific role for a specific machine while others went across all roles.

Unless I am mistaken about your set-up, perhaps you could consider creating a set of tenant-tags that describe your services and assigning these to your tenants and deployment steps.
So, for argument's sake, you could create:
Service-A, Service-B, Service-C.

And assign them so that:
Tenant-A => Service-A & Service-C
Tenant-B => Service-B & Service-C

The deployment steps could then be scoped to the appropriate tag(s).
In this way the step for Service-A will only run when a deployment is being run for Tenant-A, and when a deployment takes place for Tenant-B, the Service-A-scoped steps will be skipped.
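A rough sketch of that behaviour, in plain Python with hypothetical names (it is only meant to show which steps would run, not the Octopus object model):

```python
# Illustrative sketch of the tag-based approach: tenant-tags describe services,
# tenants carry the tags, and deployment steps are scoped to the tags.

tenant_tags = {
    "Tenant-A": {"Service-A", "Service-C"},
    "Tenant-B": {"Service-B", "Service-C"},
}

# Each deployment step is scoped to the tag(s) for the service it deploys.
step_scopes = {
    "Deploy Service A": {"Service-A"},
    "Deploy Service B": {"Service-B"},
    "Deploy Service C": {"Service-C"},
}

def steps_for(tenant):
    """Steps that would run when deploying for the given tenant."""
    tags = tenant_tags[tenant]
    return [step for step, scope in step_scopes.items() if scope & tags]

print(steps_for("Tenant-A"))  # ['Deploy Service A', 'Deploy Service C']
print(steps_for("Tenant-B"))  # ['Deploy Service B', 'Deploy Service C']
```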

Does this sound like it might work in your case? If you think this still has other knock-on problems, let me know and we can have another think about how else 3.4 might work for you. Perhaps there is a scenario we haven't yet thought through, and if that is the case we can always revisit our approach.

Thanks for getting in touch regarding your particular problem. I certainly hope we can help simplify your deployments.
Cheers,
Rob

But it's more complicated than that. Service A might run on Server 1 for Tenant A and on Server 2 for Tenant B, so a single Service-A tag, carrying both Tenant A and Tenant B, would over-assign tenants to each server.

You might then suggest tags named something like ServiceA-Server1 (assigned Tenants A and B) and ServiceA-Server2 (assigned Tenants C and D). But if we then also have ServiceB-Server1 and ServiceB-Server2 with different tenants in each than we had for Service A, each server will get the union of all tenants from all of the tags assigned to it, which won't be right. (Tenant B gets assigned to Server 1 for Service A, but should not have been for Service B.)
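Spelled out as a small worked example (plain Python, with hypothetical tenant assignments): a target's tenants end up being the union of the tenants on all of its tags, so a tenant brought onto the server by one service's tag leaks into the other service hosted on the same server.

```python
# Illustrative only: why per-service/per-server tags over-assign tenants.
# The tenant assignments below are hypothetical.

tag_tenants = {
    "ServiceA-Server1": {"Tenant-A", "Tenant-B"},
    "ServiceB-Server1": {"Tenant-A"},  # Service B on Server 1 is NOT for Tenant B
}

# Server 1 hosts both services, so its single deployment target carries both
# tags (and both roles); the tenants it matches are the union across the tags:
server1_tenants = set().union(*tag_tenants.values())
print(sorted(server1_tenants))  # ['Tenant-A', 'Tenant-B']

# Tenant-B belongs on Server 1 only for Service A, but because the same target
# also carries the Service B role, Tenant-B now looks eligible for Service B
# deployments on Server 1 as well -- the over-assignment described above.
```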

Due to the complexity, it looks like our best solution will be to create more than one deployment target for each server, one per service, and assign the appropriate tenants/tags to each of them, specific to the service.

We've used this solution in our current Octo v3.3 environments, and it works, but it has the side effect of making Octopus think that these are separate servers when they are in fact one. So things like health checks and upgrades are attempted more than once if done en masse; and if done individually, an upgrade on the first instance of the deployment target will update that deployment target, but the other deployment targets for the same server will not realize they have been updated until their next health check.

Perhaps there could be an enhancement to make Octo more aware of multiple deployment targets that point at the same server, so that server-level activities are not launched more than once and the status of all targets is updated when they share a common server.
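As a rough illustration of what we mean (plain Python; grouping targets by a hypothetical Tentacle thumbprint as the "same physical server" key, which is an assumption on our part, not how Octopus actually works):

```python
# Illustrative sketch of the requested behaviour: group deployment targets by
# the physical machine they point at, run server-level tasks (health check,
# Tentacle upgrade) once per machine, and write the result back to every
# target that shares that machine so none of them go out of sync.

from collections import defaultdict

targets = [
    {"name": "server1-serviceA", "thumbprint": "AA11", "status": "unknown"},
    {"name": "server1-serviceB", "thumbprint": "AA11", "status": "unknown"},
    {"name": "server2-serviceA", "thumbprint": "BB22", "status": "unknown"},
]

def check_machine(thumbprint):
    # Stand-in for the real health check / upgrade call against one server.
    return "healthy"

def run_health_checks(targets):
    by_machine = defaultdict(list)
    for t in targets:
        by_machine[t["thumbprint"]].append(t)

    for thumbprint, same_machine in by_machine.items():
        result = check_machine(thumbprint)   # one call per physical server
        for t in same_machine:               # ...shared by all of its targets
            t["status"] = result

run_health_checks(targets)
# Only two checks run (AA11 and BB22), not three, and both server1-* targets
# end up with the same, consistent status.
```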

Hi Mike,
It does sound like a bit of an edge case, and at the moment, for simplicity's sake, the relationship between tenants and targets in 3.4.0 will probably stay as it currently is in beta 2, since that reduces the chance of confusing users, both in the UI and in the logic. More than likely, once the full RTW is released we will get other suggestions such as your own that we hadn't explicitly catered for, and we will likely continue to iterate on this feature in the future. Feel free to add your thoughts and ideas to UserVoice so that we can consolidate all these suggestions and gauge how much support they get from the community, which helps us know what direction to take.
Thanks again for your feedback on the betas; it raised some discussions internally and made us take stock of our current approach.
Cheers,
Robert
