Multiple instances of packages

We have a setup where our customer has multiple production sites (let’s call them A, B, C, D and E), and on each site, they have a number of hardware devices that they control by means of software. The number of devices is different for each site and ranges between 10 and 25.

For every device, there are currently 3 Windows services that need to be installed: one “connector” service, one “controller” service, and one “store” service.

Per site, there are only 2 physical Windows servers. So, on each physical Windows server, the services for more than one device have to be installed (not one device per server, but multiple devices per server). The services of a certain type (for example, the “controller” service) are identical in that they use the same package; the difference is just a configurable device number.

So, as an illustration, when we take site “A” and assume it has 20 devices spread over 2 servers, we have, per physical server, 10 times a “connector” service, 10 times a “controller” service, and 10 times a “store” service.

How do we model this in Octopus?

  • Using separate projects (like 25 separate projects for each of the 25 controller services) does not seem like the proper approach (not scalable, a lot of cloning and redundancy).
  • Using tenants? Is that the proper way? It feels like misusing the tenants functionality for this. What would such a setup look like?
  • Something else?

Hey @t.v.d.donk , thanks for reaching out and welcome!

I want to make sure I understand what you’ve said so I can hopefully provide some useful advice -

  • Per site, you have two physical servers and 10-25 hardware devices
  • Each site will spread the device management across the two servers, but will need 3 services per device installed (a controller, a connector, and a store)
  • From Octopus, your end goal is deploying all of the services necessary (30-75 total instances of the services, depending on the number of devices)

Is this a correct understanding of what you’ve outlined above and your ultimate goal for the project within Octopus? Beyond that, is there any additional configuration that needs to be considered, or are the three service deployments self-sustaining? Are there any other challenges from spreading the devices across the servers that have to be handled as part of your deployment?

Look forward to hearing back soon, we can definitely find the best way to model this within Octopus!

You are right about my situation. The 3 services are self-sustaining. However, what makes things more complicated (which is why I did not mention it at first) is that when a device is in use, it cannot be updated. Only when the device operator is logged out can the underlying 3 services be upgraded. In practice, that means that some devices may already be at the new version, whilst others still have the old version active.

Excellent, thanks for confirming!

You’ve definitely got a unique situation on your hands 🙂

I think your initial point about tenants was right on, although it’s not quite a 1:1 match for features. I’m borrowing heavily from the multi-tenant SaaS applications guide as a point of reference, but consider the following mapping of those concepts onto your use case:

  • Creating a lifecycle
    • This is totally dependent on your process - for this, we’ll assume a simple lifecycle like Development/Test/Production
  • Creating the project
    • We have some best practices for project grouping; I’d recommend checking those out. In general, if your deployment is just these three Windows services without much external configuration, you could probably slot them all into a single deployment project.
  • Creating tenant tags
    • As a start, you could add a tenant tag per site/server combination. In your initial topic, that was 5 sites (A through E) with two servers each, so you could have a tenant tag for each of SiteA-Server1, SiteA-Server2, SiteB-Server1, and so on through SiteE-Server2.
    • These would allow you to map your devices back to their home servers, and attach those tenant tags to the relevant deployment targets as well.
  • Creating tenants
    • This would be one tenant per hardware device - we’ve got some scripts to help automate that; you can find those here
  • Creating project template variables
    • This would be project dependent, but as an example, a variable like Project.Service.Name could be used to represent ConnectorService_#{Tenant.Alias}, which would allow you to get unique service names per device when they’re deployed (there’s a small sketch of how that resolves just after this list).
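
To make that last point concrete, here’s a minimal sketch of how the tenant-specific service name could be used from a custom deployment script on a Windows service step. Project.Service.Name and Tenant.Alias come from the mapping above; the executable name, flag paths, and install logic are purely illustrative assumptions - the built-in Deploy a Windows Service step handles most of this for you.

```powershell
# Minimal sketch: register/refresh a per-tenant Windows service from a custom deploy script.
# Project.Service.Name is the project template variable discussed above
# (e.g. ConnectorService_#{Tenant.Alias}); the executable name is a placeholder.
$serviceName = $OctopusParameters["Project.Service.Name"]
$binaryPath  = Join-Path $OctopusParameters["Octopus.Action.Package.InstallationDirectoryPath"] "ConnectorService.exe"

if (-not (Get-Service -Name $serviceName -ErrorAction SilentlyContinue)) {
    # First deployment for this tenant/device: register the service under its unique name.
    New-Service -Name $serviceName -BinaryPathName $binaryPath -StartupType Automatic
} else {
    # Subsequent deployments: restart the service so it picks up the freshly deployed binaries.
    Stop-Service -Name $serviceName
}
Start-Service -Name $serviceName
```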

At this point, the rest of the behavior is up to your general deployment approach. We have some general documentation on deploying Windows Services that covers the most common options and scenarios. In addition, I’d recommend taking a look at our documentation on guided failure, which can help you add more resiliency into deployments if there’s a chance a deployment may fail on a device that is still connected. If you can narrow down a deployment window, you could also take advantage of scheduled deployment triggers.

I know that’s a LOT of content to go through - take your time reading through it, and let me know if you have any questions or concerns about what I’ve said above. You’re correct that tenants aren’t a total match as designed, but I think you can easily get a working deployment model via careful tenancy planning and intentional steps forward!

Thank you for your reply. Just to be sure I understand you: We create one tenant per device - so, let’s say, DEVICE01, DEVICE02, DEVICE03, … DEVICE25. We use tags to map these to servers. The service names (according to your suggestion) would become like Controller_DEVICE01. And all Controller_* services would then be using the same “controller” software package.

Some remaining questions:

  • Does Octopus understand that it should install multiple instances of the “Controller” package on the same server? (One for each tenant that is mapped to that server)? Or do we need special configuration for that?
  • I do not see how your suggestions regarding scheduling and guided failure would reliably solve our case of having to postpone deployment until a device indicates that it is idle.
  • Would “manual confirmation steps” within the project also be an option? That we wait until the device becomes idle?
    • How does our Controller service know that Octopus wants to update, so that the controller can ask the device to become “idle”?
    • After the device is idle, how can the Controller service resume the deployment? In your API, I cannot find anything about programmatically resuming a paused step.
    • Does this approach of using “manual steps” work when deploying multiple tenants in parallel? Will those deployments for devices that are already idle just continue, whilst others are still waiting? Are there any timeouts that we should deal with (like max time a “manual step” may take)?
  • Another approach is just to start the deployment straight from Octopus and then include a PowerShell script hook that runs locally on the server. That script hook could inform the Controller that an update is being performed; the Controller could ask the device to become idle; and the Controller could then make the script resume the actual deployment.
    • Would this also work?
    • Would there be any timeouts involved that we have to take care of?
    • Which approach would you recommend and why? Using the “manual steps” or using this “script hook” approach?

Thanks for the additional questions! Here’s my take:

  • Does Octopus understand that it should install multiple instances of the “Controller” package on the same server? (One for each tenant that is mapped to that server)? Or do we need special configuration for that?
    Yes - by utilizing tenants, Octopus deploys your project per tenant, which means it will run your deployment once per hardware device assigned to the target.

On the other suggestions - guided failure and scheduling are meant to help without requiring any additional configuration. If you had an “off time” when devices aren’t normally connected (say, overnight), you could schedule a deployment during that timeframe to ensure devices were offline. Guided failure would allow you to rerun the update for any devices that failed while letting other devices proceed successfully.

As far as ensuring the devices are offline before proceeding, both of your options would work.

Manual Interventions


  • Would “manual confirmation steps” within the project also be an option? That we wait until the device becomes idle?
    Yes, manual interventions will work in any deployment process
  • How does our Controller service know that Octopus wants to update, so that the controller can ask the device to become “idle”?
    This would be built into your product - you can script the behavior from Octopus. The Windows service step allows for custom pre-deploy scripts to be specified (there’s a small sketch of one after this list).
  • Does this approach of using “manual steps” work when deploying multiple tenants in parallel? Will those deployments for devices that are already idle just continue, whilst others are still waiting? Are there any timeouts that we should deal with (like max time a “manual step” may take)?
    Manual interventions run per deployment, so each tenant will be held in a suspended state until it is actioned to be continued or aborted, unless “Block Deployments” is selected.
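
For the pre-deploy script mentioned above, here’s a minimal sketch assuming a flag-file convention that your Controller watches for - the file path and the idea that the Controller reacts to it are hypothetical placeholders for whatever signal your product actually supports.

```powershell
# PreDeploy.ps1 - minimal sketch of a pre-deploy script on the Windows service step.
# Hypothetical convention: the Controller watches this flag file and, once it appears,
# asks the device operator to go idle. Swap in whatever signal your Controller supports.
$serviceName = $OctopusParameters["Project.Service.Name"]
$flagPath    = "C:\ProgramData\DeviceControl\$serviceName.update-pending"

Write-Host "Signalling $serviceName that an update is pending..."
New-Item -ItemType File -Path $flagPath -Force | Out-Null
```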

Script Steps


  • Another approach is just to start the deployment straight from Octopus and then include a PowerShell script hook that runs locally on the server. That script hook could inform the Controller that an update is being performed; the Controller could ask the device to become idle; and the Controller could then make the script resume the actual deployment.
  • Would this also work?
    Yes, this would work! (There’s a small sketch of the waiting portion after this list.)
  • Would there be any timeouts involved that we have to take care of?
    This depends on your project - there shouldn’t be any native timeouts in Octopus unless you have a script that hangs waiting for a response and never resolves.
  • Which approach would you recommend and why? Using the “manual steps” or using this “script hook” approach?
    This depends on the current state of your project. Scripting can give you a reliable and repeatable result without requiring additional orchestration and effort (like manual interventions).
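
Here’s a minimal sketch of what that waiting portion could look like, assuming the Controller clears the flag file from the earlier sketch once the device reports idle - the flag file, polling interval, and self-imposed timeout are all assumptions; Octopus itself won’t time the script out.

```powershell
# Minimal sketch: hold the deployment until the Controller reports the device as idle.
# The idle indicator (removal of the flag file) is the hypothetical convention from the
# pre-deploy sketch above; adjust to however your Controller actually signals idle.
$serviceName = $OctopusParameters["Project.Service.Name"]
$flagPath    = "C:\ProgramData\DeviceControl\$serviceName.update-pending"
$timeout     = New-TimeSpan -Hours 4   # self-imposed limit; Octopus adds none of its own
$stopwatch   = [System.Diagnostics.Stopwatch]::StartNew()

while (Test-Path $flagPath) {
    if ($stopwatch.Elapsed -gt $timeout) {
        # Failing here lets guided failure (if enabled) offer a retry for this device later.
        throw "Device behind $serviceName did not become idle within $timeout."
    }
    Write-Host "Waiting for the device behind $serviceName to become idle..."
    Start-Sleep -Seconds 60
}
Write-Host "Device idle - continuing deployment of $serviceName."
```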

You’re spot on with where things are headed, and either of these implementations should work well for your deployment process. I’d recommend trying to model a POC with a few devices in a test environment and seeing how the process feels and scales with both approaches. Octopus should be able to handle either approach, and we’re always here to help give advice on implementing when you need it!

Thank you for all your exhaustive replies! We are getting very close to what we need!

Some remaining questions:

  • When we follow the approach in which the PowerShell script performs the waiting, is it possible to display a descriptive message in the Octopus UI, so that people can see that the deployment is ongoing but paused until the device becomes idle?
  • We currently have packages with more than 1 executable (to be precise, we have 1 package that contains the controller, connector and store service executables, config files, etc.). What is the best way to deploy that? Deploy the same package 3 times in 3 “Deploy Windows Service” projects? Or make a standard “Deployment” process to which we manually add the logic to stop and start the services? Or is it better to split the package up into 3 distinct packages, each with 1 service executable?

I’m glad you’re getting closer!

  • When we follow the approach in which the PowerShell script performs the waiting, is it possible to display a descriptive message in the Octopus UI, so that people can see that the deployment is ongoing but paused until the device becomes idle?
    This is absolutely possible - for logging script messaging, we recommend Write-Highlight to surface output from your script (there’s a small sketch after this list). You can see this in action in our samples instance - the step template for deploying to Azure scale sets makes heavy use of Write-Highlight to show what’s happening on the machine registration aspect of the scale set.
    You can see the step I described here (log in as a guest)
  • We currently have packages with more than 1 executable (to be precise, we have 1 package that contains the controller, connector and store service executables, config files, etc.). What is the best way to deploy that? Deploy the same package 3 times in 3 “Deploy Windows Service” projects? Or make a standard “Deployment” process to which we manually add the logic to stop and start the services? Or is it better to split the package up into 3 distinct packages, each with 1 service executable?
    This is up to your team preference - there are pros and cons to each! Octopus, for its part, isn’t very concerned with whether you deploy three separate packages or three executables from a single package. From a development perspective, it can be easier to bundle your executables into a single package, but that can also make it tougher when you only want to deploy a fix to a single executable. For more information, I’d recommend checking out our Best Practices pages for project groups/project organization, as well as deployment and runbook processes. There are some good guidelines in there for anti-patterns to watch out for that may help you choose a path forward for how you want to package your application.
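
As a tiny sketch of the Write-Highlight point: swapping Write-Host for Write-Highlight in the wait loop from earlier surfaces the message in the task summary rather than only in the verbose log. The $flagPath/$serviceName variables carry over from the earlier sketches and remain assumptions.

```powershell
# Minimal sketch: surface the wait status in the Octopus task summary via Write-Highlight.
# $flagPath/$serviceName follow the hypothetical flag-file convention used earlier.
while (Test-Path $flagPath) {
    Write-Highlight "Deployment of $serviceName is paused - waiting for the device to become idle."
    Start-Sleep -Seconds 60
}
Write-Highlight "Device idle - resuming deployment of $serviceName."
```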

Hopefully those helped you with the last bits you’re still working out! Feel free to follow up if you have any additional questions or need help, happy you’re zeroing in on your final process!

Thank you very much! You have been of great help!