Service Fabric - Specifying Target Cluster for Deployment

Hi,

I have been working on using Octopus Deploy to deploy some of our Service Fabric applications. I have been running into an issue when I attempt to deploy:

"The step failed: Activity Deploy Service on {name} failed with error ‘The machine {name} will not be deployed to because it is not an Azure Service Fabric Cluster target.’."

The machine name being listed there is the Octopus Server, so it makes sense that it would not deploy there because that server is not a Service Fabric Cluster. However, I am not sure why it is attempting to deploy to that server in the first place. My understanding was that it should be looking in the Publish Profile file that I specify when creating the deployment step (i.e. PublishProfiles\Cloud.xml), and then using the connection information in there to point towards the appropriate Service Fabric Cluster. Is that correct, or is there an additional step I am missing?

Thanks!

I did see today that release 2018.5 included the ability to add Service Fabric Clusters as deployment targets, allowing you to specify in your deployment process that the package should be deployed to that target. Is that the preferred way of handling Service Fabric app deployments? Some of the documentation I read did not mention the SF deployment targets, although it looks like it’s a new addition, so that would explain it.

Hi,

Thanks for getting in touch.

The error message you are getting suggests that you are executing a “Deploy a Service Fabric App” step but the target role on the step maps to a deployment target that isn’t a Service Fabric Cluster. You will probably need to create a new Service Fabric Cluster target with a new role name, and update the step to use the new role.

Regards
Ben

Ben,

Is it possible to handle an SF deployment without setting up a Service Fabric Cluster target, and instead use the Tentacle on the target server to execute the PowerShell scripts locally?
For example (a rough sketch follows the list):

  1. Deploy the SF package
  2. Connect to Cluster
  3. Deploy/Upgrade SF Application
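
Something along these lines, run by the Tentacle on one of the cluster nodes (a rough sketch only - application names, versions, paths and the image store connection string are all placeholders):

    # Rough sketch, run on a cluster node where the Service Fabric SDK is installed.
    # Names, versions, paths and the image store string below are placeholders.
    Import-Module ServiceFabric

    # 1. The SF package has already been extracted by the package deployment step
    $packagePath = "C:\Octopus\Applications\MyApp\pkg"

    # 2. Connect to the cluster from the node itself
    Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

    # 3. Copy and register the package, then create or upgrade the application
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $packagePath `
        -ImageStoreConnectionString "fabric:ImageStore" `
        -ApplicationPackagePathInImageStore "MyApp"
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyApp"

    if (Get-ServiceFabricApplication -ApplicationName "fabric:/MyApp") {
        Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/MyApp" `
            -ApplicationTypeVersion "1.0.1" -Monitored -FailureAction Rollback
    } else {
        New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" `
            -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.1"
    }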

This way:

  • I can avoid having to install the Service Fabric SDK on the Octopus server
  • I do not have to open ports to communicate with the SF Cluster, which is on-prem

I really like the Tentacle model because it always works and avoids the firewall complications. However, with the new Service Fabric steps I am not sure what your road map is, but it is definitely moving away from that… I am sorry if I am missing the point of the recent built-in worker - we have only recently upgraded from version 3.1.3.

Thanks,
Emil

Hi Emil,

I have a couple of suggestions for you today, and pretty soon we will have a better answer.

Firstly, can I ask, is this a new deployment process to this Service Fabric cluster? Were you deploying to this cluster previously, and has the change to targets broken the process for you?

To support deploying to an on-premises cluster, you can use the external worker configuration as previously suggested. The only caveat is that, today, you can only have one worker per Octopus instance. Your license allows you to have three Octopus instances, so you could run your Service Fabric deployments through a separate instance with an external worker. You will still need a Service Fabric target for this option.

One other option is to copy the PowerShell code from the Calamari code and run that as a PowerShell step on the Service Fabric host, still allowing you to leverage the Tentacle and locally installed SDK.

Another option is to open up a hole in your firewall to allow the deployments to run from the Octopus Server; this will require the Service Fabric SDK to be installed on the Octopus Server.

In an upcoming release, most likely 2018.7 or 2018.8, you will have the ability to configure multiple external workers, and the first option above will be easily achievable.

I hope this helps.

Regards
Ben

Hi Ben,

I appreciate your response. From the options suggested, I had already implemented a Tentacle deployment model with PowerShell code running on the target server (the Service Fabric cluster).

Opening firewall ports is not an option, and running workers also does not make sense because of the Service Fabric SDK requirement in addition to the connectivity to the deployment target. Our Octopus Server is in Azure, while 90% of our environments are on-prem.

I don’t see a compelling reason to use workers in our current setup because they don’t offer any advantages in our case. The problem is that none of the available SF steps can be used in our setup, so I had to use custom script steps for Service Fabric deployments. I have to admit that at the beginning I was confused about how those steps work, because I was not aware of the specific Service Fabric targets that need to be set up in advance. We only recently upgraded from 3.1.3 and are still catching up on all the new Octopus features.
A suggestion to improve the documentation is to highlight the new Service Fabric target requirement - I did miss that bit…

Thanks,
Emil

Hi Emil,

I have updated some of the Service Fabric documentation to make it a bit clearer around the target requirements and fix a couple of technical errors.

Feel free to let me know if you think it needs further improvement.

Regards
Ben

So is this a breaking change for existing deployments? We’re testing out the upgrade to 2018.8.8 and none of our existing Service Fabric deployments are working. They all get the error.

To clarify, we were using the model of deploying from the Octopus Server, with the SDK installed there.

Hi Andrew,

Sorry for the delay in answering your query.
I am currently reproducing the issue you are running into; this change should still be backwards compatible.
I’ll update this thread once I figure out what is going on and why it is failing.

Regards
John

Hi Andrew,

I have created an issue for this - https://github.com/OctopusDeploy/OctopusDeploy/pull/2999
I also have a fix for it that will be reviewed very shortly and released soon.

Regards
John

Hello @benpearce. We are using a single Octopus Server, v2018.6.6.

I’m new to Octopus - please could you help with these questions?

  • Do you support both of the options above?

  • For our version of the server, would we need another server to use an external worker?

  • Is it straightforward to create a Tentacle and run the Calamari code, as you suggest in the second point?

Thanks
Howard

Hi @HowardB,

Yes, we support deployments to Service Fabric using workers, as well as allowing you to customize the Service Fabric deployment by changing the deployment script used by Calamari. This is done by placing a DeployToServiceFabric.ps1 file at the root of your package, as shown in our documentation. Please note that we can only provide a limited amount of support if you do end up changing the deployment script.
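
As a very rough illustration of that convention (the endpoint and names below are placeholders, and our actual deployment script does considerably more):

    # DeployToServiceFabric.ps1 - placed at the root of the deployment package.
    # When this file is present it takes the place of the default deployment
    # script, so it becomes responsible for the whole deployment.
    # Rough sketch only; the endpoint and names are placeholders.

    Write-Host "Custom Service Fabric deployment starting"

    # Connect here if a connection has not already been established for you -
    # check the documentation for your version.
    Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.contoso.local:19000"

    # From here, copy/register/create-or-upgrade the application, e.g. with
    # Copy-ServiceFabricApplicationPackage, Register-ServiceFabricApplicationType and
    # New-ServiceFabricApplication / Start-ServiceFabricApplicationUpgrade.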

As for external workers, you can install an external worker on the Octopus Server, however we usually suggest moving this to a separate server. The process for doing this is pretty straightforward: you install a Tentacle on the machine and register it as a worker with the Octopus Server. I believe the licensing around this may have changed since this issue was initially created, as we now allow unlimited workers, restricted only by the machine count.

Regards,
Shaun

Thanks @Shaun_Marx. I should say that we need to consider alternative approaches to on-premises Service Fabric deployments, as the client does not want to use Client Certificates, and I believe Octopus does not support Windows Authentication via a local AD domain.

Please correct me if the Windows Auth. restriction has changed?

I guess that would rule out customising the standard deployment approach, as a Service Fabric Deployment Target is required (and we can’t use Client Certs with that) - please could you confirm?

“Please note that we can only provide a limited amount of support if you do end up changing the deployment script” - that is also less appealing than the External Worker approach.

For External Workers, we could create this on the client’s network/AD domain.

Is there documentation which describes the next steps? For example:

  • Does the SF SDK packaging get done on the External Worker, or on the Octopus server?
  • What PS script would be run - would it be completely custom or (again) a modified version of DeployToServiceFabric.ps1?
  • Would this approach then allow us to use Windows Auth to deploy to SF?

Thanks for your help on this
Howard

No problem at all @HowardB,

Unfortunately the Windows Auth restriction has not changed. It may, however, be possible to change the security mode by setting a project variable called Octopus.Action.ServiceFabric.SecurityMode with a value of SecureAD. This should force Octopus to authenticate with the current account the server or worker is running as. Please note that we don’t officially support this yet; however, the good news is that we are looking to add it as an option behind a feature flag, which should make it a lot easier. We have created an issue for this work here which you can use to track the progress.
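
For reference, once the security mode resolves to Active Directory, the connection the deployment makes boils down to something like this (a minimal sketch - the endpoint is a placeholder):

    # Minimal sketch of an AD-secured cluster connection; the endpoint is a
    # placeholder. -WindowsCredential authenticates as the account the Octopus
    # Server or worker process is running under.
    Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.contoso.local:19000" `
        -WindowsCredential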

As for your external worker question, we don’t have specific documentation around Service Fabric deployments using workers. Workers are fairly transparent and only represent the location the actual deployment will run from. Dependencies which are bundled with Octopus will be pushed down to the worker automatically. The bad news here is that the Service Fabric SDK has dependencies in the GAC, so we can’t bundle it as a dependency and can’t install it for you. This means you have to install the SDK on each worker machine manually. This may, however, change sometime in the future with Service Fabric core.

The script I was referring to is the one Ben originally linked here. You are free to use and customize this script, or create one from scratch. It just needs to be called DeployToServiceFabric.ps1 and exist at the root of the package. With that being said, it doesn’t sound like it is going to provide much benefit in this scenario, nor would I advise customizing it unless you absolutely have to.

I hope the above helps.

Thanks again @Shaun_Marx.

OK, based on that, I think this narrows down the process to:

  1. As an Octopus step, run our own custom PowerShell against the Service Fabric SDK (installed on the Octopus server) to create a Service Fabric package, then…
  2. Run our own custom PowerShell to push the Service Fabric package to a Tentacle or External Worker (running on the client’s network), then…
  3. Run our own custom PowerShell on the Tentacle or External Worker to push the package into Service Fabric, authenticating via a local AD domain account

Would you agree? If so, please could you help with these:

  • I read in your docs that an External Worker can be a Tentacle. Why would you use an External Worker over a regular Tentacle?
  • Can we configure the Tentacle or Worker in step 3 to execute as a specific local AD user, or does that have to be done in the PowerShell itself?

Much obliged
Howard

@HowardB, I’m not sure I entirely follow. We wouldn’t usually recommend that Octopus take care of creating the Service Fabric package; that should be done by the build server. You also wouldn’t need to run a custom PowerShell script to push the package. You should be able to achieve what you are looking to do as follows:

  1. Create an Azure Service Fabric deployment target and add a Service Fabric step to a project
  2. Push the Service Fabric package generated by your build server, which may optionally contain a custom DeployToServiceFabric.ps1 at its root
  3. Deploy using Octopus, which will take care of the rest

Please note that we will be introducing an AD option to the Azure Service Fabric step very soon. In the interim, you can possibly add a variable called Octopus.Action.ServiceFabric.SecurityMode with a value of SecureAD, or you could customize the deployment script to always use Active Directory. You can follow the progress regarding AD support for our Service Fabric step here.

I also understand the confusion around workers and Tentacles. A worker is just a Tentacle that has no roles and is used to execute tasks somewhere other than the Octopus Server process itself. The important thing to note here is that our Azure Service Fabric step always runs on a worker. By default, the built-in worker on the server is used; however, you can shift this work somewhere else, and that is what an external worker allows you to do.

Regards,
Shaun

Thanks @Shaun_Marx.

I think we are going round in circles a little. I thought we couldn’t use an Azure Service Fabric deployment target because that currently doesn’t support local AD auth…

Anyhow, I’ve revised our approach below:

  1. On the build server, run a custom script to create a Service Fabric package using the Service Fabric SDK (installed on the build server), then…
  2. Push the Service Fabric package to a Tentacle (running on the client’s network), then…
  3. Run our own custom PowerShell on the Tentacle to push the package into Service Fabric, authenticating via a local AD domain account

Thanks,
Howard

Hi Howard,

Sorry for the delay in responding.
Your revised approach looks good. The only change I would recommend is in step 2: instead of uploading the package directly to the Tentacle, upload it to our built-in feed and then have the script reference that package.
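
Roughly, that script step would then pick up the extracted package along these lines (a minimal sketch - the package reference name MySfApp is a placeholder, and the exact variable name may vary between versions, so check the documentation):

    # Minimal sketch for a script step on the Tentacle that references a package
    # ("MySfApp" here, a placeholder name) from the built-in feed.
    $packagePath = $OctopusParameters["Octopus.Action.Package[MySfApp].ExtractedPath"]

    Write-Host "Deploying Service Fabric package from $packagePath"
    # Hand $packagePath to the custom deployment script, e.g. as the
    # -ApplicationPackagePath for Copy-ServiceFabricApplicationPackage.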

Hope this makes sense.

Regards
John