Initialise new tentacle remotely

Hi,

Octopus seems pretty slick. If we were to use it on AWS infrastructure, I’m wondering how we would support auto-scaling and self-healing?

For this to work we would need a way to register a new instance with a particular role remotely, and initiate a pull or a remote push to the new instance, so that it’s ready to be added to the load balancer by the time the init script completes.

Are there any recommendations for how to achieve this?

Thanks,
Mike

UPDATE

I figured out how to force a push of a release to the new instance synchronously using the command line tool. The bit that’s missing now is how to register the new machine with the environment remotely. It seems odd that the Tentacle needs to register the server, and that the server also needs to register the Tentacle. Am I missing something? Does the REST API support registering a machine?

To complete the answer in case this helps someone else, you can use the REST API on the Octopus server to register the new machine from the newly provisioned instance on startup.
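
Here’s a rough sketch of what that startup call can look like in Python - the endpoint and machine document fields below are assumptions based on the /api/machines resource, so treat the values as placeholders and check the API for your Octopus version:

```python
# Rough sketch (placeholders throughout): register this newly provisioned
# instance with the Octopus server via the REST API on startup.
import requests

OCTOPUS = "http://your-octopus-server"      # assumed server URL
API_KEY = "API-XXXXXXXXXXXXXXXX"            # assumed API key
HEADERS = {"X-Octopus-ApiKey": API_KEY}

machine = {
    "Name": "web-i-0123456789",             # e.g. the EC2 instance id
    "Uri": "http://10.0.1.25:10933/",       # this instance's Tentacle endpoint
    "Thumbprint": "<tentacle certificate thumbprint>",
    "EnvironmentIds": ["Environments-1"],   # assumed environment id
    "Roles": ["web"],
}

# The exact resource shape differs between Octopus versions; this mirrors
# the /api/machines collection.
resp = requests.post(OCTOPUS + "/api/machines", headers=HEADERS, json=machine)
resp.raise_for_status()
print(resp.json())
```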

Hi Mike,

We also have a tool built into Tentacle.exe to do this - you just have to give it a server, API key and environment name:
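
Something along these lines (the values are placeholders - check the register-with help text for the exact switch names in your version):

```
Tentacle.exe register-with --server=http://your-octopus-server --apikey=API-XXXXXXXXXXXXXXXX --environment=Production
```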

Paul

Awesome, that’s what I was looking for. It would be great if you could also use Tentacle to pull the latest release to the current machine synchronously.

Without this, am I right in thinking I need to use Octo.exe to push to all machines in the role?
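
For anyone following along, the Octo.exe call I’m using looks roughly like this (switch names are from memory and may differ between versions - the --waitfordeployment flag is what makes it synchronous):

```
Octo.exe deploy-release --server=http://your-octopus-server --apikey=API-XXXXXXXXXXXXXXXX --project="My Web App" --deployto=Production --version=1.2.3 --waitfordeployment
```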

Hi Mike,

That’s right. Currently we have almost all the data we need to reason that a new machine needs a certain set of packages deployed to it. But there’s a small amount of information we’d need to gather (and we would need to make some UI changes) to fully implement this. It’s something I’ll be doing some planning for in version 2.0.

Regards,

Paul Stovell
Octopus Deploy
W: octopusdeploy.com | T: @octopusdeploy

When using Tentacle.exe to register an instance, is there any way to also define a role?

Update: Scratch that, I just tried adding --role=web and that worked.
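
So the full registration command ends up looking something like this (placeholder values again):

```
Tentacle.exe register-with --server=http://your-octopus-server --apikey=API-XXXXXXXXXXXXXXXX --environment=Production --role=web
```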

Hi Paul.

It would be really great if Octopus had built-in support for autoscaling scenarios.

Looking forward to seeing that in Octopus 2.0. 🙂

Regards.

We’re using autoscaling - we just use a bootstrap package to install the Tentacle and have the new instance phone home to update itself.
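
Roughly speaking, the bootstrap boils down to something like this (paths, switches and values are illustrative only - adjust them to your Tentacle version and AWS setup):

```
rem Hypothetical instance init script sketch
msiexec /i Octopus.Tentacle.msi /quiet
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" register-with --server=http://your-octopus-server --apikey=API-XXXXXXXXXXXXXXXX --environment=Production --role=web
rem then trigger a deployment targeting this machine (or its role) so it has the latest release before it joins the load balancer
```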

Hi Mike.

That’s precisely what I’m planning to do. One question: when the autoscaling system shuts down a machine, I presume that machine keeps showing up in the Octopus dashboard. Do you remove the dead servers manually from the dashboard or do you have some sort of cleanup script?

Regards.

Hi,

It’s still early days. I’ve only just got the bootstrapping working with AWSDeploy and CloudFormation, but yes, if a server drops out it leaves a rogue item behind. I’m not sure yet whether this is going to block further deployments and cause problems.

At the moment I’m just manually removing them. I would like to find an automated way to clean them up - possibly a scheduled task.

Mike

I’ve just figured out that a dead server left behind blocks further deployments to its environment. Octopus can’t upload new packages to that server and fails the deploy altogether. 🙁

That’s not going to be a problem for us right now because we deploy on a weekly basis, so we’ll be able to clean everything up before deploying, but I guess it makes it harder to use Octopus with a continuous delivery process and auto-scaling groups.

Regards.

Hi Michael,

Thanks for the update, that kinda sucks. I guess I’ll have to write a script that queries AWS and compares the running instances with the machines registered in Octopus, removing any non-running instances prior to each deployment. This script could even be made a step in the Octopus release.
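
Something like this rough Python sketch is what I have in mind (untested - the Octopus endpoints, field names and the boto calls are assumptions to illustrate the idea):

```python
# Rough sketch (untested): remove Octopus machines whose EC2 instances are gone.
import boto.ec2
import requests

OCTOPUS = "http://your-octopus-server"      # placeholder server URL
API_KEY = "API-XXXXXXXXXXXXXXXX"            # placeholder API key
HEADERS = {"X-Octopus-ApiKey": API_KEY}

# 1. Collect the private IPs of the instances that are actually running in AWS.
conn = boto.ec2.connect_to_region("us-east-1")
running_ips = set()
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        if instance.state == "running":
            running_ips.add(instance.private_ip_address)

# 2. Delete any machine Octopus knows about whose Tentacle address doesn't
#    match a running instance.
machines = requests.get(OCTOPUS + "/api/machines", headers=HEADERS).json()
for machine in machines["Items"]:
    # machine["Uri"] looks like http://10.0.1.25:10933/ - pull out the host part
    host = machine["Uri"].split("//")[1].split(":")[0]
    if host not in running_ips:
        print("Removing dead machine %s (%s)" % (machine["Name"], host))
        requests.delete(OCTOPUS + machine["Links"]["Self"], headers=HEADERS)
```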

Cheers,
Mike