Connectivity to private EKS clusters?

We are evaluating Octopus Cloud to handle our deployments in AWS, but it seems to require that the Kubernetes API endpoints be public, and our security posture is to keep our environment's clusters private. Is there any way to hook into a private EKS cluster from Octopus Cloud? Or is there a known list of Octopus Cloud egress IP addresses that we could whitelist? It doesn't look like the Kubernetes Cluster deployment target supports something like a bastion host or proxy the way a VM target does.

Hi @eandrus,

Thanks for reaching out. Each cloud instance has a list of 10 static IPs that you can whitelist, but the connection to the EKS cluster would come from a worker. You could set up firewall rules to open communication between an internal worker you've created and your cloud instance, then let that internal worker handle the EKS connection so you don't need to open the EKS cluster to the public.
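
As a rough sketch of what the firewall side might look like, assuming the worker is an EC2 instance running a listening Tentacle (the security group ID and IP below are placeholders; listening Tentacles receive traffic on port 10933 by default):

```bash
# Sketch with placeholder values: allow one of your cloud instance's static
# IPs to reach a listening Tentacle worker on its default port, 10933.
# Repeat for each of the static IPs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 10933 \
  --cidr 203.0.113.10/32
```

A polling Tentacle avoids the inbound rule entirely, since it dials out to the cloud instance instead.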

Please let me know if you think this will meet your security standards or if we need to dig into another possible solution for you.

Thanks,
Jeremy

Thank you for responding! The 10 static IPs would be perfect. How do I find out what they are?

As for the worker connection to the EKS cluster, I don't fully understand. I don't have any workers created, yet I can set up an EKS deployment target. Could you elaborate a bit on that point?

Hi @eandrus,

You’re very welcome.

To get the static IP addresses for your cloud instance:

  1. Log in at Octopus.com
  2. Click your profile in the upper right
  3. Click your instance under Organizations
  4. Click the “Manage” link for your instance; the static IPs are listed there

To do work against an EKS cluster from Octopus, you can set up a worker that has a trust relationship with your EKS cluster. That worker is what actually interacts with the cluster and runs commands.
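
On the EKS side, that trust relationship typically means mapping the IAM role attached to the worker VM into the cluster's aws-auth ConfigMap. A minimal sketch with eksctl, where the cluster name, region, role ARN, and username are all placeholders:

```bash
# Sketch with placeholder names: map the worker's instance role into the
# cluster's aws-auth ConfigMap so commands run from the worker are authorised.
# Note: system:masters is cluster-admin; in practice you'd scope this down.
eksctl create iamidentitymapping \
  --cluster my-private-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::111111111111:role/octopus-worker-role \
  --group system:masters \
  --username octopus-worker
```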

When you create the EKS deployment target you will see a worker pool section. You will want to create a worker pool specifically for the EKS cluster, then put the VM/server that has the trust relationship with the cluster into that pool.
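
Registering the VM into that pool is done with the Tentacle CLI on the machine itself. A sketch assuming a Linux Tentacle in polling mode, with placeholder server URL, API key, and pool name:

```bash
# Sketch with placeholder values: register this machine as a polling worker
# in a pool dedicated to the EKS cluster. Polling mode only needs outbound
# access to the cloud instance, which suits a private subnet.
/opt/octopus/tentacle/Tentacle register-worker \
  --instance "Tentacle" \
  --server "https://yourinstance.octopus.app" \
  --apiKey "API-XXXXXXXXXXXXXXXX" \
  --comms-style "TentacleActive" \
  --server-comms-port 10943 \
  --workerpool "EKS Worker Pool"
```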

Now whenever you do work with the EKS cluster, make sure you choose that worker pool within your steps. You only need to open the worker up to the cloud instance, and the worker will communicate with the EKS cluster.

Communication will flow like this:
Octopus Cloud Instance -> Worker -> EKS Cluster
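
To sanity-check the second hop before wiring up any Octopus steps, you could run something like this on the worker itself (cluster name and region are placeholders). If it lists nodes, the worker can reach the private API endpoint and its IAM mapping works:

```bash
# Smoke test from the worker, with placeholder cluster name and region:
# fetch a kubeconfig for the private cluster, then list its nodes.
aws eks update-kubeconfig --name my-private-cluster --region us-east-1
kubectl get nodes
```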

Please let me know if that helps or if you need more information.

Thanks,
Jeremy

Thank you! I found the IP address list.

The worker solution you're suggesting would probably work better, though. But I think I'm still missing something: I don't see the worker pool section when creating a Kubernetes deployment target.

Am I looking in the wrong place?

Hey @eandrus,

You’re welcome!

It should be right below Kubernetes Details, like this:

[screenshot: EKS deployment target form with the Worker Pool section below Kubernetes Details]

Is it there on your end?

Thanks,
Jeremy

I don’t see it, but I didn’t see the tenant section until I created some tenants either. I’m guessing I need to make the worker and pool first. I’ll try that and get back to you to let you know whether it worked.

Hi @eandrus,

That may be it, as I have workers and worker pools in my local setup. Please let me know if the selection shows up for you once you have the worker and worker pool.

Thanks,
Jeremy

Hey @jeremy.miller, sorry for the long delay; building the worker with Terraform took more time than I thought. But I finally got a worker and pool all hooked up and healthy, and I still don’t see the worker pool section under the Kubernetes details. Any ideas?

Hi @eandrus,

So your cluster creation screen doesn’t look like mine in the screenshot above? Which version of Octopus Server are you running?

Thanks,
Jeremy

Hi @jeremy.miller, it looks exactly like yours except for the missing worker pool area. I’m using Octopus Cloud, which is on v2020.3.2.

Hi @eandrus,

I took a look at my cloud instance and I don’t see it either. My server instance has it, but it’s a different version with more set up and enabled. Let me tinker with it and I’ll update you once I’ve found a potential solution.

Let me know if you have any questions in the meantime.

I hope you have a great weekend.

Thanks,
Jeremy

Hi @eandrus,

I believe this might be a regression in 2020.3.2. Can you please try creating another worker pool and see if the section shows up when creating a Kubernetes deployment target? You would then have Default Worker Pool, Kubernetes Worker Pool, and Dummy Worker Pool.

Please let me know if that workaround makes the selection show up.

Thanks,
Jeremy

@jeremy.miller thanks! That actually worked. I’m not entirely clear on why, but I won’t complain.

Hi @eandrus,

I think it’s actually a regression in the version you’re using. I’ve raised it with our engineers, so future versions should be fixed to not require the second dummy worker pool workaround.

Please let me know if you have any other questions or concerns about it.

Thanks,
Jeremy
