I went on a bit of a journey attempting to host an OctopusDeploy server on a self-hosted Kubernetes cluster, and had some success. I thought I would share what I did and where I ran into roadblocks.
My Kubernetes cluster is made up entirely of Linux servers, so I am using the octopusdeploy/octopusdeploy image for Linux, which as of this post is EAP. Here is the documentation: https://octopus.com/docs/installation/octopus-in-container.
My goal is to stand up an OctopusDeploy server to run integration tests (rather than actually deploy something). This means I don’t really need “Deployment Targets”, but rather “Workers”.
My initial plan was to simply have OctopusDeploy run Kubernetes commands and have the actual tests run as Jobs on my Kubernetes cluster. In order for this to work with just the octopusdeploy image, I needed to install kubectl, so I created a custom Dockerfile on top of octopusdeploy.
FROM octopusdeploy/octopusdeploy

# Install kubectl
ADD https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/linux/amd64/kubectl /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl && \
    kubectl version --client
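For illustration, a script step along these lines kicks off the tests as a Job and polls for completion. This is a rough sketch of the kind of step body involved; the Job name and manifest file are hypothetical placeholders:

# Kick off the tests as a Kubernetes Job, poll until it reports success,
# then pull the logs back so something shows up in the Octopus task output.
kubectl apply -f integration-tests-job.yaml
until kubectl get job integration-tests -o jsonpath='{.status.succeeded}' | grep -q 1; do
  sleep 5
done
kubectl logs job/integration-tests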
This worked OK, but the UI experience in OctopusDeploy was a bit clunky. The biggest issue was that Octopus surfaced little to no feedback about the actual run of the Job.
I then noticed that there is an EAP feature for running tasks inside Docker containers from Octopus itself. This seemed perfect. However, being an EAP feature, it was a bit of a challenge to get working.
According to some documentation (https://octopus.com/docs/installation/octopus-in-container/octopus-server-container-linux), the octopusdeploy image should support Docker-in-Docker (DIND). I am not sure this is yet the case, as Docker does not seem to be installed at all in the octopusdeploy image. Sure enough, attempting to run a process that uses Docker containers fails because the docker command is not found.
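You can confirm this quickly by looking for the CLI inside a running server pod (the pod name here is a placeholder):

kubectl exec -it octopusdeploy-<pod> -- which docker
# no output: the Docker CLI is not on the image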
Here is where things get fun. I extended my custom octopusdeploy image by installing the Docker CLI (specifically just the CLI, not the engine).
FROM octopusdeploy/octopusdeploy

# Install kubectl
ADD https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/linux/amd64/kubectl /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl && \
    kubectl version --client

# Install dependencies
RUN apt-get update && \
    apt-get -y install apt-transport-https ca-certificates gnupg-agent software-properties-common

# Install Docker CLI
RUN apt-key adv --fetch-keys https://download.docker.com/linux/ubuntu/gpg && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
    apt-get update && \
    apt-get -y install docker-ce-cli
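Building and pushing this image is standard Docker; the registry path below is a placeholder matching the image reference used in the deployment further down:

docker build -t example.com/octopusdeploy/server .
docker push example.com/octopusdeploy/server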
I then configured my Kubernetes deployment for octopusdeploy with a second container in the pod running the docker:dind image. With some shared mounts and some environment variables, I was able to stand this workload up and successfully run tasks in Docker containers using the Octopus feature.
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: octopusdeploy
  labels:
    app: octopusdeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: octopusdeploy
  template:
    metadata:
      labels:
        app: octopusdeploy
    spec:
      containers:
        - name: octopusdeploy
          image: example.com/octopusdeploy/server # my custom octopusdeploy image
          ports:
            - containerPort: 8080
            - containerPort: 10943
          env:
            - name: DB_CONNECTION_STRING
              value: "Server=octopusdeploy-db,1433;Database=OctopusDeploy;User Id=sa;Password=********"
            - name: ACCEPT_EULA
              value: "Y"
            - name: ADMIN_USERNAME
              value: "admin"
            - name: ADMIN_PASSWORD
              value: "********"
            - name: ADMIN_API_KEY
              value: "********"
            - name: MASTER_KEY
              value: "********"
            - name: DOCKER_HOST # tells the docker cli to talk to the other docker:dind container
              value: tcp://localhost:2375
          securityContext:
            privileged: true
          volumeMounts:
            - name: octopus-server # creates a shared mount containing server files so that docker image can access work and calamari
              mountPath: /home/octopus/.octopus/OctopusServer/Server
        - name: docker-dind
          image: docker:dind
          env:
            - name: DOCKER_TLS_CERTDIR # disables docker:dind tls such that it runs on port 2375 instead of 2376
              value: ""
          securityContext:
            privileged: true
          volumeMounts:
            - name: docker-graph-storage
              mountPath: /var/lib/docker
            - name: octopus-server # creates a shared mount containing server files so that docker image can access work and calamari
              mountPath: /home/octopus/.octopus/OctopusServer/Server
      imagePullSecrets:
        - name: registry-cred
      volumes:
        - name: docker-graph-storage
          emptyDir: {}
        - name: octopus-server
          emptyDir: {}
Not shown are the mssql deployment backing “octopusdeploy-db” and the services.
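For the services, a minimal sketch of what sits in front of the Octopus pod might look like this (the exact ports and Service type will vary with how you expose the server):

apiVersion: v1
kind: Service
metadata:
  name: octopusdeploy
spec:
  selector:
    app: octopusdeploy
  ports:
    - name: web # Octopus web portal and API
      port: 80
      targetPort: 8080
    - name: polling # polling Tentacle communications
      port: 10943
      targetPort: 10943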
There are some things to clean up above, but this represents the working prototype. Things I would fix before production use: pulling passwords from Kubernetes Secrets, pinning image tags, and revisiting whether both containers need to be privileged or just docker:dind.
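For example, the database connection string could be pulled from a Secret instead of being inlined (a sketch, assuming a Secret named octopusdeploy-secrets has been created):

env:
  - name: DB_CONNECTION_STRING
    valueFrom:
      secretKeyRef:
        name: octopusdeploy-secrets # hypothetical Secret holding the connection string
        key: db-connection-string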
I attempted to go one step further and stand up “Workers” as well, but ran into an issue I am not sure how to resolve. First, it seems there is no Linux version of the octopusdeploy/tentacle image, so I attempted to build my own. Here is my attempt:
--- Dockerfile ---
FROM ubuntu:focal

# Install dependencies
RUN apt-get update && \
    apt-get -y install apt-transport-https ca-certificates gnupg-agent software-properties-common

# Install Docker CLI
RUN apt-key adv --fetch-keys https://download.docker.com/linux/ubuntu/gpg && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
    apt-get update && \
    apt-get -y install docker-ce-cli

# Install Octopus Deploy tentacle
RUN apt-key adv --fetch-keys https://apt.octopus.com/public.key && \
    add-apt-repository "deb https://apt.octopus.com/ stretch main" && \
    apt-get update && \
    apt-get -y install tentacle

COPY startup.sh /
RUN ["chmod", "+x", "/startup.sh"]
ENTRYPOINT [ "/startup.sh" ]
--- startup.sh ---
#!/bin/bash
serverUrl=$SERVER_URL
serverCommsPort=$SERVER_COMMS_PORT
apiKey=$API_KEY
name=$HOSTNAME
workerPool=$WORKER_POOL
configFilePath="/etc/octopus/default/tentacle-default.config"
applicationPath="/home/Octopus/Applications/"

# Create and configure the Tentacle instance as a polling (non-listening) worker
/opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath"
/opt/octopus/tentacle/Tentacle new-certificate --if-blank
/opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath"

echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool"
/opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --name "$name" --workerPool "$workerPool" --comms-style "TentacleActive" --server-comms-port $serverCommsPort --force

# Run the Tentacle in the foreground
/opt/octopus/tentacle/Tentacle run
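I then built the Kubernetes deployment in a very similar way to the server deployment. Trimmed down, it looks roughly like the sketch below (the image reference, API key, and worker pool name are placeholders, and the docker:dind sidecar follows the same pattern shown for the server):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: octopusdeploy-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: octopusdeploy-worker
  template:
    metadata:
      labels:
        app: octopusdeploy-worker
    spec:
      containers:
        - name: tentacle
          image: example.com/octopusdeploy/tentacle # my custom Tentacle image above
          env:
            - name: SERVER_URL
              value: "http://octopusdeploy" # the server Service
            - name: SERVER_COMMS_PORT
              value: "10943"
            - name: API_KEY
              value: "********"
            - name: WORKER_POOL
              value: "Default Worker Pool"
        # docker:dind sidecar omitted for brevity; same pattern as the server deployment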
The worker successfully registers with the Octopus server; however, the “run” command does not seem to keep the process open. It successfully runs, outputs “Press to stop…”, and then stops, which causes the container to shut down. I have yet to find a solution to this problem, and if anyone could provide insight, that would be great.