Deploying ConfigMaps

Hi,
We are deploying Kubernetes resources (ConfigMaps and Pods) using Octopus. We want to ensure that when a ConfigMap is updated, the Pods automatically pick up the new values without any downtime. How can this be achieved in Octopus?

At the moment, I have to delete the Pods and then redeploy them after updating the ConfigMap.

Regards,
Rahul

Good afternoon @manjunatha.karekal,

Thank you for contacting Octopus Support, and great question on deploying K8s resources using Octopus.

From what I can find, there are two ways you can potentially do this. We have a video in the link here (watch from 1:49 onwards) which describes keeping a separate ConfigMap version for each pod version: when you update the config, you create a new ConfigMap and a new set of pods, point those pods at the new ConfigMap, and then delete the old pods.
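A minimal sketch of that versioning pattern, assuming hypothetical names (app-config-v1/v2 and the APP_MODE key are made up for illustration):

```yaml
# Instead of editing app-config-v1 in place, a new app-config-v2 is
# created; a new set of pods is then pointed at it, and the old pods
# are deleted once the new ones are running.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1
data:
  APP_MODE: "blue"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
data:
  APP_MODE: "green"
```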


The second way to achieve this is to create a new ConfigMap with the changes you want to make and point your Deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, new pods will be started with it and your old ReplicaSet will be scaled down to zero.
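As a rough sketch of that flow (my-app and app-config-v2 are placeholder names, not anything Octopus-specific), the only change between releases is the ConfigMap name referenced in the pod template, which is what triggers the rolling update:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0
          envFrom:
            - configMapRef:
                name: app-config-v2   # was app-config-v1 in the last release
```

Because the pod template changed, Kubernetes rolls out the new pods first and only scales the old ReplicaSet down once they report healthy.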

I found a few articles detailing this process, which I will post below for you to take a look at. We are not Kubernetes experts, but following those articles should give you an idea of what you would need to do.

The first article has quite a few different ideas you could try:

Stack Overflow Article
Kubernetes GitHub Issue

I also found this article, which explains that Kubernetes automatically propagates ConfigMap updates into a Pod when the ConfigMap is mounted as a volume; however, this has limitations, for example it won't work if subPath is used.
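For reference, a sketch of that volume-mount setup with made-up names; note that the kubelet refreshes the mounted files after a sync delay rather than instantly, and your application still has to re-read them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-registry/my-app:1.0
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # whole directory, no subPath, so updates propagate
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```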

And then there is the official K8s documentation.

In both scenarios, the recommended approach is to create new ConfigMaps rather than update existing ones.
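As a small aside, if you want to enforce that convention, I believe Kubernetes lets you mark a ConfigMap as immutable, which makes creating a new version the only way to change the config (again a sketch with a made-up name):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
immutable: true   # the API server rejects edits to data; create a v3 instead
data:
  APP_MODE: "green"
```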

Does that help at all?
Kind Regards,
Clare