Hi there,
I’ve been looking at the latest tentacle (octopusdeploy/tentacle:6.3.329) and server (octopusdeploy/octopusdeploy:2022.4.8474) images to run in our PCI environment, and out of the box AWS Inspector is showing that they both have multiple CVEs.
To try to address this, I have a custom image build that updates installed packages using apt update && apt -y upgrade. (I don’t even know whether that is a safe way to patch the image, as I’m not sure if server or tentacle depend on specific package versions.)
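For reference, my custom build is just a thin layer over your image, roughly like this (the apt-get form and the cache cleanup are my own additions to keep the build non-interactive and small):

```dockerfile
# Thin wrapper over the official image that pulls in the latest package
# updates; the tag and the cleanup step are specific to my build.
FROM octopusdeploy/tentacle:6.3.329

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get -y upgrade && \
    rm -rf /var/lib/apt/lists/*
```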
Unfortunately for both images the following CVEs still remain:
CVE-2022-1996 - go-restful (critical)
CVE-2023-25173 - containerd (high)
CVE-2023-25153 - containerd (medium)
CVE-2022-23471 - containerd (medium)
(I actually think go-restful is a side-effect of the containerd issue).
Given that I need to run these images in a PCI environment, I must either patch them or have some other form of mitigation / explanation why these CVEs are not relevant…
Ideally:
a) it would be great to get out-of-the-box images with no CVEs, but if not,
b) is anything going to break with my crude apt upgrade?
c) is there any way I can patch containerd? If not,
d) is containerd needed for tentacle and server, or can I disable it?
Sincerely
Pete
Related issues I logged before, but was unable to update as the thread had been locked:
I’m going to pass all of these questions along to our engineers. They won’t be online until tonight as they’re based in Australia, but please feel free to reach out in the meantime.
I’ll let you know as soon as I hear back on all your questions.
The tentacle image version you are running, 6.3.329, is using containerd version 1.6.15.
When version 6.3.383 of Tentacle is released next week (there was a slight delay due to the Azure VM extension), the Docker image will contain containerd version 1.6.18, which should resolve all of the issues.
We are unsure as to why the one fixed in 1.6.12 is still showing up, but we hope that when you test again with 6.3.383, it will no longer be flagged as an issue.
Please let me know if that helps, or if the .12 CVE still gets flagged after the new Tentacle version.
Just jumping in for Jeremy, who is currently offline, as part of our US-based team. Looking at the discussion, the answers the engineers gave Jeremy only mentioned Tentacle, so I have asked whether there are any plans to update our server image as well. I am not sure what version of containerd Octopus Server uses, so I have asked that too in case it helps with any answers we can get you.
I will let you know what they say. We might not get an answer until Monday, as our engineers are based in Australia and are now away for their weekend, but we will update you as soon as we have some answers.
Hi Clare / Jeremy
Just before I signed off today, I checked your Docker Hub repo for new releases of Octopus Server, and sure enough I saw 2023.1.9672. I have taken this base image and applied apt update && apt upgrade to it.
Unfortunately AWS Inspector is reporting similar critical, high and medium vulnerabilities - and they all seem to be related to go-restful and containerd again.
I have submitted a support ticket to AWS to try to understand why Inspector is flagging these CVEs, as it seems to believe that containerd version 1.6.6 is installed, yet when I run apt list --installed I can see a different version reported.
Of course, AWS support have not committed to a resolution timeline. If your team can confirm that 1.6.18 is the installed version on the image, then I can set up a suppression rule to swallow the alerts for those CVEs and I’ll have an image good enough to run in our PCI environment.
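For what it’s worth, the check I’m running against the image is roughly this (the entrypoint override and the grep are just how I’m poking at it locally; the package may show up as containerd or containerd.io depending on how it was installed):

```bash
# Ask apt inside the image which containerd version is actually installed
docker run --rm --entrypoint /bin/bash octopusdeploy/octopusdeploy:2023.1.9672 \
  -c "apt list --installed 2>/dev/null | grep -i containerd"
```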
That is strange. Thanks for all of the info. I’ll get this back to the developer I was chatting with, see what he says, and let you know. He’s currently away for the weekend, but we should hear back Monday morning.
Please let me know if you have any questions in the meantime.
Our developers have gotten back to us and have confirmed that the version of containerd being used in 2023.1.9672 is 1.6.18, which contains fixes for the CVEs you listed.
So the 2023.1 release has picked up the latest fixes from the base image; our developer cannot understand why the AWS scanning tool is reporting 1.6.6.
I hope that helps alleviate any concerns you had about the version of containerd Octopus Server 2023.1.9672 is using, and hopefully it gives you some clout to go back to AWS and ask why its scanning is incorrect.
Let us know if you need anything further,
Kind Regards,
Clare
Just an FYI update that may help future readers. AWS Inspector kept raising CRITICAL, HIGH and MEDIUM alerts against the out-of-the-box octopusdeploy/tentacle image, even after I updated to 6.3.417. Unfortunately, for our PCI requirements I can’t keep adding suppressions to cover these vulnerabilities.
After a bit of to-ing and fro-ing with the AWS Inspector team, they highlighted that many of the CVEs related to the containerd and Docker CLI plugins installed in the image. That in turn prompted me to consider why Docker and its CLI tools are installed in the tentacle image at all. I believe this is to allow tentacle to support execution containers for workers (Execution containers for workers - Octopus Deploy). I think this is enabled by default but can be disabled by setting DISABLE_DIND=Y.
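For anyone who only needs to switch off the Docker-in-Docker path on the stock image, something like the following should do it (the registration variables here are from my reading of the image docs, so treat them as an example and check them against your own setup):

```bash
docker run -d \
  -e ACCEPT_EULA=Y \
  -e DISABLE_DIND=Y \
  -e ServerUrl="https://my-octopus.example.com" \
  -e ServerApiKey="API-XXXXXXXXXXXXXXXX" \
  -e TargetEnvironment="PCI" \
  -e TargetRole="pci-worker" \
  octopusdeploy/tentacle:6.3.417
```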
Since my tentacles will not need this support, I decided to build my own tailored tentacle image using Amazon Linux 2 as a base and the Linux RPM package that Octopus provides, following the RHEL/CentOS/Fedora instructions on Linux Tentacle - Octopus Deploy.
I added those yum commands to a Dockerfile, along with some commands from OctopusTentacle/Dockerfile at main · OctopusDeploy/OctopusTentacle · GitHub. I made sure to set DISABLE_DIND=Y so that code path is not used. Lastly, although it is not listed anywhere as a prerequisite, I needed to install the OpenSSL package for Calamari to unpack packages.
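For anyone wanting to replicate this, the Dockerfile ends up looking roughly like the sketch below. The repo URL and package names are as I remember them from the Linux Tentacle docs, and the configure/run script still needs to be adapted from the OctopusTentacle GitHub Dockerfile, so treat it as a starting point rather than a drop-in:

```dockerfile
FROM amazonlinux:2

# Tentacle from the Octopus yum repo (per the RHEL/CentOS/Fedora instructions),
# plus openssl, which Calamari needs in order to unpack packages
RUN curl -sSfL https://rpm.octopus.com/tentacle.repo -o /etc/yum.repos.d/tentacle.repo && \
    yum install -y tentacle openssl && \
    yum clean all && rm -rf /var/cache/yum

# Never use the Docker-in-Docker / execution-container code path
ENV DISABLE_DIND=Y

# Registration/startup script adapted from the official OctopusTentacle Dockerfile
COPY scripts/ /scripts/
RUN chmod +x /scripts/configure-and-run.sh
ENTRYPOINT ["/scripts/configure-and-run.sh"]
```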
The image contains no vulnerabilities at all (not even low), successfully registers with the server, and I am able to execute the scripts that I need to.
I’m going to attempt a similar strategy for the server image that I need to run.
Just wanted to say thank you so much for taking the time to provide that information; it will really help other customers who hit this issue even after we updated the version of containerd and verified it was up to date on other images.
We have not had any other customers report this issue, but they might not be doing as thorough auditing or might not be targeting our Docker images to that extent.
If they do, though, and find the same issue, they will now be able to get around it, so thank you again for updating other customers (and ourselves). It will really help someone out in the future!
If you need anything else don’t hesitate to come back to us as we are always on hand to help!