Capture artefact when running script in docker container

I’m trying to use the new_octopusartifact bash function to capture an artefact, but the process is failing.

I first set about proving that the file exists (and can be read, as the documentation requires) both before and after calling new_octopusartifact. I also noted that in the logs the warning doesn’t appear until the very end of the step.
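For completeness, the check looked roughly like this (sketched here with new_octopusartifact stubbed and a stand-in plan file so the snippet is self-contained — inside a real Octopus script step the function is provided by the server):

```shell
#!/bin/bash
# Stub for illustration only: in a real Octopus script step,
# new_octopusartifact is injected by the server and registers the file
# for collection after the step completes.
new_octopusartifact() { echo "Collecting $1 as an artifact..."; }

# Stand-in for the real plan file produced by `terraform plan -out=...`.
echo "plan-bytes" > destroy.tfplan

ls -l destroy.tfplan            # prove the file exists before the call
new_octopusartifact destroy.tfplan
ls -l destroy.tfplan            # ...and that it still exists afterwards
```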

So looking at the raw logs I found the following -

10:34:23   Info     |       -rw-r--r-- 1 root root 26123 Mar 29 10:34 destroy.tfplan
10:34:23   Info     |       Collecting destroy.tfplan as an artifact...
10:34:23   Verbose  |       Artifact destroy.tfplan will be collected from destroy.tfplan after this step completes
10:34:23   Info     |       -rw-r--r-- 1 root root 26123 Mar 29 10:34 destroy.tfplan
10:34:23   Verbose  |       Process /bin/bash in /home/Octopus/Work/20210329103321-938794-32 exited with code 0
10:34:23   Verbose  |       Collecting artifact destroy.tfplan
10:34:23   Warning  |       Artifact destroy.tfplan not found at path 'destroy.tfplan'. This can happen if the file is deleted before the task completes.

Notice that there is a Verbose log message stating -

Artifact destroy.tfplan will be collected from destroy.tfplan after this step completes

Given that I’m using a Docker container to execute the script, and that the container is removed once script execution completes, how can I capture an artefact?

Thanks,
David

Hi David,

Thank you for reaching out to us with your query.

I’ve been looking into this and I can see that in previous examples the recommended approach has been to extract the required file from the docker container and to capture it as an artefact from the local filesystem instead. Can you please try this and let me know how it goes?
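As a sketch of what I mean (the container and file names here are purely illustrative, and the docker command is commented out so the snippet runs as-is — this only applies when your own script starts the container):

```shell
#!/bin/bash
# Illustrative names only: substitute your container name and file path.
CONTAINER="terraform-runner"
SRC_PATH="/work/destroy.tfplan"

# Copy the file out of the container onto the worker's filesystem,
# then capture it as an artifact from there:
echo "docker cp ${CONTAINER}:${SRC_PATH} ./destroy.tfplan"
# docker cp "${CONTAINER}:${SRC_PATH}" ./destroy.tfplan
# new_octopusartifact ./destroy.tfplan
```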

If the above option doesn’t work or isn’t possible in your configuration could you please upload your process JSON and full task log so we can investigate further? These files could contain sensitive information so please let me know if you’d like me to make the topic private or, alternatively, you can e-mail them to support@octopus.com.

Best Regards,

Charles

Hi Charles,

Given that we don’t start the Docker container (Octopus does), I’m not sure it’s possible to extract the required file from it. The documentation you linked to suggests using the docker cp command, but that has to be run from outside the container.

I think for now it’s probably best to close this issue off. From further reading, I’m not sure if it’s possible to access a captured artefact in a later step anyway, so the need to capture an artefact no longer exists. Instead I’ll have to upload the file to S3 and then grab it again in a later step.

Thanks,
David

Hi David,

Thank you for getting back to me so quickly.

My apologies, I misinterpreted your message as if you were working with Docker from within a “Run a Script” step and were trying to capture an artefact after the Docker container exited. You are indeed correct that if you are using the “Run in a container” option for the whole step then you won’t be able to use the workaround suggested above.

However, you are also correct that there isn’t a native way to use artefacts later in a deployment process. The feature was intended for capturing log files and the like, so this wasn’t part of the requirements. You can access them in a script step by manually interacting with the Octopus Server via the API, but that can be a fairly convoluted process and your S3 alternative may be better.
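If you do want to explore the API route, the flow is roughly the following. Please note the endpoint shapes are my understanding of the REST API and should be verified against your server’s /api document; the curl calls are commented out so the sketch runs without a server, and all names are placeholders:

```shell
#!/bin/bash
# Placeholders: substitute your server URL, API key, and the ID of the
# server task that captured the artifact.
SERVER="https://octopus.example.com"
API_KEY="API-XXXXXXXX"
TASK_ID="ServerTasks-1234"

# 1. List artifacts attached to the task (assumed endpoint shape):
LIST_URL="${SERVER}/api/artifacts?regarding=${TASK_ID}"
echo "$LIST_URL"
# curl -H "X-Octopus-ApiKey: ${API_KEY}" "$LIST_URL"

# 2. Then, using the Id from the JSON response, download the content:
# curl -H "X-Octopus-ApiKey: ${API_KEY}" \
#      "${SERVER}/api/artifacts/Artifacts-42/content" -o destroy.tfplan
```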

Please do get back to us if you have any questions or if you’d like to continue the artefact investigation.

Best Regards,

Charles

Thanks Charles,

I was running AWS CLI scripts in the container anyway, so it was as simple as this to upload the artefact in one step…

aws s3 cp myfile.json s3://#{BucketName}/

and this to download the artefact in a subsequent step…

aws s3 cp s3://#{BucketName}/myfile.json .

Thanks,
David
