Upload to AWS S3 bucket fails with exit code 100

Goal: upload the contents of a package to an S3 bucket.

The package is uploaded to Octopus Deploy.
AWS Account and permissions are configured to upload files. Tested using the AWS cli locally.

Using Octopus Cloud, default worker. The deploy logs show

The step failed: Activity Upload a package to an AWS S3 bucket on a Worker failed with error ‘The remote script failed with exit code 100’.

I did try changing the S3 bucket name to one that does not exist, and the error message changed to an access denied error, as expected, so the connection to AWS itself is working.

I’m used to bash scripting, where I can see the AWS CLI command for the operation that failed.

How can I gather more information about which action (or AWS CLI command equivalent) is failing?
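For what it’s worth, my usual way of answering that locally would be to re-run the equivalent copy with the CLI’s --debug flag, which prints the underlying S3 API calls it makes. This is just my local test with a placeholder bucket name, not what the Octopus step actually runs:

# Re-run the same upload locally with debug output (bucket name is a placeholder)
# The CLI writes its debug trace, including the S3 operations it calls, to stderr
aws s3 cp agent.svg s3://<bucket-name>/agent.svg \
  --storage-class STANDARD --acl private --debug 2> s3-debug.log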

What permissions does the upload step need?

The first few lines of the log are:

Running the step as the AWS role test-octopus-frontend-deploy/octopus-frontend

July 29th 2022 11:20:08  Info   Glob pattern ‘**/*’ matched 92 files
July 29th 2022 11:20:08  Info   Attempting to upload C:\Octopus\Tentacle\Work\20220729172001-347-114\staging\agent.svg to bucket qa-maple-yuhufrontend-testing with key /agent.svg.
July 29th 2022 11:20:08  Error  Calamari.Aws.Exceptions.UnknownException: An unrecognized Forbidden error was thrown while uploading to bucket qa-maple-yuhufrontend-testing
July 29th 2022 11:20:08  Error     at Calamari.Aws.Deployment.Conventions.UploadAwsS3Convention.InstallAsync(RunningDeployment deployment) in C:\BuildAgent\work\e0cefbed4ad11812\source\Calamari.Aws\Deployment\Conventions\UploadAwsS3Convention.cs:line 147
July 29th 2022 11:20:08  Error     at Calamari.Aws.Deployment.Conventions.UploadAwsS3Convention.Install(RunningDeployment deployment) in C:\BuildAgent\work\e0cefbed4ad11812\source\Calamari.Aws\Deployment\Conventions\UploadAwsS3Convention.cs:line 105

Hi @aschur,

Thanks for reaching out! I’m sorry to hear you are having trouble uploading files to an AWS S3 bucket via Octopus Deploy, but I’m happy to help take a look.

As a first step in troubleshooting, can you let me know what version of Octopus Deploy you are currently running?

Also, could you upload the raw task log for this failed task for review? Here is a secure link to upload this in case you don’t want to post it publicly on the forum. This should provide a little more detail around the specific error being thrown.

Lastly, I wanted to link our documentation on configuring this feature within Octopus using our built-in step template, which may help shed some light on the underlying issue here.

Looking forward to hearing back from you!

Best regards,

Britton

Hi Britton,

We are using Octopus Cloud, so I’m not sure what the current version is. This is the first project and first deployment we are creating.

The raw task log is ServerTasks-562.log.txt (20.2 KB)

I reviewed AWS documentation to troubleshoot uploading files to S3.

Locally, I can assume the same role Octopus Cloud is configured to assume, and I can upload a file using:

aws s3 cp --storage-class STANDARD --acl private agent.svg s3://qa-maple-yuhufrontend-testing/agent.svg
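For completeness, this is roughly how I assumed the role locally before running that command (the account ID is a placeholder):

# Assume the same role Octopus Cloud uses; the account ID below is a placeholder
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::<account-id>:role/test-octopus-frontend-deploy \
  --role-session-name local-s3-test \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
# Export the temporary credentials, then confirm the identity before retrying the upload
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$creds"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws sts get-caller-identity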

We made four releases for this project with different deploy process configurations. The documentation on Releases and Deployments mentions:

Releases are snapshots of your deployment process and other associated assets (packages, scripts, variables) as they existed when the release was created.

Is there a way to see the deployment process configuration for historical releases to refresh my memory on which settings I tried?

Thanks,
Alvin.

Hi @aschur,

Thanks for sending all this over, and also for providing the additional information about the troubleshooting you’ve tried so far.

In taking a look through everything I don’t see anything that sticks out as an issue, but I do have a few other things we can try to dive deeper on this problem.

As a first step, could you enable verbose variable output for this project, run a new deployment, and upload a fresh task log for review? This additional level of verbosity may shine some light on the underlying issue here. Feel free to upload the new log to the following secure link, as it may contain some sensitive information when printing out all of the variable values.
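If it helps, the quickest way to get that extra output is to add these two debug variables to the project’s Variables section and then create a new release and deployment (just remember to remove them afterwards, since they print every variable value into the task log):

OctopusPrintVariables            = True
OctopusPrintEvaluatedVariables   = True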

Along with this, could you also upload your process JSON for this particular deployment process for review? This will allow me to ensure that my process lines up with yours for testing purposes.

In regards to your last question on reviewing the previous states of the deployment process, you should be able to get a pretty good picture of this by reviewing your Octopus audit log, which contains a detailed history of any actions taken within your Octopus instance. Here is a screenshot of the filters you’d want to apply in order to get to this information (“Document modified” for event category and “Deployment Process” for document type) - you can also filter by specific projects within the left-hand menu if needed as well.

Looking forward to hearing back from you!

Best,

Britton

Hi Britton,

I updated the deploy process to use a different set of AWS credentials and a different S3 bucket so it is safer for us to try out ideas. I uploaded a file locally using the new credentials to verify it still works.

When updating the deploy process, I updated the AWS credentials but forgot to update the S3 bucket name. The deployment failed with the same error message that I saw the first time.

This suggests the problem is in the AWS credentials / S3 bucket combination: the step fails with this error whenever the role cannot access the configured bucket.

I then updated the S3 bucket so the assumed role can access it. The raw logs are uploaded to the secure link.

Alvin

Hi @aschur,

Thanks for uploading that new log!

In doing some further testing on my side, I was able to reproduce this same error by setting a bucket policy that prevents most actions against my S3 bucket.

I know you mentioned reviewing this document previously, but could you specifically focus on the bucket policy associated with this bucket to see if anything is set that would prevent this role from uploading objects? Hopefully this helps in getting to the bottom of this issue, but let me know what you find out in reviewing the specific permissions set on this bucket.
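If it’s quicker than clicking through the console, something along these lines will dump the current bucket policy for review (the bucket name is a placeholder, and the credentials used need s3:GetBucketPolicy):

# Print the bucket policy document as JSON (requires s3:GetBucketPolicy)
aws s3api get-bucket-policy --bucket <bucket-name> --query Policy --output text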

Best,

Britton

Hi Britton,

As you know, security is important, and part of that is granting least privilege to a role or process.

My goal is to identify a reasonably small set of privileges that the “Upload a package to an AWS S3 bucket” step requires.

The logs suggest it can create an S3 bucket.

The step configuration indicates it can:

  • optionally add metadata and tags
  • optionally set object (and possibly bucket) ACLs

It clearly requires other permissions to run as well.

So, what other permissions does the step need?

What actions is the step taking?
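While I wait to hear back, one idea I had for narrowing this down on our side is the IAM policy simulator from the CLI, which reports whether the role’s identity policies allow a given action. The account ID, role name, bucket name, and the list of actions below are placeholders / guesses on my part:

# Ask IAM whether the role is allowed to perform these S3 actions on the bucket's objects
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::<account-id>:role/<role-name> \
  --action-names s3:PutObject s3:PutObjectAcl s3:GetObject \
  --resource-arns 'arn:aws:s3:::<bucket-name>/*' \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' --output table
# Repeat with bucket-level actions (e.g. s3:ListBucket) against arn:aws:s3:::<bucket-name>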

Thanks,
Alvin.

Hi Alvin,

Thanks for clarifying your goal with this project, I definitely understand the desire to lock things down as much as possible. From what we’ve discussed this doesn’t sound like an issue with bucket policies but rather an issue with the user policy associated with the role you are using for this (apologies for the initial misunderstanding).

In doing some further testing on my side I was able to generate the following user policy that worked for this use case:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountOperations",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Sid": "AllowBucketOperations",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:CreateBucket"
            ],
            "Resource": "arn:aws:s3:::my-bucket-name"
        },
        {
            "Sid": "AllowObjectOperations",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket-name/*"
            ]
        }
    ]
}

I built this policy using the following documentation from AWS. After associating this user policy with my role I was able to upload objects to S3 via Octopus Deploy both when my bucket existed already and when it did not.
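For reference, if you manage the role from the CLI rather than the console, attaching the example above as an inline policy looks roughly like this (the role name, policy name, and file name are placeholders):

# Save the example policy as a local JSON file, then attach it to the role as an inline policy
aws iam put-role-policy \
  --role-name <role-name> \
  --policy-name octopus-s3-upload \
  --policy-document file://octopus-s3-upload.json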

Let me know what you find in comparing my example user policy to yours, but hopefully this helps sort out the underlying problem you’re running into.

Best,

Britton

Hi Britton,

Thanks for the help. I see I was missing the read permissions (s3:ListBucket and s3:GetObject).
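For anyone else who hits this, a quick sanity check I ran with the assumed role after adding those permissions (bucket name is a placeholder):

# Confirm list and read access now work for the role
aws s3api list-objects-v2 --bucket <bucket-name> --max-items 1
aws s3api head-object --bucket <bucket-name> --key agent.svg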

Cheers,
Alvin

Hi Alvin,

Thanks for the update, I’m happy to hear you were able to get things working!

While I was waiting to hear back from you I also got our main documentation updated with this same information, so this should be a good reference for others going forward.

Sorry you ran into some initial trouble in using this particular step template, but thanks for working with me to get things sorted out.

Cheers,

Britton
