Deploying all files from a package to S3 and maintaining directory structure

Hi,

I am attempting to deploy a static website to an S3 bucket from a package. I have TeamCity generating a zip file that gets added to our Octopus Library, and I am then using the “Upload a package to an AWS S3 bucket” step to take the files in the package and place them in S3.

If I use the file matcher “./**/*” I lose my directory structure, as the files all just get placed in the root of the bucket. To get around this I currently have multiple file selections, with the relevant bucket key prefix for each individual folder. Clearly that isn’t maintainable.
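For example (hypothetical file names, just to illustrate the layout), a package containing:

    index.html
    css/site.css
    js/app.js

ends up in the bucket as the flat keys index.html, site.css and app.js, with the css/ and js/ prefixes dropped.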

Is there any way of maintaining the directory structure whilst using the file matcher, so that any future folders added to the static site are picked up automatically?

Or am I taking the wrong approach, and is there perhaps another way of deploying a static website package to an S3 bucket?

Octopus Version: 2018.5.7

Many Thanks,

David Jones

Hi David,

Thanks for getting in touch.

Yes, unfortunately there is a bug in our current S3 upload step; there is a known issue here that describes this behaviour.

The fix has been merged and will ship as part of the 2018.8.0 release (which I believe may be happening early next week).
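In the meantime, if you need the structure preserved before 2018.8.0 lands, one rough interim sketch would be to upload from a script step instead of the S3 step. This assumes Python and boto3 are available on the machine running the step, and the bucket name and package path below are hypothetical placeholders: walk the extracted package and use each file’s relative path as its object key.

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-static-site-bucket"  # hypothetical bucket name
    root = "site"                     # hypothetical path to the extracted package

    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            # Use the path relative to the package root as the object key,
            # so css/site.css keeps its css/ prefix in the bucket.
            key = os.path.relpath(local_path, root).replace(os.sep, "/")
            s3.upload_file(local_path, bucket, key)

That keeps each file under its original folder prefix rather than flattening everything into the bucket root, and it will pick up any new folders automatically.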

Sorry for the inconvenience.

Cheers
Mark

No worries, thanks for the reply.