I have a script that takes a path and uploads it to S3.
I need to dynamically find the artifact that is associated with the project. I can’t see this in the variables.
My only option right now is to introduce an explicit manual variable, but I don’t like this.
Because the artifact is a NuGet package, I’ll also need to unzip it before uploading. This is a bit unrelated to my original question, but for context, this is what I’m having to do. I’m hoping Octopus experts will be able to tell me a different way to do it. (I do NOT want a Tentacle on an EC2 instance for this; I want to run this deployment on the Octopus Server. Maybe that’s the bad idea?)
Process
get package name ($OctopusEnvironment["packageid"] <<- does this exist?)
copy d:/<package_id>/<package_id>.<release_id>.nupkg to c:/temp/workingdir
unzip
s3copy recursive c:/temp/workingdir
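For what it’s worth, the steps above can be sketched roughly like this in a PowerShell deployment script. This assumes the AWS CLI is installed on the Octopus Server and that the Octopus.Action.Package.PackageId variable (mentioned in the answer below) is available in the step; the paths, `$workingDir`, and `<my-bucket>` are placeholders, not real values:

```powershell
# Resolve the package ID and version from Octopus variables (assumed available in this step)
$packageId  = $OctopusParameters["Octopus.Action.Package.PackageId"]
$version    = $OctopusParameters["Octopus.Release.Number"]
$workingDir = "c:\temp\workingdir"

New-Item -ItemType Directory -Force -Path $workingDir | Out-Null

# Copy with a .zip extension, since Expand-Archive refuses non-.zip files
Copy-Item "d:\$packageId\$packageId.$version.nupkg" "$workingDir\package.zip"

# A .nupkg is just a zip archive, so Expand-Archive can unpack it
Expand-Archive -Path "$workingDir\package.zip" -DestinationPath "$workingDir\extracted" -Force

# Recursive upload; <my-bucket> is a placeholder bucket name
aws s3 cp "$workingDir\extracted" "s3://<my-bucket>/$packageId/" --recursive
```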
I want to know if I can simplify this somehow, or at least get the package ID from within a script template?
Thanks for getting in touch! There is a variable to retrieve the package ID. It’s an action-level variable, so you can retrieve it during the execution of your package step. Specifically, it’s Octopus.Action.Package.PackageId, which is listed in our system variables documentation.
You can use this variable in a custom deployment script in your package step, or set an output variable in a custom script in this step to use in a subsequent script step. For example, to set an output variable you could do something like Set-OctopusVariable -name "PackageIDVar" -value "#{Octopus.Action.Package.PackageId}" to substitute the package ID into the value, which you can then reference in later steps as #{Octopus.Action[PackageStepName].Output.PackageIDVar}.
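Put together, the two-step pattern looks something like this — "PackageStepName" stands in for whatever your package step is actually named in your process, and "PackageIDVar" is just an illustrative variable name:

```powershell
# --- In the package step's custom deployment script ---
# The octostache expression #{Octopus.Action.Package.PackageId} is substituted
# before the script runs, so the resolved package ID becomes the variable's value
Set-OctopusVariable -name "PackageIDVar" -value "#{Octopus.Action.Package.PackageId}"

# --- In a later script step ---
# Read the output variable back via $OctopusParameters, qualified by the step name
$packageId = $OctopusParameters["Octopus.Action[PackageStepName].Output.PackageIDVar"]
Write-Host "Package ID from earlier step: $packageId"
```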
Would that help out in your scenario? Let me know how you go or if you have any further questions going forward.