How to get a compressed file as External Feed

Hello,

We use JFrog to manage our repositories and External Feeds to pull artifacts and deploy them with Octopus Deploy (Docker, Terraform, and Helm packages at the moment). We now have a requirement to compress a set of SQL files (as .zip or tar.gz) and upload the artifact to JFrog so it can be pulled from Octopus Deploy (through an External Feed).
We can’t seem to find a way to configure the external feed, as there is no “Generic” feed type. We tried feed type = NuGet Feed (as suggested in this question: Generic External Repository), but we get this response when we run a test:


This is how it is configured:

Is there anything we are doing wrong?
If there is a better way to do it, we are open to suggestions. At the moment we are using the “Generic” package type in JFrog and compressing the files as tar.gz, but we can adjust if needed.

Kind Regards!
Nicolás Spencer

Hi @nicolas.spencer

Thanks for getting in touch with Octopus and for your question!

The answer you linked to has the best summary of using zips stored in Artifactory for use in Octopus:

Essentially, if you have to use a zip format, the only way to achieve this is by setting up a Maven feed in Artifactory and Octopus.

The blog post is a little old now but still holds up in terms of the setup.

You mentioned adjusting if needed; the only alternative that comes to mind is to use an Artifactory local NuGet feed, which may not be suitable, but we have a guide here on setting one up.

I hope that helps!

Best,

Hi @mark.harrison thank you very much for your answer!

We decided to use the zip format, setting up a Maven feed in Artifactory and Octopus. We ran a test in the External Feeds section and it works; I can see the packages :smiley:. My question is: how can we execute this on the worker Tentacle (which already has access to the SQL Server VM)? The zip file contains about 5 SQL files that need to be executed sequentially, which means I have to:
1. Decompress the zip file
2. Connect to the SQL database
3. Run a bash script that executes the SQL scripts sequentially
4. Handle some other logic based on the results of the executed scripts
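Those steps could be sketched roughly like this in bash (illustrative only; it assumes the `sqlcmd` CLI is installed on the worker, and the server name, database, and credential variables are placeholders):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run every .sql file in a directory sequentially, in alphabetical order.
# SQL_USER and SQL_PASSWORD are assumed to come from Octopus variables.
run_sql_scripts() {
  local dir="$1" server="$2" database="$3"
  for script in "$dir"/*.sql; do
    echo "Running $(basename "$script")"
    sqlcmd -S "$server" -d "$database" -U "$SQL_USER" -P "$SQL_PASSWORD" -i "$script"
  done
}

# Typical usage: extract the package, then run the scripts:
#   unzip -q db-scripts.zip -d scripts
#   run_sql_scripts scripts sql-vm.example.local ContinuousImprovementDB
```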

What I’m thinking is to use the Script Template, download the package that contains the SQL files, and then run the logic mentioned above. However, I think the template is limited to running a script already “cooked” into the package, with only parameters allowed to be passed in.

What do you think is the best approach to achieve this?

Thank you very much for your help, I really appreciate it.

Kind Regards,
Nicolás Spencer

Hi @nicolas.spencer

That’s great that you got the feed working!

As with most things, for your next problem, you have several choices :smile:

You mentioned bash, so I assume the worker is running the Linux Tentacle. If so, and you have or can install PowerShell Core, you could use the community step template called SQL - Execute SQL Script Files.

I recently updated the step template to support better file matching when executing scripts from a referenced package.

The idea is that you select your package (continuous-improvement:db-scripts) in your deployment process as usual, and then select the paths to the scripts you want to run using the SQL Scripts parameter. This parameter is multi-line, with one entry per file path. If the files are all in one folder, you can use a wildcard. Here is a quick example I used to test the template:

The package I chose has this file structure:

image
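As a concrete illustration (the paths here are hypothetical), the multi-line SQL Scripts parameter could mix explicit paths with a wildcard for files that share a folder:

```text
PostDeploy/script_1.sql
PostDeploy/script_2.sql
Migrations/*.sql
```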

You can see in the task log below the output from the file matching:

You may also notice warnings for paths it couldn’t find - I tested the logic to make sure it highlights paths that aren’t present. :smile:

The scripts will execute using the Invoke-SqlCmd PowerShell cmdlet.

Another thing to note is that I used SQL authentication, and I’d recommend using this authentication method with this step. Others are supported but may not work on Linux.

Using this approach means the connection to SQL, extraction of the package, and selection of files within the script are all handled for you.

If you can’t use the community step template and need to write the process from scratch in bash, you can still borrow the idea and use a custom step template, passing parameters from your deployment process into the step itself. The community step template uses the package parameter type - this allows any consumer of the step to choose the referenced package in the deployment process or runbook, rather than hard-coding it in the step itself.

You can also create a copy of the step template from the community library, import it as a custom step in your Octopus instance, and customize it to any other requirements you have (you mentioned above in step 4 needing to run other logic based on the results of the executed scripts).

Hopefully, that all makes sense, but if not, just let me know, and I’d be happy to answer any further questions!

Best,

Hi @mark.harrison thank you very much for your answer, that’s really helpful.

I tried the template and it is just what I was looking for :smiley:. I am curious about how it behaves when running multiple SQL files. Will it execute the scripts in the order that I define when I list them?

Let’s say files are named:
script_1.sql
script_2.sql
script_3.sql
script_4.sql
script_5.sql

If I use /script_*.sql, it should run the scripts sequentially (or maybe not :thinking:). So I guess I would have to list the scripts in order to force the execution sequence. Is that how it would work?

I will test this anyway tomorrow.

Thank you for your help! It is great that there is already a community template for this :grin:.

Kind Regards,
Nicolás Spencer

Hi @nicolas.spencer

I tested out your scenario. Given a set of files with the following structure:

image

And an input like so:

The scripts were found in alphabetical order and executed as such:

A couple of notes on this:

  1. The full folder path is not considered, only the file name within its folder. Some other DB deployment tools take folder structure into account when ordering scripts; this template is fairly simple in that regard.
  2. You could technically add multiple parameter entries like the one below, which could match multiple versions of scripts. If that happens, there is no guarantee of order; essentially, the scripts are executed in the order in which they are found and added to an internal PowerShell array.
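One caveat with alphabetical matching worth keeping in mind: numeric suffixes only sort as expected while they have the same number of digits, so `script_10.sql` sorts before `script_2.sql`. A quick shell check shows the pitfall (illustrative; `LC_ALL=C` forces plain bytewise collation so the order is predictable):

```shell
#!/usr/bin/env bash
set -euo pipefail
export LC_ALL=C   # bytewise collation, so the sort order is predictable

demo=$(mktemp -d)
cd "$demo"
touch script_1.sql script_2.sql script_10.sql

# Globs expand in sorted order:
printf '%s\n' script_*.sql
# script_1.sql
# script_10.sql   <- before script_2.sql; zero-pad (script_01.sql) to avoid this
# script_2.sql
```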

Note script_1.sql is executed twice:

I hope that helps!

Best,

Hi @mark.harrison thank you very much for your help!

That will definitely work for us :smiley:

Best Regards,
Nicolás Spencer

Hi @nicolas.spencer

Awesome, that’s great to hear :smiley:

Glad I could help out!

Best,

Hi @mark.harrison

I have one last question (hopefully), and maybe you can help me. For the deployment, we run 3 steps that involve packages from External Feeds. The first step checks whether all the package versions are the same, so I need to get the package versions as variables for that logic.
To get the package versions used in the release, I’m using:

SQL_PKG_VERSION=$(get_octopusvariable "Octopus.Action[Execute SQL Script Files].Package[continuous-improvement:db-scripts].PackageVersion")
API_PKG_VERSION=$(get_octopusvariable "Octopus.Action[Deploy Continuous Improvement API].Package[continuous-improvement-api].PackageVersion")
CLIENT_PKG_VERSION=$(get_octopusvariable "Octopus.Action[Deploy Continuous Improvement Client].Package[continuous-improvement-client].PackageVersion")

Indexing by step name in Action and by package ID in Package.
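For the comparison itself, a small sketch of the check could look like this (illustrative; the three `*_PKG_VERSION` variables come from the `get_octopusvariable` calls above, and a non-zero exit code fails the step):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Succeeds only when all three versions are identical.
versions_match() {
  [ "$1" = "$2" ] && [ "$2" = "$3" ]
}

# Example wiring:
#   if versions_match "$SQL_PKG_VERSION" "$API_PKG_VERSION" "$CLIENT_PKG_VERSION"; then
#     echo "All packages are at version $SQL_PKG_VERSION"
#   else
#     echo "Package version mismatch" >&2
#     exit 1   # a non-zero exit fails the step
#   fi
```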

It works for the steps that use a built-in Octopus Deploy template, but it doesn’t work for the community template that executes SQL script files:

Is there a way to get the version of the package the user selected in the release, so I can use that variable in the logic?

Thank you very much in advance.

Cheers,
Nicolás Spencer

Hi @nicolas.spencer

Regarding your question on the name of the package variable used in the community template:

The name used to index into the package will be the name of the step template parameter, e.g. template.Package.

This can be a bit confusing at first, but it’s referencing the name of the referenced package here in the variables:

So if your step name in your deployment process was Execute SQL Script Files, the full variable name should be:

SQL_PKG_VERSION=$(get_octopusvariable "Octopus.Action[Execute SQL Script Files].Package[template.Package].PackageVersion")

I hope that helps!

Best,

Hi @mark.harrison thank you for the quick response.

That really helps a lot!

Best,
Nicolás Spencer

Hi @nicolas.spencer

No problem, very happy to help!

Happy deployments :smile:

Best,