I have a bit of an edge case that I’m wondering how to handle. We have a data migration that we are testing via regular deployments from Octopus. One of the limitations of our production environment is that we’ll need a relatively small set of data from a database on one network to be available to a database on another network.
The networks can’t communicate for security reasons, but Octopus can talk to both networks.
I’m wondering if there is a way to create an Artifact or Package in Octopus (e.g. a CSV file) and then consume it in a step further down the process, ensuring that the artifact or package is deployed to the target server. As all the package versions are determined at release time, I’m guessing this is not possible with package deployment steps?
As I see it, my options are:
1 - Allow the databases to talk to each other directly across the networks.
2 - Create a file share that we can see across the networks and drop the artifact there.
3 - Find some way for Octopus to leverage its connectivity to push the artifact to the target machine.
4 - Create another deployment process that is triggered from the first process and can use a package that has been pushed to the package repository by the first process.
My preference would be option 3, as it means less work with InfoSec and Ops to get a solution, and less complexity than juggling multiple Octopus processes, releases and deployments.
I realise it may seem crazy that I don’t just have connectivity between the servers, but it would be useful for future reference to know whether this kind of thing is possible.
Thanks for getting in touch! Your idea of using artifacts is interesting. However, I’m not sure that approach will work, or be the easiest solution. That’s because when you create an artifact, we save it on the server with a random file name. It exists, but it’s not saved as a variable, so you won’t be able to tell what the file name is going to be. You could perhaps save a copy under your own file name so you can define and use that name.
Your next suggestion, to create a package and consume it in a subsequent step, is also an interesting thought. I agree it may not be possible, as package acquisition happens at the beginning of the deployment process.
If this data is just text, I would personally think the best/easiest solution would be to store this value in an output variable (i.e. the whole text as JSON). This could potentially work quite well, and we hear about people having MBs of JSON in a variable.
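To illustrate the round-trip, here is a minimal sketch in Python (the CSV columns are made up for illustration; in a real Octopus step you would hand the resulting string to the step’s output-variable mechanism rather than keep it in memory):

```python
import csv
import io
import json

# Sample CSV as it might be exported from the source database.
# The column names here are hypothetical.
csv_text = """id,name,region
1,Alpha,North
2,Beta,South
"""

# Parse the CSV and serialise all rows as one JSON string.
# This string is what you would store in the output variable.
rows = list(csv.DictReader(io.StringIO(csv_text)))
payload = json.dumps(rows)

# A later step, running against the target network, reads the
# variable back and recovers the original rows.
restored = json.loads(payload)
print(restored[0]["name"])
```

The point of going via JSON is that the whole dataset travels as a single opaque string, which is exactly what a variable can carry.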
You could also copy it to a file share that Octopus can see, and then copy it again from there.
Let me know if this helps. If you have any further questions, please don’t hesitate to reach out and we’ll be happy to help.
I did think about using an output variable, but assumed there would be a size limit. Is there any limit to the length of an output variable string?
If not, that sounds like my solution.
On 15 Jun 2017, at 02:10, Kenneth Bates wrote:
Thanks for following up. I’ve had a discussion with my team about this, and there doesn’t seem to be any explicit limit on the size or length of an output variable string. Just be aware of the trade-offs: the bigger it is, the longer it will take to process, and the more CPU and bandwidth it will consume. As we hear of people having MBs in a variable without issue, I think it could be a good option.
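If you want to keep an eye on that trade-off, you could measure the encoded payload before storing it. A sketch in Python (the 1 MB threshold below is an arbitrary example, not an Octopus limit):

```python
import json

# Hypothetical migration data, just to have something measurable.
rows = [{"id": i, "name": f"row-{i}"} for i in range(1000)]
payload = json.dumps(rows)

# Measure the UTF-8 size of the serialised payload before storing
# it in an output variable.
size_bytes = len(payload.encode("utf-8"))
size_mb = size_bytes / (1024 * 1024)

# Past a threshold of your choosing, fall back to another transport
# (e.g. the file-share option discussed above).
if size_mb > 1:
    print(f"payload is {size_mb:.1f} MB - consider a file share instead")
```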
I’d like to hear how you go, and let me know if you have any further questions!