I need some information on how dependent Octopus is on feeds.
We have a feed that allows us to pull builds into Octopus to deploy packages.
My question is: can Octopus still deploy packages (previously deployed to, e.g., the QA environment) to other environments when the feed is unavailable, and how does it do this?
Thanks for getting in touch! Octopus depends almost completely on being able to download the package from the feed. The only scenario where Octopus won’t try to download the package is when you are re-deploying a package that was previously downloaded on that Tentacle and you have the “Skip packages that are already installed” setting enabled. Otherwise, Octopus will always try to download the package from the feed at the beginning of the deployment, and if the feed is unavailable, the deployment will fail.
We have a business requirement that we not be dependent on a feed after the package has been deployed to our QA environment. This is due to the risk that the original feed (which is provided by an external company) may not be available when we promote the package from the QA environment to the next environment level (i.e. UAT or Live), which is on a different Tentacle. Do you have any suggestions on how we can do this?
Your best option, if you don’t trust the third-party feed, is to push packages to the Octopus built-in repository as part of your build process.
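For what it’s worth, a push to the built-in repository is just a standard NuGet push against the server’s feed endpoint. A rough sketch, where the server URL and API key are placeholders you’d substitute with your own:

```shell
# Compose the built-in repository's push URL from the server base URL.
# The /nuget/packages path is the built-in feed endpoint.
built_in_feed_url() {
  printf '%s/nuget/packages' "$1"
}

# Placeholder server and API key -- substitute your own, then push
# from your build process with the standard NuGet client:
# nuget push MyApp.1.0.0.nupkg \
#   -Source "$(built_in_feed_url https://your-octopus-server)" \
#   -ApiKey API-XXXXXXXXXXXXXXXX
```

The API key needs push permission on the built-in repository; you can generate one from your user profile in the Octopus web portal.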
If that’s not a possibility, then look at a NuGet hosting service like MyGet to mirror the external feed. There isn’t an option in Octopus to cache an external feed.
We have a situation where we do not want all of the files that are created by the build process.
One idea I have is a Source environment that lets us deploy the feed package to the Octopus server and upload the package to the Octopus built-in repository. Once this is done, the rest of the environments will use the Octopus built-in repository to deploy builds.
What do you think, and is it possible?
Like Damian said, if you do not trust a third-party repository, you can use the Octopus built-in repository. It is as reliable as it gets, and it’s part of the same system that takes care of the deployments.
If you don’t want all the files created by the build process, I’d recommend moving the file selection and packing to the build server, so it only packs the files you want and then pushes the package to the Octopus built-in repository.
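To make that concrete, a rough build-server sketch: stage only the files you want into a clean folder, then pack and push that folder. The folder names, file patterns, and package id below are illustrative assumptions, not your actual build layout:

```shell
set -e
BUILD_OUT=build-output        # everything the build produced (assumed name)
STAGE=package-staging         # only the files we actually want to ship

# For the sketch, fabricate a mixed build output folder:
rm -rf "$BUILD_OUT" "$STAGE"
mkdir -p "$BUILD_OUT" "$STAGE"
touch "$BUILD_OUT/MyApp.dll" "$BUILD_OUT/Web.config" \
      "$BUILD_OUT/MyApp.pdb" "$BUILD_OUT/test-results.xml"

# Keep binaries and config; leave symbols, test results, etc. behind:
find "$BUILD_OUT" -maxdepth 1 \( -name '*.dll' -o -name '*.config' \) \
  -exec cp {} "$STAGE" \;

# Then pack the staged folder and push it to the built-in repository, e.g.:
# octo pack --id MyApp --version 1.0.0 --basePath "$STAGE" --outFolder .
# nuget push MyApp.1.0.0.nupkg -Source https://your-octopus/nuget/packages -ApiKey API-XXXX
```

This way the package that reaches Octopus only ever contains the files you chose on the build server.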
If none of this is a possibility, then I guess you could try the approach you described in your last message.
Hi, the reason I suggest my approach is that we have no control over third-party build servers. I’m very new to Octopus and am trying to work out how it can fit within our environment. The analysis I’m making will determine whether my company will proceed with using/purchasing Octopus.
Do you have information on how I can get Octopus to take the .nupkg file from a feed and save it to a specific folder?
Many thanks for your help.
“Do you have information on how I can get Octopus to take the .nupkg file from a feed and save it to a specific folder?”
Basically, you can’t; Octopus isn’t designed this way.
Technically you could do it by writing a script that runs in your QA deployment and copies the packages, but it’s a fair amount of work and not how Octopus is designed to be used.
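For completeness, such a step could look roughly like the sketch below; the cache and archive paths are placeholders, since the Tentacle’s actual package cache location depends on your installation:

```shell
set -e
# Placeholder paths -- the real Tentacle package cache location
# varies by installation; adjust both to suit yours:
PACKAGE_CACHE=/tmp/octopus-sketch/Files     # where downloaded .nupkg files land
ARCHIVE=/tmp/octopus-sketch/archive         # a folder you control

# For the sketch, fabricate a cached package:
mkdir -p "$PACKAGE_CACHE" "$ARCHIVE"
touch "$PACKAGE_CACHE/MyApp.1.0.0.nupkg"

# The QA deployment step would copy every cached package somewhere safe:
find "$PACKAGE_CACHE" -name '*.nupkg' -exec cp {} "$ARCHIVE" \;
```

Again, this is the hack route, with all the caveats above.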
“We have a business requirement where we do not want to be dependent on a feed after the package has been deployed to our QA environment”
Is any third-party service the basis of your requirement, or just this specific vendor’s feed?
As I said in my first reply, if it’s just that specific vendor’s feed, I’d use a paid MyGet account as a proxy. If you need it on-premises, you could use something like Artifactory.
Or better still, have the packages pushed into the Octopus package repository.
Those are your best options. As I said, you could write a bunch of scripts to hack something together, but it’s not going to be reliable, it’s not best practice, and it’s not something we’d recommend.