Using Octopus with a custom artifact storage and delivery mechanism

I am evaluating Octopus Deploy for possible use in our company, considering some special requirements we have.

As far as I understand, build artifacts are normally stored physically on the Octopus Server in the form of NuGet packages. At deployment time the server pushes the bits to the target machine through the Tentacle.

That would not quite work for us, because:

  1. The artifact is big (a single .TAR file, approx. 1 GB in size).
  2. Network connectivity between the Octopus Server and the deployment target machine is VERY poor (think: modem).

If we let the Octopus Server push artifacts to the target, it would take an unacceptably long time.

But there is hope: we have a very custom combination of hardware and software capable of delivering the artifact(s) to the target machine in a reasonable time. For simplicity, assume there is a script that can deliver the artifacts fast.


  1. How well does Octopus handle the case when artifacts are NOT stored on the server? All relevant metadata would be there, but not the actual huge artifact.
  2. How well does Octopus handle the case when artifacts are delivered to the target machine by a custom script? Will it be able to pick up the pre-delivered artifacts and continue the deployment?
  3. Our artifacts are not NuGet packages, and we don’t want them to be. Zipping and unzipping would take substantial time, and we don’t need that.

I have just started looking at Octopus Deploy and like it so far, but it only makes sense for us to consider it further if we get satisfactory answers to the questions above. I need your expertise to help me make the right decision.

Thank you!

Hi Konstantin,

Thanks for getting in touch.

TL;DR: if you can script it, you can do it in Octopus. It sounds like it would be possible to deploy your artifacts, but Octopus plays best with a package repository, so you will lose some of the nice first-class features that involve packages.

Usually a package is deployed, and Octopus will unpack the package in the correct location and configure IIS or install services for you. Behind the scenes, Octopus is writing some scripts on your behalf. If you are in a situation where you can’t use packages, you would probably need to write the scripts yourself to transfer the artifacts and configure whatever needs to be configured. Your deployment process might look like a series of “Run a script” steps instead of a “Deploy a package” step.
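For example, a “Run a script” step that unpacks a pre-delivered .TAR into a release folder could be sketched as below. On a Windows Tentacle this would normally be a PowerShell script; the sketch uses Python purely to illustrate the flow, and all paths and file names are hypothetical. The first half only fabricates a small stand-in artifact so the sketch is self-contained:

```python
import tarfile
from pathlib import Path

# --- Stand-in for the custom delivery mechanism (hypothetical) ---
# In reality the ~1 GB .TAR would already be on the target machine,
# placed there by the custom hardware/software combination.
stage = Path("/tmp/stage")
stage.mkdir(parents=True, exist_ok=True)
(stage / "index.html").write_text("<h1>hello</h1>")
artifact = Path("/tmp/drop/app.tar")
artifact.parent.mkdir(parents=True, exist_ok=True)
with tarfile.open(artifact, "w") as tar:
    tar.add(stage / "index.html", arcname="index.html")

# --- The "Run a script" step itself ---
# Unpack the pre-delivered artifact into this release's directory.
app_dir = Path("/tmp/myapp/releases/1.0.0")
app_dir.mkdir(parents=True, exist_ok=True)
with tarfile.open(artifact) as tar:
    tar.extractall(app_dir)
```

The point is simply that delivery and unpacking are decoupled: Octopus only needs the script to know where the custom mechanism dropped the file.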

Does that help answer your question?


Thank you for your quick answer!

– Octopus will unpack the package in the correct location and configure IIS

So my specific question is: can I still use the “configure IIS” part of the standard step without the “unpack the package” part? Can it take the location of the bits that are already unpacked and available locally as a parameter?

I understand that I may end up with a deployment consisting solely of “Run a script” steps, and that still MAY be beneficial, but I wonder if there is a way to do better than that. For instance, have only some steps scripted and some standard. For that, the key question is: are the standard steps flexible enough to work with loose data rather than a NuGet package?

Hi Konstantin,

– Are the standard steps flexible enough to work with loose data rather than a NuGet package?

Most of the functionality is coupled to the package step. If you really wanted to use that functionality, you could deploy a dummy package (an empty zip file, for example). If your artifacts were already in the destination folder, perhaps put there by a script step, then you could use all of the features of the package step with your artifacts.

It’s a bit of a hack but no other method comes to mind.
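A dummy package of this kind can be little more than an empty zip. A minimal sketch (Python for illustration; the package id, version, and the `<PackageId>.<version>.zip` file-naming convention shown here are assumptions to be checked against your Octopus version):

```python
import zipfile
from pathlib import Path

# Build a metadata-only "dummy" package: an essentially empty zip whose
# file name follows the <PackageId>.<version>.zip convention.
# "MetaPackage" and "1.0.0" are hypothetical.
pkg_dir = Path("/tmp/packages")
pkg_dir.mkdir(parents=True, exist_ok=True)
pkg = pkg_dir / "MetaPackage.1.0.0.zip"
with zipfile.ZipFile(pkg, "w") as z:
    # A single placeholder entry documents why the package is empty.
    z.writestr("README.txt",
               "Metadata-only package; artifacts arrive separately.")
```

Pushing this to the built-in feed gives Octopus something versioned to deploy, while the real bits travel by the custom mechanism.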

Hope that helps answer your question.


Yes, that is the general direction I’m thinking in. Not necessarily dummy packages, but metadata-only packages. Is there a way to define how the folder structure should look on the target machine? Or is it fixed, but well known?

Hi Konstantin,

Thanks for clarifying: so you intend to deploy a meta package to do the configuration, and some kind of script to copy your artifacts.

If you deployed a meta package, you could set a “Custom installation directory” that already has your artifacts in it, and immediately use the IIS or Windows Service configuration in the package step.

For things like config transforms and variable substitution: each version of the package you deploy is extracted to a unique folder (like C:\Octopus\Applications\Tentacle\Development\MetaPackage\), where transforms and substitutions are performed. You can retrieve that path using the variable Octopus.Action[Deploy].Output.Package.InstallationDirectoryPath. If you copied your artifacts to that path in a pre-deploy script, then they would be transformed or have variables substituted according to the rules defined in the package step.
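A pre-deploy script along those lines might look like the sketch below (Python for illustration; the drop folder and the extraction path are hypothetical stand-ins for the value Octopus exposes through Octopus.Action[Deploy].Output.Package.InstallationDirectoryPath):

```python
import shutil
from pathlib import Path

# Hypothetical drop folder filled by the custom delivery mechanism.
drop = Path("/tmp/drop")
drop.mkdir(parents=True, exist_ok=True)
(drop / "web.config").write_text("<configuration/>")

# Hypothetical stand-in for the unique per-version extraction folder
# that the InstallationDirectoryPath output variable would report.
install_dir = Path("/tmp/Octopus/Applications/Dev/MetaPackage/1.0.0")
install_dir.mkdir(parents=True, exist_ok=True)

# Copy the pre-delivered artifacts next to the (empty) meta package so
# config transforms / variable substitution run over them too.
for item in drop.iterdir():
    shutil.copy2(item, install_dir / item.name)
```

After the copy, the package step sees the artifacts as if they had been extracted from the package itself.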

Hope that helps.


Shane, thank you very much for the elaborate answer! This is exactly what I was looking for! By the way, responsive support is a huge argument for choosing Octopus over other products!
I need to experiment a little bit, and I will come back to you if anything is still unclear.

Thanks a lot!

I figured I could make our artifact delivery mechanism work in line with the Octopus design, but for that I would need it to talk to the Tentacles in one of the environments through a dedicated HTTP proxy that I would have to build for it.

I learned that proxied Tentacles are only supported starting from version 3.4, which is still in pre-release, and as far as I understand there is no way to upgrade a pre-release to the 3.4 final when it is released.

What would your advice be?


Hi Konstantin,

We support upgrades from the betas to the final release. We are trying very hard to get 3.4 out soon (days), so if you felt more comfortable waiting, the wait should not be long.