Acquire Packages failure

Hi,

We’re seeing more and more errors where we need to retry a few times before a deployment gets going (a bootstrap error), and where the Acquire Packages step fails. Examples of both are below.

Our Octopus cloud instance has become far busier over the last few months. Could that be related?

The instance is running deployments, or more commonly scheduled runbooks, pretty much 24x7.

Very frustrating for our users, and very frustrating for me as a DevOps engineer feeling the users’ frustration.

The step failed: Activities failed with errors (the same error was reported six times): ‘A request was sent to a polling endpoint, but the polling endpoint did not collect the request within the allowed time (00:02:00), so the request timed out. Server exception: System.TimeoutException: A request was sent to a polling endpoint, but the polling endpoint did not collect the request within the allowed time (00:02:00), so the request timed out.’

The step failed: Activities failed with errors:

‘The archive entry was compressed using an unsupported compression method. Server exception: System.IO.InvalidDataException: The archive entry was compressed using an unsupported compression method.
   at System.IO.Compression.Inflater.Inflate(FlushCode flushCode)
   at System.IO.Compression.Inflater.ReadInflateOutput(Byte* bufPtr, Int32 length, FlushCode flushCode, Int32& bytesRead)
   at System.IO.Compression.Inflater.ReadOutput(Byte* bufPtr, Int32 length, Int32& bytesRead)
   at System.IO.Compression.Inflater.InflateVerified(Byte* bufPtr, Int32 length)
   at System.IO.Compression.DeflateStream.ReadCore(Span`1 buffer)
   at System.IO.BinaryReader.InternalRead(Int32 numBytes)
   at System.IO.BinaryReader.ReadInt32()
   at Newtonsoft.Json.Bson.BsonDataReader.ReadNormal()
   at Newtonsoft.Json.Bson.BsonDataReader.Read()
   at Newtonsoft.Json.JsonReader.ReadForType(JsonContract contract, Boolean hasConverter)
   at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
   at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
   at Newtonsoft.Json.JsonSerializer.Deserialize[T](JsonReader reader)
   at Halibut.Transport.Protocol.MessageSerializer.ReadMessage[T](Stream stream)
   at Halibut.Transport.Protocol.MessageExchangeStream.Receive[T]()
   at Halibut.Transport.Protocol.MessageExchangeProtocol.ProcessReceiverInternalAsync(IPendingRequestQueue pendingRequests, RequestMessage nextRequest)’,
‘The remote script failed with exit code -1073741523’,
‘Bootstrapper did not return the bootstrapper service message’

Hi Phil,

Welcome to the Octopus Deploy Community. Thanks for reaching out about this issue. I can appreciate your frustration with these errors and I hope we can get to a solution soon.

From the information you’ve provided, it appears something is intermittently blocking package acquisition. To gain more clarity on the issue, can you please answer the following so I can get an idea of the environment you’re working with:

  • Are you running a proxy?
  • Are the tentacle machines running any antivirus?
  • Are there multiple tentacle instances installed on the same machine?
  • Is the tentacle service restarting mid-task?

Can you please also provide me with the following logs, which can be uploaded to this (link):

  • Raw Task log (steps to produce are here)
  • Deployment process JSON of a known failing project (steps to produce are here)
  • Tentacle logs with trace level logging enabled (Enable trace logging; Tentacle logs are located in <Tentacle Home>\Logs; there’s a short sketch of enabling trace level after this list)
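
For reference, enabling trace logging on a Tentacle is a matter of raising the minlevel in its NLog config and restarting the service. A minimal PowerShell sketch, assuming the default install path and the default service name (adjust both for your setup):

    # Default install path; adjust if your Tentacle lives elsewhere.
    $nlog = "C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe.nlog"

    # Raise the log level from Info to Trace in the NLog config.
    # (A blunt find-and-replace; it assumes the default config contents.)
    (Get-Content $nlog) -replace 'minlevel="Info"', 'minlevel="Trace"' | Set-Content $nlog

    # Restart the Tentacle service so the new level takes effect.
    # "OctopusDeploy Tentacle" is the default service name; named instances
    # run as "OctopusDeploy Tentacle: <instance>".
    Restart-Service "OctopusDeploy Tentacle"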

As this is a cloud instance, would you be happy to share your instance name (via direct message) so I can review the events for any red flags?

Hope to hear from you soon.

Kindest regards,
Lauren

Hi Lauren,

Apologies for taking so long to get back to you. Crazy busy!

In reply to your initial questions…

• Are you running a proxy? No.

• Are the tentacle machines running any antivirus? Yes, Windows Defender, but it has been running that for the last couple of years.

• Are there multiple tentacle instances installed on the same machine? Yes, and this is a very recent change. The server is running Windows Server 2016, so this could well be the issue.

• Is the tentacle service restarting mid-task? No.

I’ve got one of my team building a new server as I write this email, so we’ll see if that fixes it.

I’ll get the other things you’ve requested to you this afternoon.

And I’m more than happy to share the instance name with you if you let me know where you’d like me to DM it.

Kind regards,

Phil

Hi Phil,

Thanks for the update!

If you could please send the instance name to me via DM, that would be greatly appreciated (click my profile image > Message).

Kindest regards,
Lauren

Thanks for getting back to me, Phil!

You advised there are multiple Tentacles on the same machine. Are the errors occurring for all of the Tentacles on that machine? Have you tried reinstalling any of them?

If possible, I would suggest changing the communication style of the Tentacles from polling to listening to reduce the resources used on the Tentacle side. A polling Tentacle needs to poll the server periodically even when there are no jobs for it to perform, so with multiple Tentacles all polling from the same machine it’s possible to hit a resource conflict that causes a timeout. There’s a sketch of setting up a listening worker below.
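
To give you an idea, registering a Tentacle as a listening worker looks roughly like this. It’s a sketch based on the documented Tentacle.exe commands; the instance name, paths, server URL, API key, thumbprint, and worker pool name are all placeholders you’d substitute with your own values:

    cd "C:\Program Files\Octopus Deploy\Tentacle"

    # Create a new Tentacle instance with its own config file and certificate.
    .\Tentacle.exe create-instance --instance "ListeningWorker" --config "C:\Octopus\ListeningWorker\Tentacle.config"
    .\Tentacle.exe new-certificate --instance "ListeningWorker" --if-blank

    # --noListen "False" makes this a listening Tentacle on port 10933.
    .\Tentacle.exe configure --instance "ListeningWorker" --app "C:\Octopus\Applications" --port "10933" --noListen "False"

    # Trust your Octopus Server's certificate thumbprint (placeholder below).
    .\Tentacle.exe configure --instance "ListeningWorker" --trust "YOUR-SERVER-THUMBPRINT"

    # Register as a worker using the listening (passive) communication style.
    .\Tentacle.exe register-worker --instance "ListeningWorker" --server "https://your-instance.octopus.app" --apiKey "API-YOURAPIKEY" --comms-style TentaclePassive --workerpool "Default Worker Pool"

    # Install and start the Windows service for this instance.
    .\Tentacle.exe service --instance "ListeningWorker" --install --start

One thing to note: a listening worker needs its port (10933 by default) reachable from the Octopus Server, which is the trade-off for not having every Tentacle poll.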

Let me know how you go with the above Phil and look forward to hearing from you.

Kindest regards,
Lauren

Hi Lauren,

Thanks so much for this.

  • We’ve created a new worker for dev and similar workloads and moved two Tentacles onto it.
  • So we’re just running one Tentacle on the critical worker now.
  • Fingers crossed, but so far no more Acquire Packages timeouts. :blush:

The original workers were set up before my time, so I’ve always just followed suit and chosen polling, the same as my predecessors!

So we’ll definitely set up another worker using a listening Tentacle, as what you’re saying makes a lot of sense.

I’m happy for you to close this issue… if you need to do so.

Your advice has been invaluable.

Kind regards,

Phil

Hi Phil,

That’s great news! I also appreciate you filling me in on the changes you’ve made.

Best of luck and as always, please reach out if there is anything else we can help you with.

Kindest regards,
Lauren
