Auto deployment broken

We’ve got a deployment that had been working fine, triggering automatically whenever a package arrived. A few days ago I was editing it, and during that process I removed an old channel from the channels list. I suspect that may have broken it somehow.

Investigating today, I went to the Triggers screen and got the following exception on screen:
TypeError: Cannot read property 'Name' of undefined
at t.build (http://alkirac.australiasoutheast.cloudapp.azure.com/projects.b32bfb9cadf7d06c9ac0.hashedasset.js:1:1300025)
at t.render (http://alkirac.australiasoutheast.cloudapp.azure.com/projects.b32bfb9cadf7d06c9ac0.hashedasset.js:1:1298882)
at t.o [as render] (http://alkirac.australiasoutheast.cloudapp.azure.com/main.5657066ecb7b553187ff.hashedasset.js:1:3262)
at u (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:287981)
at beginWork (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:289481)
at i (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:301773)
at u (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:302289)
at c (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:302764)
at m (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:304786)
at h (http://alkirac.australiasoutheast.cloudapp.azure.com/react.aa4bf9821a1f0776fcaa.hashedasset.js:1:304295)
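
For what it’s worth, my guess at the failure mode, as a purely illustrative sketch (this is not Octopus’s actual source; the types and lookup are made up just to show the shape of the bug):

```typescript
// Illustrative only -- not Octopus's actual code. A trigger still holds
// the Id of the channel I deleted, so the lookup comes back undefined
// and reading .Name off it throws exactly this TypeError.
interface Channel { Id: string; Name: string; }
interface Trigger { ChannelId: string; }

function channelLabel(trigger: Trigger, channels: Channel[]): string {
  const channel = channels.find(c => c.Id === trigger.ChannelId);
  return channel!.Name; // TypeError: Cannot read property 'Name' of undefined
}

// A guarded version would render a fallback instead of crashing:
function safeChannelLabel(trigger: Trigger, channels: Channel[]): string {
  return channels.find(c => c.Id === trigger.ChannelId)?.Name ?? "(missing channel)";
}
```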

In other news: we can manually create a release, although the version number doesn’t auto-populate. The newly created release then deploys automatically, so it seems the issue is with creating the release automatically, not with deploying it.

Hi Dylan,

Thanks for getting in touch,

I’m sorry to hear you are experiencing this issue; I understand this behavior can be frustrating.

Based on the information in your query, this sounds similar to an issue I’ve previously encountered and raised a GitHub issue for, linked below (fixed in 2018.2.2).

In that scenario, however, the error appeared on the Overview screen rather than in the Triggers area as you’ve highlighted.

It may be possible to navigate into the Triggers area successfully using the V3 UI and fix the channel that is presumably missing from the Automatic Release Creation (ARC) settings. You can do this by replacing app# with oldportal# in the Octopus URL.

Once you’ve updated the ARC with an existing channel, please try switching back to the V4 UI by replacing oldportal# with app# in the URL to see if the issue persists.
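
For example, the Triggers URL would change from the first line below to the second (the project slug here is just a placeholder, and the exact path may differ slightly between UI versions):

```
http://alkirac.australiasoutheast.cloudapp.azure.com/app#/projects/your-project/triggers
http://alkirac.australiasoutheast.cloudapp.azure.com/oldportal#/projects/your-project/triggers
```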

Please let me know how you go with this; I look forward to hearing back from you 🙂

Your patience and understanding in this matter are greatly appreciated.

Kind Regards,

Reece

How do I access the V3 UI?

Never mind, just read your comment more closely:
“replacing app# with oldportal# in the Octopus URL.”

OK, yes, this worked. Using the V3 UI I was able to recreate the ARC.

I also had to go to the project settings and set the Release Versioning again, as that had become unset, but with that done, everything is working again.
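
For anyone else hitting this, both settings can be double-checked over the REST API as well. A rough sketch, assuming Node 18+ and an API key with read access to the project; the endpoint and field names follow the standard Octopus REST API, but verify them against your own version:

```typescript
// Sketch: inspect a project's versioning template and ARC settings.
// OCTOPUS_URL and API_KEY are placeholders for your own server and key.
const OCTOPUS_URL = "http://alkirac.australiasoutheast.cloudapp.azure.com";
const API_KEY = "API-XXXXXXXXXXXXXXXX"; // placeholder

async function checkProject(slug: string): Promise<void> {
  const res = await fetch(`${OCTOPUS_URL}/api/projects/${slug}`, {
    headers: { "X-Octopus-ApiKey": API_KEY },
  });
  const project = await res.json();

  // The versioning template that had become unset for us; the usual default is
  // "#{Octopus.Version.LastMajor}.#{Octopus.Version.LastMinor}.#{Octopus.Version.NextPatch}".
  console.log("Versioning template:", project.VersioningStrategy?.Template);

  // Automatic Release Creation: whether it's on, and which channel it targets.
  console.log("Auto create release:", project.AutoCreateRelease);
  console.log("ARC channel:", project.ReleaseCreationStrategy?.ChannelId);
}

checkProject("your-project").catch(console.error);
```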

Hi Dylan,

Thanks for getting back to me,

I’m glad to hear you were able to resolve this issue; I greatly appreciate you letting me know the outcome.

I tried to reproduce this same error in 2018.3.13 (not the latest, just what I currently had installed) by deleting the channel in use by the ARC; however, I was stopped by a message indicating that the channel was in use.

Can I ask which version of Octopus you are currently running? It would be good to know if this is an issue with a particular version or if it has since been resolved, possibly inadvertently.

I look forward to hearing back from you 🙂

Kind Regards,

Reece

Sure, we are using version 4.0.11.

I originally had some deployments on the other channel and, as you discovered, couldn’t remove it while there were deployments associated with it.

I used the retention policy to specify that we only wanted the last couple of weeks’ packages retained, and after a few weeks I was able to remove the channel. That’s when it all went downhill. 😉
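
If it helps the repro: the retention setting I changed is the one on the lifecycle. Roughly this shape (field names are my reading of the standard Octopus lifecycle resource and may differ by version; the values are ours):

```typescript
// Approximate shape of the lifecycle retention setting I changed so the
// old channel's deployments would age out and the channel could be deleted.
interface RetentionPolicy {
  Unit: "Days" | "Items";
  QuantityToKeep: number;     // 0 means keep all
  ShouldKeepForever: boolean;
}

// Keep only the last two weeks of releases.
const releaseRetention: RetentionPolicy = {
  Unit: "Days",
  QuantityToKeep: 14,
  ShouldKeepForever: false,
};
```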

Dylan
