I am setting up a 2nd Linux SSH Worker.
When 2 workers are enabled, I get a message like this.
line 12: cd: /home/octopus/.octopus/Applications/OctopusServer/ips-shopify/Dev/ips/23081.205.1-dev960_4: No such file or directory
March 22nd 2023 11:02:05 Fatal
The remote script failed with exit code 1
The folder *_3 exists, and if I run the job again, *_4 will exist, but then *_5 will fail with the error above. I think this is happening because I have 2 workers set up: 1 worker has the log, and the other server is trying to execute? Is it not possible to have 2 workers on the same job? I was hoping to set up 2 workers for HA.
Once I disable the 2nd worker, it all starts working again.
Thanks for reaching out and for all of the information.
You can definitely have 2 workers working on the same job, but depending on what they’re doing you may run into issues. For instance, if an earlier step runs on one worker that references a package and does some work on files in a directory, and a later step gets assigned to a different worker, that 2nd worker won’t have access to those files unless you do some extra work to make them available to it.
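To make that concrete, here’s a minimal sketch of the kind of “extra work” I mean, assuming your workers all mount some shared location (the /mnt/shared path, the “Stage files” step name, and the SharedWorkDir variable are placeholders, not anything already in your project):

# --- script for the earlier step (runs on whichever worker gets picked) ---
STAGING_DIR="/mnt/shared/ips-shopify/#{Octopus.Release.Number}"   # placeholder shared path
mkdir -p "$STAGING_DIR"
cp -r ./extracted-package/. "$STAGING_DIR/"                       # copy the worked-on files off this worker's local disk
set_octopusvariable "SharedWorkDir" "$STAGING_DIR"                # publish the location as an output variable

# --- script for a later step (may land on a different worker) ---
cd "#{Octopus.Action[Stage files].Output.SharedWorkDir}" || exit 1

The point is just that any later step reads a path every worker can actually reach, rather than a path on one worker's local filesystem.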
Can you please give me a high-level description of your deployment process and what you have the workers doing in each step? If you have any parallelism in the process please indicate where. It should help narrow down the point of failure. If necessary, you can DM me the description if anything needs to stay private.
Taking a look at everything together, I believe this is the high-level overview:
Step 2 is deploying a package to a Tentacle:
Tentacle = octopus-kube-bastion-v2
Step 4 is getting the directory from an output variable from Step 2 and trying to cd to it (my guess at that script is sketched below):
Worker = octopus-kube-bastion-v2-0e44
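If Step 4 is a script step, I’m picturing something roughly like the following (the “Step 2” reference and the InstallationDirectoryPath variable are my guesses at which output variable you’re reading; substitute whatever Step 4 actually uses):

# Hypothetical Step 4 script: resolve the directory Step 2 reported, then cd into it
DEPLOY_DIR="#{Octopus.Action[Step 2].Output.Package.InstallationDirectoryPath}"
cd "$DEPLOY_DIR" || exit 1   # this is the cd that fails when the step runs on a worker that never extracted the package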
When octopus-kube-bastion-v2-0e44 is attempting to access the folder that octopus-kube-bastion-v2 extracted to, does that worker have access to the same filesystem?
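As a quick sanity check (just a suggestion, adjust the path if yours differs), you could run the following on both workers and compare the output:

# Show which device/mount the Octopus applications directory lives on
findmnt -T /home/octopus/.octopus/Applications
# List the extracted release folders each worker can actually see
ls -ld /home/octopus/.octopus/Applications/OctopusServer/ips-shopify/Dev/ips/*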
It is the same filesystem. This also happened with an old worker we had: octopus-kube-bastion-v2 used to fail the same way when I enabled octopus-kube-bastion at the same time. Now it’s happening with a replica of octopus-kube-bastion-v2, which has part of the instance ID (0e44) attached.