(Linux) Deployment step waits for disowned background jobs to complete

Environment

  • Octopus 3.3.20 (in an HA configuration)
  • Tentacle running on Ubuntu 16.04.1

Problem

I’m running Docker on this server, and as part of deploying new containers I spawn child processes that follow the container logs and ship them to our Sumologic hosted collector. The deployment step runs my entire deployment script all the way to the end, but the deployment then never proceeds to the next step.

Here’s my logging command (more or less) in case it helps:

sudo nohup sh -c "docker logs --follow=true --timestamps=true ${container_name} | ${REPO_DIR}/bin/watch_docker.sh ${SUMO_COLLECTOR_URL}" & disown

The contents of watch_docker.sh are quite simple as well:

#!/bin/bash
URL="${1}"
# Forward each line of log output to the collector
while read -r data; do
  curl --data "${data}" "${URL}" -s --retry 5 --retry-delay 1
done

Any help would be greatly appreciated! Thanks!

Hi,
Trying to run separate processes from within Linux can be a bit tricky, and I believe the commands are being used incorrectly here. Reading this Stack Exchange post, it appears that you don’t want to use disown, since the job won’t disconnect itself from the terminal. Are you able to use the screen command instead? That approach has proved successful in the past.
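For example, your logging command could be adapted along these lines (a sketch only; container_name, REPO_DIR and SUMO_COLLECTOR_URL are the variables from your own script, filled with placeholders here):

```shell
#!/bin/sh
# Placeholders standing in for the variables that already exist in your
# deployment script -- substitute your real values:
container_name="my-app"
REPO_DIR="/opt/deploy"
SUMO_COLLECTOR_URL="https://collector.example.invalid"

# Same pipeline as before, but run in a detached screen session instead of
# "nohup ... & disown", so it is fully detached from the step's terminal:
cmd="docker logs --follow=true --timestamps=true ${container_name} | ${REPO_DIR}/bin/watch_docker.sh ${SUMO_COLLECTOR_URL}"
echo sudo screen -d -m sh -c "${cmd}"  # remove the leading echo to run it for real
```

The -d -m flags tell screen to start the session already detached, so nothing stays attached to the deployment step.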

I gave it a test by running a simple script:

echo "Going to Sleep"
screen -d -m sleep 20
echo "Awake!"

That seems to allow the step to exit immediately. Essentially, the connection needs to make sure no stdout or stderr streams are still open before it can close.
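You can see that stream behaviour with plain sh as well (a demonstration of the principle, not of the Tentacle itself): a reader of a pipe only sees EOF once every process holding the write end has let go, and a backgrounded child inherits that write end unless you redirect its streams.

```shell
#!/bin/sh
# A backgrounded child inherits the parent's stdout, so a reader of that
# pipe waits for EOF until the child exits (~2 seconds here):
start=$(date +%s)
sh -c 'sleep 2 & echo parent done' | cat > /dev/null
held=$(( $(date +%s) - start ))

# Redirecting the child's streams closes the pipe as soon as the parent
# exits, so the reader is released straight away:
start=$(date +%s)
sh -c 'sleep 2 >/dev/null 2>&1 & echo parent done' | cat > /dev/null
released=$(( $(date +%s) - start ))

echo "pipe held for ${held}s, released after ${released}s"
```

screen sidesteps this for you because the detached session has its own terminal, so nothing in it holds the deployment step's streams open.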

Give this a go and let me know if it works.
Cheers,
Rob

Hi Rob,

Good news, it worked! Thanks for that.

-Brian