Re: [Condor-users] condor_rm not killing subprocesses
- Date: Fri, 03 Jun 2005 20:59:23 +0300
- From: Mark Silberstein <marks@xxxxxxxxxxxxxxxxxxxxxxx>
- Subject: Re: [Condor-users] condor_rm not killing subprocesses
Hi
Let me correct my last mail - it's simply unbelievable. I checked my own
answer and was totally wrong: when a bash script is killed, it leaves
its children alive. There are several threads about this on Google, and
I was curious enough to check myself. Indeed, the consensus is that
there's no simple solution to this problem.
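A minimal way to see it (the script name is just for illustration): run
the script below, kill its PID from another terminal, and pgrep sleep
will still show the child.

#!/bin/bash
# orphan.sh (hypothetical name). Killing this script's PID does not
# propagate the signal to the foreground child; the sleep is simply
# reparented to init and keeps running.
sleep 300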
So the only thing I would do is to trap EXIT in the script and kill all
running processes. It does work for this simple snippet:
#!/bin/bash
procname=sleep

# On exit (including removal via condor_rm), kill any remaining
# children by name.
clean() {
    killall $procname
}
trap clean EXIT

for i in {1..10}; do
    $procname 100
done
If you kill this script, the sleep processes are killed as well.
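For what it's worth, a variation that avoids killing unrelated
processes that happen to share the name - a sketch assuming pkill from
procps is available, not something I've tested under Condor: signal the
script's direct children by parent PID instead of by name.

#!/bin/bash
# pkill -P matches on parent PID, so only this script's direct
# children are signalled. Grandchildren would need a process group
# instead, e.g. running the script under setsid and using kill -- -$$.
clean() {
    pkill -TERM -P $$
}
trap clean EXIT

for i in {1..10}; do
    sleep 100
done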
Mark
On Fri, 2005-06-03 at 01:18 -0400, Jacob Joseph wrote:
> Hi. I have a number of users who have taken to wrapping their jobs
> within shell scripts. Often, they'll use a for or while loop to execute
> a single command with various permutations. When such a job is removed
> with condor_rm, the main script is killed, but subprocesses spawned from
> inside a loop will not be killed and will continue to run on the compute
> machine. This naturally interferes with jobs which are later assigned
> to that machine.
>
> Does anyone know of a way to force bash subprocesses to be killed along
> with the parent upon removal with condor_rm? (This behavior is not
> unique to condor_rm. A kill to the parent also leaves the subprocess
> running.)
>
> -Jacob