
Re: [condor-users] Some questions concerning security in Condor



Hi Zach & Todd,

Thanks for your replies. While we digest your latest points, I can say that we have no problem with the condor daemons running as root (as far as we can currently see) or with all jobs being statically linked. Also, NFS will not be available within our inter-college/department environment. However, I'll come back with a fuller reply to your points soon.

In a related security vein, I would like to raise another question (let me know if you've got tired of me by now!). We've currently set up a couple of flocked pools by punching holes through firewalls, but are keen *not* to adopt this model more generally, for two main reasons: the number of rules we would have to maintain for each different firewall, and the fact that many machines sit on private networks with private IP addresses.

While researching this area I came across a paper by Sechang Son and Miron "The Man" Livny, "Recovering Internet Symmetry in Distributed Computing", which I'm sure you're aware of. The authors address exactly the issue I've raised, and describe an interesting Dynamic Port Forwarding (DPF) solution to Condor's problems in such an environment. They go on to say that "DPF client is implemented as a part of Condor communication layer". Does this mean that the functionality is built into a standard Condor build? If so, how can we use it? I haven't been able to find anything in the main Condor documentation.
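For reference, the hole-punching approach we are trying to move away from looks roughly like the condor_config fragment below (the host name and port numbers are invented for illustration). Pinning the daemons to a fixed range with LOWPORT/HIGHPORT at least bounds the number of firewall rules per pool:

    ## condor_config fragment -- names and numbers invented
    ## flock jobs to the other department's central manager
    FLOCK_TO = condor-cm.other.cam.ac.uk

    ## confine condor's dynamically assigned ports to a fixed range,
    ## so each firewall needs one rule covering 9600-9700 (plus the
    ## collector's well-known port, 9618) rather than a rule per daemon
    LOWPORT  = 9600
    HIGHPORT = 9700

Even so, this does nothing for machines on private addresses, hence my interest in DPF.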

I'd be very grateful for any pointers in this area whatsoever.

Thanks for the help,

Mark

Zachary Miller wrote:

OK, here's another security-related question:

On systems where Condor is running as root, is it possible for the job's executable to be chroot'd? In particular, is it possible to MAKE Condor chroot the job's executable?



not currently. were we to add such a feature, your pool and possibly your job would have to deal with a number of additional constraints:

+ if they aren't already, your condor daemons must run as root


the next three constraints apply only to vanilla universe jobs:

+ your executable must be statically linked since you will no longer be able
to access libraries in /lib or /usr/lib, etc. when you create a standard
universe job using condor_compile, it is already linked statically.
+ you *must* use file transfer since your input files would otherwise be
inaccessable (i.e. no NFS shares or pre-staging things in
/some/scratch/dir). likewise you must use file transfer to get your output
back since condor will blow away the execute dir after the job completes.


+ your job could not invoke any system() calls to other executables since
/bin, /usr/bin, etc would not be available.
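to make the static-linking and file-transfer constraints concrete, a vanilla universe submit file for a jailed job might look something like this. the file names are invented, and the file-transfer knobs assume a 6.6-series condor:

    # job.submit -- hypothetical example
    universe   = vanilla

    # statically linked, e.g. built with: gcc -static -o myjob myjob.c
    executable = myjob

    # no shared filesystem is visible inside the jail, so all input
    # and output must travel with the job
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    transfer_input_files    = input.dat

    output = myjob.out
    error  = myjob.err
    log    = myjob.log
    queue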


there may be more constraints; these are just off the top of my head. if you think you can live with all that and still have it be useful, perhaps we can add chroot jails as a new condor feature. please let us know what you think.
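for the curious, the feature would essentially boil down to the sequence below in the starter. this is only a sketch of the idea, not condor source -- the function name, jail path, and uid/gid handling are all assumptions:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* run argv[0] inside a chroot jail as an unprivileged user.
     * caller (running as root) must have populated the jail dir
     * with the statically linked executable and its input files. */
    int run_jailed(const char *jail, uid_t uid, gid_t gid,
                   char *const argv[])
    {
        if (chroot(jail) != 0) { perror("chroot"); return -1; }
        if (chdir("/") != 0)   { perror("chdir");  return -1; }

        /* drop privileges *after* the chroot (which needs root),
         * group first so setuid() can't block the setgid() */
        if (setgid(gid) != 0)  { perror("setgid"); return -1; }
        if (setuid(uid) != 0)  { perror("setuid"); return -1; }

        /* the path now resolves inside the jail; the binary must be
         * statically linked since /lib is no longer visible */
        execv(argv[0], argv);
        perror("execv");
        return -1;
    }

note the ordering: chroot() requires root, and dropping root afterwards is what makes the jail meaningful, since a process that keeps uid 0 inside a chroot can break back out.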


cheers, -zach






--
Department of Earth Sciences, University of Cambridge
Downing Street, Cambridge CB2 3EQ, UK
Tel. (+44/0) 1223 333408, Fax  (+44/0) 1223 333450
http://www.esc.cam.ac.uk/~mcal00

