Hi Dimitri,

Is this pretty much the same tar file each time? There were a couple of presentations at the 2015 HTCondor Week on various data caching patterns on the worker nodes, using technology like Squid and interfacing with HTCondor. You might be able to get a better idea from those. There were a couple of talks on the Wednesday.

Cheers,
Iain
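As a rough illustration of the caching pattern Iain refers to: if the tarball is served over HTTP, HTCondor's URL-based input transfer can fetch it through a worker-node or site Squid, so repeated jobs hit the cache instead of re-transferring a gigabyte each time. The sketch below is only a guess at what that could look like; the URL, the proxy address, and the use of http_proxy to reach the Squid are assumptions to verify against your transfer-plugin setup, not anything stated in the thread.

    # Hypothetical submit file: fetch the data over HTTP so a local Squid can cache it.
    universe                = vanilla
    executable              = run_job.sh
    # URL transfer via HTCondor's file-transfer plugins; the host name and
    # dataset.tar.gz are made-up placeholders.
    transfer_input_files    = http://data.example.org/dataset.tar.gz
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    # Assumption: the curl-based plugin on the workers honours the usual proxy
    # variables, so pointing it at a node-local Squid gives per-node caching.
    # Verify this against your HTCondor version / site configuration first.
    environment             = "http_proxy=http://localhost:3128"
    request_cpus            = 1
    queue 16

Whether that proxy setting actually reaches the transfer plugin (as opposed to just the job itself) depends on how the site is configured, so treat it as a starting point rather than a recipe.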
On Oct 31, 2016, at 19:07, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:

Hi everyone,

we have jobs that transfer a gigabyte zipped tarball of data, unzip and
start crunching. The problem we're running into is that when, say, 16 of them
land on a 16-core worker node at once, the untar'ing completely chokes
the drive, to the point where the condor daemons wait too long trying to
write their logs and die with status 44.
I expect that's a fairly common job pattern and I'm wondering what would
be the best way to deal with it (FVVO "best"). Any suggestions?
TIA
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
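For concreteness, the pattern described above might be submitted roughly as sketched here. The file names, the run_job.sh wrapper, and the flock-based serialization of the untar step are illustrative assumptions only, shown as one possible way to keep 16 simultaneous untars from saturating the drive, not as the thread's recommendation.

    # Hypothetical submit file for the pattern in the question: ship a ~1 GB
    # tarball to each slot, unpack it, crunch.
    universe                = vanilla
    executable              = run_job.sh
    transfer_input_files    = data.tar.gz
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    request_cpus            = 1
    queue 16

    #!/bin/sh
    # run_job.sh -- hypothetical wrapper: unpack the staged tarball, then crunch.
    set -e
    (
        # Node-local lock so only one job per worker untars at a time
        # (an illustration of easing the I/O contention, not a prescription).
        flock -x 9
        tar xzf data.tar.gz
    ) 9> /tmp/untar.lock
    # "crunch" stands in for the real analysis program.
    exec ./crunch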
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@cs.wisc.edu with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users
The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/