
"chunking" parallel tasks

One other approach, where the computation per chunk runs into
several (tens of) minutes, is to monitor the running time of
long-running tasks (each working on a chunk); if a task exceeds a
cutoff, split its chunk and assign the pieces to idle (or less
loaded) machines. If one copy of a duplicated task finishes before
the other, invalidate the slower copy and kill it.
Of course, the run time for a chunk should be much greater than the
cost of duplicating the chunk, reading it in, and starting new tasks.
To implement this, one would have to write a system that actually
monitors the running time of tasks and handles the chunking and
duplication.

I think Hadoop MapReduce does something similar (it calls this
speculative execution), though it is most certainly not the best tool
for some tasks.

Regards
Saptarshi
On Tue, Jan 26, 2010 at 12:54 PM, Martin Morgan <mtmorgan at fhcrc.org> wrote: