
Rmpi and cpu usage on slaves

As Dirk said, it is a feature of OpenMPI; LAM-MPI doesn't have this issue.
I don't think there is a solution on the slave side since mpi.bcast is a
blocking call. It might be possible to use nonblocking point-to-point
calls such as mpi.irecv together with Sys.sleep, but the whole slave-side
communication would have to be rewritten. If Dirk is correct, a future
release of OpenMPI will remove this behavior. That is why I did not try to
work out a solution, at least on the slave side. In real computation, all
slaves are supposed to use up all of their assigned CPU cycles anyway.
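To illustrate the idea, here is a minimal sketch of a slave-side polling loop that sleeps between nonblocking probes instead of spinning inside a blocking receive. The Rmpi calls used (mpi.iprobe, mpi.recv.Robj, mpi.any.source, mpi.any.tag) and their signatures are assumptions based on the package documentation and may differ across versions:

```r
# Hedged sketch: replace a blocking receive with a sleep-poll loop so the
# slave yields the CPU while idle. Rmpi function names/signatures here
# are assumed, not taken from the original post.
library(Rmpi)

repeat {
    # Nonblocking check for a pending message from the master (rank 0)
    if (mpi.iprobe(source = 0, tag = mpi.any.tag())) {
        task <- mpi.recv.Robj(source = 0, tag = mpi.any.tag())
        if (identical(task, "done")) break
        # ... process the task and send the result back ...
    } else {
        Sys.sleep(0.1)  # yield the CPU instead of busy-waiting
    }
}
```

The cost of this approach is added latency (up to one sleep interval per message), which is why it only makes sense while slaves are genuinely idle.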

The same issue applies to the master as well if any of the parallel apply
functions are used. In Rmpi 0.4-7, several nonblocking parallel apply
functions were added so that the master will not consume 100% CPU while
waiting.
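On the master side, using one of those nonblocking variants might look like the following sketch; the function name mpi.iparSapply is an assumption based on the naming of the nonblocking apply family ("i"-prefixed) and may differ in your Rmpi version:

```r
# Hedged sketch, assuming mpi.iparSapply is one of the nonblocking
# parallel apply variants added in Rmpi 0.4-7 (name may differ).
library(Rmpi)

mpi.spawn.Rslaves(nslaves = 4)

# Nonblocking variant: the master sleeps between checks for results
# instead of spinning at 100% CPU inside a blocking wait.
res <- mpi.iparSapply(1:100, function(i) i^2)

mpi.close.Rslaves()
mpi.quit()
```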

So far, LAM-MPI is still the best environment for programming, debugging,
and testing.

Hao
Dirk Eddelbuettel wrote: