
Rmpi with Open MPI on Debian

Hi Ingeborg,

Dirk already answered some of your questions, so I will not repeat them
here; I will just address some of your other points.

Overall, Open MPI is not a friendly MPI environment to work with unless it
runs under job or resource management such as Slurm. This affects how Rmpi
runs under Open MPI. The LAM way of spawning R slaves no longer works
under Open MPI. Under LAM, one just uses
R -> library(Rmpi) ->  mpi.spawn.Rslaves()
as long as a host file is set. Under Open MPI this spawns only one R
slave, on the master host, no matter how many remote hosts are specified
in the Open MPI hostfile. One has to use orterun to tell Rmpi where the
remote hosts are.
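For reference, an Open MPI hostfile is just a plain-text list of machine
names, optionally with slot counts; the host names below are made up:

```
# Open MPI hostfile (hypothetical hosts); "slots" limits how many
# processes Open MPI will launch on each machine.
node1 slots=2
node2 slots=2
```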
The README in Rmpi shows how to spawn R slaves with mpirun; the same
applies to orterun. Here is how I do it:
1. Copy the R script in R's bin directory to, say, Rort, which should be
   in your PATH.
2. Modify Rort to add the line
   R_PROFILE=${R_HOME_DIR}/library/Rmpi/Rprofile; export R_PROFILE
   right after R_HOME_DIR is set.
3. To get one master and 4 slaves, run
   orterun -np 5 Rort CMD BATCH R.in R.out
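As a sketch, assuming R is installed under /usr/lib/R (the paths here are
assumptions; adjust them for your system), steps 1 and 3 look like:

```shell
# Step 1: copy R's front-end shell script to Rort somewhere on the PATH
# (the source path /usr/lib/R/bin/R is an assumption).
cp /usr/lib/R/bin/R /usr/local/bin/Rort

# Step 2 is a manual edit of Rort: right after R_HOME_DIR is set, add
#   R_PROFILE=${R_HOME_DIR}/library/Rmpi/Rprofile; export R_PROFILE

# Step 3: start one master plus 4 slaves in batch mode.
orterun -np 5 Rort CMD BATCH R.in R.out
```

Since Rort is just a copy of the R launch script, any R command-line form
(CMD BATCH, --vanilla, etc.) works with it as usual.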

For example, I have an R.in like:
######
#library(Rmpi)
mpi.universe.size()
#mpi.spawn.Rslaves()
#mpi.setup.rngstream()
mpi.parReplicate(100,mean(rnorm(10000000)))
mpi.close.Rslaves()
mpi.quit()
######
Notice that library(Rmpi) and mpi.spawn.Rslaves() are commented out, since
the Rprofile takes care of loading Rmpi and spawning the slaves.

Regarding the 100% CPU usage from slaves while they are waiting:
unfortunately, I cannot fix that until a new release of Open MPI solves
this issue. In Rmpi 0.5-7 I added a few nonblocking parallel apply
functions, but they reduce CPU usage on the master only. I don't know
whether the slaves will automatically yield to other programs.
Nevertheless, you can try the nice command. You need to modify Rslaves.sh
in Rmpi's inst directory to add nice in front of
$R_HOME/bin/R --no-init-file ...
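As a sketch (the surrounding arguments in Rslaves.sh vary by Rmpi version;
everything after --no-init-file below is an assumption, not Rmpi's exact
line), the modified invocation would look something like:

```shell
# In Rmpi's inst/Rslaves.sh: run slave R processes at low scheduling
# priority, so their busy-waiting under Open MPI yields to other work.
# The arguments and redirections shown here are illustrative only.
nice -n 19 $R_HOME/bin/R --no-init-file --slave --no-save < $1 > $2 2>&1
```

nice -n 19 gives the slaves the lowest priority; pick a smaller value if
you only want them to yield partially.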

Hao
Ingeborg Schmidt wrote: