-----Original Message-----
From: r-sig-hpc-bounces at r-project.org
[mailto:r-sig-hpc-bounces at r-project.org] On Behalf Of Ingeborg Schmidt
Sent: 11 February 2009 13:33
To: r-sig-hpc at r-project.org
Subject: [R-sig-hpc] Rmpi with Open MPI on Debian
Hello,
I wish to use Rmpi with Open MPI on Debian. Slaves should be
spawned on several computers and communicate with a single
master. However, Open MPI does not seem to use a default
hostfile here. So when I use
library(Rmpi)
mpi.spawn.Rslaves()
it only spawns one slave on the localhost instead of several
threads on all my computers. I am unable to find any useful
documentation for Open MPI (yes, I checked the FAQ on
open-mpi.org). Is there such a thing as a default hostfile
that is used when calling mpi.spawn.Rslaves()? Or is there
any other way to use mpi.spawn.Rslaves() with Open MPI so
that slaves are spawned across multiple computers?
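In case it matters, the kind of hostfile I have in mind is the usual plain-text Open MPI format (the hostnames and slot counts below are made-up placeholders):

```
# one line per machine; slots = number of processes to start there
node1 slots=2
node2 slots=4
node3 slots=2
```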
I am unsure about calling R via orterun. The only tutorials
regarding orterun and R I found (e.g.
http://dirk.eddelbuettel.com/papers/bocDec2008introHPCwithR.pdf )
seemed to imply that there either is no master or the
master identifies itself by looking at its mpi.comm.rank() .
Moreover, running
paste("I am", mpi.comm.rank(), "of", mpi.comm.size())
via
orterun --hostfile MYHOSTFILE -n CPUNUMBER Rslaves.sh RTest.R
testlog needlog /PATH/TO/R
results in
"I am 0 of 0"
on every node.
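(I wonder whether the problem is which communicator I query: as far as I understand, Rmpi's default communicator is 1, which only exists after spawning, whereas processes started directly by orterun live in communicator 0. A rank-branching sketch along those lines, untested on my side and with the communicator numbers being my assumption, would be:)

```r
# RTest.R sketch: branch on the rank in communicator 0, which I assume
# is the world communicator when all processes are started by orterun.
library(Rmpi)
if (mpi.comm.rank(0) == 0) {
  cat("I am the master on", mpi.get.processor.name(), "\n")
} else {
  cat("I am slave", mpi.comm.rank(0), "of", mpi.comm.size(0), "\n")
}
mpi.finalize()
```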
This is not what I want; I would like only the master to
execute my R script and send the relevant functions to the
slaves via mpi.bcast.Robj2slave(). My code contains commands
like mpi.remote.exec() which I would like to keep. I have not
yet seen any examples that combine calling R via orterun with
communication between master and slaves via
mpi.remote.exec() etc.
By the way: can you recommend a method to lower the process
priority of the R slaves so that other calculations done on the
same computers are not disturbed? Is placing a nice (the Linux
command for lowering process priority) before R in Rslaves.sh
sufficient when using mpi.spawn.Rslaves()?
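Concretely, I mean a wrapper along these lines (the path is a placeholder and this is a sketch, not my actual script):

```sh
#!/bin/sh
# start each R slave at minimum CPU priority so other jobs are not disturbed
exec nice -n 19 /PATH/TO/R --no-save -q "$@"
```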
Cheers,
Ingeborg Schmidt