
Error in makeMPIcluster(spec, ...): how to get a minimal example for parallel computing with doSNOW to run?

Hi,

In the meantime, I tried the Rmpi "hello world" example suggested by Mario, exactly the one given at
http://math.acadiau.ca/ACMMaC/Rmpi/sample.html

I submitted it to the batch system via:
bsub -n 4 -R "select[model==Opteron8380]" mpirun R --no-save -q -f Rmpi_hello_world.R

Below is the output. Does anyone know how to interpret the error (and possibly how to fix it :-) )? Hopefully that also helps solve the doSNOW problem.

Cheers,

Marius

## ==== snippet start ====

Sender: LSF System <lsfadmin at a6211>
Subject: Job 938942: <mpirun R --no-save -q -f Rmpi_hello_world.R> Done

Job <mpirun R --no-save -q -f Rmpi_hello_world.R> was submitted from host <brutus2> by user <hofertj> in cluster <brutus>.
Job was executed on host(s) <4*a6211>, in queue <pub.1h>, as user <hofertj> in cluster <brutus>.
</cluster/home/math/hofertj> was used as the home directory.
</cluster/home/math/hofertj> was used as the working directory.
Started at Fri Dec 17 11:35:40 2010
Results reported at Fri Dec 17 11:35:49 2010

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
mpirun R --no-save -q -f Rmpi_hello_world.R
------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      4.21 sec.
    Max Memory :         3 MB
    Max Swap   :        29 MB

    Max Processes  :         1
    Max Threads    :         1

The output (if any) follows:

master (rank 0, comm 1) of size 4 is running on: a6211 
slave1 (rank 1, comm 1) of size 4 is running on: a6211 
slave2 (rank 2, comm 1) of size 4 is running on: a6211 
slave3 (rank 3, comm 1) of size 4 is running on: a6211
+     library("Rmpi")
+     }
Error in mpi.spawn.Rslaves() : 
  It seems there are some slaves running on comm  1
+     if (is.loaded("mpi_initialize")){
+         if (mpi.comm.size(1) > 0){
+             print("Please use mpi.close.Rslaves() to close slaves.")
+             mpi.close.Rslaves()
+         }
+         print("Please use mpi.quit() to quit R")
+         .Call("mpi_finalize")
+     }
+ }
$slave1
[1] "I am 1 of 4"

$slave2
[1] "I am 2 of 4"

$slave3
[1] "I am 3 of 4"
[1] 1
## ==== snippet end ====
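For context, the message "It seems there are some slaves running on comm 1" is raised by mpi.spawn.Rslaves() when communicator 1 already has processes attached. A plausible explanation here (an assumption, not confirmed by the log): launching the script with `mpirun` under `-n 4` starts four R processes, so MPI already has four ranks before the script ever spawns anything. Spawn-based Rmpi scripts are usually launched as a single R process (e.g. `mpirun -np 1 R ...`) that then spawns its own slaves. A minimal guarded sketch, assuming the rest of the script matches the ACMMaC sample linked above:

```r
library("Rmpi")

## Only spawn slaves if none are attached to comm 1 yet; this guard is a
## sketch around the error seen in the log, not a confirmed fix.
if (mpi.comm.size(1) < 2) {
    mpi.spawn.Rslaves(nslaves = 3)
}

## Each slave reports its rank, as in the sample's hello-world step.
mpi.remote.exec(paste("I am", mpi.comm.rank(), "of", mpi.comm.size()))

## Clean shutdown, so a later run does not find stale slaves on comm 1.
mpi.close.Rslaves()
mpi.quit()
```

If the mpirun-launches-four-ranks explanation is right, the submission itself would also need adjusting (one launched process that spawns the rest), but that depends on how the cluster's MPI and LSF are wired together.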
On 2010-12-17, at 08:19 , Mario Valle wrote: