Shared Memory
Hi Markus,

Thanks for the link to multicore; it definitely looks worth a try! Since you've already used it, could you provide some examples of the expr you pass to parallel()? I'm using the following as the basis of my code and would like to try multicore: http://math.acadiau.ca/ACMMaC/Rmpi/task_pull.R

Cheers, Nath
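[For readers following along: a minimal sketch of the parallel()/collect() API from the multicore package being asked about here. The function names come from that package; the work done in each expression is made up for illustration, and this assumes a Unix-like system where multicore can fork.]

```r
library(multicore)

m <- matrix(rnorm(1000 * 1000), nrow = 1000)

## parallel() forks the current R process and evaluates the given
## expression in the child; it returns immediately with a job handle
job1 <- parallel(colSums(m))
job2 <- parallel(rowSums(m))

## collect() waits for the forked jobs and returns their results as a list
results <- collect(list(job1, job2))
```

For many tasks of the same shape, mclapply() from the same package is often more convenient than managing parallel()/collect() by hand.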
Markus Schmidberger wrote:
Nathan S. Watson-Haigh wrote:
I'm new to HPC and parallel programming but I've created my own R
package which has a parallel (using Rmpi) and non-parallel
implementation of the same algorithm. It works nicely, but I'm trying to
better understand how/if Rmpi uses shared memory. Does/can Rmpi use
shared memory? For instance, if each slave needs access to the same data
matrix, does Rmpi create a copy of that data for each slave when I do:
mpi.bcast.Robj2slave(myMatrix)
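[For context, a typical broadcast in Rmpi looks something like the sketch below. It assumes Rmpi is installed and a working MPI runtime is available; mpi.remote.exec() is only used here to confirm the slaves received the object.]

```r
library(Rmpi)

## spawn worker R processes (requires a working MPI installation)
mpi.spawn.Rslaves(nslaves = 4)

myMatrix <- matrix(rnorm(100), nrow = 10)

## copies myMatrix into each slave's workspace -- every slave ends up
## holding its own full copy, so memory use scales with the slave count
mpi.bcast.Robj2slave(myMatrix)

## check that each slave now has the object
mpi.remote.exec(dim(myMatrix))

mpi.close.Rslaves()
```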
Yes, the broadcast sends each slave its own full copy.
I think it does create a copy and I therefore currently pass a subset of
the data matrix to each slave using:
objList <- list(m=m[xMin:nrow(m), xMin:nrow(m)])
mpi.send.Robj(objList, slave_id, 1)
Yes, this works
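[On the slave side, the matching receive for the mpi.send.Robj() call above would look something like this sketch. The tag value 1 matches the send in the quoted code, and source 0 assumes the master has rank 0, which is Rmpi's usual setup.]

```r
## run on the slave: block until the master's message arrives
## (tag 1 matches the mpi.send.Robj(objList, slave_id, 1) call above)
objList <- mpi.recv.Robj(source = 0, tag = 1)

## the slave now has only its subset of the matrix
mySubMatrix <- objList$m
```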
myMatrix can be up to 24k x 24k in size. That's more than 4 GB of RAM just to hold it in memory! I suppose I'm wondering whether memory requirements are proportional to the number of slaves requested, i.e. whether each slave has to have its own copy of the data, and whether I can reduce this requirement by utilising shared memory!?
Yes, you will run into memory problems. Using Rmpi, I don't think there is any way to use shared memory. There is a great new package for shared-memory systems: multicore http://cran.r-project.org/web/packages/multicore/index.html I have already tested it with 500 cores and the performance is great (nearly linear!)

Best, Markus
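[The reason multicore can avoid the per-slave copies is that it uses fork(), so child processes share the parent's memory pages copy-on-write. A hedged sketch of that pattern, with an illustrative matrix size:]

```r
library(multicore)

## one large matrix, allocated once in the parent process
m <- matrix(rnorm(5000 * 5000), nrow = 5000)

## each forked worker reads m through copy-on-write pages, so the
## matrix is not duplicated as long as the workers only read it
col.sums <- mclapply(1:ncol(m), function(j) sum(m[, j]),
                     mc.cores = 4)
```

Note the caveat: as soon as a worker modifies m, the touched pages are copied into that worker, so the memory saving only holds for read-only access.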
Cheers, Nath
_______________________________________________ R-sig-hpc mailing list R-sig-hpc at r-project.org https://stat.ethz.ch/mailman/listinfo/r-sig-hpc