Message-ID: <49EC09DF.2000901@csiro.au>
Date: 2009-04-20T05:36:31Z
From: Nathan S. Watson-Haigh
Subject: Shared Memory
I'm new to HPC and parallel programming but I've created my own R
package which has a parallel (using Rmpi) and non-parallel
implementation of the same algorithm. It works nicely, but I'm trying to
better understand whether Rmpi can use shared memory. For instance, if each
slave needs access to the same data matrix, does Rmpi create a copy of that
data for each slave when I do:
mpi.bcast.Robj2slave(myMatrix)
I think it does create a copy, so currently I pass each slave only the
subset of the data matrix it needs using:
objList <- list(m=m[xMin:nrow(m), xMin:nrow(m)])
mpi.send.Robj(objList, slave_id, 1)
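For context, the general pattern is something like this (a sketch only, assuming the slaves are already spawned, that m is the full matrix held on the master, and that a simple row-block split is appropriate; the variable names beyond Rmpi's own functions are hypothetical, and my real subsetting is a square block per slave rather than row blocks):

```r
library(Rmpi)

# Sketch: assumes mpi.spawn.Rslaves() has already been called and that
# m is the full data matrix on the master.
nslaves <- mpi.comm.size() - 1

# Assign each slave a contiguous block of row indices (hypothetical split)
blocks <- split(seq_len(nrow(m)),
                cut(seq_len(nrow(m)), nslaves, labels = FALSE))

# Send each slave only its block, rather than broadcasting the whole matrix
for (i in seq_len(nslaves)) {
  objList <- list(m = m[blocks[[i]], , drop = FALSE], rows = blocks[[i]])
  mpi.send.Robj(objList, dest = i, tag = 1)
}
```

Each slave then picks its piece up with mpi.recv.Robj(mpi.any.source(), 1), so at no point does any single slave hold the full matrix.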
myMatrix can be up to 24k x 24k in size. At 24000 x 24000 doubles x 8
bytes, that's > 4Gb of RAM just to hold it in memory! I suppose I'm
wondering if memory requirements are proportional to the number of slaves
requested - if each slave has to have its own copy of the data, and
whether I can reduce this requirement by utilising shared memory!?
Cheers,
Nath