Matrix multiplication
Brian, thanks for spelling this out for those of us who are a bit slow. (Newbie questions below.)
On 12-03-13 08:54 AM, Brian G. Peterson wrote:
Simon Urbanek <simon.urbanek at r-project.org> wrote on 03/13/12 07:27 AM:
On Mar 12, 2012, at 5:40 AM, Patrik Waldmann wrote:
Dear members,

I noticed that there isn't a function for matrix multiplication in the new parallel package. What would be the most efficient way to do a matrix multiplication there?
The parallel package is for *explicit* parallelization. R already does implicit parallelization (using OpenMP or multi-threaded BLAS or both) automatically - this includes matrix multiplication.
On Tue, 2012-03-13 at 10:23 +0100, Patrik Waldmann wrote:
What does 'automatically' mean? Is X %*% t(X) parallelized?
Matrix multiplication %*% is a BLAS function, as Simon and Claudia already told you. So, if your BLAS does multithreaded matrix multiplication, it will use multiple threads 'implicitly', as Simon pointed out.
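To make the "%*% is a BLAS call" point concrete: X %*% t(X) can also be written as tcrossprod(X), which hands the whole job to one dedicated BLAS routine instead of materialising the transpose first. How many threads either call uses is still entirely up to the BLAS; the matrix size below is arbitrary.

```r
set.seed(1)
X <- matrix(rnorm(200 * 50), nrow = 200)

A <- X %*% t(X)     # explicit transpose, then a general BLAS multiply
B <- tcrossprod(X)  # same mathematical result via one specialised BLAS call

stopifnot(all.equal(A, B))
```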
Is there an easy way to know if the R I am using has been compiled with multi-thread BLAS support?
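One later-added convenience worth noting here (a version-dependent assumption: this feature appeared in R 3.4.0, well after this 2012 thread): sessionInfo() now reports which BLAS and LAPACK libraries R is actually using, so you can see at a glance whether a threaded BLAS is linked in.

```r
# In R >= 3.4.0, sessionInfo() records the BLAS/LAPACK shared libraries
# loaded by this R process. A path mentioning OpenBLAS or similar suggests
# a (potentially) multi-threaded BLAS; R's bundled reference BLAS is
# single-threaded.
si <- sessionInfo()
si$BLAS    # path of the BLAS library in use (R >= 3.4.0)
si$LAPACK  # path of the LAPACK library in use
```

On older builds, inspecting the shared libraries that the R binary links against (e.g. with ldd on Linux) gives the same information.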
Because the actual matrix multiplication is carried out by the BLAS, R doesn't really care how the BLAS does it: it could run on one thread (non-parallel), on multiple threads (as with GotoBLAS or OpenBLAS configured that way), or on a GPU (as with the MAGMA BLAS), and R would behave the same either way.

'Explicit' parallelization is for taking some other code in R and explicitly telling R to use a certain number of worker nodes to accomplish the task. This type of parallelization is often used for simulation and optimization, where the block of code to be parallelized may be very large.

Be aware that there can be unintended negative interactions between implicit and explicit parallelization. On cluster nodes I tend to configure the BLAS to use only one thread, to avoid resource contention when all cores are busy with explicit parallelization.
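A minimal sketch of the explicit style described above, using the parallel package itself (the worker count and the toy task are arbitrary choices for illustration):

```r
library(parallel)

# Start two explicit worker processes; each evaluates the function on its
# share of the input, independently of whatever the BLAS is doing.
cl <- makeCluster(2)
res <- parLapply(cl, 1:4, function(i) i^2)
stopCluster(cl)

unlist(res)  # 1 4 9 16
```

If each worker also triggered a multi-threaded BLAS call, you would get workers-times-threads competing for the same cores, which is exactly the contention described above.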
How do you do this? Does it need to be done when you are compiling R, or can it be done on the fly while running R processes?

Thanks,

Paul
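To make the "one BLAS thread per node" setup concrete: with a threaded BLAS such as OpenBLAS or GotoBLAS this does not require recompiling R. The thread count can be capped per process through environment variables that the BLAS reads when the R process starts (the exact variable names below are implementation-specific assumptions; check your BLAS's documentation).

```shell
# Cap BLAS threading before launching R; no rebuild of R is needed.
# OPENBLAS_NUM_THREADS is read by OpenBLAS, GOTO_NUM_THREADS by GotoBLAS,
# and OMP_NUM_THREADS by OpenMP-based BLAS builds.
export OPENBLAS_NUM_THREADS=1
export GOTO_NUM_THREADS=1
export OMP_NUM_THREADS=1
# ...then start R (or Rscript) from this shell so the limits are inherited.
```

Because the variables are read at library load time, they must be set before R starts; changing them from inside a running R session generally has no effect. Some setups also expose a runtime control (e.g. the RhpcBLASctl package's blas_set_num_threads()), but the environment-variable route works with a stock R.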