R enterprise for linux

10 messages · Alaios, Paul Hiemstra, R. Michael Weylandt +3 more

#
Are you sure?

I just ran

N <- 128
system.time(matrix(rnorm(N^2),N) %*% matrix(rnorm(N^2),N))

and it took less than 0.044s on my (old-ish) laptop while doing other
things (and that includes the expensive rng calls). There might be
some other issues in play here. Even N <- 1280 takes < 5 seconds with
the rng call.

Michael
On Mon, Feb 6, 2012 at 9:29 AM, Alaios <alaios at yahoo.com> wrote:
#
On Mon, 6 Feb 2012, Alaios wrote:

I believe R can be built with OpenMP support at configure time. OpenMP is the
multi-processing library that allows threads to run on different cores or
processors.

In addition to Red Hat packages, you can build Slackware packages using the
scripts at www.slackbuilds.org.

Rich
#
On 6 Feb 2012, 06:29 (-0800), Alaios wrote:
It doesn't seem normal to me... in my computer such a multiplication
takes a fraction of a second:

system.time(array(rnorm(128*128), c(128,128)) %*%
            array(rnorm(128*128), c(128,128)))
   user  system elapsed 
  0.008   0.000   0.006 

Am I missing something?

Cheers,
Ernest
#
On Feb 6, 2012, at 10:17 AM, Alaios wrote:

You seem confused about the size of your objects. You said earlier they
were [128,128], which we took to mean 128 x 128 matrices, but they are far
larger than that. I'm guessing that you are consuming all of your RAM and
paging out to disk, but that is just a guess, since you provided none of
the system information that the Posting Guide suggests.
#
I think there is some support for multi-processor matrix
multiplication (google around for it), but in your case it might
suffice to use an optimized BLAS and R's built-in parallel facilities.
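[A minimal sketch of the built-in parallel facilities mentioned above, using the parallel package that has shipped with R since 2.14.0; the workload is invented for illustration.]

```r
library(parallel)   # bundled with R since 2.14.0

# How many cores does R see on this machine?
n_cores <- detectCores()

# Fork-based parallel apply over a toy workload. mc.cores = 1 keeps
# the sketch portable (forking is unavailable on Windows); raise it
# on a Unix-alike to use more cores, or use parLapply with a cluster.
res <- mclapply(1:4,
                function(i) sum(rnorm(1e5)),
                mc.cores = 1)

length(res)   # one result per input element
```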

As I said in your other thread -- if you are working with
diagonal/sparse matrices, it's also possible to use special cases for
efficient manipulations.
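[A minimal sketch of the special-case idea above, using the recommended Matrix package that ships with standard R distributions; the matrices here are random and purely illustrative.]

```r
library(Matrix)   # recommended package, included with R

N <- 1280
A <- matrix(rnorm(N^2), N)

# A diagonal matrix stored as such: N numbers instead of N^2
D <- Diagonal(x = rnorm(N))

# Matrix's methods dispatch on the diagonal class, so this product
# needs O(N^2) scalar multiplications rather than the O(N^3) of a
# dense %*% dense product
B <- D %*% A
dim(B)
```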

Michael
On Mon, Feb 6, 2012 at 10:47 AM, Alaios <alaios at yahoo.com> wrote: