About the performance of R
a) Base R already includes the "parallel" package. Deciding to use more than one processor for a particular computation is a very high-level decision that can require knowledge of the computation's time cost, the importance of other tasks on the system, and the interdependence of computation results. It is not a decision that R should make automatically.
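A minimal sketch of that explicit opt-in, using only the "parallel" package that ships with base R (the worker count and the toy slow_task() are illustrative assumptions, not anything from the original post):

  library(parallel)

  slow_task <- function(x) {   # stand-in for an expensive computation
    Sys.sleep(0.5)
    x^2
  }

  cl <- makeCluster(2)                 # the user decides how many workers to start
  res <- parLapply(cl, 1:4, slow_task) # and which computation is worth distributing
  stopCluster(cl)                      # and cleans up afterwards

For a trivial computation the cluster start-up cost alone can dominate, which is exactly the cost/benefit judgement that R cannot make for you.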
b) Most performance issues with R arise because users choose inefficient algorithms. Inserting parallelism into existing algorithms will not fix that.
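For example (an illustrative sketch, not from the original post): growing a result vector element by element forces a reallocation on every iteration, so the loop is quadratic-time, and handing the same loop to a parallel backend would not rescue it; the vectorized form does the same work in one linear pass.

  n <- 1e4

  out <- numeric(0)                  # inefficient: reallocates on every pass
  for (i in 1:n) out <- c(out, i^2)

  out2 <- (1:n)^2                    # efficient: one vectorized allocation

  identical(out, out2)               # TRUE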
---------------------------------------------------------------------------
Jeff Newmiller
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
DCN: jdnewmil at dcn.davis.ca.us
---------------------------------------------------------------------------
Sent from my phone. Please excuse my brevity.
On May 27, 2015 8:00:03 AM PDT, Suman <suman12029 at yahoo.co.uk> wrote:
Hi there,

Now that R has grown up with a vibrant community, it is the No. 1 statistical package used by scientists, and its graphics capabilities are amazing. It is time to provide native support in the R core for distributed and parallel computing, for high performance on massive datasets. Perhaps base R functions should also be replaced with the best R packages, such as data.table, dplyr, and readr, for fast and efficient operations.

Thanks

Sent from my iPad