Dear list,
How can I speed up the run of the following code (illustrative)?
#========================================================================
con <- vector("numeric")
for (i in 1:limit) {
  if (matched_data_found(i)) {        # pseudocode: matched data for the i-th item found
    if (i == 1) {
      con <- RowOfMatchedData
    } else {
      con <- rbind(con, matchedData)  # grows 'con' by one row on every match
    }
  }
}
#========================================================================
Each RowOfMatchedData contains 105 variables. When "i" runs over 10^7 and the
data container "con" gets large enough, the code becomes extremely slow. I know
this is a working-memory problem (only 2 GB); is there any way to circumvent it
without dicing and slicing the data?
How to speed up the for loop by releasing memory
4 messages · Yong Wang, Jeff Newmiller, Duncan Murdoch +1 more
Please read the posting guide. You need to provide reproducible code (please simplify, but make sure it illustrates your problem and runs) to communicate clearly what problem you are trying to solve.
Chances are good that you don't need any for loop at all, but without running code we can't tell.
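A minimal sketch of what that can look like, using made-up data (the post does not show the real data, and the "matched" test here is an assumption): if the match condition can be written as a vectorized test, one subset replaces the entire loop and every rbind() call.

```r
# Hypothetical stand-in data; the original post does not show the real data.
set.seed(1)
dat <- data.frame(matrix(runif(1000 * 105), nrow = 1000))  # 105 variables per row

# Suppose "matched" means the first variable exceeds 0.5 (an assumption):
matched <- dat[[1]] > 0.5

# One vectorized subset replaces the whole for loop.
con <- dat[matched, ]
```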
---------------------------------------------------------------------------
Jeff Newmiller
DCN: <jdnewmil at dcn.davis.ca.us>
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---------------------------------------------------------------------------
Sent from my phone. Please excuse my brevity.
Yong Wang <wangyong1 at gmail.com> wrote:
______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
On 12-12-15 10:10 AM, Yong Wang wrote:
You are reallocating and copying con in every step of your loop. Preallocate it and just assign new data into the appropriate row, and things will be much faster.

Duncan Murdoch
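A sketch of the preallocation pattern Duncan describes, again with made-up data (the row generator and the match test are stand-ins, not the poster's real code):

```r
limit <- 10000
p     <- 105                       # variables per matched row, as in the post

con <- matrix(NA_real_, nrow = limit, ncol = p)  # preallocate once, up front
n   <- 0                                         # count of rows actually filled

set.seed(1)
for (i in 1:limit) {
  row <- runif(p)                  # stand-in for RowOfMatchedData
  if (row[1] > 0.5) {              # stand-in for "matched data found"
    n <- n + 1
    con[n, ] <- row                # in-place assignment: no copy of 'con'
  }
}

con <- con[seq_len(n), , drop = FALSE]  # drop the unused tail at the end
```

Assigning into a preallocated matrix touches only the target row, whereas rbind() copies every row accumulated so far, so the original loop's total cost grows quadratically with the number of matches.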
You are in Circle 2 of 'The R Inferno'. You are wise to want to leave.
http://www.burns-stat.com/pages/Tutor/R_inferno.pdf

Pat
Patrick Burns
pburns at pburns.seanet.com
twitter: @portfolioprobe
http://www.portfolioprobe.com/blog
http://www.burns-stat.com
(home of 'Some hints for the R beginner' and 'The R Inferno')