avoiding loop

6 messages · jim holtman, Martin Morgan, parkbomee

#
What you need to do is understand how to use Rprof so that you can
determine where the time is being spent.  It probably indicates that
this is not the source of slowness in your optimization function.  How
much time are we talking about?  You may spend more time trying to
optimize the function than just running the current version, even if it
is "slow" (slow is a relative term and does not hold much meaning
without some context around it).
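
A minimal Rprof session might look like the sketch below. The file name
and the stand-in workload are illustrative only; in the real code the
call to slow_fn() would be replaced by the call to optim() and its
objective function.

```r
## Stand-in for the slow computation being investigated; replace with
## the real call to optim() and its objective function.
slow_fn <- function() {
    for (i in 1:5) x <- sort(runif(1e6))
    x
}

Rprof("profile.out")        # start collecting stack samples
invisible(slow_fn())        # run the code under investigation
Rprof(NULL)                 # stop profiling

## by.self shows where time is actually spent in each function itself,
## as opposed to by.total, which charges time to everything on the stack
head(summaryRprof("profile.out")$by.self)
```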
On Sat, Oct 31, 2009 at 11:36 PM, parkbomee <bbom419 at hotmail.com> wrote:

#
parkbomee <bbom419 at hotmail.com> writes:
You're giving us 'by.total', so these are saying that all the time was
spent in these functions or the functions they called. Probably all
are in 'optim' and its arguments; since little self.time is spent
here, there isn't much to work with.
These are probably in the internals of optim, where the function
you're trying to optimize is being set up for evaluation. Again
there's little self.time, and all these say is that a big piece of the
time is being spent in code called by this code.
These look like tapply-related calls (looking at the code for
tapply, it calls lapply, factor, and unlist, and FUN is the function
argument to tapply), perhaps from the function you're optimizing (did
you implement this as suggested below? It would really help to have a
possibly simplified version of the code you're calling).

There is material to work with here, as apparently a fairly large
amount of self.time is being spent in each of these functions. So
here's a sample data set

  n <- 100000
  set.seed(123)
  df <- data.frame(time=sort(as.integer(ceiling(runif(n)*n/5))),
                   value=ceiling(runif(n)*5))

It would have been helpful for you to provide reproducible code like
that above, so that the characteristics of your data were easy to
reproduce. Let's time tapply
> replicate(5, {
+     system.time(x0 <<- tapply0(df$value, df$time, sum), gcFirst=TRUE)[[1]]
+ })
[1] 0.316 0.316 0.308 0.320 0.304

tapply is quite general, but in your case I think you'd be happy with

  tapply1 <- function(X, INDEX, FUN)
      unlist(lapply(split(X, INDEX), FUN), use.names=FALSE)
> replicate(5, {
+     system.time(x1 <<- tapply1(df$value, df$time, sum), gcFirst=TRUE)[[1]]
+ })
[1] 0.156 0.148 0.152 0.144 0.152

So, about twice the speed (the timing depends quite a bit on whether
'time' is integer, numeric, character, or factor). The vector values
of the two calculations are identical, though tapply presents the data
as an array with names
[1] TRUE

tapply allows FUN to be anything, but if the interest is in the sum of
each time interval, and the time intervals can be assumed to be sorted
(sorting is not expensive, so could be done on the fly), then

  tapply2 <- function(X, INDEX)
  {
      csum <- cumsum(c(0, X))
      idx <- diff(INDEX) != 0
      csum[c(FALSE, idx, TRUE)] - csum[c(TRUE, idx, FALSE)]
  }

This calculates the cumulative sum, finds the points in INDEX where the
time interval changes, and then takes the difference over each
interval.
> replicate(5, {
+     system.time(x2 <<- tapply2(df$value, df$time), gcFirst=TRUE)[[1]]
+ })
[1] 0.024 0.024 0.024 0.024 0.024
[1] TRUE
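
To see the cumsum/diff trick concretely, here is tapply2 (repeated so
the snippet runs on its own) applied to a tiny sorted example with
invented values and groups:

```r
tapply2 <- function(X, INDEX)
{
    csum <- cumsum(c(0, X))
    idx <- diff(INDEX) != 0
    csum[c(FALSE, idx, TRUE)] - csum[c(TRUE, idx, FALSE)]
}

X     <- c(1, 2, 3, 4, 5)
INDEX <- c(1, 1, 2, 2, 3)   # must be sorted for the trick to work

## cumsum(c(0, X)) is 0 1 3 6 10 15; the group changes after positions
## 2 and 4, so the differences taken are 3-0, 10-3, and 15-10
tapply2(X, INDEX)           # 3 7 5, matching tapply(X, INDEX, sum)
```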

This approach could be subject to rounding error (if csum gets very
large while the intervals remain small). To calculate values where
choice == 1, I think you'd want

  tapply2(df$value * (df$choice==1), df$time)

rather than subsetting, so that the result of tapply2 is always a
vector of the same length, even when some time intervals never have
choice == 1.
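
The difference shows up as soon as some interval contains no
choice == 1 rows; a small invented example (with tapply2 repeated so
the snippet is self-contained):

```r
tapply2 <- function(X, INDEX)
{
    csum <- cumsum(c(0, X))
    idx <- diff(INDEX) != 0
    csum[c(FALSE, idx, TRUE)] - csum[c(TRUE, idx, FALSE)]
}

value  <- c(10, 20, 30, 40)
time   <- c( 1,  1,  2,  3)
choice <- c( 1,  0,  0,  1)   # interval 2 has no choice == 1 row

## Multiplying by the indicator keeps every interval, with 0 where
## no row qualifies ...
tapply2(value * (choice == 1), time)              # 10  0 40

## ... whereas subsetting first drops interval 2 entirely,
## yielding a shorter vector
tapply2(value[choice == 1], time[choice == 1])    # 10 40
```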

Because tapply in these examples seems so fast compared to your
calculation, I wonder whether optim is evaluating your function many
times, and that reformulating the optimization might lead to a very
substantial speed-up?

Martin

#
The first thing I would suggest is to convert your data frames to
matrices so that you are not continually converting them in the calls
to the functions.  Also, I am not sure what the code:

  realized_prob = with(DF, {
      ind <- (CHOSEN == 1)
      n <- tapply(theta_multiple[ind], CS[ind], sum)
      d <- tapply(theta_multiple, CS, sum)
      n / d
  })

is doing.  It looks like 'n' and 'd' might have different lengths,
since they are being created from two different sequences (CS and
CS[ind]).  I have no idea why you are converting to the "DF"
data frame.  There is no need for that.  You could just leave the
vectors (e.g., theta_multiple, CS, and ind) as they are and work with
them.  This is probably where most of your time is being spent.  So if
you start with matrices and leave the data frames out of the main loop,
you will probably see an increase in performance.
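
A sketch of that suggestion, using invented stand-in vectors for
theta_multiple, CS, and CHOSEN (the original data is not in the
thread). Working on the vectors directly avoids building DF on every
call, and multiplying by the indicator, as suggested earlier, keeps
n and d the same length:

```r
## Invented stand-in data; in the real code these would be the columns
## that were being packed into DF on every evaluation.
theta_multiple <- c(0.2, 0.5, 0.3, 0.8, 0.1)
CS             <- c(  1,   1,   2,   2,   2)
CHOSEN         <- c(  1,   0,   0,   1,   0)

## No data.frame construction: operate on the vectors directly.
## Multiplying by the indicator (rather than subsetting) guarantees
## n and d cover the same CS groups, even when a group has no
## CHOSEN == 1 row.
ind <- CHOSEN == 1
n <- tapply(theta_multiple * ind, CS, sum)
d <- tapply(theta_multiple, CS, sum)
realized_prob <- n / d
realized_prob
```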

2009/11/2 parkbomee <bbom419 at hotmail.com>: