
python-like dictionary for R

On 12/30/2010 02:30 PM, Paul Rigor wrote:
This might be ok for small problems, but concatenation is an inefficient
R pattern -- the objects being concatenated are copied in full, so the
result becomes longer, and the concatenation slower, with each new key. With

a <- integer(); t0 <- Sys.time()
for (i in seq_len(1e6)) {
    a <- c(a, i)
    if (0 == i %% 10000)
        print(i / as.numeric(Sys.time() - t0))
}

we have, in 'appends per second'

[1] 3236.76
[1] 2425.111
[1] 1757.52
[1] 1331.846

We don't really have a dictionary here, either, as the 'key' values are
not stored. Phil's suggestion suffers from the same type of issue, where
the addition of new keys implies growing (reallocating) the vector.

a <- integer(); t0 <- Sys.time()
for (i in seq_len(1e6)) {
    key <- as.character(i)
    a[[key]] <- i
    if (0 == i %% 10000)
        print(i / as.numeric(Sys.time() - t0))
}
[1] 12659.18
[1] 9516.288
[1] 6821.47
[1] 5907.782


Better to use an environment (and live with reference semantics)

e <- new.env(parent=emptyenv()); t0 <- Sys.time()
for (i in seq_len(1e6)) {
    key <- as.character(i)
    e[[key]] <- i
    if (0 == i %% 10000)
        print(i / as.numeric(Sys.time() - t0))
}

with

[1] 20916.56
[1] 21421.85
[1] 21762.04
[1] 21207.69
[1] 21239.19
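For completeness, a small sketch of the remaining dictionary-style
operations on such an environment (all functions below are base R;
'e' here is a fresh environment, not the one from the timing loop):

```r
e <- new.env(parent = emptyenv())   # hash = TRUE is the default

e[["apple"]] <- 1L                                     # insert / update
val  <- e[["apple"]]                                   # lookup (error if key absent)
has  <- exists("banana", envir = e, inherits = FALSE)  # membership test
keys <- ls(e)                                          # all keys, as a sorted character vector
rm("apple", envir = e)                                 # delete a key
```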

The usual alternative to the concatenation pattern is
'pre-allocate-and-fill'

x <- integer(1e6); t0 <- Sys.time()
for (i in seq_len(1e6)) {
    ???
}

but this doesn't work with key/value pairs because there is no sense (or
is there?) in which the keys can be 'pre-allocated'.
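One partial workaround (a sketch, not from the thread): pre-allocate
the values and the keys as separate vectors, fill both by position, and
attach the keys as names in a single step at the end.

```r
n    <- 1000L
vals <- integer(n)     # pre-allocated values
keys <- character(n)   # pre-allocated (empty) keys
for (i in seq_len(n)) {
    vals[[i]] <- i
    keys[[i]] <- as.character(i)
}
names(vals) <- keys    # one names<- call, not one reallocation per key
```

This keeps the fill loop at constant cost per element, at the price of
knowing n up front.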

Creating the dictionary in one go is very efficient
> system.time(
+     structure(seq_len(1e6), .Names=as.character(seq_len(1e6))))
   user  system elapsed
  0.417   0.002   0.419
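The resulting named vector can then be used dictionary-style, though
note that lookup by name on a plain vector is a linear scan over the
names, unlike the hashed environment above -- a sketch:

```r
n <- 1000L
d <- structure(seq_len(n), .Names = as.character(seq_len(n)))
x <- d[["42"]]   # lookup by key; matches against names(d)
```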

Martin