Ross Ihaka <ihaka@stat.auckland.ac.nz> writes: [on out-of-memory data]
It's something we definitely need (almost as much as REAL libraries). If only there were another 24 hours in each day ...
No problem. You just need someone as smart as you are to chip in with the development. Or four people half as smart ;^)

There might also be some more "brutal" approaches, such as having the vector heap as a memory-mapped file, leaving the caching and similar work to the operating system. OSes are already quite good at handling files with holes in them, so it's not like allocating a huge file just for a handful of variables. Some special routines for defragmentation might be necessary, though. Not that I really know what I'm talking about...

At any rate, it would probably be a good thing to start thinking in terms of modularity, database backends and such.

I've moved the thread to r-devel, since it seemed to be getting too technical for r-help.
   O__  ---- Peter Dalgaard             Blegdamsvej 3
  c/ /'_ --- Dept. of Biostatistics     2200 Cph. N
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk)              FAX: (+45) 35327907
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
r-devel mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !)  To: r-devel-request@stat.math.ethz.ch
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-