On Fri, Aug 21, 2009 at 6:03 AM, Barry Rowlingson
<b.rowlingson at lancaster.ac.uk> wrote:
I often want to develop functions while manipulating data, but I
don't want to end up with a .RData file full of functions and data. It
may be that I have functions that are reusable but not worth
sticking in a package.
So I've tried to come up with a paradigm for function development
that more closely follows the way Matlab and Python do it (partly
inspired by a confused Matlab convert over on R-help).
My requirements were thus:
* .R files as the master source for R functions
* Don't see the functions in ls()
* After editing a .R file, make it easy to update the definitions visible to
R (unlike rebuilding and reloading a package).
So I wrote these two in a few minutes:
loadDir <- function(dir) {
  # Attach an empty environment under the directory's name, remember the
  # path inside it, then source every .R file into it.
  e <- attach(NULL, name = dir)
  assign("__path__", dir, envir = e)
  reloadDir(e)
  e
}

reloadDir <- function(e) {
  path <- get("__path__", envir = e)
  # "\\.[Rr]$" matches only a literal ".R"/".r" extension; the bare ".R$"
  # pattern with ignore.case = TRUE would also match e.g. "var", since the
  # unescaped "." is a regexp wildcard.
  files <- list.files(path, pattern = "\\.[Rr]$", full.names = TRUE,
                      recursive = TRUE)
  for (f in files) {
    sys.source(f, envir = e)
  }
}
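For concreteness, a session using the pair might look like this (my illustration, not from the original mail; the throwaway directory and the `sq` function are invented for the example):

```r
# Illustrative use of loadDir()/reloadDir() defined above.
d <- tempfile("rfuncs"); dir.create(d)
writeLines("sq <- function(x) x * x", file.path(d, "sq.R"))

e <- loadDir(d)
ls()                       # the sourced functions do not show up here...
res <- sq(3)               # ...but they are found via the attached env: 9
reloadDir(e)               # re-run after editing the files on disk
detach(d, character.only = TRUE)
```

detach() with the same name removes the environment from the search path when you are done with it.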
Rather than using __path__, why not just use chdir = TRUE in sys.source() and rely on the usual R working directory semantics?
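A sketch of how that suggestion might look (my reading of it, not code from the thread): drop the `__path__` binding, recover the directory from the name attribute that attach() records on the environment, and source each file with chdir = TRUE.

```r
loadDir2 <- function(dir) {
  e <- attach(NULL, name = dir)
  reloadDir2(e)
  e
}

reloadDir2 <- function(e) {
  # attach() stores the name we gave it as an attribute, so the directory
  # does not need a separate "__path__" binding.
  dir <- attr(e, "name")
  files <- list.files(dir, "\\.[Rr]$", full.names = TRUE, recursive = TRUE)
  for (f in files) {
    # chdir = TRUE makes the working directory the file's own directory
    # while it is being sourced, then restores it afterwards.
    sys.source(f, envir = e, chdir = TRUE)
  }
}

# Quick check against a throwaway directory:
d <- tempfile("dirdemo"); dir.create(d)
writeLines("whereami <- getwd()", file.path(d, "a.R"))
e <- loadDir2(d)
wd <- get("whereami", envir = e)
detach(d, character.only = TRUE)
```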
Sourcing everything on any change seems a bit wasteful, but until R objects have timestamps I can't think of a better way. Hmm, maybe my environment could keep a __timestamp__ object... Okay, this is getting less simple now...
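One way the __timestamp__ idea might work (my own elaboration, not code from the thread): keep each file's modification time in the environment and re-source only the files that have changed since the last reload.

```r
# Re-source only files whose mtime is newer than the recorded one.
reloadChanged <- function(e, dir) {
  old <- mget("__mtimes__", envir = e, ifnotfound = list(NULL))[[1]]
  files <- list.files(dir, "\\.[Rr]$", full.names = TRUE, recursive = TRUE)
  new <- file.info(files)$mtime
  names(new) <- files
  for (f in files) {
    # Source the file if it is new or has changed since the last reload.
    if (is.null(old) || is.na(old[f]) || new[f] > old[f]) {
      sys.source(f, envir = e)
    }
  }
  assign("__mtimes__", new, envir = e)
  invisible(e)
}

# Demonstration against a throwaway directory:
d <- tempfile("tsdemo"); dir.create(d)
f <- file.path(d, "hello.R")
writeLines('hello <- function() "v1"', f)
e <- attach(NULL, name = "tsdemo")
reloadChanged(e, d)
v1 <- get("hello", envir = e)()
writeLines('hello <- function() "v2"', f)
Sys.setFileTime(f, Sys.time() + 5)   # make sure the mtime actually moves
reloadChanged(e, d)
v2 <- get("hello", envir = e)()
detach("tsdemo")
```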
That's what I do for all of my packages during development. You really need a huge amount of code before it starts to take a noticeable amount of time.

Hadley