This commit adds a function-valued parameter, colConvert, to read.table. The
function is called for every column.
The goal is to allow specific (non-standard) type conversions depending on the input.
When the parameter is not given, or the function returns NULL, the legacy default applies.
The colClasses parameter still takes precedence, i.e. colConvert only applies to
the default conversions.
This makes it possible to properly load a .csv with timestamps expressed in the
(quite common) %d/%m/%Y %H:%M format, which was impossible before: overriding
as.POSIXlt only creates a copy in the user's namespace, and read.table would
still call the base version of as.POSIXlt.
Rather than fixing only my specific requirement, this hook makes it possible to
probe for any custom format and do smart things with little syntax.
Signed-off-by: Kurt Van Dijck <dev.kurt at vandijck-laurijssen.be>
---
src/library/utils/R/readtable.R | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/src/library/utils/R/readtable.R b/src/library/utils/R/readtable.R
index 238542e..076a707 100644
--- a/src/library/utils/R/readtable.R
+++ b/src/library/utils/R/readtable.R
@@ -65,6 +65,7 @@ function(file, header = FALSE, sep = "", quote = "\"'", dec = ".",
          strip.white = FALSE, blank.lines.skip = TRUE,
          comment.char = "#", allowEscapes = FALSE, flush = FALSE,
          stringsAsFactors = default.stringsAsFactors(),
+         colConvert = NULL,
          fileEncoding = "", encoding = "unknown", text, skipNul = FALSE)
 {
     if (missing(file) && !missing(text)) {
@@ -226,9 +227,18 @@ function(file, header = FALSE, sep = "", quote = "\"'", dec = ".",
     if(rlabp) do[1L] <- FALSE # don't convert "row.names"
     for (i in (1L:cols)[do]) {
         data[[i]] <-
-            if (is.na(colClasses[i]))
+            if (is.na(colClasses[i])) {
+                tmp <- NULL
+                if (!is.null(colConvert))
+                    # attempt to convert via the user-provided hook
+                    tmp <- colConvert(data[[i]])
+                if (!is.null(tmp))
+                    tmp
+                else
+                    # fallback: the built-in default
                 type.convert(data[[i]], as.is = as.is[i], dec=dec,
                              numerals=numerals, na.strings = character(0L))
+            }
     ## as na.strings have already been converted to <NA>
     else if (colClasses[i] == "factor") as.factor(data[[i]])
     else if (colClasses[i] == "Date") as.Date(data[[i]])
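To make the intent concrete, here is a minimal sketch of a converter that could be passed as the proposed colConvert parameter. Note that colConvert only exists with this patch applied; the probe function below is illustrative and runs on its own:

```r
## Probe a character column for the dd/mm/yyyy hh:mm format discussed in
## this thread; return NULL to fall through to type.convert().
probeTimestamp <- function(x) {
    parsed <- as.POSIXct(x, format = "%d/%m/%Y %H:%M", tz = "UTC")
    ## claim the column only when every non-NA entry parsed
    if (all(is.na(x) | !is.na(parsed))) parsed else NULL
}

probeTimestamp(c("26/03/2019 12:48", "27/03/2019 14:28"))  # POSIXct vector
probeTimestamp(c("1.5", "2.5"))                            # NULL -> default path

## With the patch applied, usage would look like:
## read.table("data.csv", header = TRUE, sep = ",", colConvert = probeTimestamp)
```

The same function could probe for several formats in turn, returning the first one that parses the whole column.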
On Tue, 26 Mar 2019 12:48:12 -0700, Michael Lawrence wrote:
> Please file a bug on bugzilla so we can discuss this further.
All fine.
I didn't find a way to create an account on bugs.r-project.org.
Did I just not see it? Or do I need administrator assistance?
Kind regards,
Kurt
Kurt Van Dijck on Tue, 26 Mar 2019 21:20:07 +0100 writes:
> On Tue, 26 Mar 2019 12:48:12 -0700, Michael Lawrence wrote:
>> Please file a bug on bugzilla so we can discuss this
>> further.
> All fine. I didn't find a way to create an account on
> bugs.r-project.org. Did I just not see it? Or do I need
> administrator assistance?
> Kind regards, Kurt
--> https://www.r-project.org/bugs.html
Yes, there's some effort involved, for logistic reasons,
but I now find it's also a good thing that you have to read and
understand and then even e-talk to a human in the process.
Martin
Thank you for your answers.
I would rather not file a new bug, since what I coded isn't really a bug.
The problem I (my colleagues) have today is very stupid:
we read .csv files with a lot of columns, most of which contain
date-time stamps coded as DD/MM/YYYY HH:MM.
This is not exotic, but the base library's read.table (and derivatives)
only accepts date-times in a limited number of formats (which I
understand very well).
We could specify a format for each column individually, but that
syntax is rather complicated and difficult to maintain.
My solution to this specific problem became a trivial yet generic
extension to read.table.
Rather than relying on the built-in type detection, I added a parameter:
a function that is called for each to-be-type-probed column, so I
can override the limited built-in default.
If the function returns nothing, the built-in default is still used.
This way, I could construct a type-probing function that is
straightforward, not hard to code, and makes reading my .csv files
acceptable in terms of code (read.table parameters).
I'm sure I'm not the only one dealing with such needs; date-time
formats especially exist in enormous variety, but I want to stress here
that my approach is agnostic to my specific problem.
For those asking to 'show me the code', I refer to my second patch,
where the tests have been extended with my specific problem.
What are your opinions about this?
Kind regards,
Kurt
This has some nice properties:
1) It self-documents the input expectations in a similar manner to
colClasses.
2) The implementation could eventually "push down" the coercion, e.g.,
calling it on each chunk of an iterative read operation.
The implementation needs work though, and I'm not convinced that coercion
failures should fall back gracefully to the default.
Feature requests fall under a "bug" in bugzilla terminology, so please
submit this there. I think I've made you an account.
Thanks,
Michael
Just to clarify/amplify: on the bug tracking system there's a
drop-down menu to specify severity, and "enhancement" is one of the
choices, so you don't have to worry that you're misrepresenting your
patch as fixing a bug.
The fact that an R-core member (Michael Lawrence) thinks this is
worth looking at is very encouraging (and somewhat unusual for
feature/enhancement suggestions)!
Ben Bolker
Hey,
In the meantime, I submitted a bug. Thanks for the assistance on that.
> and I'm not convinced that
> coercion failures should fall back gracefully to the default.
The graceful fallback:
- makes the code more complex
+ keeps colConvert implementations small
+ requires the user to implement only what differs from the default
+ seemed to me the smallest overall effort
In my opinion, the graceful fallback makes the feature better,
but even without it, the colConvert parameter remains useful; it would still
fill a gap.
> The implementation needs work though,
Other than removing the graceful fallback?
Kind regards,
Kurt
Kurt,
Cool idea, and it's great seeing new faces on here proposing things and
engaging with R-core.
Some comments on the issue of fallbacks below.
Another way of viewing coercion failure, I think, is that either the
user-supplied converter has a bug in it or it was mistakenly applied in a
situation where it shouldn't have been. If that's the case, the
fail-early-and-loud paradigm might ultimately be more helpful to users there.
Another thought in the same vein: if fallback occurs, the returned
result will not be what the user asked for and is expecting. So either
their code, which assumes e.g. that a column was correctly parsed as a
date, is going to break in mysterious (to them) ways, or they have to put a
bunch of their own checking logic after the call to see whether their
converters actually worked, in order to protect themselves from that. Neither
really seems ideal to me; I think an error would be better, myself. I'm more
of a software developer than a script writer/analyst though, so it's possible
others' opinions would differ (though I'd be a bit surprised by that in
this particular case, given the danger).
Best,
~G
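A sketch of the fail-early alternative described above, under the assumption that the converter (here a hypothetical strictTimestamp, not part of any patch) signals an error instead of silently handing the column back:

```r
## Hypothetical strict converter: NULL when the column is clearly not a
## timestamp column, an error when it only partially parses.
strictTimestamp <- function(x) {
    parsed <- as.POSIXct(x, format = "%d/%m/%Y %H:%M", tz = "UTC")
    if (!any(!is.na(parsed)))
        return(NULL)                    # nothing parsed: not our column
    bad <- is.na(parsed) & !is.na(x)
    if (any(bad))
        stop("timestamp column only partially parsed, e.g.: ", x[bad][1])
    parsed
}

strictTimestamp(c("a", "b"))                       # NULL
strictTimestamp("26/03/2019 12:48")                # POSIXct
## strictTimestamp(c("26/03/2019 12:48", "oops"))  # error: loud and early
```

This keeps the "you get what you asked for, or an error" contract, at the cost of a slightly longer converter.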
On Wed, 27 Mar 2019 22:55:06 -0700, Gabriel Becker wrote:
I see.
So what if we provide a default colConvert, named e.g. colConvertBuiltin,
which is used when colConvert is not given?
1) This respects 'fail early and loud'.
2) The user gets what he asks for.
3) A colConvert implementation can call colConvertBuiltin manually if
desired, so colConvert is limited to adding on top of the default
implementation.
If this is acceptable, I'll prepare a new patch.
Kind regards,
Kurt
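A sketch of that proposal. The names colConvertBuiltin and colConvert come from this thread and are not an existing R API; type.convert() stands in here for read.table's default conversion path:

```r
## Stand-in for the default conversion a built-in colConvertBuiltin would do.
colConvertBuiltin <- function(x)
    type.convert(x, as.is = TRUE, na.strings = "NA")

## A user converter adds one exotic format and explicitly delegates the
## rest, instead of relying on a silent fallback inside read.table.
myConvert <- function(x) {
    parsed <- as.POSIXct(x, format = "%d/%m/%Y %H:%M", tz = "UTC")
    if (all(is.na(x) | !is.na(parsed)))
        parsed                    # the format the default cannot handle
    else
        colConvertBuiltin(x)      # explicit "next method"
}

myConvert("26/03/2019 12:48")  # POSIXct
myConvert(c("1", "2"))         # integer, via the delegated default
```

Delegation is now visible in the user's code, so a converter that forgets to delegate fails loudly rather than silently producing the default.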
Gabe described my main concern. Specifying a coercion function asserts that
the data (1) were what was expected and (2) were converted into what was
expected. Allowing a coercer to delegate to a "next method" is a good idea,
but keep in mind that any function that did that would not confer the
beneficial constraints mentioned above.
We can continue the discussion at:
https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17546
Michael