
[R-pkg-devel] puzzling CRAN rejection

19 messages · Ben Bolker, Iñaki Ucar, Uwe Ligges +1 more

#
Before I risk wasting the CRAN maintainers' time with a query, can 
anyone see what I'm missing here?  Everything I can see looks OK, with 
the possible exception of the 'NA' result for "CRAN incoming 
feasibility" on r-devel-windows-ix86+x86_64 (which surely isn't my fault???)

   Any help appreciated, as always.

   Ben Bolker




=====
Dear maintainer,

package lme4_1.1-24.tar.gz does not pass the incoming checks 
automatically, please see the following pre-tests:
Windows: 
<https://win-builder.r-project.org/incoming_pretest/lme4_1.1-24_20201012_210730/Windows/00check.log>
Status: OK
Debian: 
<https://win-builder.r-project.org/incoming_pretest/lme4_1.1-24_20201012_210730/Debian/00check.log>
Status: OK

Last released version's CRAN status: ERROR: 2, NOTE: 5, OK: 5
See: <https://CRAN.R-project.org/web/checks/check_results_lme4.html>

Last released version's additional issues:
   gcc-UBSAN <https://www.stats.ox.ac.uk/pub/bdr/memtests/gcc-UBSAN/lme4>

CRAN Web: <https://cran.r-project.org/package=lme4>

Please fix all problems and resubmit a fixed version via the webform.
If you are not sure how to fix the problems shown, please ask for help 
on the R-package-devel mailing list:
<https://stat.ethz.ch/mailman/listinfo/r-package-devel>
If you are fairly certain the rejection is a false positive, please 
reply-all to this message and explain.

More details are given in the directory:
<https://win-builder.r-project.org/incoming_pretest/lme4_1.1-24_20201012_210730/>
The files will be removed after roughly 7 days.

*** Strong rev. depends ***: afex agRee altmeta aods3 arm ARTool bapred 
bayesammi BayesLN BayesSenMC baystability BBRecapture BClustLonG BFpack 
blme blmeco blocksdesign BradleyTerry2 buildmer cAIC4 car carcass cgam 
chngpt ciTools clickR climwin CLME clusteredinterference clusterPower 
CMatching cpr cvms DClusterm dfmeta DHARMa diagmeta difR doremi 
eda4treeR EdSurvey effects embed epr ESTER ez faraway faux fence 
finalfit fishmethods fullfact gamm4 geex GHap glmertree glmmEP GLMMRR 
glmmsr glmmTMB GLMpack gorica groupedstats gtheory gvcR HelpersMG 
HeritSeq hmi iccbeta IDmeasurer IMTest inferference influence.ME 
intRvals isni jlctree joineRmeta joineRML JointModel jomo jstable 
JWileymisc KenSyn lefko3 lmem.qtler LMERConvenienceFunctions lmerTest 
lmSupport longpower LSAmitR macc MAGNAMWAR manymodelr MargCond marked 
mbest MDMR mediation MEMSS merDeriv merTools meta metamisc metan 
metaplus Metatron micemd MiRKAT misty mixAK MixedPsy MixMAP MixRF MLID 
mlma mlmRev mlVAR MMeM multiDimBio multilevelTools MultiRR MultisiteMediation mumm mvMISE MXM nanny omics 
OptimClassifier pamm panelr paramhetero PBImisc pbkrtest pcgen 
pedigreemm Phenotype phyr piecewiseSEM Plasmode PLmixed powerbydesign 
powerlmm predictmeans PrevMap prLogistic psfmi ptmixed qape r2mlm 
raincin Rcmdr refund reghelper regplot REndo reproducer rewie RLRsim 
robustBLME robustlmm rockchalk rosetta rpql rptR rr2 RRreg rsq rstanarm 
rstap rties RVAideMemoire RVFam sae semEff siland simr sjstats skpr 
SlaPMEG smicd SoyNAM SPCDAnalyze specr SPreFuGED squid stability 
standardize statgenGxE statgenSTA StroupGLMM structree Surrogate 
surrosurv swissMrP TcGSA themetagenomics tidygate tidyMicro tramME 
tukeytrend userfriendlyscience varTestnlme VCA VetResearchLMM warpMix 
WebPower welchADF WeMix

Best regards,
CRAN teams' auto-check service


Flavor: r-devel-windows-ix86+x86_64
Check: CRAN incoming feasibility, Result: NA
   Maintainer: 'Ben Bolker <bbolker+lme4 at gmail.com>'

Flavor: r-devel-windows-ix86+x86_64
Check: Overall checktime, Result: NOTE
   Overall checktime 23 min > 10 min

Flavor: r-devel-linux-x86_64-debian-gcc
Check: CRAN incoming feasibility, Result: Note_to_CRAN_maintainers
   Maintainer: 'Ben Bolker <bbolker+lme4 at gmail.com>'
#
On Mon, 12 Oct 2020 at 22:04, Ben Bolker <bbolker at gmail.com> wrote:
There are UBSAN issues:
#
Thanks, but I don't think that's the problem because:

    (1) Those are reported as being from the last released version, not 
this one.
    (2) As far as I can tell from my local tests, I'm pretty sure I've 
fixed these issues in the current release.
    (3) In my experience UBSAN tests don't generally get re-run for a 
while after the initial CRAN testing anyway ...

   cheers
     Ben
On 10/12/20 4:23 PM, Iñaki Ucar wrote:
#
You are right. I was too fast and didn't read "last released version".
Then the only suspicious thing I see is:

Overall checktime 23 min > 10 min
On Mon, 12 Oct 2020 at 22:25, Ben Bolker <bbolker at gmail.com> wrote:
#
On 10/12/20 4:34 PM, Iñaki Ucar wrote:
I agree that's unfortunate, but it doesn't seem grounds for summary 
rejection ... ?  (CRAN policy says "Checking the package should take as 
little CPU time as possible".)  You may be right: it does seem to be _de 
facto_ policy that any NOTE is grounds for rejection.  On the other 
hand, this package has had NOTEs about 'installed size is <large>" for a 
long time, which hasn't been grounds for rejection.

   cheers
    Ben Bolker
#
There's this one in 
https://win-builder.r-project.org/incoming_pretest/lme4_1.1-24_20201012_210730/Windows/00check.log:

   Comparing 'lmer-1.Rout' to 'lmer-1.Rout.save' ...428d427
< boundary (singular) fit: see ?isSingular
430d428
< boundary (singular) fit: see ?isSingular

Those messages about the singular fit show up in

https://win-builder.r-project.org/incoming_pretest/lme4_1.1-24_20201012_210730/Windows/examples_and_tests/tests_i386/lmer-1.Rout 


but not in

https://win-builder.r-project.org/incoming_pretest/lme4_1.1-24_20201012_210730/Windows/examples_and_tests/tests_i386/lmer-1.Rout.save

The difference also doesn't show up in the x64 versions of the files.

Duncan Murdoch
On 12/10/2020 4:03 p.m., Ben Bolker wrote:
#
On Mon, 12 Oct 2020 at 22:40, Ben Bolker <bbolker at gmail.com> wrote:
Large size due to libs. But if data or docs go beyond 5MB, it would
probably be rejected. Likewise, I believe checking time is another one
of those NOTEs that are really hard lines.
#
On 10/12/20 4:40 PM, Duncan Murdoch wrote:
OK, thanks.

   I did notice this in passing (I think), but I got confused by the 
format.  (Also, it doesn't even rise to the level of a NOTE ...)

   It took me a while to localize the problem (line numbers have to be 
computed _after_ throwing away the R header info; see the source code of 
tools::Rdiff()).
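
A minimal self-contained sketch of that tools::Rdiff() behaviour (the file 
contents below are invented for illustration, not the actual lmer-1.Rout): 
Rdiff() strips the R startup banner from each file before diffing, so the 
line numbers in the ed-style chunks it prints refer to the trimmed files, 
not the raw .Rout files.

```r
# Write two toy .Rout-style files that differ by one stderr message;
# Rdiff() removes the "R version ..." banner line before comparing them.
a <- tempfile(fileext = ".Rout")
b <- tempfile(fileext = ".Rout.save")
writeLines(c("R version 4.0.2 (2020-06-22)",
             "> x <- 1",
             "boundary (singular) fit: see ?isSingular",
             "> x"), a)
writeLines(c("R version 4.0.0 (2020-04-24)",
             "> x <- 1",
             "> x"), b)
# Prints an ed-style diff chunk; returns 0 if the cleaned files match.
d <- tools::Rdiff(a, b, useDiff = FALSE)
```

Note the reported chunk refers to post-banner line numbers, which is why the 
"428d427" above doesn't line up with the raw files.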

    Having spent this long reading tea leaves, I think I'm going to 
write to the CRAN maintainers for clarification.

   * Refactoring all the tests to decrease the testing time 
significantly is certainly possible (at worst I can make a lot of stuff 
conditionally skipped on CRAN), but would be a nuisance that I'd rather 
save for the next release if possible.

   * Eliminating the two lines of variable output is easy, but it's 
mildly annoying to update the version number for this small a correction 
...

   Looks like from now on there will only be odd-numbered releases of 
lme4 on CRAN, since I seem guaranteed to make trivial errors with my 
first (odd-numbered) try each time and have to bump the version number 
when fixing them ...

    Ben Bolker
#
Actually more than 23 minutes check time for a single package is really 
excessive, can you pls cut that down?

This comes from

** running tests for arch 'i386' ... [509s] OK
** running tests for arch 'x64' ... [501s] OK

so only tests take 1010 seconds already.

I see that lme4 is a really important package that may justify some 
extra check time, but this is really a lot.

Can you please reduce the check time in the tests? e.g. using toy data 
and few iterations? Or by running less important tests only 
conditionally if some environment variable is set that you only define 
on your machine?
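
[Uwe's environment-variable suggestion can be sketched roughly as follows; 
the variable name is purely illustrative, not anything lme4 actually 
defines. CRAN machines won't set it, so they get the quick path:]

```r
# Guard the expensive tests behind an environment variable that only the
# maintainer defines locally.  "LME4_TEST_LEVEL" is a hypothetical name.
run_extended <- nzchar(Sys.getenv("LME4_TEST_LEVEL"))
if (run_extended) {
  n_iter <- 1000L   # full-length run on the maintainer's machine
} else {
  n_iter <- 10L     # toy-sized run on CRAN
}
```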

Best,
Uwe Ligges
On 12.10.2020 22:25, Ben Bolker wrote:
#
Sure.  I assume I should aim for <10 minutes since that's the 
threshold for a NOTE ...  (for what it's worth the tests take a bit less 
than 25% as long on my Linux laptop, since an individual test run is 
more than twice as fast and we only have to check one architecture ...)

   Do I interpret correctly that the advice is to address this problem, 
bump the version number, and re-submit?

   cheers
    Ben Bolker
On 10/12/20 5:18 PM, Uwe Ligges wrote:
#
On 12/10/2020 5:17 p.m., Ben Bolker wrote:
Yes, failing to match saved test output should be a fatal error, but 
isn't marked as one.
One of my favourite programs back in the days when I used Windows was 
Beyond Compare (https://www.scootersoftware.com/).  They've had a Mac 
version for a while now; it works well too (though I kind of prefer the 
old Windows UI a bit for some reason).  It made it really easy to find 
this difference, once I figured out which files to compare.  I didn't 
even recognize the line numbers in the CRAN report as line numbers at first.
I'd say a mismatch in saved output isn't a small problem, it's either a 
too-sensitive test or something serious.

Duncan Murdoch
#
That's fair enough, but it would be nice if (1) this were a NOTE and 
(2) it were made explicit in the CRAN policy that, *except by special 
exception*, an unresolved NOTE is grounds for rejection.  This is 
broadly understood by experienced package maintainers but must sometimes 
come as a shock to newbies ...

    cheers
     Ben
#
On 12/10/2020 6:14 p.m., Ben Bolker wrote:
I don't think so.  As I said, I think it should be marked as an ERROR.

Duncan Murdoch
#
On 10/12/20 6:36 PM, Duncan Murdoch wrote:
OK.  But it would probably be wise (if the CRAN maintainers actually 
wanted to do this) to crank it up from silent -> NOTE -> WARNING -> 
ERROR over the course of several releases so as not to have widespread 
test failures on CRAN right away ...
#
On 12/10/2020 6:51 p.m., Ben Bolker wrote:
Do you think so?  Why would you put saved results into the package 
unless you want to test if they match?

Honestly, I thought this had always been a fatal error.

Duncan Murdoch
#
On 10/12/20 7:37 PM, Duncan Murdoch wrote:
My point was just that it would be disruptive to switch the severity 
of such mismatches from 'message, no NOTE' to 'ERROR' in a single step - 
I'd imagine it could lead to a very large number of CRAN packages 
suddenly failing their tests.

   cheers
     Ben Bolker
#
On 12.10.2020 23:29, Ben Bolker wrote:
Yes, please.

Best,
Uwe
#
On Tue, 13 Oct 2020 at 01:47, Ben Bolker <bbolker at gmail.com> wrote:
Hold on, are we sure this is detected at all? The result of the tests
is reported as OK. The "singular fit" message goes to stderr, so my
guess is that it is not compared against the saved output at all.
#
On 13/10/2020 5:33 a.m., Iñaki Ucar wrote:
It is reported in the 00check.log file; I gave the link to the report.

I think it's a bug in the check code that the check log reports OK at 
the end, when (what should be) a fatal error has been displayed.

Duncan Murdoch