Dear all,

I submitted a new version of my package "robust2sls" (released version on CRAN <https://cran.r-project.org/package=robust2sls>, new submitted version on GitHub <https://github.com/jkurle/robust2sls>). The automatic CRAN pre-tests fail, and it seems that this is due to a runtime timeout (in the tests). For Windows, I get the message "Check process probably crashed or hung up for 20 minutes ... killed", and for Debian, I get "checking tests ... [30m/30m] ERROR". See below for the links to the log files.

https://win-builder.r-project.org/incoming_pretest/robust2sls_0.2.1_20220722_180126/Windows/00check.log
https://win-builder.r-project.org/incoming_pretest/robust2sls_0.2.1_20220722_180126/Debian/00check.log

It seems that the development version of R might be causing this problem, and others have already discussed longer running times of their checks on this mailing list. If I do the checks on win-builder using the old version <https://win-builder.r-project.org/VCReLD0u8Svj/00check.log> or release version <https://win-builder.r-project.org/l0zOMEvibJ1b/00check.log> of R, it works fine. But with the development version <https://win-builder.r-project.org/5ZebbySXV99X/00check.log> it fails.

Surprisingly, the CRAN checks work well on all five settings that I have tried with GitHub Actions <https://github.com/jkurle/robust2sls/actions/runs/2719662067>, including the one with the development version of R on Ubuntu. The check took less than 10 minutes in this case.

Can anyone advise me what to do? I could skip most of my tests based on the "testthat" package to decrease the runtime, so that they can finish when re-submitting to CRAN even if the development version of R is slower. But of course, I would prefer to keep my tests and figure out the reason for the slowdown.

Thank you for any suggestions and help!

Best wishes,
Jonas
[R-pkg-devel] CRAN pre-test failure due to long runtime / timeout
4 messages · Jonas Kurle, Dirk Eddelbuettel, Ivan Krylov
On 22 July 2022 at 21:09, Jonas Kurle wrote:
| The automatic CRAN pre-tests fail and it seems that this is due to
| runtime timeout (in the tests). For Windows, I get the message "Check
| process probably crashed or hung up for 20 minutes ... killed" and for
| Debian, I get "checking tests ... [30m/30m] ERROR". See below for the
| links to the log files.
[...]
| Can anyone advise me what to do?

There is a "fix" addressing the _symptom_ but not the _cause_: simply do not run the tests. Your time is finite, and CRAN maintainer time is certainly finite. Even today we still do not have a real ability to cleanly reproduce CRAN checks (win-builder is pretty close, but e.g. the CRAN Debian setup is not Dockerized and hence not really reproducible; I once offered help, but this petered out without generating anything tangible).

This topic has been discussed a few times before. Some people advocate using skip_on_cran() (if you use testthat). I don't use testthat, but in some packages I only run tests if an environment variable is set, which I set e.g. for GitHub Actions.

Hth, Dirk
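Dirk's environment-variable approach can be sketched as follows. This is a minimal illustration only: "RUN_EXTENDED_TESTS" is a hypothetical variable name (the actual name is the maintainer's choice), and the wrapper mirrors the usual tests/testthat.R layout:

```r
# Sketch of a tests/ wrapper that runs the full suite only when an
# opt-in environment variable is set, e.g. in a GitHub Actions workflow.
# "RUN_EXTENDED_TESTS" is a hypothetical name; pick your own.
run_extended <- identical(Sys.getenv("RUN_EXTENDED_TESTS"), "true")

if (run_extended) {
  library(testthat)
  library(robust2sls)
  test_check("robust2sls")
} else {
  message("Extended tests skipped; set RUN_EXTENDED_TESTS=true to run them.")
}
```

In a GitHub Actions workflow, the variable would be set in the job's `env:` block, so the suite runs on CI but is silently skipped on CRAN, where the variable is unset.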
dirk.eddelbuettel.com | @eddelbuettel | edd at debian.org
On Fri, 22 Jul 2022 21:09:15 +0100
Jonas Kurle <mail at jonaskurle.com> wrote:
Can anyone advise me what to do?
I see you're using testthat. Is there a way to make it produce verbose output for every test during their runtime? As far as I know, testthat tests look like a single test to R CMD check, which only outputs everything after it's done (or an error occurs). Since the R process is terminated on timeout, it doesn't get a chance to handle an error and print anything.

One way I see is relatively time-consuming, but so is blindly playing skip_on_cran with individual tests and resubmitting jobs to win-builder. If you move your tests from tests/testthat/*.R into tests/*.R (and adjust them to work outside the testthat harness if necessary), R should be able to tell you which *.R file times out and show you the approximate location where it was terminated.

Alternatively, you could try instrumenting your tests with debugging print commands and see if any of them end up in 00check.log, but I think that testthat uses capture.output to declutter the console when running tests, which, sadly, ends up interfering with your ability to debug the problem in this case.
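Ivan's suggestion of moving a test out of the testthat harness can look like the sketch below. The file name and the checks are illustrative, not from the actual package:

```r
# Sketch of a standalone test file, e.g. tests/test-convergence.R
# (hypothetical name). R CMD check runs each tests/*.R file on its own,
# so a timeout in the log points at the offending file, and cat()
# progress markers narrow down the location further.
cat("test-convergence: starting\n")

x <- rnorm(100)                    # illustrative computation
cat("test-convergence: data simulated\n")

stopifnot(is.numeric(mean(x)))     # plain base-R assertion, no harness needed
cat("test-convergence: done\n")
```

Because the output is flushed as the script runs, the last `cat()` marker visible in 00check.log approximates where a hung file was killed.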
Best regards, Ivan
Dear Ivan and Dirk,

Thank you both for your suggestions and your help. For now, I will simply skip the automatic unit testing on CRAN. My submission to win-builder is now successful for all versions of R. In the long run, I might move away from testthat to have more flexibility with respect to debugging and to be able to see in more detail what is going on.

I also received a hint from another user (thanks, Johannes!) who ran my tests locally on their Debian system, where the tests did not time out but eventually produced an error. It turns out that some of my tests / functions work slightly differently on different platforms, so I can now adjust those tests to be platform-specific.

Thanks again and best wishes,
Jonas
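The two fixes Jonas settled on, skipping on CRAN and making tests platform-specific, correspond in testthat to skip_on_cran() and skip_on_os(). Underneath, skip_on_cran() checks the NOT_CRAN environment variable; that convention is sketched here in base R so the example runs without testthat installed (the specific checks are placeholders):

```r
# Base-R sketch of the NOT_CRAN convention behind testthat::skip_on_cran():
# tools such as devtools and typical CI workflows set NOT_CRAN=true,
# while the variable is unset during CRAN's own checks.
on_cran <- !identical(Sys.getenv("NOT_CRAN"), "true")

if (!on_cran) {
  # slow checks that should not run on CRAN go here
  stopifnot(isTRUE(all.equal(2 + 2, 4)))   # placeholder expectation
}

# Platform-specific branch: run an expectation only where it is known
# to hold, e.g. only on Unix-alikes.
if (.Platform$OS.type == "unix") {
  stopifnot(nchar(R.version.string) > 0)   # placeholder check
}
```

Inside a testthat test, the equivalent would be a skip_on_cran() or skip_on_os("windows") call as the first line of the test_that() block.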