(One question from the thread Handling Not-Always-Needed Dependencies?)

I hope not to start another long tangled thread, but I have a basic confusion which I think has a yes/no answer, and I would like to know if there is agreement on this point (or is it only me that is confused, as usual).

If my package has a test that needs another package, but that package is not needed in the /R code of my package, then I indicate it as "Suggests", not as "Depends" nor as "Imports". If that package is not available when I run R CMD check, should the test pass? Yes or no: ?

(I realize my own answer might be different if the package was used in an example or demo in place of a test, but that is just the confusion caused by too many uses for Suggests. In the case of a test, my own thought is that the test must fail, so my own answer is no. If the test does not fail then there is no real testing being done, thus missing code coverage in the testing. If the answer is no, then the tests do not need to be run if the package is not available, because it is known that they must fail. I think that not bothering to run the tests because the result is known is even more efficient than other suggestions. I also think it is the status quo.)

Hoping my confusion is cleared up, and this does not become another long tangled thread,

Paul
[R-pkg-devel] Handling Not-Always-Needed Dependencies? - Part 2
9 messages · Paul Gilbert, Dirk Eddelbuettel, Uwe Ligges +3 more
On 4 August 2016 at 11:46, Paul Gilbert wrote:
| If my package has a test that needs another package, but that package is
| not needed in the /R code of my package, then I indicate it as
| "Suggests", not as "Depends" nor as "Imports". If that package is not
| available when I run R CMD check, should the test pass?

Wrong question. Better question: Should the test be running?

My preference is for only inside of a requireNamespace() (or equivalent) block as the package is not guaranteed to be present. In theory. In practice people seem to unconditionally install it anyway, and think that is a good idea. I disagree on both counts but remain in the vocal minority.

Dirk
http://dirk.eddelbuettel.com | @eddelbuettel | edd at debian.org
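[The guarded-test pattern Dirk recommends can be sketched as follows. This is a minimal illustration: the always-available stats package stands in for the suggested dependency, and the fitted-model check is a placeholder for whatever the real test would verify.]

```r
# Run the checks only when the suggested package can actually be loaded.
# "stats" stands in here for the package named in your Suggests field.
if (requireNamespace("stats", quietly = TRUE)) {
  # Exercise the code path that needs the suggested package.
  fit <- stats::lm(dist ~ speed, data = datasets::cars)
  stopifnot(length(stats::coef(fit)) == 2L)
} else {
  # Package absent: report and fall through rather than error,
  # so R CMD check can still pass.
  message("suggested package not available; skipping these tests")
}
```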
On 04.08.2016 17:46, Paul Gilbert wrote:
(One question from the thread Handling Not-Always-Needed Dependencies?) I hope not to start another long tangled thread, but I have a basic confusion which I think has a yes/no answer and I would like to know if there is agreement on this point (or is it only me that is confused as usual). If my package has a test that needs another package, but that package is not needed in the /R code of my package, then I indicate it as "Suggests", not as "Depends" nor as "Imports". If that package is not available when I run R CMD check, should the test pass? Yes or no: ?
Yes, as the package should pass the checks if suggested packages are unavailable. BUT if these are available and the code is wrong, then it should generate an error.

Best,
Uwe
(I realize my own answer might be different if the package was used in an example or demo in place of a test, but that is just the confusion caused by too many uses for Suggests. In the case of a test, my own thought is that the test must fail, so my own answer is no. If the test does not fail then there is no real testing being done, thus missing code coverage in the testing. If the answer is no, then the tests do not need to be run if the package is not available, because it is known that they must fail. I think that not bothering to run the tests because the result is known is even more efficient than other suggestions. I also think it is the status quo.) Hoping my confusion is cleared up, and this does not become another long tangled thread, Paul
______________________________________________ R-package-devel at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
On 04/08/2016 11:46 AM, Paul Gilbert wrote:
(One question from the thread Handling Not-Always-Needed Dependencies?) I hope not to start another long tangled thread, but I have a basic confusion which I think has a yes/no answer and I would like to know if there is agreement on this point (or is it only me that is confused as usual). If my package has a test that needs another package, but that package is not needed in the /R code of my package, then I indicate it as "Suggests", not as "Depends" nor as "Imports". If that package is not available when I run R CMD check, should the test pass? Yes or no: ? (I realize my own answer might be different if the package was used in an example or demo in place of a test, but that is just the confusion caused by too many uses for Suggests. In the case of a test, my own thought is that the test must fail, so my own answer is no. If the test does not fail then there is no real testing being done, thus missing code coverage in the testing. If the answer is no, then the tests do not need to be run if the package is not available, because it is known that they must fail. I think that not bothering to run the tests because the result is known is even more efficient than other suggestions. I also think it is the status quo.) Hoping my confusion is cleared up, and this does not become another long tangled thread,
I'd say it's up to you as the author of the test. Would skipping that test mean that your package was not adequately tested? If so, then you should get an error if it isn't available, because otherwise people will think they've done adequate testing when they haven't.

One way this could happen is if a major function of your package is being tested on a sample dataset from a Suggested package. Users of your package don't need the other one, but testers do.

Duncan Murdoch
On 8/4/2016 11:51 AM, Dirk Eddelbuettel wrote:
On 4 August 2016 at 11:46, Paul Gilbert wrote:
| If my package has a test that needs another package, but that package is
| not needed in the /R code of my package, then I indicate it as
| "Suggests", not as "Depends" nor as "Imports". If that package is not
| available when I run R CMD check, should the test pass?

Wrong question. Better question: Should the test be running?

My preference is for only inside of a requireNamespace() (or equivalent) block as the package is not guaranteed to be present. In theory. In practice people seem to unconditionally install it anyway, and think that is a good idea. I disagree on both counts but remain in the vocal minority.
As another package maintainer, I had almost the identical question reading the previous (long) thread, but the three answers here don't give the same answer.

I can make my question even more concrete: I use the testthat package for my testing. I never use it in the R code itself; it is explicitly only used for testing. Should that be included as "Depends", because every test requires it, or "Suggests", because no end user ever needs it? If "Depends", then it leads to over-installation of the package by end users who don't care about running tests locally. If "Suggests", then all of the tests would fail (assuming that Dirk's suggestion is implemented).

At a loss,

Bill
On Thu, Aug 4, 2016 at 5:48 PM, Duncan Murdoch <murdoch.duncan at gmail.com> wrote:
[...]
I'd say it's up to you as the author of the test. Would skipping that test mean that your package was not adequately tested? If so, then you should get an error if it isn't available, because otherwise people will think they've done adequate testing when they haven't. One way this could happen if a major function of your package is being tested on a sample dataset from a Suggested package. Users of your package don't need the other one, but testers do.
Indeed. IMO this was anything but a yes/no question. :)

Personally I would love to be able to have BuildImports and TestImports: the former for build-time dependencies, the latter for dependencies when running the tests. But I understand that the current system is complicated enough.

Gabor
On 04.08.2016 18:55, Bill Denney wrote:
As another package maintainer, I had almost the identical question reading the previous (long) thread, but the three answers here don't give the same answer. My question I can make even more concrete: I use the testthat package for my testing. I never use it in the R code itself, and it is explicitly only used for testing. Should that be included as "Depends" because every test requires it or "Suggests" because no end user ever needs it? If "Depends", then it leads to over-installation of the package by end users who don't care about running tests locally. If "Suggests", then all of the tests would fail (assuming that Dirk's suggestion is implemented).
Suggests. Best, Uwe Ligges
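[In the DESCRIPTION file, that answer looks like the following. This is a minimal, hypothetical fragment: the package name, version, and Imports line are placeholders; only the Suggests line is the point here.]

```
Package: mypackage
Version: 0.1.0
Imports: stats
Suggests: testthat
```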
On 04/08/2016 12:55 PM, Bill Denney wrote:
As another package maintainer, I had almost the identical question reading the previous (long) thread, but the three answers here don't give the same answer. My question I can make even more concrete: I use the testthat package for my testing. I never use it in the R code itself, and it is explicitly only used for testing. Should that be included as "Depends" because every test requires it or "Suggests" because no end user ever needs it? If "Depends", then it leads to over-installation of the package by end users who don't care about running tests locally. If "Suggests", then all of the tests would fail (assuming that Dirk's suggestion is implemented).
I'd say you should use Suggests, and test for its presence at the start of your test scripts, e.g.

    if (!require("testthat"))
      stop("These tests need testthat")

or

    if (!requireNamespace("testthat"))
      stop("These tests need testthat")

(The latter means you'd need to prefix all testthat functions with "testthat::", but it has the advantage that their names don't conflict with yours.)

Or perhaps you don't want to give an error; you just want to skip some of your tests. It's your decision.
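[If skipping is what you want, testthat itself offers a helper for exactly this, skip_if_not_installed(). A sketch, where "somePkg" is a placeholder for the real Suggests entry:]

```r
library(testthat)

# Skip, rather than fail, when a suggested package is missing.
test_that("feature that needs a suggested package", {
  # Marks the test as skipped if "somePkg" (a placeholder name)
  # cannot be found, instead of throwing an error.
  skip_if_not_installed("somePkg")
  # Real expectations using somePkg would follow here, e.g.:
  # expect_equal(somePkg::someFunction(x), expected)
})
```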
Duncan Murdoch
On 08/04/2016 11:51 AM, Dirk Eddelbuettel wrote:
On 4 August 2016 at 11:46, Paul Gilbert wrote:
| If my package has a test that needs another package, but that package is
| not needed in the /R code of my package, then I indicate it as
| "Suggests", not as "Depends" nor as "Imports". If that package is not
| available when I run R CMD check, should the test pass?

Wrong question. Better question: Should the test be running? My preference is for only inside of a requireNamespace() (or equivalent) block as the package is not guaranteed to be present. In theory.
At the level of R CMD check throwing an error or not, this is arguing that it should be possible to pass the tests (not throw an error) even though they are not run, isn't it? (So your answer to my question is yes, at least the way I was thinking of the question.) Or do you mean you would just like the tests to fail with a more appropriate error message? Or do you mean, as Duncan suggests, that the person writing the test should be allowed to code in something that decides whether the test is really important or not?
In practice people seem to unconditionally install it anyway, and think that is a good idea. I disagree on both counts but remain in the vocal minority.
Actually, I think you are in agreement with Uwe and Duncan on this point, Duncan having added the refinement that the test writer gets to decide. No one so far seems to be advocating for my position that the tests should necessarily fail if they cannot be run. So I guess I am the one in the minority. Paul
Dirk