qbeta hang (PR#2894)
3 messages · terra@diku.dk, Peter Dalgaard
terra@gnome.org writes:
Full_Name: Morten Welinder
Version: 1.6.1
OS: Solaris/sparc
Submission from: (NULL) (65.213.85.144)

qbeta(0.1, 1e-8, 0.5, TRUE, FALSE) seems to hang for me.
Confirmed on 1.7.0, Solaris 9, gcc 3.0.3 (standard build, so -O2, I assume).

Morten: the gcc version is often crucial in these cases. However, the exact same thing is happening on Linux.

The immediate cause is that

    n = fmax2(lneps/log(y), 4.0)

gets large when y is in the vicinity of 1 - 1e-8, so the loop in src/nmath/pbeta.c:101 gets a rather high count. The algorithm isn't really stuck; it just takes a very long time. On the fastest machine that I have available:
system.time(qbeta(0.1, 1e-8, 0.5, TRUE, FALSE))
[1] 75.58  0.00 75.58  0.00  0.00

It's not really that surprising:
pbeta(1e-5, 1e-8, 0.5,, TRUE, FALSE)
[1] 0.9999999
pbeta(1e-10, 1e-8, 0.5,, TRUE, FALSE)
[1] 0.9999998
pbeta(1e-200, 1e-8, 0.5,, TRUE, FALSE)
[1] 0.9999954
pbeta(1e-309, 1e-8, 0.5,, TRUE, FALSE)
[1] 0.999993
pbeta(1e-400, 1e-8, 0.5,, TRUE, FALSE)
[1] 0

and you're trying to solve pbeta(x, 1e-8, 0.5,, TRUE, FALSE) == 0.1
-- Peter Dalgaard, Dept. of Biostatistics, University of Copenhagen, Blegdamsvej 3, 2200 Cph. N, Denmark. Ph: (+45) 35327918, FAX: (+45) 35327907 (p.dalgaard@biostat.ku.dk)
Ok, I can confirm that it does not, in fact, loop forever. Just a close approximation.
Morten: the gcc version is often crucial in these cases.
Sorry. gcc-2.7.2 (both -O2 and -O0) on Solaris 2.9 / sparc.

The problems seem to be:

1. The initial guess underflows to zero.
2. That guess is replaced by the truly awful guess 0.5.
3. The root-finding algorithm ignores all the nice properties of pbeta -- such as it being monotonically increasing and bounded.
and you're trying to solve pbeta(x, 1e-8, 0.5,, TRUE, FALSE) == 0.1
Not quite. What I was trying to do was the see what would happen with an
insanely small alpha. Actually doing that would involve getting the
arguments right, of course, :-) The not-quite-hang was an unplesant side
effect of swapping the args.
I notice things like
r = sqrt(-log(a * a));
in the code. I fail to see why that isn't just
r = sqrt(-2 * log(a))
which ought to be faster and more accurate. Then later
...log((1. - a) * qq)...
which might be better as
...(log1p(-a) + log(qq))...
if a is close to zero.
There are lots of other places that worry me with respect to cancellation
errors, for example
r = 1 - pp;
t = 1 - qq;
Morten