Duncan> On 9/9/2005 7:41 PM, Paul MacManus wrote:
>> I need to run qbeta on a set of 500K different parameter
>> pairs (with a fixed quantile). For most pairs qbeta finds
>> the solution very quickly but for a substantial minority
>> of the cases qbeta is very slow. This occurs when the
>> solution is very close to zero. qbeta is getting answers
>> to a precision of about 16 decimal places. I don't need
>> that accuracy. Is there any way to set the precision of
>> R's calculations to, say, 9 decimal places and so speed
>> up the whole process?
>>
>> I could, of course, avoid this problem by not running
>> qbeta when I know the solution is going to be
>> sufficiently small but I'm more interested in ways to
>> adjust the precision of calculations in R.
Duncan> There's no general way to do this. The function
Duncan> that implements qbeta may have some tuning
Duncan> parameters (I haven't looked), but they aren't
Duncan> usually needed, and aren't exposed in R.
Yes.
However, I have in the past considered providing such an option,
at both the R and the C level. One problem is that ``for symmetry
reasons'' you would want to have it ``for all such functions'',
which would need a lot of work for something that is really not
needed very often.
I agree that qbeta() can be particularly "nasty". I'm open to a
more in-depth discussion of this -- after R 2.2.0 is out.
Duncan> If you want a quick approximation, I'd suggest doing
Duncan> your calculation on a grid of values and using
Duncan> approx() to interpolate.
Yes, or approxfun() {which I prefer for its UI}, or, even more
smoothly, spline() or splinefun() {again preferably the latter}.
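[Duncan's grid-plus-interpolation idea looks like this in outline. The sketch below is in Python with scipy as a stand-in: interp1d plays the role of approxfun() and CubicSpline the role of splinefun(); the shape parameters, grid size, and evaluation point are all illustrative.]

```python
import numpy as np
from scipy.stats import beta
from scipy.interpolate import interp1d, CubicSpline

a, b = 2.0, 5.0                        # fixed shape parameters (illustrative)
p_grid = np.linspace(0.001, 0.999, 200)
q_grid = beta.ppf(p_grid, a, b)        # exact quantiles, computed once on a grid

lin = interp1d(p_grid, q_grid)         # ~ approxfun(): piecewise-linear
spl = CubicSpline(p_grid, q_grid)      # ~ splinefun(): smooth cubic spline

p = 0.123                              # a query point off the grid
approx_lin = float(lin(p))
approx_spl = float(spl(p))
exact = beta.ppf(p, a, b)
```

The grid is evaluated once at full precision; each subsequent query is then a cheap interpolation rather than a fresh root-finding pass, with the spline variant giving noticeably smaller error for the same grid.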
One problem is that these handle only 1-D interpolation, whereas
qbeta() depends on three principal arguments.
Package 'akima' provides somewhat smooth 2-D interpolation.
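[Since the original question fixes the quantile probability, the remaining dependence is on the two shape parameters, so a 2-D interpolant over an (a, b) grid can suffice. A sketch, using scipy's RectBivariateSpline in place of akima's 2-D interpolation; the fixed p, the grid ranges, and the query point are illustrative.]

```python
import numpy as np
from scipy.stats import beta
from scipy.interpolate import RectBivariateSpline

p = 0.01                                  # the fixed quantile probability
a_grid = np.linspace(1.0, 5.0, 40)        # illustrative shape-parameter grids
b_grid = np.linspace(1.0, 5.0, 40)

# Exact quantiles on the full rectangular (a, b) grid, computed once.
Q = beta.ppf(p, a_grid[:, None], b_grid[None, :])

surf = RectBivariateSpline(a_grid, b_grid, Q)   # smooth 2-D interpolant

a0, b0 = 1.7, 3.3                         # a query point off the grid
approx_q = surf(a0, b0)[0, 0]
exact_q = beta.ppf(p, a0, b0)
```

As in the 1-D case, the 500K parameter pairs would then each cost one spline evaluation instead of one quantile inversion; accuracy degrades where the surface is steep (e.g. very small shape parameters), so the grid would need refining there.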