
how R implements qnorm()

7 messages · Brian Ripley, Thomas Lumley, Peter Dalgaard +1 more

#
On 18/10/2012 00:16, Sheng Liu wrote:
It's on the help page!

'For qnorm, the code is a C translation of

Wichura, M. J. (1988) Algorithm AS 241: The Percentage Points of the 
Normal Distribution. Applied Statistics, 37, 477-484.'

and the exact code is in the R sources.  E.g. online at 
https://svn.r-project.org/R/trunk/src/nmath/qnorm.c
I think you are wrong about 'most people': this is the notation used by 
a small group of non-statisticians (mainly physicists, I think).
No, FPUs do not commonly have inverse erf functions.

#
On Oct 18, 2012, at 09:55 , Prof Brian Ripley wrote:

Well, he's right in the sense that it is erf/erfc that are commonly documented in collections of special functions (like Abramowitz & Stegun), and those are also what appear in common C math libraries. In both cases, of course, that is because physicists have dominated their development.

Of course "most people" use Excel these days (which has the inverse normal distribution but AFAICS not the inverse error function).
#
On Fri, Oct 19, 2012 at 12:21 PM, Sheng Liu <sheng.liu at live.ca> wrote:
It's not an approximation table, it's a rational Chebyshev
approximation, a ratio of polynomials.  qnorm also uses a rational
polynomial approximation.

At some level this *has to be* what is going on: computers don't
implement pnorm/qnorm or erfc in hardware, and there is no closed-form
expression for them in terms of quantities that are implemented in
hardware, so the functions must be some sort of approximate expression
using arithmetic and other hardware operations. The requirements are
accuracy, speed, and portability, and these approximations are portable,
accurate, and fast.

    -thomas

#
On Oct 19, 2012, at 06:05 , Thomas Lumley wrote:

I inferred that Sheng probably knew that and meant that there's a table of coefficients of a rational polynomial approximation.
Up to a few years ago, or maybe a decade, that was also the way special functions and even square roots and division were implemented in hardware using lookup tables and add/multiply circuits. All hell broke loose when one of the coefficients got entered incorrectly -- some may remember the FDIV Pentium bug of 1994. I used to have a really nice paper from Byte magazine c. 1990 which detailed the process, but I think it got lost in space. (L. Brett Glass: "Math Coprocessors" could be the one. http://dl.acm.org/citation.cfm?id=86680). 

Nowadays, we have single CPU cycle transcendentals, so I suppose that has all been reduced to hard-core optimized electronic circuitry. Fringe-market customers like statisticians still need to implement their functions in software, of course.