
How to handle large numbers?

10 messages · Feng Li, Thomas Lumley, robin hankin +4 more

#
On Wed, 11 Feb 2009, Feng Li wrote:

No.  0*Inf is NaN according to the floating point arithmetic standards that R depends on.

  It's true that 0*x==0 for all finite x, but it's also true that x*Inf==Inf for all non-zero x, and you can't preserve both of these with 0*Inf.
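The rule is easy to see in any IEEE-754 environment; a minimal illustration (in Python here, since the floating-point semantics are the same as R's):

```python
import math

# IEEE 754 defines 0 * Inf as an invalid operation yielding NaN,
# precisely because neither identity (0*x == 0, x*Inf == Inf) can win.
result = 0.0 * math.inf
print(math.isnan(result))  # True -- NaN, not 0 and not Inf
```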
No. It should be close to 1. Try 1000/(1000+5) for a simpler example.
exp(1000)/(exp(1007)+5) is 1/(exp(7)+5*exp(-1000)), which is the same as 1/exp(7) to more than 400 digits of accuracy.

            -thomas

Thomas Lumley			Assoc. Professor, Biostatistics
tlumley at u.washington.edu	University of Washington, Seattle
#
Feng

check out the Brobdingnag package:


 > library(Brobdingnag)
 > exp(1000)/(exp(1007)+5)
[1] NaN

 > as.numeric(exp(as.brob(1000))/(exp(as.brob(1007))+5))
[1] 0.000911882
 >
Feng Li wrote:

#
Shouldn't it rather be 1 / exp(7), since exp(1007) = exp(1000) * exp(7)?

You can get the correct solution if you rewrite the fraction  
algebraically first, dividing all terms by exp(1000).

1 / (exp(7) + 5 * exp(-1000))

[1] 0.000911882

Of course, you have to know which of the numbers are large in order to  
do this right ...
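The rewritten form is also safe in ordinary double precision, because exp(-1000) merely underflows harmlessly to zero instead of producing Inf/Inf. A quick check (sketched in Python, where the double arithmetic is identical to R's):

```python
import math

# Dividing numerator and denominator by exp(1000) first:
# the huge terms are gone, and the tiny term underflows to 0.0
# instead of the original Inf/Inf = NaN.
value = 1.0 / (math.exp(7) + 5 * math.exp(-1000))
print(round(value, 9))  # 0.000911882, i.e. essentially exp(-7)
```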



Best regards,
Stefan Evert

[ stefan.evert at uos.de | http://purl.org/stefan.evert ]
#
Try Ryacas.  N(...) means numerically evaluate in yacas (as we don't want it evaluated on the R side, where we already know we get NaN).  It gives:

[1] 0.000911882

You can explore it somewhat in R via:

 curve(exp(x)/(exp(x+7)+5), 1, 500)
On Wed, Feb 11, 2009 at 5:40 AM, Feng Li <840116 at gmail.com> wrote:
#
On Wed, Feb 11, 2009 at 6:20 AM, Robin Hankin <rksh1 at cam.ac.uk> wrote:
Though brob is certainly useful in many cases, it can't substitute for
thinking about numeric issues (roundoff, cancellation, overflow,
underflow) in general.

For example, log(exp(40)+1) - 40 comes out as

[1] 0

in plain double precision, and the brob version prints

[1] -exp(-Inf)

The correct answer is about 4e-18.  Perhaps Ryacas or some other tool
gets this right, but in general I don't think it's wise to abdicate
responsibility to one's tools. Though numerical analysis is a field in
itself, I think it's worthwhile to learn the basics.
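For this particular expression no extra precision is needed at all: log(exp(40)+1) - 40 equals log(1 + exp(-40)), which the standard log1p function handles directly. A sketch (in Python; R's log1p behaves the same way):

```python
import math

# Naive form: exp(40) ~ 2.35e17, so adding 1 is lost in rounding
# and the subtraction cancels everything that remains.
naive = math.log(math.exp(40) + 1) - 40
print(naive)  # 0.0 -- all significance cancelled

# Rewritten form: log(exp(40)+1) - 40 == log(1 + exp(-40)),
# which log1p evaluates accurately for tiny arguments.
accurate = math.log1p(math.exp(-40))
print(accurate)  # about 4.25e-18
```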

           -s
#
Stavros Macrakis wrote:
bc gets it arbitrarily right:

bc -l <<END
scale=1000
l(e(40)+1)-40
END

as does, e.g., mathematica (which appears to be using the *foss* gmp
library):

N[Log[Exp[40]+1]-40, 1000]
Interestingly, perl's bignum gets it slowly and actually wrong (or do I miss something?):

perl -Mbignum=p,-50 -le 'print(log(exp(40)+1)-40)'

vQ
#
On Wed, Feb 11, 2009 at 4:38 PM, Wacek Kusnierczyk
<Waclaw.Marcin.Kusnierczyk at idi.ntnu.no> wrote:
Yes, any arbitrary-precision system should get it right if you
specify the right number of digits to calculate. But why use expensive
multiple-precision when you don't need it?

The 'right' way to calculate things like log(exp(x)+1)-x and
exp(x)/(exp(x+k)+a) for large values of x is to understand their
asymptotic behavior, easy to do by hand or in a CAS like Maxima (and
presumably yacas, though I don't have any experience with it) -- for
large x, the first looks like exp(-x)-exp(-2*x)/2+...; the second
looks like exp(-k)-a*exp(-(x+k)) + ...  No need for
arbitrary-precision arithmetic once you know this.
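As a sanity check on the second expansion, here is a sketch in Python with illustrative values x=100, k=7, a=5 (chosen small enough that doubles can still evaluate the expression directly):

```python
import math

x, k, a = 100.0, 7.0, 5.0

# Direct evaluation still fits in double range for x = 100 ...
direct = math.exp(x) / (math.exp(x + k) + a)

# ... and the two-term asymptotic series matches it closely,
series = math.exp(-k) - a * math.exp(-(x + k))
print(abs(direct - series) < 1e-15)  # True

# but only the series survives x = 1000, where exp() overflows:
print(math.exp(-k) - a * math.exp(-(1000.0 + k)))  # ~0.000911882
```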

         -s