
FAQ 7.31

4 messages · Robin Hankin, Peter Dalgaard, Brian Ripley

#
Jacobi's theta functions crop up here and there in various branches of
mathematics, such as the theory of modular forms and elliptic functions.

They have Taylor expansions, but the exponents grow as the square of n,
as in

1 + z + z^4 + z^9 + z^16 + z^25 + ...

so the series converges very quickly, provided |z| < 1.

The following toy function shows how I'm implementing these objects.  I 
just add terms until
they make no difference:


f <- function(z, maxiter = 10) {
  out.old <- 1
  for (n in 1:maxiter) {
    ## add the next term of the series; the exponent grows as n^2
    out.new <- out.old + z^(n * n)
    ## stop as soon as the new term makes no difference to the sum
    if (identical(out.new, out.old)) {
      return(out.new)
    }
    out.old <- out.new
  }
  stop("not converged")
}

[NB this is not a real theta function!  Real theta functions are more
complicated; f() just illustrates the issues.]
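
For example, with z well inside the unit disc the loop terminates long
before maxiter is reached (on an IEEE double machine):

f(0.5)
## the n = 8 term, 0.5^64 ~ 5.4e-20, is far below one ulp of the
## partial sum (~1.5645), so the loop stops at the eighth iteration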

I worry about the use of identical() here, because I am comparing two
floating-point numbers for equality, as discussed in FAQ 7.31.  Perhaps
all.equal() would be better:

  all.equal(1e99, 1e99 + 1e83, tol = .Machine$double.eps)
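
In the toy function that test would presumably read something like this
(note that all.equal() returns either TRUE or a character string
describing the difference, hence the isTRUE() wrapper):

if (isTRUE(all.equal(out.new, out.old, tolerance = .Machine$double.eps))) {
  return(out.new)
}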


Does the List have any comments to make?


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743
#
Robin Hankin <r.hankin at noc.soton.ac.uk> writes:
Shouldn't matter in this sort of case, since the added term is bound
to underflow relative to the partial sum. 

The things to watch out for are that mathematical equality doesn't
imply numerical equality (3 * 0.1 != 0.3), and that there can be
granularity effects in correction terms. We've been bitten by code of
the type

   x(n+1) = x(n) + f(x(n))

a couple of times in the numerical code, typically in Newton-type
iterations where the correction term alternates by +/- a few units in
the last place, and the program goes into an infinite loop. Optimizing
compilers can introduce such effects in otherwise functioning code by
applying mathematical transformations (see above...) to the code,
rearranging terms, moving terms outside of loops, folding constants,
etc.
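
To make this concrete (an illustrative sketch; newton_step() and the
stopping rule here are invented for the example, not taken from any
real code):

3 * 0.1 == 0.3                    # FALSE: 3 * 0.1 is 0.30000000000000004
isTRUE(all.equal(3 * 0.1, 0.3))   # TRUE: equal up to the default tolerance

## Newton's iteration for sqrt(2), stopping on a relative tolerance
## plus an iteration cap rather than on exact equality of successive
## iterates, so a +/- 1 ulp oscillation in the correction term cannot
## produce an infinite loop:
newton_step <- function(x) (2 - x^2) / (2 * x)  # correction term f(x(n))
x <- 1
for (i in 1:50) {
  step <- newton_step(x)
  if (abs(step) <= 2 * .Machine$double.eps * abs(x)) break
  x <- x + step
}
x   # 1.414214, i.e. sqrt(2) to double precision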
#
On Fri, 6 May 2005, Robin Hankin wrote:

Well, probably 2*.Machine$double.eps, since each of the two numbers
could be one bit from the truth.
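
For the toy function that would be, say,

  all.equal(out.new, out.old, tolerance = 2 * .Machine$double.eps)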

The registers on ix86 machines have more precision than the representation 
stored in RAM.  So it is possible that out.new has more precision because 
it has been kept in registers and out.old has been retrieved from RAM and 
so has less.  The result may well be continued looping.

I think the scenario in the previous para is quite unlikely (it would need 
a very clever C compiler), but it is theoretically possible and may become 
likely with semi-compiled versions of R.  The C-level equivalent is quite 
common.
#
On May 6, 2005, at 09:00 am, Prof Brian Ripley wrote:

[snip]
Yes, this makes a lot of sense, and will likely prevent me from 
flogging a dead horse by executing
dozens of pointless iterations in pursuit of the dregs of those last 
few binary digits at the tail end of
some float (not to mention infinite loops).

I would have thought that if x, y are scalars then identical(x, y)
would be the same as all.equal(x, y, tol = .Machine$double.eps).

And it is, most of the time.

But this caught me out just now:

R> x <- 5352970674736366
R> identical(x, x+1)
[1] FALSE

R> all.equal(x, x+1, tol = .Machine$double.eps)
[1] TRUE
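
Presumably this is because the relative difference is 1/x, which is
(just) smaller than .Machine$double.eps:

R> 1/x < .Machine$double.eps
[1] TRUE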


Thus x and x+1 are not identical but nevertheless agree to within my
machine precision, and evidently my understanding of "machine
precision" doesn't tally with reality.

Any comments?