-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On Behalf Of Seref Arikan
Sent: Monday, October 17, 2011 9:11 AM
To: r-help at r-project.org
Subject: [R] Best practices for handling very small numbers?
Greetings,

I have been experimenting with sampling from posterior distributions using R. Assume that I have the following observations from a normal distribution, with an unscaled joint likelihood function:
normsamples <- rnorm(1000, 8, 3)

joint_likelihood <- function(observations, mean, sigma) {
  # unnormalised joint likelihood of iid N(mean, sigma^2) observations
  sigma^(-length(observations)) *
    exp(-0.5 * sum((observations - mean)^2) / sigma^2)
}
The joint likelihood deliberately omits the normalising constant (2*pi)^(-n/2), which is what I want, because I've been experimenting with some crude sampling methods. The problem is that, with the samples above, the joint likelihood underflows to 0 very quickly.
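For illustration, evaluating it even at the true parameters already underflows, since the factor 3^(-1000) alone is far below the smallest representable double (about 5e-324):

joint_likelihood(normsamples, 8, 3)  # returns 0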
I wanted to experiment with tens of thousands of observations, but without a transformation that can handle such small values, my sampling algorithms will not work.
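The obvious candidate seems to be working on the log scale. A minimal sketch of what I have in mind (joint_log_likelihood is just my working name for it):

joint_log_likelihood <- function(observations, mean, sigma) {
  # log of the unnormalised likelihood above: a sum rather than a
  # product, so it stays finite even for tens of thousands of points
  -length(observations) * log(sigma) -
    0.5 * sum((observations - mean)^2) / sigma^2
}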
This is an attempt to use sampling algorithms for Bayesian parameter estimation. I do not want to resort to conjugacy, since I am developing this to handle non-conjugate scenarios; I only wanted to test it on a conjugate scenario first, but I quickly realized that I am in trouble because the likelihood reaches 0 so fast.
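For concreteness, this is the kind of crude sampler I have been experimenting with, rewritten so that it only ever compares log-likelihood differences (a random-walk Metropolis step for the mean, with sigma held fixed and a flat prior, so the posterior is proportional to the likelihood; metropolis_mean is just an illustrative name):

metropolis_mean <- function(observations, sigma, n_iter = 5000, step = 0.1) {
  mu <- numeric(n_iter)
  mu[1] <- mean(observations)  # start at the sample mean
  for (i in 2:n_iter) {
    proposal <- mu[i - 1] + rnorm(1, 0, step)
    log_ratio <- joint_log_likelihood(observations, proposal, sigma) -
      joint_log_likelihood(observations, mu[i - 1], sigma)
    # accept with probability min(1, exp(log_ratio)),
    # tested on the log scale so nothing ever underflows
    mu[i] <- if (log(runif(1)) < log_ratio) proposal else mu[i - 1]
  }
  mu
}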
Your feedback would be appreciated. It also makes me wonder how JAGS/BUGS handle this problem.
Best regards
Seref