
MLE where loglikelihood function is a function of numerical solutions

7 messages · Albyn Jones, Kristian Lind, Berend Hasselman

Hi Kristian

The obvious approach is to treat it like any other MLE problem: evaluation
of the log-likelihood is done as often as necessary by the optimizer
you are using, e.g. a call to optim(psi, LL, ...) where LL(psi) evaluates
the log-likelihood at psi. There may be computational shortcuts that
would work if you knew that LL(psi + eps) were well approximated by
LL(psi) for the values of eps used to evaluate numerical derivatives
of LL. Of course, you might then need to write your own custom
optimizer.
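In sketch form, the pattern Albyn describes looks like the following. The data and the log-likelihood here are invented stand-ins (a simple normal model), not Kristian's model; the point is only the optim(psi, LL, ...) calling convention:

```r
# Stand-in log-likelihood: normal model, psi = c(mean, log(sd));
# the sd is log-parameterized so it stays positive during optimization.
set.seed(1)
y <- rnorm(100, mean = 2, sd = 1.5)

LL <- function(psi) {
  sum(dnorm(y, mean = psi[1], sd = exp(psi[2]), log = TRUE))
}

# optim() minimizes by default; fnscale = -1 makes it maximize LL.
# The optimizer calls LL(psi) as often as it needs, including for
# numerical derivatives.
fit <- optim(c(0, 0), LL, control = list(fnscale = -1))
fit$par  # close to c(2, log(1.5))
```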

albyn

Quoting Kristian Lind <kristian.langgaard.lind at gmail.com>:
To clarify: by "if you knew that LL(psi + eps) were well approximated
by LL(psi), for the values of eps used to evaluate numerical
derivatives of LL",
I mean that the derivatives of LL(psi + eps) are close to the derivatives of LL(psi),
and perhaps you would want the Hessian to be close as well.

albyn

Quoting Albyn Jones <jones at reed.edu>:
2 days later
Questions:

1. Why are you defining Bo within a loop?
2. Why are you doing library(nleqslv) within the loop?

Executing both statements once, outside the loop, is more efficient.
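The hoisting Berend suggests looks like this in outline; Bo here is a hypothetical placeholder object, since the original loop body is not shown:

```r
# library(nleqslv)  # attach packages like this once, at the top,
#                   # never inside a loop
Bo <- diag(2)       # loop-invariant object: define it once, outside the loop

out <- numeric(100)
for (i in 1:100) {
  out[i] <- sum(Bo) * i   # only work that actually depends on i stays inside
}
out[100]  # 200
```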

In your transdens function you are not using the function argument
parameters. Why not?
Shouldn't there be a with(parameters, ...), since otherwise where is, for example,
K_vv supposed to come from?

I can't believe that the code "worked": in the call of nleqslv you have ...
control=list(matxit=100000) ...
It should be maxit, and nleqslv will issue an error message and stop (at
least in the latest versions).
And why 100000? If that many iterations are required, something is wrong
with the starting values and/or the functions.
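For reference, a correctly spelled nleqslv call looks like the following; the system of equations is a toy one invented for illustration, not Kristian's:

```r
library(nleqslv)

# Toy system with root (1, 1): x^2 + y^2 = 2 and x = y
fn <- function(x) c(x[1]^2 + x[2]^2 - 2, x[1] - x[2])

# The control option is maxit, not matxit; the default is usually plenty
sol <- nleqslv(c(0.5, 2), fn, control = list(maxit = 100))
sol$x  # close to c(1, 1)
```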

Finally, the likelihood function at the end of your code:

#Maximum likelihood estimation using mle package 
library(stats4) 
#defining loglikelihood function 
#T <- length(v) 
#minuslogLik <- function(x,x2) 
#{    f <- rep(NA, length(x)) 
#        for(i in 1:T) 
#        { 
#            f[1] <- -1/T*sum(log(transdens(parameters = parameters, x = c(v[i],v[i+1])))-log(Jac(outmat=outmat, x2=c(v[i],r[i]))) 
#        } 
#    f 
# } 

How do the arguments x and x2 of your function influence the calculations in
the likelihood function?
As written now, with the arguments x and x2 not being used in the body of the
function, there is nothing to optimize.
Shouldn't f[1] be f[i]? Otherwise, why are you looping
for(i in 1:T)?
But then returning f as a vector seems wrong here. Shouldn't a likelihood
function return a scalar?
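Putting Berend's three points together, a corrected shape for the function would index f[i] per observation, actually use its argument, and return a scalar. The transdens and Jac below are hypothetical stubs standing in for Kristian's functions, so the sketch can run:

```r
# Hypothetical stand-ins for Kristian's transdens() and Jac()
transdens <- function(parameters, x) dnorm(x[2] - parameters * x[1])
Jac       <- function(outmat, x2)   1   # placeholder Jacobian term

v  <- c(0.3, 0.5, 0.2, 0.4)   # invented example data
r  <- c(1.0, 1.1, 0.9, 1.0)
Tn <- length(v) - 1

minuslogLik <- function(psi) {
  f <- numeric(Tn)
  for (i in 1:Tn) {
    # f[i], not f[1]: one contribution per observation, and the
    # optimizer's argument psi is actually used in the body
    f[i] <- log(transdens(parameters = psi, x = c(v[i], v[i + 1]))) -
            log(Jac(outmat = NULL, x2 = c(v[i], r[i])))
  }
  -sum(f) / Tn   # return a single number, not the vector f
}

minuslogLik(0.5)  # a finite scalar
```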


Berend

--
View this message in context: http://r.789695.n4.nabble.com/MLE-where-loglikelihood-function-is-a-function-of-numerical-solutions-tp3439436p3447224.html
Sent from the R help mailing list archive at Nabble.com.
On 14-04-2011, at 09:00, Kristian Lind wrote:

Huh?
If you define zz <- function(x) {...}, then x is an argument which must be specified when using the function.
What kind of problem? Why don't you then do (parameters and outmat already being known globally):

          f[i] <- -1/T*sum(log(transdens(parameters = parameters, x = x)) - log(Jac(outmat = outmat, x2 = x2)))

You should pass the arguments used by the optimizer in calling your likelihood function on to the functions you defined. That way f[] will depend on x and x2, and so will the likelihood.
Is the mean a likelihood?

Berend
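The wiring Berend insists on — the optimizer's arguments must appear in the body of minuslogl — is exactly what stats4::mle (which the original code already loads) expects. A minimal sketch, with the data and normal model invented for illustration:

```r
library(stats4)

set.seed(42)
y <- rnorm(50, mean = 1, sd = 2)

# mle() optimizes over the formal arguments of minuslogl; they must be
# used in the body, otherwise there is nothing to optimize.
nll <- function(mu = 0, log_sd = 0) {
  -sum(dnorm(y, mean = mu, sd = exp(log_sd), log = TRUE))
}

fit <- mle(nll, start = list(mu = 0, log_sd = 0))
coef(fit)  # mu near 1, exp(log_sd) near 2
```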