nproc parameter in efpFunctional

7 messages · bonda, Achim Zeileis

Hello all,
could anyone explain the exact meaning of the parameter nproc? Why do
different values of nproc give such different critical values? E.g.

R> meanL2BB$computeCritval(0.05, nproc = 3)
[1] 0.9984853
R> meanL2BB$computeCritval(0.05, nproc = 1)
[1] 0.4594827

The strucchange package documentation says "integer specifying for which
number of processes Brownian motions should be simulated" - so do I need an
nproc-dimensional Brownian bridge?

Thank you in advance!
Julia

--
View this message in context: http://r.789695.n4.nabble.com/nproc-parameter-in-efpFunctional-tp3972419p3972419.html
Sent from the R help mailing list archive at Nabble.com.
On Wed, 2 Nov 2011, bonda wrote:

Yes, see the 2006 CSDA paper, especially pages 2998/9.
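[For intuition, here is a small Monte Carlo sketch - not strucchange's own code, just an illustration of the limiting statistic. Judging from the critical values quoted above (0.4595 for nproc = 1 is essentially the Cramer-von Mises 5% critical value), the meanL2BB statistic is the time-average of the summed squared components of nproc independent Brownian bridges, so the critical value necessarily grows with nproc:]

```r
## Monte Carlo sketch (an illustration, not strucchange's implementation):
## approximate the limiting statistic assumed to underlie meanL2BB, i.e.
## the mean over time of the sum of squared components of nproc
## independent Brownian bridges.
set.seed(1)
n    <- 1000   # grid points on [0, 1]
nrep <- 2000   # Monte Carlo replications

sim_stat <- function(nproc) {
  replicate(nrep, {
    bb <- sapply(seq_len(nproc), function(i) {
      w <- cumsum(rnorm(n)) / sqrt(n)   # Brownian motion on the grid
      w - (seq_len(n) / n) * w[n]       # tie it down -> Brownian bridge
    })
    mean(rowSums(bb^2))                 # comp = sum of squares, time = mean
  })
}

quantile(sim_stat(1), 0.95)  # compare meanL2BB$computeCritval(0.05, nproc = 1)
quantile(sim_stat(3), 0.95)  # compare meanL2BB$computeCritval(0.05, nproc = 3)
```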
Thank you. I understand now that there should be k (number of parameters)
separate Brownian bridges.
Is it possible to get such separated/disaggregated processes in the
function efp() as well? (One can take gefp(..., family=gaussian), or
construct residuals(lm.model)*X oneself, but it is still interesting.) And
conversely, how can I get an aggregated Brownian bridge path for all
parameters together, similar to efp()$process? plot.gefp does this, but
only for the graphical visualization...
Thank you in advance!
Julia

On Thu, 3 Nov 2011, bonda wrote:

Well, if you use a process based on OLS residuals, you always have a 
one-dimensional process even though your model has k parameters. Hence 
the two parameters are really conceptually different.
Some processes that efp() computes are always 1-dimensional (namely those 
based on residuals), some are k-dimensional (namely the 
estimates-based processes), and some are (k+1)-dimensional (the score-based 
processes).
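[A quick sketch of the different dimensions; the data are simulated purely for illustration:]

```r
## Sketch: dimensionality of efp() fluctuation processes (toy data).
library(strucchange)

set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 1 + 2 * d$x + rnorm(100)

## residual-based process: always 1-dimensional, regardless of k
ocus <- efp(y ~ x, data = d, type = "OLS-CUSUM")
NCOL(ocus$process)   # 1

## estimates-based process: k-dimensional (here k = 2: intercept and slope)
re <- efp(y ~ x, data = d, type = "RE")
NCOL(re$process)     # 2
```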

gefp() generalizes this concept and lets you construct the fluctuation 
processes fairly flexibly.
For "gefp" objects all aggregation is done by the efpFunctional employed.

But this is really described in a fair amount of detail in the 
accompanying papers. Specifically, for gefp/efpFunctional in the 2006 CSDA 
paper.
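[To answer the question about an aggregated path: the component-wise process is available in the "gefp" object, and the aggregation that plot.gefp displays can be reproduced by hand. A sketch with a max-type aggregation (mirroring the maxBB functional); the model and data are illustrative:]

```r
## Sketch: aggregating a k-dimensional gefp process by hand.
library(strucchange)

set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 1 + 2 * d$x + rnorm(100)

scus <- gefp(y ~ x, fit = lm, data = d)

## the k-dimensional empirical fluctuation process, one column per parameter
proc <- as.matrix(scus$process)

## aggregate over components first (max of absolute values, as for maxBB) ...
agg <- apply(abs(proc), 1, max)

## ... then over time, and compare against the simulated critical value
max(agg)
maxBB$computeCritval(0.05, nproc = NCOL(proc))
```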
1 day later
The 2006 CSDA paper is really very informative; perhaps I'm trying to
understand the things lying beyond it. If we have e.g. k = 3, then taking
nproc = 3 for the functional maxBB we get the critical value (boundary)

maxBB$computeCritval(0.05, nproc = 3)
[1] 1.544421

and this for nproc = NULL (Bonferroni approximation) will be

maxBB$computeCritval(0.05)
[1] 1.358099

Aggregating the 3 Brownian bridges first over components, we obtain a
time-series process. Now we check whether the maximum of that process
(aggregation over time) exceeds the boundary. Which boundary - 1.544421 or
1.358099 - should one take? They look too different and, for instance, lead
to an "unfair" computation of the empirical size (the rejection rate under
the null hypothesis) or the empirical power (the rejection rate under the
alternative).




On Fri, 4 Nov 2011, bonda wrote:

No. In the latter case no Bonferroni approximation is applied. If you want 
to use it, you can do so via the rule of thumb

R> maxBB$computeCritval(0.05/3, nproc = 1)
[1] 1.547175

which essentially matches the critical value computed for nproc = 3. If 
you use the more precise value 1 - (1 - 0.05)^(1/3) instead of 0.05/3, you 
get a match (up to some small numerical differences).
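[The arithmetic behind the rule of thumb, as a sketch:]

```r
## Per-component significance levels for k independent components,
## overall level alpha.
alpha <- 0.05
k <- 3

## Bonferroni rule of thumb: split the level across the k components
alpha / k                 # 0.01666...

## more precise (Sidak-type) adjustment: exact under independence
1 - (1 - alpha)^(1/k)     # 0.01695...

## hence maxBB$computeCritval(1 - (1 - 0.05)^(1/3), nproc = 1)
## should match maxBB$computeCritval(0.05, nproc = 3)
## up to simulation error
```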

Setting nproc = NULL is only possible in efpFunctional():
efpFunctional() sets up the computeCritval() and computePval() functions 
via simulation (unless closed-form solutions are supplied). For 
the simulation, two strategies are available: simulate nproc = 1, 2, 3, ...
explicitly, or simulate only nproc = 1 and apply a Bonferroni correction.
The latter option is chosen if you set nproc = NULL; it only makes sense if
you aggregate via the maximum across the components.

The resulting computeCritval() and computePval() functions always need 
the correct nproc supplied (i.e., nproc = NULL makes no sense there).
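[A sketch of the two simulation strategies; the defaults of efpFunctional() give a max-type functional like maxBB, and nrep is reduced here only to keep the simulation quick:]

```r
## Sketch: the two simulation strategies in efpFunctional().
library(strucchange)

set.seed(1)

## strategy 1: simulate each nproc explicitly
myMax  <- efpFunctional(nproc = 1:3, nrep = 5000)

## strategy 2: simulate only nproc = 1 and Bonferroni-correct
## (sensible only for max-type aggregation across components)
myMax0 <- efpFunctional(nproc = NULL, nrep = 5000)

## the resulting functions still need a concrete nproc supplied:
myMax$computeCritval(0.05, nproc = 3)
myMax0$computeCritval(0.05, nproc = 3)  # Bonferroni-based, slightly conservative
```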
2 days later