
Causal version of HP filter and Kernel Smoothing in R?

11 messages · Brian G. Peterson, Michael, Paul Gilbert +3 more

#
On Fri, 2012-02-24 at 15:39 -0600, Michael wrote:
It would be easier for people to decide whether to help you if you
actually provided the reference to the paper you are looking to
replicate.  

There are many kernel smoothing methods in various R packages, which
your 'quite some search' I am sure uncovered, *and* kernel smoothing
mechanisms are typically rather trivial to code.  So without the
reference it is hard to even begin to evaluate which of them might do
what you are looking for.  Also, it would be polite for you to indicate
in what way the kernel smoothing mechanisms provided by specific
packages do not match the methodology you desire.
#
As usual, it helps to use the correct terminology.


The term usually employed is not 'causal' but 'one-sided' or 'two-sided'
filters.  In classic state-space models, the two-sided filter is often
called a 'smoother', and the one-sided version is called a 'filter'.
See any introduction to Kalman filters for examples, since the Kalman
filter may easily be one-sided or two-sided.
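To make the distinction concrete, here is a minimal sketch in base R (the 5-point equal-weight window is an arbitrary illustrative choice, not anything from the paper) contrasting a one-sided filter with a two-sided smoother via stats::filter:

```r
# One-sided vs. two-sided moving average using base R's stats::filter.
# The 5-point equal-weight window is an arbitrary illustrative choice.
set.seed(42)
x <- cumsum(rnorm(100))               # a random-walk series

w <- rep(1/5, 5)                      # equal weights over 5 observations

two_sided <- stats::filter(x, w, sides = 2)  # centered: uses past AND future values
one_sided <- stats::filter(x, w, sides = 1)  # one-sided: current and past values only

# one_sided[t] depends only on x[(t-4):t], so it could be computed as the
# data arrives; two_sided[t] also needs x[t+1] and x[t+2], so it cannot.
```

The one-sided output lags the series but never looks ahead; the two-sided output is NA at both ends because it needs observations on both sides of each point.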

High pass filters are also quite trivial, as equation 4 in your
reference demonstrates.
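As a generic two-line illustration (not the paper's equation 4; the 9-point window is an arbitrary choice), a crude high-pass filter is just the residual from a low-pass moving average:

```r
# Crude high-pass filter: series minus its low-pass (moving-average) component.
# The 9-point centered window is an arbitrary illustrative choice.
set.seed(7)
y <- cumsum(rnorm(200)) + sin((1:200) / 5)             # trend plus a slow cycle

low_pass  <- stats::filter(y, rep(1/9, 9), sides = 2)  # centered 9-point MA
high_pass <- y - low_pass                              # the high-frequency remainder
```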

I may be incorrect, having spent only a few moments on it, but I see
nothing in this paper to indicate that the kernel smoothing in equation
6 is not equally trivial.

Marc Wildi has written extensively on the topic of real time (one-sided)
filters, and his R code is public.
#
Some pedantic points regarding correct terminology:
On 12-02-24 06:00 PM, Brian G. Peterson wrote:
Economists usually employ the terms 'one-sided' and 'two-sided'. In 
engineering, physics, and mathematics, I think the terms 'filter' and 
'smoother' are still used. (But yes, 'causal' usually has to do with 
something else.)
Even in the classic case this is not specific to state-space models. The 
term 'filter' meant it could be used to filter incoming signals without 
knowledge of the future, while a 'smoother' needs future information. So 
a filter could be used to do realtime control, while a smoother could not.
Engineers use the term 'realtime data' to mean what I think most people 
would understand as 'look at the data as it is arriving', which implies 
using a (one-sided) filter.  Economists use the term 'realtime data' to 
mean 'look at the data as it arrived'. That is, the vintages of the data 
that were available at different points in time. Thus a realtime 
analysis for an economist is a consideration of the revisions in the 
data. I think Marc Wildi uses the term as an economist, not as an engineer.

(I warned you this is pedantic.)
Paul
#
I'm using the term as an `economist' and/or as an `engineer', hopefully without adding confusion to the topic.

Specifically, a real-time filter is one-sided (which might sound redundant to some): it is also called a concurrent filter in time series analysis (where target signals are considered to be outputs of bi-infinite symmetric filters: not smoothers...). Real-time data means: data as it arrives (possibly being revised in later vintages).

A paper constructing real-time filters in the case of real-time data (mixing both concepts) is proposed at http://blog.zhaw.ch/idp/sefblog/index.php?/archives/205-7th-Annual-CIRANO-Workshop-on-Data-Revision-in-Macroeconomic-Forecasting-and-Policy.html

Marc
#
Having been called upon, I'll provide some cursory feedback on the topic. R code illustrating the HP filter (symmetric or real-time) is given below.



Modern (traditional) filter design goes back to early work by Wiener and Kolmogorov (WK). State-space models (Kalman filtering/smoothing) are an alternative formulation - parameterization - of the problem, which proponents find more convenient or appealing: Harvey has highlighted some of its natural appeal in economic applications. In general, both approaches can replicate each other by suitable reformulation (parameterization) of the models, so ultimately it is a matter of taste which is preferred. I frequently use state-space methods for didactical purposes (not because the filter is `simpler', but because the `outputs' are appealing to non-experts).



The HP filter goes back to ideas proposed by Whittaker: it is ultimately a tradeoff between `fit' and `smoothness'. One can derive an ARIMA model which replicates HP perfectly (it is a model with a double unit root at frequency zero).
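The fit/smoothness tradeoff can be made explicit in a few lines: the HP trend tau minimizes sum((y - tau)^2) + lambda * sum(second differences of tau)^2, which has a closed-form solution. The sketch below implements it from scratch; in exact arithmetic it should coincide with what mFilter::hpfilter(type = "lambda") produces, but it is meant only as an illustration of the Whittaker-style penalized fit:

```r
# HP trend from first principles: minimize
#   sum((y - tau)^2) + lambda * sum(diff(tau, differences = 2)^2),
# whose closed-form solution is tau = (I + lambda * t(D) %*% D)^{-1} y,
# where D is the second-difference matrix.
hp_trend <- function(y, lambda = 1600) {
  n <- length(y)
  D <- diff(diag(n), differences = 2)        # (n-2) x n second-difference matrix
  as.numeric(solve(diag(n) + lambda * crossprod(D), y))
}

set.seed(1)
y <- cumsum(cumsum(rnorm(60)))               # an I(2) series, HP's implicit model
trend <- hp_trend(y)
```

As lambda shrinks toward 0 the trend reproduces the data exactly (pure fit); as lambda grows the trend's second differences are driven toward zero (pure smoothness, i.e. a nearly linear trend).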



All these approaches are pure mean-square incarnations: real-time filters minimize the mean-square error between the concurrent and the final (symmetric) filter, assuming that the (implicit or explicit) data model is `true'.



My filter design emphasizes a more general `customized' perspective: one can replicate ordinary mean-square designs (WK, Kalman, HP), but one can also emphasize alternative research priorities such as `timeliness', `noise suppression', or `accuracy'. Depending on your priorities, these aspects might be more relevant - to you. Customization means that the user can tweak the optimization criterion in order to match his individual priorities. My experience is that practitioners frequently assign priorities (in forecasting) differently than assumed by the `mean-square' paradigm.



The HP filter can be implemented in R as follows:



library(mFilter)

# artificial data: white noise (not very clever because HP assumes a double unit root)
set.seed(1)
len <- 201
eps <- rnorm(len)

# lambda = 1600 is a typical setting for working with quarterly GDP
lambda_hp <- 1600
eps.hp <- hpfilter(eps, type = "lambda", freq = lambda_hp)

# plot data (here noise) and HP trend
plot(eps.hp$x)
lines(eps.hp$trend, col = 2)

# Here is the coefficient matrix: it is a full revision sequence.
parm <- eps.hp$fmatrix - diag(rep(1, len))
parm <- -parm

# And a plot of the HP-filter coefficients: the symmetric (middle column)
# and real-time/concurrent (last column) trend filters
ts.plot(parm[, c((len + 1) / 2, len)], lty = 1:2)



Marc