People, the best thing I did all weekend was read JH’s brand new working paper, “Why you should never use the Hodrick-Prescott filter”.
What a bold title, and JH backs it up with both technical derivations and practical examples.
Cannot recommend this highly enough. Here’s the abstract:
Here’s why. (1) The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process. (2) A one-sided version of the filter reduces but does not eliminate spurious predictability and moreover produces series that do not have the properties sought by most potential users of the HP filter. (3) A statistical formalization of the problem typically produces values for the smoothing parameter vastly at odds with common practice, e.g., a value for λ far below 1600 for quarterly data. (4) There’s a better alternative. A regression of the variable at date t+h on the four most recent values as of date t offers a robust approach to detrending that achieves all the objectives sought by users of the HP filter with none of its drawbacks.
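The alternative in point (4) is simple enough to sketch. The function below is my own illustration, not Hamilton's code: it regresses the value at date t+h on a constant and the four most recent values as of date t, and takes the residuals as the cyclical component. The defaults h=8 and p=4 are the choices Hamilton suggests for quarterly data; the function name and implementation details are assumptions.

```python
import numpy as np

def hamilton_regression_detrend(y, h=8, p=4):
    """Sketch of Hamilton's regression-based detrending.

    Regress y[t+h] on a constant and y[t], y[t-1], ..., y[t-p+1].
    Fitted values are the trend component; residuals are the cycle.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Usable regression rows: t runs from p-1 to T-h-1, so that both
    # the p lags at t and the lead value at t+h exist in the sample.
    rows = range(p - 1, T - h)
    X = np.column_stack(
        [np.ones(T - h - p + 1)]
        + [np.array([y[t - j] for t in rows]) for j in range(p)]
    )
    target = y[p - 1 + h:]                      # y[t+h] for each row t
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    trend = X @ beta
    cycle = target - trend
    return trend, cycle
```

Applied to a random walk (a process for which the HP filter famously manufactures spurious cycles), the residuals are zero-mean by construction and the procedure requires no smoothing parameter at all.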
So the extracted “cycle” from using the filter may be telling us more about the properties of the filter than about the properties of the data, the rule-of-thumb setting for the smoothing parameter is off by a factor of more than 100, and there is a simple alternative that does the job much better?
Other than that, Mr. Hamilton, how did you enjoy the filter?