Kay (1981b) advocates a procedure for simulating a portion X0, X1, . . . , XN−1 of a zero mean Gaussian AR(p) process. The procedure involves unraveling the prediction errors associated with best linear predictors, with the coefficients for the predictors being provided by the Levinson–Durbin recursions described in Section 9.4. No matter what the model order p is, each simulated series requires a random sample of length N from a standard Gaussian distribution. By contrast, the procedures described in Section 11.1 for simulating a portion of length N from a zero mean MA(q) process require a random sample of length N + q. Focusing for simplicity on the MA(1) case, explore an approach similar to Kay's for simulating a portion X0, X1, . . . , XN−1 of a zero mean Gaussian MA(1) process, with the goal being to use just N realizations of a standard Gaussian RV as opposed to N + 1 (the discussion at the beginning of Section 9.4 notes that the Levinson–Durbin recursions are not restricted to AR(p) processes, but rather can be used to form the coefficients for the best linear predictors of other stationary processes). With this same goal in mind, explore also an adaptation of the approach of McLeod and Hipel (1978) that is discussed in C&E [2] for Section 11.1. Comment briefly on how the two adapted approaches compare with the procedures described in Section 11.1 for simulating an MA(1) process.
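One way the Kay-style adaptation can be sketched: run the Levinson–Durbin recursions on the MA(1) autocovariance sequence (gamma(0) = sigma^2 (1 + theta^2), gamma(1) = sigma^2 theta, gamma(k) = 0 for k >= 2) to obtain the best-linear-predictor coefficients and prediction-error variances, then build the series by unraveling the prediction errors, consuming exactly one standard Gaussian deviate per simulated value. The sketch below is an illustration under these assumptions, not the text's own code; the function name `ma1_levinson_sim` and its arguments are hypothetical.

```python
import numpy as np

def ma1_levinson_sim(n, theta, sigma2=1.0, z=None, seed=None):
    """Simulate X_0, ..., X_{n-1} from a zero mean Gaussian MA(1) process
    X_t = e_t + theta * e_{t-1} using only n standard Gaussian deviates.

    The Levinson-Durbin recursions applied to the MA(1) autocovariance
    sequence yield predictor coefficients phi[t, j] and prediction-error
    variances v[t]; the series is then built by unraveling the
    prediction errors, mirroring Kay's AR(p) scheme.
    """
    gamma = np.zeros(n)                       # MA(1) autocovariances
    gamma[0] = sigma2 * (1.0 + theta**2)
    if n > 1:
        gamma[1] = sigma2 * theta             # gamma(k) = 0 for k >= 2

    phi = np.zeros((n, n))                    # phi[t, j] for j = 1, ..., t
    v = np.zeros(n)                           # prediction-error variances
    v[0] = gamma[0]
    for t in range(1, n):
        # reflection coefficient for order t
        k = (gamma[t] - phi[t-1, 1:t] @ gamma[1:t][::-1]) / v[t-1]
        phi[t, t] = k
        phi[t, 1:t] = phi[t-1, 1:t] - k * phi[t-1, 1:t][::-1]
        v[t] = v[t-1] * (1.0 - k**2)

    if z is None:                             # exactly n Gaussian deviates
        z = np.random.default_rng(seed).standard_normal(n)
    x = np.zeros(n)
    x[0] = np.sqrt(v[0]) * z[0]
    for t in range(1, n):
        # best linear predictor of X_t given X_{t-1}, ..., X_0,
        # plus an independent innovation with variance v[t]
        pred = phi[t, 1:t+1] @ x[t-1::-1]
        x[t] = pred + np.sqrt(v[t]) * z[t]
    return x, v
```

Since the map from the deviates z to the series x is the lower-triangular (Cholesky) factor of the MA(1) covariance matrix, the simulated vector has exactly the required Gaussian distribution; note also that for an invertible MA(1) the prediction-error variances v[t] decrease toward sigma^2 as t grows.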