About

The Cochrane-Orcutt procedure is used in economics to adjust a linear model for serial correlation in the error term.

The corresponding method in R is cochrane.orcutt; however, the implementation differs slightly.

R Prototype:

library(orcutt)

df = data.frame(t(data.frame(
    c(20.96,  127.3),
    c(21.40,  130.0),
    c(21.96,  132.7),
    c(21.52,  129.4),
    c(22.39,  135.0),
    c(22.76,  137.1),
    c(23.48,  141.2),
    c(23.66,  142.8),
    c(24.10,  145.5),
    c(24.01,  145.3),
    c(24.54,  148.3),
    c(24.30,  146.4),
    c(25.00,  150.2),
    c(25.64,  153.1),
    c(26.36,  157.3),
    c(26.98,  160.7),
    c(27.52,  164.2),
    c(27.78,  165.6),
    c(28.24,  168.7),
    c(28.78,  171.7))))

rownames(df) <- NULL
colnames(df) <- c("y", "x")
my_lm = lm(y ~ x, data=df)
coch = cochrane.orcutt(my_lm)

The R implementation is kind of…silly.

The above works: it converges after 318 iterations, and the transformed DW is 1.72, yet the rho is 0.95882. After 318 iterations it will still report a rho of 0.95882 (which suggests SEVERE autocorrelation, nothing close to what a DW of 1.72 implies).

At any rate, the real prototype for this is the example from Applied Linear Statistical Models, 5th Edition, by Kutner, Nachtsheim, Neter, and Li.

Steps:

  1. Fit the normal (OLS) regression
  2. Estimate \(\rho\) from the residuals
  3. Get estimates of the transformed equation
  4. Use the betas from step 3 to recalculate the model from step 1
  5. Repeat steps 2 through 4 until a stopping criterion is met. Some implementations call for convergence; Kutner et al. recommend 3 iterations, and if you don't achieve the desired results, use an alternative method. (The standard formulas behind steps 2 and 3 are sketched below.)
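
For reference, the standard estimator behind step 2 and the transformation fitted in step 3 (a sketch following the textbook's (12.22); the Mahout and R implementations may differ in minor details) are:

\[ r = \frac{\sum_{t=2}^{n} e_{t-1}\,e_t}{\sum_{t=2}^{n} e_{t-1}^{2}}, \qquad y_t' = y_t - r\,y_{t-1}, \qquad x_t' = x_t - r\,x_{t-1} \]

where the \(e_t\) are the residuals from the current fit. Regressing \(y'\) on \(x'\) gives \(b_0'\) and \(b_1'\), and the coefficients on the original scale are recovered as \(b_0 = b_0'/(1 - r)\) and \(b_1 = b_1'\).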

Some additional notes from Applied Linear Statistical Models:

They also provide some interesting notes on p. 494:

  1. “Cochrane-Orcutt does not always work properly. A major reason is that when the error terms are positively autocorrelated, the estimate \(r\) in (12.22) tends to underestimate the autocorrelation parameter \(\rho\). When this bias is serious, it can significantly reduce the effectiveness of the Cochrane-Orcutt approach.”
  2. “There exists an approximate relation between the Durbin-Watson test statistic \(\mathbf{D}\) in (12.14) and the estimated autocorrelation parameter \(r\) in (12.22):
\(D \approx 2(1 - r)\)”
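
As a quick sanity check on the R output discussed above: under this approximation, a transformed DW of 1.72 would correspond to \(r \approx 1 - 1.72/2 = 0.14\), nowhere near the reported rho of 0.95882.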

They also note on p. 492: “… If the process does not terminate after one or two iterations, a different procedure should be employed.” This differs from the logic found elsewhere, and from the method presented in R where, in the simple example in the prototype, the procedure runs for 318 iterations. This is why the default maximum number of iterations is 3, and it should be left as such.

Also, the prototype and ‘correct answers’ are based on the example presented in Kutner et al. on pp. 492-494 (including the dataset).

Parameters

Parameter | Description | Default Value
--- | --- | ---
'regressor | Any subclass of org.apache.mahout.math.algorithms.regression.LinearRegressorFitter | OrdinaryLeastSquares()
'iterations | The maximum number of iterations to run; unlike our friends in R, we stick to the 3-iteration guidance. | 3
'cacheHint | The DRM cache hint to use when holding the data in memory between iterations | CacheHint.MEMORY_ONLY
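
For illustration, the hyperparameters above could all be passed explicitly when fitting. This is only a sketch: it assumes the drmX and drmY matrices built in the Example below, and that OrdinaryLeastSquares and CacheHint are available from the usual Mahout regression and DRM packages in your Samsara session.

val coModelSketch = new CochraneOrcutt[Int]().fit(drmX, drmY,
      'regressor  -> new OrdinaryLeastSquares[Int](),   // base fitter used at each refit (the default)
      'iterations -> 3,                                  // follow the 3-iteration guidance from Kutner et al.
      'cacheHint  -> CacheHint.MEMORY_ONLY)              // keep intermediate data cached between iterations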

Example

val alsmBlaisdellCo = drmParallelize( dense(
      (20.96,  127.3),
      (21.40,  130.0),
      (21.96,  132.7),
      (21.52,  129.4),
      (22.39,  135.0),
      (22.76,  137.1),
      (23.48,  141.2),
      (23.66,  142.8),
      (24.10,  145.5),
      (24.01,  145.3),
      (24.54,  148.3),
      (24.30,  146.4),
      (25.00,  150.2),
      (25.64,  153.1),
      (26.36,  157.3),
      (26.98,  160.7),
      (27.52,  164.2),
      (27.78,  165.6),
      (28.24,  168.7),
      (28.78,  171.7) ))

val drmY = alsmBlaisdellCo(::, 0 until 1)  // column 0: company sales (y)
val drmX = alsmBlaisdellCo(::, 1 until 2)  // column 1: industry sales (x)

var coModel = new CochraneOrcutt[Int]().fit(drmX, drmY, ('iterations -> 2))

println(coModel.rhos)
println(coModel.summary)