gmm {gmm}    R Documentation

Generalized method of moments estimation

Description

Function to estimate a vector of parameters based on moment conditions using the generalized method of moments (GMM) of Hansen (1982).

Usage

gmm(g, x, t0 = NULL, gradv = NULL, type = c("twoStep", "cue", "iterative"),
    wmatrix = c("optimal", "ident"), vcov = c("HAC", "iid"),
    kernel = c("Quadratic Spectral", "Truncated", "Bartlett",
               "Parzen", "Tukey-Hanning"),
    crit = 10e-7, bw = bwAndrews2, prewhite = FALSE, ar.method = "ols",
    approx = "AR(1)", tol = 1e-7, itermax = 100, intercept = TRUE,
    optfct = c("optim", "optimize"), ...)

Arguments

g A function of the form g(theta,x) which returns an n times q matrix with typical element g_i(theta,x_t) for i=1,...,q and t=1,...,n. This matrix is then used to build the q sample moment conditions. It can also be a formula if the model is linear (see details below). A minimal sketch of such a function is given after this list.
x The matrix or vector of data from which the function g(theta,x) is computed. If "g" is a formula, it is an n times Nh matrix of instruments (see details below).
t0 A k times 1 vector of starting values. It is required only when "g" is a function, because a numerical algorithm is then used to minimize the objective function. If the dimension of theta is one, see the argument "optfct".
gradv A function of the form G(theta,x) which returns a q times k matrix of derivatives of bar{g}(theta) with respect to theta. By default, the numerical algorithm numericDeriv is used. It is of course strongly suggested to provide this function when possible. This gradient is used to compute the asymptotic covariance matrix of hat{theta}. If "g" is a formula, the gradient is not required (see the details below).
type The GMM method: "twoStep" is the two-step GMM proposed by Hansen (1982), while "cue" and "iterative" are respectively the continuously updated and the iterative GMM proposed by Hansen, Heaton and Yaron (1996).
wmatrix Which weighting matrix should be used in the objective function. By default, it is the inverse of the covariance matrix of g(theta,x). The other choice is the identity matrix, which is usually used to obtain a first-step estimate of theta.
vcov Assumption on the properties of the random vector x. By default, x is a weakly dependent process. The "iid" option avoids computing the HAC matrix, which accelerates the estimation if one is ready to make that assumption.
kernel The type of kernel used to compute the covariance matrix of the vector of sample moment conditions (see HAC for more details).
crit The stopping rule for the iterative GMM. It can be reduced to increase the precision.
bw The function used to compute the bandwidth parameter. By default it is bwAndrews2, which follows Andrews (1991). The alternative is bwNeweyWest2 of Newey and West (1994).
prewhite logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0 a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE.
ar.method character. The method argument passed to ar for prewhitening.
approx A character specifying the approximation method if the bandwidth has to be chosen by bwAndrews2.
tol Weights that exceed tol are used for computing the covariance matrix; all other weights are treated as 0.
itermax The maximum number of iterations for the iterative GMM. It is unlikely that the algorithm fails to converge, but the limit is kept as a safeguard.
intercept If "g" is a formula, should the model include a constant? It should always be the case but the choice is yours.
optfct Only when the dimension of theta is 1 can you choose between the algorithms optim and optimize; the former is unreliable in that case. If optimize is chosen, "t0" must be a 1 times 2 vector giving the interval in which the algorithm seeks the solution.
... More options to give to optim.
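
A minimal sketch of a user-supplied "g", returning the n times q matrix of moment conditions (here two conditions identifying a mean and a standard deviation; the name g_example is only illustrative and not part of the package):

g_example <- function(theta, x)
        {
        m1 <- theta[1] - x                    # zero in expectation when theta[1] = E[x]
        m2 <- theta[2]^2 - (x - theta[1])^2   # zero in expectation when theta[2]^2 = Var(x)
        cbind(m1, m2)                         # n x 2 matrix of moment conditions
        }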

Details

weightsAndrews2 and bwAndrews2 are simply modified versions of weightsAndrews and bwAndrews from the package sandwich. The modifications were made so that the argument x can be a matrix instead of an object of class lm or glm. Details on how they work can be found in the sandwich manual.
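
For instance, the bandwidth selection of Newey and West (1994) can be requested through the bw argument. The following is only a sketch: g, x and t0 stand for your own moment function, data and starting values.

res <- gmm(g, x, t0, bw = bwNeweyWest2, kernel = "Bartlett", prewhite = 1)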

If we want to estimate a model like Y_t = theta_1 + X_{2t}theta_2 + ... + X_{kt}theta_k + ε_t using the moment conditions Cov(ε_t, H_t) = 0, where H_t is a vector of Nh instruments, then we can define "g" as we do for lm. We would have g = y~x2+x3+...+xk and the argument "x" above would become the matrix H of instruments. As for lm, Y_t can be a Ny times 1 vector, which would imply that k = Nh times Ny. The intercept is included by default, so you do not have to add a column of ones to the matrix H. You do not need to provide the gradient in that case, since it is embedded in gmm.
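
For instance, a call using the formula interface could look like the following sketch, where y, x2, x3 and H are placeholder names for the dependent variable, the regressors and the n times Nh matrix of instruments:

res <- gmm(y ~ x2 + x3, H)
summary(res)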

The following explains the last example below. Thanks to Dieter Rozenich, a student from the Vienna University of Economics and Business Administration, who suggested that this would help to understand the implementation of the Jacobian.

For the two parameters of a normal distribution (μ,σ) we have the following three moment conditions:

m_{1} = μ - x_{i}

m_{2} = σ^2 - (x_{i}-μ)^2

m_{3} = x_{i}^{3} - μ (μ^2+3σ^{2})

m_1 and m_2 follow directly from the definition of (μ,σ). The third moment condition comes from the third derivative of the moment generating function (MGF)

M_X(t) = exp( μ t + σ^2 t^2 / 2 )

evaluated at t = 0.
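
Indeed, differentiating M_X(t) three times and setting t = 0 gives the third raw moment

E[ x_i^3 ] = μ^3 + 3 μ σ^2 = μ ( μ^2 + 3 σ^2 ),

which is the population counterpart of m_3.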

Note that we have more equations (3) than unknown parameters (2).

The Jacobian of these three moment conditions with respect to (μ, σ) is:

        [  1                0    ]
        [ -2μ + 2x_i        2σ   ]
        [ -3μ^2 - 3σ^2     -6μσ  ]

Value

'gmm' returns an object of 'class' '"gmm"'.
The function 'summary' is used to obtain and print a summary of the results. It also computes the J-test of overidentifying restrictions.
The object of class "gmm" is a list containing:
par: k times 1 vector of estimated parameters
vcov: the covariance matrix of the estimated parameters
objective: the value of the objective function || var(bar{g})^{-1/2} bar{g} ||^2
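
For instance, if res is an object returned by gmm, these components can be accessed directly (a sketch using only the elements listed above):

res$par        # estimated parameters
res$vcov       # covariance matrix of the estimates
res$objective  # value of the GMM objective function
summary(res)   # coefficients, standard errors and the J-test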

References

Zeileis A (2006), Object-oriented Computation of Sandwich Estimators. Journal of Statistical Software, 16(9), 1–16. URL http://www.jstatsoft.org/v16/i09/.

Andrews DWK (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817–858.

Newey WK & West KD (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703–708.

Newey WK & West KD (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631-653.

Hansen, L.P. (1982), Large Sample Properties of Generalized Method of Moments Estimators. Econometrica, 50, 1029-1054.

Hansen, L.P., Heaton, J. and Yaron, A. (1996), Finite-Sample Properties of Some Alternative GMM Estimators. Journal of Business and Economic Statistics, 14, 262-280.

Examples

# First, an example with the function g()

g <- function(tet,x)
        {
        n <- nrow(x)
        # residuals of x_t = tet1 + tet2*x_{t-1} + tet3*x_{t-2} + u_t
        u <- (x[7:n] - tet[1] - tet[2]*x[6:(n-1)] - tet[3]*x[5:(n-2)])
        # moment conditions: u_t interacted with the instruments (1, x_{t-3}, ..., x_{t-6})
        f <- cbind(u,u*x[4:(n-3)],u*x[3:(n-4)],u*x[2:(n-5)],u*x[1:(n-6)])
        return(f)
        }

Dg <- function(tet,x)
        {
        n <- nrow(x)
        # regressors (1, x_{t-1}, x_{t-2}) and instruments (1, x_{t-3}, ..., x_{t-6})
        xx <- cbind(rep(1,(n-6)),x[6:(n-1)],x[5:(n-2)])
        H  <- cbind(rep(1,(n-6)),x[4:(n-3)],x[3:(n-4)],x[2:(n-5)],x[1:(n-6)])
        # analytical q x k matrix of derivatives of the sample moment conditions
        f <- -crossprod(H,xx)/(n-6)
        return(f)
        }
n <- 500
set.seed(123)
phi <- c(.2, .7)
thet <- 0.2
sd <- .2
# Simulate an ARMA(2,1) process (sd is passed to the innovation generator)
x <- matrix(arima.sim(n = n, list(order = c(2, 0, 1), ar = phi, ma = thet),
                      sd = sd), ncol = 1)

res_2s <- gmm(g,x,c(0,.3,.6),gradv=Dg)
summary(res_2s)

res_iter <- gmm(g,x,c(0,.3,.6),gradv=Dg,type="iterative")
summary(res_iter)

# The same model but with g as a formula....  much simpler in that case

y <- x[7:n]
ym1 <- x[6:(n-1)]
ym2 <- x[5:(n-2)]

H <- cbind(x[4:(n-3)],x[3:(n-4)],x[2:(n-5)],x[1:(n-6)])
g <- y~ym1+ym2
x <- H

res_2s <- gmm(g,x)
summary(res_2s)

res_iter <- gmm(g,x,type="iterative")
summary(res_iter)

## The following example has been provided by Dieter Rozenich (see details).
# It generates normal random numbers and uses the GMM to estimate 
# mean and sd.
#-------------------------------------------------------------------------------
# Random numbers of a normal distribution
# First we generate normally distributed random numbers and compute the two parameters:
n <- 1000
x <- rnorm(n, mean = 4, sd = 2)
# Implementing the 3 moment conditions
g <- function(tet,x)
        {
        m1 <- (tet[1]-x)
        m2 <- (tet[2]^2 - (x - tet[1])^2)
        m3 <- x^3-tet[1]*(tet[1]^2+3*tet[2]^2)
        f <- cbind(m1,m2,m3)
        return(f)
        }
# Implementing the jacobian
Dg <- function(tet,x)
        {
        jacobian <- matrix(c(1, 2*(-tet[1] + mean(x)), -3*tet[1]^2 - 3*tet[2]^2,
                             0, 2*tet[2], -6*tet[1]*tet[2]), nrow = 3, ncol = 2)
        return(jacobian)
        }
# Now we want to estimate the two parameters using the GMM.
require(gmm)
resgmm <- gmm(g,x,c(0,0),gradv=Dg)
