gmm-learn {stochmod}    R Documentation
Expectation-Maximization algorithm for Gaussian Mixture Models
GMM.learn( xL, K, vL = NULL, gmm.init = NULL, cov.reg = 0.0, tol = 1e-03, LLstop = Inf, min.iter = 3, max.iter = Inf )
xL
    Either a matrix or a list of matrices containing training observation sequences, with one sample per row
K
    Desired number of components
vL
    Either a matrix or a list of matrices containing validation observation sequences, with one sample per row
gmm.init
    Optional initial model; can be partially specified
cov.reg
    Covariance matrix regularization (towards identity); value must be in [0, 1]
tol
    Stopping criterion: relative tolerance on the log-likelihood
LLstop
    Stopping criterion: hard bound on the log-likelihood value
min.iter
    At least this many EM iterations are performed before the validation and tolerance stopping criteria are applied
max.iter
    Stopping criterion: maximum number of iterations
Learns a maximum-likelihood GMM from the data via expectation-maximization; see the usage sketch below.
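A minimal usage sketch, assuming the stochmod package is installed and that GMM.learn follows the signature documented above; the simulated data and the names x1, x2, xTrain and gmm are purely illustrative.

library(stochmod)

set.seed(1)

## Two well-separated Gaussian clusters in R^2, one sample per row
x1 <- matrix(rnorm(200, mean = 0), ncol = 2)
x2 <- matrix(rnorm(200, mean = 5), ncol = 2)
xTrain <- rbind(x1, x2)

## Fit a 2-component GMM with light covariance regularization;
## a held-out matrix in the same format could be passed via vL
gmm <- GMM.learn(xTrain, K = 2, cov.reg = 0.01, tol = 1e-4, max.iter = 100)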
The returned value is a Gaussian Mixture Model defined by:
mu
    [K x p] matrix of component means
sigma
    [K x p x p] array of component covariance matrices
pi
    [K x 1] vector of mixture coefficients
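Continuing the sketch above, and assuming the returned model is an ordinary R list with the fields listed here, its components could be inspected as follows.

## Inspect the learned model (field names as documented above)
gmm$mu       # [K x p] matrix of component means (here 2 x 2)
gmm$sigma    # [K x p x p] array of covariance matrices (here 2 x 2 x 2)
gmm$pi       # [K x 1] vector of mixture coefficients; should sum to 1
sum(gmm$pi)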
Author: Artem Sokolov (Artem.Sokolov@gmail.com)