HMMFit {RHmm}
This function fits a hidden Markov model to the user's data with the Baum-Welch algorithm and returns an HMMFitClass object containing the results.
HMMFit(obs, dis="NORMAL", nStates=, ..., asymptCov=FALSE, asymptMethod=c('nlme', 'optim'))
HMMFit(obs, dis="DISCRETE", nStates=, levels=NULL, ..., asymptCov=FALSE, asymptMethod=c('nlme', 'optim'))
HMMFit(obs, dis="MIXTURE", nStates=, nMixt=, ..., asymptCov=FALSE, asymptMethod=c('nlme', 'optim'))
obs: A vector, a matrix, a data frame, a list of vectors or a list of matrices of observations. See the 'obs' parameter section below.
dis: Distribution name: 'NORMAL', 'DISCRETE' or 'MIXTURE'. Default 'NORMAL'.
nStates: Number of hidden states. Default 2.
nMixt: Number of mixtures of normal distributions if dis='MIXTURE'.
levels: A character vector of all the different levels of 'obs'. By default (levels=NULL), this vector is computed from 'obs'.
asymptCov: A boolean. asymptCov=TRUE if the asymptotic covariance matrix is computed. Default FALSE.
asymptMethod: A string indicating the numerical method used to compute the Hessian of the parameters: 'nlme' or 'optim'. Default 'nlme'.
...: Optional parameters, for example 'paramBW' (see the examples).
The function returns an HMMFitClass object with the following elements:
HMM: A HMMClass object with the fitted values of the model.
LLH: Log-likelihood.
BIC: BIC criterion.
nIter: Number of iterations of the Baum-Welch algorithm.
relVariation: Last relative variation of the log-likelihood.
asymptCov: Asymptotic covariance matrix of the independent parameters. NULL if not computed.
obs: The observations.
call: The call object of the function call.
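As a minimal sketch (not part of the original page), these elements can be accessed from the returned object like components of a named list, reusing the n1d_3s data set from the examples below:
library(RHmm)
data(n1d_3s)
fit <- HMMFit(obs_n1d_3s, nStates=3)   # fit a 3-state univariate normal HMM
fit$HMM            # fitted HMMClass object
fit$LLH            # log-likelihood at the optimum
fit$BIC            # BIC criterion
fit$nIter          # number of Baum-Welch iterations
fit$asymptCov      # NULL here, since asymptCov=FALSE by default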
If you fit the model with only one sample, obs is either a vector (for univariate distributions), a matrix (for multivariate distributions) or a data frame.
In the last two cases, the number of columns of obs defines the dimension of the observations.
If you fit the model with more than one sample, obs is a list of samples; each element of obs is then a vector
(for univariate distributions) or a matrix (for multivariate distributions). The samples do not need to have the same length.
For discrete distributions, obs can be a vector (or a list of vectors) of R factor objects. These shapes are sketched below.
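A sketch of the accepted shapes of obs, using simulated data purely for illustration:
library(RHmm)
# one univariate sample: a numeric vector
obsVec <- rnorm(500)
HMMFit(obsVec, nStates=2)
# one multivariate sample: a matrix whose columns are the dimensions
obsMat <- matrix(rnorm(1000), ncol=2)
HMMFit(obsMat, nStates=2)
# several samples of different lengths: a list of vectors
obsList <- list(rnorm(300), rnorm(450))
HMMFit(obsList, nStates=2)
# discrete observations: a factor (or a list of factors)
obsFac <- factor(sample(c("rain", "sun"), 200, replace=TRUE))
HMMFit(obsFac, dis="DISCRETE", nStates=2)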
The initial values of the 'initProb' and 'transMat' parameters are uniformly drawn.
For univariate normal distributions, the empirical mean m and variance s^2 of all the samples are computed.
Then, for each state, an initial value of the 'mean' parameter is uniformly drawn between m - 3s and m + 3s,
and an initial value of the 'var' parameter is uniformly drawn between 0.5 s^2 and 3 s^2.
For multivariate normal distributions, the same procedure is applied to each component of the mean vectors.
The initial covariance matrices are diagonal, and each initial variance is computed as in the univariate case.
For mixtures of univariate normal distributions, initial values for the 'mean' and 'var' parameters are computed
in the same way as for normal distributions, and the initial value of the 'proportion' parameter is uniformly drawn.
For mixtures of multivariate normal distributions, the same procedure is applied to each component of the mean vectors,
all the covariance matrices are diagonal, each initial variance is computed as in the univariate case, and the initial value
of the 'proportion' parameter is also uniformly drawn.
For discrete distributions, the initial values of the 'proba' parameters are uniformly drawn.
Of course, the initial values of the 'initProb', 'proba', 'proportion' and 'transMat' parameters are standardized to
ensure that they represent probability vectors or transition matrices.
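The following standalone sketch mirrors the random initialization recipe described above for a univariate normal HMM. It is an illustration of the procedure, not the package's internal code, and 'randomInitNormal' is a hypothetical helper name:
randomInitNormal <- function(obs, nStates) {
  allObs <- unlist(obs)                      # pool all samples
  m <- mean(allObs)
  s <- sd(allObs)
  initProb <- runif(nStates)
  initProb <- initProb / sum(initProb)       # standardize to a probability vector
  transMat <- matrix(runif(nStates^2), nStates, nStates)
  transMat <- transMat / rowSums(transMat)   # standardize the rows of the transition matrix
  list(initProb = initProb,
       transMat = transMat,
       mean = runif(nStates, m - 3 * s, m + 3 * s),   # 'mean' drawn in [m - 3s, m + 3s]
       var  = runif(nStates, 0.5 * s^2, 3 * s^2))     # 'var' drawn in [0.5 s^2, 3 s^2]
}
randomInitNormal(rnorm(1000, mean = 2, sd = 1.5), nStates = 3)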
The asymptotic covariance matrix of the estimates is computed by finite difference approximations,
using either the function 'fdHess' from the nlme package if asymptMethod='nlme',
or the internal function 'optimhess' of 'optim' from the stats package if asymptMethod='optim'.
The summary and print.summary methods display the results.
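As a sketch, the asymptotic covariance matrix returned in the asymptCov element can be turned into approximate standard errors by taking the square root of its diagonal; this assumes the usual asymptotic interpretation and the parameter ordering used by the package:
library(RHmm)
data(n1d_3s)
fit <- HMMFit(obs_n1d_3s, nStates=3, asymptCov=TRUE)   # default asymptMethod is 'nlme'
stdErr <- sqrt(diag(fit$asymptCov))                    # approximate asymptotic standard errors
stdErr
summary(fit)                                           # displays the fitted model and the results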
Bilmes, Jeff A. (1997). A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. http://ssli.ee.washington.edu/people/bilmes/mypapers/em.ps.gz
Visser, Ingmar, Raijmakers, Maartje E. J. and Molenaar, Peter C. M. (2000). Confidence intervals for hidden Markov model parameters. British Journal of Mathematical and Statistical Psychology, 53, 317-327.
# Fit a 3 states 1D-gaussian model
data(n1d_3s)
HMMFit(obs_n1d_3s, nStates=3)

# Fit a 3 states gaussian HMM for obs_n1d_3s
# with iterations printing and kmeans initialization
Res_n1d_3s <- HMMFit(obs=obs_n1d_3s, nStates=3,
                     paramBW=list(verbose=1, init="KMEANS"),
                     asymptCov=TRUE, asymptMethod='optim')
summary(Res_n1d_3s)

# Fit a 2 states 3D-gaussian model
data(n3d_2s)
summary(HMMFit(obs_n3d_2s, asymptCov=TRUE, asymptMethod='optim'))

# Fit a 2 states mixture of 3 normal distributions HMM
# for data_mixture
data(data_mixture)
ResMixture <- HMMFit(data_mixture, nStates=2, nMixt=3, dis="MIXTURE")

# Fit a 3 states discrete HMM for weather data
data(weather)
ResWeather <- HMMFit(weather, dis='DISCRETE', nStates=3)