maximum_likelihood {hyperdirichlet}    R Documentation
Maximum likelihood point of the hyperdirichlet distribution, estimated by numerical maximization.
maximum_likelihood(HD, start_p = NULL, give = FALSE, disallowed = NULL, zero = NULL, ...)
mle(HD, start_p = NULL, give = FALSE, disallowed = NULL, ...)
mle_restricted(HD, start_p = NULL, give = FALSE, disallowed = NULL, zero = NULL, ...)
HD: Object of class hyperdirichlet.
start_p: Start value for the p's. See the Details section.
give: Boolean, with default FALSE meaning to return just the point estimate and TRUE meaning to return all the output from optim().
disallowed: A function of p returning a Boolean, used to restrict the search for the MLE. See the Examples section.
zero: In function mle_restricted(), a Boolean vector with TRUE elements corresponding to components that are constrained to be zero. See the Details section.
...: Further arguments sent to optim().
The user should use function maximum_likelihood(), which is a user-friendly wrapper for one of the two functions (mle() or mle_restricted()), depending on whether argument zero is or is not NULL.
Argument start_p specifies the start point for the optimization; the default NULL is interpreted as rep(1/n, n), where n is dim(HD) (that is, the neutral position). It is not necessary to normalize start_p: this is done by dhyperdirichlet(). Non-default values for this argument are also interpreted by dhyperdirichlet().
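Because start_p is normalized internally, any positive vector of the right length is an acceptable start point. A minimal sketch (assuming the hyperdirichlet package is loaded; the two calls below use start points that normalize to the same vector):

```r
library(hyperdirichlet)

## start_p need not sum to 1: dhyperdirichlet() normalizes it,
## so these two calls start the optimization from the same point
maximum_likelihood(dirichlet(1:4), start_p = c(1, 1, 1, 1))
maximum_likelihood(dirichlet(1:4), start_p = c(10, 10, 10, 10))
```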
Argument zero, if not its default NULL, is Boolean in the standard case; but if it is not Boolean, it is interpreted as a numeric vector of integers indicating which components of the distribution are restricted to zero. An example is given below.
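The two forms of zero can be sketched side by side; both calls below constrain the same components, components 2 and 3 (a sketch assuming the hyperdirichlet package is loaded):

```r
library(hyperdirichlet)

## Boolean form: TRUE elements are constrained to zero
maximum_likelihood(dirichlet(3:8),
                   zero = c(FALSE, TRUE, TRUE, FALSE, FALSE, FALSE))

## Integer form: equivalent, naming the zeroed components directly
maximum_likelihood(dirichlet(3:8), zero = 2:3)
```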
Returns the maximum likelihood estimate as a k-tuple (or, if give is TRUE, all the output from optim()).
The functions minimize -dhyperdirichlet(..., log = TRUE), so there is no need to set fnscale.
Be aware that the aylmer package includes a function maxlike(), which does something different.
Robin K. S. Hankin
maximum_likelihood(dirichlet(1:4))   # should be proportional to 0:3
jj.numerical <- maximum_likelihood(dirichlet(3:8), zero = 2:3)$MLE
jj <- c(2, 0, 0, 5, 6, 7)
jj.analytical <- jj/sum(jj)
jj.numerical - jj.analytical   # should be small
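The disallowed argument takes a function of p; a hedged sketch of its use (assuming, as the argument description suggests, that a TRUE return value marks a point as disallowed and excludes it from the search):

```r
library(hyperdirichlet)

## Restrict the search to the region where the first component is
## at most 1/2: points with p[1] > 1/2 are reported as disallowed
## (assumption: TRUE means the point is excluded from the search)
maximum_likelihood(dirichlet(1:4),
                   disallowed = function(p){ p[1] > 1/2 })
```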