smoothtail-package {smoothtail}
Given independent and identically distributed observations X_1, ..., X_n from a Generalized Pareto distribution (GPD) with shape parameter gamma in [-1, 0], this package offers three methods to compute estimates of gamma. The estimates are based on the principle of replacing the order statistics X_{(1)} < ... < X_{(n)} of the sample by the corresponding quantiles hat X_{(1)}, ..., hat X_{(n)} of the distribution function hat F_n that is based on the log-concave density estimator hat f_n. This procedure is justified by the fact that the GPD density is log-concave whenever gamma lies in [-1, 0].
Package: smoothtail
Type: Package
Version: 1.1.2
Date: 2008-08-23
License: GPL (version 2 or newer)
Use this package to estimate the shape parameter gamma of a Generalized Pareto Distribution (GPD). In extreme value theory, gamma is known as the tail index. We offer three new estimators, all based on the fact that the density function of the GPD is log-concave if gamma lies in [-1, 0]; see Mueller and Rufibach (2006). The functions for estimation of the tail index are listed below (a minimal call sketch follows the list):
pickands
falk
falkMVUE
generalizedPick
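Judging from the Examples section at the bottom of this page, each of these functions takes the sample as its first argument and returns a matrix with one row per number of order statistics used; columns 2 and 3 appear to contain the smoothed and unsmoothed variants of the estimate. The following minimal call sketch rests on these assumptions; the argument names c and gam0 and the endpoint argument of falkMVUE are copied from the Examples section.

## hedged call sketch; call pattern copied from the Examples section below,
## interpretation of columns 2 and 3 as smoothed/unsmoothed estimates is an assumption
set.seed(1)
gam <- -0.5
x <- rgpd(30, gam)                         # GPD sample with tail index gam
pick <- pickands(x)
fk <- falk(x)
fkMVUE <- falkMVUE(x, -1 / gam)            # second argument: known endpoint omega
genPick <- generalizedPick(x, c = 0.75, gam0 = -1/3)
head(pick[, 2:3])                          # the two estimate columns plotted in the Examples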
Additionally, functions for the density, distribution function, quantile function and random number generation of a GPD with location parameter 0, shape parameter gamma and scale parameter sigma are provided: dgpd, pgpd, qgpd and rgpd.
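As a quick consistency check of these functions, the sketch below assumes that qgpd and dgpd accept the same argument order (values first, then the shape parameter gamma, with scale sigma = 1 by default) as the rgpd and pgpd calls used in the Examples section.

## hedged consistency check; argument order for qgpd/dgpd is assumed to match
## the rgpd/pgpd calls in the Examples section
gam <- -0.5
p <- c(0.25, 0.5, 0.75)
q <- qgpd(p, gam)    # GPD quantiles for gamma = -0.5, sigma = 1
pgpd(q, gam)         # round trip: should reproduce p
dgpd(q, gam)         # density at these quantiles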
This package depends on the package logcondens for estimation of a log–concave density.
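The sketch below illustrates the quantile-replacement principle described above: fit the log-concave density with activeSetLogCon from logcondens and read off smoothed "order statistics" as quantiles of hat F_n. The components $x and $Fhat are used exactly as in the Examples section; the inversion via approx is purely illustrative and is not how the estimators in this package compute their quantiles internally.

## hedged illustration of the quantile-replacement principle; the approx()
## inversion is for illustration only
library(logcondens)
set.seed(1977)
gam <- -0.5
x <- sort(rgpd(50, gam))          # ordered GPD sample
est <- activeSetLogCon(x)         # log-concave MLE: density and cdf hat F_n
n <- length(x)
## smoothed order statistics: quantiles of hat F_n at levels i / (n + 1)
xhat <- approx(x = est$Fhat, y = est$x, xout = (1:n) / (n + 1))$y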
Let us briefly clarify what we mean by log-concave density estimation. Suppose we are given an ordered sample Y_1 < ... < Y_n of i.i.d. random variables with density function f, where f = exp(varphi) for a concave function varphi : R -> [-infty, infty). Following the development in Duembgen and Rufibach (2007), it is then possible to obtain an estimator hat f_n = exp(hat varphi_n) of f via the maximizer hat varphi_n of
L(varphi) = sum_{i=1}^n varphi(Y_i) - int exp(varphi(t)) dt
over all concave functions varphi. It turns out that hat varphi_n is piecewise linear, with knots only at (some of) the observation points. Therefore, the infinite-dimensional optimization problem of finding the function hat varphi_n boils down to the finite-dimensional problem of finding the vector (hat varphi_n(Y_1), ..., hat varphi_n(Y_n)). How to solve this problem is described in Rufibach (2006, 2007) and, in a more general setting, in Duembgen, Huesler and Rufibach (2007). The distribution function based on hat f_n is defined as
hat F_n(x) = int_{Y_1}^x hat f_n(t) dt
for real x. Starting the integration at Y_1 is justified by the fact that hat f_n vanishes below the smallest observation, so that hat F_n(Y_1) = 0.
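Because hat varphi_n is piecewise linear, the integral defining hat F_n has a closed form on each interval between consecutive observations: with a = hat varphi_n(Y_j), b = hat varphi_n(Y_{j+1}) and d = Y_{j+1} - Y_j, the mass added on that interval is d (exp(b) - exp(a)) / (b - a), with limit d exp(a) as b approaches a. The sketch below evaluates hat F_n at the observation points from these quantities; the helper name is hypothetical, and the commented comparison assumes the fit object exposes the knots and values as $x and $phi, as the output of activeSetLogCon does.

## hedged sketch: hat F_n at the points Y_1 < ... < Y_n from the values
## phi = hat varphi_n(Y_i); the function name is illustrative only
Fhat.at.points <- function(y, phi) {
  a <- phi[-length(phi)]
  b <- phi[-1]
  d <- diff(y)
  ## closed-form integral of exp over each linear piece, with the b -> a limit
  inc <- ifelse(abs(b - a) > 1e-10, d * (exp(b) - exp(a)) / (b - a), d * exp(a))
  c(0, cumsum(inc))
}
## consistency check (assumes components $x, $phi, $Fhat of the logcondens fit):
## est <- activeSetLogCon(x); all.equal(Fhat.at.points(est$x, est$phi), est$Fhat)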
Kaspar Rufibach (maintainer), kaspar.rufibach@gmail.com
Samuel Mueller, mueller@maths.uwa.edu.au,
http://www.maths.usyd.edu.au/ut/people?who=S_Mueller
Kaspar Rufibach acknowledges support by the Swiss National Science Foundation SNF, http://www.snf.ch
Duembgen, L. and Rufibach, K. (2007). Maximum likelihood estimation of a log–concave density and its distribution function: basic properties and uniform consistency. Bernoulli, to appear.
Duembgen, L., Huesler, A. and Rufibach, K. (2007). Active set and EM algorithms for log-concave densities based on complete and censored data. Technical report 61, IMSV, University of Bern. Available at http://arxiv.org/abs/0707.4643.
Mueller, S. and Rufibach, K. (2006). Smooth tail index estimation. J. Stat. Comput. Simul., to appear.
Mueller, S. and Rufibach, K. (2008). On the max-domain of attraction of distributions with log-concave densities. Statist. Probab. Lett., 78, 1440–1444.
Rufibach, K. (2006). Log-concave Density Estimation and Bump Hunting for i.i.d. Observations. PhD thesis, University of Bern, Switzerland, and Georg-August University of Goettingen, Germany. Available at http://www.stub.unibe.ch/download/eldiss/06rufibach_k.pdf.
Rufibach, K. (2007). Computing maximum likelihood estimators of a log-concave density function. J. Stat. Comput. Simul., 77, 561–574.
Package logcondens.
# generate ordered random sample from GPD
set.seed(1977)
n <- 20
gam <- -0.75
x <- rgpd(n, gam)

# compute known endpoint
omega <- -1 / gam

# estimate log-concave density, for illustration purposes
est <- activeSetLogCon(x)

# plot distribution functions
s <- seq(0.01, max(x), by = 0.01)
plot(0, 0, type = 'n', ylim = c(0, 1), xlim = range(c(x, s)))
rug(x)
lines(s, pgpd(s, gam), type = 'l', col = 2)
lines(x, 1:n / n, type = 's', col = 3)
lines(x, est$Fhat, type = 'l', col = 4)
legend(1, 0.4, c('true', 'empirical', 'estimated'), col = 2:4, lty = 1)

# compute tail index estimators
falk.logcon <- falk(x)
falkMVUE.logcon <- falkMVUE(x, omega)
pick.logcon <- pickands(x)
genPick.logcon <- generalizedPick(x, c = 0.75, gam0 = -1/3)

# plot smoothed and unsmoothed estimators versus number of order statistics
plot(0, 0, type = 'n', xlim = c(0, n), ylim = c(-1, 0.2))
lines(1:n, pick.logcon[, 2], col = 1)
lines(1:n, pick.logcon[, 3], col = 1, lty = 2)
lines(1:n, falk.logcon[, 2], col = 2)
lines(1:n, falk.logcon[, 3], col = 2, lty = 2)
lines(1:n, falkMVUE.logcon[, 2], col = 3)
lines(1:n, falkMVUE.logcon[, 3], col = 3, lty = 2)
lines(1:n, genPick.logcon[, 2], col = 4)
lines(1:n, genPick.logcon[, 3], col = 4, lty = 2)
abline(h = gam, lty = 3)
legend(11, 0.2, c("Pickands", "Falk", "Falk MVUE", "Generalized Pickands'"), lty = 1, col = 1:4)