ratiocalc {qpcR}    R Documentation
For multiple qPCR data of type 'pcrbatch', this function calculates ratios between two samples, using normalization against a reference gene if supplied. The input can be single qPCR data or (more likely) data containing replicates. Errors and confidence intervals for the obtained ratios can be calculated by Monte-Carlo simulation, by a permutation approach similar to the popular REST software, and by (first-order) error propagation. Statistical significance of the ratios is assessed by a permutation approach comparing randomly reallocated with non-reallocated data. See 'Details'.
ratiocalc(data, group = NULL, which.eff = c("sig", "sli", "exp"), type.eff = c("individual", "mean.single", "median.single", "mean.pair", "median.pair"), which.cp = c("cpD2", "cpD1", "cpE", "cpR", "cpT", "Cy0"), ...)
data        multiple qPCR data generated by pcrbatch.
group       a character vector defining the replicates (if any) as well as target and reference data. See 'Details'.
which.eff   the efficiency estimate to be used, named after the method it was obtained with. Defaults to the sigmoidal fit ("sig"); see the output of pcrbatch. Alternatively, a fixed numeric value between 1 and 2 that is used for all runs.
type.eff    type of efficiency to be supplied to the error analysis. See 'Details'.
which.cp    type of crossing point to be used for the analysis. See the output of efficiency.
...         other parameters to be passed to propagate.
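Note that which.eff can also be given as one fixed numeric efficiency. A minimal sketch, assuming DAT and GROUP are set up as in the 'Examples' section (the value 1.9 and the object name res.fix are arbitrary):

## one fixed efficiency of 1.9 applied to all runs
res.fix <- ratiocalc(DAT, GROUP, which.eff = 1.9, which.cp = "cpD2")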
The replicates of the 'pcrbatch' data columns are to be defined as a character vector with the following abbreviations:
"gs": gene-of-interest in target sample
"gc": gene-of-interest in control sample
"rs": reference gene in target sample
"rc": reference gene in control sample
There is no distinction between the different runs of the same sample, so that three different runs of a gene of interest in a target sample are defined as c("gs", "gs", "gs"). The error analysis calculates statistics from ALL replicates, so that a further sub-categorization of runs seems superfluous.
Examples:
No replicates: NULL.
2 runs with 2 replicates each, no reference: c("gs", "gs", "gs", "gs", "gc", "gc", "gc", "gc").
1 run with 2 replicates each and reference data: c("gs", "gs", "gc", "gc", "rs", "rs", "rc", "rc"). A short sketch for building such vectors programmatically follows below.
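For larger designs, the group vector can also be built in base R; the order of its entries must match the column order of the 'pcrbatch' data. A minimal sketch (three replicates per category is an arbitrary choice):

## gene-of-interest sample/control, then reference sample/control,
## three replicates each
GROUP <- rep(c("gs", "gc", "rs", "rc"), each = 3)
GROUP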
type.eff defines how the efficiencies are pre-processed before being transferred to propagate. The qPCR community sometimes uses individual efficiencies and sometimes efficiencies averaged over replicates, so several settings have been implemented (see the short sketch after this list). In detail, these are the following:
"individual": The individual efficiencies from each run are used.
"mean.single": Efficiencies are averaged over all replicates.
"median.single": Same as above but median instead of mean.
"mean.pair": Efficiencies are averaged from all replicates of target sample and control.
"median.pair": Same as above but median instead of mean.
The ratios are calculated according to the following formulas:
Without reference PCR:

\frac{E.gc^{cp.gc}}{E.gs^{cp.gs}}

With reference PCR:

\frac{E.gc^{cp.gc}}{E.gs^{cp.gs}} \cdot \frac{E.rs^{cp.rs}}{E.rc^{cp.rc}}
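Plugging made-up values into these formulas illustrates the calculation (numbers are purely for demonstration):

## gene of interest crosses 3 cycles earlier in the target sample
E.gs <- 1.8; cp.gs <- 15    # gene of interest, target sample
E.gc <- 1.8; cp.gc <- 18    # gene of interest, control sample
E.rs <- 1.9; cp.rs <- 17    # reference gene, target sample
E.rc <- 1.9; cp.rc <- 17    # reference gene, control sample
E.gc^cp.gc / E.gs^cp.gs                                   # without reference: 1.8^3 = 5.832
(E.gc^cp.gc / E.gs^cp.gs) * (E.rs^cp.rs / E.rc^cp.rc)     # with reference; the reference factor is 1 here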
The permutation approach permutes crossing point and efficiency replicates within the sample and control groups. The sample runs and control runs (and their respective efficiencies) are tied together, which is similar to the approach of the popular REST software ("pairwise-reallocation test"). Ratios are calculated for each permutation and compared to the ratios obtained when runs are randomly reallocated between the sample and the control group. A p-value is calculated from the fraction of permutations in which the reallocation gave a higher/lower ratio than the original data. The resulting p-value thus indicates the significance against the null hypothesis that the observed ratios could equally well have arisen from randomly reallocated data.
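A minimal base-R sketch of this reallocation idea, using hypothetical crossing points and one fixed efficiency (a simplification, not the exact internal algorithm):

set.seed(123)
cp.gs <- c(24.5, 24.9, 24.6, 24.7)          # target sample runs (made up)
cp.gc <- c(27.6, 27.1, 27.8, 27.5)          # control sample runs (made up)
E <- 1.9                                    # one fixed efficiency for simplicity
ratio0 <- E^mean(cp.gc) / E^mean(cp.gs)     # ratio from the original allocation
perm <- replicate(1000, {
  pool <- sample(c(cp.gs, cp.gc))           # randomly reallocate runs between groups
  E^mean(pool[1:4]) / E^mean(pool[5:8])
})
mean(perm >= ratio0)                        # fraction of reallocations with a ratio >= the original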
Confidence values are returned for all three methods (Monte Carlo, permutation, error propagation) as follows:
Monte-Carlo: From the evaluations of the Monte-Carlo simulated data.
Permutation: From the evaluations of the within-group permutated data.
Propagation: From the propagated error, assuming normality.
A list with the following components: the complete output from propagate, together with the data that was transferred to propagate for the error analysis, attached as item data.
The error calculated from qPCR data by propagate often seems quite high. This largely depends on the error of the exponent (i.e. the threshold cycles) of the exponential function. The error usually decreases when setting use.cov = TRUE in the ... part of the function. It is debatable, however, whether the variables 'efficiency' and 'threshold cycle' have a covariance structure. As the efficiency is deduced at the second derivative maximum of the sigmoidal curve, variance in the threshold cycle should have an effect on the efficiency, such that the use of a variance-covariance matrix might be feasible. It is also commonly encountered that the propagated error is much higher when using reference data, as the number of partial derivative functions increases.
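A sketch of how use.cov = TRUE can be handed through '...' (data and grouping taken from the 'Examples' section; the object name res.cov is arbitrary):

DAT <- pcrbatch(reps, 2:9, l4)
GROUP <- c("gs", "gs", "gs", "gs", "gc", "gc", "gc", "gc")
res.cov <- ratiocalc(DAT, GROUP, which.eff = "sli",
                     type.eff = "mean.single", which.cp = "cpD2",
                     use.cov = TRUE)
res.cov$error.Prop/res.cov$eval.Prop   # relative propagated error, often smaller than without use.cov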
Andrej-Nikolai Spiess
Livak KJ & Schmittgen TD (2001) Analysis of relative gene expression data using real-time quantitative PCR and the 2(-Delta Delta C(T)) method. Methods, 25: 402-408.
Tichopad A et al. (2003) Standardized determination of real-time PCR efficiency from a single reaction set-up. Nucleic Acids Res, 31: e122.
Liu W & Saint DA (2002) Validation of a quantitative method for real time PCR kinetics. Biochem Biophys Res Commun, 294: 347-353.
Pfaffl MW et al. (2002) Relative expression software tool (REST) for group-wise comparison and statistical analysis of relative expression results in real-time PCR. Nucleic Acids Res, 30: e36.
## Only target sample and control,
## no reference, 4 replicates each,
## individual efficiencies for error calculation.
DAT <- pcrbatch(reps, 2:9, l4)
GROUP <- c("gs", "gs", "gs", "gs", "gc", "gc", "gc", "gc")
res <- ratiocalc(DAT, GROUP, which.eff = "sli",
                 type.eff = "individual", which.cp = "cpD2")
## Typical when using individual efficiencies:
## this inflates the error. 95% confidence intervals
## include 1 (no differential regulation) and errors are
## also extremely high (over 100%).
res$conf.Sim
res$conf.Perm
res$error.Prop/res$eval.Prop
res$pval.Perm

## Gets better using efficiencies averaged
## over all replicates.
res2 <- ratiocalc(DAT, GROUP, which.eff = "sli",
                  type.eff = "mean.single", which.cp = "cpD2")
res2$conf.Sim
res2$conf.Perm
res2$error.Prop/res2$eval.Prop
## p-value indicates significant
## upregulation in comparison to randomly reallocated
## samples (similar to REST software).
res2$pval.Perm

## Using reference data.
## Toy example is the same data as above
## but replicated as reference, such
## that the ratio should be 1.
## Not run:
DAT2 <- pcrbatch(reps, c(2:9, 2:9), l4)
GROUP2 <- c("gs", "gs", "gs", "gs", "gc", "gc", "gc", "gc",
            "rs", "rs", "rs", "rs", "rc", "rc", "rc", "rc")
res3 <- ratiocalc(DAT2, GROUP2, which.eff = "sli",
                  type.eff = "mean.single", which.cp = "cpD2")
res3$conf.Sim
res3$conf.Perm
res3$error.Prop/res3$eval.Prop
res3$pval.Perm
## End(Not run)

## Same as above, but the reference data
## is mirrored such that the ratio
## is squared.
DAT3 <- pcrbatch(reps, c(2:9, 9:2), l4)
GROUP3 <- c("gs", "gs", "gs", "gs", "gc", "gc", "gc", "gc",
            "rs", "rs", "rs", "rs", "rc", "rc", "rc", "rc")
res4 <- ratiocalc(DAT3, GROUP3, which.eff = "sli",
                  type.eff = "mean.single", which.cp = "cpD2")
res4$conf.Sim
res4$conf.Perm
res4$error.Prop/res4$eval.Prop
res4$pval.Perm

## Example without replicates
## => no Monte-Carlo and permutations
## and no plots.
DAT <- pcrbatch(reps, 2:5, l4)
GROUP <- c("gs", "gc", "rs", "rc")
res5 <- ratiocalc(DAT, GROUP, which.eff = "sli",
                  type.eff = "individual", which.cp = "cpD2")
res5$conf.Sim
res5$conf.Perm
res5$error.Prop/res5$eval.Prop
res5$pval.Perm

## Compare 'propagate' to the REST software
## using the data from the REST 2008
## manual (http://rest.gene-quantification.info/).
## Have to create a dataframe with values as we do
## not use 'pcrbatch', but external cp's & eff's!
## Ties define random reallocation of crossing points
## keeping controls and samples together.
## See help for 'propagate'.
## Not run:
EXPR <- expression((2.01^(cp.gc - cp.gs)/1.97^(cp.rc - cp.rs)))
cp.rc <- c(26.74, 26.85, 26.83, 26.68, 27.39, 27.03, 26.78, 27.32, NA, NA)
cp.rs <- c(26.77, 26.47, 27.03, 26.92, 26.97, 26.97, 26.07, 26.3, 26.14, 26.81)
cp.gc <- c(27.57, 27.61, 27.82, 27.12, 27.76, 27.74, 26.91, 27.49, NA, NA)
cp.gs <- c(24.54, 24.95, 24.57, 24.63, 24.66, 24.89, 24.71, 24.9, 24.26, 24.44)
DAT <- cbind(cp.rc, cp.rs, cp.gc, cp.gs)
res6 <- propagate(EXPR, DAT, do.sim = TRUE, do.perm = TRUE,
                  perm.crit = "perm > init", ties = c(1, 2, 1, 2))
res6$conf.Sim
res6$conf.Perm
res6$eval.Prop
res6$error.Prop
res6$pval.Perm
## End(Not run)

## Does error propagation in qPCR quantitation make sense?
## In ratio calculations based on (E1^cp1)/(E2^cp2),
## only 2% error in each of the variables results in
## over 50% propagated error!
## Not run:
x <- NULL
y <- NULL
for (i in seq(0, 0.1, by = 0.01)) {
  E1 <- c(1.7, 1.7 * i)
  cp1 <- c(15, 15 * i)
  E2 <- c(1.7, 1.7 * i)
  cp2 <- c(18, 18 * i)
  DF <- cbind(E1, cp1, E2, cp2)
  res <- propagate(expression((E1^cp1)/(E2^cp2)), DF,
                   type = "stat", plot = FALSE)
  x <- c(x, i * 100)
  y <- c(y, (res$error.Prop/res$eval.Prop) * 100)
}
plot(x, y, xlim = c(0, 10), lwd = 2,
     xlab = "c.v. [%]", ylab = "c.v. (prop) [%]")
## End(Not run)