--- title: "Random Rotation Package Introduction" author: "Peter Hettegger" date: "`r Sys.Date()`" output: BiocStyle::html_document: toc: TRUE fig_width: 4.7 fig_height: 4 vignette: > %\VignetteIndexEntry{Random Rotation Package Introduction} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} bibliography: ../inst/REFERENCES.bib link-citations: true --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` # Introduction `randRotation` is an R package intended for generation of randomly rotated data to resample null distributions of linear model based dependent test statistics. See also [@Yekutieli1999] for resampling dependent test statistics. The main application is to resample test statistics on linear model coefficients following arbitrary batch effect correction methods, see also section [Quick start](#quick-start). The random rotation methodology is thereby applicable for linear models in combination with normally distributed data. Note that the resampling procedure is actually based on random orthogonal matrices, which is a broader class than random rotation matrices. Nevertheless, we adhere to the naming convention of [@Langsrud2005] designating this approach as random rotation methodology. Possible applications for resampling by roation, that are outlined in this document, are: (i) linear models in combination with practically arbitrary (linear or non-linear) batch effect correction methods, section \@ref(BE-correction); (ii) generation of resampled datasets for evaluation of data analysis pipelines, section \@ref(unskewed); (iii) calculation of resampling based test statistics for calculating resampling based p-values and false discovery rates (FDRs), sections \@ref(unskewed) and \@ref(FDR); and (iv) estimation of the degrees of freedoms (df) of mapping functions, section \@ref(dfest). Generally, the rotation approach provides a methodology for generating resampled data in the context of linear models and thus potentially has further conceivable areas of applications in high-dimensional data analysis with dependent variables. Nevertheless, we focus this document on the outlined range of issues in order to provide an intuitive and problem-centered introduction. # Installation Execute the following code to install package `randRotation`: ```{r, eval=FALSE} if(!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager") BiocManager::install("randRotation") ``` # Sample dataset {#dataset} For subsequent analyses we create a hypothetical dataset with 3 batches, each containing 5 Control and 5 Cancer samples with 1000 features (genes). Note that the created dataset is pure noise and no artificial covariate effects are introduced. We thus expect uniformly distributed p-values for linear model coefficients. ```{r message=FALSE} library(randRotation) set.seed(0) # Dataframe of phenotype data (sample information) pdata <- data.frame(batch = as.factor(rep(1:3, c(10,10,10))), phenotype = rep(c("Control", "Cancer"), c(5,5))) features <- 1000 # Matrix with random gene expression data edata <- matrix(rnorm(features * nrow(pdata)), features) rownames(edata) <- paste("feature", 1:nrow(edata)) xtabs(data = pdata) ``` # Quick start - linear models with batch effect correction {#quick-start} A main application of the package is to resample null distributions of parameter estimates for linear models following batch effect correction. 
We first create our model matrix:

```{r}
mod1 <- model.matrix(~1+phenotype, pdata)
head(mod1)
```

We then initialise the random rotation object with `initBatchRandrot` and select the `phenotype` coefficient as the null hypothesis coefficient:

```{r}
rr <- initBatchRandrot(Y = edata, X = mod1, coef.h = 2, batch = pdata$batch)
```

Now we define the data analysis pipeline that should be run on the original dataset and on the rotated datasets. Here we include as first step (I) our batch effect correction routine `ComBat` (`r Biocpkg("sva")` package) and as second step (II) we obtain the coefficient of interest from the linear model fit.

```{r}
statistic <- function(Y, batch, mod){
  # (I) Batch effect correction with "ComBat" from the "sva" package
  Y <- sva::ComBat(dat = Y, batch = batch, mod = mod)

  # (II) Linear model fit
  fit1 <- lm.fit(x = mod, y = t(Y))
  abs(t(coef(fit1))[,2])
}
```

Note that larger values of the `statistic` function are considered more significant in the subsequently used `pFdr` function. We thus take the absolute values of the coefficients in order to calculate two-sided (two-tailed) p-values with `pFdr`. We highly recommend using scale independent statistics (pivotal quantities) such as t-values instead of parameter estimates (as obtained with `coef`), see also `?randRotation::pFdr`; a sketch of such a t-statistic based `statistic` function is given at the end of this section. Nevertheless, we use `coef` here in order to avoid bulky function definitions.

The `rotateStat` function calculates `statistic` on the original (non-rotated) dataset and on 10 random rotations. `batch` and `mod` are provided as additional parameters to `statistic`.

```{r message=FALSE, results='hide'}
rs1 <- rotateStat(initialised.obj = rr, R = 10, statistic = statistic,
                  batch = pdata$batch, mod = mod1)
```

```{r}
rs1
```

Resampling based p-values are obtained with `pFdr`. As we use "pooling" of the rotated statistics in `pFdr`, 10 random rotations are sufficient.

```{r}
p.vals <- pFdr(rs1)

hist(p.vals, col = "lightgreen")
abline(h = 100, col = "blue", lty = 2)
qqunif(p.vals)
```

We see that, as expected, our p-values are approximately uniformly distributed.

**Hint:** The outlined procedure also works with `statistic` functions that return multiple columns (`rotateStat` and `pFdr` handle such functions adequately). One could, for example, perform several batch effect correction methods and calculate the statistic of interest for each correction method. In this way, one could subsequently evaluate the influence of different batch effect correction methods on the statistic of interest.

**Additional info:** Below, the analysis pipeline is performed without rotation for comparison with the previous analyses. Following batch effect correction with `ComBat` (`r Biocpkg("sva")` package), we obtain p-values from the linear fit coefficients as follows:

```{r}
edata.combat <- sva::ComBat(dat = edata, batch = pdata$batch, mod = mod1)
fit1 <- lm.fit(x = mod1, y = t(edata.combat))

# t-statistics
var.beta <- diag(solve(t(mod1) %*% mod1))
s2 <- colSums(resid(fit1)^2) / df.residual(fit1)
t1 <- t(coef(fit1) / sqrt(var.beta)) / sqrt(s2)

# P-values from t-statistics
p.vals.nonrot <- 2*pt(abs(t1), df.residual(fit1), lower.tail = FALSE)
p.vals.nonrot <- p.vals.nonrot[,2]

hist(p.vals.nonrot, col = "lightgreen")
abline(h = 100, col = "blue", lty = 2)
qqunif(p.vals.nonrot)

plot(p.vals, p.vals.nonrot, log = "xy", pch = 20)
abline(0,1, col = 4, lwd = 2)
```

We see that the p-values are non-uniformly distributed. See also section \@ref(skewed-null).
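As mentioned above, a scale independent (pivotal) statistic such as a t-value is generally preferable to a raw coefficient. The following is a minimal sketch (our addition, not evaluated here) of such a t-statistic based `statistic` function, reusing the manual t-statistic calculation from the comparison above; it can be passed to `rotateStat` and `pFdr` exactly like `statistic` above.

```{r, eval=FALSE}
statistic.t <- function(Y, batch, mod){
  # (I) Batch effect correction with "ComBat" from the "sva" package
  Y <- sva::ComBat(dat = Y, batch = batch, mod = mod)

  # (II) Linear model fit with manual t-statistics
  fit1 <- lm.fit(x = mod, y = t(Y))
  var.beta <- diag(solve(t(mod) %*% mod))
  s2 <- colSums(resid(fit1)^2) / df.residual(fit1)
  t1 <- t(coef(fit1) / sqrt(var.beta)) / sqrt(s2)

  # Absolute values for two-sided p-values with "pFdr"
  abs(t1[,2])
}
```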
# Basic principle of random rotation methods {#basic-principle}

In the random rotation methodology, the observed data vectors (for each feature) are rotated in such a way that the *determined* coefficients ($\boldsymbol{B_D}$ in @Langsrud2005) stay constant when resampling under the null hypothesis $H0: \boldsymbol{B_H = 0}$, see [@Langsrud2005]. The following example shows that the intercept coefficient of the *null model* does not change when the rotation is performed under the null hypothesis:

```{r}
# Specification of the full model
mod1 <- model.matrix(~1+phenotype, pdata)

# We select "phenotype" as the coefficient associated with H0
# All other coefficients are considered as "determined" coefficients
rr <- initRandrot(Y = edata, X = mod1, coef.h = 2)

coefs <- function(Y, mod){
  t(coef(lm.fit(x = mod, y = t(Y))))
}

# Specification of the H0 model
mod0 <- model.matrix(~1, pdata)

coef01 <- coefs(edata, mod0)
coef02 <- coefs(randrot(rr), mod0)

head(cbind(coef01, coef02))
all.equal(coef01, coef02)
```

However, the coefficients of the *full model* do change when the rotation is performed under the null hypothesis:

```{r}
coef11 <- coefs(edata, mod1)
coef12 <- coefs(randrot(rr), mod1)

head(cbind(coef11, coef12))
```

This is in principle how resampling based tests are constructed.
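Because the rotation acts only within the residual space of the determined (here: intercept-only) model, we also expect the per-feature residual sum of squares of the *null model* to stay constant under rotation. The following sketch (our addition, not evaluated here) illustrates this expectation under the default settings of `initRandrot` (identity sample correlation matrix):

```{r, eval=FALSE}
# Per-feature residual sum of squares for a given model
rss <- function(Y, mod) colSums(resid(lm.fit(x = mod, y = t(Y)))^2)

# Under the null model (mod0), a random rotation is expected to leave the
# per-feature residual sum of squares unchanged (expected result: TRUE)
all.equal(rss(edata, mod0), rss(randrot(rr), mod0))
```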
# Batch effect correction with subsequent linear model analysis {#BE-correction}

In the following we outline the use of the `randRotation` package for linear model analysis following batch effect correction as a prototype application in current biomedical research. We highlight the problems faced when batch effect correction is separated from data analysis with linear models. Although data analysis procedures with combined batch effect correction and model inference should be preferred, the separation of batch effect correction from subsequent analysis is unavoidable for certain applications. In the following we use `ComBat` (`r Biocpkg("sva")` package) as a model of a "black box" batch effect correction procedure. Subsequent linear model analysis is done with the `r Biocpkg("limma")` package. We use `limma` and `ComBat` as model functions for demonstration, as these are frequently used in biomedical research. We want to emphasize that the described issues are neither specific to these functions, nor do we intend to disparage these highly useful packages.

## Skewed null distribution of p values {#skewed-null}

Separating a (possibly non-linear) batch effect correction method from the linear model analysis can in practice lead to non-uniform (skewed) null distributions of p-values when testing linear model coefficients. The intuitive reason for this skew is that the batch effect correction method combines information from all samples to remove the batch effects. After removing the batch effects, the samples are therefore no longer independent. For further information please refer to section [df estimation](#dfest) and to the [references](#references).

The following example demonstrates the influence of the batch effect correction on the distribution of p-values. We first load the `r Biocpkg("limma")` package and create the model matrix with the intercept term and the phenotype term.

```{r message=FALSE, results='hold'}
library(limma)
mod1 = model.matrix(~phenotype, pdata)
```

Remember that our [sample dataset](#dataset) is pure noise. Thus, without batch effect correction, fitting a linear model with `limma` and testing the phenotype coefficient results in uniformly distributed p-values:

```{r}
# Linear model fit
fit0 <- lmFit(edata, mod1)
fit0 <- eBayes(fit0)

# P-values for the phenotype coefficient
p0 <- topTable(fit0, coef = 2, number = Inf, adjust.method = "none",
               sort.by = "none")$P.Value

hist(p0, freq = FALSE, col = "lightgreen", breaks = seq(0,1,0.1))
abline(1,0, col = "blue", lty = 2)
qqunif(p0)
```

We now perform batch effect correction using `ComBat` (`r Biocpkg("sva")` package):

```{r}
library(sva)
edata.combat = ComBat(edata, batch = pdata$batch, mod = mod1)
```

Performing the model fit and testing the phenotype effect on this modified dataset results in a skewed p-value distribution:

```{r}
# Linear model fit
fit1 <- lmFit(edata.combat, mod1)
fit1 <- eBayes(fit1)

# P-values for the phenotype coefficient
p.combat <- topTable(fit1, coef = 2, number = Inf, adjust.method = "none",
                     sort.by = "none")$P.Value

hist(p.combat, freq = FALSE, col = "lightgreen", breaks = seq(0,1,0.1))
abline(1,0, col = "blue", lty = 2)
qqunif(p.combat)
```

The histogram and the Q-Q plot clearly show that the null distribution of p-values is skewed when linear model analysis is performed after batch effect correction in a data analysis pipeline of this type. This problem is known and described e.g. in [@Nygaard2015]. Note that the null distribution is skewed although the experimental design is balanced.

## Unskewed p-values by random rotation {#unskewed}

In the following, we take the data analysis pipeline of the previous section and incorporate it into the random rotation environment. The `initBatchRandrot` function initialises the random rotation object with the design matrix of the linear model. We thereby specify the coefficients associated with the null hypothesis $H0$ (see also \@ref(basic-principle)) with `coef.h`. Additionally, the batch covariate is provided. Note that the implementation with `initBatchRandrot` implicitly assumes a block design of the correlation matrix and of the restricted rotation matrix, see also \@ref(nonblock).

```{r}
init1 <- initBatchRandrot(edata, mod1, coef.h = 2, batch = pdata$batch)
```

We now pack the data analysis pipeline from above into our `statistic` function, which is called for the original (non-rotated) data and for all data rotations:

```{r}
statistic <- function(Y, batch, mod, coef){
  Y.tmp <- sva::ComBat(dat = Y, batch = batch, mod = mod)

  fit1 <- limma::lmFit(Y.tmp, mod)
  fit1 <- limma::eBayes(fit1)

  # The "abs" is needed for "pFdr" to calculate two-tailed statistics
  abs(fit1$t[,coef])
}
```

Data rotation and calling the `statistic` function is performed with `rotateStat`.

```{r message=FALSE, results='hide'}
res1 <- rotateStat(initialised.obj = init1, R = 10, statistic = statistic,
                   batch = pdata$batch, mod = mod1, coef = 2)
```

As we use pooling of the rotated statistics, `R = 10` resamples should be sufficient (see also \@ref(number-resamples)). We now calculate rotation based p-values with `pFdr`:

```{r}
p.rot <- pFdr(res1)
head(p.rot)

hist(p.rot, freq = FALSE, col = "lightgreen", breaks = seq(0,1,0.1))
abline(1,0, col = "blue", lty = 2)
qqunif(p.rot)
```

We see that our rotated p-values are roughly uniformly distributed.
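As an additional informal check (our addition, not part of the original pipeline, and not evaluated here), one could compare the rotated and non-rotated p-values against the uniform distribution, e.g. with a Kolmogorov-Smirnov test. Note that resampling based p-values are discrete, so this test is only approximate and `ks.test` may warn about ties:

```{r, eval=FALSE}
# Approximate check of uniformity; resampling p-values are discrete,
# so ks.test is only a rough guide here (ties may produce warnings)
ks.test(p.rot, "punif")
ks.test(p.combat, "punif")
```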
For illustration of the skewness of non-rotated p-values, we compare the non-rotated p-values `p.combat` (batch corrected), the rotated p-values `p.rot` (batch corrected) and the p-values from the linear model analysis without batch correction `p0`.

```{r}
plot(density(log(p.rot/p0)), col = "salmon", main = "Log p ratios",
     panel.first = abline(v=0, col = "grey"),
     xlim = range(log(c(p.rot/p0, p.combat/p0))))
lines(density(log(p.combat/p0)), col = "blue")
legend("topleft", legend = c("log(p.combat/p0)", "log(p.rot/p0)"),
       lty = 1, col = c("blue", "salmon"))
```

We see the skew of the non-rotated p-values towards lower values. This is also visible in the illustration below:

```{r}
plot(p0, p.combat, log = "xy", pch = 20, col = "lightblue", ylab = "")
points(p0, p.rot, pch = 20, col = "salmon")
abline(0,1, lwd = 1.5, col = "black")
legend("topleft", legend = c("p.combat", "p.rot"), pch = 20,
       col = c("lightblue", "salmon"))
```

The non-rotated p-values are on average lower than the rotated p-values:

```{r}
plot(density(log(p.combat/p.rot)), col = "blue", main = "log(p.combat / p.rot)",
     panel.first = abline(v=0, col = "grey"))
```

## Resampling based FDR {#FDR}

In addition to resampling based p-values, the method `pFdr` can also be used for estimating resampling based false discovery rates [@Yekutieli1999].

```{r}
fdr.q  <- pFdr(res1, "fdr.q")
fdr.qu <- pFdr(res1, "fdr.qu")
fdr.BH <- pFdr(res1, "BH")

FDRs <- cbind(fdr.q, fdr.qu, fdr.BH)
ord1 <- order(res1$s0, decreasing = TRUE)
FDRs.sorted <- FDRs[ord1,]

matplot(FDRs.sorted, type = "l", lwd = 2)
legend("bottomright", legend = c("fdr.q", "fdr.qu", "BH"), lty = 1:5,
       lwd = 2, col = 1:6)

head(FDRs.sorted)
```

# How many resamples? {#number-resamples}

As a rule of thumb, the number of resampled values of the statistic of interest should be at least as high as the number of features in order to obtain unskewed null distributions (with n features, you still expect one feature with a p-value as small as 1/n under the null hypothesis). If the marginal distributions of the statistics are exchangeable (see also `?randRotation::pFdr`), pooling of the rotated statistics can be used. By pooling the rotated statistics, the number of random rotations can be reduced substantially.

# Degrees of freedom (df) estimation {#dfest}

The method `df_estimate` can be used for estimating the local degrees of freedom of mapped data for an arbitrary mapping function. The local degrees of freedom are thereby estimated as the rank of the local linear approximation of the mapping (i.e. the rank of the Jacobian matrix). See also `?df_estimate` for further details. In the following we provide some examples of mapping functions. For information on the dataset and the model matrix, please refer to \@ref(dataset) and \@ref(quick-start).

```{r}
mapping <- function(Y) abs(Y)
df_estimate(edata, mapping = mapping)
ncol(edata)
```

As expected, `edata` and `mapping(edata)` have equal degrees of freedom, so the mapping function `abs` *consumes* no df.

```{r}
mapping <- function(Y) floor(Y)
df_estimate(edata, mapping = mapping)
```

As expected, the `floor` function consumes all degrees of freedom (as long as the variations are small). A more practical example is using `removeBatchEffect` from the `r Biocpkg("limma")` package for removing batch effects from the data:

```{r}
mapping <- function(Y, batch, mod) {
  limma::removeBatchEffect(Y, batch = batch, design = mod)
}
df_estimate(edata, mapping = mapping, batch = pdata$batch, mod = mod1)
```

As we have 3 batches, the mapping function consumes 2 df by estimating the 2 batch effect coefficients.
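In the same manner, `df_estimate` could in principle be applied to a non-linear "black box" correction such as `ComBat` (`r Biocpkg("sva")` package). The following sketch is our addition and is not evaluated here; the estimated df depends on the correction method, its shrinkage behaviour and the data, so we do not state an expected value.

```{r, eval=FALSE}
# Sketch: estimate the df consumed by ComBat as a black-box mapping
mapping <- function(Y, batch, mod) {
  sva::ComBat(dat = Y, batch = batch, mod = mod)
}
df_estimate(edata, mapping = mapping, batch = pdata$batch, mod = mod1)
```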
# Correlation matrices with non-block design {#nonblock}

The function `initBatchRandrot` implicitly assumes a block design of the sample correlation matrix and of the restricted rotation matrix (see also `?randRotation::initBatchRandrot`). This means that correlations between samples are allowed within batches, but are assumed to be zero between batches. Simply put, biological replicates or technical replicates (or any other cause of non-zero sample correlation) are contained within single batches and are not distributed across different batches. In this case, each batch has its own sample correlation matrix and correlation coefficients between batches are assumed to be zero. This assumption seems restrictive at first glance, but it is computationally efficient, as the random rotation can be performed for each batch independently. This is how `initBatchRandrot` is implemented. However, a general correlation matrix with non-block design (non-zero sample correlations between batches) can be initialised with `initRandrot`. Thus, `initBatchRandrot` simply provides a convenient wrapper for sample correlation matrices with block design, or for the rotation of data with batch structure. For a correlation matrix of $I_{n \times n}$, `initRandrot` and `initBatchRandrot` are practically equivalent.

# Session info

```{r}
sessionInfo()
```

# References {#references}