Glmnet Vignette

Introduction

Glmnet is a package that fits a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elasticnet penalty at a grid of values for the regularization parameter lambda. The algorithm is extremely fast, and can exploit sparsity in the input matrix x. It fits linear, logistic, multinomial, Poisson, and Cox regression models. A variety of predictions can be made from the fitted models. It can also fit multi-response linear regression.

The authors of glmnet are Jerome Friedman, Trevor Hastie, Rob Tibshirani and Noah Simon, and the R package is maintained by Trevor Hastie. The matlab version of glmnet is maintained by Junyang Qian. This vignette describes the usage of glmnet in R.

glmnet solves the following problem \[ \min_{\beta_0,\beta} \frac{1}{N} \sum_{i=1}^{N} w_i l(y_i,\beta_0+\beta^T x_i) + \lambda\left[(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1\right], \] over a grid of values of \(\lambda\) covering the entire range. Here \(l(y,\eta)\) is the negative log-likelihood contribution for observation \(i\); e.g. for the Gaussian case it is \(\frac{1}{2}(y-\eta)^2\). The elastic-net penalty is controlled by \(\alpha\), and bridges the gap between lasso (\(\alpha=1\), the default) and ridge (\(\alpha=0\)). The tuning parameter \(\lambda\) controls the overall strength of the penalty.

It is known that the ridge penalty shrinks the coefficients of correlated predictors towards each other while the lasso tends to pick one of them and discard the others. The elastic-net penalty mixes these two; if predictors are correlated in groups, an \(\alpha=0.5\) tends to select the groups in or out together. This is a higher level parameter, and users might pick a value upfront, else experiment with a few different values. One use of \(\alpha\) is for numerical stability; for example, the elastic net with \(\alpha = 1 - \epsilon\) for some small \(\epsilon > 0\) performs much like the lasso, but removes any degeneracies and wild behavior caused by extreme correlations.
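As a small illustration of experimenting with the mixing parameter (this assumes a numeric predictor matrix x and response vector y, such as the ones loaded in the Quick Start section below, and that the package has been loaded):

fit_lasso = glmnet(x, y, alpha = 1)    # lasso (the default)
fit_mix   = glmnet(x, y, alpha = 0.5)  # elastic net; tends to select correlated groups together
fit_ridge = glmnet(x, y, alpha = 0)    # ridge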

The glmnet algorithms use cyclical coordinate descent, which successively optimizes the objective function over each parameter with others fixed, and cycles repeatedly until convergence. The package also makes use of the strong rules for efficient restriction of the active set. Due to highly efficient updates and techniques such as warm starts and active-set convergence, our algorithms can compute the solution path very fast.

The code can handle sparse input-matrix formats, as well as range constraints on coefficients. The core of glmnet is a set of fortran subroutines, which make for very fast execution.
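A brief sketch of both features, again assuming a predictor matrix x and response y are available (the coefficient limits shown are arbitrary illustrations):

library(Matrix)
xs = Matrix(x, sparse = TRUE)                                 # sparse input matrix
fit_box = glmnet(xs, y, lower.limits = -1, upper.limits = 1)  # box constraints on the coefficients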

The package also includes methods for prediction and plotting, and a function that performs K-fold cross-validation.

Installation

Like many other R packages, the simplest way to obtain glmnet is to install it directly from CRAN. Type the following command in R console:

install.packages("glmnet", repos = "http://cran.us.r-project.org")

Users may change the repos options depending on their locations and preferences. Other options such as the directories where to install the packages can be altered in the command. For more details, see help(install.packages).

Here the R package has been downloaded and installed to the default directories.

Alternatively, users can download the package source from http://cran.r-project.org/web/packages/glmnet/index.html and install it from the command line (for example with R CMD INSTALL) to the desired location.


Quick Start

The purpose of this section is to give users a general sense of the package, including the components, what they do and some basic usage. We will briefly go over the main functions, see the basic operations and have a look at the outputs. After this section, users should have a better idea of which functions are available, which one to choose, and at least where to seek help. More details are given in later sections.

First, we load the glmnet package:

library(glmnet)
## Loading required package: Matrix
## Loading required package: foreach
## Loaded glmnet 2.0-7

The default model used in the package is the Gaussian linear model or “least squares”, which we will demonstrate in this section. We load a set of data created beforehand for illustration. Users can either load their own data or use those saved in the workspace.

data(QuickStartExample)

The command loads an input matrix x and a response vector y from this saved R data archive.
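To get a quick sense of the loaded objects, their dimensions can be inspected (output omitted here):

dim(x)     # number of observations by number of predictors
length(y)  # length of the response vector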

We fit the model using the most basic call to glmnet.

fit = glmnet(x, y)

“fit” is an object of class glmnet that contains all the relevant information of the fitted model for further use. We do not encourage users to extract the components directly. Instead, various methods are provided for the object such as plot, print, coef and predict that enable us to execute those tasks more elegantly.

We can visualize the coefficients by executing the plot function:

plot(fit)

Each curve corresponds to a variable. It shows the path of its coefficient against the \(\ell_1\)-norm of the whole coefficient vector as \(\lambda\) varies. The axis above indicates the number of nonzero coefficients at the current \(\lambda\), which is the effective degrees of freedom (df) for the lasso. Users may also wish to annotate the curves; this can be done by setting label = TRUE in the plot command.
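For example:

plot(fit, label = TRUE)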

A summary of the glmnet path at each step is displayed if we just enter the object name or use the print function:

print(fit)
## 
## Call:  glmnet(x = x, y = y) 
## 
##       Df    %Dev   Lambda
##  [1,]  0 0.00000 1.631000
##  [2,]  2 0.05528 1.486000
##  [3,]  2 0.14590 1.354000
##  [4,]  2 0.22110 1.234000
##  [5,]  2 0.28360 1.124000
##  [6,]  2 0.33540 1.024000
##  [7,]  4 0.39040 0.933200
##  [8,]  5 0.45600 0.850300
##  [9,]  5 0.51540 0.774700
## [10,]  6 0.57350 0.705900
## [11,]  6 0.62550 0.643200
## [12,]  6 0.66870 0.586100
## [13,]  6 0.70460 0.534000
## [14,]  6 0.73440 0.486600
## [15,]  7 0.76210 0.443300
## [16,]  7 0.78570 0.404000
## [17,]  7 0.80530 0.368100
## [18,]  7 0.82150 0.335400
## [19,]  7 0.83500 0.305600
## [20,]  7 0.84620 0.278400
## [21,]  7 0.85550 0.253700
## [22,]  7 0.86330 0.231200
## [23,]  8 0.87060 0.210600
## [24,]  8 0.87690 0.191900
## [25,]  8 0.88210 0.174900
## [26,]  8 0.88650 0.159300
## [27,]  8 0.89010 0.145200
## [28,]  8 0.89310 0.132300
## [29,]  8 0.89560 0.120500
## [30,]  8 0.89760 0.109800
## [31,]  9 0.89940 0.100100
## [32,]  9 0.90100 0.091170
## [33,]  9 0.90230 0.083070
## [34,]  9 0.90340 0.075690
## [35,] 10 0.90430 0.068970
## [36,] 11 0.90530 0.062840
## [37,] 11 0.90620 0.057260
## [38,] 12 0.90700 0.052170
## [39,] 15 0.90780 0.047540
## [40,] 16 0.90860 0.043310
## [41,] 16 0.90930 0.039470
## [42,] 16 0.90980 0.035960
## [43,] 17 0.91030 0.032770
## [44,] 17 0.91070 0.029850
## [45,] 18 0.91110 0.027200
## [46,] 18 0.91140 0.024790
## [47,] 19 0.91170 0.022580
## [48,] 19 0.91200 0.020580
## [49,] 19 0.91220 0.018750
## [50,] 19 0.91240 0.017080
## [51,] 19 0.91250 0.015570
## [52,] 19 0.91260 0.014180
## [53,] 19 0.91270 0.012920
## [54,] 19 0.91280 0.011780
## [55,] 19 0.91290 0.010730
## [56,] 19 0.91290 0.009776
## [57,] 19 0.91300 0.008908
## [58,] 19 0.91300 0.008116
## [59,] 19 0.91310 0.007395
## [60,] 19 0.91310 0.006738
## [61,] 19 0.91310 0.006140
## [62,] 20 0.91310 0.005594
## [63,] 20 0.91310 0.005097
## [64,] 20 0.91310 0.004644
## [65,] 20 0.91320 0.004232
## [66,] 20 0.91320 0.003856
## [67,] 20 0.91320 0.003513

It shows from left to right the number of nonzero coefficients (Df), the percent (of null) deviance explained (%dev) and the value of \(\lambda\) (Lambda). Although by default glmnet calls for 100 values of lambda, the program stops early if %dev does not change sufficiently from one lambda to the next (typically near the end of the path).
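To see how many \(\lambda\) values were actually used, check the length of the lambda component of the fitted object; for the fit above it corresponds to the number of rows printed (67), rather than the requested default of 100.

length(fit$lambda)  # number of lambda values actually fit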

We can obtain the actual coefficients at one or more \(\lambda\)’s within the range of the sequence:

coef(fit,s=0.1)
## 21 x 1 sparse Matrix of class "dgCMatrix"
##                        1
## (Intercept)  0.150928072
## V1           1.320597195
## V2           .          
## V3           0.675110234
## V4           .          
## V5          -0.817411518
## V6           0.521436671
## V7           0.004829335
## V8           0.319415917
## V9           .          
## V10          .          
## V11          0.142498519
## V12          .          
## V13          .          
## V14         -1.059978702
## V15          .          
## V16          .          
## V17          .          
## V18          .          
## V19          .          
## V20         -1.021873704

(Why s and not lambda? In case we later want to allow users to specify the model size in other ways.) Users can also make predictions at specific \(\lambda\)’s with new input data:

nx = matrix(rnorm(10*20),10,20)
predict(fit,newx=nx,s=c(0.1,0.05))
##                1          2
##  [1,] -1.5659398 -1.8835438
##  [2,]  2.4626019  2.5885411
##  [3,]  1.3356272  1.5107074
##  [4,]  2.0722165  2.1176081
##  [5,]  4.9134142  5.1645335
##  [6,]  0.5481650  0.5502954
##  [7,]  3.8549137  4.0755130
##  [8,] -3.7176846 -4.1666367
##  [9,]  2.2380324  2.4804913
## [10,]  0.9153951  0.9529989

The function glmnet returns a sequence of models for the users to choose from. In many cases, users may prefer the software to select one of them. Cross-validation is perhaps the simplest and most widely used method for that task.

cv.glmnet is the main function to do cross-validation here, along with various supporting methods such as plotting and prediction. We still act on the sample data loaded before.

cvfit = cv.glmnet(x, y)

cv.glmnet returns a cv.glmnet object, which is “cvfit” here, a list with all the ingredients of the cross-validation fit. As for glmnet, we do not encourage users to extract the components directly except for viewing the selected values of \(\lambda\). The package provides well-designed functions for potential tasks.

We can plot the object.

plot(cvfit)

It includes the cross-validation curve (red dotted line), and upper and lower standard deviation curves along the \(\lambda\) sequence (error bars). Two selected \(\lambda\)’s are indicated by the vertical dotted lines (see below).

We can view the selected \(\lambda\)’s and the corresponding coefficients. For example,

cvfit$lambda.min
## [1] 0.08307327

lambda.min is the value of \(\lambda\) that gives minimum mean cross-validated error. The other \(\lambda\) saved is lambda.1se, which gives the most regularized model such that error is within one standard error of the minimum. To use that, we only need to replace lambda.min with lambda.1se above.

coef(cvfit, s = "lambda.min")
## 21 x 1 sparse Matrix of class "dgCMatrix"
##                       1
## (Intercept)  0.14936467
## V1           1.32975267
## V2           .         
## V3           0.69096092
## V4           .         
## V5          -0.83122558
## V6           0.53669611
## V7           0.02005438
## V8           0.33193760
## V9           .         
## V10          .         
## V11          0.16239419
## V12          .         
## V13          .         
## V14         -1.07081121
## V15          .         
## V16          .         
## V17          .         
## V18          .         
## V19          .         
## V20         -1.04340741
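Similarly, to apply the one-standard-error rule instead, simply replace lambda.min with lambda.1se (output not shown here):

cvfit$lambda.1se               # largest lambda with error within 1 SE of the minimum
coef(cvfit, s = "lambda.1se")  # coefficients at that value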

Note that the coefficients are represented in the sparse matrix format. The reason is that the solutions along the regularization path are often sparse, and hence it is more efficient in time and space to use a sparse format. If you prefer non-sparse format, pipe the output through as.matrix().
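For example, to convert the sparse coefficient vector into an ordinary dense matrix:

as.matrix(coef(cvfit, s = "lambda.min"))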

Predictions can be made based on the fitted cv.glmnet object. Let’s see a toy example.

predict(cvfit, newx = x[1:5,], s = "lambda.min")
##               1
## [1,] -1.3647490
## [2,]  2.5686013
## [3,]  0.5705879
## [4,]  1.9682289
## [5,]  1.4964211

newx is for the new input matrix and s, as before, is the value(s) of \(\lambda\) at which predictions are made.

That is the end of glmnet 101. With the tools introduced so far, users are able to fit the entire elastic net family, including ridge regression, using squared-error loss. In the package, there are many more options that give users a great deal of flexibility. To learn more, move on to later sections.


Linear Regression

Linear regression here refers to two families of models. One is gaussian, the Gaussian family, and the other is mgaussian, the multiresponse Gaussian family. We first discuss the ordinary Gaussian family, and then the multiresponse family.

Gaussian Family

gaussian is the default family option in the function glmnet. Suppose we have observations \(x_i \in \mathbb{R}^p\) and the responses \(y_i \in \mathbb{R}, i = 1, \ldots, N\). The objective function for the Gaussian family is \[ \min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}}\frac{1}{2N} \sum_{i=1}^N (y_i -\beta_0-x_i^T \beta)^2+\lambda \left[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\right], \] where \(\lambda \geq 0\) is a complexity parameter and \(0 \leq \alpha \leq 1\) is a compromise between ridge (\(\alpha = 0\)) and lasso (\(\alpha = 1\)).

Coordinate descent is applied to solve the problem. Specifically, suppose we have current estimates \(\tilde{\beta}_0\) and \(\tilde{\beta}_\ell\) for all \(\ell \in 1,\ldots,p\). By computing the gradient at \(\beta_j = \tilde{\beta}_j\) and simple calculus, the update is \[ \tilde{\beta}_j \leftarrow \frac{S\left(\frac{1}{N}\sum_{i=1}^N x_{ij}(y_i-\tilde{y}_i^{(j)}),\lambda \alpha\right)}{1+\lambda(1-\alpha)}, \] where \(\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{\ell \neq j} x_{i\ell} \tilde{\beta}_\ell\), and \(S(z, \gamma)\) is the soft-thresholding operator with value \(\text{sign}(z)(|z|-\gamma)_+\).
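To make the update concrete, here is a minimal R sketch of the soft-thresholding operator and a single coordinate update, assuming standardized predictors; the helper names are made up for illustration, and this is not the package's internal Fortran implementation.

# Soft-thresholding operator: S(z, gamma) = sign(z) * (|z| - gamma)_+
soft_threshold = function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)

# One coordinate update for beta_j, where xj is the j-th (standardized) column of x
# and rj contains the partial residuals y_i - y_i^(j) computed with variable j left out.
update_beta_j = function(xj, rj, lambda, alpha) {
  z = mean(xj * rj)  # (1/N) * sum_i x_ij * (y_i - y_i^(j))
  soft_threshold(z, lambda * alpha) / (1 + lambda * (1 - alpha))
}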

The formula above applies when the x variables are standardized to have unit variance (the default); it is slightly more complicated when they are not. Note that for “family=gaussian”, glmnet standardizes \(y\) to have unit variance before computing its lambda sequence (and then unstandardizes the resulting coefficients); if you wish to reproduce or compare results with other software, it is best to supply a standardized \(y\) first (using the “1/N” variance formula).

glmnet provides various options for users to customize the fit. We introduce some commonly used options here and they can be specified in the glmnet function.

  • alpha is for the elastic-net mixing parameter \(\alpha\), with range \(\alpha \in [0,1]\). \(\alpha = 1\) is the lasso (default) and \(\alpha = 0\) is the ridge.

  • weights is for the observation weights. Default is 1 for each observation. (Note: glmnet rescales the weights to sum to N, the sample size.)

  • nlambda is the number of \(\lambda\) values in the sequence. Default is 100.

  • lambda can be provided, but is typically not, and the program constructs a sequence. When automatically generated, the \(\lambda\) sequence is determined by lambda.max and lambda.min.ratio. The latter is the ratio of the smallest value of the generated \(\lambda\) sequence (say lambda.min) to lambda.max. The program then generates nlambda values linear on the log scale from lambda.max down to lambda.min. lambda.max is not given, but is easily computed from the input \(x\) and \(y\); it is the smallest value of lambda such that all the coefficients are zero. For alpha=0 (ridge) lambda.max would be \(\infty\); hence for this case we pick a value corresponding to a small alpha close to zero. (A sketch of supplying a user-defined sequence appears after this list.)

  • standardize is a logical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE.
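As mentioned in the lambda bullet above, users can also supply their own decreasing \(\lambda\) sequence. A minimal sketch, where the particular endpoints and length are arbitrary illustrations:

lambda_seq = exp(seq(log(1), log(0.001), length.out = 50))  # 50 values, log-spaced, decreasing
fit_custom = glmnet(x, y, lambda = lambda_seq)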

For more information, type help(glmnet) or simply ?glmnet.

As an example, we set \(\alpha = 0.2\) (more like a ridge regression), and give double weights to the latter half of the observations. To avoid too long a display here, we set nlambda to 20. In practice, however, the number of values of \(\lambda\) is recommended to be 100 (default) or more. In most cases, it does not come with extra cost because of the warm-starts used in the algorithm, and for nonlinear models leads to better convergence properties.

fit = glmnet(x, y, alpha = 0.2, weights = c(rep(1,50),rep(2,50)), nlambda = 20)

We can then print the glmnet object.

print(fit)
## 
## Call:  glmnet(x = x, y = y, weights = c(rep(1, 50), rep(2, 50)), alpha = 0.2,      nlambda = 20) 
## 
##       Df   %Dev   Lambda
##  [1,]  0 0.0000 7.939000
##  [2,]  4 0.1789 4.889000
##  [3,]  7 0.4445 3.011000
##  [4,]  7 0.6567 1.854000
##  [5,]  8 0.7850 1.142000
##  [6,]  9 0.8539 0.703300
##  [7,] 10 0.8867 0.433100
##  [8,] 11 0.9025 0.266700
##  [9,] 14 0.9101 0.164300
## [10,] 17 0.9138 0.101200
## [11,] 17 0.9154 0.062300
## [12,] 17 0.9160 0.038370
## [13,] 19 0.9163 0.023630
## [14,] 20 0.9164 0.014550
## [15,] 20 0.9164 0.008962
## [16,] 20 0.9165 0.005519
## [17,] 20 0.9165 0.003399

This displays the call that produced the object fit and a three-column matrix with columns Df (the number of nonzero coefficients), %dev (the percent deviance explained) and Lambda (the corresponding value of \(\lambda\)).

(Note that the digits option can be used to specify significant digits in the printout.)

Here the actual number of \(\lambda\)’s is less than that specified in the call. The reason lies in the stopping criteria of the algorithm. According to the default internal settings, the computations stop if either the fractional change in deviance down the path is less than \(10^{-5}\) or the fraction of explained deviance reaches \(0.999\). From the last few lines, we see the fraction of deviance does not change much and therefore the computation ends when the stopping criteria are met. We can change such internal parameters. For details, see the Appendix section or type help(glmnet.control).
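For instance, the early-stopping thresholds are controlled by the fdev and devmax arguments of glmnet.control; setting fdev to zero forces the whole requested path to be computed. This is a sketch; see help(glmnet.control) for the exact semantics.

glmnet.control(fdev = 0)  # disable the fractional-deviance-change stopping rule
fit_full = glmnet(x, y, alpha = 0.2, weights = c(rep(1,50), rep(2,50)), nlambda = 20)
glmnet.control(factory = TRUE)  # restore the factory-default settings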

We can plot the fitted object as in the previous section. There are more options in the plot function.

Users can decide what is on the X-axis. xvar allows three measures: “norm” for the \(\ell_1\)-norm of the coefficients (default), “lambda” for the log-lambda value and “dev” for %deviance explained.

Users can also label the curves with variable sequence numbers simply by setting label = TRUE.

Let’s plot “fit” against the log-lambda value and with each curve labeled.

plot(fit, xvar = "lambda", label = TRUE)