Interesting interpretations of Ridge and Lasso
Unveiling the beauty of simple penalizations in linear regression
Machine Learning is all about balance. You can have a strong learner that overfits your training data but struggles with out-of-sample prediction. In that case, you may use the right amount and kind of regularization to improve the overall performance. Two well-known regularization methods are the L1 and L2 penalties.
Although these penalties are used in many regression and classification algorithms, I feel that their linear regression counterparts are usually not studied in depth by many data science practitioners. Understanding the effects of these penalties in simple scenarios can guide us when navigating more complex ones, such as a neural network layer.
This post will introduce interesting interpretations of ridge and lasso regressions that are described in books and other blog posts (see the references at the end of this article). I intend to add some visual explanations and to compile the different interpretations that unveil the beauty of how simple penalizations yield completely different outcomes.
Note: Unfortunately, I didn't have enough time to dive into the details of the maths and statistics included in this post, or to organize the sections as well as I would have liked. The sections may be dense in terms of content. Still, I believe this post may be useful to you. If you have any questions or suggestions, feel free to reach me on LinkedIn or by email.
The Ordinary Linear Regression
Linear regression is the simplest, but no less powerful, regression method we can think of. If you feel comfortable with the equations of linear regression, please feel free to jump to the next section. If not, let's do a quick recap.
Suppose we have observations of a dependent variable $y$ and some explanatory variables $x_1, \dots, x_p$. We can use a linear model to fit the data: $y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip} + \epsilon_i$. In matrix form, we may rewrite it as $y = X\beta + \epsilon$, where $X$ is the matrix containing the input variables and a column of 1s representing the intercept, and $\beta$ is the vector of coefficients. We can estimate our $\beta$s in a way that minimizes the mean squared error between the predicted values $\hat{y} = X\hat{\beta}$ and the true values $y$:

$$\hat{\beta} = \underset{\beta}{\arg\min}\ \frac{1}{N}\sum_{i=1}^{N}\left(y_i - x_i^T\beta\right)^2 = \underset{\beta}{\arg\min}\ \frac{1}{N}\|y - X\beta\|^2$$

The problem has a closed-form solution and we can write:

$$\hat{\beta} = (X^TX)^{-1}X^Ty$$
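As a quick sanity check, the closed form is straightforward to compute with numpy. The snippet below is a minimal sketch on synthetic data of my own (the variable names and simulated coefficients are illustrative assumptions, not part of the original example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 observations, 2 explanatory variables plus a column of 1s for the intercept.
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
true_beta = np.array([1.0, 2.0, -3.0])
y = X @ true_beta + rng.normal(scale=0.1, size=n)

# Closed-form OLS estimate: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # should be close to [1, 2, -3]
```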
Note that if the matrix $X$ does not have linearly independent columns, or has highly correlated columns, the matrix $X^TX$ is ill-conditioned. In real-world problems, we know this happens all the time. Fortunately, a solution has been developed, and it's strongly related to L2 penalization. We call it the ridge regression.
Ridge and multicollinearity
The ridge regression was actually developed to solve the issues of linear regression in the case of non-orthogonal inputs, also known as multicollinearity. In the ordinary linear regression case, correlated inputs create unstable $\hat{\beta}$s and increase their variance. Suppose we had a design matrix $X$ with no intercept, whose two columns are almost identical, together with a vector of outcomes $y$. We can easily find the least squares coefficients. Now, assume that some random additive noise had perturbed our observed outcomes: our estimates for $\hat{\beta}$ would change completely. Small perturbations in the observed outcome can significantly change the coefficients estimated with linear regression, as the sketch below illustrates.
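A quick way to reproduce this effect is the following sketch; the data is synthetic and of my own choosing (two almost identical columns, no intercept), not a specific dataset from this post:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two almost perfectly correlated columns, no intercept.
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=1e-3, size=n)   # x2 is x1 plus a tiny noise
X = np.column_stack([x1, x2])
y = x1 + x2                                 # "true" relation: beta = [1, 1]

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

print(ols(X, y))                            # close to [1, 1]

# Perturb the outcomes slightly and refit.
y_noisy = y + rng.normal(scale=0.01, size=n)
print(ols(X, y_noisy))                      # the two estimates drift far apart from [1, 1]
```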
Another look at the effects of multicollinearity on the regression coefficients comes from the interpretation of linear regression as an orthogonal projection. You can understand linear regression as the projection of the observations onto the plane spanned by the columns of $X$. This plane is the closest to the training points in terms of average distance. However, when the correlation between $x_1$ and $x_2$ gets close to 1, the subspace spanned by the inputs collapses to a line rather than a plane. The regression algorithm tries to find a plane, but there are infinitely many planes that contain that line. Any outlier in the training data will give a (bad) hint that there's a plane that fits the data.
The ridge
By adding a special penalization on the magnitude of the coefficients of the regression, however, only one solution solves the problem. The ridge regression can be stated as:

$$\hat{\beta}^{ridge} = \underset{\beta}{\arg\min}\ \sum_{i=1}^{N}\left(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\right)^2 \quad \text{subject to} \quad \sum_{j=1}^{p}\beta_j^2 \le t$$

or

$$\hat{\beta}^{ridge} = \underset{\beta}{\arg\min}\left\{\sum_{i=1}^{N}\left(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2\right\}$$

There's a one-to-one correspondence between $t$ and $\lambda$. Note that the intercept $\beta_0$ is not included in the penalization.
After centering the input vectors $x_1, \dots, x_p$ and the target $y$, the solution has a closed form and can be written, disregarding the intercept, as:

$$\hat{\beta}^{ridge} = (X^TX + \lambda I)^{-1}X^Ty$$

where the input matrix $X$ now has $p$ columns (no column of 1s), and $I$ is the $p \times p$ identity matrix.
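This closed form is a couple of lines of numpy. The sketch below is a minimal implementation under the stated assumptions (centered $X$ and $y$); the function name is my own, and it is checked against scikit-learn's Ridge for reassurance:

```python
import numpy as np
from sklearn.linear_model import Ridge

def ridge_closed_form(X, y, lam):
    """Ridge coefficients (X^T X + lam * I)^{-1} X^T y for centered X and y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Quick check against scikit-learn on random centered data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3)); X -= X.mean(axis=0)
y = rng.normal(size=50); y -= y.mean()

print(np.allclose(ridge_closed_form(X, y, 1.0),
                  Ridge(alpha=1.0, fit_intercept=False).fit(X, y).coef_))  # True
```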
Now, compare this solution with the one from ordinary linear regression. If you're familiar with eigendecomposition, you have probably figured out why the term $\lambda I$ makes the ridge more stable. Supposing we had correlated columns in our data, some columns of $X$ will be "almost" linearly dependent, with at least one eigenvalue of $X^TX$ equal to or close to zero. However, the term $\lambda I$ increases every eigenvalue of $X^TX$ by $\lambda$, making it possible to invert the matrix.

The beauty of the L2 penalization starts to appear. The inversion of an ill-conditioned matrix amplifies the noise of the observations, and we obtain unstable results with ordinary linear regression when we have correlated features in our data, as in the example above. The ridge, however, avoids those instabilities. In the next section, we'll dive deeper to understand how the ridge is related to the covariance matrix and why it is more robust to noisy observations. We'll do that with the help of the Singular Value Decomposition, a matrix factorization technique.
Singular Value Decomposition
The theorem states that any matrix $X$ can be factorized into a singular value decomposition:

$$X = UDV^T$$

where $U$ and $V$ are orthogonal matrices, and $D$ is a diagonal matrix with non-negative values.
In other words, the SVD provides a way to decompose the matrix into three sequential linear transformations: a rotation, followed by a scaling, followed by another rotation. The amount of scaling is defined by the diagonal values of $D$, also known as singular values. This factorization is related to the eigendecomposition: if the original matrix has some singular value equal to zero, the scaling step will shrink the corresponding directions to zero. As a consequence, the number of zeros in the diagonal of $D$ is equal to the dimension of the null space of $X$.
When $X$ is the (centered) matrix containing our data, the sample covariance matrix $S = X^TX/N$ is strongly related to the SVD factorization. If we replace $X$ by its factorized form, we obtain:

$$X^TX = VDU^TUDV^T = VD^2V^T$$

which is the eigendecomposition of $X^TX$. The columns of $V$ are the eigenvectors of $X^TX$ (also called the principal components directions of $X$), and the eigenvalues of the sample covariance matrix are the squared singular values divided by $N$. You can feel how the SVD is related to Principal Components Analysis. But I find its relation to the ridge regression even more intriguing.
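This relation is easy to check numerically. The sketch below (synthetic data and variable names of my own) compares the eigendecomposition of the sample covariance matrix with the SVD of the centered data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
# Correlated synthetic data with 3 features.
X = rng.normal(size=(N, 3)) @ np.array([[1.0, 0.5, 0.0],
                                        [0.0, 1.0, 0.3],
                                        [0.0, 0.0, 1.0]])
X = X - X.mean(axis=0)                      # center the columns

U, d, Vt = np.linalg.svd(X, full_matrices=False)

cov = X.T @ X / N                           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)

# Eigenvalues of the covariance matrix equal the squared singular values divided by N.
print(np.sort(eigvals)[::-1])
print(d ** 2 / N)
# The rows of Vt match the eigenvectors in eigvecs, up to ordering and sign.
```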
After replacing $X$ in the closed-form solution of the ridge regression by its SVD factorization, we obtain the following:

$$\hat{\beta}^{ridge} = (X^TX + \lambda I)^{-1}X^Ty = V(D^2 + \lambda I)^{-1}DU^Ty$$

while, for the ordinary least squares, we obtain

$$\hat{\beta}^{ls} = (X^TX)^{-1}X^Ty = VD^{-1}U^Ty$$

Considering that $y = X\beta + \epsilon$, where $\epsilon$ is an irreducible error:

$$\hat{\beta}^{ls} = \beta + VD^{-1}U^T\epsilon \qquad \text{and} \qquad \hat{\beta}^{ridge} = V\,\mathrm{diag}\!\left(\frac{d_j^2}{d_j^2+\lambda}\right)V^T\beta + V\,\mathrm{diag}\!\left(\frac{d_j}{d_j^2+\lambda}\right)U^T\epsilon$$

Note that when we have collinearity between two features, one of the singular values shrinks towards zero. That would amplify the effect of the errors on the coefficients estimated by OLS, due to the near-zero denominators in the diagonal of $D^{-1}$. The ridge term, however, avoids those near-zero denominators, replacing $1/d_j$ with $d_j/(d_j^2+\lambda)$. The penalization is stronger in the smallest principal components of the input data, i.e. the directions with the smallest sample variance of the data.
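To make the shrinkage factors concrete, here is a small numpy sketch (again with synthetic, near-collinear data of my own) that compares the OLS factors $1/d_j$ with the ridge factors $d_j/(d_j^2+\lambda)$ and checks that the SVD form of the ridge solution matches the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 100, 3, 1.0
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 1] + rng.normal(scale=1e-3, size=n)   # near-collinear column
y = rng.normal(size=n)

U, d, Vt = np.linalg.svd(X, full_matrices=False)

print(1.0 / d)                 # OLS factors: explode when d_j is near zero
print(d / (d ** 2 + lam))      # ridge factors: bounded, small d_j are damped

beta_svd = Vt.T @ ((d / (d ** 2 + lam)) * (U.T @ y))
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(np.allclose(beta_svd, beta_closed))   # True
```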
The Lasso
The ridge is a stable technique to use in regressions, but it doesn't set any coefficients to zero and may fail to provide interpretable models. The lasso tries to attack this problem by replacing $\sum_j \beta_j^2$ with $\sum_j |\beta_j|$ in the penalization term:

$$\hat{\beta}^{lasso} = \underset{\beta}{\arg\min}\ \sum_{i=1}^{N}\left(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\right)^2 \quad \text{subject to} \quad \sum_{j=1}^{p}|\beta_j| \le t$$

or

$$\hat{\beta}^{lasso} = \underset{\beta}{\arg\min}\left\{\sum_{i=1}^{N}\left(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\right)^2 + \lambda\sum_{j=1}^{p}|\beta_j|\right\}$$

Just like the ridge, we can separate this optimization problem into two parts: first, centering $x_1, \dots, x_p$ and $y$ with regard to their means; then, solving the estimation problem without the intercept $\beta_0$. The lasso is a quadratic programming problem and doesn't have a closed-form solution in the general setting, but it can be solved with roughly the same computational cost as the ridge regression.
Let's see it in an example.
The prostate dataset by Stamey et al. (1987) contains measurements of Prostate Specific Antigen and some clinical measures in 97 men. We would like to discover which features may help us identify prostate cancer in men. Applying the lasso to the dataset and increasing the Lagrangian $\lambda$, we obtain the coefficient paths shown in the figure below.
Those paths are quite different from the ones we obtain with the ridge regression. The following figure illustrates the paths we obtain when we increase the Lagrangian $\lambda$ of the ridge:
Although the ridge is a powerful tool to avoid instabilities arising from correlated inputs in a regression, it isn't a tool for feature selection, as we can see from the figures above. The lasso, on the other hand, has characteristics that are useful for understanding the linear relations between the dependent and explanatory variables, and can be used as a feature selection tool.
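Coefficient paths like the ones discussed above can be computed for any dataset with scikit-learn. The sketch below uses the built-in diabetes dataset as a stand-in for the prostate data (an assumption of mine, not the dataset used in the figures):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, Ridge
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Fit lasso and ridge for a grid of penalization strengths and collect the coefficients.
alphas = np.logspace(-2, 2, 50)
lasso_coefs = [Lasso(alpha=a, max_iter=10_000).fit(X, y).coef_ for a in alphas]
ridge_coefs = [Ridge(alpha=a).fit(X, y).coef_ for a in alphas]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax1.plot(alphas, lasso_coefs); ax1.set_xscale("log"); ax1.set_title("Lasso paths")
ax2.plot(alphas, ridge_coefs); ax2.set_xscale("log"); ax2.set_title("Ridge paths")
ax1.set_xlabel("alpha"); ax2.set_xlabel("alpha"); ax1.set_ylabel("coefficient")
plt.show()
```

The lasso paths hit exactly zero one coefficient at a time as alpha grows, while the ridge paths only shrink smoothly towards zero.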
The two-dimensional case
We can also understand the ridge and lasso geometrically in a simplified setting with only two explanatory variables. This example uses a dataset with one dependent variable $y$ and two explanatory variables, $x_1$ and $x_2$.
As we described in the previous sections, the ridge keeps the least squares solution inside the ball defined by $\beta_1^2 + \beta_2^2 \le t$, and the lasso keeps it inside the diamond $|\beta_1| + |\beta_2| \le t$. The point with the lowest squared error inside the filled area will be our solution:
The corners of the diamond favour the presence of null coefficients. This becomes clearer in the following figures, which show the contours of the residual sum of squares surface:
Note how the upper corner is in an advantageous position compared to the other parts of the lasso contour. This helps us understand why the change in the penalization term affects the outcomes so drastically. In the ridge figure, the intersection between the contours and the constrained region clearly happens at a point where $\beta_1 \neq 0$ and $\beta_2 \neq 0$.
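If you want to reproduce figures of this kind, the following matplotlib sketch (with synthetic data of my own) draws the RSS contours of a two-coefficient problem together with the L2 ball and the L1 diamond:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.5, size=n)

# Residual sum of squares over a grid of (beta1, beta2) values.
b1, b2 = np.meshgrid(np.linspace(-1.5, 2.0, 100), np.linspace(-1.5, 2.0, 100))
rss = ((y[:, None, None]
        - X[:, 0, None, None] * b1
        - X[:, 1, None, None] * b2) ** 2).sum(axis=0)

t = 0.8  # constraint budget
theta = np.linspace(0, 2 * np.pi, 200)

fig, ax = plt.subplots(figsize=(5, 5))
ax.contour(b1, b2, rss, levels=20)
ax.plot(t * np.cos(theta), t * np.sin(theta), label="ridge: $\\beta_1^2+\\beta_2^2 \\leq t$")
ax.plot([t, 0, -t, 0, t], [0, t, 0, -t, 0], label="lasso: $|\\beta_1|+|\\beta_2| \\leq t$")
ax.axhline(0, color="gray", lw=0.5)
ax.axvline(0, color="gray", lw=0.5)
ax.set_xlabel(r"$\beta_1$"); ax.set_ylabel(r"$\beta_2$")
ax.legend()
plt.show()
```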
Lasso and Ridge from a Bayesian point of view
So far, we have seen how the L2 penalization works and why it avoids instabilities in the case of correlated features, and we have also seen why the lasso and the ridge behave differently from a geometric point of view. This section will show the relation between Bayesian inference and the ridge and lasso regressions.
Brief introduction to Bayesian linear regression
In Bayesian inference, we are not only interested in the values of our coefficients: we suppose they are random variables and we would like to know their probability distribution. To do that, we use Bayes' rule, updating our prior knowledge with the data to obtain our estimates.
The approach consists of creating a model that provides the probability distribution of the data given the parameters, $p(y \mid \beta, X)$, and having a prior on the distribution of our parameters, $p(\beta)$. Then, with the data, we can estimate our posterior distribution $p(\beta \mid y, X)$, which represents the distribution of our coefficients after updating our priors with the data. From Bayes' rule, we have:

$$p(\beta \mid y, X) = \frac{p(y \mid \beta, X)\, p(\beta)}{p(y \mid X)}$$

With fixed $y$, we can consider the term $p(y \mid X)$, which does not depend on $\beta$, constant, and rewrite the above equation in an equivalent form:

$$p(\beta \mid y, X) \propto p(y \mid \beta, X)\, p(\beta)$$
Note that the data affects the posterior inference only through $p(y \mid \beta, X)$. This term, when we look at it as a function of $\beta$ for a fixed $y$, is called the likelihood function. Suppose our model is $y = X\beta + \epsilon$, with $\epsilon \sim N(0, \sigma^2 I)$. We have the following conditional probability:

$$y \mid \beta, X \sim N(X\beta, \sigma^2 I)$$

and our likelihood function is

$$L(\beta) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y_i - x_i^T\beta)^2}{2\sigma^2}\right)$$

With the likelihood and our priors $p(\beta)$, we can estimate our posterior distribution for the regression coefficients:

$$p(\beta \mid y, X) \propto L(\beta)\, p(\beta)$$
Lasso and ridge as a consequence of our priors
The link between Bayesian inference and regularized linear regression lies in our priors $p(\beta)$. Two distributions play an important role: the Normal and the Laplace distributions:

$$f_{Normal}(\beta_j) = \frac{1}{\sqrt{2\pi\tau^2}} \exp\left(-\frac{\beta_j^2}{2\tau^2}\right), \qquad f_{Laplace}(\beta_j) = \frac{1}{2b} \exp\left(-\frac{|\beta_j|}{b}\right)$$
If we use a normal prior with mean zero for each coefficient, $\beta_j \sim N(0, \tau^2)$, we find that the ridge solution $\hat{\beta}^{ridge}$ is the mean (and the mode) of the posterior distribution $p(\beta \mid y, X)$, with $\lambda = \sigma^2/\tau^2$. The larger the Lagrangian $\lambda$, the smaller the prior variance, and the more we believe that our coefficients are close to zero.
Similarly, if we use a Laplacian prior with mean zero for our coefficients, $\beta_j \sim \mathrm{Laplace}(0, b)$, the mode of $p(\beta \mid y, X)$ is the lasso solution $\hat{\beta}^{lasso}$. Indeed, the Laplacian prior expresses a stronger conviction that our coefficients are exactly equal to zero.
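To make the link explicit, here is a short derivation sketch under the model above (Gaussian noise with variance $\sigma^2$ and independent priors on the coefficients). Taking the negative logarithm of the posterior turns the product of likelihood and prior into the familiar penalized least squares objectives:

$$-\log p(\beta \mid y, X) = \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - x_i^T\beta\right)^2 - \log p(\beta) + \text{const}$$

$$\beta_j \sim N(0, \tau^2): \quad -\log p(\beta) = \frac{1}{2\tau^2}\sum_{j=1}^{p}\beta_j^2 + \text{const} \;\Rightarrow\; \hat{\beta}^{MAP} = \underset{\beta}{\arg\min}\left\{\|y - X\beta\|^2 + \frac{\sigma^2}{\tau^2}\|\beta\|_2^2\right\}$$

$$\beta_j \sim \mathrm{Laplace}(0, b): \quad -\log p(\beta) = \frac{1}{b}\sum_{j=1}^{p}|\beta_j| + \text{const} \;\Rightarrow\; \hat{\beta}^{MAP} = \underset{\beta}{\arg\min}\left\{\|y - X\beta\|^2 + \frac{2\sigma^2}{b}\|\beta\|_1\right\}$$

Reading off the penalties, the ridge Lagrangian corresponds to $\lambda = \sigma^2/\tau^2$ and the lasso Lagrangian to $\lambda = 2\sigma^2/b$, which matches the intuition that a tighter prior (smaller $\tau^2$ or $b$) means a stronger penalization.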
Conclusion
When to use one over the other
The ridge and the lasso have unique properties that suit different use cases. Below, I'll summarise some rules of thumb that you can use to decide between them.
- If you want to avoid instabilities in the case of correlated predictors, use the ridge regression. When there's a group of variables among which pairwise correlations are very high, the lasso tends to select only one variable from the group and does not care which one is selected (Zou & Hastie, 2005).
- If you have a lot of data and a pure prediction problem, use the ridge regression. It has been empirically observed that, in many settings, the prediction performance of the lasso is dominated by that of the ridge (Tibshirani, 1996).
- If you want to select features, use the lasso. It has the ability to set coefficients exactly to zero, which is not a characteristic of the ridge regression. However, if you have $p > n$, i.e. fewer observations than variables, the lasso will select at most $n$ variables.
Final remarks
Albeit simple, the ridge and the lasso regressions have useful interpretations. When you apply L1 and L2 regularization terms in other statistical learning problems, you may remember the linear regression case to understand why the coefficient estimates are behaving in a certain way. There are still many other links that I didn't mention here, such as the Least Angle Regression coefficient paths and their relation to the lasso paths.
The adaptive lasso is another topic that I suggest for further reading. Zou (2006) showed that the lasso shrinkage produces biased estimates for large coefficients. The adaptive lasso has oracle properties, i.e. it performs as well as if the true underlying model were given in advance.
References
- Stamey, T. A., Yang, N., Hay, A. R., McNeal, J. E., Freiha, F. S., & Redwine, E. (1987). Prostate-specific antigen as a serum marker for adenocarcinoma of the prostate. New England Journal of Medicine, 317(15), 909–916.
- Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301–320.
- Hastie, T., Tibshirani, R., & Friedman, J. H. (2017). The elements of statistical learning: Data mining, inference, and prediction. Springer.
- Gundersen, G. (2018). Singular Value Decomposition as Simply as Possible. In Gregory Gundersen’s Blog. https://gregorygundersen.com/blog/2018/12/10/svd/
- Martin, C. D., & Porter, M. A. (2012). The extraordinary SVD. The American Mathematical Monthly, 119(10), 838–851.
- Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429.
- Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (1995). Bayesian data analysis. Chapman & Hall.
- Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267–288.