Regularization
Learn what regularization is and how it affects validation and training error.
Regularization is a technique for reducing the variance of a model. One common approach is to restrict the parameters to a subset of the parameter space. This reduction in variance, in turn, prevents overfitting.
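To make the effect on training and validation error concrete, here is a minimal NumPy sketch (not from the lesson; the data, polynomial degree, and penalty values are illustrative assumptions). It fits a degree-9 polynomial to noisy samples of a sine curve, with and without an L2 penalty on the coefficients; typically, the unpenalized fit attains lower training error but higher validation error.

```python
# Illustrative sketch: effect of an L2 penalty on training vs. validation
# error. All data below is synthetic; numbers are for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, 20)
x_val = rng.uniform(-1, 1, 20)

def true_fn(x):
    return np.sin(2 * np.pi * x)

y_train = true_fn(x_train) + rng.normal(scale=0.3, size=20)
y_val = true_fn(x_val) + rng.normal(scale=0.3, size=20)

def poly_features(x, degree=9):
    # Vandermonde matrix: columns are x**degree, ..., x**1, 1
    return np.vander(x, degree + 1)

def fit_l2(X, y, lam):
    # Penalized least squares: theta = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in [0.0, 0.1]:
    theta = fit_l2(poly_features(x_train), y_train, lam)
    train_mse = np.mean((poly_features(x_train) @ theta - y_train) ** 2)
    val_mse = np.mean((poly_features(x_val) @ theta - y_val) ** 2)
    print(f"lam={lam}: train MSE={train_mse:.3f}, val MSE={val_mse:.3f}")
```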
Why not choose a simple model?
One solution is to start with a model that is too simple and gradually increase its complexity while monitoring its performance on validation data. Regularization does the reverse: it starts with a complex model and decreases its complexity.
The answer has more to do with implementation than with theory. Regularization is more systematic to implement than gradually increasing model complexity. Furthermore, different regularization methods offer different ways to reduce a model's variance, and one may be better suited than another to the task at hand.
Shrinkage method
In shrinkage-based regularization, the parameters are restricted to stay close to zero (they shrink toward zero). One way is to apply this restriction explicitly while minimizing the loss, that is, to minimize $\mathcal{L}(\theta)$ subject to $\|\theta\| \le t$, where $\mathcal{L}$ is a loss function, $\theta$ is the vector of model parameters, and $t \ge 0$ controls how strongly the parameters are shrunk.
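In practice, the constrained problem is usually solved in its equivalent penalized (Lagrangian) form, $\min_\theta \mathcal{L}(\theta) + \lambda \|\theta\|^2$, where each $t$ corresponds to some $\lambda \ge 0$. Below is a minimal sketch assuming a linear model with squared-error loss (ridge regression, a standard shrinkage method); the synthetic data and $\lambda$ values are illustrative. Larger $\lambda$ shrinks the parameter norm toward zero.

```python
# Shrinkage via an L2 (ridge) penalty on a linear model with squared-error
# loss, solved in closed form. Data is synthetic and illustrative.
import numpy as np

def ridge_fit(X, y, lam):
    """Return theta minimizing ||X @ theta - y||^2 + lam * ||theta||^2."""
    d = X.shape[1]
    # Closed-form solution: theta = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
theta_true = rng.normal(size=10)
y = X @ theta_true + rng.normal(scale=0.5, size=50)

# The fitted parameter norm decreases as the penalty strength grows.
for lam in [0.0, 1.0, 100.0]:
    theta = ridge_fit(X, y, lam)
    print(f"lam={lam:6.1f}  ||theta|| = {np.linalg.norm(theta):.3f}")
```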