In machine learning, it is essential that data scientists validate models during training for accuracy and stability, ensuring that the model picks up on the meaningful trends and patterns in the data without also fitting too much noise.
Cross-validation is a common model validation technique for estimating how well a model will generalize to an entirely new data set. The technique sets aside a portion of the training data to test the model during the training phase. This held-out portion is called the validation set.
The training set and the validation set must be drawn from the same underlying data.
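As a minimal sketch of the idea, the following hypothetical helper shuffles a data set and holds out a fraction of it as the validation set (the function name and parameters are illustrative, not from any particular library):

```python
import random

def train_validation_split(data, validation_fraction=0.2, seed=0):
    """Shuffle the data and hold out a fraction as the validation set.

    Shuffling before splitting helps ensure both sets are drawn
    from the same underlying data.
    """
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    n_val = int(len(data) * validation_fraction)
    val_idx = set(indices[:n_val])
    train = [x for i, x in enumerate(data) if i not in val_idx]
    validation = [x for i, x in enumerate(data) if i in val_idx]
    return train, validation

data = list(range(10))
train, validation = train_validation_split(data)
```

With a 0.2 fraction on 10 samples, 2 items land in the validation set and 8 remain for training; no item appears in both.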
The cross-validation technique is popular for several reasons:
It helps us evaluate the quality of the model during training.
It helps us choose the model that is most likely to perform well on unseen data.
It helps guard against both overfitting and underfitting by revealing how performance changes on data the model was not trained on.
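The common k-fold variant of cross-validation partitions the data into k folds and, in turn, treats each fold as the validation set while training on the rest. A minimal sketch, assuming index-based splitting (the function name is illustrative):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, validation_indices) for each of k folds.

    Each sample appears in the validation set of exactly one fold,
    so every data point is used for both training and validation.
    """
    # Distribute any remainder across the first few folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples)
                     if i < start or i >= start + size]
        yield train_idx, val_idx
        start += size

splits = list(k_fold_splits(10, 5))
```

Averaging the model's score across the k validation folds gives a more stable estimate of generalization than a single hold-out split.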