
Random Forests: Predictions and Ensembles of Decision Trees

Explore the concept of random forests as ensembles of decision trees designed to reduce overfitting and improve prediction accuracy. Learn how bootstrapping and feature selection increase model diversity, understand hyperparameters like n_estimators and max_features, and discover how predictions are combined. Gain insights into interpreting random forest models through feature importances for effective feature selection.

As we saw in the previous exercise, decision trees are prone to overfitting. This tendency is one of the principal criticisms of decision trees, despite their high interpretability. However, we were able to limit this overfitting, to an extent, by capping the maximum depth to which the tree could be grown.
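As a concrete illustration, here is a minimal sketch, assuming scikit-learn and a synthetic dataset (both our choices, not part of the original exercise), of how capping max_depth narrows the gap between training and test accuracy:

```python
# Sketch: capping max_depth to curb a decision tree's overfitting.
# Assumes scikit-learn; the dataset here is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for depth in [None, 4]:  # None: grow the tree fully; 4: limit its depth
    tree = DecisionTreeClassifier(max_depth=depth, random_state=42)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={tree.score(X_train, y_train):.3f}, "
          f"test={tree.score(X_test, y_test):.3f}")
```

The fully grown tree typically scores near-perfectly on the training data but worse on the test data; the depth-limited tree trades a little training accuracy for a smaller generalization gap.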

Concept behind random forests

Building on the concepts of decision trees, machine learning researchers have leveraged multiple trees as the basis for more complex procedures, resulting in some of the most powerful and widely used predictive models. In this section, we will focus on random forests of decision trees. Random forests are examples of what are called ensemble models, because they are formed by combining other, simpler models. By combining the predictions of many different trees, the ensemble achieves lower variance, and therefore better generalization, than any single tree could on its own.
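To make this concrete, the following sketch (again assuming scikit-learn and synthetic data of our choosing) fits a random forest and highlights the two hyperparameters named above, n_estimators and max_features:

```python
# Sketch: fitting a random forest ensemble.
# Assumes scikit-learn; the dataset is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

forest = RandomForestClassifier(
    n_estimators=200,     # number of trees in the ensemble
    max_features="sqrt",  # features considered per split, for tree diversity
    random_state=42,
)
forest.fit(X_train, y_train)
print(f"train={forest.score(X_train, y_train):.3f}, "
      f"test={forest.score(X_test, y_test):.3f}")
```

Each tree is trained on a bootstrapped sample of the data, and max_features controls how many features each split may consider; both sources of randomness make the trees more diverse, which is what lets their combined prediction outperform any one of them.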

Random forests

Once you understand decision trees, the concept behind random forests is fairly simple. That is, a random forest is simply a collection of decision trees, each trained on a bootstrapped sample of the data and restricted to random subsets of the features at each split, whose individual predictions are combined: by majority vote for classification, or by averaging for regression.
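As a sketch of both ideas, the snippet below (scikit-learn, synthetic data, all parameter choices ours) inspects the votes of the individual trees behind one forest prediction, and ranks features by feature_importances_ for feature selection:

```python
# Sketch: how tree votes combine, and how feature importances are read.
# Assumes scikit-learn; the dataset is synthetic, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Each fitted tree votes on one example. Note that scikit-learn actually
# averages the trees' predicted class probabilities, which usually (but not
# always) agrees with a simple majority vote over hard predictions.
votes = [int(tree.predict(X[:1])[0]) for tree in forest.estimators_]
print("ensemble prediction:", forest.predict(X[:1])[0],
      "| majority of tree votes:", max(set(votes), key=votes.count))

# Importances sum to 1; larger values mark features that contributed more
# to the splits across the forest, which makes them candidates to keep.
ranked = np.argsort(forest.feature_importances_)[::-1]
print("features ranked by importance:", ranked)
```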