Decision Trees

Learn about the history, basic representation, and terminology of decision trees.

History of decision trees

Decision trees, and the machine learning models based on them, in particular random forests and gradient-boosted trees, are fundamentally different from Generalized Linear Models (GLMs), such as logistic regression. GLMs are rooted in the theories of classical statistics, which have a long history. The mathematics behind linear regression was originally developed at the beginning of the 19th century by Legendre and Gauss; this is why the normal distribution is also known as the Gaussian distribution.

In contrast, while the idea of using a tree-like process to make decisions is relatively simple, decision trees became popular as mathematical models much more recently. The procedures we currently use to grow decision trees for predictive modeling were published in the 1980s. The reason for this later development is that the methods used to grow decision trees rely on computational power, that is, the ability to crunch a lot of numbers quickly. We take such capabilities for granted nowadays, but they were not widely available until relatively recently in the history of mathematics.

An example of a decision tree

So, what is meant by a decision tree? We can illustrate the basic concept using a practical example. Imagine that you are considering whether or not to venture outdoors on a certain day. You will base your decision solely on the weather: in particular, on whether the sun is shining and how warm it is. If it is sunny, your tolerance for cool temperatures is increased, and you will go outside if the temperature is at least 10 °C.

However, if it’s cloudy, you require somewhat warmer temperatures and will only go outside if the temperature is 15 °C or more. Your decision-making process could be represented by the following tree:
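As a complement to the tree diagram, here is a minimal sketch of the same decision logic in Python. The function name will_go_outside and its parameters are illustrative choices, not part of the original example; each if test corresponds to an internal node of the tree, and each return corresponds to a leaf.

```python
def will_go_outside(sunny: bool, temperature_c: float) -> bool:
    """Decide whether to go outside, based on sunshine and temperature."""
    if sunny:
        # Sunshine raises the tolerance for cooler weather: 10 °C is enough.
        return temperature_c >= 10
    # On a cloudy day, require at least 15 °C.
    return temperature_c >= 15


# Sample days exercising each branch (leaf) of the tree.
for sunny, temp in [(True, 12), (True, 8), (False, 16), (False, 12)]:
    decision = will_go_outside(sunny, temp)
    print(f"sunny={sunny}, temperature={temp} °C -> go outside: {decision}")
```

Note that the nested conditionals ask about one feature at a time, first the sunshine, then the temperature, which is exactly how a decision tree splits the data at each node.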
