Regression Trees are decision trees that have a continuous target variable.
For example, imagine a newly launched product whose price depends on many constraints; this is the kind of problem where Regression Trees can be used. A regression tree is created through a process called recursive binary splitting: the data is repeatedly divided into two groups, with each split chosen to minimise the prediction error within the resulting groups.
Root: The beginning of the decision tree. This topmost node represents the first condition applied to the data.
Leaf: A terminal node of the tree. It holds a predicted value and does not point to any further condition.
Decision Node: An internal node after the root where the data is further divided into different categories based on a condition.
Child Node: A node that is further divided into different categories is called a parent node; the nodes that result from this division are called its child nodes.
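The terms above can be illustrated with a minimal sketch of a tree structure in Python. The class and field names here are illustrative, not part of any standard library:

```python
class Node:
    """One node of a regression tree (illustrative, not a full implementation)."""
    def __init__(self, value=None, feature=None, threshold=None, left=None, right=None):
        self.value = value          # prediction stored at a leaf
        self.feature = feature      # feature index tested at a decision node
        self.threshold = threshold  # split point for that feature
        self.left = left            # child node taken when x[feature] <= threshold
        self.right = right          # child node taken when x[feature] > threshold

def predict(node, x):
    """Walk from the root down to a leaf and return the leaf's value."""
    while node.left is not None:    # leaves have no children
        if x[node.feature] <= node.threshold:
            node = node.left
        else:
            node = node.right
    return node.value

# A tiny tree: the root tests feature 0, and both of its children are leaves.
root = Node(feature=0, threshold=5.0,
            left=Node(value=10.0),    # leaf reached when x[0] <= 5
            right=Node(value=20.0))   # leaf reached when x[0] > 5

print(predict(root, [3.0]))  # → 10.0
print(predict(root, [7.0]))  # → 20.0
```

Here the root is also the only parent node, and its two children are leaves; in a deeper tree the intermediate nodes would be decision nodes.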
Visualization of the data becomes easier, as users can identify and follow each step of the decision process.
A specific decision node can be given priority over other decision nodes.
As the regression tree progresses, irrelevant data is filtered out at each split. As a result, only the important data remains to be processed, which improves the efficiency and accuracy of the model.
Regression trees are easy to prepare and can be used to present data in meetings, presentations, etc.
Let’s look at one basic example of a regression tree that predicts the salary of a company’s employees based on their position.
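A short sketch of this example, assuming scikit-learn is available; the position levels and salary figures below are made up for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical data: position level (1 = junior .. 10 = CEO) and annual salary
# in thousands of dollars. These numbers are invented for the example.
X = np.array([[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]])
y = np.array([45.0, 50.0, 60.0, 80.0, 110.0, 150.0, 200.0, 300.0, 500.0, 1000.0])

# With no depth limit, the tree splits until each leaf holds one position,
# so predictions on the training positions reproduce the training salaries.
model = DecisionTreeRegressor()
model.fit(X, y)

print(model.predict([[3]]))   # salary predicted for position level 3
print(model.predict([[9]]))   # salary predicted for position level 9
```

Limiting the depth (e.g. `max_depth=2`) would instead group nearby positions into the same leaf, with each leaf predicting the mean salary of the positions it contains.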