The Aha! Moments In 4 Popular Machine Learning Algorithms
At each step, the Decision Tree algorithm attempts to build the tree so that entropy is minimized. Think of entropy, more formally, as the amount of 'disorder' or 'confusion' a certain divider (the conditions) has, and its opposite as 'information gain' -- how much a divider adds information and insight to the model. Feature splits with the highest information gain (and thus the lowest entropy) are placed at the top. Note that condition 1 separates the classes cleanly, and therefore has low entropy and high information gain. The same cannot be said for condition 3, which is why it is placed near the bottom of the Decision Tree.
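To make this concrete, here is a minimal sketch in Python of how entropy and information gain can be computed for a candidate split. The function names, the two-class labels, and the example splits are illustrative assumptions, not part of any particular library:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    counts = Counter(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(parent, splits):
    """Entropy of the parent node minus the weighted entropy of its children."""
    total = len(parent)
    weighted = sum(len(s) / total * entropy(s) for s in splits)
    return entropy(parent) - weighted

# A clean split (like condition 1): each child is pure, so gain is maximal.
clean = information_gain(["A", "A", "B", "B"], [["A", "A"], ["B", "B"]])
# A messy split (like condition 3): children stay mixed, so gain is zero.
messy = information_gain(["A", "A", "B", "B"], [["A", "B"], ["A", "B"]])
print(clean, messy)  # → 1.0 0.0
```

The tree-building algorithm simply evaluates each candidate condition this way and places the highest-gain split nearest the root.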
Aug-19-2020, 07:30:15 GMT