===== NEEMS Lecture: 3. Brief Introduction to Decision Trees =====
In [[https://
The statistical model of choice is a decision tree. Such models have the advantage of being visually inspectable and comprehensible,
//samples = 8// tells how many training samples reach this node. At the root, all 8 entries of our example_data are still available.
//value = [3, 2, 1, 2]// tells the number of samples per goal location, ordered as in the table above: [cupboard, dishwasher, drawer, fridge]. The further down we go along the tree, the more decisions have been made, and the fewer possibilities for a decision are left. So if the first node decides that the object is **not** milk, the predicted goal location can never be //fridge//, since no object other than milk goes to the fridge, and therefore the last entry of this array will be 0 for all nodes below.
//class = cupboard// is the output decision of the model. In this example the model tries to find a place to put an object. If the object is milk (as tested in the root node), the decision is always to put it in the fridge (right branch from the root node). For any node other than a leaf, the class represents the most likely place to put the object up to this point of decision-making.
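The three node fields described above can be computed by hand. The following sketch uses a hypothetical reconstruction of example_data (the object names are made up; only the class counts [3, 2, 1, 2] are taken from the tree above, so the real lecture data may differ):

```python
# Minimal sketch of the samples / value / class fields of a tree node.
# example_data is HYPOTHETICAL: invented objects whose goal-location
# counts match value = [3, 2, 1, 2] over [cupboard, dishwasher, drawer, fridge].
from collections import Counter

GOALS = ["cupboard", "dishwasher", "drawer", "fridge"]

example_data = [
    ("cereal", "cupboard"), ("rice", "cupboard"), ("flour", "cupboard"),
    ("plate", "dishwasher"), ("cup", "dishwasher"),
    ("fork", "drawer"),
    ("milk", "fridge"), ("milk", "fridge"),
]

def node_stats(rows):
    """Compute the samples, value, and class fields shown in each tree node."""
    counts = Counter(goal for _, goal in rows)
    value = [counts[g] for g in GOALS]
    return {"samples": len(rows),
            "value": value,
            "class": GOALS[value.index(max(value))]}

root = node_stats(example_data)
# root -> {"samples": 8, "value": [3, 2, 1, 2], "class": "cupboard"}

# Splitting on "object is not milk" removes fridge from the left branch:
left = node_stats([r for r in example_data if r[0] != "milk"])
# left["value"][-1] -> 0, so fridge can no longer be predicted below this node
```

Note how the majority class (cupboard, with 3 samples) becomes the node's class, and how the "not milk" branch zeroes out the fridge entry, exactly as described above.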
The calculation of the Gini impurity is shown in the lecture, as well as its implementation. For our example_data this value is 0.71875, rounded to 0.719 in the decision tree above.
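The lecture's implementation is not reproduced here, but the standard formula (one minus the sum of squared class probabilities) recovers the root value from the class counts [3, 2, 1, 2] given above:

```python
# Gini impurity from the class counts at a node:
# G = 1 - sum((count / total) ** 2 for each class)
def gini_impurity(counts):
    """Return the Gini impurity for a list of per-class sample counts."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

print(gini_impurity([3, 2, 1, 2]))  # 0.71875, the root value of the tree above
print(gini_impurity([8, 0, 0, 0]))  # 0.0, a perfectly pure node
```

A pure node (all samples in one class) scores 0.0, while evenly mixed counts drive the score toward its maximum for the given number of classes.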
In other words: the Gini impurity gives an idea of how //sure// the model's decision is at this point. The higher the value, the more predictions are still possible and the less certain a prediction would be at this point of the decision tree. A lower Gini score in the leaves is generally better. On the other hand, if all leaf nodes have a Gini score of 0.0, the model might be overfitted. Keep a grain of vagueness in the model, so that new input features can still result in a prediction with a reasonable degree of certainty.
**3.5 Cost Function / 3.6. Picking a Threshold / 3.7 Determining the Root node**
ease/machinelearning/decision_trees.1592824556.txt.gz · Last modified: 2020/06/22 11:15 by s_fuyedc
