===== NEEMS Lecture: 3. Brief Introduction to Decision Trees =====

In [[https://ease-crc.org/material/ease/machinelearning/data_preparation|the previous section]] we prepared the NEEMS data for training by one-hot encoding it. Here we will get into decision trees: how they are built and how to read them.

The statistical model of choice is a decision tree. Such models have the advantage of being visually inspectable and comparatively easy to understand. Based on the information about the //type// and //parent// of a task, the decision tree model predicts the most probable //next// action. In this section, a minimal set of example data is used to train a decision tree. It should give a first grasp of what we are about to do with our NEEMS data later.
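As a minimal sketch, such a tree can be trained with scikit-learn on a toy table. The object names and the `example_data` layout below are assumptions (the lecture's actual table is not reproduced in this excerpt); only the total of 8 samples and the class counts per goal location follow the numbers discussed in this section.

```python
# Sketch (assumed toy data): train a decision tree that predicts where
# to put an object. The object names are illustrative, not the lecture's
# exact example_data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

example_data = pd.DataFrame({
    "object": ["cup", "cup", "plate", "plate", "spoon", "milk", "milk", "bowl"],
    "goal":   ["cupboard", "cupboard", "dishwasher", "dishwasher",
               "drawer", "fridge", "fridge", "cupboard"],
})

# One-hot encode the categorical input feature, as in the previous section
X = pd.get_dummies(example_data[["object"]])
y = example_data["goal"]

model = DecisionTreeClassifier(criterion="gini").fit(X, y)
print(export_text(model, feature_names=list(X.columns)))
```

The printed text representation shows the same structure the rendered tree image does: one split per node, with the remaining samples narrowing toward the leaves.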
//samples = 8// tells the number of samples available for the decision at this node. In the root, we have all 8 entries of our example_data at our disposal.

//value = [3, 2, 1, 2]// gives the number of samples per goal location, ordered as in the table above: [cupboard, dishwasher, drawer, fridge]. The further down we go along the tree, the more decisions have been made, and the fewer possibilities are left. So if the first node decides that the object is **not** milk, the predicted goal location can never be //fridge//: since no object other than milk goes into the fridge, the last entry of this array will be 0 for all nodes below.

//class = cupboard// is the output decision of the model. In this example the model tries to find a place to put an object. If the object is milk (as in the root node) the decision is always to put it in the fridge (the right branch of the root node). For any node other than a leaf, the class represents the most likely place to put the object up to this point of decision-making.
The calculation of the Gini impurity is shown in the lecture, as well as its implementation. For our example_data this value is 0.71875, rounded to 0.719 in the decision tree above.

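The root value of 0.71875 can be reproduced directly from the class counts //value = [3, 2, 1, 2]// with the standard Gini impurity formula (1 minus the sum of squared class proportions); this sketch is an independent reimplementation, not the lecture's code:

```python
# Gini impurity from the class counts of a node
def gini(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Root node of the example tree: value = [3, 2, 1, 2]
print(gini([3, 2, 1, 2]))  # 0.71875, rounded to 0.719 in the tree plot
```

A pure node, e.g. counts [2, 0, 0, 0], yields a Gini impurity of exactly 0.0.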
In other words: the Gini impurity gives an idea of how //sure// the model's decision is up to this point. The higher the value, the more predictions are still possible and the less certain a prediction would be at this point of the tree. A lower Gini score in the leaves is mostly better. On the other hand, if all the leaf nodes have a Gini score of 0.0, the model might be overfitted. Keep in mind to always leave a grain of vagueness in the model, such that new input features can still result in a prediction with some degree of certainty.

**3.5 Cost Function / 3.6 Picking a Threshold / 3.7 Determining the Root Node**
{{ :ease:machinelearning:gini_cost_calc.png |}}

This cost function is applied to every feature, and the features are sorted in ascending order by their cost value. Determining the root node is essential for optimizing the model. A feature with especially high influence on the model is considered to provide a lot of information, which makes it reasonable to place it closer to the root of the decision tree. Decisions that require less branching are made much faster.
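The exact cost function is given in the lecture image above; a common form, assumed here as an illustration, is the sample-weighted average of the child nodes' Gini impurities. For instance, splitting the root counts [3, 2, 1, 2] on "is the object milk?" produces a non-milk child with counts [3, 2, 1, 0] and a pure milk child with counts [0, 0, 0, 2]:

```python
# Sketch of a weighted-Gini split cost (assumed form of the cost function;
# the lecture's exact formula is shown in the linked image).
def gini(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def split_cost(left_counts, right_counts):
    n_left, n_right = sum(left_counts), sum(right_counts)
    n = n_left + n_right
    # Weight each child's impurity by its share of the samples
    return (n_left / n) * gini(left_counts) + (n_right / n) * gini(right_counts)

# Root [3, 2, 1, 2] split on "object is milk": [3, 2, 1, 0] vs. [0, 0, 0, 2]
print(round(split_cost([3, 2, 1, 0], [0, 0, 0, 2]), 4))  # 0.4583
```

The candidate split with the lowest cost is chosen; here the milk split scores well because one child is already pure (Gini 0.0).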
  
  
In [[https://ease-crc.org/material/ease/machinelearning/machine_learning_theory|the next section]] we will talk about some more machine learning theory.
ease/machinelearning/decision_trees.1592824516.txt.gz · Last modified: 2020/06/22 11:15 by s_fuyedc