===== NEEMS Lecture: 6. Evaluate the Next Action Classifier =====

In [[https://ease-crc.org/material/ease/machinelearning/classifier_training|the previous section]] we trained our decision tree model. In this final section, we evaluate what that model is capable of. The purpose of the model is to predict which action is most likely to happen next, depending on the previously performed action and its context. Simply execute the code blocks to see the outcome.

The first block shows a table with the precision, recall, F1-score, and support of the model. Read more about these terms [[https://ease-crc.org/material/ease/machinelearning/machine_learning_theory|in the earlier sections]] of this lesson.

The second code block is more interesting. It generates a confusion matrix, showing for each action how often it was predicted successfully. In an optimal model, this matrix would only have entries on the diagonal from top-left to bottom-right. If the confusion matrix is rendered without labels on the left and bottom, check the code [[https://ease-crc.org/material/ease/machinelearning/data_preparation|of your data preparation]] and compare it with the solutions provided in this tutorial. The most likely mistake is having removed the //NEXT// column.

{{ :ease:generated_cof_matrix.png |}}

Along the rows we see the true classes, i.e. what is expected to be predicted; the columns hold the classes predicted by our decision tree model. Notice that the most prominent and also correct prediction is //NoNext//, while the other classes line up fairly well along the optimal diagonal. Some predictions are either falsely classified or default to //NoNext//. There is noticeable confusion, especially between //AcquireGraspOfSomething// and //MovingToLocation//. Print out the narratives for //MovingToLocation// and compare them with the other table.
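For reference, the report and the matrix described above could be produced with code along the following lines. This is a minimal sketch, not the tutorial's exact notebook code: the feature rows and label lists below are invented stand-ins so the snippet runs on its own, whereas in the lesson the features come from the prepared NEEM narratives.

<code python>
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, confusion_matrix

# Toy stand-in data (assumption): in the tutorial these would be the
# encoded narrative features (parent, previous action, type) and the
# NEXT labels from the data preparation step.
X_train = [[0, 1], [1, 0], [1, 1], [0, 0], [2, 1], [2, 0]]
y_train = ["NoNext", "MovingToLocation", "AcquireGraspOfSomething",
           "NoNext", "MovingToLocation", "AcquireGraspOfSomething"]
X_test = [[0, 1], [1, 0], [1, 1]]
y_test = ["NoNext", "MovingToLocation", "AcquireGraspOfSomething"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_test)

# First block: per-class precision, recall, F1-score and support.
print(classification_report(y_test, y_pred))

# Second block: the raw confusion matrix; rows are true classes,
# columns are predicted classes (sorted alphabetically by default).
print(confusion_matrix(y_test, y_pred))
</code>

With the real narrative data, the same two calls yield the table and the matrix shown in this section.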
<code python>
# Filter the narratives for the MovingToLocation class and inspect the
# context columns the classifier sees.
other_narratives = narratives[(narratives.next == 'MovingToLocation')]
other_narratives[[header_names.PARENT, header_names.PREVIOUS, header_names.TYPE]]
</code>

Apparently both actions mostly share the same parent action, previous action, and type. Wanting to perform a //PickingUpAction// and having just finished //LookingForSomething//, which is a //VisualPerception//, can mean two things: either the robot will now perform //MovingToLocation// to relocate itself and better perceive the object, or it already perceived the object in the previous action and can now //AcquireGraspOfSomething//.

This has been the last section of [[https://ease-crc.org/material/ease/machinelearning|the NEEMS Lecture]]. For more information, feel free to contact our researchers.
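As a closing experiment, the ambiguity between the two confused classes can also be quantified: if both labels occur under identical (parent, previous, type) contexts, no decision tree can tell them apart from those features alone. The following is a minimal sketch with an invented four-row stand-in for the //narratives// table (the column names follow the tutorial; the rows are assumptions made only so the snippet runs on its own).

<code python>
import pandas as pd

# Toy stand-in for the prepared `narratives` table (assumption: four rows
# with identical context but two different NEXT labels).
narratives = pd.DataFrame({
    "parent":   ["PickingUpAnObject"] * 4,
    "previous": ["LookingForSomething"] * 4,
    "type":     ["PhysicalAction"] * 4,
    "next":     ["MovingToLocation", "AcquireGraspOfSomething",
                 "MovingToLocation", "AcquireGraspOfSomething"],
})

# Count how often each of the two confused classes occurs per context;
# equal counts under the same context mean the features cannot separate them.
context = ["parent", "previous", "type"]
both = narratives[narratives["next"].isin(
    ["MovingToLocation", "AcquireGraspOfSomething"])]
overlap = both.groupby(context)["next"].value_counts()
print(overlap)
</code>

On the real narratives, a large overlap like this explains the off-diagonal entries between the two classes in the confusion matrix.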