ease:workshop
==== Abstract ====
In order to enable robots to perform everyday activities with ease, they need to know when to do what. And who could be a better teacher than the human who wants the robots to perform these tasks? But instead of having us humans explain to the robots what to do, it is easier to just show them in a Virtual Reality environment.
| + | |||
| + | In the following video we can see how a human performs everyday activities within the VR environment. | ||
| + | |||
| + | < | ||
| + | <div style=" | ||
| + | < | ||
| + | </ | ||
| + | |||
Before we can teach the robots though, we have to understand what kind of data is being recorded and how we can access and inspect it, before we decide which parts of it to pass on to the robots.
This tutorial will aim at teaching how to interact with episodic memories and inspect the data stored within, using OpenEase.
==== Introduction ====
We can record everything the human does in a Virtual Reality environment fairly precisely. The position of the head of the human can be tracked by tracking the headset itself, while the position of the hands can be mapped to the position of the hand-held controllers.
Every interaction between the hands of the human with the virtual environment is recorded. We can replay these recordings (episodes) and inspect them, learning from them how a human does a certain task.

All of these things are small subconscious decisions we are not necessarily aware of, since we are just used to doing things a certain way. How should a robot know them? This is where episodic memories come in. We can use the recorded data from Virtual Reality to teach robots to do everyday activities without having to hard-code every little detail into the program of the robot (at least that's the goal). Before we can get there though, we need to understand and see how the data is stored, what we can learn from it and what information can be obtained in the first place.
==== Tutorials ====
== Setup the Tutorial Environment ==
Go to the website of OpenEase. Click on the tutorial for this workshop.
| {{ : | {{ : | ||
The tutorial will explain a few basics, as well as describe the goals of the following tasks. If you want to get more familiar with OpenEase first and this is your first encounter with OpenEase, KnowRob, and Prolog, it is advisable to also read and do the other tutorials available in the tutorial overview.
The following description comments on the built-in example queries of this tutorial.
In order to be able to visualize what happened within an episode, we first need to load the environment in which that episode took place, as well as the episode in question. In short: we need to tell OpenEase what to load from the database and that we want the result to be visualized. In order to achieve this, select the first of the built-in example queries.
Now the environment, as well as the episode, should be loaded.
Now these are a few queries chained together by a '','', which in Prolog acts as a logical AND: the next query is only executed if the previous one succeeded.
| - | '' | + | '' |
| - | '' | + | '' |
| '' | '' | ||
| '' | '' | ||
| - | Note how after executing these queries you also get their results in the result pane above the question answering pane. In this case, we only had one varibale, namely '' | + | Note how after executing these queries you also get their results in the result pane above the question answering pane. In this case, we only had one variable, namely '' |
<code prolog>
MapInst = http://
</code>
Whenever we use variables, it can always be the case that there are multiple solutions to one question. If you want to know if this is the only solution or if there are more, click on the little arrow button right next to the question mark button. There are solutions as long as you are able to click the button, and as long as you do not get a ''false'' as a result.
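As an alternative to clicking through the solutions one by one, all answers to a query can be collected into a single list with the standard Prolog predicate ''findall/3''. The atoms in this sketch are made up purely for illustration:

<code prolog>
% member/2 enumerates the elements of a list; findall/3 collects
% every binding of X into the list Solutions.
?- findall(X, member(X, [apple, bowl, spoon]), Solutions).
% Solutions = [apple, bowl, spoon]
</code>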
==== Tasks and Exercises ====
Now, let's see if you can figure out the following questions by yourself. There can be more than one way to ask the queries to obtain the result, so if you come up with a different solution than suggested here, that's totally fine.
You can try to solve these without any hints at all. In the following, first the task questions will be asked. Then there will be a section with hints and useful queries which might help to solve these tasks, and after that you will find the solutions.
| + | |||
| + | === Task Questions === | ||
| + | Task 1: Which type of objects are brought by the demonstrators? | ||
| + | |||
| + | Task 2: What are these objects' | ||
| + | |||
| + | Task 3: What are their final poses? | ||
| + | |||
| + | Task 4: Highlight the human when he starts to grasp an object | ||
| + | |||
| + | |||
==== Useful Predicates / Queries / Cheat Sheet ====
Here are some useful queries that might help you to solve the tasks. They are sectioned in two parts. The first contains general queries, the second ones are more NEEM and VR specific. This only means that the second group is tailored to the kind of episode data used in this tutorial.
| + | |||
| + | === general Predicates === | ||
| + | |||
| + | <code prolog> | ||
| + | entity(Action, | ||
| + | </ | ||
| + | Definition: returns Action whose task context is Context | ||
| + | variables: Context → Bound | ||
| + | |||
| + | <code prolog> | ||
| + | get_divided_subtasks_with_goal(Action, | ||
| + | </ | ||
| + | Definition: If an action, Action, has multiple subactions with the context Context in a way that it tries this subaction until it succeeds, this predicate returns successful instance and failed instances | ||
| + | Variables: Action, Context → Bound; SuccInstance, | ||
| + | |||
| + | <code prolog> | ||
| + | task_start(Act, | ||
| + | </ | ||
| + | Definition: returns start and end time point of given action, Act | ||
| + | Variables: Act → Bound; Start, End → Unbound | ||
| + | |||
| + | <code prolog> | ||
| + | entity(Base, | ||
| + | </ | ||
| + | Definition: returns base link individual | ||
| + | Variables: Base → Unbound | ||
| + | |||
| + | <code prolog> | ||
| + | object_pose_at_time(Obj, | ||
| + | </ | ||
| + | Definition: returns Obj’s pose at TimePoint. For non-moving objects, you can use 1 as TimePoint. | ||
| + | Variables: Obj→ Bound; Timepoint → Bound; Pose → Unbound | ||
| + | |||
| + | <code prolog> | ||
| + | owl_individual_of(Obj, | ||
| + | </ | ||
| + | Definition: returns objects with the given type or returns the types of the object. So, both Obj and Type can be bound and unbound | ||
| + | Hint: Type that is important to consider: knowrob: | ||
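Since both arguments can be bound or unbound, the predicate can be used in two directions. The class name below is only an assumption for illustration; which types exist depends on the loaded ontology:

<code prolog>
% With Type bound: enumerate all instances of that class.
owl_individual_of(Obj, knowrob:'FoodVessel').

% With Obj bound (e.g. from a previous query): enumerate its types.
owl_individual_of(Obj, Type).
</code>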
| + | |||
| + | <code prolog> | ||
| + | transform_between(Pose1, | ||
| + | </ | ||
| + | Definition: returns Pose1’s position and rotation with respect to Pose2. | ||
| + | Variables: Pose1, Pose2 → Bound | ||
| + | |||
| + | |||
| + | <code prolog> | ||
| + | findall(Var, | ||
| + | </ | ||
| + | Definition: finds out all of the solutions of PrologQuery. Then, it stores all of solutions of the unbound Var inside the List ListOfVar. | ||
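When the query is a conjunction of several goals, it has to be wrapped in parentheses. A sketch, assuming an episode with its objects is already loaded:

<code prolog>
% Collect one Type per object that has a pose at TimePoint 1;
% the conjunction inside findall/3 is grouped with parentheses.
findall(Type,
    ( owl_individual_of(Obj, Type),
      object_pose_at_time(Obj, 1, _Pose)
    ),
    Types).
</code>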
| + | |||
| + | <code prolog> | ||
| + | jpl_list_to_array(List, | ||
| + | </ | ||
| + | Definition: converts Prolog List to Java Array. | ||
| + | |||
| + | <code prolog> | ||
| + | append(List1, | ||
| + | </ | ||
| + | Definition: appends two lists together | ||
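''append/3'' is a standard Prolog built-in; the third argument unifies with the concatenation of the first two:

<code prolog>
?- append([1, 2], [3, 4], L).
% L = [1, 2, 3, 4]
</code>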
| + | |||
| + | <code prolog> | ||
| + | generate_feature_files(FeatureArrArr, | ||
| + | </ | ||
| + | Definition: given an float array of array, write this as a feature file (with CSV extension) into the given path | ||
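The three predicates above can be combined into an export pipeline. This is only a sketch under assumptions: the arities follow this cheat sheet, the object class is made up for illustration, and poses are assumed to come back as plain lists of floats:

<code prolog>
% Collect poses, convert the list of lists into a Java array of arrays,
% then write them out as a CSV feature file at the given path.
export_poses(Path) :-
    findall(Pose,
        ( owl_individual_of(Obj, knowrob:'FoodVessel'),
          object_pose_at_time(Obj, 1, Pose)
        ),
        Poses),
    findall(Arr,
        ( member(P, Poses),
          jpl_list_to_array(P, Arr)
        ),
        Arrs),
    jpl_list_to_array(Arrs, FeatureArrArr),
    generate_feature_files(FeatureArrArr, Path).
</code>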
| + | |||
| + | === VR/NEEM specific Predicates === | ||
| + | These are some queries which are within the '' | ||
| + | |||
| + | <code prolog> | ||
| + | ep_inst(EpInst). | ||
| + | </ | ||
| + | Definition: returns the instance of an Episode. | ||
| + | |||
| + | <code prolog> | ||
| + | u_occurs(EpInst, | ||
| + | </ | ||
| + | Definition: Given the Episode Instance, returns any occured Event Instance with the correspoding Start and End time stamps of the Event. | ||
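A typical entry point into an episode: first get the episode instance, then enumerate its events. The four-argument form is assumed from the definition (episode, event, start and end time stamps):

<code prolog>
% Enumerate every event of the episode together with its time stamps.
ep_inst(EpInst),
u_occurs(EpInst, EventInst, Start, End).
</code>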
| + | |||
| + | <code prolog> | ||
| + | obj_type(EventInst, | ||
| + | </ | ||
| + | Definition: Returns an Event Instance in which an Event of Type '' | ||
| + | |||
| + | <code prolog> | ||
| + | rdf_has(EventInst, | ||
| + | </ | ||
| + | Definition: Checks if rdf has an Object Instance within the given Event Instance with the propery '' | ||
| + | |||
| + | <code prolog> | ||
| + | iri_xml_namespace(ObjInst, | ||
| + | </ | ||
| + | Definition: Cuts of the Namespace-prefix of the Object Instance, returning it's short Name. | ||
| + | |||
| + | <code prolog> | ||
| + | actor_pose(EpInst, | ||
| + | </ | ||
| + | Definition: returns the Pose of the Object (ObjShortName) during a specific Timestamp (Start) from the given Episode Instance. The Pose will be an array of seven values. The first three are the x y z coordinates, | ||
| + | Note: You need to use the '' | ||
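Putting the note into practice, a sketch of the usual pattern (ObjInst, EpInst and Start are assumed to be bound by earlier goals such as ''rdf_has'' and ''u_occurs''):

<code prolog>
% Shorten the IRI first, then query the pose at the event's start.
iri_xml_namespace(ObjInst, _, ObjShortName),
actor_pose(EpInst, ObjShortName, Start, Pose).
</code>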
| + | |||
| + | <code prolog> | ||
| + | show_world_state(EpInst, | ||
| + | </ | ||
| + | Definition: visualizes the world state of a specific Episode Instance during a given Time Stamp. | ||
| + | |||
| + | <code prolog> | ||
| + | highlight(Object). | ||
| + | </ | ||
| + | Definition: Highlights a specific Object Instance in red within the visualization pane. | ||
==== Solution suggestions ====
The following contains some solution suggestions. In the first task, we will show multiple solution methods to showcase how different some approaches can be. After this, only one solution will be presented. If you get the same result using different predicates, or even some which are not mentioned above, that's totally fine.

The idea behind each solution will be briefly explained before the solution itself is shown. It can also be used as a hint towards finding a solution on your own.
== Task 1: Which type of objects are brought by the demonstrators? ==
**Solution 1**
Idea: Get an episode instance. Check if the episode has an event instance with a start and end time. Keep only event instances of type ''GraspingSomething'', get the object each event acts on and its type, and collect all types with ''findall''.

<code prolog>
findall(ObjType,
    ( ep_inst(EpInst),
      u_occurs(EpInst, EventInst, Start, End),
      obj_type(EventInst, knowrob:'GraspingSomething'),
      rdf_has(EventInst, knowrob:'objectActedOn', ObjInst),
      obj_type(ObjInst, ObjType)
    ),
    ListOfObjectTypes).
</code>
| + | |||
| + | Result of the Query: | ||
| + | <code prolog> | ||
| + | [...] | ||
| + | ListOfObjectTypes = [ | ||
| + | 0 = knowrob: | ||
| + | 1 = knowrob: | ||
| + | 2 = knowrob: | ||
| + | 3 = knowrob: | ||
| + | 4 = knowrob: | ||
| + | 5 = knowrob: | ||
| + | 6 = knowrob: | ||
| + | 7 = knowrob: | ||
| + | ] | ||
| + | </ | ||
| + | |||
| + | **Solution | ||
| <code prolog> | <code prolog> | ||
| findall(_T, | findall(_T, | ||
| Line 80: | Line 248: | ||
| 4 = knowrob: | 4 = knowrob: | ||
| 5 = knowrob: | 5 = knowrob: | ||
| - | ... | + | [...] |
| </ | </ | ||
== Task 2: What are these objects' initial poses? ==
Idea: Very similar to task 1, just without the ''findall'', since we only need one instance here. Going from the solution of task 1, the name of the object is shortened and the position of the object at the start of the ''GraspingSomething'' event is returned; the world state at that time is visualized and the object highlighted.
**Solution**
<code prolog>
ep_inst(EpInst),
u_occurs(EpInst, EventInst, Start, End),
obj_type(EventInst, knowrob:'GraspingSomething'),
rdf_has(EventInst, knowrob:'objectActedOn', ObjInst),
obj_type(ObjInst, ObjType),
iri_xml_namespace(ObjInst, _, ObjShortName),
actor_pose(EpInst, ObjShortName, Start, PoseObjStart),
show_world_state(EpInst, Start),
highlight(ObjInst).
</code>
**Result:**
<code prolog>
ObjShortName = KoellnMuesliCranberry_2pcO
EpInst = http://
PoseObjStart = [
    0 = -3.5357048511505127
    1 = -2.2400009632110596
    2 = 1.0195969343185425
    3 = 0.8191515803337097
    4 = 0.000012126906767662149
    5 = -0.00002010483331105206
    6 = 0.5735770463943481
]
EventInst = http://
Start = http://
ObjType = knowrob:
End = http://
ObjInst = http://
</code>
| + | |||
| + | {{ : | ||
== Task 3: What are their final poses? ==
Idea: Same as the above, except the time stamp changes from start to end.
| + | |||
| + | **Solution** | ||
| + | <code prolog> | ||
| + | ep_inst(EpInst), | ||
| + | u_occurs(EpInst, | ||
| + | obj_type(EventInst, | ||
| + | rdf_has(EventInst, | ||
| + | obj_type(ObjInst, | ||
| + | iri_xml_namespace(ObjInst, | ||
| + | actor_pose(EpInst, | ||
| + | show_world_state(EpInst, | ||
| + | highlight(ObjInst). | ||
| + | </ | ||
| + | |||
| + | **Result:** | ||
| + | <code prolog> | ||
| + | ObjShortName = KoellnMuesliCranberry_2pcO | ||
| + | EpInst = http:// | ||
| + | PoseObjStart = [ | ||
| + | 0 = -3.984745979309082 | ||
| + | 1 = -1.8089243173599243 | ||
| + | 2 = 0.9824687242507935 | ||
| + | 3 = 0.9975854754447937 | ||
| + | 4 = 0.006023405119776726 | ||
| + | 5 = -0.023459946736693382 | ||
| + | 6 = 0.06508690118789673 | ||
| + | ] | ||
| + | EventInst = http:// | ||
| + | Start = http:// | ||
| + | ObjType = knowrob: | ||
| + | End = http:// | ||
| + | ObjInst = http:// | ||
| + | </ | ||
| + | |||
| + | {{ : | ||
| + | |||
| + | |||
== Task 4: Highlight the human when he starts to grasp an object ==
Idea: Same as the above. The new change is that we want the position of the person. The person who interacts with the objects is referred to as ''CharacterCamera'', since the position of the human is tracked via the camera of the VR headset.
| + | |||
| + | **Solution** | ||
| + | <code prolog> | ||
| + | ep_inst(EpInst), | ||
| + | u_occurs(EpInst, | ||
| + | obj_type(EventInst, | ||
| + | rdf_has(EventInst, | ||
| + | obj_type(ObjInst, | ||
| + | iri_xml_namespace(ObjInst, | ||
| + | rdf_has(CameraInst, | ||
| + | iri_xml_namespace(CameraInst, | ||
| + | show_world_state(EpInst, | ||
| + | actor_pose(EpInst, | ||
| + | highlight(CameraInst). | ||
| + | </ | ||
| + | |||
| + | **Result: | ||
| + | <code prolog> | ||
| + | EpInst = http:// | ||
| + | CameraStartPose = [ | ||
| + | 0 = -3.3242106437683105 | ||
| + | 1 = -1.7523821592330933 | ||
| + | 2 = 1.5412843227386475 | ||
| + | 3 = -0.4662284851074219 | ||
| + | 4 = -0.24837808310985565 | ||
| + | 5 = -0.10210808366537094 | ||
| + | 6 = 0.8429195284843445 | ||
| + | ] | ||
| + | End = http:// | ||
| + | CameraShortName = CharacterCamera_LPbi | ||
| + | ObjInst = http:// | ||
| + | ObjShortName = KoellnMuesliCranberry_2pcO | ||
| + | EventInst = http:// | ||
| + | ObjType = knowrob: | ||
| + | Start = http:// | ||
| + | CameraInst = http:// | ||
| + | </ | ||
| + | {{ : | ||
==== Best practices and Feedback ====
=== Best practices ===
  * Build up queries one by one, stacking them on one another. Once you know how some of them work, you can build on top of that.
  * Ask questions if you have the chance to.
  * Visualization can be very helpful (if it works).
=== Feedback ===
If you have any suggestions or feedback about this tutorial, or if you encounter bugs, feel free to contact us. We appreciate any given feedback :)
ease/workshop.1594814967.txt.gz · Last modified: 2020/07/15 12:09 by hawkin
