ease:workshop
==== Abstract ====
In order to enable robots to perform everyday activities with ease, they need to know when to do what. And who could be a better teacher than the human who wants the robots to perform these tasks? But instead of having us humans explain to the robots what to do, it is easier to just show them in a Virtual Reality environment.
| + | |||
| + | In the following video we can see how a human performs everyday activities within the VR environment. | ||
| + | |||
| + | < | ||
| + | <div style=" | ||
| + | < | ||
| + | </ | ||
| + | |||
Before we can teach the robots, though, we have to understand what kind of data is being recorded, and how we can access and inspect it, before deciding which parts of it to pass on to the robots.
This tutorial aims to teach how to interact with episodic memories and inspect the data stored within them, using [[http://
==== Introduction ====
We can record everything the human does in a Virtual Reality environment fairly precisely. The position of the human's head can be tracked via the headset itself, while the positions of the hands can be mapped from the positions of the joysticks.
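As an illustration, each tracked sample can be thought of as a timestamped 6-DoF pose (position plus orientation). The sketch below is purely hypothetical — the field names and layout are invented for this tutorial and are not the actual format written by the VR logger:

```python
from dataclasses import dataclass

@dataclass
class PoseSample:
    """One hypothetical tracking sample: a timestamp plus a 6-DoF pose
    (position in metres, orientation as a unit quaternion)."""
    timestamp: float    # seconds since the start of the episode
    position: tuple     # (x, y, z)
    orientation: tuple  # quaternion (x, y, z, w)

# The headset yields the head pose; each joystick yields a hand pose.
head = PoseSample(0.0, (0.0, 0.0, 1.7), (0.0, 0.0, 0.0, 1.0))
left_hand = PoseSample(0.0, (-0.3, 0.4, 1.2), (0.0, 0.0, 0.0, 1.0))

print(head.position[2])  # height of the head above the floor
```

An episode recording is then essentially a long stream of such samples for the head, the hands, and every object the hands touch.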
| - | |||
| - | :!: '' | ||
Every interaction between the hands of the human and the virtual environment is recorded. We can replay these recordings (episodes) and inspect them, learning from them how a human performs a certain everyday activity. Why do we do tasks in a specific order? (For instance, in a table setting scenario most people would put the plate down first and then get the cutlery.) How do we place objects? Which orientations of objects do we tend to prefer?
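The "order of actions" question becomes mechanical once an episode is viewed as a list of timestamped events. A minimal sketch with made-up data — the event names and objects below are hypothetical, not values from a real episode:

```python
# Hypothetical episode log: (timestamp, action, object) triples.
events = [
    (18.9, "PutDown", "Fork"),
    (12.4, "PutDown", "Plate"),
    (21.3, "PutDown", "Knife"),
]

# Sorting by timestamp recovers the order in which the demonstrator
# placed the objects -- here the plate goes down before the cutlery.
order = [obj for _, _, obj in sorted(events)]
print(order)  # ['Plate', 'Fork', 'Knife']
```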
| + | |||
| + | |||
All of these things are small subconscious decisions we are not necessarily aware of, since we are just used to doing things a certain way. How should a robot know them? This is where episodic memories come in. We can use the recorded data from Virtual Reality to teach robots to do everyday activities without having to hard-code every little detail into the program of the robot (at least that's the goal). Before we can get there though, we need to understand how the data is stored, what we can learn from it, and what information can be obtained in the first place.
| + | |||
==== Tutorials ====
== Set up the Tutorial Environment ==
Go to the website [[http://
Click on ''
{{ :
The tutorial will explain a few basics, as well as describe the goals of the following tasks. If this is your first encounter with OpenEase, KnowRob, and Prolog, and you want to get more familiar with OpenEase first, it is advisable to also read and do the other tutorials available in the tutorial overview.
The following description comments on the built-in ''
In order to be able to visualize what happened within an episode, we first need to load the environment in which that episode took place, as well as the episode in question. In short: we need to tell OpenEase what to load from the database, and that we want the result to be visualized. To achieve this, select the ''
Now the environment,
Now these are a few queries chained together by a '','' (in Prolog, the comma is a conjunction: every part of the chained query has to succeed).
''
''
''
| '' | '' | ||
| - | Note how after executing these queries you also get their results in the result pane above the question answering pane. In this case, we only had one varibale, namely '' | + | Note how after executing these queries you also get their results in the result pane above the question answering pane. In this case, we only had one variable, namely '' |
| <code prolog> | <code prolog> | ||
| MapInst = http:// | MapInst = http:// | ||
| </ | </ | ||
| - | Whenever we use variables, it can always be the case that there are multiple solutions to one question. If you want to know if this is the only solution or if there are more, click on the little arrow button right next to the questionmark | + | Whenever we use variables, it can always be the case that there are multiple solutions to one question. If you want to know if this is the only solution or if there are more, click on the little arrow button right next to the question mark button. There are solutions as long as you are able to click the button, and as long as you do not get a '' |
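The "next solution" button behaves much like pulling values from a generator: each click asks the Prolog engine to backtrack and produce one more binding, until none remain. A rough Python analogy (this is not how openEASE is implemented, and the map-instance names are invented):

```python
def solutions():
    """Yield one variable binding at a time, the way Prolog backtracking
    produces one answer per 'next solution' request."""
    candidates = ["Kitchen_1", "Kitchen_2"]  # hypothetical map instances
    for c in candidates:
        yield {"MapInst": c}

it = solutions()
print(next(it))            # first answer
print(next(it))            # second answer
print(next(it, "false."))  # no more answers -> Prolog reports false
```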
==== Tasks and Exercises ====
=== Task Questions ===
Task 1: Which types of objects are brought by the demonstrators?

Task 2: What are these objects'

Task 3: What are their final poses?

Task 4: Highlight the human when he starts to grasp an object.
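Task 4 boils down to detecting the moment a grasp begins. With a contact log at hand, that is a search for the transition from an open to a closed hand. The sketch below uses an entirely invented log format — the field names and values are hypothetical, not the episode's actual representation:

```python
# Hypothetical contact log: (timestamp, hand_state, object) tuples.
log = [
    (10.0, "open",   None),
    (10.5, "closed", "KoellnMuesli"),  # hand closes around an object
    (14.2, "open",   None),
    (20.1, "closed", "Bowl"),
]

# A grasp starts whenever the hand state switches from open to closed;
# those timestamps are where we would highlight the human.
grasp_starts = [
    (t, obj)
    for (t, state, obj), (_, prev, _) in zip(log[1:], log[:-1])
    if state == "closed" and prev == "open"
]
print(grasp_starts)  # [(10.5, 'KoellnMuesli'), (20.1, 'Bowl')]
```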
| </ | </ | ||
| - | Result: | + | **Result:** |
| <code prolog> | <code prolog> | ||
| ObjShortName = KoellnMuesliCranberry_2pcO | ObjShortName = KoellnMuesliCranberry_2pcO | ||
| {{ : | {{ : | ||
| - | |||
| == Task 3: What are their final poses? == | == Task 3: What are their final poses? == | ||
| Idea: Same as the above except for the timestamp change from start to end. | Idea: Same as the above except for the timestamp change from start to end. | ||
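The idea can be sketched outside of Prolog as well: asking for a pose at a different timestamp is just a nearest-sample lookup over a recorded trajectory, so switching from initial to final pose only changes the query time. A hedged Python sketch with made-up data:

```python
# Hypothetical per-object trajectory: (timestamp, (x, y, z)) samples.
trajectory = [
    (0.0,  (1.00, 0.20, 0.90)),
    (5.0,  (1.10, 0.25, 0.95)),
    (30.0, (0.40, 1.50, 0.85)),  # last sample of the episode
]

def pose_at(traj, t):
    """Return the recorded pose closest to time t; querying the start
    vs. the end of the episode is just a change of this timestamp."""
    return min(traj, key=lambda sample: abs(sample[0] - t))[1]

print(pose_at(trajectory, 0.0))   # initial pose
print(pose_at(trajectory, 30.0))  # final pose
```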
| </ | </ | ||
| - | Result: | + | **Result:** |
| <code prolog> | <code prolog> | ||
| ObjShortName = KoellnMuesliCranberry_2pcO | ObjShortName = KoellnMuesliCranberry_2pcO | ||
| </ | </ | ||
| - | Result: | + | **Result:** |
| <code prolog> | <code prolog> | ||
| EpInst = http:// | EpInst = http:// | ||
ease/workshop.1595503590.txt.gz · Last modified: 2020/07/23 11:26 by syrbe
