ease:workshop: comparison of revision 2020/07/23 12:55 (syrbe) with revision 2020/09/01 09:55 (current, hawkin: added video to introduction)
==== Abstract ====
In order to enable robots to perform everyday activities with ease, they need to know when to do what. And who could be a better teacher than the human who wants the robots to perform these tasks? But instead of having us humans explain to the robots what to do, it is easier to just show them in a Virtual Reality environment, since the VR world can easily be adapted to whichever tasks we want to teach.

In the following video we can see how a human performs everyday activities within the VR environment.

<html>
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/199633754" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
<p><a href="https://vimeo.com/199633754">RobCoG</a> from <a href="https://vimeo.com/andreihaidu">Andrei Haidu</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
</html>

Before we can teach the robots, though, we have to understand what kind of data is being recorded and how we can access and inspect it, so that we can decide which parts of it to pass on to the robots.
This tutorial aims to teach how to interact with episodic memories and inspect the data stored within, using [[http://data.open-ease.org/user/sign-in|Open Ease]] and [[https://www.swi-prolog.org/|Prolog]] queries.
==== Introduction ====
We can record everything the human does in a Virtual Reality environment fairly precisely. The position of the human's head can be tracked by tracking the headset itself, while the position of the hands can be mapped to the position of the joysticks.
  
Every interaction between the hands of the human and the virtual environment is recorded. We can replay these recordings (episodes) and inspect them, learning from them how a human does a certain everyday activity. Why do we do tasks in a specific order? (For instance, in a table setting scenario most people would put the plate down first and then get the cutlery.) How do we place objects? Which orientation of objects do we tend to prefer?
  
  
All of these things are small subconscious decisions we are not necessarily aware of, since we are just used to doing things a certain way. How should a robot know them? This is where episodic memories come in. We can use the recorded data from Virtual Reality to teach robots to do everyday activities without having to hard-code every little detail into the robot's program (at least that's the goal). Before we can get there, though, we need to understand how the data is stored, what we can learn from it, and what information can be obtained in the first place.
  
  
==== Tutorials ====
Now, these are a few queries chained together by a '','', which acts as an ''and'' in Prolog. The full stop ''.'' at the end is also very important: it signals the end of the query, and you might get an error message if you forget to add it.
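As a minimal sketch of such a conjunction, using only the standard ''member/2'' predicate rather than any OpenEase-specific one:

<code prolog>
% Both goals must succeed: the ',' acts as a logical AND,
% and the trailing '.' terminates the query.
?- member(X, [fork, plate, cup]), X == plate.
</code>

Prolog binds ''X'' to each list element in turn, and the second goal filters for ''plate'', so the query succeeds with ''X = plate''.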
  
''owl_parse'' parses the path of the EventData.owl file and of the SemanticMap, so that OpenEase knows which episode to load. The EventData contains every event that happened in the episode; every action, like pick and place, is recorded as an Event in the EventData. The SemanticMap contains the initial state of the world, including which object is where and how the world is set up. In other words, if you do experiments in a kitchen, it describes where all the kitchen furniture and objects are, where the meshes are located, etc. With these queries, this information gets loaded into OpenEase.
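For illustration only (the file paths below are placeholders, not the actual paths of any episode), loading both files could look like this:

<code prolog>
% Hypothetical paths -- substitute the real locations of your
% episode's EventData.owl and SemanticMap.owl files.
?- owl_parse('/episode_data/EventData.owl'),
   owl_parse('/episode_data/SemanticMap.owl').
</code>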
  
''connect_to_db'' connects to the MongoDB database which contains all the poses of the objects during the Events. The poses are mapped to timestamps.
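Assuming ''connect_to_db'' takes the name of the episode's database as its single argument (the name below is made up; use the one belonging to your episode), the call could look like this:

<code prolog>
% Hypothetical database name -- check your episode's setup query.
?- connect_to_db('VR-Episodes_set-table').
</code>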
=== Task Questions ===
Task 1: Which types of objects are brought by the demonstrators?

Task 2: What are these objects' initial poses?

Task 3: What are their final poses?

Task 4: Highlight the human when he starts to grasp an object.
  
</code>

**Result:**
<code prolog>
ObjShortName = KoellnMuesliCranberry_2pcO
  
{{ :ease:solution_2.png |}}

== Task 3: What are their final poses? ==
Idea: Same as above, except the timestamp changes from start to end.
</code>

**Result:**
<code prolog>
ObjShortName = KoellnMuesliCranberry_2pcO
</code>

**Result:**
<code prolog>
EpInst = http://knowrob.org/kb/unreal_log.owl#UnrealExperiment_Hnkn