EASE-related Publications

Shown Category: Perception

2023

[1] Representing (Dis)Similarities Between Prediction and Fixation Maps Using Intersection-over-Union Features
Maldonado, Jaime and Zetzsche, Christoph, "Representing (Dis)Similarities Between Prediction and Fixation Maps Using Intersection-over-Union Features", In Symposium on Eye Tracking Research and Applications (ETRA ’23), May 30–June 02, 2023, Tübingen, Germany, 2023.
[url] [bib]
[2] VRisbee: How Hand Visibility Impacts Throwing Accuracy and Experience in Virtual Reality
Borgwardt, Malte, Boueke, Jonas, Fernanda Sanabria, María, Bonfert, Michael and Porzel, Robert, "VRisbee: How Hand Visibility Impacts Throwing Accuracy and Experience in Virtual Reality", In CHI EA ’23, April 23–28, 2023, Hamburg, Germany, 2023.
[url] [bib]

2022

[3] Curiously exploring affordance spaces of a pouring task
Pomarlan, Mihai, Hedblom, Maria M. and Porzel, Robert, "Curiously exploring affordance spaces of a pouring task", Wiley, 2022.
[url] [bib]
[4] NaivPhys4RP - Towards Human-like Robot Perception “Physical Reasoning based on Embodied Probabilistic Simulation”
Kenghagho K., Franklin, Neumann, Michael, Mania, Patrick, Tan, Toni, Siddiky A., Feroz, Weller, René, Zachmann, Gabriel and Beetz, Michael, "NaivPhys4RP - Towards Human-like Robot Perception “Physical Reasoning based on Embodied Probabilistic Simulation”", In 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), November 28-30, 2022, Ginowan, Japan, IEEE, 2022.
[url] [bib]
[5] Improving Object Pose Estimation by Fusion With a Multimodal Prior – Utilizing Uncertainty-Based CNN Pipelines for Robotics
Richter-Klug, Jesse, Mania, Patrick, Kazhoyan, Gayane, Beetz, Michael and Frese, Udo, "Improving Object Pose Estimation by Fusion With a Multimodal Prior – Utilizing Uncertainty-Based CNN Pipelines for Robotics", IEEE Robotics and Automation Letters, vol. 7, no. 2, April 2022, IEEE, 2022.
[url] [bib]
[6] Kicking in Virtual Reality: The Influence of Foot Visibility on the Shooting Experience and Accuracy
Bonfert, Michael, Lemke, Stella, Porzel, Robert and Malaka, Rainer, "Kicking in Virtual Reality: The Influence of Foot Visibility on the Shooting Experience and Accuracy", In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, IEEE, 2022.
[pdf] [bib]
[7] A Framework for Safe Execution of User-Uploaded Algorithms
Tan, Toni, Weller, René and Zachmann, Gabriel, "A Framework for Safe Execution of User-Uploaded Algorithms", In Proceedings of the 27th International Conference on 3D Web Technology (Web3D ’22), November 2–4, 2022, Évry-Courcouronnes, France, ACM, New York, NY, USA, pp. 1-5, 2022.
[url] [bib]

2021

[8] An Evaluation of Visual Embodiment for Voice Assistants on Smart Displays
Bonfert, Michael, Zargham, Nima, Saade, Florian, Porzel, Robert and Malaka, Rainer, "An Evaluation of Visual Embodiment for Voice Assistants on Smart Displays", In 3rd Conference on Conversational User Interfaces (CUI ’21), July 27–29, 2021, Bilbao (online), Spain, ACM, New York, NY, USA, 2021.
[bib] [doi]
[9] Dynamic Action Selection Using Image Schema-based Reasoning for Robots
Hedblom, Maria M., Pomarlan, Mihai, Porzel, Robert, Malaka, Rainer and Beetz, Michael, "Dynamic Action Selection Using Image Schema-based Reasoning for Robots", In JOWO 2021, Bolzano, Italy, September 2021, CEUR-WS.org, 2021.
[pdf] [bib]
[10] Cutting Events: Towards Autonomous Plan Adaption by Robotic Agents through Image-Schematic Event Segmentation
Dhanabalachandran, Kaviya, Hassouna, Vanessa, Hedblom, Maria M., Kümpel, Michaela, Leusmann, Nils and Beetz, Michael, "Cutting Events: Towards Autonomous Plan Adaption by Robotic Agents through Image-Schematic Event Segmentation", In Proceedings of the 11th Knowledge Capture Conference, Association for Computing Machinery, New York, NY, USA, pp. 25-32, 2021.
[url] [bib]

2020

[11] Enabling cognitive behavior of humans, animals, and machines: A situation model framework
Schneider, Werner X., Albert, Josefine and Ritter, Helge, "Enabling cognitive behavior of humans, animals, and machines: A situation model framework", ZiF-Mitteilungen, vol. 1, pp. 21–34, 2020.
[bib]
[12] From Geometries to Contact Graphs
Meier, Martin, Haschke, Robert and Ritter, Helge J., "From Geometries to Contact Graphs", In International Conference on Artificial Neural Networks, pp. 546–555, 2020.
[bib]
[13] Barometer-based Tactile Skin for Anthropomorphic Robot Hand
Kõiva, Risto, Schwank, Tobias, Walck, Guillaume, Meier, Martin, Haschke, Robert and Ritter, Helge, "Barometer-based Tactile Skin for Anthropomorphic Robot Hand", 2020.
[bib]
[14] Data Publication: Tactile, Force and Torque Data of Robotic Lid Closing
Meier, Martin, "Data Publication: Tactile, Force and Torque Data of Robotic Lid Closing", 2020.
[url] [bib] [doi]
[15] Data Publication: Object Geometries and Contact Graphs
Meier, Martin, "Data Publication: Object Geometries and Contact Graphs", 2020.
[url] [bib] [doi]
[16] Imagination-enabled Robot Perception
Mania, Patrick, Kenfack, Franklin Kenghagho, Neumann, Michael and Beetz, Michael, "Imagination-enabled Robot Perception", arXiv preprint arXiv:2011.11397, 2020.
[url] [bib]
[17] RobotVQA — A Scene-Graph- and Deep-Learning-based Visual Question Answering System for Robot Manipulation
Franklin Kenghagho Kenfack, Feroz Ahmed Siddiky, Ferenc Balint-Benczedi and Michael Beetz, "RobotVQA — A Scene-Graph- and Deep-Learning-based Visual Question Answering System for Robot Manipulation", In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, USA, 2020.
[url] [bib]
[18] The Robot Household Marathon Experiment
Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz, "The Robot Household Marathon Experiment", 2020.
[url] [bib]
[19] Cognitive Vision and Perception: Deep Semantics Integrating AI and Vision for Reasoning about Space, Motion, and Interaction
Mehul Bhatt and Jakob Suchan, "Cognitive Vision and Perception: Deep Semantics Integrating AI and Vision for Reasoning about Space, Motion, and Interaction", In ECAI 2020 - 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020) (Giuseppe De Giacomo, Alejandro Catalá, Bistra Dilkina, Michela Milano, Senén Barro, Alberto Bugarín, Jérôme Lang, eds.), IOS Press, pp. 2881–2882, 2020.
[url] [bib] [doi]
[20] Model-based Prediction of Exogeneous and Endogeneous Attention Shifts During an Everyday Activity
Putze, Felix, Burri, Merlin, Vortmann, Lisa-Marie and Schultz, Tanja, "Model-based Prediction of Exogeneous and Endogeneous Attention Shifts During an Everyday Activity", Virtual event, Netherlands, 2020.
[bib]
[21] From Human to Robot Everyday Activity
Mason, Celeste, Gadzicki, Konrad, Meier, Moritz, Ahrens, Florian, Kluss, Thorsten, Maldonado, Jaime, Putze, Felix, Fehr, Thorsten, Zetzsche, Christoph, Herrmann, Manfred and others, "From Human to Robot Everyday Activity", In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA (Virtual), pp. 25–29, 2020.
[pdf] [bib]
[22] Decision making under uncertainty in a quasi realistic binary decision task–An fMRI study
Gloy, K, Herrmann, M and Fehr, T, "Decision making under uncertainty in a quasi realistic binary decision task–An fMRI study", Brain and Cognition, Elsevier, vol. 140, Article 105549, 2020.
[bib]
[23] Categorization of Contact Events as Intended or Unintended using Pre-Contact Kinematic Features
Maldonado Cañón, Jaime Leonardo, Kluss, Thorsten and Zetzsche, Christoph, "Categorization of Contact Events as Intended or Unintended using Pre-Contact Kinematic Features", In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2020.
[bib] [doi]
[24] Pre-Contact Kinematic Features for the Categorization of Contact Events as Intended or Unintended
Maldonado Cañón, Jaime Leonardo, Kluss, Thorsten and Zetzsche, Christoph, "Pre-Contact Kinematic Features for the Categorization of Contact Events as Intended or Unintended", In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2020.
[bib] [doi]
[25] Early vs Late Fusion in Multimodal Convolutional Neural Networks
K. Gadzicki and R. Khamsehashari, "Early vs Late Fusion in Multimodal Convolutional Neural Networks", In 2020 IEEE 23rd International Conference on Information Fusion (FUSION), pp. 1-6, 2020.
[bib]
[26] Improved CNN-Based Marker Labeling for Optical Hand Tracking
Rosskamp, Janis, Weller, René, Kluss, Thorsten, Zachmann, Gabriel and others, "Improved CNN-Based Marker Labeling for Optical Hand Tracking", In International Conference on Virtual Reality and Augmented Reality, Springer LNCS, pp. 165–177, 2020.
[bib]
[27] Examining Design Choices of Questionnaires in VR User Studies
Dmitry Alexandrovsky, Susanne Putze, Michael Bonfert, Sebastian Höffner, Pitt Michelmann, Dirk Wenig, Rainer Malaka and Jan David Smeddinck, "Examining Design Choices of Questionnaires in VR User Studies", In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Regina Bernhaupt, Florian Mueller, David Verweij, Josh Andres, eds.), Association for Computing Machinery, New York, NY, USA, pp. 1–21, 2020.
[pdf] [bib] [doi]
[28] Breaking the Experience: Effects of Questionnaires in VR User Studies
Putze, Susanne, Alexandrovsky, Dmitry, Putze, Felix, Höffner, Sebastian, Smeddinck, Jan David and Malaka, Rainer, "Breaking the Experience: Effects of Questionnaires in VR User Studies", In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–15, 2020.
[url] [bib]

2019

[29] A Framework for Self-Training Perceptual Agents in Simulated Photorealistic Environments
Mania, Patrick and Beetz, Michael, "A Framework for Self-Training Perceptual Agents in Simulated Photorealistic Environments", In 2019 International Conference on Robotics and Automation (ICRA), pp. 4396–4402, 2019.
[bib]
[30] Towards Meaningful Uncertainty Information for CNN Based 6D Pose Estimates
Richter-Klug, Jesse and Frese, Udo, "Towards Meaningful Uncertainty Information for CNN Based 6D Pose Estimates", In Computer Vision Systems (Tzovaras, Dimitrios, Giakoumis, Dimitrios, Vincze, Markus, Argyros, Antonis, eds.), Springer International Publishing, Cham, pp. 408–422, 2019.
[url] [bib]
[31] Give MEANinGS to Robots with Kitchen Clash: A VR Human Computation Serious Game for World Knowledge Accumulation
Grudpan, Supara, Höffner, Sebastian, Bateman, John and Malaka, Rainer, "Give MEANinGS to Robots with Kitchen Clash: A VR Human Computation Serious Game for World Knowledge Accumulation", In Entertainment Computing and Serious Games: First IFIP TC 14 Joint International Conference, ICEC-JCSG 2019, Arequipa, Peru, November 11–15, 2019, Proceedings, p. 85, 2019.
[bib]
[32] Adaptivity of End Effector Motor Control Under Different Sensory Conditions: Experiments With Humans in Virtual Reality and Robotic Applications
Maldonado Cañón, Jaime Leonardo, Kluss, Thorsten and Zetzsche, Christoph, "Adaptivity of End Effector Motor Control Under Different Sensory Conditions: Experiments With Humans in Virtual Reality and Robotic Applications", Frontiers in Robotics and AI, Frontiers Media SA, vol. 6, 2019.
[bib] [doi]
[33] Deep Residual Temporal Convolutional Networks for Skeleton-Based Human Action Recognition
Khamsehashari, R., Gadzicki, K. and Zetzsche, C., "Deep Residual Temporal Convolutional Networks for Skeleton-Based Human Action Recognition", In Computer Vision Systems (Tzovaras, Dimitrios, Giakoumis, Dimitrios, Vincze, Markus, Argyros, Antonis, eds.), Springer International Publishing, Cham, pp. 376–385, 2019.
[bib]

2018

[34] Multimodal Convolutional Neural Networks for Human Activity Recognition
Gadzicki, Konrad, Khamsehashari, Razieh and Zetzsche, Christoph, "Multimodal Convolutional Neural Networks for Human Activity Recognition", In IROS 2018: Workshop on Latest Advances in Big Activity Data Sources for Robotics & New Challenges, 2018.
[bib]
[35] Exploring Human Kinematic Control for Robotics Applications: The Role of Afferent Sensory Information in a Precision Task
Maldonado Cañon, Jaime Leonardo, Kluss, Thorsten and Zetzsche, Christoph, "Exploring Human Kinematic Control for Robotics Applications: The Role of Afferent Sensory Information in a Precision Task", In IROS 2018: Workshop - Towards Robots that Exhibit Manipulation Intelligence, 2018.
[bib]

2017

[36] Storing and Retrieving Perceptual Episodic Memories for Long-term Manipulation Tasks
Balint-Benczedi, Ferenc, Marton, Zoltan-Csaba, Durner, Maximilian and Beetz, Michael, "Storing and Retrieving Perceptual Episodic Memories for Long-term Manipulation Tasks", In Proceedings of the 2017 IEEE International Conference on Advanced Robotics (ICAR), Hong Kong, China, 2017.
[bib]

2016

[37] Semantic Question-Answering with Video and Eye-Tracking Data – AI Foundations for Human Visual Perception Driven Cognitive Film Studies
Suchan, Jakob and Bhatt, Mehul, "Semantic Question-Answering with Video and Eye-Tracking Data – AI Foundations for Human Visual Perception Driven Cognitive Film Studies", In IJCAI 2016: 25th International Joint Conference on Artificial Intelligence, New York City, USA, 2016.
[bib]

2015

[38] RoboSherlock: Unstructured Information Processing for Robot Perception
Michael Beetz, Ferenc Balint-Benczedi, Nico Blodow, Daniel Nyga, Thiemo Wiedemeyer and Zoltan-Csaba Marton, "RoboSherlock: Unstructured Information Processing for Robot Perception", In IEEE International Conference on Robotics and Automation (ICRA), Seattle, Washington, USA, 2015.
[url] [bib]
[39] Understanding the intention of human activities through semantic perception: observation, understanding and execution on a humanoid robot
Karinne Ramirez-Amaro, Michael Beetz and Gordon Cheng, "Understanding the intention of human activities through semantic perception: observation, understanding and execution on a humanoid robot", Advanced Robotics, vol. 29, no. 5, pp. 345-362, 2015.
[bib]
[40] Robust Semantic Representations for Inferring Human Co-manipulation Activities even with Different Demonstration Styles
Karinne Ramirez-Amaro, Emmanuel Dean-Leon and Gordon Cheng, "Robust Semantic Representations for Inferring Human Co-manipulation Activities even with Different Demonstration Styles", In 15th IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, 2015.
[bib]
[41] Transferring Skills to Humanoid Robots by Extracting Semantic Representations from Observations of Human Activities
Karinne Ramirez-Amaro, Michael Beetz and Gordon Cheng, "Transferring Skills to Humanoid Robots by Extracting Semantic Representations from Observations of Human Activities", Artificial Intelligence Journal, 2015.
[bib]
[42] Pervasive 'Calm' Perception for Autonomous Robotic Agents
Wiedemeyer, Thiemo, Bálint-Benczédi, Ferenc and Beetz, Michael, "Pervasive 'Calm' Perception for Autonomous Robotic Agents", In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 871–879, 2015.
[url] [bib]

2014

[43] PR2 Looking at Things: Ensemble Learning for Unstructured Information Processing with Markov Logic Networks
Daniel Nyga, Ferenc Bálint-Benczédi and Michael Beetz, "PR2 Looking at Things: Ensemble Learning for Unstructured Information Processing with Markov Logic Networks", In IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 2014.
[bib]

2013

[44] Decomposing CAD Models of Objects of Daily Use and Reasoning about their Functional Parts
Moritz Tenorth, Stefan Profanter, Ferenc Balint-Benczedi and Michael Beetz, "Decomposing CAD Models of Objects of Daily Use and Reasoning about their Functional Parts", In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo Big Sight, Japan, pp. 5943–5949, 2013.
[url] [bib]

2012

[45] Ensembles of Strong Learners for Multi-cue Classification
Zoltan-Csaba Marton, Florian Seidel, Ferenc Balint-Benczedi and Michael Beetz, "Ensembles of Strong Learners for Multi-cue Classification", Pattern Recognition Letters (PRL), Special Issue on Scene Understandings and Behaviours Analysis, 2012.
[url] [bib]