Robotics & Teleoperation

Thursday 08:30 – 09:00



“Flip of a Switch: Designing A Kinetic Dialogue System for Switch Interfaces”

Maximilian Kullmann, Jan Ehlers, Eva Hornecker and Lewis L. Chuang

Abstract— The proliferation of smart technology in everyday objects provides an opportunity for human-machine collaborations, signaling a shift from typical principal-agent relationships. The current work proposes roboticizing familiar user interfaces to allow objects to communicate with their users without the need for an additional communication modality. We implemented a light switch that is capable of kinetic gestures and report an exploratory study that investigated how naïve users (n=15) would respond to and interact with different gesture designs. A qualitative analysis of recorded interactions and semi-structured interviews reveals that users are surprised by unfamiliar automation, and that kinetic gestures serve as cues that promote discovery of the system’s mechanisms and mitigate assumed helplessness.


Keywords: automation, physical prototyping, interface design, human-robot interaction
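
The paper does not publish code; purely as an illustration of how such kinetic gestures might be scripted for a roboticized switch, the sketch below encodes gestures as timed sequences of paddle angles and plays them back through a placeholder actuator. All names here (Keyframe, play_gesture, the specific angle patterns) are hypothetical and not the authors' implementation.

    import time
    from dataclasses import dataclass

    @dataclass
    class Keyframe:
        angle_deg: float   # paddle position: 0 = off, 90 = on
        hold_s: float      # time to hold this position

    # Hypothetical gesture vocabulary: small, legible movements that
    # hint at the switch's autonomy without fully toggling it.
    GESTURES = {
        "nudge":  [Keyframe(10, 0.15), Keyframe(0, 0.3)],        # brief twitch
        "wiggle": [Keyframe(15, 0.1), Keyframe(0, 0.1)] * 3,     # attention cue
        "refuse": [Keyframe(45, 0.2), Keyframe(0, 0.2)] * 2,     # half-press, retreat
        "toggle": [Keyframe(90, 0.5)],                           # full autonomous flip
    }

    def play_gesture(name: str, actuator=None) -> None:
        """Replay a gesture; `actuator` would be a servo driver on real hardware."""
        for kf in GESTURES[name]:
            if actuator is not None:
                actuator.set_angle(kf.angle_deg)   # hypothetical driver call
            else:
                print(f"paddle -> {kf.angle_deg:.0f} deg")
            time.sleep(kf.hold_s)

    if __name__ == "__main__":
        play_gesture("wiggle")   # e.g. cue the user before acting autonomously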



“Grasp Pose Generation for Human-to-Robot Handovers using Simulation-to-Reality Transfer”

Carl Gaebert, Chaitanya Bandi and Ulrike Thomas

Abstract— Human-to-robot handovers play an important role in collaborative tasks in industry and household assistance. Due to the vast number of possible unknown objects, learning-based approaches have gained interest for robust and general grasp synthesis. However, obtaining real training data for such methods requires expensive human demonstrations. Simulated data, on the other hand, is easy to generate and can be randomized to cover the distribution of real-world data. The first contribution of this work is a dataset of human grasps generated in simulation. For this, we use a simulated hand and models of 10 objects from the YCB dataset [Calli et al., 2015]; the dataset can easily be extended to include new objects. The method thus allows for generating an arbitrary amount of training data without human interaction. Secondly, we combine a generative neural grasp generator with an evaluator model for grasp pose generation. In contrast to previous works, we obtain grasp poses from simulated RGB images, which reduces the negative effects of depth sensor noise. To this end, our generator model is provided with a cropped image of the human hand and learns the distribution of grasps in the wrist coordinate system. The evaluator then narrows the list of grasps down to the most promising ones. The presented approach requires the model to extract relevant features from images instead of point clouds; a cost-efficient method for generating large amounts of training data is therefore needed. We test our approach in simulation and transfer it to a real robot system. We use the same objects as in the training dataset, but also test the generalization capabilities towards new objects. The presented dataset is available for download: https://tuc.cloud/index.php/s/g3noZD7oCqbQR9d.


Keywords: handover, grasp generation, human-robot interaction
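
A minimal PyTorch sketch of the generator/evaluator pattern the abstract describes: a conditional generator proposes grasp poses (translation plus unit quaternion in the wrist frame) from a cropped RGB hand image, and an evaluator scores them so only the most promising candidates are kept. All network sizes, the latent dimension, and the layer choices below are assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class ImageEncoder(nn.Module):
        """Tiny CNN stand-in for the feature extractor over the cropped hand image."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU(),
            )
        def forward(self, img):
            return self.net(img)

    class GraspGenerator(nn.Module):
        """Maps image features + latent noise to a 7-DoF grasp (xyz + unit quaternion)."""
        def __init__(self, feat_dim=128, z_dim=16):
            super().__init__()
            self.z_dim = z_dim
            self.mlp = nn.Sequential(nn.Linear(feat_dim + z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 7))
        def forward(self, feat, n_samples):
            z = torch.randn(feat.size(0), n_samples, self.z_dim)
            f = feat.unsqueeze(1).expand(-1, n_samples, -1)
            g = self.mlp(torch.cat([f, z], dim=-1))
            pos, quat = g[..., :3], nn.functional.normalize(g[..., 3:], dim=-1)
            return torch.cat([pos, quat], dim=-1)

    class GraspEvaluator(nn.Module):
        """Scores each (image, grasp) pair with a success probability."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(feat_dim + 7, 128), nn.ReLU(),
                                     nn.Linear(128, 1), nn.Sigmoid())
        def forward(self, feat, grasps):
            f = feat.unsqueeze(1).expand(-1, grasps.size(1), -1)
            return self.mlp(torch.cat([f, grasps], dim=-1)).squeeze(-1)

    # Propose many grasps, keep the top-k by evaluator score.
    enc, gen, ev = ImageEncoder(), GraspGenerator(), GraspEvaluator()
    img = torch.rand(1, 3, 96, 96)          # cropped RGB hand image (dummy)
    feat = enc(img)
    grasps = gen(feat, n_samples=64)
    scores = ev(feat, grasps)
    topk = scores.topk(5, dim=1).indices    # indices of the 5 most promising grasps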



Thursday 16:15 – 17:15



“Where Am I? How to Measure and Support Spatial Orientation in Teleoperation”

Jennifer Brade, Philipp Klimant, Ning Xie, Georg Jahn and Sven Winkler

Abstract— In the near future, a multitude of differently embodied digital technologies, such as autonomous vehicles, delivery robots, or telepresence systems, and humans will coordinate with each other, interact, and work side by side. The possibilities this opens up are manifold, but achieving smooth coordination is just as challenging. Because of the limits of automation and exceptions to routine operation, there are and will be systems that can both act autonomously and be controlled remotely by humans using teleoperation. This article addresses the challenges of spatial orientation that teleoperators face when they take over and control systems remotely. The ability to orient oneself in the remote environment via the cameras of a teleoperated system plays a crucial role in smooth and error-free coordination with and within that environment. In this article, we explain why orientation in a remote environment is crucial, which problems exist, how spatial orientation can be measured, and which approaches could be used to provide visual support for operating in a remote environment.


Keywords: Telepresence, spatial orientation, human-robot interaction, teleoperation
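
The article surveys measures rather than publishing code, but one standard measure from the spatial-cognition literature it draws on is pointing error: the angular difference between the direction an operator indicates toward a remembered landmark and the true direction. The sketch below is a minimal illustration of that metric; the coordinates and the example trial are hypothetical.

    import numpy as np

    def bearing_deg(origin, target):
        """Bearing from origin to target in the ground plane, in degrees."""
        d = np.asarray(target, float) - np.asarray(origin, float)
        return np.degrees(np.arctan2(d[1], d[0]))

    def pointing_error_deg(indicated_deg, origin, landmark):
        """Absolute angular pointing error, wrapped to [0, 180] degrees."""
        err = indicated_deg - bearing_deg(origin, landmark)
        return abs((err + 180.0) % 360.0 - 180.0)

    # Hypothetical trial: the teleoperated system is at (0, 0), a remembered
    # landmark (e.g. its docking station) is at (3, 4), and the operator points
    # at 70 degrees; the true bearing is about 53.1 degrees.
    print(pointing_error_deg(70.0, origin=(0, 0), landmark=(3, 4)))  # ~16.9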



“Human-Robot Interaction in Telemanipulation – An Overview”

Stephan Andreas Schwarz and Ulrike Thomas

Abstract— Teleoperation and haptic telemanipulation are common solutions for performing tasks from a distance. They are well suited to dangerous or unreachable environments, such as nuclear power plants, space missions, or underwater operations. In recent years, especially due to the COVID-19 pandemic, telemanipulation has increasingly been used to perform tasks involving other human participants. This paper gives an overview of the state of the art regarding control concepts to improve human-robot interaction on the follower side of a telemanipulation system. In this context, system architectures and shared control approaches are considered. We also present the work done in the Collaborative Research Center 1410 regarding telemanipulation, including a safety mechanism as well as two shared control concepts to improve the human-likeness, safety, and mobility of the follower motion.

Keywords: telemanipulation, human-robot interaction, control, shared control, safety
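
The overview discusses shared control on the follower side without fixing one formulation; a common baseline in this literature is linear blending of the human command with an autonomous safety command, where the autonomous authority grows as the follower approaches an obstacle. The numpy sketch below illustrates that generic pattern, not the CRC 1410 controllers; the gain schedule, distances, and commands are assumptions.

    import numpy as np

    def blend_command(u_human, u_auto, dist_to_obstacle, d_safe=0.10, d_warn=0.30):
        """Linear shared control: u = (1 - a) * u_human + a * u_auto.

        The authority a of the autonomous controller ramps from 0 (farther than
        d_warn from the nearest obstacle) to 1 (closer than d_safe).
        """
        a = np.clip((d_warn - dist_to_obstacle) / (d_warn - d_safe), 0.0, 1.0)
        return (1.0 - a) * np.asarray(u_human) + a * np.asarray(u_auto), a

    u_h = np.array([0.20, 0.00, -0.05])   # operator velocity command (m/s)
    u_a = np.array([0.00, 0.00, 0.05])    # autonomous evasive command (m/s)
    for d in (0.50, 0.25, 0.08):          # follower approaching an obstacle
        u, a = blend_command(u_h, u_a, d)
        print(f"d={d:.2f} m  authority={a:.2f}  u={u.round(3)}")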



“Motion Planning”

Ulrike Thomas and Carl Gaebert

Abstract— Successful human-robot interaction calls for fast generation of collision-free and optimized motions. To this end, sampling-based motion planning algorithms have been widely used. However, they often require long planning times to achieve optimized motions. While not a critical issue in traditional industrial applications, planning delays or poorly optimized motions have very negative effects on human-robot cooperation. Including artificial potential fields in the sampling algorithm can drastically improve both the quality and the planning time of such methods. Previous works in this direction are often tailored towards minimizing distance costs such as path length. In this work, we propose a heuristic based on potential fields that can also be used with a variety of state cost functions. We demonstrate the effectiveness of our approach using two cost functions related to human-robot interaction and achieve drastically improved results in both scenarios. This allows for reducing total planning time and achieving a smoother interaction between human and robot.

Keywords: Motion Planning, RRT*, Optimal Path Planning, Artificial Potential Fields, Human-Robot Interaction
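
A minimal 2-D sketch of the idea the abstract describes: after drawing a uniform sample, the candidate is nudged along the negative gradient of an artificial potential (attractive toward the goal, repulsive near obstacles) before being wired into the tree. Step sizes, gains, the circular obstacle, and the plain RRT loop are assumptions; the full RRT* machinery (rewiring, cost propagation) and the authors' state cost functions are omitted.

    import numpy as np

    GOAL = np.array([9.0, 9.0])
    OBS_C, OBS_R = np.array([5.0, 5.0]), 1.5   # one circular obstacle

    def potential_grad(q, k_att=1.0, k_rep=2.0, d0=1.0):
        """Gradient of an attractive-plus-repulsive potential at configuration q."""
        g = k_att * (q - GOAL)                      # attractive term
        d = np.linalg.norm(q - OBS_C) - OBS_R       # clearance to obstacle surface
        if 0.0 < d < d0:                            # repulsion active near obstacle
            away = (q - OBS_C) / np.linalg.norm(q - OBS_C)
            g += -k_rep * (1.0 / d - 1.0 / d0) / d**2 * away
        return g

    def collision_free(q):
        return np.linalg.norm(q - OBS_C) > OBS_R

    rng = np.random.default_rng(0)
    nodes, parent = [np.zeros(2)], {0: None}
    for _ in range(500):
        q_rand = rng.uniform(0.0, 10.0, size=2)
        q_rand -= 0.3 * potential_grad(q_rand)      # potential-field bias on the sample
        i_near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - q_rand))
        step = q_rand - nodes[i_near]
        q_new = nodes[i_near] + 0.5 * step / (np.linalg.norm(step) + 1e-9)
        if collision_free(q_new):
            parent[len(nodes)] = i_near
            nodes.append(q_new)
            if np.linalg.norm(q_new - GOAL) < 0.5:
                break
    print(f"tree size: {len(nodes)}, "
          f"reached goal: {np.linalg.norm(nodes[-1] - GOAL) < 0.5}")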



“Policy Learning with Spiking Neural Network”

Osama M. Abdelaal and Florian Röhrbein

Abstract— This paper presents the results of deploying an architecture for supervised, biologically plausible neural networks based on spiking neuron models on the ManiSkill2 Challenge. The aim is to compare traditional artificial neural networks with spiking neural networks when training on advanced robotic tasks. The task chosen as the testing environment is opening a cabinet drawer with a mobile manipulator, which can be considered a human-like behavior in the context of human-robot assistance tasks. A comparison of training and testing results shows that, after 50k training steps, the success rate of the architecture that contains a spiking neural network is 100%, compared to 84% without it. With a customized supervised spiking neural network, training takes less time and energy, and testing results are more accurate.


Keywords: deep reinforcement learning, spiking neural network, robot manipulation tasks, behavior cloning, SNN, BC
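
The abstract pairs behavior cloning with a spiking network but gives no implementation details; the sketch below is a generic PyTorch stand-in: a leaky integrate-and-fire (LIF) layer unrolled over a few timesteps, trained with a straight-through sigmoid surrogate gradient to regress demonstrated actions. Layer sizes, the decay factor, the timestep count, and the synthetic demonstration data are all assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class LIFLayer(nn.Module):
        """Leaky integrate-and-fire layer with a straight-through surrogate gradient."""
        def __init__(self, n_in, n_out, beta=0.9, threshold=1.0):
            super().__init__()
            self.fc, self.beta, self.th = nn.Linear(n_in, n_out), beta, threshold

        def forward(self, x_seq):                      # x_seq: (T, batch, n_in)
            mem = torch.zeros(x_seq.size(1), self.fc.out_features)
            spikes = []
            for x in x_seq:
                mem = self.beta * mem + self.fc(x)     # leaky integration
                surrogate = torch.sigmoid(5.0 * (mem - self.th))
                spk = (mem > self.th).float().detach() + surrogate - surrogate.detach()
                mem = mem - spk.detach() * self.th     # soft reset after spiking
                spikes.append(spk)
            return torch.stack(spikes)                 # (T, batch, n_out)

    class SNNPolicy(nn.Module):
        def __init__(self, obs_dim=32, act_dim=8, hidden=64, T=8):
            super().__init__()
            self.T = T
            self.lif = LIFLayer(obs_dim, hidden)
            self.readout = nn.Linear(hidden, act_dim)

        def forward(self, obs):                             # obs: (batch, obs_dim)
            x_seq = obs.unsqueeze(0).repeat(self.T, 1, 1)   # constant-current encoding
            rates = self.lif(x_seq).mean(dim=0)             # average firing rate
            return self.readout(rates)

    # Behavior cloning on synthetic (observation, expert action) pairs.
    policy = SNNPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    obs, act = torch.randn(256, 32), torch.randn(256, 8)   # stand-in demonstrations
    for _ in range(50):
        loss = nn.functional.mse_loss(policy(obs), act)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final BC loss: {loss.item():.3f}")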