Motion Tracking & Prediction

Thursday 14:00-15:00



"A Computational Study on Supervised and Unsupervised Gait Data Segmentation"

Christoph Helmberg, Tobias Hofmann, Dominik Krumm, and Stephan Odenwald

Abstract— Our paper addresses the identification of patterns and change points in high-dimensional time series data recorded by body-attached sensor networks. We discuss different approaches to state detection in a supervised and in an unsupervised learning scenario. Within this context, we provide empirical evidence that our methods are capable of identifying relevant motion patterns in complex time series. Among other conceivable applications, we are particularly motivated by the idea that recognized motion change points serve as valuable information for embodied digital technologies interacting in public spaces.
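
As a rough illustration of the unsupervised setting, the sketch below flags candidate change points in a multivariate series by comparing adjacent sliding windows; the window length, threshold, and function names are illustrative assumptions, not the segmentation method of the paper.

```python
import numpy as np

def change_points(X, win=50, thresh=2.0):
    """Flag candidate change points in a (T, d) multivariate series.

    Compares the feature means of adjacent windows; indices whose
    distance score exceeds `thresh` standard deviations are returned.
    """
    T = X.shape[0]
    dists = np.array([
        np.linalg.norm(X[t:t + win].mean(axis=0) - X[t - win:t].mean(axis=0))
        for t in range(win, T - win)
    ])
    score = (dists - dists.mean()) / dists.std()
    return np.nonzero(score > thresh)[0] + win

# Example: a two-channel signal whose mean shifts at t = 300
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (300, 2))])
print(change_points(X))  # indices clustered around 300
```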


Keywords: time series data, segmentation, sparsest cut, nearest centroid



"Predicting Object Weights from Giver’s Kinematics in Handover Actions"

Lena Kopnarski, Laura Lippert, Claudia Voelcker-Rehage, Daniel Potts and Julian Rudisch

Abstract— A handover action is the act of passing an object from one actor (human or robot) to another. A requirement for a smooth handover is precise coordination between the two actors in space and time. Reach and grasp movements are part of every handover action. In order to perform adequate reach and grasp movements, precise models of the object properties are necessary; only then can anticipatory grip force scaling take place. It is possible that receivers in handover actions observe the giver during object manipulation in order to estimate the object weight more accurately. Knowledge about how object weight changes the kinematics of handover actions can be used to improve human-robot interaction by providing robots with better weight estimates predicted from human kinematics. The aim of this study was to investigate whether the object weight can be predicted from the kinematics of the giver in a handover action. Furthermore, we analyzed which joint angles are particularly suitable for classifying the object weight (i.e., are most influenced by it).
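
To make the classification task concrete, the following sketch applies a nearest-centroid classifier to hypothetical joint-angle summary features; the feature layout, effect size, and labels are invented for illustration and do not reproduce the study's data or method.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

# Hypothetical feature matrix: one row per handover trial, columns are
# summary statistics of giver joint angles (e.g. peak elbow flexion,
# mean shoulder elevation). Labels are object-weight classes.
rng = np.random.default_rng(1)
light = rng.normal(0.0, 1.0, (40, 6))
heavy = rng.normal(0.8, 1.0, (40, 6))   # heavier objects shift the kinematics
X = np.vstack([light, heavy])
y = np.array([0] * 40 + [1] * 40)

clf = NearestCentroid()
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level is 0.5
```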


Keywords: classification, handover, kinematics, object weight, pattern recognition



"A Parametric Model to Assess Minnesota Dexterity Test with IMU"

Marvin Rehm, Giuseppe Sanseverino, Teodorico Caporaso, Antonio Lanzotti, Stephan Odenwald and Alois Pichler

Abstract— The evaluation of injuries and the assessment of residual dexterity are carried out by clinicians by means of standardized tests. Several dexterity tests have been developed, and all of them require the test administrator to sit in front of the subject and visually assess the motions performed. Automatic tools can reduce the time needed for the evaluation and can provide additional useful information. For many of the available dexterity tests, automated evaluation methods have been developed. For the widely used Minnesota dexterity test, however, only one automated solution, based on multiple depth-sensor cameras, has been developed. This study aims to provide a wearable and flexible method to automatically evaluate the outcomes of Minnesota dexterity tests. The proposed methodology consists of a parametric model that processes acceleration data collected with an IMU attached to the centre of mass of the subject’s dominant hand. The developed model was successfully validated in a subject study with ten participants.
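
One conceivable shape for such a model is sketched below: counting candidate transfer movements as peaks in the gravity-removed acceleration magnitude. The function, thresholds, and synthetic data are assumptions for illustration, not the validated parametric model of the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def count_transfers(acc, fs=100.0, height=1.5, min_gap_s=0.5):
    """Count candidate disk-transfer movements from hand acceleration.

    acc: (T, 3) accelerometer samples in g; each transfer is assumed to
    produce a peak in the gravity-removed acceleration magnitude.
    `height` (g) and `min_gap_s` are illustrative tuning parameters.
    """
    mag = np.linalg.norm(acc, axis=1)
    mag = mag - mag.mean()                       # crude gravity removal
    peaks, _ = find_peaks(mag, height=height,
                          distance=int(min_gap_s * fs))
    return len(peaks), peaks / fs                # count and peak times (s)

# Example: synthetic 20 s recording with 8 acceleration spikes
rng = np.random.default_rng(4)
acc = rng.normal(0, 0.05, (2000, 3))
for k in range(8):
    acc[200 + 220 * k] += [2.0, 0.0, 0.0]
n, times = count_transfers(acc)
print(n)  # 8
```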

Keywords: manual dexterity, biomechanics, wearable devices, automatic assessment, parametric model



"A Novel Framework for the Generation of Synthetic Datasets with Applications to Hand Detection and Segmentation"

Tom Uhlmann, Amin Dadgar, Felix Weigand and Guido Brunnett

Abstract— We introduce a configurable framework for generating synthetic datasets featuring human characters for the training of neural networks. We demonstrate how our framework can generate a vast number of annotated images showing a human character and its hands in a multitude of poses and with varying backgrounds. Furthermore, we construct a specific set of synthetic hand images to train convolutional neural networks for detecting and/or segmenting hands in real scenarios in a novel way. The generated dataset aims to enhance the previously successful method of exploiting the invariance concept with slightly more expensive rendering techniques. That is, we keep the number of subjects, poses, scenes, and other costly factors at a minimum while increasing the diversity of the data. Our dataset features ten different human characters in various poses captured from a multitude of camera angles. We make our framework open source and publish the resulting dataset of 90,000 images.
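
In the spirit of the described cost trade-off, the sketch below enumerates hypothetical render jobs, keeping the costly factors (characters, poses) small and the cheap ones (backgrounds, camera angles) large; all names and counts are illustrative, not the framework's actual configuration.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class RenderJob:
    character: str      # one of the ten human characters
    pose: str
    background: str
    camera_angle: float

# Hypothetical factor sweep: few costly factors, many cheap ones.
characters = [f"char_{i}" for i in range(10)]
poses = ["reach", "grasp", "wave"]
backgrounds = [f"bg_{i}" for i in range(30)]
angles = [a * 12.0 for a in range(30)]

jobs = [RenderJob(c, p, b, a)
        for c, p, b, a in product(characters, poses, backgrounds, angles)]
print(len(jobs))  # 10 * 3 * 30 * 30 = 27000 annotated images per pass
```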

Keywords: machine learning, synthetic dataset, hand detection, hand segmentation, configurable graphical framework, open source, neural network



Friday 9:45-10:45



"Deep Learning for Human Activity Recognition with Plantar Pressure and Movement Data from an Instrumented Smart Sock System"

Noah Zuchna, Bernd Resch and Stephan Odenwald

Abstract— Human activity recognition (HAR) is a task which, if solved by machine intelligence, has vast potential to advance many different fields, medical engineering being one of the more prominent ones. Recent advances in wearable sensor technologies, mobile computing, cloud computing, and machine learning (ML) methods have made cutting-edge technology broadly available to the scientific community. One such medical application is the optimization of therapy and monitoring for subjects with Parkinson’s disease (PD). The aim is to improve the objectivity, quality, and quantity of monitoring the course of the disease through plantar pressure and movement data collection. This may, in turn, lead medical supervisors to better-informed, AI-supported decisions and new insights about PD and its therapy. However, previous research has not sufficiently investigated algorithms for movement data analysis that could lay the foundation on which such goals can be accomplished.

Thus, this paper presents a new methodology for machine-learning-based time-series classification using data collected by the ENVISIBLE ParKInSock system. ParKInSock is a multi-sensor system that collects plantar pressure data with 8 highly dynamic force-sensing resistor (HD-FSR) cell sensors and linear acceleration and rotational velocity data with an inertial measurement unit (IMU) on each foot, generating a total of 28 data channels. These 28 signals are minimally processed, windowed, and used as features to train two machine learning algorithms: (1) the kernel-based ROCKET (RandOm Convolutional KErnel Transform) algorithm, and (2) a long short-term memory (LSTM) recurrent neural network (RNN). Both models are trained on a supervised classification task with classes for activities of daily living (ADL). In a first trial, using cross-validation, the mean accuracies of the ROCKET and LSTM models on the test dataset were 99.61% +/-0.26% and 99.52% +/-0.22%, respectively; this dataset consisted of 4.4 hours of five different ADL from a single subject. In a second trial, six different ADL from three subjects yielded ~1 hour of data. The average accuracies on this dataset were 94.93% +/-1.49% and 95.71% +/-1.45% for the ROCKET and LSTM models, respectively.
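
A minimal sketch of the ROCKET branch of such a pipeline, assuming the sktime implementation of the Rocket transform and random stand-in data shaped like the 28-channel windows described above; it is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import Rocket  # assumed sktime API

# Hypothetical stand-in for windowed ParKInSock data:
# 200 windows, 28 channels, 300 samples each, labels = ADL classes.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 28, 300))
y = rng.integers(0, 5, size=200)

# Reduced kernel count to keep the sketch fast (sktime default is 10,000)
rocket = Rocket(num_kernels=1_000, random_state=0)
X_feat = rocket.fit_transform(X)

# Canonical ROCKET recipe: random convolutional features + ridge classifier
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(X_feat, y)
print(clf.score(X_feat, y))
```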


Keywords: deep learning, human activity recognition, instrumented sock, plantar pressure data, movement data



"Using Functionally Anthropomorphic Eyes"

Paul Schweidler and Linda Onnasch

Abstract— For safe and seamless human-robot interaction in a joint task, humans should be able to predict a robot’s movements. Employing the interpersonal coordination principles of joint attention and gaze following might be a promising way to achieve that goal. We conducted four studies to explore whether and how abstract, functionally anthropomorphic (FA) forms of human eyes and eye movements can make robotic movements more predictable, in particular compared to highly anthropomorphic and non-anthropomorphic forms. Results showed that the FA eyes triggered reflexive shifts of attention and thereby helped humans effortlessly predict the robot’s movements better than non-anthropomorphic stimuli did. Results further indicated that the FA eyes did not evoke the negative emotions sometimes associated with anthropomorphic designs.


Keywords: HRI, intentionality, coordination, safety



"Radar-Based Gait Analysis"

Wolfgang Kilian, Aline Püschel, Stefanie Doetz and Stephan Odenwald

Abstract— Patients with neurological diseases often suffer from motor disorders that lead to changes in gait pattern and to gait disorders. In order to assess the progression of the disease and the effectiveness of therapies, gait analyses are conducted. However, these are often associated with considerable time effort for both patients and medical staff and require a large amount of space. In this paper, a system based on radar technology is evaluated for gait analysis as an alternative to the previously used pressure distribution measuring devices and optical motion capture systems. The gait patterns of the test persons were extracted from the radar data, and five gait parameters were derived. During the evaluation, the sensor position and height were varied, and different methods of analysis were considered. The validity of the system was investigated by quantitatively comparing the measured biomechanical parameters with those recorded by a pressure distribution measurement system. The results show that it is possible to measure medically relevant gait parameters using radar technology.
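
As a simplified illustration of deriving one gait parameter from radar data, the sketch below estimates cadence from the dominant periodicity of a radial-velocity trace; the band limits and the synthetic signal are assumptions, standing in for a full micro-Doppler analysis.

```python
import numpy as np

def cadence_from_velocity(v, fs):
    """Estimate cadence (steps/min) from a radar radial-velocity trace.

    Assumes the dominant periodicity of the detrended signal corresponds
    to the stepping frequency.
    """
    v = v - v.mean()
    spec = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 3.0)        # plausible stepping band (Hz)
    f_step = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_step

# Example: 1.8 Hz stepping motion sampled at 100 Hz -> ~108 steps/min
t = np.arange(0, 10, 0.01)
v = 0.3 * np.sin(2 * np.pi * 1.8 * t) \
    + 0.05 * np.random.default_rng(3).normal(size=t.size)
print(round(cadence_from_velocity(v, fs=100.0)))
```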


Keywords: human gait, radar, gait analysis, human movement assessment



"Avoidance of Collisions through Prospective Path Planning"

Edgar Scherstjanoi and Svetlana Wähnert

Abstract— Collaborative systems in which humans and robots share a common physical environment are part of the fourth industrial revolution. Since human physical integrity must not be compromised, collisions are often avoided by emergency stops of the machines, which in turn have a significant impact on the throughput of automated processes. Motion predictions enable a prospective navigation of robot movements. In this paper, the benefits of such a human-robot system are evaluated. Recorded motion data of a human subject are used to simulate a quasi-error-free motion prediction, which is integrated as a dynamic obstacle into the path planning of virtual robots. It is shown that a simple strategy significantly reduces safety risks with only a minor impact on the throughput of the robots.
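
A toy version of such a strategy is sketched below: the robot advances along its planned path only while the next waypoint stays outside a safety radius around the predicted human position, and otherwise waits instead of emergency-stopping. The radius and data layout are illustrative assumptions, not the paper's planner.

```python
import numpy as np

def safe_plan(robot_path, human_pred, safety_radius=0.5):
    """Advance along the planned path only while the next waypoint is
    outside the predicted safety zone around the human; otherwise wait.

    robot_path, human_pred: (T, 2) arrays of planned/predicted xy
    positions per time step.
    """
    plan, i = [], 0
    pos = robot_path[0]
    for t in range(len(human_pred)):
        if i + 1 < len(robot_path) and np.linalg.norm(
                robot_path[i + 1] - human_pred[t]) > safety_radius:
            i += 1
            pos = robot_path[i]        # proceed along the path
        plan.append(pos)               # else: wait in place this step
    return np.asarray(plan)

# Example: robot crosses the x-axis while the human walks down the y-axis
robot = np.column_stack([np.linspace(0, 2, 21), np.zeros(21)])
human = np.column_stack([np.ones(21), np.linspace(2, -2, 21)])
print(len(safe_plan(robot, human)))    # 21 steps, pausing near x = 1
```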

Keywords: collaborative systems, mobile robots, motion prediction, simulation