{"id":3349,"date":"2022-04-28T13:42:31","date_gmt":"2022-04-28T11:42:31","guid":{"rendered":"https:\/\/hybrid-societies.org\/?page_id=3349"},"modified":"2022-04-28T14:01:21","modified_gmt":"2022-04-28T12:01:21","slug":"bibsonomy-plugin","status":"publish","type":"page","link":"https:\/\/hybrid-societies.org\/en\/bibsonomy-plugin\/","title":{"rendered":"Bibsonomy Plugin"},"content":{"rendered":"<div id=\"trailimageid\"><img decoding=\"async\" id=\"ttimg\" src=\"https:\/\/hybrid-societies.org\/wp-content\/plugins\/bibsonomy-csl\/img\/loading.gif\"><\/div>  \n <ul class=\"bibsonomycsl_publications\"><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Fietzek, T., C. Ruff and F.H. Hamker, 2024: <span class=\"citeproc-title\">A Brain-Inspired Model of Reaching and Adaptation on the iCub Robot<\/span>. in: 2024 IEEE International Symposium on Robotic and Sensors Environments (ROSE).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-86a41dbf9584b3ef66aa7dd2de196c20\">Motor learning is an important property for embodied agents and a wide variety of approaches has been developed for humanoid robots. One strategy is to study the human brain and replicate the structure and functionality of the brain in neuro-computational models. These models are built at different levels of abstraction to perform a variety of tasks. The presented work integrates a recent system-level neuro-computational model incorporating the motor cortex basal ganglia loops, the cerebellum and central pattern generators with the iCub robot. The iCub is a humanoid robot explicitly developed for cognitive research and well suited for manipulation and other motor tasks due to its high degrees of freedom. To test the performance of the model, two experiments are conducted. In the first step, a reaching task is carried out. 
Here, the task is to perform movements from a fixed initial position to different goal locations in 3D space. The results are compared to the prior performance of the model to verify the model and code adjustments and the learning abilities. In the second setup, a simple motor adaptation task is realized, where an external shift perturbation is introduced to the arm motion, and the adaptation abilities of the model are investigated.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Mnif, M., S. Sahnoun, M. Kaaniche, B.B. Atitallah, A. Fakhfakh and O. Kanoun, 2024: <span class=\"citeproc-title\">Ultra-Fast Edge Computing Approach for Hand Gesture Classification Based on EIT Measurements<\/span>. pp. 1\u20137 in: 2024 IEEE International Symposium on Robotic and Sensors Environments (ROSE).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-97e5133ef2eaae2d99c6bf0493f9a69f\">Gesture-based robot control offers intuitive interaction between humans and robots, with applications ranging from industrial automation to assistive robotics. However, existing solutions face challenges in achieving real-time requirements while ensuring accurate gesture recognition. This paper presents a new edge computing-based approach for real-time control of robots using Electrical Impedance Tomography (EIT) measurements to classify hand gesture numbers in American Sign Language (ASL). Existing solutions for gesture recognition struggle to achieve real-time performance while maintaining accuracy and energy efficiency. This challenge becomes even greater in the case of EIT because of its relative complexity. We therefore focus on leveraging the capabilities of the edge device to implement Convolutional Neural Network (CNN) acceleration effectively. 
The proposed solution combines hardware-aware optimization techniques to achieve fast and accurate gesture recognition by enabling rapid inference while minimizing energy consumption on a low-power resource-constrained device with Tiny Machine Learning (TinyML) capabilities. The lightweight CNN model required only 10.2 s to train using the Keras library of TensorFlow and achieved an accuracy of 89.37% for 10 sign language classes, with only 66 \u03bcs taken to run inference on the hardware-accelerated microcontroller-based device.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Bressem, J., 2024: <span class=\"citeproc-title\">Systems of Gesture Coding and Annotation<\/span>. pp. 158\u2013181 in: A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Fricke, E., 2024: <span class=\"citeproc-title\">(in press). Negation multimodal: Geste und Rede, Text und Bild<\/span>. in: S. Kabatnik, L. B\u00fclow, M. Merten &#38; R. Mroczynski (Eds.), Pragmatik multimodal. Narr.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Medhioub, M., S.A. Bouhamed, I.K. Kallel, N. Derbel and O. Kanoun, 2024: <span class=\"citeproc-title\">Optimal feature subset deduction based on possibilistic feature quality classification and feature complementarity<\/span>. Expert Systems with Applications 249: 123353.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Theisen, M., C. Schie\u00dfl, W. Einh\u00e4user and G. 
Markkula, 2024: <span class=\"citeproc-title\">Pedestrians\u2019 road-crossing decisions: Comparing different drift-diffusion models<\/span>. International Journal of Human-Computer Studies 183: 103200.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Schmitz, I., H. Strauss, L. Reinel and W. Einh\u00e4user, 2024: <span class=\"citeproc-title\">Attentional cueing: Gaze is harder to override than arrows<\/span>. PLOS ONE 19: e0301136.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-49794fb6f0b4eefccdd8d61b2768ff8f\">Gaze is an important and potent social cue to direct others\u2019 attention towards specific locations. However, in many situations, directional symbols, like arrows, fulfill a similar purpose. Motivated by the overarching question how artificial systems can effectively communicate directional information, we conducted two cueing experiments. In both experiments, participants were asked to identify peripheral targets appearing on the screen and respond to them as quickly as possible by a button press. Prior to the appearance of the target, a cue was presented in the center of the screen. In Experiment 1, cues were either faces or arrows that gazed or pointed in one direction, but were non-predictive of the target location. Consistent with earlier studies, we found a reaction time benefit for the side the arrow or the gaze was directed to. Extending beyond earlier research, we found that this effect was indistinguishable between the vertical and the horizontal axis and between faces and arrows. In Experiment 2, we used 100% \u201ccounter-predictive\u201d cues; that is, the target always occurred on the side opposite to the direction of gaze or arrow. 
With cues without inherent directional meaning (color), we controlled for general learning effects. Despite the close quantitative match between non-predictive gaze and non-predictive arrow cues observed in Experiment 1, the reaction-time benefit for counter-predictive arrows over neutral cues is more robust than the corresponding benefit for counter-predictive gaze. This suggests that\u2013if matched for efficacy towards their inherent direction\u2013gaze cues are harder to override or reinterpret than arrows. This difference can be of practical relevance, for example, when designing cues in the context of human-machine interaction.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Jahn, G., 2024: <span class=\"citeproc-title\">Resilience engineering for highly automated driving, autonomous vehicles, and urban robotics: wizards and shepherds in hybrid societies<\/span>. Theoretical Issues in Ergonomics Science.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Einh\u00e4user, W., C.R. Neubert, S. Grimm and A. Bendixen, 2024: <span class=\"citeproc-title\">High visual salience of alert signals can lead to a counterintuitive increase of reaction times<\/span>. Scientific Reports 14: 8858.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-ea63b82e66674d4c40e3a133cb710bf8\">It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye-tracking using an abstract alert-signal paradigm. 
Participants performed a visual discrimination task (primary task) while occasional alert signals occurred in the visual periphery, accompanied by a congruently lateralized tone. Participants had to respond to the alert before proceeding with the primary task. When visual salience (contrast) or auditory salience (tone intensity) of the alert was increased, participants directed their gaze to the alert more quickly. This confirms that more salient alerts attract attention more efficiently. Increasing auditory salience yielded quicker responses for the alert and primary tasks, apparently confirming faster responses altogether. However, increasing visual salience did not yield similar benefits: instead, it increased the time between fixating the alert and responding, as high-salience alerts interfered with alert-task execution. Such task interference by high-salience alert signals counteracts their more efficient attentional guidance. The design of alert signals must be adapted to a \u201csweet spot\u201d that optimizes this stimulus-dependent trade-off between maximally rapid attentional orienting and minimal task interference.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Schwarz, S.A., C. Gaebert, B. Nieberle and U. Thomas, 2024: <span class=\"citeproc-title\">(in press). Virtually Guided Telemanipulation using Neural RRT-Based Planning<\/span>. pp. 256\u2013259 in: 4th IFSA Winter Conference on Automation, Robotics &amp; Communications for Industry 4.0 \/ 5.0 (ARCI\u2019 2024).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Weidensager, L., D. Krumm, D. Potts and S. Odenwald, 2024: <span class=\"citeproc-title\">Estimating vertical ground reaction forces from plantar pressure using interpretable high-dimensional approximation<\/span>. 
Sports Engineering 27: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-75167d7de5ebed4916173dbf0dbb5ed6\">Several studies have reported methods for capturing ground reaction forces in the field. However, these forces are usually measured indirectly with non-interpretable \u201cblack box\u201d models. Here we report an interpretable model with a function approximation algorithm. The model uses time series of plantar pressure from instrumented insoles to estimate vertical ground reaction forces. The study included data from 16 persons moving at different speeds on a two-belt treadmill equipped with force plates. The introduced regression model, based on a high-dimensional approximation using only low-dimensional variable interactions, demonstrated its ability to estimate vertical ground reaction forces. The normalized mean square error for velocities between 3 and 9 km h\u207b\u00b9 ranged from 10.6 to 24.4%. The accuracy of the presented approach, which can be used to analyze and interpret the learned model, was comparable to that reported in the literature. Furthermore, the evaluation of the learned model is particularly suitable for embedded and portable systems and, after a one-time calibration measurement, allows permanent and laboratory-independent measurement of vertical ground reaction forces.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Ghoul, B., B.B. Atitallah, R. Barioul, A. Fakhfakh and O. Kanoun, 2024: <span class=\"citeproc-title\">Exploring the Real-Time Capability of Electrical Impedance Tomography for Hand Sign Recognition in Robotic Hand Control<\/span>. pp. 
1\u20136 in: 2024 IEEE International Symposium on Robotic and Sensors Environments (ROSE).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-6ccbc41cfbd6046ff792b96f87c1b083\">Electrical impedance tomography (EIT) can monitor the influence of muscle activity on the conductivity distribution across the forearm and is therefore suitable for Hand Sign Recognition (HSR). However, the method is relatively complex, so realizing real-time classification based on EIT measurements is still considered a major challenge. In addition, the accuracy of EIT for HSR is highly dependent on the injection configuration parameters applied. In this study, we investigate the influence of the injection configuration on the achieved accuracy of the hand robot control and the feasibility of a real-time classification based on EIT measurements. To assess real-time feasibility, experimental data have been collected for a set of 36 American Sign Language (ASL) signs performed by one healthy subject. Moreover, a comparative study of the adjacent, opposite, and cross injection configurations has been conducted on 10 subjects performing the ASL set. An ambiguity study based on the measured signals was performed to assess the discrimination potential between the hand signs. The adjacent EIT configuration exhibited higher sensitivity in discriminating hand signs. For a real-time classification, we propose the use of a Support Vector Machine (SVM) model realizing an overall accuracy of 71.38%. To validate the system's real-time behavior, the classification result is visualized by a robotic hand reproducing 6 different hand signs from the 36-sign ASL set, reaching an accuracy of 80%. 
The results demonstrate the potential of EIT to facilitate real-time Hand Sign Recognition for robot control applications.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Katzenbach, C., C. Pentzold and P. Viejo Otero, 2024: <span class=\"citeproc-title\">Smoothing Out Smart Tech\u2019s Rough Edges: Imperfect Automation and the Human Fix<\/span>. Human-Machine Communication 7: 23\u201343.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Ghoul, B., B.B. Atitallah, S. Sahnoun, A. Fakhfakh and O. Kanoun, 2024: <span class=\"citeproc-title\">Comparative Study of Data Reduction Methods in Electrical Impedance Tomography For Hand Sign Recognition<\/span>. Vol. 1, pp. 654\u2013658 in: 2024 IEEE 7th International Conference on Advanced Technologies, Signal and Image Processing (ATSIP).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-f406776e2054ceaae24f5a9c154a8edb\">Several studies have shown that electrical impedance tomography (EIT) on the human forearm can be used to classify hand signs without the need for cameras or gloves. However, the amount of measured data is relatively high, leading to long execution times and high processing complexity. In this study, we investigate the influence of reducing the number of EIT measurements on the classification accuracy. Four different algorithms were used to halve the number of measurements, namely Principal Component Analysis (PCA), Chi-square tests, Minimum Redundancy Maximum Relevance (MRMR) and Laplacian scores. These algorithms were applied to a dataset of forearm EIT measurements corresponding to the practice of American Sign Language (ASL) in a healthy subject. 
The results show a slight drop in accuracy when using the quadratic support vector machine (SVM) classifier, with accuracy dropping from 95.3% to 93.4% in the best cases. A method based on a genetic algorithm (GA) was also used. This approach exploits the strengths of genetic algorithms in exploring a large solution space and converging on the near-optimal combination. It resulted in an increase in accuracy to 96.7%. The study demonstrated that the number of EIT measurements can be reduced without compromising overall accuracy.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kopnarski, L., J. Rudisch, D.F. Kutz and C. Voelcker-Rehage, 2024: <span class=\"citeproc-title\">Unveiling the invisible: receivers use object weight cues for grip force planning in handover actions<\/span>. Experimental Brain Research 242: 1191\u20131202.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-77c01d5c63ab8bcbe751891c46f58af4\">Handover actions are part of our daily lives. Whether it is the milk carton at the breakfast table or tickets at the box office, we usually perform these joint actions without much conscious attention. The individual actions involved in handovers, which have already been studied intensively, are grasping, lifting, and transporting objects. Depending on the object's properties, actors must plan their execution in order to ensure smooth and efficient object transfer. Therefore, anticipatory grip force scaling is crucial. Grip forces are planned in anticipation using weight estimates based on experience or visual cues. This study aimed to investigate whether receivers are able to correctly estimate object weight by observing the giver's kinematics. 
For this purpose, handover actions were performed with 20 dyads, manipulating the participant role (giver\/receiver) and varying the size and weight of the object. Due to the random presentation of the object weight and the absence of visual cues, the participants were unaware of the object weight from trial to trial. Kinematics were recorded with a motion tracking system and grip forces were recorded with customized test objects. Peak grip force rates were used as a measure of anticipated object weight. Results showed that receiver kinematics are significantly affected by object weight. The peak grip force rates showed that receivers anticipate object weight, but givers do not. This supports the hypothesis that receivers obtain information about the object weight by observing the giver's kinematics and integrating this information into their own action execution.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Hellara, H., R. Barioul, S. Sahnoun, A. Fakhfakh and O. Kanoun, 2024: <span class=\"citeproc-title\">Comparative Study of sEMG Feature Evaluation Methods Based on the Hand Gesture Classification Performance<\/span>. Sensors 24: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-c07e2a7875d35befe8c7fe0836cd5eb7\">Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, including 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. 
A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the Recursive Feature Elimination (RFE) method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigations have shown that selecting 65 and 75 features with the RFE method led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Fricke, E., 2024: <span class=\"citeproc-title\">Indexicality, Deixis, and Space in Gesture<\/span>. pp. 84\u2013111 in: A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies, Cambridge University Press.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Dettmann, A., A. Berkemeier, K. Felbel and A.C. 
Bullinger, 2024: <span class=\"citeproc-title\">Investigation of Implicit and Contextual Cues for the Facilitation of Cooperative Automated Driving: A Qualitative Analysis<\/span>. pp. 319\u2013326 in: Design Tools and Methods in Industrial Engineering III. ADM 2023. Lecture Notes in Mechanical Engineering.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Potts, D. and L. Weidensager, 2024: <span class=\"citeproc-title\">Variable transformations in combination with wavelets and ANOVA for high-dimensional approximation<\/span>. Advances in Computational Mathematics 50: 53.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-4c3b939b7ef0c95bb8590fb1e0660b13\">We use hyperbolic wavelet regression for the fast reconstruction of high-dimensional functions having only low-dimensional variable interactions. Compactly supported periodic Chui-Wang wavelets are used for the tensorized hyperbolic wavelet basis on the torus. With a variable transformation, we are able to transform the approximation rates and fast algorithms from the torus to other domains. We perform and analyze scattered data approximation for smooth but arbitrary density functions by using a least squares method. The corresponding system matrix is sparse due to the compact support of the wavelets, which leads to a significant acceleration of the matrix vector multiplication. For non-periodic functions, we propose a new extension method. A proper choice of the extension parameter together with the piecewise polynomial Chui-Wang wavelets extends the functions appropriately. In every case, we are able to bound the approximation error with high probability. 
Additionally, if the function has a low effective dimension (i.e., only interactions of a few variables), we qualitatively determine the variable interactions and omit ANOVA terms with low variance in a second step in order to decrease the approximation error. This allows us to suggest an adapted model for the approximation. Numerical results show the efficiency of the proposed method.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Jahn, K., O. Rehren, B. Kordyaka, S. Jansen, P. Ohler and G.D. Rey, 2023: <span class=\"citeproc-title\">(in press). Designing Computer-mediated Communication with Affective Technology to Increase Feedback Acceptance<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Sanseverino, G., M. Rothermel and S. Odenwald, 2023: <span class=\"citeproc-title\">(in press). A Wearable Sensor Network for Cyclists Safety in Mixed Traffic, a Pilot Study<\/span>. in: Proceedings of 2023 IEEE International Workshop on Metrology for Industry 4.0 &amp; IoT.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gesmann-Nuissl, D. and S. Meyer, 2023: <span class=\"citeproc-title\">Black Hole instead of Black Box? - The Double Opaqueness of Recommender Systems on Gaming Platforms and its Legal Implications<\/span>. pp. 55\u201382 in: S. Genovesi, K. Kaesling &#38; S. 
Robbins (Eds.), Recommender Systems: Legal and Ethical Issues, Springer.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Ragni, M. and G. Jahn, 2023: <span class=\"citeproc-title\">(in press). Challenges in modeling mental simulations of human drivers<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Hensch, A.-C., M. Beggiato and J.F. Krems, 2023: <span class=\"citeproc-title\">(in press). Comparing accepted gaps in traffic flow for conducting left turn actions in different intersection scenarios \u2013 A driving simulator study<\/span>. pp. 83\u201394 in: D. de Waard, V. Hagemann, L. Onnasch, A. Toffetti, D. Coelho, A. Botzer, M. de Angelis, K. Brookhuis &#38; S. Fairclough (Eds.), Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2023 Annual Conference.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Zuchna, N., B. Resch and S. Odenwald, 2023: <span class=\"citeproc-title\">(in press). Deep Learning for Human Activity Recognition with Plantar Pressure and Movement Data from an Instrumented Smart Sock System<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kullmann, M., J. Ehlers, E. Hornecker and L.L. 
Chuang, 2023: <span class=\"citeproc-title\">(in press). Flip of a Switch: Designing a kinetic dialogue system for switch interfaces<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Albrecht, S., R. Tamboli, S. Taubert, F. Meusel, M. Eibl, G.D. Rey and J. Schmied, 2023: <span class=\"citeproc-title\">(in press). Establishing Conversational Pedagogical Agents as Credible Knowledge Providers: the Case of Synthesized Italian English<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Lakshmanan, R. and A. Pichler, 2023: <span class=\"citeproc-title\">(in press). Expectiles In Risk Averse Stochastic Programming and Dynamic Optimization<\/span>. Pure and Applied Functional Analysis.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Brandenburg, S. and M. Th\u00fcring, 2023: <span class=\"citeproc-title\">(in press). Experiencing automated vehicles in real-life affects central aspects of drivers\u2019 User Experience<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Bendixen, A., T.G.G. Wegner and W. 
Einh\u00e4user, 2023: <span class=\"citeproc-title\">(in press). Facilitating Ethics Application and Review for Interdisciplinary Human-Participant Research via Software-Based Guidance and Standardization<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Rehm, M., G. Sanseverino, T. Caporaso, A. Lanzotti, S. Odenwald and A. Pichler, 2023: <span class=\"citeproc-title\">(in press). A Parametric Model to Assess Minnesota Dexterity Test with IMU<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Al-Hammouri, S., R. Barioul, K. Lweesy, M. Ibbini and O. Kanoun, 2023: <span class=\"citeproc-title\">(in press). Feature Selection for Hand Gesture Recognition using Six FSR Sensors Bracelet<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gaebert, C., C. Bandi and U. Thomas, 2023: <span class=\"citeproc-title\">(in press). Grasp Pose Generation for Human-to-Robot Handovers using Simulation-to-Reality Transfer<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. 
Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Ramalingame, R., B.B. Atitallah and O. Kanoun, 2023: <span class=\"citeproc-title\">(in press). A preliminary evaluation of a body-attached multisensor measurement framework for hand gesture recognition<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">M\u00fcller, M.R., 2023: <span class=\"citeproc-title\">Social displays \u2013 Die Fabrikation sozialweltlicher Zurechenbarkeit in der Robotik<\/span>. \u00d6sterreichische Zeitschrift f\u00fcr Soziologie 4: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Uhlmann, T., S.-A. Dadgar, F. Weigand and G. Brunnett, 2023: <span class=\"citeproc-title\">(in press). A novel Framework for the Generation of Synthetic Datasets with Applications to Hand Detection and Segmentation<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Pentzold, C., I. Rothe and A. Bischof, 2023: <span class=\"citeproc-title\">Living labs as third places: low-threshold participation, empowering hospitality, and the social infrastructuring of continuous presence<\/span>. Journal of Science Communication. 
Special Issue: Living labs under construction: paradigms, practices, and perspectives of public science communication and participatory science 22: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Pentzold, C. and A. Bischof, 2023: <span class=\"citeproc-title\">Achieving agency within imperfect automation: Working customers and self-service technologies<\/span>. Convergence: The International Journal of Research into New Media Technologies, 135485652311745.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Lippert, L., D. Potts and T. Ullrich, 2023: <span class=\"citeproc-title\">Fast hyperbolic wavelet regression meets ANOVA<\/span>. Numerische Mathematik 154: 155\u2013207.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Helmberg, C., 2023: <span class=\"citeproc-title\">A preconditioned iterative interior point approach to the conic bundle subproblem<\/span>. Mathematical Programming.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Zhu, H. and U. Thomas, 2023: <span class=\"citeproc-title\">Mechanical Design of a Biped Robot FORREST and an Extended Capture-Point-Based Walking Pattern Generator<\/span>. Robotics 12: 82.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Ivanova, M., C.R. Neubert, J. Schmied and A. 
Bendixen, 2023: <span class=\"citeproc-title\">ERP evidence for Slavic and German word stress cue sensitivity in English<\/span>. Frontiers in Psychology 14: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kutz, D.F., S. Fr\u00f6hlich, J. Rudisch, K. M\u00fcller and C. Voelcker-Rehage, 2023: <span class=\"citeproc-title\">Sex-dependent performance differences in curvilinear aiming arm movements in octogenarians<\/span>. Scientific Reports 13: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Goelz, C., E.-M. Reuter, S. Fr\u00f6hlich, J. Rudisch, B. Godde, S. Vieluf and C. Voelcker-Rehage, 2023: <span class=\"citeproc-title\">Classification of age groups and task conditions provides additional evidence for differences in electrophysiological correlates of inhibitory control across the lifespan<\/span>. Brain Informatics 10: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Rudisch, J., S. Fr\u00f6hlich, N.H. Pixa, D.F. Kutz and C. Voelcker-Rehage, 2023: <span class=\"citeproc-title\">Bimanual coupling is associated with left frontocentral network activity in a task-specific way<\/span>. European Journal of Neuroscience.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Lemmer, K., K. Jahn, A. Chen and B. Niehaves, 2023: <span class=\"citeproc-title\">One tool to rule? \u2013 A field experimental longitudinal study on the costs and benefits of mobile device usage in public agencies<\/span>. 
Government Information Quarterly 40: 101836.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Bandi, C. and U. Thomas, 2023: <span class=\"citeproc-title\">A New Efficient Eye Gaze Tracker for Robotic Applications<\/span>. pp. 6153\u20136159 in: Proceedings of 2023 IEEE International Conference on Robotics and Automation (ICRA).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kircheis, M. and D. Potts, 2023: <span class=\"citeproc-title\">Fast and direct inversion methods for the multivariate nonequispaced fast Fourier transform<\/span>. Frontiers in Applied Mathematics and Statistics 9: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Wehrmann, C., C. Pentzold, I. Rothe and A. Bischof, 2023: <span class=\"citeproc-title\">Introduction: Living Labs Under Construction<\/span>. Journal of Science Communication. Special Issue: Living labs under construction: paradigms, practices, and perspectives of public science communication and participatory science 22: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Schwarz, S.A. and U. Thomas, 2023: <span class=\"citeproc-title\">(in press). Human-Robot Interaction in Telemanipulation - An Overview<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. 
Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Dommel, P. and A. Pichler, 2023: <span class=\"citeproc-title\">Dynamic Programming For Data Independent Decision Sets<\/span>. Journal of Convex Analysis 30: 897\u2013916.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Borodaeva, Z., S. Winkler, J. Brade, P. Klimant and G. Jahn, 2023: <span class=\"citeproc-title\">Spatial updating in virtual reality for reproducing object locations in vista space\u2014Boundaries, landmarks, and idiothetic cues<\/span>. Frontiers in Psychology 14: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Schwarz, S.A., C. G\u00e4bert and U. Thomas, 2023: <span class=\"citeproc-title\">6D Dynamic Tool Compensation using Deep Neural Networks to improve Bilateral Telemanipulation<\/span>. in: 2023 IEEE International Conference on Robotics and Automation (ICRA), https:\/\/www.ais.uni-bonn.de\/ICRA2023AvatarWS\/contributions\/ICRA_2023_Avatar_WS_Schwarz.pdf.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Asbrock, F., J. Mayerl, M. Holz, H. Andersen and B. Maskow, 2023: <span class=\"citeproc-title\">(in press). \u201eAI Takeover. doesn\u2019t sound that bad!\u201d \u2013 Authoritarian ambivalence towards artificial intelligence<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. 
Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Helmberg, C., T. Hofmann, D. Krumm and S. Odenwald, 2023: <span class=\"citeproc-title\">(in press). A Computational Study on Supervised and Unsupervised Gait Data Segmentation<\/span>. Vol. 1 in: Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Teichmann, M., M. Ragni, J. Vitay, M. Gaedke and F.H. Hamker, 2023: <span class=\"citeproc-title\">(in press). Human-Machine Teaming Agents: A Future Perspective<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Siefkes, M., 2023: <span class=\"citeproc-title\">(in press). Multimodal Digital Humanities - Grounding digital research methods in multimodal linguistics and semiotics<\/span> (PhD dissertation).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Potinteu, A.E., N. Said, G. Jahn and M. Huff, 2023: <span class=\"citeproc-title\">(in press). Humans Helping Robots: The Role of Knowledge, Attitudes, and Context of Use<\/span>. 
in: Proceedings of 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2023).<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Schmitz, I. and W. Einh\u00e4user, 2023: <span class=\"citeproc-title\">Effects of interpreting a dynamic geometric cue as gaze on attention allocation<\/span>. Journal of Vision 23: 1\u201311.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Feder, S., J. Miksch, S. Grimm, J.F. Krems and A. Bendixen, 2023: <span class=\"citeproc-title\">(in press). Using event-related brain potentials to evaluate motor-auditory latencies in virtual reality<\/span>. Frontiers in Neuroergonomics 4: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kuschnereit, A., A. Bendixen, D. Mandl and W. Einh\u00e4user, 2023: <span class=\"citeproc-title\">(in press). Using Eye Tracking to Aid the Design of Human Machine Interfaces (HMIs) in Industrial Applications<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Tietz, S., 2023: <span class=\"citeproc-title\">(in press). What are you? Social Displays of \u2018Autonomous\u2019 Machines<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. 
Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Brade, J., N. Xie, S. Winkler, P. Klimant and G. Jahn, 2023: <span class=\"citeproc-title\">(in press). Where am I? How to measure and support spatial orientation in teleoperation<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gesmann-Nuissl, D., S. Meyer and Z. Robert, 2023: <span class=\"citeproc-title\">(in press). \u201cI act (consciously), therefore I am.\u201d \u2013 About Robotic Humanity, Members of the ie and Human Rights<\/span>. in: AI and the Human.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gesmann-Nuissl, D. and S. Meyer, 2023: <span class=\"citeproc-title\">Robot Land \u2013 \u30ed\u30dc\u30c3\u30c8\u30e9\u30f3\u30c9<\/span>. Zeitschrift f\u00fcr Innovations- und Technikrecht (InTeR) 3: 110\u2013119.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Burkhardt, M., J. Bergelt, L. G\u00f6nner, H. \u00dclo Dinkelbach, F. Beuth, A. Schwarz, A. Bicanski, N. Burgess and F.H. Hamker, 2023: <span class=\"citeproc-title\">A large-scale neurocomputational model of spatial cognition integrating memory with vision<\/span>. 
Neural Networks 167: 473\u2013488.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gesmann-Nuissl, D. and S. Meyer, 2023 (December): <span class=\"citeproc-title\">From Self to Avatar as Likeness: Legal Barriers and Opportunities<\/span>. HAI \u201923: Proceedings of the 11th International Conference on Human-Agent Interaction, 390\u2013391, ACM.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Meyer, B., O. Kanoun, G. Sanseverino, M. M\u00fcller, C. Pentzold, A. Bischof and F. Hamker, 2023: <span class=\"citeproc-title\">(in press). Hybrid societies: Concepts, challenges, and research agenda<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Mandl, S., J. Brade, M. Bretschneider, F. Asbrock, B. Meyer, G. Jahn, P. Klimant and A. Strobel, 2023: <span class=\"citeproc-title\">Perception of embodied digital technologies: robots and telepresence systems<\/span>. Human-Intelligent Systems Integration 5: 43\u201362.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-67031477004f77d62965b9c0ca2f417e\">Embodied Digital Technologies (EDTs) are increasingly populating private and public spaces. How EDTs are perceived in Hybrid Societies requires prior consideration. However, findings on social perception of EDTs remain inconclusive. 
We investigated social perception and trustworthiness of robots and telepresence systems (TPS) and aimed at identifying how observers' personality traits were associated with social perception of EDTs. To this end, we conducted two studies (N1 = 293, N2 = 305). Participants rated five different EDTs in a short video sequence of a space sharing conflict with a human in terms of anthropomorphism, sociability\/morality, activity\/cooperation, competence, and trustworthiness. The TPS were equipped with a tablet on which a person was visible. We found that the rudimentarily human-like TPS was perceived as more anthropomorphic than the automated guided vehicle, but no differences emerged in terms of other social dimensions. For robots, we found mixed results but overall higher ratings in terms of social dimensions for a human-like robot as opposed to a mechanical one. Trustworthiness was attributed differently to the EDTs only in Study 2, with a preference toward TPS and more human-like robots. In Study 1, we did not find any such differences. Personality traits were associated with attributions of social dimensions in Study 1; however, results were not replicable and thus associations remained ambiguous. With the present studies, we added insights on social perception of robots and provided evidence that social perception of TPS should be taken into consideration before their deployment.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kopiske, K., E.-M. Heinrich, G. Jahn, A. Bendixen and W. Einh\u00e4user, 2023: <span class=\"citeproc-title\">Multisensory cues for walking in virtual reality: humans combine conflicting visual and self-motion information to reproduce distances<\/span>. 
Journal of Neurophysiology 130: 1028\u20131040.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kopnarski, L., L. Lippert, J. Rudisch and C. Voelcker-Rehage, 2023: <span class=\"citeproc-title\">Predicting object properties based on movement kinematics<\/span>. Brain Informatics 10: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Hensch, A.-C., M. Beggiato and J.F. Krems, 2023: <span class=\"citeproc-title\">Comparing accepted gaps in traffic flow for conducting left turn actions in different intersection scenarios \u2013 A driving simulator study<\/span>. pp. 83\u201394 in: D. de Waard, V. Hagemann, L. Onnasch, A. Toffetti, D. Coelho, A. Botzer, M. de Angelis, K. Brookhuis &#38; S. Fairclough (Eds.), Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2023 Annual Conference, http:\/\/hfes-europe.org.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Bretschneider, M., B. Meyer and F. Asbrock, 2023: <span class=\"citeproc-title\">The impact of bionic prostheses on users\u2019 self-perceptions: A qualitative study<\/span>. Acta Psychologica 241: 104085.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Felbel, K., A. Dettmann and A.C. Bullinger, 2023: <span class=\"citeproc-title\">The Effect of Implicit Cues in Lane Change Situations on Driving Discomfort<\/span>. Vol. 14049 in: H. Kr\u00f6mker (Ed.), HCI in Mobility, Transport, and Automotive Systems. 
HCII 2023.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Uhlmann, T., S. Br\u00e4uer, F. Zaumseil and G. Brunnett, 2023: <span class=\"citeproc-title\">A Novel Inexpensive Camera-Based Photoelectric Barrier System for Accurate Flying Sprint Time Measurement<\/span>. Sensors 23: .<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-40b0f6c1b920a5983cc27898684c52f4\">This paper introduces a novel approach to addressing the challenge of accurately timing short distance runs, a critical aspect in the assessment of athletic performance. Electronic photoelectric barriers, although recognized for their dependability and accuracy, have remained largely inaccessible to non-professional athletes and smaller sport clubs due to their high costs. A comprehensive review of existing timing systems reveals that claimed accuracies beyond 30 ms lack experimental validation across most available systems. To bridge this gap, a mobile, camera-based timing system is proposed, capitalizing on consumer-grade electronics and smartphones to provide an affordable and easily accessible alternative. By leveraging readily available hardware components, the construction of the proposed system is detailed, ensuring its cost-effectiveness and simplicity. Experiments involving track and field athletes demonstrate the proficiency of the proposed system in accurately timing short distance sprints. Comparative assessments against a professional photoelectric cells timing system reveal a remarkable accuracy of 62 ms, firmly establishing the reliability and effectiveness of the proposed system. 
This finding places the camera-based approach on par with existing commercial systems, thereby offering non-professional athletes and smaller sport clubs an affordable means to achieve accurate timing. In an effort to foster further research and development, open access to the device\u2019s schematics and software is provided. This accessibility encourages collaboration and innovation in the pursuit of enhanced performance assessment tools for athletes.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Dommel, P. and A. Pichler, 2023: <span class=\"citeproc-title\">(in press). Uniform Function Estimators in Reproducing Kernel Hilbert Spaces<\/span>. Pure and Applied Functional Analysis.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Sanseverino, G., D. Krumm, R. Ramalingame, C. Malani, R. Barioul, O. Kanoun and S. Odenwald, 2023: <span class=\"citeproc-title\">(in press). Understanding the Capabilities of FMG and EMG Sensors in Recognizing Basic Gesture Components<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Rudolph, C., S.-A. Dadgar, M. Bretschneider, B. Meyer, F. Asbrock and G. Brunnett, 2023: <span class=\"citeproc-title\">(in press). Towards TechnoSapiens: Experiencing Embodied Technologies in Augmented Reality<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. 
Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kaden, S., C. Gaebert and U. Thomas, 2023: <span class=\"citeproc-title\">(in press). Towards Smooth Human-Robot Interaction using Potential Gradient-Based Sampling<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Hensch, A.-C., K. Felbel, M. Beggiato, A. Dettmann, J.F. Krems and A.C. Bullinger, 2023: <span class=\"citeproc-title\">(in press). Implicit driving cues for coordinating actions when sharing spaces<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Rauh, N., T. G\u00fcnther-Gommlich, K. Alyssa-Maas, C. Hollander and M. Beggiato, 2023: <span class=\"citeproc-title\">(in press). Influence of an innovative HMI for highly automated driving on trust<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Jansen, S., O. Rehren, K. Jahn, P. Ohler and G.D. Rey, 2023: <span class=\"citeproc-title\">(in press). Make it more Human! 
A Systematic Literature Review about the anthropomorphic processes on empathy<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Siefkes, M., E. Fricke, J. Bressem and A. Charoensit, 2023: <span class=\"citeproc-title\">(in press). Modelling intentional complexity in hybrid interaction scenarios beyond explicit and implicit communication<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kretzschmar, F., D. Uhlig, A. Pichler, O. Kanoun and R. Barioul, 2023: <span class=\"citeproc-title\">(in press). Nadaraya-Watson Time Series Early Classification for Gesture Recognition<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Lakshmanan, R., A. Pichler and D. Potts, 2023: <span class=\"citeproc-title\">Nonequispaced Fast Fourier Transform Boost for the Sinkhorn Algorithm<\/span>. ETNA - Electronic Transactions on Numerical Analysis 58: 289\u2013315.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Voelcker-Rehage, C., F.H. Hamker, J. Baladron, J. Rudisch, T. Fietzek and J. Vitay, 2023: <span class=\"citeproc-title\">(in press). 
Observational Learning in Humans and Machines<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gaebert, C., S. Kaden, B. Fischer and U. Thomas, 2023: <span class=\"citeproc-title\">Parameter Optimization for Manipulator Motion Planning using a Novel Benchmark Set<\/span>. pp. 9218\u20139223 in: Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), http:\/\/arxiv.org\/abs\/2302.14422.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><div class=\"bibsonomycsl_collapse bibsonomycsl_pub_abstract\" style=\"display:none;\" id=\"abs-65e7fe25b2a809b9b26d598c7bcd8118\">Sampling-based motion planning algorithms have been continuously developed for more than two decades. Apart from mobile robots, they are also widely used in manipulator motion planning. Hence, these methods play a key role in collaborative and shared workspaces. Despite numerous improvements, their performance can highly vary depending on the chosen parameter setting. The optimal parameters depend on numerous factors such as the start state, the goal state and the complexity of the environment. Practitioners usually choose these values using their experience and tedious trial and error experiments. To address this problem, recent works combine hyperparameter optimization methods with motion planning. They show that tuning the planner's parameters can lead to shorter planning times and lower costs. It is not clear, however, how well such approaches generalize to a diverse set of planning problems that include narrow passages as well as barely cluttered environments. In this work, we analyze optimized planner settings for a large set of diverse planning problems. 
We then provide insights into the connection between the characteristics of the planning problem and the optimal parameters. As a result, we provide a list of recommended parameters for various use-cases. Our experiments are based on a novel motion planning benchmark for manipulators which we provide at https:\/\/mytuc.org\/rybj.<\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Naik, D., M. Reber, T. Uhlmann and G. Brunnett, 2023: <span class=\"citeproc-title\">(in press). Person detection and differentiation in shopping scenarios<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kopnarski, L., J. Rudisch, D. Potts, C. Voelcker-Rehage and L. Lippert, 2023: <span class=\"citeproc-title\">(in press). Predicting Object Weights from Giver\u2019s Kinematics in Handover Actions<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kilian, W., A. P\u00fcschel, S. Doetz and S. Odenwald, 2023: <span class=\"citeproc-title\">(in press). Radar based gait analysis<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Haas, J.I., C. G\u00f6pfert and M. 
Gaedke, 2023: <span class=\"citeproc-title\">(in press). RDMflow: Managing Research Data Workflows with Micro-Frontends<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">M\u00fcller, M.R., 2023: <span class=\"citeproc-title\">(in press). Sociomorphic Machines. On a Shifting Boundary in the Taxonomy of Social Actors<\/span>. Zeitschrift f\u00fcr Semiotik 1-2.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">M\u00fcller, M. and S. A., 2023: <span class=\"citeproc-title\">(in press). Sociomorphic Technologies - On the Typology of Artificial Actors<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Zeuge, A., K. Lemmer, M. Klesel, B. Kordyaka, K. Jahn and B. Niehaves, 2023: <span class=\"citeproc-title\">(in press). To be or not to be stressed: Designing autonomy to reduce stress at work<\/span>. Work 1\u201315.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Meyer, S., M. M\u00fcller, A. Sonnenmoser, S. Mandl, A. Strobel and D. Gesmann-Nuissl, 2023: <span class=\"citeproc-title\">(in press). Towards Hybrid Personae?<\/span>. Vol. 1 in: B. Meyer, U. Thomas &#38; O. 
Kanoun (Eds.), Hybrid Societies - Humans Interacting with Embodied Technologies.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Maith, O., J. Baladron, W. Einh\u00e4user and F.H. Hamker, 2023: <span class=\"citeproc-title\">Exploration behavior after reversals is predicted by STN-GPe synaptic plasticity in a basal ganglia model<\/span>. iScience 26: 106599.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Henke, L., M. Guseva, K. Wagemans, D. Pischedda, J.-D. Haynes, G. Jahn and S. Anders, 2022: <span class=\"citeproc-title\">Surgical face masks do not impair the decoding of facial expressions of negative affect more severely in older than in younger adults<\/span>. Cognitive Research: Principles and Implications 7.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Gaidai, R., C. Goelz, K. Mora, J. Rudisch, E.-M. Reuter, B. Godde, C. Reinsberger, C. Voelcker-Rehage and S. Vieluf, 2022: <span class=\"citeproc-title\">Classification characteristics of fine motor experts based on electroencephalographic and force tracking data<\/span>. Brain Research 1792: 148001.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div class=\"csl-bib-body\">\n  <div class=\"csl-entry\">Kanoun, O., A. Bouhamed, R. Ramalingame, J.R. Bautista-Quijano, D. Rajendran and A. Al-Hamry, 2021: <span class=\"citeproc-title\">Review on Conductive Polymer\/CNTs Nanocomposites Based Flexible and Stretchable Strain and Pressure Sensors<\/span>. 
Sensors 21: 341.<\/div>\n<\/div><div style=\"clear: left\"> <\/div><\/div><\/li><li class=\"bibsonomycsl_pubitem\"><div class=\"bibsonomycsl_entry\"><div style=\"clear: left\"> <\/div><\/div><\/li><\/ul>","protected":false},"excerpt":{"rendered":"<p>Fietzek, T., C. Ruff and F.H. Hamker, 2024: A Brain-Inspired Model of Reaching and Adaptation on the iCub Robot. in: 2024 IEEE International Symposium on Robotic and Sensors Environments (ROSE). Motor learning is an important property for em-bodied agents and a wide variety of approaches has been developed for humanoid robots. One strategy is to [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"episode_type":"","audio_file":"","podmotor_file_id":"","podmotor_episode_id":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","filesize_raw":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","footnotes":""},"class_list":["post-3349","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages\/3349","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/comments?post=3349"}],"version-history":[{"count":1,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages\/3349\/revisions"}],"predecessor-version":[{"id":3350,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages\/3349\/revisions\/3350"}],"wp:attachment":[{"href":"https:\/\/hybrid-societies.org\/
en\/wp-json\/wp\/v2\/media?parent=3349"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}