{"id":3860,"date":"2023-02-16T21:24:15","date_gmt":"2023-02-16T20:24:15","guid":{"rendered":"https:\/\/hybrid-societies.org\/?page_id=3860"},"modified":"2023-03-07T20:10:35","modified_gmt":"2023-03-07T19:10:35","slug":"social-perception-anthropomorphisation","status":"publish","type":"page","link":"https:\/\/hybrid-societies.org\/en\/social-perception-anthropomorphisation\/","title":{"rendered":"Social Perception &#038; Anthropomorphisation"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" style=\"text-decoration:underline\">Wednesday 15:15 \u2013 16:15<br><br><br><br><\/h2>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Artificial Morality&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>Diana Armbruster, Sarah Mandl and Anja Strobel<\/em><\/h4>\n\n\n\n<p>Abstract\u2014&nbsp;Consensus on moral&nbsp;\u2018rights\u2019&nbsp;and&nbsp;\u2018wrongs\u2019&nbsp;is essential for a functioning society. Moral research has dealt with different aspects of moral cognition for decades, but has&nbsp;predominantly&nbsp;focused on human agents. 
With the increasing presence of artificial intelligence (AI) in society, questions about its ability to make moral decisions become relevant: can artificial agents make moral decisions, and if so, should they? We studied responses to moral choices made by human and artificial agents, using moral dilemmas describing either high-stakes or low-stakes conflict scenarios in which agents either acted on a suggested course of action or did not.<br>Participants rated the appropriateness of an agent\u2019s moral decision, as well as how much they blamed and trusted the agent. On average, humans were blamed more, with the exception of low-stakes scenarios in which agents did not act, where there was no difference in how much blame human and artificial agents received. Humans were also trusted more in \u2018inaction\u2019 conditions, while there was no difference in \u2018action\u2019 conditions, where trust was generally lower. In sum, the results point to intriguing differences in the evaluation of moral choices depending on whether the agent is human or artificial.<\/p>\n\n\n\n<p><br>Keywords: moral dilemmas, artificial moral agents, blame, trust<br><br><br><br><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Sociomorphic Technologies \u2013 On the Typology of Artificial Actors&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>Michael R. M\u00fcller and Anne Sonnenmoser<\/em><\/h4>\n\n\n\n<p>Abstract\u2014 The central question of our paper is not whether Embodied Digital Technologies (EDTs) are to be understood as social actors by analogy with humans. 
The question is rather how the contingency of machine behavior is communicatively processed in design, and how the mechatronics of machines is transformed into a socially accountable form. To describe the contingency of machine behavior, we draw on Heinz von Foerster\u2019s concept of the non-trivial machine. To discuss the actor status of machines, we use the concept of \u201csocial frameworks\u201d developed by Erving Goffman: while \u201cnatural frameworks\u201d aim at technical control of environmental phenomena, social frameworks operate with displays and expectations of behavior in order to provide orientation and to gain influence. Goffman\u2019s concept of social frameworks makes it possible to identify different types of non-human, technically developed social actors. Each of these types is\u2014as our comparative study shows\u2014characterized by a different principle of organizing the relationship between man and machine.<\/p>\n\n\n\n<p><br>Keywords: artificial actors, social displays, sociomorphism, anthropomorphism, micro-sociology<br><br><br><br><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Towards TechnoSapiens:<\/strong> <strong>Experiencing Embodied Technologies In Augmented Reality&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>Carsten Rudolph, Seyed-Amin Dadgar, Maximilian Bretschneider, Bertolt Meyer, Frank Asbrock and Guido Brunnett<\/em> <\/h4>\n\n\n\n<p>Abstract\u2014 The number of individuals using some form of bionic technology merged with their bodies (e.g., prostheses, exoskeletons) is likely to increase in the future. Such Embodied Digital Technologies (EDTs) will affect the psychological processes underlying social interaction, perception, and stereotyping, especially when users and non-users of these technologies meet and need to coordinate in public. 
Therefore, understanding the psychological processes underlying the use and perception of embodied technologies is important for designing such devices for smooth coordination. However, today\u2019s limited availability of both the technology and its users restricts the possibilities for conducting psychological studies in this area. We thus suggest employing Mixed Reality (MR) to simulate wearing those technologies and to simulate encounters between users in order to study the associated processes.<br>We call our system TechnoSapiens to emphasize the merging of human bodies with technology. We describe the major technological requirements, propose a psychological framework to assess the system\u2019s capabilities, and lay out the current state of the art in the respective fields.<\/p>\n\n\n\n<p>Keywords: Mixed Reality, Computer Vision, Human-Computer Interaction, Social Perception, Stereotyping<br><br><br><br><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Anthropomorphization<\/strong> <strong>of Robots and Attribution of Mental States: Hints for the Design of Hybrid Societies&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>F. Manzi, L. Miraglia, C. Di Dio, D. Massaro, G. Riva and A. Marchetti<\/em><\/h4>\n\n\n\n<p class=\"has-text-align-left\">Abstract\u2014 In the near future, humans and embodied technologies will share spaces and relationships in an inextricable manner that will lead to new forms of societies: hybrid societies. Within this scenario, humanoid social robots (HSRs) will be important in several contexts of daily life. HSRs can vary in their anthropomorphic physical features, often depending on the target user and the context of employment. Human interactions are underpinned by a crucial social skill, Theory of Mind (ToM). 
ToM is the ability to understand one\u2019s own and others\u2019 mental states (intentions, emotions, desires, beliefs), and to predict and interpret one\u2019s own and others\u2019 behaviors on the basis of such understanding. Several studies have shown that the attribution of mental states is also triggered by HSRs and that the degree of physical anthropomorphization of HSRs influences the attribution of mental qualities. The present study aimed to analyze, in adults, the influence of robots\u2019 anthropomorphization on the attribution of mental states, measured by the Attribution of Mental States Questionnaire (AMS-Q). For this purpose, we compared the attribution of sensory and mental states to humans and to Nextage, NAO, Romeo, and Geminoid \u2013 HSRs differing in their anthropomorphic physical features. Overall, participants attributed greater sensory and mental states to humans than to all robots. When comparing the robotic agents, participants generally attributed greater sensory and mental states to the Geminoid and Romeo robots \u2013 which are characterized by a higher degree of human-likeness \u2013 than to Nextage and NAO, which are less anthropomorphic. However, no differences were found between Nextage and NAO or between Romeo and Geminoid. Our results show that the attribution of mental states in adults is influenced by the degree of physical anthropomorphization of HSRs; in particular, the greater the anthropomorphization, the greater the attribution of mental qualities. This finding offers insights from both a psychological and a robotics perspective. In psychological terms, the ascription of mental qualities in adults is influenced by the anthropomorphization of the agent. 
From the perspective of robotics, these results show that the design of HSRs affects people\u2019s attitudes even before an interaction with a humanoid robot. In conclusion, this study highlights that the physical anthropomorphization of robots is a crucial aspect to consider in robot design to ensure that HSRs become effective social partners in hybrid societies.<br><br><br><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" style=\"text-decoration:underline\">Thursday 15:15 &#8211; 16:00<br><br><br><br><\/h2>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Eudaimonics Of Hybrid Societies&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>Jaime Banks<\/em><\/h4>\n\n\n\n<p>Abstract\u2014This short essay spotlights the Aristotelian notion of eudaimonia as an entry point to consider flourishing as an ideal, to explore permutations of flourishing in hybrid societies, and to consider an agent-agnostic conceptualization for thinking about (co-)flourishing.<\/p>\n\n\n\n<p><br>Keywords: social machines, well-being, flourishing, moral patiency, ontological categorization, anthropocentrism<br><br><br><br><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Make it more Human! A Systematic Literature Review of the Anthropomorphic Processes on Empathy&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>Sebastian Jansen, Oliver Rehren, Katharina Jahn, Peter Ohler and G\u00fcnter Daniel Rey<\/em> <\/h4>\n\n\n\n<p>Abstract\u2014 Today, interactions with Embodied Digital Technologies (EDTs) are becoming quite common. The concept of robots interacting with humans sounds simple, but there is an obstacle: while a robot has no problem interacting with humans, humans may not like the interaction. To improve acceptance and thus interaction, the EDT is often anthropomorphised. In addition to physical features, mental states such as empathy can also be attributed to an EDT and influence the interaction. 
However, it is important to understand the direction of empathy: does the human show empathy towards the EDT, or is empathy used as a feature to anthropomorphise the EDT? This systematic literature review aims to identify and evaluate the literature from recent years on the influence of empathy and anthropomorphism in interactions with EDTs, and to draw conclusions on how consistent the findings on anthropomorphic processes and empathy are. It is an initial review of the literature, intended as a foundation and preparation for a meta-analysis. We found that both empathy and anthropomorphism are mainly measured by self-report, and that the mental attribution of human abilities seems to be more significant than the visual appearance of an EDT.<\/p>\n\n\n\n<p><br>Keywords: Anthropomorphism, Empathy, EDT, Uncanny Valley<br><br><br><br><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>&#8220;Towards Hybrid Personae&#8221;<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><em>Stefanie Meyer, Michael R. M\u00fcller, Anne Sonnenmoser, Sarah Mandl, Anja Strobel and Dagmar Gesmann-Nuissl<\/em> <\/h4>\n\n\n\n<p>Abstract\u2014 When we think of future hybrid societies, we go far beyond conventional scenarios. We have long since moved away from humans working only by hand or with the help of machines; we already find technologized humans, humanized technology, and hybrid entities within society, and will even more so in the future. In this paper, we explore how all these entities fit into existing legal, psychological, and sociological constructions. We shed light on the attributions made to an acting entity by the respective disciplines and explore whether hybrid entities can possess personality, paving the way for hybrid personae. It becomes clear that at the core of conventional considerations is the biological person. 
Based on common definitions of person and personality, psychological considerations do not, up to now, grant personality to artificial actors. This explicitly excludes all kinds of technology where a human being is involved, irrespective of the level of technicity. Sociology can likewise designate machine entities as actors, yet the concept of person remains tied to the human being. In legal science, it is conceivable, and not excluded in advance, to extend the concept of personhood to hybrid entities as well, at least under certain conditions and from certain perspectives.<\/p>\n\n\n\n<p><br>Keywords: hybrid societies, person, humanized technology, technologized human, hybrid entities, hybrid personae<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Wednesday 15:15 \u2013 16:15 &#8220;Artificial Morality&#8221; Diana Armbruster, Sarah Mandl and Anja Strobel Abstract\u2014&nbsp;Consensus on moral&nbsp;\u2018rights\u2019&nbsp;and&nbsp;\u2018wrongs\u2019&nbsp;is essential for a functioning society. Moral research has dealt with different aspects of moral cognition for decades, but has&nbsp;predominantly&nbsp;focused on human agents. 
With the&nbsp;increasing&nbsp;presence of artificial intelligence (AI) in&nbsp;society, questions about its ability to make moral decisions become relevant: [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"episode_type":"","audio_file":"","podmotor_file_id":"","podmotor_episode_id":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","filesize_raw":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","footnotes":""},"class_list":["post-3860","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages\/3860","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/comments?post=3860"}],"version-history":[{"count":17,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages\/3860\/revisions"}],"predecessor-version":[{"id":4051,"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/pages\/3860\/revisions\/4051"}],"wp:attachment":[{"href":"https:\/\/hybrid-societies.org\/en\/wp-json\/wp\/v2\/media?parent=3860"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}