A key factor in the acceptance of robots as regular partners in human-centered environments is the appropriateness and predictability of their behavior. Human-human interaction is governed by customary rules that define how people should behave in different situations, thereby shaping their expectations. Socially compliant behavior is usually rewarded with group acceptance, while non-compliant behavior can have consequences up to isolation from a social group. Enabling robots to understand human social norms improves the naturalness and effectiveness of human-robot interaction and collaboration. Since social norms can differ greatly between cultures and social groups, it is essential that robots be able to learn and adapt their behavior based on feedback and observations from the environment.
This workshop aims to attract the latest research and expertise in human-robot interaction and collaboration at the intersection of rapidly growing communities, including social and cognitive robotics, machine learning, and artificial intelligence, and to present novel approaches to learning, producing, and evaluating human-aware robot behavior. It will also provide a venue to discuss the limitations of current approaches and future directions toward intelligent human-aware robot behaviors.
We invite regular papers (4-6 pages) and position papers (2 pages), in the official RO-MAN 2021 format, presenting or investigating novel approaches to learning, producing, and evaluating human-aware robot behavior. Suggested topics include, but are not limited to:
Submissions can be made through EasyChair.
Submission: July 11, 2021 (AoE, extended from June 30, 2021)
Notification: July 28, 2021 (AoE)
Camera ready: August 4, 2021 (AoE)
Workshop: August 12, 2021
|09:30 - 09:40|
|09:40 - 10:20|Invited Talk: Kerstin Dautenhahn
|10:20 - 11:00|Invited Talk: Brian Scassellati
|11:00 - 11:10|
|11:10 - 11:40|
|11:40 - 12:20|Invited Talk: Ana Paiva
|12:20 - 13:00|Invited Talk: Greg Trafton
|13:00 - 13:50|
|13:50 - 14:30|Invited Talk: Amit Kumar Pandey
|14:30 - 14:50|
|14:50 - 15:00|
|15:00 - 15:40|
|15:40 - 17:10|
|17:10 - 17:50|Invited Talk: Matthias Scheutz
|17:50 - 18:00|
Title: Reflections on expectations, social norms and anthropomorphic attributions in HRI
Abstract: My talk will discuss some of the issues involved in designing socially appropriate and socially acceptable robot behaviour. I will illustrate these points with research I have been involved in across different application areas, including home-assistance companion robots.
Title: From dyads to groups: What robots teach us about human group behavior
Abstract: In this short talk, I will argue four points. First, the social behavior of individuals varies significantly depending on whether they are interacting in dyads or in groups. Looking at the problem of machine perception of social cues, I will show examples of how systems that perceive social signals in dyads fail when applied to individuals in triads and richer social groups (featuring joint work with Iolanda Leite). Second, robots can exert social influence and peer pressure when acting in groups to alter the behavior of individual humans (featuring joint work with Nicole Salomons). Third, robots can influence not only the social behavior of humans during direct dyadic interactions but also human-to-human social behavior in mixed human-robot teams (featuring joint work with Sarah Sebo). Finally, by moving from a focus on dyadic interactions to group interactions, human-robot systems have become successful therapeutic options for children with autism.
Title: From social robotics to prosocial robotics: How robots can promote altruism
Abstract: Over the past few years, robots have become increasingly sophisticated in both hardware and software, paving the way for their more frequent use in a myriad of scenarios. New situations emerge in which robots not only have to interact with humans but are also required to respect social norms and collaborate with humans in a variety of situations. With this vision in mind, in this talk I will explore the question: how can we design robots that, immersed among humans, can promote collective and prosocial action in situations where it may not naturally arise? Prosocial behavior occurs when people and agents perform costly actions that benefit others. Acts such as helping others voluntarily, donating to charity, providing information, and sharing resources are all forms of prosocial behavior. So, can robots play a role in promoting such collective actions? Can robots respect human norms, going beyond being social to promoting prosocial acts? In the talk I will provide a set of examples addressing these questions and discuss some of the conditions that encourage humans to be more prosocial when interacting with robots, examining features that have been shown to promote prosociality.
Title: A cognitive model of social norms: Hypotheses and data
Abstract: In order for robots to follow social norms, we need to have a computational model of social norms. That model should be able to describe when and why people are likely to follow a particular social norm. I present a process model of social norms that describes the cognitive processes that people go through when determining whether to go along with a norm. I then compare the model's results to empirical data and generate novel predictions from that model. I then test those predictions on a completely different domain and dataset. Finally, I show how the model could run on our robots.
Title: Useful and Intelligent Bots for Society: The way forward
Abstract: Never before in history have robots, AI, and IoT all together been so close to us in our society. It is a revolution toward a new ecosystem of living, in which AI is now part of our lives and robots are catching up. The intention is to facilitate a smarter, healthier, safer, and happier life. Such AI beings are being used in education, healthcare, retail, entertainment, art, and science, and even to improve our understanding of ourselves as human beings. The talk will focus on some of these potential use cases, provide industrial and applied perspectives, and point to some of the challenges we need to address as a community. It will open the floor for discussion by highlighting the multidisciplinary nature of the domain and the need for a larger collaborative ecosystem.
Title: The need for robots to handle norm conflicts
Abstract: Moral competence is a fundamentally human trait that permeates all forms of human social life. There is mounting evidence that humans are likely to apply human norms to robots as well, especially if robots are seen as human-like; robots thus need to be aware of human norms, as they will otherwise violate them and be blamed for those violations. In this presentation, we focus on the challenging problem of handling norm conflicts as they arise in dilemma-like situations and demonstrate, with results from several empirical studies, the different normative expectations humans have of robots facing norm conflicts. We also discuss architectural and algorithmic approaches for dealing with norm conflicts in autonomous robots and show them in operation in two human-robot interaction scenarios.
|Cheng Lin, Jimin Rhim and AJung Moon|Mobile Robotic Telepresence: A New Social Hierarchy?
|Joshua Zonca and Alessandra Sciutti|Does human-robot trust need reciprocity?
|Danilo Gallo, Shreepriya Shreepriya, Tommaso Colombino, Maria Antonietta Grasso and Cecile Boulard|Considerations about Social Norms Compliance in a Shared Elevator Scenario
|Yigit Yildirim and Emre Ugur|Learning Social Navigation from Demonstrations with Deep Neural Networks
|Boyoung Kim, Ruchen Wen, Ewart J. de Visser, Qin Zhu, Tom Williams and Elizabeth Phillips|Investigating Robot Moral Advice to Deter Cheating Behavior
|Mark L. Ornelas, Gary B. Smith and Masoumeh Mansouri|Culture Is Not What You Think It Is: Diversifying the Foundations of Cultural Robotics
|Stephen Cranefield and Bastin Tony Roy Savarimuthu|Normative Multi-Agent Systems and Human-Robot Interaction
|Stavros Anagnou and Lola Cañamero|Towards an Affective Model of Norm Emergence and Adaptation
We are organizing a special issue in Interaction Studies on Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation. The special issue aims to attract the latest research on learning, producing, and evaluating human-aware robot behavior, following the TSAR 2021 workshop in providing a venue to discuss the limitations of current approaches and future directions toward intelligent human-aware robot behaviors.
The paper submission deadline is March 31, 2022, but papers can be submitted at any time and will be reviewed as soon as they are received.
Please check out the CFP for more information!