The aim of this workshop is to present rigorous scientific advances in social interaction and multimodal expression for socially intelligent robots, to address current challenges in this area, and to set a research agenda that fosters interdisciplinary collaboration among researchers in the domain.
Recent advances in robotics and artificial intelligence have contributed to the development of “socially interactive robots” that engage in social interactions with humans and exhibit certain human-like social characteristics, including the abilities to communicate through high-level dialogue, to perceive and express emotions using natural multimodal cues (e.g., facial expression, gaze, body posture), and to exhibit distinctive personalities and characters. Applications for socially interactive robots are plentiful: companions for children and the elderly, household assistants, partners in industry, guides in public spaces, educational tutors at school, and so on. Despite this progress, the social interaction and multimodal expression capabilities of robots still fall far short of the intuitiveness and naturalness required for uninformed users to interact with them, and to establish and maintain social relationships with them, in their everyday lives.
Social interaction and multimodal expression for socially intelligent robots remains a very active research area with significant practical challenges, owing to limitations both in technology and in our understanding of how different modalities must work together to convey human-like levels of social intelligence. Designing reliable and believable social behaviors for robots is an interdisciplinary challenge that cannot be approached from a pure engineering perspective alone. The human, social, and cognitive sciences play a primary role in the development and enhancement of social interaction skills for socially intelligent robots.
This workshop will bring together a multidisciplinary audience interested in the study of multimodal human-human and human-robot interactions to address challenges in these areas and to elaborate novel ways of advancing research in the field, based on theories of human communication and on empirical findings validated in human-robot interaction studies. We welcome contributions on both theoretical aspects and practical applications. The analysis of human-human interactions is of particular importance for understanding how humans send and receive social signals multimodally, through both parallel and sequential use of multiple modalities (e.g., eye gaze, touch, and vocal, body, and facial expressions). Results from human-robot interaction studies offer the opportunity to understand how uninformed interaction partners (e.g., children, the elderly) perceive the multimodal communication skills developed for social robots and how these skills influence the interaction process (e.g., regarding usability and acceptance).
Workshop topics include, but are not limited to:
Contributions of fundamental nature:
- Psychophysical studies and empirical research on multimodality
Technical contributions on multimodal interaction:
- Novel strategies for multimodal human-robot interaction
- Dialogue management using multimodal output
- Work focusing on novel modalities (e.g., touch)
Multimodal interaction evaluation:
- Evaluation and benchmarking of multimodal human-robot interactions
- Empirical HRI studies with (partially) functional systems
- Methodologies for the recording, annotation, and analysis of multimodal interactions
Applications for multimodal interaction with social robots:
- Novel application domains for multimodal interaction
- Position papers and reviews of the state of the art and ongoing research
Workshop date: September 2, 2017