About the Workshop

With this workshop we aim to bring together researchers from different disciplines around signal processing, machine learning, computer vision and robotics, with applications to multimodal and multi-sensor processing for human-computer and human-robot interaction (HCI/HRI). Over the last decades, an enormous number of socially interactive systems have been developed, making HCI/HRI a truly motivating challenge. The challenge has become even greater as such systems move out of the lab environment and into real use cases. The growing potential of multimodal interfaces in human-robot and human-machine communication setups has stimulated people's imagination and motivated significant research efforts in computer vision, speech recognition, multimodal sensing and fusion, and human-computer interaction, which nowadays lie at the heart of such interfaces.

In parallel, we are interested in applications of multimodal modeling, fusion and recognition seen from an interdisciplinary perspective, including assistive, clinical, affective and psychological aspects, for instance dealing with cognitive and/or mobility impairments. From the robotics perspective, designing and controlling robotic devices constitutes an emerging research field in its own right, and integrating such devices with multimodal machine learning models poses many challenging scientific and technological problems that must be addressed in order to build efficient and effective interactive robotic systems.

Call for Papers

Topics of Interest

The list of topics includes (but is not limited to):

  • Gesture recognition 

  • Action and complex activities recognition

  • Deep learning for multimodal recognition

  • Sequential modeling with deep learning

  • Spatiotemporal action localization

  • Sign language analysis and recognition

  • Facial expression modelling and recognition 

  • Human body pose estimation and tracking

  • Hand tracking 

  • 3D Face modelling and analysis 

  • Object detection and tracking for HCI/HRI

  • Vision-based Human-Computer/Human-Robot Interaction

  • Visual fusion of manual and non-manual cues

  • Multimodal emotion recognition

  • Affective computing 

  • Human behaviour analysis, modeling, and recognition

  • Multi-view subspace learning 

  • Multiview/multimodal invariance learning

  • Audio-visual behaviour analysis

  • Multimodal sensory processing and fusion

  • Multimodal HRI

  • Music and audio in multimodal applications

  • Multimodal HRI for educational applications

  • Physical human-robot interaction

  • Human-aware interaction control of assistive robots

  • Cognitive robot control architectures

  • Context and intention awareness

  • Corpora, datasets and annotations

  • Human-robot communication in assistive robotics

  • Elderly care mobility assistive robots

  • Assistive applications for children on the autism spectrum

  • Learning for Human-Robot interaction

  • Performance and task monitoring during Human-Robot interaction

  • Time series modeling and classification

Important Dates

  • September 2, 2017: Conference date

  • September 2, 2017: Registration deadline
