About the Event

One of the key skills for a robot is to physically interact with the environment in order to achieve basic tasks such as pick-and-place and sorting. For physical interaction, object grasping and manipulation capabilities are crucial, along with dexterity (e.g., to use objects and tools successfully) and high-level reasoning (e.g., to decide how to fulfill task requirements). Typical applications of robots have been welding, assembly, and pick-and-place in industrial settings. However, traditional industrial robots perform their assignments in cages and rely heavily on hard automation that requires pre-specified fixtures and time-consuming programming.

During recent years there have been several attempts to design robots that are inherently safe and can therefore work together with humans in mixed assembly lines outside their cages, or even replace human workers without major redesigns of the workplace. Remarkable recent product examples are ABB's dual-armed YuMi and Rethink Robotics' Baxter. Despite these technological achievements, robots still lack the perception, control, and cognitive abilities that would allow fluent interaction with humans, both cognitively and physically. One promising direction is to include the human in the loop, i.e., as an input agent that can influence the robot's decision-making process. The recent release of ISO/TS 15066 on collaborative robots demonstrates the intent to have humans and robots working closely together in the near future. In this direction, a key aspect to consider is that different roles can be implicitly assigned to the human in such a collaboration. Two types of involvement are usually envisioned: the human as a teacher and the human as a co-worker.

The former has been addressed in many ways, e.g., programming-by-demonstration approaches that derive robot controllers from observing humans, with the aim of adapting to novel cases with minimum expertise. A key issue is how to convey information from the human to the robot, namely the interface used to provide demonstrations. One common way is to record human motions directly, but this requires addressing the not-so-trivial problem of mapping human motion to robot motion. The two other main approaches, kinesthetic teaching (physically guiding the robot) and teleoperation (a human operator using the robot's sensors and effectors, e.g., through a haptic device), bypass this mapping issue by demonstrating the motion directly in the robot's configuration space. Kinesthetic teaching not only allows teaching of motion trajectories but can also facilitate teaching of the contact forces required to perform a manipulation task, or more generally interaction tasks that involve robots, humans, and objects.

The latter, the human as a co-worker, arises in scenarios where humans and robots share their workspace and actively collaborate through joint actions such as cooperative object manipulation and object exchange (handing over a tool or a manufactured piece). In both cases the robot should be able to predict the human's intention or motion and react accordingly in order to achieve the task at hand. The presence and involvement of the human in task execution introduces a degree of uncertainty and variation that is not typical of standard industrial environments and requires advanced multimodal interactive perception skills from the robot.
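To make the programming-by-demonstration idea above concrete, the following is a minimal illustrative sketch in plain NumPy, not any particular framework or method from the literature: several kinesthetic demonstrations, assumed to be recorded as joint-space trajectories, are resampled to a common time base and averaged into a nominal trajectory, with the per-step spread kept as a rough measure of how much variation the demonstrations tolerate. The function names and synthetic data are hypothetical and for illustration only; practical systems typically rely on richer models such as dynamic movement primitives or GMM/GMR.

# Toy sketch of deriving a reference motion from kinesthetic demonstrations:
# several recorded joint-space trajectories are resampled to a common time
# base and averaged. Illustrative only; not a full learning-from-demonstration
# method.
import numpy as np

def resample(traj, n_samples=100):
    """Linearly resample a (T, dof) trajectory to n_samples steps."""
    t_src = np.linspace(0.0, 1.0, len(traj))
    t_dst = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(
        [np.interp(t_dst, t_src, traj[:, d]) for d in range(traj.shape[1])]
    )

def learn_reference(demonstrations, n_samples=100):
    """Average several time-normalized demonstrations into one reference."""
    aligned = np.stack([resample(d, n_samples) for d in demonstrations])
    mean = aligned.mean(axis=0)  # nominal trajectory to track
    std = aligned.std(axis=0)    # per-step variability (could inform
                                 # stiffness or impedance scheduling)
    return mean, std

if __name__ == "__main__":
    # Three synthetic 2-DoF "demonstrations" of different lengths.
    rng = np.random.default_rng(0)
    demos = [
        np.column_stack([np.linspace(0, 1, n),
                         np.sin(np.linspace(0, np.pi, n))])
        + 0.01 * rng.standard_normal((n, 2))
        for n in (80, 120, 95)
    ]
    reference, variability = learn_reference(demos)
    print(reference.shape, variability.mean())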

This workshop focuses on human-in-the-loop robotic manipulation, which can involve different human roles, e.g., supervisory, cooperative, active, or passive. It aims to gather experts in human-in-the-loop robotic manipulation in order to identify synergies among the frameworks proposed for observing and modeling the human contribution to the task. We also want to identify, across the different approaches pertaining to the workshop topic, the critical challenges the community still needs to address to reach the envisioned close and fluent human-robot collaboration.

Call for Papers

Scope

Topics of interest:

  • Physical human-robot interaction

  • Cooperative object manipulation

  • Human-robot synchronization and hand-overs

  • Learning from demonstration

  • Adaptive control

  • Multimodal interactive perception

  • Teleoperation and haptic interfaces

  • Human motion prediction

  • Safety through mechanical and control design

  • Mapping from human to robotic skills

  • Grasp and manipulation planning

  • Learning for grasping and manipulation

  • Human involvement in industrial robotic applications, e.g., shared assembly lines

Important Dates
  • September 24, 2017: Conference date
  • September 24, 2017: Registration deadline
