Today, we are seeing the emergence of devices that incorporate sensing capabilities going beyond the traditional suite of hardware (e.g., touch or proximity sensing). These devices offer finer-grained contextual information, such as object recognition, and they vary widely in size, portability, embeddability, and form factor. Despite this diversity, these new-generation sensing approaches will inevitably unlock many of the ubiquitous, tangible, mobile, and wearable computing ecosystems that promise to improve people’s lives. These systems are brought together by a variety of technologies, including computer vision, radar (e.g., Google ATAP’s Project Soli), acoustic sensing, fiducial tagging, and, more generally, IoT devices embedded with computational capabilities.

Such systems open up a wide range of application spaces and novel forms of interaction. For instance, object-based interactions offer rich, contextual information that can power a wide range of user-centric applications (e.g., factory-line optimization and safety, automatic grocery checkout, and new forms of tangible interaction). Where and how these interactions are applied adds a further dimension to these applications (e.g., if a mobile device can detect which part of your body it is tapped against, it could launch a food app when tapped on your stomach). Although the last few years have seen a growing amount of research in this area, knowledge of the subject remains underexplored and fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by “Object Recognition for Input and Mobile Interaction”.
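As a purely illustrative sketch of the body-tap example above, the snippet below maps a recognized on-body tap location to a contextual action. Everything here is an assumption for illustration: `classify_tap` stands in for whatever sensing pipeline (acoustic, radar, IMU, etc.) labels the tap, and the label set and app names are hypothetical rather than drawn from any specific system discussed at the workshop.

```python
# Hypothetical sketch: route a recognized on-body tap location to a
# contextual action, as in the stomach-tap -> food-app example above.

TAP_ACTIONS = {
    "stomach": "food_app",    # the CFP's example interaction
    "wrist": "clock_app",
    "thigh": "fitness_app",
}

def classify_tap(sensor_frame: bytes) -> str:
    """Stub classifier; a real system would run a trained model on the
    raw sensor frame here. Returns a body-location label."""
    return "stomach"

def launch(app: str) -> None:
    """Stub dispatcher; a real system would hand off to the OS/app layer."""
    print(f"launching {app}")

def on_tap(sensor_frame: bytes) -> None:
    location = classify_tap(sensor_frame)
    app = TAP_ACTIONS.get(location)
    if app is not None:
        launch(app)

on_tap(b"\x00")  # demo: prints "launching food_app"
```

The key design point such systems share is the separation between the recognition stage (which sensing modality produced the label) and the interaction mapping, so the same contextual policy can sit atop vision, radar, or acoustic pipelines.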
Conference date: September 4, 2017