Labelling user data is a central part of the design and evaluation of pervasive systems that aim to support the user through situation-aware reasoning. It is essential both in designing and training the system to recognize and reason about the situation, either through the definition of a suitable situation model in knowledge-driven applications, or through the preparation of training data for learning tasks in data-driven models. Hence, the quality of annotations can have a significant impact on the performance of the derived systems.
Labelling is also vital for validating and quantifying the performance of applications.
With pervasive systems relying increasingly on large datasets for designing and testing models of users’ activities, the process of data labelling is becoming a major concern for the community. This also reflects the increasing need for intelligent, interactive annotation tools, which can reduce the manual annotation effort and improve annotation performance and quality on large datasets.
The topics of interest include, but are not limited to:
methods and intelligent tools for annotating user data for pervasive systems
processes of and best practices in annotating user data
methods towards an automation of the annotation
improving and evaluating the annotation quality
ethical issues concerning the annotation of user data
beyond the labels: ontologies for semantic annotation of user data
high-quality and re-usable annotation for publicly available datasets
impact of annotation on a ubiquitous and intelligent system’s performance
building classifier models that are capable of dealing with multiple (noisy) annotations and/or making use of taxonomies/ontologies
the potential value of incorporating models of the annotators themselves into predictive models (see the sketch after this list)
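As a concrete illustration of the last two topics, the following is a minimal sketch of one common approach, a simplified Dawid-Skene EM estimator that fuses multiple noisy annotations while jointly modelling each annotator's reliability as a confusion matrix. It is not prescribed by this call; the function and variable names are illustrative, and it assumes NumPy and that every item carries at least one label.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """Fuse noisy annotations via a simplified Dawid-Skene EM.

    labels: (n_items, n_annotators) int array of class indices,
            with -1 where an annotator did not label an item.
    Returns a posterior over true labels per item and one
    confusion matrix per annotator.
    """
    n_items, n_annotators = labels.shape

    # Initialise the posterior with per-item vote counts.
    post = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for l in labels[i]:
            if l >= 0:
                post[i, l] += 1
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and per-annotator confusion matrices
        # (rows: true class, columns: observed label).
        prior = post.mean(axis=0)
        conf = np.full((n_annotators, n_classes, n_classes), 1e-6)
        for a in range(n_annotators):
            for i in range(n_items):
                l = labels[i, a]
                if l >= 0:
                    conf[a, :, l] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute the posterior over true labels.
        log_post = np.tile(np.log(prior + 1e-12), (n_items, 1))
        for a in range(n_annotators):
            for i in range(n_items):
                l = labels[i, a]
                if l >= 0:
                    log_post[i] += np.log(conf[a, :, l])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post, conf

# Toy usage: three annotators on five binary-labelled items,
# the third annotator being noticeably noisier than the others.
labels = np.array([[0, 0, 1],
                   [1, 1, 0],
                   [0, 0, 0],
                   [1, 1, 1],
                   [0, 0, 1]])
post, conf = dawid_skene(labels, n_classes=2)
print(post.argmax(axis=1))  # inferred true labels
```

Compared with plain majority voting, a model of this kind down-weights unreliable annotators automatically, which is one way a classifier can both deal with multiple (noisy) annotations and exploit an explicit model of the annotators.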
Important dates:
Submission deadline: March 19, 2018
Notification of acceptance: March 23, 2018