DRA. LUZ ABRIL TORRES MÉNDEZ
Research Professor
PUBLICATIONS
To see the publications of the whole Robótica y Manufactura Avanzada group, see: Publicaciones RYMA
González-García, Luis C.; Torres-Mendez, Luz Abril; Martínez, Julieta; Sattar, Junaed; Little, James: Are You Talking to Me? Detecting Attention in First-Person Interactions. Proceedings Article. In: pp. 137-142, 2015, ISSN: 2308-4197.
@inproceedings{GonzalezGarcia2015,
title = {Are You Talking to Me? Detecting Attention in First-Person Interactions},
author = {Gonz\'{a}lez-Garc\'{i}a, Luis C. and Torres-Mendez, Luz Abril and Mart\'{i}nez, Julieta and Sattar, Junaed and Little, James},
url = {https://www.researchgate.net/publication/274065286_Are_You_Talking_to_Me_Detecting_Attention_in_First-Person_Interactions},
issn = {2308-4197},
year = {2015},
date = {2015},
pages = {137--142},
abstract = {This paper presents an approach for a mobile robot to detect the level of attention of a human in first-person interactions. Determining the degree of attention is an essential task in day-to-day interactions. In particular, we are interested in natural Human-Robot Interactions (HRIs) during which a robot needs to estimate the focus and the degree of the user's attention to determine the most appropriate moment to initiate, continue and terminate an interaction. Our approach is novel in that it uses a linear regression technique to classify raw depth-image data according to three levels of user attention on the robot (null, partial and total). This is achieved by measuring the linear independence of the input range data with respect to a dataset of user poses. We overcome the problem of time overhead that a large database can add to real-time Linear Regression Classification (LRC) methods by including only the feature vectors with the most relevant information. We demonstrate the approach by presenting experimental data from human-interaction studies with a PR2 robot. Results demonstrate our attention classifier to be accurate and robust in detecting the attention levels of human participants.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
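The abstract describes Linear Regression Classification: a query depth-feature vector is assigned to the attention class (null, partial, total) whose stored pose vectors best reconstruct it by least squares, i.e. the class it is most nearly linearly dependent on. A minimal sketch of that scheme, assuming numpy and hypothetical names (this is an illustration, not the authors' implementation):

```python
import numpy as np

def lrc_classify(query, class_matrices):
    """Linear Regression Classification (sketch).

    query           -- 1-D feature vector extracted from a depth image
    class_matrices  -- dict mapping a label (e.g. "null", "partial",
                       "total") to a matrix whose columns are training
                       feature vectors for that class

    Fits the query against each class subspace by least squares and
    returns the label with the smallest reconstruction residual.
    """
    best_label, best_residual = None, np.inf
    for label, X in class_matrices.items():
        beta, *_ = np.linalg.lstsq(X, query, rcond=None)  # least-squares coefficients
        residual = np.linalg.norm(query - X @ beta)       # distance to class subspace
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

The real-time concern raised in the abstract corresponds here to the width of each class matrix: keeping only the most informative feature vectors per class keeps every least-squares solve small.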
Av. Industrial Metalurgia #1062, Parque Ind. Ramos Arizpe, Ramos Arizpe, Coah. C.P. 25900, México. Tel. +52 (844) 438-9600