Models for gaze tracking systems

Citation data:

EURASIP Journal on Image and Video Processing, ISSN: 1687-5176, Vol: 2007, Pages: 1-16

Publication Year:
2007
Usage: 339 (Abstract Views: 307, Full Text Views: 27, Link-outs: 5)
Captures: 60 (Readers: 55, Exports-Saves: 5)
Citations: 43 (Citation Indexes: 43)
DOI:
10.1155/2007/23570
Author(s):
Arantxa Villanueva; Rafael Cabeza
Publisher(s):
Springer Nature
Tags:
Computer Science; Engineering
Abstract:
One of the most confusing aspects a newcomer to gaze tracking technology encounters is the wide variety of available systems that, with quite different hardware, all address the same problem: determining the point the subject is looking at. A calibration process generally allows nonintrusive trackers based on very different hardware and image features to be adjusted to the subject. The drawback of this simple procedure is that, although it lets the system work properly, it does so at the expense of any control over the intrinsic behavior of the tracker. The objective of this article is to overcome this obstacle by exploring more deeply the elements of a video-oculographic system (eye, camera, lighting, and so forth) from a purely mathematical and geometrical point of view. The main contribution is to determine the minimum number of hardware elements and image features needed to compute the point the subject is looking at. A model based on the pupil contour and multiple lighting has been constructed and successfully tested with real subjects. In addition, the theoretical aspects of video-oculographic systems are thoroughly reviewed in order to build a basis for further studies.
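The calibration step the abstract refers to is, in many nonintrusive trackers, a regression fitted per subject: the user fixates a grid of known screen targets while an image feature (often a pupil-glint vector) is recorded, and a polynomial mapping from feature to screen point is fitted by least squares. The sketch below illustrates that generic procedure only; it is not the geometric model the article itself develops, and all data and function names are synthetic assumptions for illustration.

```python
import numpy as np

def design_matrix(f):
    """Second-order polynomial terms of 2-D features f, shape (n, 2).

    This 6-term basis (1, x, y, xy, x^2, y^2) is a common choice for
    subject calibration; the article's point is precisely that such
    black-box fits hide the tracker's intrinsic geometry.
    """
    x, y = f[:, 0], f[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def calibrate(features, targets):
    """Least-squares fit of screen coordinates against polynomial terms."""
    A = design_matrix(features)
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs

def gaze_point(coeffs, feature):
    """Map a single feature vector to an estimated screen point."""
    return (design_matrix(feature[None, :]) @ coeffs)[0]

# Synthetic calibration session: 9 fixation targets, noise-free features
# generated by a known (here linear) feature-to-screen map.
rng = np.random.default_rng(0)
feats = rng.uniform(-1.0, 1.0, size=(9, 2))
targets = np.column_stack([0.5 + 0.8 * feats[:, 0], 0.4 * feats[:, 1]])

C = calibrate(feats, targets)
pred = gaze_point(C, np.array([0.2, -0.3]))  # estimated point of regard
```

Because the synthetic map is linear and noise-free, the fitted polynomial reproduces it exactly here; with real image features the residual error reflects everything the calibration silently absorbs (camera pose, corneal geometry, lighting), which is the lack of control the article sets out to analyze.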