Friday, April 16, 2010

16.04: Resumé

Tried to understand Kumar's saccade detection algorithm, but cannot quite grasp it.
Perhaps reading Identifying Fixations and Saccades in Eye-Tracking Protocols might help.

Found a fancy blog theme, tho' ;-)

Wednesday, April 14, 2010

Fixation Smoothing and Saccade Detection Algorithm

Kumar, Klingner et al. – Improving the Accuracy of Gaze
http://portal.acm.org/citation.cfm?id=1344488

This article describes an algorithm that determines from the gaze data whether the user is starting a real saccade or merely making a microsaccade. If it is a microsaccade, the current gaze sample is discarded, which gives better stability during a fixation.

The algorithm addresses the problem of eye noise:
fixations are not stable and the eye jitters during fixations due to drift, tremor and involuntary micro-saccades [Yarbus 1967]. This gaze jitter, together with the limited accuracy of eye trackers, results in a noisy gaze signal

Since the analysis runs in real time, a small lag is unavoidable: a gaze sample is only processed once the following sample has arrived, and only then is the mouse pointer updated (or not). This results in a one-data-sample lag.

Error rates with gaze pointing and selection are higher than with the mouse:
In the paper describing EyePoint [Kumar et al. 2007b], it was reported that while the speed of a gaze-based pointing technique was comparable to the mouse, error rates were significantly higher.

Relevance of this article
In this paper we present three methods for improving the accuracy and user experience of gaze-based pointing: an algorithm for realtime saccade detection and fixation smoothing, an algorithm for improving eye-hand coordination, and the use of focus points. These methods boost the basic performance for using gaze information in interactive applications and in our applications made the difference between prohibitively high error rates and practical usefulness of gaze-based interaction.

Method of the algorithm (TODO: I NEED TO UNDERSTAND THIS):
To smooth the data from the eye tracker in real-time, it is necessary to determine whether the most recent data point is the beginning of a saccade, a continuation of the current fixation or an outlier relative to the current fixation. We use a gaze movement threshold, in which two gaze points separated by a Euclidean distance of more than a given saccade threshold are labeled as a saccade. This is similar to the velocity threshold technique described in [Salvucci and Goldberg 2000], with two modifications to make it more robust to noise. First, we measure the displacement of each eye movement relative to the current estimate of the fixation location rather than to the previous measurement. Second, we look ahead one measurement and reject movements over the saccade threshold which immediately return to the current fixation.
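To get a handle on this, here is a minimal Python sketch of the logic as I currently read it; the threshold value, the simple running-mean smoothing and all names are my own assumptions and not taken from the paper:

import math

SACCADE_THRESHOLD = 40.0  # pixels; assumed value, the paper derives its own


class FixationSmoother:
    def __init__(self):
        self.fixation = None   # current fixation estimate (x, y)
        self.n_points = 0      # samples folded into the estimate
        self.pending = None    # look-ahead buffer (causes the one-sample lag)

    def _distance(self, a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def _fold_in(self, point):
        # Smooth by averaging the new point into the current fixation estimate.
        if self.fixation is None:
            self.fixation, self.n_points = point, 1
        else:
            n = self.n_points + 1
            self.fixation = ((self.fixation[0] * self.n_points + point[0]) / n,
                             (self.fixation[1] * self.n_points + point[1]) / n)
            self.n_points = n
        return self.fixation

    def feed(self, point):
        """Feed one gaze sample; returns the smoothed fixation for the
        previous sample, or None while the look-ahead buffer fills."""
        if self.pending is None:
            self.pending = point
            return None
        prev, self.pending = self.pending, point
        if self.fixation is None or self._distance(prev, self.fixation) <= SACCADE_THRESHOLD:
            return self._fold_in(prev)   # continuation of the current fixation
        if self._distance(point, self.fixation) <= SACCADE_THRESHOLD:
            return self.fixation         # outlier: the next sample returns to the fixation
        # Genuine saccade: start a new fixation at the previous sample.
        self.fixation, self.n_points = prev, 1
        return self.fixation

Feeding the tracker samples one by one would then yield a smoothed fixation position that only jumps when a real saccade is detected, which would explain the one-data-sample lag mentioned above.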

Article with Gaze Hotspot Navigation

Eye-S: a Full-Screen Input Modality for Pure Eye-based Communication

ACM Link

Abstract
To date, several eye input methods have been developed, which, however, are usually designed for specific purposes (e.g. typing) and require dedicated graphical interfaces. In this paper we present Eye-S, a system that allows general input to be provided to the computer through a pure eye-based approach. Thanks to the “eye graffiti” communication style adopted, the technique can be used both for writing and for generating other kinds of commands. In Eye-S, letters and general eye gestures are created through sequences of fixations on nine areas of the screen, which we call hotspots. Being usually not visible, such sensitive regions do not interfere with other applications, that can therefore exploit all the available display space.
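As a note to myself, a minimal Python sketch of how such hotspot sequences could be recognised; the 3x3 grid, the screen size and the gesture table are invented for illustration and are not taken from the paper:

SCREEN_W, SCREEN_H = 1280, 1024  # assumed screen size

# Hypothetical gesture table: hotspot indices 0..8, row-major (0 = top-left).
GESTURES = {
    (0, 2, 8): "L",        # made-up example sequence
    (6, 0, 2, 8): "Z",     # made-up example sequence
}

def hotspot(x, y):
    """Map a fixation point to one of the nine (invisible) hotspot areas."""
    col = min(int(x / (SCREEN_W / 3)), 2)
    row = min(int(y / (SCREEN_H / 3)), 2)
    return row * 3 + col

def recognise(fixations):
    """fixations: list of (x, y) fixation centres, in temporal order."""
    sequence = []
    for x, y in fixations:
        h = hotspot(x, y)
        if not sequence or sequence[-1] != h:   # collapse repeated hotspots
            sequence.append(h)
    return GESTURES.get(tuple(sequence))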


The Single Gaze Gestures article refers to this one and suggests an improvement:
In general the research has shown this approach to gaze gestures - where the complexity and range of gestures required for all letters and text editing functions, causes a heavy physiological and cognitive load – to be problematic. SSGs are an attempt at simplifying gestures to make them robust and reliable as well as keeping the cognitive load low.

Article: Single gaze gestures

This article contains ideas that might be relevant for my Gaze-Gestures web-browsing feature:
ACM Link

Emilie Mollenbach, Martin Lillholm et al. 2010 – Single gaze gestures

Abstract:
This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were evaluated on two eye tracking devices (Tobii/QuickGlance (QG)). The main findings show that there is a significant difference in selection times between long and short SGGs, between vertical and horizontal selections, as well as between the different tracking systems.
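A quick Python sketch of how one such single point-to-point gesture might be classified by direction and length; the threshold and the labels are my own assumptions, not values from the paper:

import math

LONG_THRESHOLD = 500.0  # pixels; assumed split between "short" and "long"

def classify_sgg(start, end):
    """Classify a single saccade from its start and end point."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    direction = "horizontal" if abs(dx) >= abs(dy) else "vertical"
    size = "long" if length >= LONG_THRESHOLD else "short"
    return f"{size} {direction} SGG"

# e.g. classify_sgg((100, 500), (900, 520)) -> "long horizontal SGG"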

The article discusses various projects that also use gaze gestures or something similar, among them text input through gestures, i.e. letters are created or selected by gestures.

Problem with gestures:
Cognitively it may be difficult to remember a large number of gestures and physiologically it may be difficult to create and complete them [Porta et al. 2008].

Goal of the evaluation:
This experiment was designed to explore the following three hypotheses. Firstly, does frame-rate and automatic smoothing on eye trackers have an effect on either the selection completion time or the selection error rate? - Secondly, is there a difference in completing selection saccades in different directions, i.e. horizontal and vertical? – And thirdly, is there a difference in the completion times of gestures depending on various lengths of the eye movements across a screen?

Having read the article once, it is not quite clear to me what the authors mean by "gaze gestures".
Do they have the same thing in mind as I do?

Let's go!

Blog created! Here we go!