
CalibRead: Unobtrusive Eye Tracking Calibration from Natural Reading Behavior

Updated: Nov 22

May 2023 ‑ February 2024

Chang Liu, Xiangyang Wang, Chun Yu*, Yingtian Shi, Chongyang Wang, Ziqi Liu, Chen Liang, Yuanchun Shi


In this paper, we present a novel, unobtrusive calibration method that leverages the association between eye movements and text to calibrate eye-tracking devices during natural reading. The calibration process is an iterative sequence of three steps: (1) matching gaze points from the eye-tracking data with text grids and boundary grids, (2) computing a weight for each point pair, and (3) optimizing the calibration parameters that best align the point pairs through gradient descent. During this process, we assume that, from a holistic perspective, the gaze covers the text area, effectively filling it after sufficient reading; meanwhile, at a granular level, gaze duration is influenced by the semantic and positional features of the text. Therefore, factors such as the presence of empty space, the positional features of tokens, and the depth of constituency parsing play important roles in calibration. Our method achieves accuracy comparable to the traditional 7-point method after naturally reading three texts, which takes about 51.75 seconds. Moreover, we analyze the impact of different holistic and granular features on the calibration results.
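To make the iterative loop concrete, here is a minimal Python sketch of the three steps. This is not the authors' implementation: all function names are hypothetical, the per-pair weights of step (2) are taken as given, and a closed-form weighted least-squares solve could replace the gradient descent.

```python
import numpy as np

def match_points(gaze, grid_centers):
    """Step (1): match each gaze point to its nearest grid center."""
    # gaze: (N, 2) screen coordinates; grid_centers: (M, 2)
    d = np.linalg.norm(gaze[:, None, :] - grid_centers[None, :, :], axis=-1)
    return grid_centers[d.argmin(axis=1)]          # (N, 2) matched targets

def fit_affine(gaze, targets, weights, lr=1e-2, steps=500):
    """Step (3): gradient descent on a 2x3 affine matrix minimizing the
    weighted squared distance between point pairs."""
    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])                # start from the identity
    g = np.hstack([gaze, np.ones((len(gaze), 1))]) # homogeneous coords, (N, 3)
    w = weights[:, None]
    for _ in range(steps):
        residual = g @ A.T - targets               # (N, 2)
        A -= lr * 2.0 * (w * residual).T @ g / len(gaze)
    return A

def calibrate(gaze, grid_centers, weights, iterations=10):
    """Alternate matching and fitting. In the full method the weights of
    step (2) would be recomputed each iteration from gaze density and a
    duration predictor; they are held fixed here for brevity."""
    for _ in range(iterations):
        targets = match_points(gaze, grid_centers)
        A = fit_affine(gaze, targets, weights)
        gaze = np.hstack([gaze, np.ones((len(gaze), 1))]) @ A.T
    return gaze                                    # calibrated gaze points
```

An affine matrix has only six parameters, so a modest number of well-weighted point pairs suffices, and initializing at the identity reflects the assumption that the uncalibrated tracker is only mildly distorted.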



Overlay of all eye-movement traces shows clearly distinguishable rows, indicating that eye movement during reading is strongly correlated with the rows of text.


Calibration Workflow. (a) Point Matching: We match each gaze point with its closest text grid center and boundary grid center. (b) Computing Weights: For each point pair, we calculate a weight. For text grids, the weight is computed from gaze density and a gaze duration prediction; for punctuation text grids and boundary grids, we set the weight to a constant. (c) Gradient Descent: We compute the optimal affine matrix by minimizing the weighted distance between point pairs. After transforming the gaze points with the affine matrix, we return to point matching to initiate the next iteration.
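The weighting rule of panel (b) can be sketched as follows. The constant values and the way gaze density is combined with the predicted duration are illustrative assumptions; the post does not give the exact formula.

```python
from dataclasses import dataclass

@dataclass
class Grid:
    kind: str       # "text", "punctuation", or "boundary"
    center: tuple   # (x, y) grid center on screen

# Assumed constants; the actual values used by the method are not given here.
PUNCTUATION_WEIGHT = 0.2
BOUNDARY_WEIGHT = 0.5

def pair_weight(grid: Grid, gaze_density: float, predicted_duration: float) -> float:
    """Weight for one gaze-point/grid pair, per panel (b) of the workflow."""
    if grid.kind == "boundary":
        return BOUNDARY_WEIGHT
    if grid.kind == "punctuation":
        return PUNCTUATION_WEIGHT
    # Text grids: combine local gaze density with the gaze duration
    # predicted from semantic and positional features of the token.
    return gaze_density * predicted_duration
```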


Publication:



