Oral in Workshop: Gaze Meets ML

SuperVision: Self-Supervised Super-Resolution for Appearance-Based Gaze Estimation

Galen O'Shea · Majid Komeili

[ Project Page ]
Sat 16 Dec 8:45 a.m. PST — 9 a.m. PST
 
presentation: Gaze Meets ML
Sat 16 Dec 6:15 a.m. PST — 3 p.m. PST

Abstract:

Gaze estimation is a valuable tool with a broad range of applications in various fields, including medicine, psychology, virtual reality, marketing, and safety. It is therefore essential to have gaze estimation software that is cost-efficient and high-performing. Accurately predicting gaze remains a difficult task, particularly in real-world situations where images are affected by motion blur, video compression, and noise. Super-resolution (SR) has been shown to remove these degradations and improve image quality from a visual perspective. This work examines the usefulness of super-resolution for improving appearance-based gaze estimation and demonstrates that not all SR models preserve the gaze direction. We propose a two-step framework for gaze estimation based on the SwinIR super-resolution model. The proposed method consistently outperforms the state-of-the-art, particularly in scenarios involving low-resolution or degraded images. Furthermore, we examine the use of super-resolution through the lens of self-supervised learning for gaze estimation and propose a novel architecture, “SuperVision”, by fusing an SR backbone network with a ResNet18. While using only 20% of the data, the proposed SuperVision architecture outperforms the state-of-the-art GazeTR method by 15.5%.
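The abstract describes fusing an SR backbone with a ResNet18 regressor. The sketch below illustrates one way such a pipeline could be wired up in PyTorch; it is not the authors' implementation. The `SRBackboneStub`, the fusion order (upscale, then encode), the 2D pitch/yaw output head, and the input size are all assumptions made for illustration, with a small upsampling CNN standing in for SwinIR.

```python
# Hypothetical sketch of an SR-backbone + ResNet18 gaze regressor.
# The paper's SuperVision model uses SwinIR as the SR backbone; a lightweight
# PixelShuffle upsampler stands in for it here, and all details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SRBackboneStub(nn.Module):
    """Placeholder for a super-resolution backbone such as SwinIR (x2 upscaling)."""

    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # rearranges channels into a 2x larger image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


class SuperVisionSketch(nn.Module):
    """SR backbone feeding a ResNet18 that regresses 2D gaze angles (pitch, yaw)."""

    def __init__(self):
        super().__init__()
        self.sr = SRBackboneStub()
        self.encoder = resnet18(weights=None)
        self.encoder.fc = nn.Linear(self.encoder.fc.in_features, 2)  # gaze head

    def forward(self, low_res_img: torch.Tensor) -> torch.Tensor:
        upscaled = self.sr(low_res_img)  # recover detail lost to blur/compression/noise
        return self.encoder(upscaled)    # regress gaze direction from the restored image


if __name__ == "__main__":
    model = SuperVisionSketch()
    gaze = model(torch.randn(1, 3, 112, 112))  # e.g. a 112x112 low-resolution face crop
    print(gaze.shape)  # torch.Size([1, 2])
```

In this sketch the low-resolution input is first restored by the SR module and the restored image is then encoded by ResNet18, mirroring the two-step framework the abstract describes; how the actual SuperVision architecture fuses the two networks (e.g. at the feature level versus the image level) is specified in the paper, not here.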
