

Poster

Disentangling Interpretable Factors of Variations with Supervised Independent Subspace Principal Component Analysis

Jiayu Su · David A Knowles · Raúl Rabadán


Abstract:

Representing high-dimensional data in a meaningful way that aligns with existing knowledge is a central challenge in machine learning, particularly when the goal is to separate distinct subspaces. Traditional linear approaches often fall short in handling multiple subspaces, while deep generative models that capture nonlinear latent spaces typically trade off interpretability for increased complexity. Addressing these limitations, we introduce Supervised Independent Subspace Principal Component Analysis (sisPCA), an extension of PCA to multiple subspaces that disentangles independent signatures in data using the Hilbert-Schmidt Independence Criterion (HSIC). We elucidate the mathematical connections between sisPCA, self-supervised learning, and regularized linear regression. Through comprehensive experiments involving DNA methylation and single-cell RNA sequencing data, we demonstrate sisPCA’s ability to discern and segregate complex latent structures. Our findings also underscore the efficacy of sisPCA in enhancing the interpretability of high-dimensional data analysis.
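The abstract names the Hilbert-Schmidt Independence Criterion (HSIC) as the quantity sisPCA uses to encourage independence between subspaces. The sketch below is only a generic illustration of how a (biased) HSIC estimate between two learned projections can be computed with Gaussian kernels; the function names, kernel choice, and bandwidth are assumptions for illustration and do not reproduce the authors' sisPCA objective or implementation.

```python
import numpy as np

def rbf_kernel(Z, sigma=1.0):
    # Gaussian (RBF) kernel matrix for the rows of Z
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # Biased HSIC estimate between paired samples X and Y (rows = observations):
    # HSIC = trace(K H L H) / (n - 1)^2, with H the centering matrix
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy usage: HSIC between two linear projections of the same data.
# A sisPCA-style objective would penalize this term to push the
# subspaces toward statistical independence (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
W1 = rng.normal(size=(20, 3))
W2 = rng.normal(size=(20, 3))
print(hsic(X @ W1, X @ W2))
```

A small HSIC value indicates that the two projections carry approximately independent signals, which is the disentanglement property the abstract describes.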
