Poster
Learning to Understand: Identifying Interactions via the Möbius Transform
Justin Kang · Yigit Efe Erginbas · Landon Butler · Ramtin Pedarsani · Kannan Ramchandran
East Exhibit Hall A-C #3204
Abstract:
One of the key challenges in machine learning is to find interpretable representations of learned functions. The Möbius transform is essential for this purpose, as its coefficients correspond to unique *importance scores* for *sets of input variables*. This transform is closely related to widely used game-theoretic notions of importance like the *Shapley* and *Banzhaf* values, but it also captures crucial higher-order interactions. Although computing the Möbius transform of a function with $n$ inputs involves $2^n$ coefficients, it becomes tractable when the function is *sparse* and of *low degree*, as we show is the case for many real-world functions. Under these conditions, the complexity of the transform computation is significantly reduced. When there are $K$ non-zero coefficients, our algorithm recovers the Möbius transform in $O(Kn)$ samples and $O(Kn^2)$ time asymptotically under certain assumptions; it is the first non-adaptive algorithm to do so. We also uncover a surprising connection between group testing and the Möbius transform. For functions where all interactions involve at most $t$ inputs, we use group testing results to compute the Möbius transform with $O(Kt \log n)$ sample complexity and $O(K\,\mathrm{poly}(n))$ time. A robust version of this algorithm withstands noise while maintaining this complexity, making it the first noise-tolerant algorithm for the Möbius transform with sub-linear query complexity. While our algorithms are conceptualized in an idealized setting, they indicate that the Möbius transform is a potent tool for interpreting deep learning models.
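For concreteness, below is a minimal Python sketch of the naive, exponential-time Möbius transform on the subset lattice. This is not the paper's sparse algorithm; it only illustrates why a function with $n$ inputs has $2^n$ coefficients and what it means for the transform to be sparse. The example function `f` is a hypothetical set function chosen for illustration.

```python
import itertools

def mobius_transform(f, n):
    """Naive Möbius transform of a set function f: subsets of {0,...,n-1} -> R.

    Returns a dict mapping each subset S (as a frozenset) to its Möbius
    coefficient M(S) = sum over T subseteq S of (-1)^(|S|-|T|) * f(T),
    so that f(S) = sum over T subseteq S of M(T). Enumerating all 2^n
    subsets is what the paper's sparse, low-degree algorithms avoid.
    """
    universe = list(range(n))
    subsets = [frozenset(c) for r in range(n + 1)
               for c in itertools.combinations(universe, r)]
    M = {}
    for S in subsets:
        M[S] = sum((-1) ** (len(S) - len(T)) * f(set(T))
                   for r in range(len(S) + 1)
                   for T in itertools.combinations(sorted(S), r))
    return M

# Hypothetical example: f fires only when inputs 0 and 1 are both present,
# i.e. a single pairwise interaction.
f = lambda S: 1.0 if {0, 1} <= S else 0.0
M = mobius_transform(f, n=3)
print({tuple(sorted(S)): v for S, v in M.items() if v != 0})
# Only the coefficient for {0, 1} is non-zero: the transform is 1-sparse,
# and that coefficient reads off exactly which inputs interact.
```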