Inductive Biases for Disentangled Representation Learning with Correlated Treatment--Nuisance Factors
Abstract
Accurately modelling experimental factors of variation is crucial to modern science. By understanding the distinct contributions of treatment and nuisance factors, researchers can better interpret and generalise experimental findings. In many real-world experiments, treatment and nuisance factors are correlated, making standard assumptions of independence unrealistic. Classical design of experiments provides many approaches for mitigating confounding, yet their integration with modern deep generative models remains underexplored. We introduce a framework that adapts variational autoencoders (VAEs) with block design–inspired inductive biases to account for treatment–nuisance dependence. Specifically, we propose stop-gradient and independence-constraint mechanisms that respect experimental structure and enforce disentanglement even under correlated assignments. Our findings highlight both the promise and pitfalls of combining block design principles with disentangled generative modelling, paving the way for principled, causally informed use of deep learning in experimental sciences.
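The abstract names two mechanisms, a stop-gradient and an independence constraint, without implementation detail. The sketch below is one plausible instantiation in PyTorch, assuming a VAE with separate treatment (z_t) and nuisance (z_n) latent blocks, a detach() call as the stop-gradient, a treatment-prediction head, and a batch cross-covariance penalty as the independence constraint; all names, dimensions, and placements are illustrative assumptions rather than the paper's actual architecture.

```python
# Minimal sketch (assumed design, not the paper's method): a VAE whose latent
# space is split into a treatment block z_t and a nuisance block z_n.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockVAE(nn.Module):
    def __init__(self, x_dim=32, z_t_dim=4, z_n_dim=4, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu_t, self.lv_t = nn.Linear(hidden, z_t_dim), nn.Linear(hidden, z_t_dim)
        self.mu_n, self.lv_n = nn.Linear(hidden, z_n_dim), nn.Linear(hidden, z_n_dim)
        self.dec = nn.Sequential(nn.Linear(z_t_dim + z_n_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))
        self.treat_head = nn.Linear(z_t_dim, 1)  # predicts the known treatment label

    @staticmethod
    def reparam(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.enc(x)
        mu_t, lv_t = self.mu_t(h), self.lv_t(h)
        mu_n, lv_n = self.mu_n(h), self.lv_n(h)
        z_t, z_n = self.reparam(mu_t, lv_t), self.reparam(mu_n, lv_n)
        # Stop-gradient (one possible placement): the decoder sees z_t, but the
        # reconstruction loss cannot reshape the treatment block through it, so
        # z_t is trained only via the treatment head and its KL term.
        x_hat = self.dec(torch.cat([z_t.detach(), z_n], dim=-1))
        t_hat = self.treat_head(z_t)
        return x_hat, t_hat, (mu_t, lv_t, z_t), (mu_n, lv_n, z_n)


def cross_cov_penalty(z_t, z_n):
    # Independence constraint: penalise the batch cross-covariance between the
    # two blocks so their codes stay (linearly) decorrelated even when the
    # underlying treatment and nuisance assignments are correlated.
    z_t = z_t - z_t.mean(dim=0, keepdim=True)
    z_n = z_n - z_n.mean(dim=0, keepdim=True)
    c = z_t.T @ z_n / (z_t.shape[0] - 1)
    return (c ** 2).sum()


def kl_term(mu, logvar):
    return -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()


def loss_fn(model, x, t, lam=1.0):
    x_hat, t_hat, (mu_t, lv_t, z_t), (mu_n, lv_n, z_n) = model(x)
    recon = F.mse_loss(x_hat, x)
    treat = F.binary_cross_entropy_with_logits(t_hat, t)
    kl = kl_term(mu_t, lv_t) + kl_term(mu_n, lv_n)
    return recon + treat + kl + lam * cross_cov_penalty(z_t, z_n)


# Toy usage with random data and a binary treatment label.
model = BlockVAE()
x, t = torch.randn(128, 32), torch.randint(0, 2, (128, 1)).float()
loss_fn(model, x, t).backward()
```

In this hypothetical layout, the reconstruction objective shapes only the nuisance block, the treatment block is anchored to the observed assignment, and the cross-covariance term plays the role of the independence constraint; the actual mechanisms proposed in the paper may differ.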