Shared Parameter Subspaces and Cross-Task Linearity in Emergently Misaligned Behaviour
Abstract
Recent work has discovered that large language models can develop broadly misaligned behaviours after being fine-tuned on narrowly harmful datasets, a phenomenon known as emergent misalignment (EM). However, the fundamental mechanisms enabling such harmful generalisation across disparate domains remain poorly understood. In this work, we adopt a geometric perspective to study EM and demonstrate that it exhibits a fundamental cross-task linear structure in how harmful behaviour is encoded across different datasets. Specifically, we find strong convergence in EM parameters across tasks: the fine-tuned weight updates show relatively high pairwise cosine similarities and occupy shared low-dimensional subspaces, as measured by their principal angles and projection overlaps. Furthermore, we show functional equivalence via linear mode connectivity, wherein models interpolated between narrow misalignment tasks maintain coherent, broadly misaligned behaviour. Our results indicate that EM arises from different narrow tasks discovering the same shared set of parameter directions, suggesting that harmful behaviours may be organised into specific, predictable regions of the weight landscape. By revealing this fundamental connection between parameter-space geometry and behavioural outcomes, we hope our work catalyses further research on parameter space interpretability and weight-based interventions.
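To make the geometric quantities named above concrete, the following is a minimal NumPy sketch of the three kinds of measurement the abstract describes: cosine similarity between weight updates, principal angles and projection overlap between their leading singular subspaces, and linear interpolation between checkpoints for mode-connectivity probes. The function names, the rank-k truncation, and the per-tensor interpolation scheme are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def cosine_similarity(delta_a, delta_b):
    """Cosine similarity between two flattened weight updates (W_ft - W_base)."""
    a, b = delta_a.ravel(), delta_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def principal_angles(delta_a, delta_b, k=8):
    """Principal angles (radians) between the rank-k left singular subspaces
    of two weight-update matrices; small angles indicate a shared subspace."""
    Ua, _, _ = np.linalg.svd(delta_a, full_matrices=False)
    Ub, _, _ = np.linalg.svd(delta_b, full_matrices=False)
    # Singular values of Ua_k^T Ub_k are the cosines of the principal angles.
    s = np.linalg.svd(Ua[:, :k].T @ Ub[:, :k], compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def projection_overlap(delta_a, delta_b, k=8):
    """Fraction of delta_b's energy captured by delta_a's rank-k subspace."""
    Ua, _, _ = np.linalg.svd(delta_a, full_matrices=False)
    P = Ua[:, :k] @ Ua[:, :k].T  # orthogonal projector onto span(Ua_k)
    return float(np.linalg.norm(P @ delta_b) ** 2 / np.linalg.norm(delta_b) ** 2)

def interpolate(weights_a, weights_b, alpha):
    """Per-tensor linear interpolation between two fine-tuned checkpoints;
    evaluating behaviour along alpha in [0, 1] probes mode connectivity."""
    return {name: (1 - alpha) * weights_a[name] + alpha * weights_b[name]
            for name in weights_a}

if __name__ == "__main__":
    # Toy demo: two "weight updates" sharing a common component should show
    # high cosine similarity and small principal angles.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=(64, 64))
    da = shared + 0.1 * rng.normal(size=(64, 64))
    db = shared + 0.1 * rng.normal(size=(64, 64))
    print(cosine_similarity(da, db))                   # close to 1
    print(np.degrees(principal_angles(da, db, k=4)))   # small angles
    print(projection_overlap(da, db, k=4))             # close to 1
```

In the paper's setting, `delta_a` and `delta_b` would be weight updates from fine-tuning on two different narrow misalignment datasets, compared per layer or after flattening across layers.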