Causal inference provides a means of translating a target causal query into a causal formula, which is a function of the observational distribution, given some assumptions on the domain. With the advent of modern neural probabilistic models, this opens up the possibility of performing accurate and tractable causal inference on realistic, high-dimensional data distributions, a crucial component of reasoning systems. However, for most model classes, computing the causal formula from the observational model is intractable. In this work, we hypothesize that probabilistic circuits, a general and expressive class of tractable probabilistic models, may be more amenable to the computation of causal formulae. Unfortunately, we prove that evaluating even simple causal formulae remains intractable for most types of probabilistic circuits. Motivated by this, we devise a conceptual framework for analyzing the tractability of causal formulae by decomposing them into compositions of primitive operations, in order to identify tractable subclasses of circuits. This allows us to derive, for a specific subclass of circuits, the first tractable algorithms for computing the backdoor and frontdoor adjustment formulae.
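For concreteness, the two adjustment formulae named above are the standard identities from Pearl's do-calculus; writing them out (assuming a covariate set $Z$ satisfying the backdoor criterion, and a mediator $M$ satisfying the frontdoor criterion, relative to treatment $X$ and outcome $Y$) shows that each is a composition of marginalization, conditioning, and products of the observational distribution:

```latex
% Backdoor adjustment: Z blocks every backdoor path from X to Y
P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)

% Frontdoor adjustment: M fully mediates the effect of X on Y
P(y \mid \mathrm{do}(x)) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P(y \mid m, x')\, P(x')
```

Each term on the right-hand side is a quantity of the observational distribution, so tractability hinges on whether a given circuit class supports these marginals, conditionals, and their products and sums efficiently.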