Improving precision of A/B experiments using trigger intensity
Abstract
Online randomized controlled experiments (A/B tests) are widely used in industry to measure causal changes. While these experiments use incremental changes to minimize disruption, they often yield statistically insignificant results due to low signal-to-noise ratios. Precision improvement (i.e., reducing standard error) has traditionally focused on trigger observations — those where treatment and control outputs differ. Though effective, detecting all triggers (full knowledge) is prohibitively expensive. We propose a sampling-based approach (partial knowledge) in which the bias of the evaluation outcome decreases inversely with the sample size. Simulations show that the bias approaches zero with no more than 0.1% of observations sampled. Empirical testing demonstrates a 38% variance reduction compared to the CUPED method [1].
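The dilution problem the abstract alludes to can be sketched with a small simulation: when only a fraction of observations are triggered, the naive intent-to-treat estimate understates the effect, and a small random sample suffices to estimate the trigger rate and correct it. This is an illustrative sketch only, not the paper's method; the effect size, trigger rate, noise level, and sample size below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000_000
trigger_rate = 0.05   # hypothetical: only 5% of observations are triggered
true_effect = 1.0     # hypothetical effect, present only for triggered units

triggered = rng.random(N) < trigger_rate
assigned = rng.random(N) < 0.5                 # 50/50 treatment assignment
noise = rng.normal(0.0, 5.0, N)
y = noise + true_effect * (assigned & triggered)

# Naive intent-to-treat estimate over all observations:
# diluted toward zero by the low trigger rate.
itt = y[assigned].mean() - y[~assigned].mean()

# Partial knowledge: estimate the trigger rate from a small random sample
# (0.1% of observations), then rescale the diluted estimate.
sample_idx = rng.choice(N, size=1_000, replace=False)
p_hat = triggered[sample_idx].mean()
corrected = itt / p_hat
```

Under these assumptions, `itt` lands near `true_effect * trigger_rate`, while `corrected` recovers an estimate close to the true triggered effect from a 0.1% sample.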