
Online control of the false discovery rate with decaying memory
Aaditya Ramdas · Fanny Yang · Martin Wainwright · Michael Jordan

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #220

In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed. Alpha-investing algorithms to control the false discovery rate (FDR), formulated by Foster and Stine, have been generalized and applied to many settings, including quality-preserving databases in science and multiple A/B or multi-armed bandit tests for internet commerce. This paper improves the class of generalized alpha-investing algorithms (GAI) in four ways: (a) we show how to uniformly improve the power of the entire class of monotone GAI procedures by awarding more alpha-wealth for each rejection, giving a win-win resolution to a recent dilemma raised by Javanmard and Montanari, (b) we demonstrate how to incorporate prior weights to indicate domain knowledge of which hypotheses are likely to be non-null, (c) we allow for differing penalties for false discoveries to indicate that some hypotheses may be more important than others, (d) we define a new quantity called the decaying memory false discovery rate (mem-FDR) that may be more meaningful for truly temporal applications, and which alleviates problems that we describe and refer to as “piggybacking” and “alpha-death.” Our GAI++ algorithms incorporate all four generalizations simultaneously, and reduce to more powerful variants of earlier algorithms when the weights and decay are all set to unity. Finally, we also describe a simple method to derive new online FDR rules based on an estimated false discovery proportion.
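To make the alpha-investing idea concrete, here is a minimal sketch of an alpha-investing-style online testing loop. This is an illustrative simplification, not the paper's GAI++ procedure: the spending rule (`wealth / 2`) and the payout constant are hypothetical choices, and real procedures choose these to guarantee FDR control.

```python
def alpha_investing(pvalues, w0=0.05, payout=0.05):
    """Process p-values one at a time, spending and earning alpha-wealth.

    Illustrative sketch only: the spending rule and payout here are
    hypothetical, not the tuned rules from the GAI/GAI++ literature.
    """
    wealth = w0
    decisions = []
    for p in pvalues:
        if wealth <= 0:
            # "Alpha-death": no wealth remains, so nothing can be rejected.
            decisions.append(False)
            continue
        alpha = wealth / 2.0  # hypothetical spending rule: test at half the wealth
        if p <= alpha:
            wealth += payout  # each rejection earns back alpha-wealth
            decisions.append(True)
        else:
            wealth -= alpha / (1.0 - alpha)  # a non-rejection costs alpha/(1-alpha)
            decisions.append(False)
    return decisions
```

A small p-value early on earns wealth that funds later tests; a long run of non-rejections drains the wealth, which is the "alpha-death" phenomenon the abstract refers to, and which the decaying-memory mem-FDR is designed to alleviate.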

Author Information

Aaditya Ramdas (University of California, Berkeley)
Fanny Yang (ETH Zurich)
Martin Wainwright (UC Berkeley)
Michael Jordan (UC Berkeley)