Poster
Launch and Iterate: Reducing Prediction Churn
Mahdi Milani Fard · Quentin Cormier · Kevin Canini · Maya Gupta

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #15

Practical applications of machine learning often involve successive training iterations with changes to features and training examples. Ideally, changes in the output of any new model should only be improvements (wins) over the previous iteration, but in practice the predictions may change neutrally for many examples, resulting in extra net-zero wins and losses, referred to as unnecessary churn. These changes in the predictions are problematic for usability in some applications, and make it harder and more expensive to measure whether a change is a statistically significant improvement. In this paper, we formulate the problem and present a stabilization operator to regularize a classifier towards a previous classifier. We use a Markov chain Monte Carlo stabilization operator to produce a model with more consistent predictions without adversely affecting accuracy. We investigate the properties of the proposal with theoretical analysis. Experiments on benchmark datasets for different classification algorithms demonstrate the method and the resulting reduction in churn.
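To make the notion of churn concrete, the sketch below measures churn as the fraction of examples whose predicted label flips between two model versions, and illustrates one simple way to regularize a new model toward an old one: blending the training targets with the previous model's predicted probabilities. This is a minimal illustrative heuristic in the spirit of the abstract, not the paper's MCMC stabilization operator; the blend weight `alpha` and the synthetic data are assumptions for the example.

```python
import numpy as np

def churn(pred_a, pred_b):
    """Fraction of examples whose predicted label differs between two models."""
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    return float(np.mean(pred_a != pred_b))

def train_logreg(X, y, old_probs=None, alpha=0.5, steps=500, lr=0.1):
    """Logistic regression by gradient descent.

    If old_probs is given, the 0/1 targets are blended toward the previous
    model's predicted probabilities (an illustrative stabilization heuristic,
    NOT the paper's exact operator).
    """
    targets = y.astype(float)
    if old_probs is not None:
        targets = alpha * old_probs + (1.0 - alpha) * targets
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - targets) / len(y)       # cross-entropy gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)

# Train the "previous iteration" model and record its probabilities.
w_old = train_logreg(X, y)
old_probs = 1.0 / (1.0 + np.exp(-X @ w_old))

# Next iteration: noisier labels simulate changed training data.
y2 = (X[:, 0] + 0.5 * X[:, 1] + 0.4 * rng.normal(size=200) > 0).astype(int)
w_plain = train_logreg(X, y2)
w_stab = train_logreg(X, y2, old_probs=old_probs, alpha=0.5)

pred = lambda w: (X @ w > 0).astype(int)
print("churn vs. old model (plain):     ", churn(pred(w_old), pred(w_plain)))
print("churn vs. old model (stabilized):", churn(pred(w_old), pred(w_stab)))
```

Blending targets pulls the new decision boundary toward the old one on examples where the models would otherwise disagree neutrally, which is the intuition behind the churn reduction studied in the paper.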

Author Information

Mahdi Milani Fard (Google)
Quentin Cormier (Google)
Kevin Canini (Google)
Maya Gupta (Google)
